
More value from more metadata

Ed Tittel

After last week's tip, TechTarget's executive technology editor asked me to dig deeper into the details of what we have learned about using XML to capture metadata for our online course- and exam-delivery environment. In this first of several more detailed reports on that system, I'll take you through the metadata we capture about review questions and exams, show you some markup, and explain what gives it value.

To begin with, here's what the header of a typical set of review questions looks like, represented in XML:

Example 1: Review question preamble and header

<?xml version="1.0" standalone="no" ?>
<!DOCTYPE Test SYSTEM "Test.dtd">
<Test type="review">
  <Title>TCP/IP Module 7 Review Questions</Title>
  <ModuleID>7</ModuleID>
  <TestID>1</TestID>

The document preamble identifies the XML version as 1.0 and points to a locally available DTD named Test.dtd. Then the real markup begins. The type attribute on the root element Test takes the value "review" to indicate that this is a collection of review questions. The Title element not only provides output text for the user, it also documents the file for our own purposes. The ModuleID element identifies Lesson 7 in the class, while the TestID element says this is the first set of review questions for this lesson (actually, it's the only set of such questions, but that fact is not captured anywhere in this document as metadata).
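To show how delivery code might consume this header, here is a minimal sketch using Python's standard-library ElementTree parser. The parsing approach is ours, not part of the article's system, and the header fragment is modeled on Example 1 with ModuleID and TestID values taken from the prose above.

```python
import xml.etree.ElementTree as ET

# Header fragment modeled on Example 1 (ModuleID/TestID values assumed
# from the surrounding discussion: Lesson 7, first question set).
HEADER = """\
<Test type="review">
  <Title>TCP/IP Module 7 Review Questions</Title>
  <ModuleID>7</ModuleID>
  <TestID>1</TestID>
</Test>"""

root = ET.fromstring(HEADER)
print(root.get("type"))        # attribute on the root Test element -> review
print(root.findtext("Title"))  # user-visible title, also self-documentation
print(root.findtext("ModuleID"), root.findtext("TestID"))
```

Any test-delivery front end only needs these four values to label the question set and route its results.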

Most of the remainder of the XML coding simply documents, labels, and describes review questions for Lesson 7. Here are a couple of examples:

Example 2: Multiple-choice question format for True/False question

<Question value="1" type="QMC" category="NFS">
  <Qbody>True or False: NFS uses IP for its network protocol, but 
not TCP for its transport protocol.</Qbody> 
  <Qopt value="a">True</Qopt> 
  <Qopt value="b">False</Qopt> 
  <Qdiscuss value="a" status="ANS">Indeed, NFS uses IP for 
its network protocol and UDP for its transport protocol. Thus, 
answer a is correct.</Qdiscuss> 
  <Qdiscuss value="b" status="ERR">NFS uses IP for its 
network protocol and UDP for its transport protocol. Thus, 
answer b is incorrect.</Qdiscuss> 
</Question>

The Question element defines the content model for the various types of questions our environment presents to users. This example includes a scalar attribute named value, numbered sequentially for each successive Question element. The type value QMC stands for Question, Multiple Choice. It indicates a multiple-choice question that can have only one correct answer (radio buttons signal to the user that only a single choice is allowed). The category attribute associates a topic with the question. We use the category for topical scoring when we calculate test results: all questions that share a category value are scored together as a group.
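The topical-scoring idea is easy to sketch. Assuming each graded question has been reduced to a (category, answered-correctly) pair — an assumption of ours, since the article doesn't show its scoring code — grouping by category gives per-topic results:

```python
from collections import defaultdict

def topical_scores(results):
    """Given (category, is_correct) pairs, return the fraction of
    questions answered correctly in each topical category."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for category, is_correct in results:
        totals[category] += 1
        if is_correct:
            correct[category] += 1
    return {cat: correct[cat] / totals[cat] for cat in totals}

# Hypothetical results for three questions in two categories:
print(topical_scores([("NFS", True), ("NFS", False), ("DNS", True)]))
```

A per-topic breakdown like this tells a student which lessons to revisit, which is the point of carrying the category metadata on every question.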

The Qbody element defines the content for the question text itself; the user display shows a question number followed by the body of the question. For multiple-choice questions the Qopt element defines the content model for potential answers: a True/False question has exactly two Qopt elements (one for true, the other for false), while other multiple-choice questions may have an arbitrary number of options. The Qopt element's value attribute defines the display character associated with each potential answer. The Qdiscuss element provides an explanation for each potential answer, revealed only after the review questions or exam are scored. Here, the value attribute matches the value of the same attribute in the corresponding Qopt element, linking each discussion to its answer option. The status attribute may be either "ANS" to indicate a correct answer, or "ERR" to indicate a wrong one. The post-scoring display highlights correct answers in green on-screen, to contrast them with incorrect answers that remain plain black text on a white background.
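Because the value attribute links each Qdiscuss back to its Qopt, scoring code can recover the answer key directly from the markup. A stdlib sketch (the parsing approach is ours; the question below is Example 2 with the discussion text shortened):

```python
import xml.etree.ElementTree as ET

# Trimmed version of Example 2; discussion text abbreviated for brevity.
QUESTION = """\
<Question value="1" type="QMC" category="NFS">
  <Qbody>True or False: NFS uses IP but not TCP.</Qbody>
  <Qopt value="a">True</Qopt>
  <Qopt value="b">False</Qopt>
  <Qdiscuss value="a" status="ANS">Correct.</Qdiscuss>
  <Qdiscuss value="b" status="ERR">Incorrect.</Qdiscuss>
</Question>"""

def answer_key(question):
    # The shared value attribute ties each Qdiscuss to its Qopt, so the
    # correct options are exactly those whose discussion is marked ANS.
    return {d.get("value") for d in question.findall("Qdiscuss")
            if d.get("status") == "ANS"}

q = ET.fromstring(QUESTION)
print(answer_key(q))   # {'a'}
```

Keeping the key implicit in the status attributes means the question, its options, its explanations, and its answer all live in one place.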

Example 3: A multiple-choice, multiple-answer question

<Question value="3" type="QMCM" category="NFS">
  <Qbody>Which of the following is not an NFS daemon?</Qbody> 
  <Qopt value="a">lockd</Qopt> 
  <Qopt value="b">mountd</Qopt> 
  <Qopt value="c">nfsd</Qopt> 
  <Qopt value="d">statd</Qopt> 
  <Qopt value="e">umountd</Qopt> 
  <Qdiscuss value="a" status="ERR">The lock daemon 
coordinates with statd to control how multiple writers can 
access NFS files. Therefore, answer a is incorrect, since 
lockd is indeed an NFS daemon.</Qdiscuss> 
  <Qdiscuss value="b" status="ERR">The mount daemon 
handles mounting and unmounting of NFS mount points. Thus, 
answer b is incorrect, since mountd is indeed an NFS 
daemon.</Qdiscuss> 
  <Qdiscuss value="c" status="ERR">The NFS daemon 
handles all routine file access requests from clients. Thus, 
answer c is incorrect, since nfsd is the primary NFS 
daemon.</Qdiscuss> 
  <Qdiscuss value="d" status="ERR">The status daemon works 
with lockd to control how multiple writers can access NFS files. 
Thus, answer d is incorrect, because statd is indeed an NFS daemon.</Qdiscuss> 
  <Qdiscuss value="e" status="ANS">The mount daemon handles 
both mount and unmount requests, so no separate unmount daemon 
is required for NFS. Thus, answer e is bogus; since that definitely 
disqualifies it as an NFS daemon, answer e is correct.</Qdiscuss> 
</Question>

Here we set up a multiple-choice question that accepts one or more correct answers. The value of the Question type attribute is "QMCM" (for Question, Multiple Choice, Multiple answer), which permits one or more Qdiscuss elements to assign the value "ANS" to their status attributes. For such a long example, the markup itself stays simple and elegant.
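Grading a QMCM question then reduces to comparing sets. One plausible rule (an assumption on our part — the article doesn't state its grading policy) is exact match: the user's selections must equal the set of options marked ANS, with no credit for partial overlap:

```python
def grade_qmcm(chosen, correct):
    """Exact-match grading for a multiple-answer question: the chosen
    options must be precisely the options whose status is ANS."""
    return set(chosen) == set(correct)

print(grade_qmcm(["e"], ["e"]))        # True: exactly the right answer
print(grade_qmcm(["a", "e"], ["e"]))   # False: an extra wrong choice
```

Because both sides are sets, the order in which the user checked the boxes never matters.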

The entire DTD for this version of our test scripts requires only 16 lines of code, and appears as Example 4 below. But what this simple collection of markup does is truly remarkable:

  • It identifies the kind of test questions, be they review questions for a specific lesson or a final exam for an entire course.
  • It uniquely identifies the lesson (to which review questions correspond), or a final exam.
  • It permits lessons or final exams in a course to offer one or more sets of questions (for many courses, we supply two sets of review questions per lesson; for some courses we even provide two final exams, so one may function as a "practice final").
  • It supports all the kinds of multiple-choice questions we need — true/false, multiple-choice single-answer, and multiple-choice multiple-answer — in a simple, consistent format.

Example 4: The Test DTD (Test.dtd)

<!-- DTD for LANWrights Test -->
<!ELEMENT Test (Title|ModuleID|TestID|Question)* >
<!ATTLIST Test type CDATA #IMPLIED>
<!ELEMENT Title (#PCDATA)* >
<!ELEMENT ModuleID (#PCDATA)* >
<!ELEMENT TestID (#PCDATA)* >
<!ELEMENT Question (Qbody|Qopt|Qdiscuss)* >
<!ATTLIST Question value CDATA #IMPLIED>
<!ATTLIST Question type CDATA #IMPLIED>
<!ATTLIST Question category CDATA #IMPLIED>
<!ELEMENT Qbody (#PCDATA)* >
<!ELEMENT Qopt (#PCDATA)* >
<!ATTLIST Qopt value CDATA #IMPLIED>
<!ELEMENT Qdiscuss (#PCDATA)* >
<!ATTLIST Qdiscuss value CDATA #IMPLIED>
<!ATTLIST Qdiscuss status CDATA #IMPLIED>

We can turn this model over to test developers and put them to work within 15 minutes, with only limited explanation and documentation. The best example for them is a validated sample document from another test. Best of all, we use a validating XML parser to check our test scripts syntactically before testing them semantically, which saves lots of testing and debugging time.
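Python's standard-library ElementTree parser is non-validating, so a full DTD check needs a validating parser such as xmllint or lxml. As a stdlib-only sketch of the same idea — catch structural mistakes before any semantic testing — here is a check that verifies well-formedness and flags element names the DTD does not declare (the function and its reporting format are our own, not part of the article's toolchain):

```python
import xml.etree.ElementTree as ET

# Element names declared in Test.dtd (Example 4).
ALLOWED = {"Test", "Title", "ModuleID", "TestID",
           "Question", "Qbody", "Qopt", "Qdiscuss"}

def check_script(xml_text):
    """Return a list of problems found in a test script: a parse error
    if it is not well-formed, otherwise any undeclared element names.
    (Real DTD validation also checks content models and attributes.)"""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as e:
        return [f"not well-formed: {e}"]
    return [f"unknown element: {el.tag}"
            for el in root.iter() if el.tag not in ALLOWED]

print(check_script("<Test><Title>ok</Title></Test>"))  # []
print(check_script("<Test><Bogus/></Test>"))           # ['unknown element: Bogus']
```

Running a check like this on every script a test developer submits catches most typos before a single question is ever displayed.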

Historically, this is the oldest element of our online course and exam delivery environment. In the next tip, we'll examine one of our newest elements: the XML documents that define the metadata we use to describe entire courses.

Ed Tittel is a principal at LANWrights, Inc. LANWrights offers training, writing, and consulting services on Internet, networking, and Web topics (including XML and XHTML), plus various IT certifications (Microsoft, Sun/Java, and Prosoft/CIW).


This was last published in January 2001
