Metadata Harmonization: Making Standards Work Together

Below are the questions submitted during the March 16, 2011 webinar. Not all of them could be answered during the live session, so those that could not be addressed at the time are included here as well; answers from the presenters will be added as they become available.

Speakers:

  • Makx Dekkers has been Managing Director and Chief Executive Officer of DCMI since March 2001. His main areas of interest and experience are the development, interoperability, and management of metadata solutions, from both a strategic and a technical perspective.

  • Mikael Nilsson, who holds a PhD in media technology from the Royal Institute of Technology in Stockholm, has extensive expertise in metadata standardization and interoperability, particularly at the crossroads of the Dublin Core, World Wide Web Consortium, and IEEE Learning Object Metadata communities.

  • Thomas Baker, Chief Information Officer of the Dublin Core Metadata Initiative, recently served as co-chair of the W3C Semantic Web Deployment Working Group and currently co-chairs a W3C Incubator Group on Library Linked Data.

Feel free to contact us if you have additional questions about library, publishing, and technical services standards or standards development, or if you have suggestions for new standards, recommended practices, or areas where NISO should be engaged.

NISO Webinar Questions and Answers

  1. Doesn't the creation of equivalents at the term or element level contradict the earlier example of language in DC and LOM? If the models aren't aligned, how can you create equivalences?

    Mikael Nilsson: Your observation is correct: alignment of terms requires the use of compatible models. Metadata alignment therefore assumes that the harmonization issue has been successfully addressed, and represents one “next step” beyond harmonization. In the case of DC and LOM, it means that the aligned terms are both expressed on the basis of the same underlying model. As RDF currently has the most traction to serve as that underlying model, this means that DC and LOM elements will both be expressed as RDF properties. In the case of DC, the properties are expressed natively in RDF. In the case of LOM, new RDF properties must be created, to which the corresponding elements of its tree model are then mapped (see the sketch below).
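
    To make the alignment step concrete, here is a minimal sketch using the Python rdflib library. The lom: namespace and its language property are hypothetical stand-ins for LOM elements re-expressed as RDF properties; asserting owl:equivalentProperty against the real dcterms:language is one common way to record such an alignment.

      # A minimal sketch, assuming the Python rdflib library is installed.
      # The LOM namespace and lom:language property are hypothetical;
      # dcterms:language is real. Both sides are plain RDF properties here.
      from rdflib import Graph, Namespace
      from rdflib.namespace import OWL, DCTERMS

      # Hypothetical namespace for LOM elements re-expressed as RDF properties.
      LOM = Namespace("http://example.org/lom/")

      g = Graph()
      g.bind("dcterms", DCTERMS)
      g.bind("lom", LOM)
      g.bind("owl", OWL)

      # Record the alignment: the two properties denote the same relationship.
      g.add((LOM.language, OWL.equivalentProperty, DCTERMS.language))

      print(g.serialize(format="turtle"))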
     
  2. What should academic publishers start doing to future-proof their end-to-end metadata processes in relation to harmonization and to ensure they are aligned with academic institutions (e.g., metadata capture, enrichment, and output)? Is there a core metadata model we should be moving towards?

    Mikael Nilsson: The most important thing is not necessarily to use a standard such as RDF internally, even though there are many cases where that is a valuable approach. Rather, publishers should make sure that they use well-documented and carefully designed metadata models internally, developed with RDF modeling knowledge but not necessarily using RDF. This ensures that when interfacing with external metadata sources and consumers, the metadata can easily be converted to RDF and back without significant information loss (a round-trip sketch follows below).
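
    As an illustration of this design approach, here is a minimal Python sketch (field names and IRIs are hypothetical): the record itself is not stored as RDF, but because every internal field is mapped up front to a well-known RDF property, the data round-trips to triples and back without loss.

      # A minimal sketch with hypothetical field names: the internal model is
      # an ordinary Python record, but each field is documented up front by an
      # RDF property IRI, so export and re-import lose no information.
      from dataclasses import dataclass

      FIELD_TO_PROPERTY = {
          "title": "http://purl.org/dc/terms/title",
          "language": "http://purl.org/dc/terms/language",
      }
      PROPERTY_TO_FIELD = {iri: field for field, iri in FIELD_TO_PROPERTY.items()}

      @dataclass
      class ArticleRecord:
          identifier: str  # used as the RDF subject IRI on export
          title: str
          language: str

      def to_triples(rec):
          """Export the record as (subject, predicate, object) triples."""
          return [(rec.identifier, iri, getattr(rec, field))
                  for field, iri in FIELD_TO_PROPERTY.items()]

      def from_triples(subject, triples):
          """Rebuild the record from triples about the given subject."""
          fields = {PROPERTY_TO_FIELD[p]: o for s, p, o in triples if s == subject}
          return ArticleRecord(identifier=subject, **fields)

      rec = ArticleRecord("http://example.org/articles/42", "On Harmonization", "en")
      assert from_triples(rec.identifier, to_triples(rec)) == rec  # lossless round trip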
     
  3. How do you characterize the overlap between the DCMI metamodel and SKOS?

    Mikael Nilsson: SKOS is an RDF vocabulary for describing metadata vocabularies. While SKOS does define models for vocabulary description, it uses the RDF metamodel to encode metadata. It is therefore complementary to the RDF and DCMI metamodels, and fully usable in any Dublin Core or RDF setting without conflict. There is some overlap between DC's "Vocabulary Encoding Schemes" and SKOS "Concept Schemes" (whether the two can be considered exactly equivalent has yet to be determined), but nobody has reported this to be a problem in practice. A small example of a SKOS concept scheme follows below.
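
    For readers unfamiliar with SKOS, here is a minimal sketch using the Python rdflib library (the namespace and concept IRIs are hypothetical), declaring a tiny concept scheme of the kind that a DC Vocabulary Encoding Scheme also identifies.

      # A minimal sketch, assuming rdflib; the namespace and concept IRIs are
      # hypothetical. A SKOS ConceptScheme plays the role a DC "Vocabulary
      # Encoding Scheme" names: a controlled set of concepts used as values.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/subjects/")

      g = Graph()
      g.bind("skos", SKOS)

      # Declare the scheme itself, then one concept that belongs to it.
      g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
      g.add((EX.metadata, RDF.type, SKOS.Concept))
      g.add((EX.metadata, SKOS.prefLabel, Literal("Metadata", lang="en")))
      g.add((EX.metadata, SKOS.inScheme, EX.scheme))

      print(g.serialize(format="turtle"))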