
October 2009

This week, the iPRES group held its Sixth International Conference on Preservation of Digital Objects in San Francisco, hosted by the California Digital Library. The program—which covers both traditional and non-traditional media, such as blogs, institutional repositories, and research data—is just one example of the need for continued support for preservation, and it highlights the number of good approaches worthy of formalization and promotion as best practices in the community. We'll have a report on the meeting from Priscilla Caplan in the Fall issue of Information Standards Quarterly (ISQ).

Finding out about new projects and effective approaches to digital preservation offers us some clues about the way forward. For example, just last week Roger Schonfeld and Ross Housewright at Ithaka S+R, the strategy and research arm of ITHAKA, published a research report, What to Withdraw? Print Collections Management in the Wake of Digitization. This interesting report presents libraries with a framework for selecting print titles that may reasonably be withdrawn from their collections, while addressing the potential impact on long-term preservation. The report concludes: "For journal collections that are available digitally, the online version provides for virtually all access needs, leaving print versions to serve a preservation role and therefore be required in far fewer numbers." However, because many publishers still rely on the print-and-online revenue mix, actions by the library community to cancel print versions would certainly have a financial impact on publishers. In some ways, the argument for preservation copies was among the last bastions of a robust print collection. This report questions that presumption, and many libraries, squeezed for both budget and space, will find some comfort in its recommendations. Publishers, unfortunately, won't.

The role libraries play in preservation, particularly for content that is "born digital"—without a print counterpart—has become increasingly visible. In August, we touched briefly on the proposed change to the Library of Congress's mandatory deposit rules for online-only content. Initial feedback on the proposed rule has been submitted by a broad range of stakeholders—publishers, libraries, software developers, photographers, and creators of musical works—and is available on the Copyright Office website. The Office has extended its final deadline for receipt of comments until October 16th. I encourage you to take a look at the comments already submitted, which provide much food for thought on the preservation issues surrounding electronic-only content.

These cross-cutting projects all have a central theme: the need for community best practices and improved preservation standards for digital content. The publishing industry was quick to note that preserving the "long tail" of content would require significant standardization and best practices. In many ways, standardizing around some common file formats, such as NISO's newly approved Standardized Markup for Journal Articles project (based on what is commonly known as the NLM DTD), will go a long way toward facilitating both preservation and long-term access to journal content. Other standards are also addressing file formats, such as the International Digital Publishing Forum's EPUB standard for e-books and similar content. While there is no expectation that every publisher will use the same production formats, narrowing them down to a few standardized options will help to solve some of the problems that preservation of content presents. Certainly, there are other areas—such as packaging, metadata, and digital rights management—that need some work in order for us to find more comprehensive solutions. But, with the work underway at NISO and in the community at large, we'll be a step closer to addressing some of these bigger issues.


Todd Carpenter

Managing Director

NISO Reports

October Webinar: Bibliographic Control Alphabet Soup

RDA is coming! Are you ready for the transition from AACR? Hear from the experts just how this change came about and what's different. Find out what can be learned from catalogers' actual usage of MARC fields.

Register now for NISO's October webinar on Bibliographic Control Alphabet Soup: AACR to RDA and Evolution of MARC, to be held on Wednesday, October 14, 2009, 1:00 - 2:30 p.m. (Eastern Time). Can't make it then? Register anyway and view the recorded version at your convenience.

Diane Hillmann (Director of Metadata Initiatives, Information Institute of Syracuse) will provide an overview of RDA Elements and Vocabularies: a Step Forward from MARC. RDA elements and vocabularies represent the distillation of library descriptive knowledge, optimized for use within an environment that speaks XML, RDF, and linked data, and expressed in an FRBR-aware manner.

Barbara Tillett (Chief, Policy and Standards Division, Library of Congress) will review AACR2, RDA, VIAF, and the Future: From There to Here to There. Learn about the differences between RDA and AACR2 and how the new code will better enable linked data for user access in the Web environment.

William Moen (Associate Professor, School of Library and Information Sciences, University of North Texas) will discuss results from the IMLS-sponsored research project: Data-driven Evidence for Core MARC Records. The project team examined 56 million WorldCat bibliographic records and analyzed catalogers' patterns of use of the available fields and subfields.

For more information and to register, visit the event webpage. Registration is per site (access for one computer) and includes access to the online recorded archive of the webinar for one year. NISO and NASIG members receive a discounted member rate. A student discount is also available.

November Webinar: Data, Data Everywhere: Migration and System Population Practices

The scope and scale of metadata repositories continue to grow, with increasingly heterogeneous data and complexity both on the ingest side (e.g., bibliographic metadata) and in inter- and intra-organizational exchange of usage, patron, purchase, and accounting data. While data format and exchange standards are a given, how do policies, implementations, and standards interact? What are some examples of effective alignment of standards, policies, and implementations, and what challenges remain?

These issues and more will be discussed at NISO's November webinar, Data, Data Everywhere: Migration and System Population Practices, to be held on November 11, 2009, from 1:00 - 2:30 p.m. (Eastern Time). Specific topics include:

  • Data quality, policy, and large-scale data flows - How do regional consortia establish and implement policies to allow them to cope with increasing amounts of data in a widening variety of formats?

  • Academic library perspective - Individual research libraries provide local, customized services for their audiences that are based upon large quantities of data—hopefully of high quality and supported by easy-to-use tools and processes provided by vendors and consortia. What are the successes, stress points, and failures from the perspective of the academic library?

  • Vendor perspective - The integrated library system remains the central repository of metadata, usage data, and business data for all types of libraries. As the ILS evolves to interact with electronic resource management systems, link resolvers, and other external systems and repositories, how are commercial vendors aligning standards, policies, and implementations? Where do library and vendor interests intersect and conflict?

For more information and to register, visit the event webpage. Registration is per site (access for one computer) and includes access to the online recorded archive of the webinar for one year. NISO and NASIG members receive a discounted member rate. A student discount is also available.

CORE Protocol Trial Implementation
Ed Riding and Ted Koppel, Co-Chairs, NISO CORE Working Group

Since the publication of the Draft Standard for Trial Use for the Cost of Resource Exchange (CORE) Protocol (NISO Z39.95-200x), we have received notes from a number of librarians eagerly expressing a desire to participate in this trial. While we are grateful for their willingness to help with the trial, we recognize that the true test can only happen as vendors and software creators implement the CORE data exchange standard for their systems, test and share messages with other parties, and deploy the software to their library customers and partners.

As a reminder, this standard specifies a method by which cost information can be pulled from a library acquisitions system or a subscription agent system to display in or populate an electronic resource management system (ERMS). Many librarians and administrators have expressed a desire for a way in which they could share and re-purpose payment data in their acquisitions system or their subscription vendor's system for use with their ERMS, and this standard has been created to respond to that need. For example, this standard could allow an e-resource librarian to simply click on a button in her ERMS to retrieve and download payment information from the library acquisitions system for a specific subscription, a subscription range, or all resources purchased from a specific e-resource provider. With use statistics also pulled into the ERMS, CORE provides a method to discover cost per use, giving librarians a tool to help them know which subscriptions or e-resources to renew or cancel.
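
To make the cost-per-use idea concrete, here is a minimal Python sketch. It does not use the actual CORE message schema; the payment and usage records, field names, and figures below are hypothetical stand-ins for the data an ERMS might pull from an acquisitions system via CORE and from COUNTER usage reports.

```python
# Hypothetical sketch: combining CORE-style cost data with COUNTER-style
# usage counts to compute cost per use for each subscribed resource.
# The data structures and field names below are illustrative only; the
# real CORE protocol defines its own XML message formats.

from typing import Dict, List


def cost_per_use(payments: List[dict], usage: Dict[str, int]) -> Dict[str, float]:
    """Return cost per use keyed by resource identifier.

    `payments` mimics records an ERMS might pull from an acquisitions
    system; `usage` mimics totals harvested from COUNTER reports.
    """
    totals: Dict[str, float] = {}
    for record in payments:
        rid = record["resource_id"]          # e.g., an ISSN or local ID
        totals[rid] = totals.get(rid, 0.0) + record["amount_paid"]

    results = {}
    for rid, cost in totals.items():
        uses = usage.get(rid, 0)
        results[rid] = cost / uses if uses else float("inf")
    return results


if __name__ == "__main__":
    payments = [
        {"resource_id": "1234-5678", "amount_paid": 4200.00},
        {"resource_id": "1234-5678", "amount_paid": 300.00},   # supplement
        {"resource_id": "8765-4321", "amount_paid": 950.00},
    ]
    usage = {"1234-5678": 1500, "8765-4321": 19}
    for rid, cpu in cost_per_use(payments, usage).items():
        print(f"{rid}: ${cpu:.2f} per use")
```

Running the sketch prints a dollars-per-use figure for each resource, the kind of number a librarian might weigh when deciding which subscriptions to renew or cancel.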

We remind all vendors and software creators of this DSFTU's release and alert you that your customers may be asking whether CORE support is part of your 2010 development plans. As you contemplate adding CORE functionality to your system, we would be happy, with your permission, to post your timelines on the NISO CORE website, so that end users of your software have some sense of when they might expect to test or use this standard.

If you work in a library and think this functionality would be useful to your organization, we encourage you to contact your ERMS and library acquisitions system vendors and ask them to develop support for CORE.

Should you have questions about the protocol or plan to implement it, please contact us.

NISO Annual Business Meeting in Boston on October 9

NISO will be holding its annual business meeting, as required by the NISO By-Laws, on October 9, 2009, from 11:00 a.m. to 2:00 p.m. in Boston during the Library Resources Management forum. You do not need to attend the forum in order to attend the NISO annual meeting.

Agenda

  • State of NISO
  • Financial update
  • Standards program activity report
  • Education programs update
  • Publications update
  • Any new business

The meeting is open to the public and will be held at the Boston Metro Meeting Center, 101 Federal Street, in the Arlington Room. Lunch will be provided. Registration is not required, but we do ask that you RSVP to niso-hq@niso.org.

New on the NISO Website: I², SUSHI, NCIP, KBART, SSO

  • Institutional Identifiers (I²) – The NISO I² Working Group surveyed repository managers and developers about current practices and needs of the repository community around institutional identifiers. Results from the survey will inform a set of use cases that are expected to drive the development of a draft standard for institutional identifiers. A report on the results of the survey is now available to the public. Feedback from the repository community is most welcome; use NISO's online comment form.

  • Standardized Usage Statistics Harvesting Initiative (SUSHI) – The SUSHI Standing Committee has created a server registry of COUNTER report producers who have implemented SUSHI. The registry provides the technical details needed by client implementers of SUSHI, including the server URL and contact information for further assistance. To date, nine implementers are registered and more are in process. (A rough sketch of how a client might use a registry entry appears after this list.)

  • Knowledge Base And Related Tools (KBART) – The NISO/UKSG KBART Working Group has released a final draft of the KBART recommended practice to the KBART interest group, to those who have expressed interest in implementing the recommended practice, and to the NISO and UKSG leadership committees for comment. They will be discussing the comments during their November 2nd call in order to integrate and/or respond to the feedback prior to a public draft. Questions? Contact the chairs (Peter McCracken/Charlie Rapple/Sarah Pearson), NISO (Karen Wetzel), or join the interest group list. KBART was also the topic of NISO's open teleconference call in September; catch up on the project by listening to the audio recording.

  • NISO Circulation Interchange Protocol (NCIP) – The NCIP Implementers Group (NCIP-IG) held an in-person meeting on September 22-24, 2009. A revamped NCIP website was announced by EnvisionWare, the maintenance agency. Other items discussed included: implementation updates, core task profiles, self-service implementation, moving the standard from periodic to continuous maintenance, a change request process, an NCIP test bed, defining compliance, the core message set, use of NCIP in the eXtensible Catalog project, and educational plans for future conferences. The minutes of the meeting are available online.

  • Single Sign-on (SSO) Authentication – The roster has been approved for this new working group, which will be defining best practices for "perfecting single-sign-on (SSO) authentication to achieve seamless item-level linking through SSO technologies in a networked information environment." Harry Kaplanian from Serials Solutions was appointed as chair; a co-chair may be identified from the group's members. An email interest group has also been established for those who wish to follow the group's activities. Further information is available from the SSO working group's workroom webpages.
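
Picking up the SUSHI item above, the Python sketch below shows, in very rough form, how a client implementer might use a registry entry (the server URL plus identifying details) to post a report request. The envelope here is deliberately simplified and its element names are illustrative placeholders rather than the exact SUSHI schema; a real client would construct its ReportRequest strictly from the published SUSHI and COUNTER schemas and the details given in the registry.

```python
# Illustrative only: a bare-bones SUSHI-style client that posts a SOAP
# request to a server URL taken from the SUSHI registry. The XML body is
# a simplified placeholder, not the exact SUSHI/COUNTER schema.

import urllib.request

REGISTRY_ENTRY = {
    # Values an implementer would copy from the SUSHI server registry.
    "server_url": "https://example.org/sushi",   # placeholder URL
    "requestor_id": "my-library-id",
    "customer_id": "my-customer-id",
}

SOAP_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- Simplified stand-in for a SUSHI ReportRequest -->
    <ReportRequest requestor="{requestor_id}" customer="{customer_id}"
                   report="JR1" release="3"/>
  </soap:Body>
</soap:Envelope>
"""


def request_report(entry: dict) -> bytes:
    """POST the (placeholder) request and return the raw response bytes."""
    body = SOAP_TEMPLATE.format(**entry).encode("utf-8")
    req = urllib.request.Request(
        entry["server_url"],
        data=body,
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read()
```

In practice, the response would be validated against the COUNTER report schema rather than treated as opaque bytes.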

New Specs & Standards

ARMA International, ARMA 5-20xx, Vital Records Management (draft revision)

"This draft revision of ANSI/ARMA 5-2003, covers the same topic areas, but the content has been updated and expanded to more accurately reflect, among other considerations, business continuity-related planning needs. And, reflecting the vital records realities made apparent as a result of Hurricane Katrina, new contents include a section on developing, implementing, and monitoring a records loss prevention plan, new information around protecting electronic data, and an appendix comparing drying techniques for water-damaged books and records. Guidance from the National Archives and Records Administration is also referenced in this latest edition." The deadline for comments is November 18, 2009.

DCMI, Expressing Dublin Core Metadata Using HTML/XHTML Meta And Link Elements

"This document describes how a Dublin Core metadata description set can be encoded in HTML/XHTML <meta> and <link> elements. It is an HTML meta data profile, as defined by the HTML specification. Revised with a small number of errata and an addendum with new examples."

Unicode Consortium, Unicode 5.2.0

"Version 5.2 adds 6,648 characters and significantly improves the documentation of conformance requirements for the specification of normalization forms, canonical ordering, and the status of types of properties. Version 5.2 brings improved clarity of presentation in many Unicode Standard Annexes."

W3C Proposed Recommendation, OWL 2 Web Ontology Language

"OWL 2 ontologies provide classes, properties, individuals, and data values and are stored as Semantic Web documents. OWL 2 ontologies can be used along with information written in RDF, and OWL 2 ontologies themselves are primarily exchanged as RDF documents." Contains only minor editorial changes since the working draft version issued June 11, 2009.

W3C Working Draft, Publishing Open Government Data

"Provides step-by-step guidelines for putting government data on the Web. Sharing data according to these guidelines enables greater transparency; delivers more efficient public services; and encourages greater public and commercial use and re-use of government information."

Media Stories

7 Things You Should Know About... Federated Identity Management
Educause Brief, 09/10/2009

An oceanography researcher uses her university-supplied username and password to access not only the information systems in her own organization, but also commercial e-mail and document sharing services, other university and government laboratories for data collection and sharing, grant databases, and a library of electronic information resources. She does all this using federated identity management, "the policies, processes, and technologies that establish user identities and enforce rules about access to digital resources." In a federated environment, participants agree to share identity information using standard protocols. An example is the Indiana Clinical Translational Sciences Institute HUB, where multiple institutions have access to databases and visualization applications. The InCommon Federation is one of the larger examples, with more than 150 organizations involved and over three million end users. Only one organization in the federation has to collect identity information and verify an individual; the others then accept the credentials from the parent organization. The benefits include more efficient administration, improved security that complies with regulations regarding personal data, greater access to resources for individuals, and geographically remote and mobile access. The disadvantages include costs for start-up and application modification, multiple standards in use that prevent or complicate joining more than one federation, the effort involved in establishing compliant institutional policies, and the risks of security breaches. It is possible that a third-party service may evolve to provide the authorized identity databases, rather than the parent institution. Implemented correctly, federated identity management offers tremendous opportunities for greater collaboration and interdisciplinary scholarship. (Link to Web Source)

NISO Note: NISO has initiated a Single Sign-On Authentication working group to develop best practices in this area.

Can 20th-Century News Survive in a Digital World?
EContent, October 2009 Issue, posted Sep 28, 2009; by Ron Miller

Newspaper and news magazine publishing used to be the province of the wealthy or of well-funded companies. The web has changed all that, lowering the cost of entry to news publishing to almost nothing. Ad dollars have already moved to the Internet, and web services for classified sections or job listings have supplanted their print versions. Anyone can report news immediately on his or her blog or social networking page. As a result, the traditional newspaper is disappearing. Silicon Alley Insider reported that in the first six months of the year, 105 newspapers had shut down, and many major papers are in financial crisis. Their situation is not going to rebound when the economy does; the problems go beyond the current downturn. They have lost their monopoly on news distribution and are unlikely to ever get it back. In moving online, they still fail to understand the "linking culture" and instead are attacking content aggregators such as Google. Newspapers could have been the innovators that created a craigslist or a news aggregation site—but they missed the opportunities. Innovation has to go beyond new revenue models and experiment with the product itself, such as tools that involve readers in adding editorial value. Everyone recognizes that there must be a paid content strategy, but no one can agree on how users will pay for content. What's clear is that the traditional newspaper or news magazine cannot survive in its current model in today's 24/7 web and social networking environment. (Link to Web Source)

Data Sharing: Empty Archives
Nature, 461, 160-163 (2009), published online 9 September 2009; by Bryn Nelson

When the University of Rochester launched its institutional repository, librarians were concerned about being able to handle the volume of information. Six years later, the repository is mostly empty. Researchers are generally positive about the idea of a repository, but for various reasons—time, inability to use the system, protectiveness, etc.—they don't participate in sharing after one is built. There have been noted successes, such as Cornell University's arXiv.org and others, but these are not typical across disciplines or institutions. One project, the International Polar Year (IPY), involving over 60 countries, mandated data sharing but has run into difficulties with data diversity, cultural attitudes toward sharing, and the resources to create the needed infrastructure. Other projects had problems in determining at what level data should be shared—raw, cleaned-up, analyzed, etc. Also missing are common standards for format and metadata. Data sharing advocates say it is the funding agencies that must dictate such sharing, along with scientific societies and journal publishers. The NSF is funding research into digital archiving and search technologies, but sharing policies are scattered and defined per project. Some journals are beginning to require submission of the supporting data with the manuscript. A music site, ccMixter, may be a model for how scientific data can be shared while still maintaining attribution to the original submitter. The Creative Commons CC0 license is gaining acceptance in the scientific community as a way to manage rights. A major government investment with demonstration projects and working examples, such as the NSF's DataNet program, may be required. However, such projects could still fail from the same ambivalence and resistance to sharing that institutional repositories have encountered. (Link to Web Source)

After Losing Users in Catalogs, Libraries Find Better Search Software
The Chronicle of Higher Education, September 28, 2009; by Marc Parry

In a world where students are used to Google-type searching, library catalogs are frequently criticized for the difficulty of retrieving truly relevant information. That's changing as university libraries implement sophisticated search software for their catalogs. Common features of the new software are faceted searching, relevance ranking, and contextual prompts, such as asking the user whether he or she wants books "by" or "about" the person entered in the search query. A 2006 Ithaka study found that faculty members were "decreasingly dependent on the library for their research and teaching." Academic librarians are hoping to revitalize use of the library collections with "Web-scale index searching" rather than the old federated searching where queries are sent individually to different databases. In other words, there's a single entry point to the entire collection of books, journals, and digital information. Some in the library community, however, see the next-generation interface as a "dumbing-down of catalogs." In the commercial marketplace, products like Encore from Innovative Interfaces, AquaBrowser from Medialab Solutions, and Primo from Ex Libris have been implemented by libraries to provide a next-generation search interface. There's also a movement to use open-source software like VuFind, Blacklight, and eXtensible Catalog. In addition to expected lower costs, open source is more customizable for a specific library or collection. But the library then has to find or hire someone to maintain the system, and the open source systems don't usually provide access to article-level licensed content. Serials Solutions has responded with the recently released Summon, which provides a searchable index to licensed content along with the library's local content. (Link to Web Source)

NISO Note: The following organizations mentioned in this article are NISO members: Ex Libris, Innovative Interfaces, ProQuest, Serials Solutions, and Stanford University.

The Dewey Dilemma
Library Journal, Issue 16, 10/1/2009; by Barbara Fister

Some public libraries are organizing their shelves using the BISAC system instead of Dewey, and their patrons are enthusiastically endorsing the change. BISAC, maintained by the Book Industry Study Group, uses broad subject categories arranged alphabetically and is most often used by booksellers and distributors such as Amazon, Barnes & Noble, Bowker, or Ingram. Although BISAC uses codes, they are hidden from the users. A book may also have multiple categories assigned, and the bookseller (or library) decides which category to select for shelving. The Maricopa County, AZ, Perry Branch Library found that circulation increased when it used the BISAC system for its nonfiction titles. Maricopa County plans to move all 18 libraries away from the Dewey system. The Rangeview Library District in Northglenn, CO, adopted BISAC with a "WorkThink" system with book spines labeled with subject categories and subcategories. The Darien, CT, library chose to do a mash-up of Dewey and BISAC, pulling related Dewey areas together on the shelves into broad categories. Another mash-up approach was used by the Phoenix Public Library, which added BISAC headings to the LC Subject Headings in its catalog records. Not to be left out, the Dewey Decimal Classification (DDC) editors are creating a crosswalk map between BISAC and Dewey. The move to BISAC has met with criticism from some in the library community, who feel a classification system is superior to the bookstore retail model. There is also concern that the BISAC approach, which has mainly been implemented in smaller branch libraries, would not scale well to a large collection. Dewey is still more widely used than any other classification system in the world, and it will not be going away any time soon. (Link to Web Source)

NISO Note: Bowker (Cambridge Information Group) is a NISO voting member.

TRANSFER Code of Practice: the Publisher's Point of View
Learned Publishing, Volume 22, Number 4, October 2009, pp. 289-294; by Yvonne Campfens and Ed Pentz

Society journals change publishers, and commercial journal publishers buy titles from each other or merge with and acquire one another's businesses. The problems this has always created in the transfer of a journal from one publisher to another are exacerbated in an online environment, where archives can disappear, online access is lost, or subscriptions are terminated incorrectly. While these problems are obvious to libraries and end users, the publishers and journal owners encounter complex behind-the-scenes contractual and licensing issues. To address these problems, the UK Serials Group developed—through a consensus-driven process with all types of affected stakeholders—the TRANSFER Code of Practice. The goal for TRANSFER is "to establish a set of standards that apply whenever a journal is transferred from one publisher to another, and to encourage the industry to embrace these standards as a baseline level of quality and performance." The Code focuses primarily on online content, although it does address print subscription lists. "Publishers who publicly sign up to the Code and apply it in practice are considered 'TRANSFER Compliant'," although there is no formal certification process at this time. Over 25 publishers had endorsed the Code at the time this article was written. (Link to Web Source)