
Archive for the ‘applications’ Category

The Memento Project – adding history to the web

Wednesday, November 18th, 2009

Yesterday, I attended the CENDI/FLICC/NFAIS Forum on the Semantic Web: Fact or Myth hosted by the National Archives. It was a great meeting with an overview of ongoing work, tools, and new initiatives. Hopefully, the slides will be available soon, since there was frequently more information than could be expressed in a 20-minute presentation, and many presenters listed what are likely useful references for further reading. Once the slides are available, we’ll link through to them.

During the meeting, I had the opportunity to run into Herbert Van de Sompel, who is at the Los Alamos National Laboratory. Herbert has had a tremendous impact on the discovery and delivery of electronic information. He played a critical role in creating the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), the Open Archives Initiative Object Reuse & Exchange specifications (OAI-ORE), the OpenURL Framework for Context-Sensitive Services, the SFX linking server, the bX scholarly recommender service, and the info URI scheme.

Herbert described his newest project, which has just been released, called the Memento Project. The Memento project proposes a “new idea related to Web Archiving, focusing on the integration of archived resources in regular Web navigation.”  In chatting briefly with Herbert, I learned that the system uses a browser plug-in to view the content of a page as it appeared on a specified date.  It does this by using the underlying content management system’s change logs to recreate what appeared on a site at a given time.  The team has also developed some server-side Apache code that handles these requests for content management systems that keep version histories.  If the server is unable to recreate the requested page, the system can instead point to a version of the content from around that date in the Internet Archive (or other similar archive sites).  Herbert and his team have tested this using a few wiki sites.  You can also demo the service from the LANL servers.
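To make the idea concrete, here is a minimal sketch of the kind of datetime content negotiation the Memento approach describes: a client asks a “TimeGate” for a page as of a given date and is redirected to the closest archived version. The TimeGate URL, the exact header names, and the use of Python’s requests library are illustrative assumptions on my part, not details confirmed by the project.

    # Sketch of Memento-style datetime negotiation (details assumed, not from the post).
    import requests

    TIMEGATE = "http://example.org/timegate/"   # hypothetical TimeGate endpoint
    TARGET = "http://www.niso.org/"             # the page whose past state we want

    response = requests.get(
        TIMEGATE + TARGET,
        headers={"Accept-Datetime": "Wed, 18 Nov 2009 12:00:00 GMT"},
        allow_redirects=True,
    )

    # The TimeGate redirects to the archived copy ("memento") closest to the
    # requested datetime; a response header reports that copy's actual datetime.
    print(response.url)
    print(response.headers.get("Memento-Datetime"))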

Here is a link to a presentation on the project that Herbert and his co-collaborator Michael Nelson of Old Dominion University gave at the Library of Congress.  A detailed paper that describes the Memento solution is also available on the arXiv site, and there is an article on Memento in the New Scientist.  Finally, tomorrow (November 19, 2009 at 8:00 AM EST), there will be a presentation on this at OCLC as part of their Distinguished Seminar Series, which will be available online for free (RSVP required).

This is a very interesting project that addresses one of the key problems with archiving web content: pages change frequently.  I am looking forward to the team’s future work and hoping that the project gains broader adoption.

Open Source isn’t for everyone, and that’s OK. Proprietary Systems aren’t for everyone, and that’s OK too.

Monday, November 2nd, 2009

Last week, there was a small dust-up in the community about a “leaked” document from one of the systems suppliers regarding issues with Open Source (OS) software.  The merits of the document itself aren’t nearly as interesting as the issues surrounding it and the reactions from the community.  The paper outlined, from the company’s perspective, the many issues that face organizations that choose an open source solution, as well as the benefits of proprietary software.  Almost immediately after the paper was released on Wikileaks, the OS community pounced on its release as “spreading FUD (i.e., Fear, Uncertainty, and Doubt)” about OS solutions.  This is a description OS supporters use for corporate communications that promote the use and benefits of proprietary solutions.

To my mind, the first interesting issue is the presumption that any one solution is the “right” one; the sales techniques from both communities understandably presume that each community’s approach is best for everyone.  This is almost never the case in a marketplace as large, broad, and diverse as the US library market.  Each approach has its own strengths AND weaknesses, and the community should work to understand what those strengths and weaknesses are, from both sides.  A clearer understanding and discussion of those qualities should do much to improve both options for the consumers.  There are potential issues with OS software, such as support, bug fixing, long-term sustainability, and staffing costs, that implementers of OS options need to consider.  Similarly, proprietary options can have problems with data lock-in, interoperability challenges with other systems, and customization limitations.  However, each also has its strengths.  With OS, these include openness, an opportunity to solve problems collaboratively with other users, and nearly unlimited customizability.  Proprietary solutions provide a greater level of support and accountability, a mature support and development environment, and generally known, fixed costs.

During the NISO Library Resources Management Systems educational forum in Boston last month, part of the program was devoted to a discussion of whether an organization should build or buy an LRMS.  There were certainly positives and negatives described for each approach.  The point that was driven home for me is that each organization’s situation is different, and each team brings distinct skills that could push an organization in one direction or another.  Each organization needs to weigh the known and potential costs against its needs and resources.  A small public library might not have the technical skills to tweak OS systems in the way that is often needed.  A mid-sized institution might have staff that are technically expert enough to engage in an OS project.  A large library might be able to reallocate resources, but want the support commitments that come with a proprietary solution.  One positive thing about the marketplace for library systems is the variety of options and choices available to management.

Last year at the Charleston Conference, during a discussion of Open Source, I made the comment that, yes, everyone could build their own car, but why would they?  I personally don’t have the skills or time to build my own, so I rely on large car manufacturers to do so for me.  When my car breaks, I bring it to a specialized mechanic who knows how to fix it.  On the other hand, I have friends who do have the skills to build and repair cars. They save lots of money doing their own maintenance and have even built sports cars and made a decent amount of money doing so.  That doesn’t make one approach right or wrong, better or worse.  Unfortunately, people frequently let these value judgments color the debate about costs and benefits. As with everything where people have a vested interest in a project’s success, there are strong passions in the OS solutions debate.

What makes these systems better for everyone is that there are common data structures and a common language for interacting.  Standards such as MARC, Z39.50, and OpenURL, among others, make the storage, discovery, and delivery of library content more functional and more interoperable.  As with all standards, they may not be perfect, but they have served the community well and provide an example of how we as a community can move forward in a collaborative way.

For all of the complaints hurled at the proprietary systems vendors (rightly or wrongly), they do a tremendous amount to support the development of the voluntary consensus standards that all of these systems use.  Interoperability among library systems couldn’t take place without them.  Unfortunately, the same can’t be said for the OS community.  As Carl Grant, President of Ex Libris, asked during the vendor roundtable in Boston, “How many of the OS support vendors and suppliers are members of and participants in NISO?”  Unfortunately, the answer to that question is “none” as yet.  Given how critical open standards are to the smooth functioning of these systems, it is surprising that they haven’t engaged in standards development.  We certainly would welcome their engagement and support.

The other issue that is raised about the release of this document is its provenance.  I’ll discuss that in my next post.

Changing the ideas of a catalog: Do we really need one?

Wednesday, November 19th, 2008

Here’s one last post on thoughts regarding the Charleston Conference.

Friday afternoon during the Charleston meeting, Karen Calhoun, Vice President, WorldCat and Metadata Services at OCLC, and Janet Hawk, Director, Market Analysis and Sales Programs at OCLC, gave a joint presentation entitled Defining Quality As If End Users Matter: The End of the World As We Know It (link to presentations page; actual presentation not up yet). While this program focused on the needs, expectations, and desired functionality of users of WorldCat, there was an underlying theme that stood out to me and could have deep implications for the community.

“Comprehensive, complete and accurate.” I expect that every librarian, and catalogers in particular, would strive to achieve these goals with regard to the information about their collection. The management of the library would likely add cost-effective and efficient to this list as well. These goals have driven a tremendous amount of effort at almost every institution when building its catalog. Information is duplicated, entered into systems (be they card catalogs, ILS, or ERM systems), maintained, and eventually migrated to new systems. However, is this the best approach?

When you log into the Yahoo home page, the Washington Post site, or a service like Netvibes or Pageflakes, for example, what you are presented with is not information culled from a single source, or even two or three. On my Netvibes landing page, I have information pulled from no fewer than 65 feeds, some mashed up, some straight RSS feeds. Possibly (probably), the information in these feeds is derived from dozens of other systems. Increasingly, what the end user experiences might seem like an integrated and cohesive page; on the back end, however, the page is drawing from multiple sources, multiple formats, multiple streams of data. These data streams can be aggregated, merged, and mashed up to provide any number of user experiences. And yet, building a catalog has been an effort to build a single all-encompassing system, with data integrated and combined into one place. It is little wonder that developing, populating, and maintaining these systems requires tremendous amounts of time and effort.
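As a rough illustration of that back-end aggregation (not any particular portal’s actual implementation), here is a small sketch that pulls several RSS feeds and merges them into one date-sorted stream. It assumes the Python feedparser library, and the feed URLs are placeholders.

    # Sketch only: merge several RSS feeds into one combined, date-sorted view.
    import feedparser  # third-party library, assumed to be installed

    FEEDS = [
        "http://example.org/library-news.rss",   # placeholder URLs
        "http://example.org/new-titles.rss",
        "http://example.org/events.rss",
    ]

    items = []
    for url in FEEDS:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            items.append({
                "source": parsed.feed.get("title", url),
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": tuple(entry.get("published_parsed") or ()),
            })

    # Newest items first, regardless of which feed they came from.
    items.sort(key=lambda item: item["published"], reverse=True)
    for item in items[:20]:
        print(item["source"], "-", item["title"])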

Karen’s and Janet’s presentation last week provided some interesting data about the enhancements that different types of users would like to see in WorldCat and WorldCat Local. The key takeaway was that there are different users of the system, with different expectations, needs, and problems. Patrons have one set of problems and desired enhancements, while librarians have another. Neither is right or wrong; they represent different sides of the same coin. What a user wants depends entirely on what they need and expect from a service. This is as true for banking and auto repair as it is for ILS systems and metasearch services.

    Putting together the pieces.

Karen’s presentation followed interestingly from another session that I attended on Friday, in which Andreas Biedenbach, eProduct Manager Data Systems & Quality at Springer Science + Business Media, spoke about the challenges of supplying data from a publisher’s perspective. Andreas manages a team that distributes metadata and content to a variety of consumers of Springer data. These include libraries, but also a diverse range of other organizations such as aggregators, A&I services, preservation services, link resolver suppliers, and even Springer’s own marketing and web site departments. Each of these users of the data that Andreas’ team supplies has its own requirements, formats, and business terms governing the use of the data. The streams range from complicated XML structures to simple comma-separated text files, each in its own format, some standardized, some not. It is little wonder there are gaps in the data, non-conformance, or format issues. The problem is not a lack of appropriate or well-developed standards so much as a lack of conformance, use, and rationalization. We as a community cannot continue to fill customer-specific requests for data that is distributed into the community.

Perhaps the two problems have a related solution. Rather than the community moving data from place to place and populating their own systems with data streams from a variety of authoritative sources, could a solution exist where data streams are merged together in a seamless user interface? There was a session at ALA Annual hosted by OCLC on the topic of mashing up library services. Delving deeper, rather than entering or populating library services with gigabytes and terabytes of metadata about holdings, might it be possible to have entire catalogs that are mashed-up combinations of information drawn from a range of other sources? The only critical information that a library might need to hold is an identifier (ISBN, ISSN, DOI, ISTC, etc.) of each item it holds, drawing additional metadata from other sources on demand. Publishers could supply a single authoritative data stream to the community, which could be combined with other data to provide a custom view of the information based on the user’s needs and engagement. Content is regularly manipulated and re-presented in a variety of ways by many sites; why can’t we do the same with library holdings and other data?
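To make the identifier-only idea a bit more concrete, here is a minimal sketch in which the local record holds just an ISBN plus institution-specific data, and descriptive metadata is fetched from a shared source on demand. The Open Library lookup URL is used only as an assumed stand-in for whatever authoritative service the community might actually rely on.

    # Sketch only: an identifier-only holding record, with descriptive metadata
    # pulled on demand from an external source (URL pattern is an assumption).
    import json
    from urllib.request import urlopen

    LOCAL_HOLDINGS = [
        {"isbn": "9780140328721", "location": "Main stacks", "copies": 2},
    ]

    def fetch_metadata(isbn):
        """Look up descriptive metadata for an ISBN from a shared service."""
        url = "https://openlibrary.org/isbn/%s.json" % isbn
        with urlopen(url) as response:
            return json.load(response)

    for holding in LOCAL_HOLDINGS:
        record = fetch_metadata(holding["isbn"])
        # Merge remote descriptive metadata with institution-specific data.
        print(record.get("title"), "|", holding["location"], "|", holding["copies"], "copies")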

Of course, there are limits to how far this could go: what about unique special collections holdings, physical location information, or cost and other institution-specific data? However, if the workload of librarians could be reduced in significant measure by mashing up data rather than replicating it in hundreds or thousands of libraries, perhaps it would free up time to focus on other services that add greater value for patrons. Similarly, simplifying the information flow out of publishers would reduce errors and incorrect data, as well as reduce costs.

NISO’s New Website

Tuesday, April 1st, 2008

This new website launched publicly over the weekend. It has been in development for about eight months and in planning for even longer. In addition to the obvious new look, and more importantly, the back end of the site has been completely overhauled. With the support of the Andrew W. Mellon Foundation, we were able to invest in a system that provides tools to better help NISO manage its own processes and reporting requirements, coordinate the work of the various technical working groups, provide better document tracking, and improve balloting, registration, and other member services.

The old NISO website included nearly 4,000 HTML, PowerPoint, and PDF pages and almost 2 GB of data in a variety of formats. There was also a variety of contact, voting, and member data in a ColdFusion database. Any transition of this size is bound to be difficult and cause problems. The new site is based in large part on a database of interlinked, dynamically generated pages. As such, we couldn’t simply copy HTML pages over to the new site; most pages needed to be recreated in the new system. We are still in the process of moving over data. Our goal was to move the most recent information first and fill in additional information as we go. We have also found bugs, and we are working with our hosting service to fix them.

Despite the challenges, we think that you will agree that the new site is easier to navigate and the information on it is more accessible. We will be organizing some webcasts later this month to provide training for members and committee members on how to use the system. These webcasts will be recorded and available on the site after the meetings if you can’t join us live. More information on these webcasts will be distributed to the community in the coming weeks.

If you spot any problems or bugs, please email nisohq [at] niso.org. Thank you for your patience as we move through this transition.

Amazing digital conversion presentation at Code4Lib

Wednesday, February 27th, 2008

I am sitting at the Code4Lib meeting in Portland, and I’ve just seen an amazing presentation by Andrew Bullen, a librarian and programmer at the Illinois State Library.  Taking scanned digital images from the Pullman archive’s sheet music collection and running them through music translation software that outputs MIDI, he has produced piano renditions of the scores.  Using the acoustic profile of a local mansion/hotel owned by the Pullman family, he has created MP3 files of the results.  Not knowing how to read music or how to play piano, he has created a fantastic audio translation of the sheet music.  Here is a link to the video.  It is incredible.  Well done, Andrew!
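For readers curious about the acoustic-profile step, one common way to do it (and only my guess at Andrew’s actual method) is convolution reverb: the rendered audio is convolved with an impulse response recorded in the room. Here is a minimal Python sketch, with placeholder file names, assuming NumPy and SciPy and mono WAV input.

    # Sketch only: "place" rendered piano audio in a room via convolution reverb.
    # File names are placeholders; assumes mono WAV files at the same sample rate.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    rate, piano = wavfile.read("piano_from_midi.wav")      # audio rendered from MIDI
    ir_rate, impulse = wavfile.read("room_impulse.wav")    # the room's acoustic profile
    assert rate == ir_rate, "both files should share a sample rate"

    # Each sample of the dry signal is smeared by the room's impulse response.
    wet = fftconvolve(piano.astype(np.float64), impulse.astype(np.float64))
    wet /= np.max(np.abs(wet))                             # normalize to avoid clipping
    wavfile.write("piano_in_room.wav", rate, (wet * 32767).astype(np.int16))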