
Archive for the ‘Resource Sharing’ Category

OLE to hold a web seminar on Evergreen 12/9

Friday, December 5th, 2008

The OLE Project will be hosting a free webinar on December 9th discussing the Evergreen project, run by the Georgia Public Library Service. Earlier this fall, at the NISO Collaborative Resource Sharing seminar, two of the presenters in this webinar, Julie Walker and Elizabeth McKinney, spoke about the Evergreen project. Their presentation is available here.

There is also an article in the forthcoming issue of ISQ on the OLE Project.  The issue will be available online soon.  

More information is on the OLE website.

From the site:  

December 9, 2008

5:00 pm to 6:00 pm

John Little will host a webcast discussion with the principal developers and drivers of the Evergreen Project. The webcast will be open to the first 100 participants, recorded for playback, and made available on the Oleproject.org site. To register for the webcast: Register Now

Participants include:

• John Little, ILS Support Section Head, Duke University

• Julie Walker, Deputy State Librarian, Georgia Public Library Service

• Tim Daniels, Assistant State Librarian

• Elizabeth McKinney, PINES Program Director

• Chris Sharp, PINES System Administrator

Changing the ideas of a catalog: Do we really need one?

Wednesday, November 19th, 2008

Here’s one last post on thoughts regarding the Charleston Conference.

Friday afternoon during the Charleston meeting, Karen Calhoun, Vice President, WorldCat and Metadata Services at OCLC, and Janet Hawk, Director, Market Analysis and Sales Programs at OCLC, gave a joint presentation entitled Defining Quality As If End Users Matter: The End of the World As We Know It (link to presentations page – actual presentation not up yet). While this program focused on the needs, expectations, and desired functionality of WorldCat users, an underlying theme came out to me that could have deep implications for the community.

“Comprehensive, complete and accurate.” I expect that every librarian, catalogers in particular, would strive to achieve these goals with regard to the information about their collection. The management of the library would likely add cost-effective and efficient to this list as well. These goals have driven a tremendous amount of effort at almost every institution when building its catalog: information is duplicated, entered into systems (be they card catalogs, ILS, or ERM systems), maintained, and eventually migrated to new systems. However, is this the best approach?

When you log into the Yahoo web page, the Washington Post, or a service like Netvibes or Pageflakes, what you are presented with is not information culled from a single source, or even two or three. On my Netvibes landing page, I have information pulled from no fewer than 65 feeds, some mashed up, some straight RSS feeds. Possibly (probably), the information in these feeds is derived from dozens of other systems. Increasingly, what the end user experiences might seem integrated and cohesive; on the back end, however, the page is drawing from multiple sources, multiple formats, multiple streams of data. These data streams could be aggregated, merged, and mashed up to provide any number of user experiences. And yet, building a catalog has been an effort to build a single all-encompassing system, with data integrated and combined into one place. It is little wonder that developing, populating, and maintaining these systems requires tremendous amounts of time and effort.
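The aggregation described above is mechanically simple. Here is a minimal sketch of how a "mashed up" page might merge several RSS feeds into one reverse-chronological stream; the feed contents are inlined for illustration (a real aggregator would fetch them over HTTP), and the feed names and items are invented for this example.

```python
# Minimal sketch: merge items from several RSS feeds into one view.
# Feeds are inlined strings here; a real page would fetch them.
import xml.etree.ElementTree as ET

FEEDS = [
    """<rss><channel>
         <item><title>Library news A</title><pubDate>2008-11-19</pubDate></item>
         <item><title>Library news B</title><pubDate>2008-11-17</pubDate></item>
       </channel></rss>""",
    """<rss><channel>
         <item><title>Publisher update C</title><pubDate>2008-11-18</pubDate></item>
       </channel></rss>""",
]

def aggregate(feeds):
    """Parse each feed, collect (date, title) pairs, newest first."""
    items = []
    for xml in feeds:
        root = ET.fromstring(xml)
        for item in root.iter("item"):
            items.append((item.findtext("pubDate"), item.findtext("title")))
    return sorted(items, reverse=True)

for date, title in aggregate(FEEDS):
    print(date, title)
```

The point is that no single system ever holds all three items; the merged view exists only at presentation time.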

Karen’s and Janet’s presentation last week provided some interesting data about the enhancements that different types of users would like to see in WorldCat and WorldCat Local. The key takeaway was that there are different users of the system, with different expectations, needs, and problems. Patrons have one set of problems and desired enhancements, while librarians have another. Neither is right or wrong; they represent different sides of the same coin – what a user wants depends entirely on what they need and expect from a service. This is as true for banking and auto repair as it is for ILS systems and metasearch services.

    Putting together the pieces.

Karen’s presentation followed interestingly from another session that I attended on Friday, in which Andreas Biedenbach, eProduct Manager Data Systems & Quality at Springer Science+Business Media, spoke about the challenges of supplying data from a publisher’s perspective. Andreas manages a team that distributes metadata and content to a variety of users of Springer data. This includes libraries, but also a diverse range of other organizations such as aggregators, A&I services, preservation services, link resolver suppliers, and even Springer’s own marketing and web site departments. Each of these users has its own requirements, formats, and business terms governing the use of the data. These streams range from complicated XML feeds to simple comma-separated text files, each in its own format, some standardized, some not. It is little wonder there are gaps in the data, non-conformance, or format issues. The problem is not a lack of appropriate or well-developed standards so much as one of conformance, use, and rationalization. We as a community cannot continue to accommodate customer-specific requests for data that is distributed into the community.
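One way to picture the alternative to per-recipient feeds is to keep a single canonical record and derive each partner's format from it. This is a hypothetical sketch (the record fields are invented), not Springer's actual pipeline:

```python
# Sketch: one authoritative record, multiple derived output formats.
# The record and its fields are illustrative, not a real feed schema.
import csv
import io
import xml.etree.ElementTree as ET

record = {"isbn": "9783540123456", "title": "Example Monograph", "year": "2008"}

def to_xml(rec):
    """Serialize the canonical record as a simple XML fragment."""
    book = ET.Element("book")
    for key, value in rec.items():
        ET.SubElement(book, key).text = value
    return ET.tostring(book, encoding="unicode")

def to_csv(rec):
    """Serialize the same record as a comma-separated text file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rec.keys()))
    writer.writeheader()
    writer.writerow(rec)
    return buf.getvalue()
```

Because both serializers read from the same source record, a correction made once propagates to every downstream format, rather than having to be patched in each customer-specific feed.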

Perhaps the two problems have a related solution. Rather than the community moving data from place to place, with each institution populating its own systems with data streams from a variety of authoritative sources, could a solution exist where data streams are merged together in a seamless user interface? There was a session at ALA Annual, hosted by OCLC, on the topic of mashing up library services. Delving deeper: rather than populating library systems with gigabytes and terabytes of metadata about holdings, might it be possible to have entire catalogs that were mashed-up combinations of information drawn from a range of other sources? The only critical information a library might need to hold is an identifier (ISBN, ISSN, DOI, ISTC, etc.) for each item it holds, drawing additional metadata from other sources on demand. Publishers could supply a single authoritative data stream to the community, which could be combined with other data to provide a custom view of the information based on the user’s needs and engagement. Content is regularly manipulated and re-presented in a variety of ways by many sites; why can’t we do the same with library holdings and other data?
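The "thin catalog" idea above can be sketched in a few lines: the library stores only identifiers plus institution-specific data, and resolves descriptive metadata from external sources at lookup time. The sources here are in-memory dicts standing in for real services (a publisher feed, a union catalog); all the names and records are invented for illustration.

```python
# Sketch of a "thin" catalog keyed by identifier. External sources are
# mocked as dicts; a real system would call remote services on demand.
PUBLISHER_FEED = {"9780596000271": {"title": "Programming Perl", "year": "2000"}}
UNION_CATALOG = {"9780596000271": {"subjects": ["Perl (Computer program language)"]}}

# The only data the library itself keeps: identifier -> local details.
LOCAL_HOLDINGS = {"9780596000271": {"location": "Main stacks", "call_no": "QA76.73"}}

def resolve(isbn):
    """Merge on-demand metadata from each remote source with the
    institution-specific data held locally."""
    merged = dict(LOCAL_HOLDINGS.get(isbn, {}))
    for source in (PUBLISHER_FEED, UNION_CATALOG):
        merged.update(source.get(isbn, {}))
    return merged
```

The local store stays tiny; titles, subjects, and other descriptive metadata live once, at their authoritative sources, and are combined only when a user asks.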

Of course, there are limitations to how far this could go: what about unique special collections holdings, physical location information, cost, and other institution-specific data? However, if the workload of librarians could be reduced in significant measure by mashing up data rather than replicating it in hundreds or thousands of libraries, perhaps it would free up time to focus on other services that add greater value for patrons. Similarly, simplifying the information flow out of publishers would reduce errors and incorrect data, as well as reduce costs.

FedEx – Physical delivery and Resource Sharing – Part 4

Tuesday, October 14th, 2008

It occurred to me after writing the last post that our collective mindset about physical delivery has changed radically in the past two decades. In preparing my regular article for Against the Grain, I thought I would write some more about changing user expectations regarding when people can receive things. The most obvious of the services that have radically changed our mindset about delivery is FedEx. A quick search of the web turned up a number of vintage commercials about FedEx services. It’s amazing how services that had been almost exclusively for business have become ubiquitous. Some classics: here, here, here, and here.

Advice from Peter Drucker – an idea from the Resource Sharing meeting – Part 2

Saturday, October 11th, 2008

During the Collaborative Resource Sharing meeting earlier this week, Adam Wathem, Interim Head of the Collections Services Department, Kansas State University Libraries, wrapped up the meeting by discussing barriers to efficiency within libraries. It was a great presentation that brought together the threads of the conversations and presentations throughout the meeting. At one point in the presentation (available here), Adam quoted Peter Drucker, summarizing one of the problems that libraries face:

“There is nothing so useless as doing efficiently that which should not be done at all.”  

How much of the workflow in our institutions is bound up in efficiently doing things that no longer meet the user needs and expectations of today?