
Archive for the ‘ILS’ Category

Open Source isn’t for everyone, and that’s OK. Proprietary Systems aren’t for everyone, and that’s OK too.

Monday, November 2nd, 2009

Last week, there was a small dust-up in the community over a “leaked” document from one of the systems suppliers regarding issues with Open Source (OS) software.  The merits of the document itself aren’t nearly as interesting as the issues surrounding it and the reactions from the community.  The paper outlined, from the company’s perspective, the many issues that face organizations that choose an open source solution, as well as the benefits of proprietary software.  Almost immediately after the paper was released on Wikileaks, the OS community pounced on its release as “spreading FUD (i.e., Fear, Uncertainty, and Doubt)” about OS solutions.  This is a description OS supporters use for corporate communications that promote the use and benefits of proprietary solutions.

To my mind, the first interesting issue is the presumption that any one solution is the “right” one; the sales techniques from both communities understandably presume that each community’s approach is best for everyone.  This is almost never the case in a marketplace as large, broad, and diverse as the US library market.  Each approach has its own strengths AND weaknesses, and the community should work to understand what those strengths and weaknesses are, on both sides.  A clearer understanding and discussion of those qualities should do much to improve both options for consumers.  There are potential issues with OS software, such as support, bug fixing, long-term sustainability, and staffing costs, that implementers of OS options need to consider.  Similarly, proprietary options can have problems with data lock-in, interoperability challenges with other systems, and customization limitations.  However, each also has its strengths.  With OS these include openness, an opportunity to collaboratively problem-solve with other users, and nearly infinite customizability.  Proprietary solutions provide a greater level of support and accountability, a mature support and development environment, and generally known, fixed costs.

During the NISO Library Resources Management Systems educational forum in Boston last month, part of the program was devoted to a discussion of whether an organization should build or buy an LRMS.  There were certainly positives and downsides described for each approach.  The point that was driven home for me is that each organization’s situation is different, and each team brings distinct skills that could push an organization in one direction or another.  Each organization needs to weigh the known and potential costs against its needs and resources.  A small public library might not have the technical skills to tweak OS systems in the way that is often needed.  A mid-sized institution might have staff who are technically expert enough to engage in an OS project.  A large library might be able to reallocate resources, but want the support commitments that come with a proprietary solution.  One positive thing about the marketplace for library systems is the variety of options and choices available to management.

Last year at the Charleston Conference, during a discussion of Open Source, I made the comment that, yes, everyone could build their own car, but why would they?  I personally don’t have the skills or time to build my own, so I rely on large car manufacturers to do it for me.  When it breaks, I bring it to a specialized mechanic who knows how to fix it.  On the other hand, I have friends who do have the skills to build and repair cars.  They save lots of money doing their own maintenance and have even built sports cars and made a decent amount of money doing so.  That doesn’t make one approach right or wrong, better or worse.  Unfortunately, people frequently let these value judgments color the debate about costs and benefits.  As with anything in which people have a vested interest in a project’s success, there are strong passions in the OS solutions debate.

What makes these systems better for everyone is that there are common data structures and a common language for interacting.  Standards such as MARC, Z39.50, and OpenURL, among others, make the storage, discovery, and delivery of library content more functional and more interoperable.  As with all standards, they may not be perfect, but they have served the community well and provide an example of how we as a community can move forward in a collaborative way.
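To make the interoperability point concrete, here is a minimal sketch of how an OpenURL (ANSI/NISO Z39.88-2004) request for a journal article can be built as a key/encoded-value query string. The resolver base URL and the citation values are illustrative, not taken from any real service:

```python
from urllib.parse import urlencode

# Hypothetical link-resolver endpoint; each institution runs its own.
resolver = "https://resolver.example.edu/openurl"

# Illustrative citation metadata, expressed as Z39.88-2004
# key/encoded-value pairs for a journal article.
context = {
    "ctx_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.jtitle": "Information Standards Quarterly",
    "rft.atitle": "The OLE Project",
    "rft.volume": "20",
    "rft.spage": "4",
}

link = resolver + "?" + urlencode(context)
print(link)
```

Because every system that speaks OpenURL agrees on these key names, any conforming resolver can interpret a link built this way, regardless of which vendor produced it.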

For all of the complaints hurled at the proprietary systems vendors (rightly or wrongly), they do a tremendous amount to support the development of the voluntary consensus standards that all systems are using.  Interoperability among library systems couldn’t take place without them.  Unfortunately, the same can’t be said for the OS community.  As Carl Grant, President of Ex Libris, asked during the vendor roundtable in Boston, “How many of the OS support vendors and suppliers are members of and participants in NISO?”  Unfortunately, the answer to that question is “none,” as yet.  Given how critical open standards are to the smooth functioning of these systems, it is surprising that they haven’t engaged in standards development.  We certainly would welcome their engagement and support.

The other issue that is raised about the release of this document is its provenance.  I’ll discuss that in my next post.

Upcoming Forum on Library Resource Management Systems

Thursday, August 27th, 2009

In Boston on October 8-9, NISO will host a 2-day educational forum, Library Resource Management Systems: New Challenges, New Opportunities. We are pleased to bring together a terrific program of expert speakers to discuss some of the key issues and emerging trends in library resource management systems as well as to take a look at the standards used and needed in these systems.


The back-end systems upon which libraries rely have become the center of a great deal of study, reconsideration, and development activity over the past few years.  The integration of search functionality, social discovery tools, access control, and even delivery mechanisms into traditional cataloging systems is necessitating a conversation about how these component parts will work together in a seamless fashion.  There are a variety of approaches, from a fully integrated system to a best-of-breed patchwork of systems, and from locally managed software to software-as-a-service approaches.  No single approach is right for all institutions, and there is no panacea for all the challenges institutions face in providing services to their constituents.  However, there are many options an organization can choose from.  Careful planning can help find the right one and can save the institution tremendous amounts of time and effort.  This program will provide some of the background on the key issues that management will need to assess to make the right decision.


Registration is now open and we hope that you can join us. 

OLE to hold a web seminar on Evergreen 12/9

Friday, December 5th, 2008

The OLE Project will be hosting a free webinar on December 9th discussing the Evergreen project, run by the Georgia Public Library Service.  Earlier this fall, at the NISO Collaborative Resource Sharing seminar, two of the presenters in this webinar, Julie Walker and Elizabeth McKinney, spoke about the Evergreen project.  Their presentation is available here.

There is also an article in the forthcoming issue of ISQ on the OLE Project.  The issue will be available online soon.  

More information is on the OLE website.

From the site:  

December 9, 2008

5:00 pm to 6:00 pm

John Little will host a webcast discussion with the principal developers and drivers of the Evergreen Project. The webcast will be open to the first 100 participants, recorded for playback, and made available on the Oleproject.org site. To register for the webcast: Register Now

Participants include:

• John Little, ILS Support Section Head, Duke University

• Julie Walker, Deputy State Librarian, Georgia Public Library Service

• Tim Daniels, Assistant State Librarian

• Elizabeth McKinney, PINES Program Director

• Chris Sharp, PINES System Administrator

Changing the ideas of a catalog: Do we really need one?

Wednesday, November 19th, 2008

Here’s one last post on thoughts regarding the Charleston Conference.

Friday afternoon during the Charleston meeting, Karen Calhoun, Vice President, WorldCat and Metadata Services at OCLC, and Janet Hawk, Director, Market Analysis and Sales Programs at OCLC, gave a joint presentation entitled Defining Quality As If End Users Matter: The End of the World As We Know It (link to presentations page – actual presentation not up yet). While this program focused on the needs, expectations, and desired functionality of users of WorldCat, an underlying theme emerged that could have deep implications for the community.

“Comprehensive, complete and accurate.” I expect that every librarian, and catalogers in particular, would strive to achieve these goals with regard to the information about their collection. The management of the library would likely add cost-effective and efficient to this list as well. These goals have driven a tremendous amount of effort at almost every institution when building its catalog. Information is duplicated, entered into systems (be they card catalogs, ILS, or ERM systems), maintained, and eventually migrated to new systems. However, is this the best approach?

When you log into a web page such as Yahoo or the Washington Post, or a service like Netvibes or Pageflakes, what you are presented with is not information culled from a single source, or even two or three. On my Netvibes landing page, I have information pulled from no fewer than 65 feeds, some mashed up, some straight RSS feeds. Possibly (probably), the information in these feeds is derived from dozens of other systems. Increasingly, what the end user sees might seem like an integrated and cohesive experience; however, on the back end the page is drawing from multiple sources, multiple formats, multiple streams of data. These data streams can be aggregated, merged, and mashed up to provide any number of user experiences. And yet, building a catalog has been an effort to build a single all-encompassing system, with data integrated and combined into one place. It is little wonder that developing, populating, and maintaining these systems requires tremendous amounts of time and effort.
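The aggregation pattern a portal page applies can be sketched in a few lines. The feed data below is hypothetical and already parsed; a real page would pull dozens of RSS/Atom feeds over HTTP, but the merge step is the same:

```python
from datetime import datetime

# Hypothetical, already-parsed feed entries standing in for live RSS.
feeds = {
    "library-news": [
        {"title": "New OPAC release", "date": datetime(2008, 11, 10)},
        {"title": "Metadata workshop", "date": datetime(2008, 11, 3)},
    ],
    "standards-blog": [
        {"title": "OpenURL update", "date": datetime(2008, 11, 7)},
    ],
}

def mash_up(feeds, limit=10):
    """Merge entries from every source into one stream, newest first."""
    merged = [
        dict(entry, source=name)
        for name, entries in feeds.items()
        for entry in entries
    ]
    merged.sort(key=lambda e: e["date"], reverse=True)
    return merged[:limit]

for entry in mash_up(feeds):
    print(entry["date"].date(), entry["source"], "-", entry["title"])
```

The sources stay authoritative for their own content; only the presentation layer combines them, which is exactly the inversion of the traditional load-everything-into-one-catalog approach.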

Karen’s and Janet’s presentation last week provided some interesting data about the enhancements that different types of users would like to see in WorldCat and WorldCat Local. The key takeaway was that there are different users of the system, with different expectations, needs, and problems. Patrons have one set of problems and desired enhancements, while librarians have another. Neither is right or wrong; they represent different sides of the same coin – what a user wants depends entirely on what they need and expect from a service. This is as true for banking and auto repair as it is for ILS systems and metasearch services.

    Putting together the pieces.

Karen’s presentation followed interestingly from another session that I attended on Friday, in which Andreas Biedenbach, eProduct Manager Data Systems & Quality at Springer Science + Business Media, spoke about the challenges of supplying data from a publisher’s perspective. Andreas manages a team that distributes metadata and content to a variety of complicated users of Springer data. This includes libraries, but also a diverse range of other organizations such as aggregators, A&I services, preservation services, link resolver suppliers, and even Springer’s own marketing and web site departments. Each of these users of the data that Andreas’ team supplies has its own requirements, formats, and business terms, which govern the use of the data. The streams range from complicated XML structures to simple comma-separated text files, each in its own format, some standardized, some not. It is little wonder there are gaps in the data, non-conformance, or format issues. Similarly, the problem is not a lack of appropriate or well-developed standards so much as one of conformance, use, and rationalization. We as a community cannot continue to fulfill customer-specific requests for data that is distributed into the community.

Perhaps the two problems have a related solution. Rather than each institution moving data from place to place and populating its own systems with data streams from a variety of authoritative sources, could a solution exist where data streams are merged together in a seamless user interface? There was a session at ALA Annual, hosted by OCLC, on the topic of mashing up library services. Delving deeper, rather than entering or populating library services with gigabytes and terabytes of metadata about holdings, might it be possible to have entire catalogs that were mashed-up combinations of information drawn from a range of other sources? The only critical information that a library might need to hold is an identifier (ISBN, ISSN, DOI, ISTC, etc.) for each item it holds, drawing additional metadata from other sources on demand. Publishers could supply a single authoritative data stream to the community, which could be combined with other data to provide a custom view of the information based on the user’s needs and engagement. Content is regularly manipulated and re-presented in a variety of ways by many sites; why can’t we do the same with library holdings and other data?
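The identifier-only catalog described above can be sketched as follows. Both external sources here are hypothetical stand-ins (a publisher feed and an aggregator API, represented as dictionaries); the point is that the library itself records nothing but ISBNs and merges descriptive metadata on demand:

```python
# Hypothetical external sources, keyed by ISBN. In practice these
# would be live services supplying authoritative metadata streams.
publisher_feed = {
    "9780262033848": {"title": "Introduction to Algorithms",
                      "publisher": "MIT Press"},
}
aggregator_api = {
    "9780262033848": {"subjects": ["Computer algorithms"]},
}

# All the library itself stores: identifiers for the items it holds.
local_holdings = ["9780262033848"]

def catalog_record(isbn):
    """Build a display record by merging every source's metadata."""
    record = {"isbn": isbn}
    for source in (publisher_feed, aggregator_api):
        record.update(source.get(isbn, {}))
    return record

catalog = [catalog_record(isbn) for isbn in local_holdings]
print(catalog[0]["title"], "-", catalog[0]["subjects"])
```

Because the merge happens at display time, a correction in the publisher's stream propagates to every library instantly, instead of waiting for thousands of local catalogs to be re-loaded.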

Of course, there are limitations to how far this could go: what about unique special collections holdings, physical location information, or cost and other institution-specific data? However, if the workload of librarians could be reduced in significant measure by mashing up data rather than replicating it in hundreds or thousands of libraries, perhaps it would free up time to focus on other services that add greater value for patrons. Similarly, simplifying the information flow out of publishers would reduce errors and incorrect data, as well as reduce costs.

Ex Libris changes hands

Tuesday, August 12th, 2008

Last week, Ex Libris announced that it was being acquired by Leeds Equity Partners.  According to Haaretz.com, the deal is worth “an estimated $170 million.”  The company had previously been owned by Francisco Partners, which purchased Ex Libris in November 2006 for approximately $60-65 million.  Selling at roughly three times the price paid only two years ago, I’d say that Francisco received a pretty good return on its investment.  Leeds is invested in a number of industries, from administrative support software, to for-profit post-secondary education, to property management systems, to furniture for education, healthcare, and hospitality.  A description of Leeds culled from their website:

Leeds Equity Partners is a private equity firm focused on investments in the education, training and information and business services industries (the “Knowledge Industries”).  …  We focus on investments across all of the Knowledge Industries, which includes education, training and information and business services. We broadly define these sectors to include businesses offering products, services and solutions that enable individuals and enterprises to be more effective in an increasingly global, hyper-competitive, information-intensive and fast-changing marketplace. … Since 1993, Leeds Equity has invested in more than 20 companies across all of the Knowledge Industries, representing a total enterprise value of more than $4.1 billion.     

The announcement came quick on the heels of the news that Carl Grant would be re-joining Ex Libris as President, North America.  Carl is a long-time supporter of NISO and standards development, having spent time as Chair of the Standards Development Committee, as a member of the Board of Directors, as Treasurer, and a term as Chair of the Board.  Carl had been President of CARE Affiliates, a service firm that provided support for open source systems implementers.  CARE Affiliates was acquired by LibLime in August.

Open Library Environment (OLE) Project – Planning open ILS systems

Tuesday, August 5th, 2008

The Open Library Environment (OLE) Project, a new initiative funded by the Mellon Foundation, launched its website this week.  The group aims to develop plans for the next generation of library automation systems, built upon a modular SOA approach. Quoting from their Project Overview, the group “will convene the academic library community in planning an open library management system built on Service Oriented Architecture (SOA). Our goal is to think beyond the current model of an Integrated Library System and to design a new system that is flexible, customizable and able to meet the changing and complex needs of modern, dynamic academic libraries.”  The group will first research library processes and model the practices and systems necessary, and through that process they hope to build a community around the effort.  This project has ties to the DLF project on ILS Discovery Interfaces and a number of other open source development initiatives in the community looking to address this issue.  It is also interesting to note that at least one ILS system vendor, Ex Libris, recently announced its new Open-Platform Strategy.  There will certainly be interesting developments from the OLE Project, and it will be worth watching how their recommendations tie in with other ongoing work.  Of course, system interoperability relies heavily on standard data structures and interfaces.  If the end results aren’t easily plug-and-play, only the largest and most technically savvy organizations will be able to take advantage of the advances.