
NISO Standards Bearer Blog

How the Information World Connects

9 Ways that librarians can support standards adoption

February 15th, 2010

Last week, I was at the Electronic Resources in Libraries conference in Austin, Texas.  This was the fifth meeting of ER&L, and the conference has grown tremendously, becoming an important destination for librarians and publishers focused on electronic content.  There is a growing energy around this conference that reminds me a lot of the Charleston Conference back in the 1990s (or perhaps earlier, but that's when I first attended Charleston).  The organizer of the meeting, Bonnie Tijerina, Electronic Resources Librarian at the UCLA Library, is full of drive and energy, and will, I expect, continue to be a force in the library community for many years to come.  So too is the team of people who stand with Bonnie in making this entire project happen, most of whom wandered about the meeting in t-shirts emblazoned with a welcoming and helpful Texas "Howdee!" in large letters across the chest.

ER&L is generally a relaxed meeting, with attendance capped at around 350 people and a tight schedule of only a few competing sessions, and it involves a lot of participant engagement.  Participants are encouraged to contribute to the conversation via the conference wiki and blog.  The first day also included a lightning-talk session in which anyone could take the stage for five minutes to discuss whatever project they wanted to share.

I took the opportunity to stand up and briefly discuss an important issue for the library community: the adoption of standards by vendors and publishers.  There is often a chicken-and-egg problem with the development of systems interoperability standards.  When two parties need to exchange data, both sides of that exchange need to see the value of investing in implementation.  Implementation has to serve the interests of both communities.  In the case of library systems, the interests of the library staff are usually tied to improving end-user access, reducing data entry, providing more efficient services, or enabling better analysis.  For the vendor, the interest might be simply better customer service and keeping current customers happy, building in response to RFP requests, or possibly gaining a competitive advantage over other systems offerings.  The problem is that in an era when development resources are tight (and they are always tight, only more so now), developing interchange functionality to make the supplier's system work with another system, generally one not developed by the same supplier, doesn't often compete well on the list of development priorities.

How can the library community engage to improve this situation?  During my brief talk at ER&L, I listed a few ways that librarians can encourage the adoption of technical standards by their vendors, such as systems suppliers and publishers:

1) Educate yourself about the different initiatives that are ongoing in the community. NISO offers a series of educational events throughout the year, ranging from webinars to in-person events.  Many of these events are free, such as the Changing Standards Landscape Forum at ALA and the monthly Open Teleconference Series.  Subscribing to NISO's free Newsline or to our magazine, ISQ, is another way to keep abreast of the work ongoing at NISO and elsewhere in the community.

2) Build compliance language into your RFPs and contracts.  A customer never has more power over a vendor than right before he or she is about to purchase something.  While price is often the first thing people think about when negotiating a contract for a system, there are other important elements tied to service levels that should also be considered.  Does the system conform to existing standards, and what exactly is meant by "conformance"?  Conformance can mean different things to different organizations.  Being as clear as you can about your needs from the outset can avoid problems later.  NISO will be updating the NISO RFP Guide later this spring, which will help in this process.

3) Regularly speak with the product managers or account executives at your suppliers.  Product managers are there to provide input and feedback to their development teams, and they are usually a solid source of information for the company about customer needs and expectations.  They can often advocate for your needs within the company.  However, you need to be realistic about what they can achieve, which is why #8 below is an important channel too!

4) Participate in user group meetings and discussion groups: Every successful company will reach out to its customers for feedback and input, especially when new products, services, or platform upgrades are under consideration.  Be mindful of exactly what your needs and concerns are.  This is where your work on education (point #1 above) can be so valuable.

5) Serve on Library Advisory Boards: Most publishers and systems vendors have advisory boards of librarians who provide regular feedback about community conditions and development needs.

6) Open Source development: A variety of libraries are working on the development of new systems and services using Open Source tools and methods.  Building interoperability standards into these systems is a great way to leverage those communities to push adoption by proprietary vendors, since Open Source systems often need to interoperate with proprietary systems to work properly.  In addition, Open Source provides a public forum for the testing and improvement of existing standards.

7) Find out if your suppliers are engaging in standards development work.  All of the rosters of NISO working groups are available online.  Look through them and see which of your suppliers is participating.  If you find a group that you feel would benefit your library, reach out to your suppliers.  Press them to engage if they are not.

8) Go to the top: Contacting the executive leadership at supplier companies is a great way to get action on your needs.  Often, the product managers don't control the development pipeline at an organization, although they are useful as a first and regular point of contact (see #3 above).  Executives can often marshal a wide variety of resources to get a project moving forward, if you can convince them it is valuable to their customers.  Reaching out to the executives is never a bad idea and can usually bring results if your requests are focused and actionable.

9) Get involved yourself: There are many ways that you can engage in standards and best practices work.  You can engage directly with NISO or via any of the variety of mirror groups that exist as part of ALA, ARL, LITA, NFAIS, SLA, or MLA.  In addition to building your own skills, the more engaged you are, the more authoritatively you will be able to speak about your needs.  It also provides an opportunity for your needs to be built into the standards or best practices from the outset. You will be amazed at how similar the issues you face are to those of others in the community.

Did the iPad start a publishing revolution yesterday or not? Wait and see

January 28th, 2010

For Steve Jobs, yesterday might have been a game-changing day for Apple and, by extension, the entire media world.  I'm not sure the world shook in the way that he had hoped, but it's possible that in the future we may look back on yesterday as a bigger day than it appears today.  Such is often the nature of revolutions.

Since very few people have had an iPad in their hands yet, the talk of its pros and cons seems to me premature.  As with previous devices, it will be both more and less than the hype of its debut.  As people begin to use it, and as developers push the boundaries of its capabilities, it will mature and improve.  It was wholly unrealistic to presume that Apple (or any other company launching a new product) would make the technological or political leaps necessary to create the "supreme device" that will replace all existing technology.

A lot of people have made points about the iPad missing this or that technology.  Apple will almost certainly release an iPad 2.0 sometime in early 2011, dropping its price points and adding functionality, both as the underlying display technology (interestingly, not OLED, which has been falsely reported) becomes cheaper and, in some small ways, in response to customer demand for functionality.  In this regard, think of copy and paste on the iPhone. As for some software gaps, such as the lack of Adobe Flash support: while some have made the point that this is a limitation of the iPhone OS, I think these gaps are driven by a desire to lock people into apps and inhibit free, or possibly paid, browser-based web services. It is in Apple's interest to lock people into proprietary software and apps written specifically for its device.

From a standards perspective, the iPad could be either a good or a bad thing.  Again, it is too soon to tell, but initial reactions are worrying.  That the iPad will support .epub as a file format is good on its face.  However, it is very likely that the iPad will contain Apple-specific DRM, since there is no industry standard at the moment.  Getting content into (and out of, for those who want to move away from the iPad) that DRM will be the crucial question.  As far as I am aware, Apple has been publicly silent on that question.  I expect that some of the publishers who agreed to content deals discussed this in detail, but those conversations were likely limited to a very small group of executives all bound by harsh NDAs.  (I note that McGraw Hill was allegedly dropped from the announcement because of comments made by its CEO Tuesday on MSNBC.)

Also on the standards front, there was an excellent interview last night on the NPR news show Marketplace, during which author Josh Bernoff, also of Forrester Research, made the point that the internet is splintering into a variety of device-specific applications.  The move toward applications over the past two years might reasonably be cause for concern; it definitely adds cost for content producers, who must create multiple versions of content for multiple platforms. I can't say that I completely agree with his assessment, however.  There are open platforms available in the marketplace, and competition is forcing developers to open up their systems; note the Google Android phone OS as well as the introduction of the Amazon Kindle Development Kit last week.

What is most interesting about this new product is its potential.  No one could have predicted three years ago the breadth and depth of the applications that have been developed for the iPhone.  Unleashing that creativity on the space of ebooks will very likely prove to be a boon for our community.  Specifically, this could provide publishers with an opportunity to expand the functionality of the ebook.

Often, new technology is at first used to replicate the functionality of the old technology.  In the case of books, I'm referring to the technology of paper. We are only now beginning to see people take advantage of the new digital technology's possibilities.  Perhaps the launch of Amazon's new development kit and the technology platform of the iPad will spur innovative thinking about how to use ebooks and enhance digital content's ability to be an interactive medium.  The one element of the presentation yesterday that really caught my eye in this regard is the new user interface for reading the New York Times, which seemed the most innovative application of the iPad.  Hopefully in the coming months and years we will see a lot more of that experimentation, user interface design, and multi-media integration.

If that takes place, then yesterday might have been a big day in the development of ebooks and information distribution.  If not, the jokes about the name will be all that we'll recall about this new reader.

The free Ebook “bestsellers”

January 27th, 2010

There is an interesting trend in the mass market for e-books, one that is new at this scale: the free book.

Certainly, free book distribution has been used as a marketing tactic for decades, if not centuries.  However, since the release of the Kindle, this distribution mode seems to have really taken off.

An article in the New York Times this past weekend described the growing trend.  As the Times reports, more than half of the “best-selling” e-books on the Kindle, Amazon.com’s e-reader, are available at no charge.

On Saturday afternoon, I double-checked this against Amazon Kindle's top ebook "Bestsellers" list.  Here is how the free titles break down:

Top 5: 4 free; the one paid title, a book about the Kindle, at $0.25

Top 10: 8 of the top 10 free; another paid title at $8.55 (but only $0.95 off the hardcover list price)

Top 15: 11 of the top 15 free; two more paid, at $4.39 and $7.50

Top 20: 14 of the top 20 free; one more at $5.50 and the first at $9.99

Top 25: 17 of the top 25 free; two more at $9.99, including Dan Brown's book

Ten more titles in positions 26-50 were not free, so 18 of the top 50, or only 36%, are paid books.

Eleven more were for-fee books in positions 51-75.

Eleven more in positions 76-100.

In total, 60 of the top 100 "selling" titles for the Kindle are free or public domain books.  Amazon updates this list every hour, so a review of your own would probably not come up with the same results.  However, it seems that at least half, and as many as two-thirds, of the list are not "sellers" at all, but only downloads.

However, that article noted that one author had theoretically "lost" 28,000 sales.  An old friend of mine knows one of the authors in that story, and she told me that the referenced author actually made about $10,000 in royalties on her backlist during the free period, which is incredible.

Chris Anderson, editor of Wired magazine and author of The Long Tail, described how "free" works at the dawn of the internet age in his book Free.  I should note that I was one of those who took advantage of Anderson's business model and read Free at no cost on my Kindle.  What publishers are doing is a perfect example of Anderson's thesis: that people can use the medium of digital distribution, and its *nearly* free cost, to get consumers interested in other, non-free products.  You can download the first book of a series for free, but if you want the rest of the "Twilight" series, you will have to pay. (Note that neither Twilight nor Harry Potter actually employed this model.)  Ten years from now, though, will there be a kids' book series, with movies that kids are eagerly awaiting, that began with the first book given away for free?  I don't think this model will change how people discover and share books, but it certainly will accelerate the process.

BISG Appoints a new Executive Director

January 5th, 2010

The Book Industry Study Group (BISG) has just announced that Scott Lubeck has been appointed its new Executive Director.  Lubeck, most recently Vice President of Technology for Wolters Kluwer Health, Professional and Education, has more than thirty years of publishing industry experience and has been heavily involved in technology and in the design and implementation of digital initiatives. He has also held executive positions with Harvard Business School Publishing and Newsstand, Inc., as well as with Perseus Books Group and National Academy Press.

Michael Healy, the previous Director of BISG, left in May to lead the forthcoming Book Rights Registry, which will form after (if?) the Google Books Settlement is approved by the courts.

BISG and NISO frequently partner on industry events and initiatives.  We look forward to continuing to serve the community together and to working with Scott, and we wish him the best of luck in his transition and in his new role.

Best wishes for a prosperous 2010 from NISO

January 1st, 2010

Happy New Year!  I’d like to take this time, on behalf of the NISO staff and the Board of Directors, to thank you for your involvement and interest in NISO over the past year and to wish you and your organization a prosperous and successful 2010.  The past year at NISO has seen some challenges, but more importantly many, many successes.

Everything that we undertake is only possible through the volunteer contributions of members of the NISO community and the financial support of our members.  While everyone producing, sharing, using and preserving information relies on the work that NISO undertakes, few understand the effort and time that go into standards development.  Those of you who participate in the process–either directly on a working group, or on the ballot review groups, or by supporting adoption through education and outreach–understand how challenging and rewarding consensus work can be.

The coming year will see a great deal of important activity on several different fronts.  We look forward to serving the needs of the community and to making information flow more easily, rapidly, and reliably.  All the best to each of you!

The Memento Project – adding history to the web

November 18th, 2009

Yesterday, I attended the CENDI/FLICC/NFAIS Forum on the Semantic Web: Fact or Myth, hosted by the National Archives.  It was a great meeting, with an overview of ongoing work, tools, and new initiatives.  Hopefully the slides will be available soon, as there was frequently more information than could be expressed in the 20-minute presentations, and many speakers listed what are likely useful references for more information.  Once the slides are available, we'll link through to them.

During the meeting, I had the opportunity to run into Herbert Van de Sompel, who is at the Los Alamos National Laboratory.  Herbert has had a tremendous impact on the discovery and delivery of electronic information. He played a critical role in creating the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), the Open Archives Initiative Object Reuse & Exchange specifications (OAI-ORE), the OpenURL Framework for Context-Sensitive Services, the SFX linking server, the bX scholarly recommender service, and the info URI scheme.

Herbert described his newest project, which has just been released, called the Memento Project. Memento proposes a "new idea related to Web Archiving, focusing on the integration of archived resources in regular Web navigation."  From chatting briefly with Herbert, I learned that the system uses a browser plug-in to view the content of a page as it appeared on a specified date.  It does this by using the underlying content management system's change logs to recreate what appeared on a site at a given time.  The team has also developed some server-side Apache code that handles these requests for content management systems with version control.  If the server is unable to recreate the requested page, the system can instead point to a version of the content from around that date in the Internet Archive (or other similar archive sites).  Herbert and his team have tested this using a few wiki sites.  You can also demo the service from the LANL servers.
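For the curious, here is a rough sketch in Python of the basic idea as I understand it: a client asks a "TimeGate" for the version of a page closest to a desired date by sending that date in a request header.  The TimeGate address below is made up, and the exact header names may differ from what the team's prototype actually uses, so treat this as illustration rather than documentation.

    # Rough sketch (not the project's actual API) of Memento-style datetime
    # negotiation: ask a TimeGate for the version of a page nearest a date.
    import urllib.request

    TIMEGATE = "http://timegate.example.org/"      # hypothetical TimeGate endpoint
    TARGET = "http://www.example.com/some/page"    # the page whose history we want

    request = urllib.request.Request(
        TIMEGATE + TARGET,
        headers={"Accept-Datetime": "Wed, 18 Nov 2009 08:00:00 GMT"},
    )
    with urllib.request.urlopen(request) as response:
        # The TimeGate redirects to the archived copy ("memento") closest to the
        # requested date; urllib follows the redirect automatically.
        print("Resolved to:", response.geturl())
        print("Memento-Datetime:", response.headers.get("Memento-Datetime"))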

Here is a link to a presentation that Herbert and Michael Nelson (his collaborator on this project, at Old Dominion University) gave at the Library of Congress.  There was also a story about this project.  A detailed paper that describes the Memento solution is available on the arXiv site, and there is an article on Memento in the New Scientist as well.  Finally, tomorrow (November 19, 2009, at 8:00 AM EST), there will be a presentation on this at OCLC as part of their Distinguished Seminar Series, which will be available online for free (RSVP required).

This is a very interesting project that addresses one of the key problems with archiving web pages: their content frequently changes.  I am looking forward to the team's future work and hoping that the project gains broader adoption.

Trust but verify: Are you sure this document is real?

November 3rd, 2009

Continuing on the theme of the "leaked" document from a systems supplier in the community that was posted last week: one thing that few asked initially regarding this document is, "Is it real?"  In this case, not 24 hours after the document was "released," the author confirmed that he had written it and that it had been circulating for some time. However, it is amazing what a stir can be started by posting a PDF document anonymously on the Wikileaks website, regardless of its provenance.

Last week was the 40th anniversary of the "birth" of the internet, when two computers were first connected using a primitive router and the first message was transmitted between them: "Lo".  The team was trying to send the command "Login", but the systems crashed before the full message was sent. Later that evening, they were able to get the full message through, and with that the internet, in a very nascent form, was born.  During a radio interview that week, Dr. Leonard Kleinrock, Professor of Computer Science at UCLA and one of the scientists working on those systems that night, spoke about the event.  At one point, Dr. Kleinrock was asked about the adoption of IP version 6. His response was quite fascinating:

Dr. KLEINROCK: Yes. In fact, in those early days, the culture of the Internet was one of trust, openness, shared ideas. You know, I knew everybody on the Internet in those days and I trusted them all. And everybody behaved well, so we had a very easy, open access. We did not introduce any limitations nor did we introduce what we should have, which was the ability to do strong user authentication and strong file authentication. So I know that if you are communicating with me, it’s you, Ira Flatow, and not someone else. And if you send me a file, I receive the file you intended me to receive.

We should’ve installed that in the architecture in the early days. And the first thing we should’ve done with it is turn it off, because we needed this open, trusted, available, shared environment, which was the culture, the ethics of the early Internet. And then when we approach the late ‘80s and the early ‘90s and spam, and viruses, and pornography and eventually the identity theft and the fraud, and the botnets and the denial of service we see today, as that began to emerge, we should then slowly have turned on that authentication process, which is part of what your other caller referred to is this IPV6 is an attempt to bring on and patch on some of this authentication capability. But it’s very hard now that it’s not built deep into the architecture of the Internet.

The issue of provenance has been a critical gap in the structure of the internet from the very beginning.  At the outset, when the number of computers and people connected to the network was small, requiring authentication and validation would have been a significant barrier to a working system.  If you know and trust everyone in your neighborhood, locking your doors is an unnecessary hassle.  In a large city, where you don't know all of your neighbors, locking your doors is a critical routine that becomes second nature.  In our digital environment, the community has grown so large that locking doors (using authentication and passwords to ensure you are who you claim to be) is essential to a functioning community.

Unfortunately, as Dr. Kleinrock notes, we are in a situation where we need to patch some of the authentication and provenance holes in our digital lives.  This brings me back to the document that was distributed last week via Wikileaks.

There is an important need, particularly in the legal and scientific communities, for provenance to be assured.  With digital documents, which are easily manipulated or created and distributed anonymously, confirming the author and source of a document can be difficult.  Fortunately, in this case, the authorship could be and was confirmed easily and quickly enough.  However, in many situations this is not the case, particularly for forged or manipulated documents.  Even when denials are issued, there is no way to prove the negative to a doubtful audience.

The tools for creating extremely professional-looking documents are ubiquitous.  Indeed, the same software that most publishing companies use to create formal published documents is available to almost anyone with a computer.  It would not be difficult to create one's own "professional" documents and distribute them as real.  The internet is full of hoaxes of this sort, and they run the gamut from absurd to humorous to quite damaging.

There have been discussions about the need for better online provenance information for nearly two decades now. While some work on provenance metadata, including PREMIS, METS, and DCMI, is gaining broader adoption, significant standards work remains regarding the authenticity of documents.  The US Government, through the Government Printing Office (GPO), has made progress with the GPO Seal of Authenticity and the digital signature/public key technology in Acrobat versions 7.0 and 8.0.  In January 2009, GPO digitally signed and certified PDF files of all versions of Congressional bills introduced during the 110th and 111th Congresses. Unfortunately, these types of authentication technologies have not been broadly adopted outside the government.  The importance of provenance metadata was also re-affirmed in a recent Arizona Supreme Court case.
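To make the underlying public-key idea concrete (and only the idea; this is not GPO's actual workflow, and the file names below are hypothetical), here is a minimal sketch in Python, using the third-party cryptography library, of verifying a detached signature against a document:

    # Minimal sketch of detached-signature verification; file names are hypothetical
    # and an RSA key pair is assumed.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    with open("publisher_public_key.pem", "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())

    with open("document.pdf", "rb") as f:
        document = f.read()
    with open("document.pdf.sig", "rb") as f:
        signature = f.read()

    try:
        # Raises InvalidSignature if the document or signature has been altered.
        public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
        print("Signature valid: the document is what the signer published.")
    except InvalidSignature:
        print("Signature INVALID: the document was altered or signed by someone else.")

If the signature checks out, a reader knows both who signed the file and that it has not changed since it was signed, which is exactly the kind of provenance assurance discussed above.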

Although it might not help in every case, knowing the source of a document is crucial in assessing its validity.  Until standards are broadly adopted and relied upon, a word of warning to the wise about content on the Internet: “Trust but verify.”

Does your ebook lack that sensory experience?

November 3rd, 2009

Do your e-books lack that special something that print had?  Do you miss the feel and smell of old-fashioned paper and ink?  Well, you needn't worry any longer.  A new product released earlier this year could be the answer to your yearning for the heyday of the printing press: the Smell of Books.

This “aerosol ebook enhancer” is purported to be compatible with a wide range of formats and is described as 100% DRM-compatible.  It is even noted to work with the DAISY Talking Book (NISO Z39.86) standard format.  Smell of Books™ is available in five designer aromas.

*  New  Book Smell
*  Classic Musty Smell
*  Scent of Sensibility
*  Eau You Have Cats
*  Crunchy Bacon Scent

I've submitted a request for a trial size to test on my new Kindle.  I'll post a review once it arrives!

NB: I came across this site today, while searching for examples of funny forgeries.  Thanks to the Museum of Hoaxes for the link.

Open Source isn’t for everyone, and that’s OK. Proprietary Systems aren’t for everyone, and that’s OK too.

November 2nd, 2009

Last week, there was a small dust-up about a "leaked" document from one of the systems suppliers in the community regarding issues with Open Source (OS) software.  The merits of the document itself aren't nearly as interesting as the issues surrounding it and the reactions from the community.  The paper outlined, from the company's perspective, the many issues facing organizations that choose an open source solution, as well as the benefits of proprietary software.  Almost immediately after the paper was released on Wikileaks, the OS community pounced on its release as "spreading FUD" (i.e., fear, uncertainty, and doubt) about OS solutions.  FUD is a label OS supporters apply to corporate communications that promote the use and benefits of proprietary solutions.

To my mind, the first interesting issue is the presumption that any one solution is the "right" one; the sales techniques of both communities understandably presume that their own approach is best for everyone.  This is almost never the case in a marketplace as large, broad, and diverse as the US library market.  Each approach has its own strengths AND weaknesses, and the community should work to understand what those strengths and weaknesses are, on both sides.  A clearer understanding and discussion of those qualities should do much to improve both options for consumers.  There are potential issues with OS software, such as support, bug fixing, long-term sustainability, and staffing costs, that implementers of OS options need to consider.  Similarly, proprietary options can have problems with data lock-in, interoperability challenges with other systems, and customization limitations.  However, each also has its strengths.  With OS, these include openness, the opportunity to solve problems collaboratively with other users, and nearly infinite customizability.  Proprietary solutions provide a greater level of support and accountability, a mature support and development environment, and generally known, fixed costs.

During the NISO Library Resource Management Systems educational forum in Boston last month, part of the program was devoted to a discussion of whether an organization should build or buy an LRMS.  There were certainly positives and downsides described for each approach.  The point that was driven home for me is that each organization's situation is different, and each team brings distinct skills that could push an organization in one direction or another.  Each organization needs to weigh the known and potential costs against its needs and resources.  A small public library might not have the technical skills to tweak OS systems in the way that is often needed.  A mid-sized institution might have staff who are technically expert enough to engage in an OS project.  A large library might be able to reallocate resources but want the support commitments that come with a proprietary solution.  One positive thing about the marketplace for library systems is the variety of options and choices available to management.

Last year at the Charleston Conference, during a discussion of Open Source, I made the comment that, yes, everyone could build their own car, but why would they?  I personally don't have the skills or time to build my own, so I rely on large car manufacturers to do it for me.  When my car breaks, I bring it to a specialized mechanic who knows how to fix it.  On the other hand, I have friends who do have the skills to build and repair cars. They save lots of money doing their own maintenance and have even built sports cars and made a decent amount of money doing so.  That doesn't make one approach right or wrong, better or worse.  Unfortunately, people frequently let these value judgments color the debate about costs and benefits. As with anything in which people have a vested interest in a project's success, there are strong passions in the OS solutions debate.

What makes these systems better for everyone is that there are common data structures and a common language for interacting.  Standards such as MARC, Z39.50, and OpenURL, among others, make the storage, discovery, and delivery of library content more functional and more interoperable.  As with all standards, they may not be perfect, but they have served the community well and provide an example of how we can, as a community, move forward in a collaborative way.
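As a small illustration of what this interoperability looks like in practice, here is a sketch in Python of constructing an OpenURL 1.0 (ANSI/NISO Z39.88-2004) request for a journal article in key/encoded-value form.  The resolver address and the citation values are invented for the example; any compliant link resolver should be able to interpret a request of this shape.

    # Illustrative only: an OpenURL 1.0 (Z39.88-2004) journal-article request
    # in key/encoded-value (KEV) form; the resolver address is made up.
    from urllib.parse import urlencode

    resolver = "http://resolver.example.edu/openurl"
    citation = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.jtitle": "Information Standards Quarterly",
        "rft.volume": "21",
        "rft.issue": "4",
        "rft.spage": "10",
    }
    # The same citation, encoded once, can be handed to any vendor's resolver.
    print(resolver + "?" + urlencode(citation))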

For all of the complaints hurled at the proprietary systems vendors (rightly or wrongly), they do a tremendous amount to support the development of the voluntary consensus standards that all systems are using.  Interoperability among library systems couldn't take place without them.  Unfortunately, the same can't be said for the OS community.  As Carl Grant, President of Ex Libris, asked during the vendor roundtable in Boston, "How many of the OS support vendors and suppliers are members of and participants in NISO?"  Unfortunately, the answer to that question is, as yet, "None."  Given how critical open standards are to the smooth functioning of these systems, it is surprising that they haven't engaged in standards development.  We certainly would welcome their engagement and support.

The other issue that is raised about the release of this document is its provenance.  I’ll discuss that in my next post.

Upcoming Forum on Library Resource Management Systems

August 27th, 2009

In Boston on October 8-9, NISO will host a 2-day educational forum, Library Resource Management Systems: New Challenges, New Opportunities. We are pleased to bring together a terrific program of expert speakers to discuss some of the key issues and emerging trends in library resource management systems as well as to take a look at the standards used and needed in these systems.


The back-end systems upon which libraries rely have become the focus of a great deal of study, reconsideration, and development activity over the past few years.  The integration of search functionality, social discovery tools, access control, and even delivery mechanisms into traditional cataloging systems is necessitating a conversation about how these component parts will work together in a seamless fashion.  There are a variety of approaches, from fully integrated systems to best-of-breed patchworks of systems, and from locally managed software to software-as-a-service approaches.  No single approach is right for all institutions, and there is no panacea for all the challenges institutions face in providing services to their constituents.  However, there are many options an organization can choose from, and careful planning to find the right one can save the institution tremendous amounts of time and effort.  This program will provide some of the background on the key issues that management will need to assess to make the right decision.


Registration is now open and we hope that you can join us.