
Archive for the ‘NISO’ Category

The Memento Project – adding history to the web

Wednesday, November 18th, 2009

Yesterday, I attended the CENDI/FLICC/NFAIS Forum on the Semantic Web: Fact or Myth, hosted by the National Archives.  It was a great meeting with an overview of ongoing work, tools, and new initiatives.  Hopefully the slides will be available soon, since many presentations contained more information than could be conveyed in 20 minutes and listed what are likely useful references for further reading.  Once they are available, we’ll link through to them.

During the meeting, I had the opportunity to run into Herbert Van de Sompel, who is at the Los Alamos National Laboratory.  Herbert has had a tremendous impact on the discovery and delivery of electronic information. He played a critical role in creating the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), the Open Archives Initiative Object Reuse & Exchange specifications (OAI-ORE), the OpenURL Framework for Context-Sensitive Services, the SFX linking server, the bX scholarly recommender service, and info URI.

Herbert described his newest project, which has just been released, called the Memento Project. The Memento project proposes a “new idea related to Web Archiving, focusing on the integration of archived resources in regular Web navigation.”  As Herbert briefly explained to me, the system uses a browser plug-in to view the content of a page as it appeared on a specified date.  It does this by using the underlying content management system’s change logs to recreate what appeared on a site at a given time.  The team has also developed server-side Apache code that handles these date-based requests for systems that have version control.  If the server is unable to recreate the requested page, the system can instead point to a version of the content from around that date in the Internet Archive (or other similar archive sites).  Herbert and his team have tested this using a few wiki sites.  You can also demo the service from the LANL servers.
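
To make the idea concrete, here is a rough sketch of what that date-based negotiation might look like from a client’s point of view. This is only my own illustration, not code from the Memento team: the TimeGate address and the exact header names are placeholders, so consult the project’s documentation for the real protocol details.

    import urllib.request

    # Hypothetical sketch: ask a Memento "TimeGate" for the version of a page
    # as it existed around a given date. The TimeGate URL and the header names
    # are illustrative placeholders, not the project's confirmed interface.
    TIMEGATE = "http://example.org/timegate/"
    TARGET = "http://en.wikipedia.org/wiki/Web_archiving"

    request = urllib.request.Request(TIMEGATE + TARGET)
    # Datetime content negotiation: ask for the resource as of this date.
    request.add_header("Accept-Datetime", "Tue, 18 Nov 2008 12:00:00 GMT")

    with urllib.request.urlopen(request) as response:
        # The TimeGate is expected to redirect to an archived copy (a "memento");
        # a response header reports when that copy was actually captured.
        print(response.geturl())
        print(response.headers.get("Memento-Datetime"))

In Herbert’s description, the browser plug-in essentially does this negotiation for you, and falls back to the Internet Archive when the original server cannot reconstruct the page.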

Here is a link to a presentation on this project that Herbert and Michael Nelson (his collaborator at Old Dominion University) gave at the Library of Congress.  A detailed paper that describes the Memento solution is also available on the arXiv site, and there is an article on Memento in the New Scientist.  Finally, tomorrow (November 19, 2009 at 8:00 AM EST), there will be a presentation on this at OCLC as part of their Distinguished Seminar Series, which will be available online for free (RSVP required).

This is a very interesting project that addresses one of the key problems with archiving the web: page content frequently changes.  I am looking forward to the team’s future work and hope that the project gets broader adoption.

Open Source isn’t for everyone, and that’s OK. Proprietary Systems aren’t for everyone, and that’s OK too.

Monday, November 2nd, 2009

Last week, there was a small dust-up in the community over a “leaked” document from one of the systems suppliers in the community about issues regarding Open Source (OS) software.  The merits of the document itself aren’t nearly as interesting as the issues surrounding it and the reactions from the community.  The paper outlined, from the company’s perspective, the many issues that face organizations that choose an open source solution, as well as the benefits of proprietary software.  Almost immediately after the paper was released on Wikileaks, the OS community pounced on its release as “spreading FUD (i.e., Fear, Uncertainty, and Doubt)” about OS solutions.  This is a description OS supporters use for corporate communications that promote the use and benefits of proprietary solutions.

To my mind, the first interesting issue is the presumption that any one solution is the “right” one; the sales techniques from both communities understandably presume that each community’s approach is best for everyone.  This is almost never the case in a marketplace as large, broad, and diverse as the US library market.  Each approach has its own strengths AND weaknesses, and the community should work to understand what those strengths and weaknesses are, from both sides.  A clearer understanding and discussion of those qualities should do much to improve both options for consumers.  There are potential issues with OS software, such as support, bug fixing, long-term sustainability, and staffing costs, that implementers of OS options need to consider.  Similarly, proprietary options can have problems with data lock-in, interoperability challenges with other systems, and customization limitations.  However, each also has its strengths.  With OS, these include openness, an opportunity to collaboratively problem-solve with other users, and nearly infinite customizability.  Proprietary solutions provide a greater level of support and accountability, a mature support and development environment, and generally known, fixed costs.

During the NISO Library Resources Management Systems educational forum in Boston last month, part of the program was devoted to a discussion of whether an organization should build or buy an LRMS.  There were certainly positives and downsides described for each approach.  The point that was driven home for me is that each organization’s situation is different and each team brings distinct skills that could push an organization in one direction or another.  Each organization needs to weigh the known and potential costs against its needs and resources.  A small public library might not have the technical skills to tweak OS systems in the way that is often needed.  A mid-sized institution might have staff that are technically expert enough to engage in an OS project.  A large library might be able to reallocate resources, but want the support commitments that come with a proprietary solution.  One positive thing about the marketplace for library systems is the variety of options and choices available to management.

Last year at the Charleston Conference, during a discussion of Open Source, I made the comment that, yes, everyone could build their own car, but why would they?  I personally don’t have the skills or time to build my own, so I rely on large car manufacturers to do it for me.  When it breaks, I bring it to a specialized mechanic who knows how to fix it.  On the other hand, I have friends who do have the skills to build and repair cars.  They save lots of money doing their own maintenance and have even built sports cars and made a decent amount of money doing so.  That doesn’t make one approach right or wrong, better or worse.  Unfortunately, people frequently let these value judgments color the debate about costs and benefits.  As with everything where people have a vested interest in a project’s success, there are strong passions in the OS solutions debate.

What makes these systems better for everyone is that there are common data structures and a common language for interacting.  Standards such as MARC, Z39.50, and OpenURL, among others, make the storage, discovery, and delivery of library content more functional and more interoperable.  As with all standards, they may not be perfect, but they have served the community well and provide an example of how we can, as a community, move forward in a collaborative way.
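
As a small illustration of what that common language buys, here is a rough sketch of how a system might construct an OpenURL-style link to a library’s link resolver.  The resolver address and citation values are placeholders of my own, not taken from any particular product, and the key names are only meant to suggest the flavor of the standard’s key/value format.

    from urllib.parse import urlencode

    # Hypothetical link resolver address; a real installation would supply its own.
    RESOLVER = "http://resolver.example.edu/openurl"

    # A journal-article citation expressed as OpenURL-style key/value pairs.
    citation = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft.atitle": "An example article title",
        "rft.jtitle": "Journal of Examples",
        "rft.date": "2009",
    }

    # Any system that can build this kind of link, open source or proprietary,
    # can hand the user off to whichever resolver their library happens to run.
    print(RESOLVER + "?" + urlencode(citation))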

For all of the complaints hurled at the proprietary systems vendors (rightly or wrongly), they do a tremendous amount to support the development of the voluntary consensus standards that all systems use.  Interoperability among library systems couldn’t take place without them.  Unfortunately, the same can’t be said for the OS community.  As Carl Grant, President of Ex Libris, asked during the vendor roundtable in Boston, “How many of the OS support vendors and suppliers are members of and participants in NISO?”  Unfortunately, the answer to that question is “none,” as yet.  Given how critical open standards are to the smooth functioning of these systems, it is surprising that they haven’t engaged in standards development.  We certainly would welcome their engagement and support.

The other issue that is raised about the release of this document is its provenance.  I’ll discuss that in my next post.

Tina Feick discusses NISO Institutional Identifier (I2) Project

Tuesday, August 25th, 2009

Each month, NISO hosts an Open Teleconference during which we highlight the work ongoing within NISO and provide an opportunity for anyone in the community to provide feedback to NISO on any work or suggested new work.  These calls take place on the second Monday of each month and have been quite a success for NISO since they were launched in January.

In August, Tina Feick was the guest speaker on the call.  Tina is Director of Sales and Marketing at Harrassowitz and co-chair of the NISO Institutional Identifiers (I2) Working Group.  She discussed the status of the I2 project, its goals, and the group’s soon-to-be-released draft descriptive metadata structure.

Here is a link to a recording of a portion of the call:  NISO Open Teleconference – August 10, 2009 – Institutional ID

The next Open Teleconference will take place on September 14th at 3:00 PM EST.  The dial-in instructions are distributed in NISO Newsline.  They are also available on the Events page of the NISO website.

A lesson in doing customer service well: Thanks, Amazon

Sunday, July 12th, 2009

I am often astounded by the lack of customer service in many organizations, particularly in the services environment.  We all have horror stories, and more than a few of us (myself included) are not shy about complaining publicly when things go wrong.  Sadly, this doesn’t usually change the offending company’s behavior.  My belief is that we should just as vehemently praise a company when it does a good job of responding to customer needs.

Last month, I purchased a new 2nd generation Kindle.  I’d been meaning to for a while, but finally got around to it in mid-June.  It is a great device and a substantial improvement over the 1st generation device.  The size, speed, and design, particularly the navigation button placement, are vastly improved.  I’m also a HUGE fan of the Whispersync service with my iPhone Kindle app.  If only Amazon would implement native EPUB support, but that’s another matter….  When I purchased it last month the price was $359, a lot for a reader, but in all honesty, I needed a new toy!  Seriously, anyone in publishing should be paying very close attention to what is taking place in the ebook space, since it will radically transform our industry in the coming years.  Having an e-reader and engaging with electronic books is critical to understanding these changes and planning for the future, and every publishing executive should have one and use it regularly.  So, while not exactly pleased with the price, I saw it as an investment in staying abreast of technology in our community.

Last Thursday, while in a cab on my way to a meeting of NISO’s Architecture Committee at the University of Chicago, I read a Twitter post that Amazon had just dropped the price of the Kindle to $299.  A wise move, I thought, from the perspective of boosting market share and increasing adoption.  Few apart from tech early adopters, publishing execs, and some commuters who read voraciously would see the value at nearly $400.  As the price drops, the value becomes more apparent.  A few have mentioned $199 as the price point where ebook readers will take off in a mass-market way.  My feeling is that somewhere under $100 is the point at which you’ll start to see them everywhere.  Based on the estimated cost of about $185 to produce the current generation Kindle, the price is not unreasonable, but still more than most people will shell out.  There’s been some analysis about what the price drop means and whether it’s a good sign or bad for the ebook market.  In all likelihood, it is a good sign: with increasing sales (or even competition from other readers) come economies of scale, which lead to lower prices.  Since the Kindle 2 was just released in late February, it is unlikely that there is a 3rd generation product coming out the door from Amazon in the coming months.

At first, this price drop irritated me, personally, quite a bit.  If only I’d waited another couple of weeks, I thought at first, I would have saved myself the $60.  Not the end of the world, but frustrating nonetheless.  However, word quickly spread online that Amazon would refund those who had purchased the device in the 30 days prior to the price change if they contacted customer service.  Although I couldn’t find any specific information about the refund on the Amazon site, I knew from the postings online that customer service would respond if queried.  And this is where Amazon deserves a fair amount of praise.  Thursday evening at around 11:00 pm, I sent an email to their customer service department:

I purchased a Kindle 2 on June 13th. I’ve noticed that Amazon has dropped the price of the new Kindle by $60 today.  It has been circulating on various blogs and postings that if someone purchased a Kindle in the last 30 days (which I have), Amazon was issuing refunds on the $60 difference.  If this is correct, how do I go about requesting that refund?

As a general customer service suggestion, I can find no information about this policy anywhere on your site.  Also, since your company is extremely efficient at tracking orders, couldn’t you make this account adjustment automatically, without having customers go through the hassle of tracking customer service down and requesting the refund?  Obviously, not in Amazon’s financial best interest, but it would be easy and simple as well as good PR if you did it automatically.  Everyone knows you have and store this information.  Why couldn’t you make it easier?  Just a thought from a former marketing executive and publishing professional.

With thanks,

Todd

By 9:30 the next morning, I’d received the following response:

Hello,

Thanks for contacting us about the recent price change for Kindle (Latest Generation). I’ve reviewed your order and see the price change occurred within 30 days from the time your Kindle order shipped.

Because of the circumstances, I’ll issue a refund for the price difference in the amount of $60. You should see the refund in the next 2-3 business days.

Thanks for your comments about refunding automatically to customers. We’ll consider your feedback as we plan further improvements.

Customer feedback like yours really helps us continue to improve our store and provide better service to our customers. Thanks for taking time to offer us your thoughts. I appreciate your thoughts, and I’ll be sure to pass your suggestion along.

And by 4:30 the $60 refund had been redeposited into my account.  Now, I certainly feel for those who missed the 30-day deadline for the refund.  The person who passed the information on to me about the refunds had actually missed the deadline by about a week and so wasn’t able to get the refund, despite pleading with Amazon.  However, Amazon has always had a policy of 30-day returns and price-change refunds, so this fits within established practice for them.  Regardless, I was extremely pleased with Amazon’s customer service responsiveness.  For all of my flaming of companies that don’t do customer service well, I thought it was only fair to highlight a company that had done something right.  Now, if only my credit card company would pay attention…

Kodak takes the Kodachrome away

Thursday, June 25th, 2009

I grew up in Rochester, NY, which is home to Kodak, the iconic film company founded by George Eastman.  Much as Detroit is a car town renowned for Ford, GM, and Chrysler, Rochester was known for its film industry.  Nearly everyone I knew had some tie to Kodak, and most of my friends’ fathers were engineers of some sort at the many plants around town.  At its peak in 1988, Kodak employed more than 145,000 people; it is now down to fewer than 30,000.  The last time I was home, I was shocked by parking lots on sites that were massive manufacturing plants when I was growing up.  Many of the buildings of one industrial park were actually imploded in 2007.  It was a stark indication of just how much Kodak had changed and how far they had fallen.

Kodak announced earlier this week that it would be ceasing production of Kodachrome film.  Kodachrome had long been recognized for its true-tone color and its archival qualities.  It was great slide film and among the first mass-market color films available.  It was even memorialized in a famous Paul Simon song.  Unfortunately, like all great film products, its days have been numbered for over a decade.  Now, if you’re one of the few who still shoot with Kodachrome, there’s only one facility, based in Kansas, that processes the film.  Unfortunately, despite its quality and history, that won’t save it from the dustbin of chemistry and manufacturing.

Kodak was a company built on 19th-century technology: chemicals on plastic that captured images.  It was old-world manufacturing on a massive scale.  It was also a company that clung to its cash-cow core business well past the time when that was tenable, focused on its rivalry with other, mainly Japanese, film manufacturers.  It did not see, or more likely chose not to focus on, the seismic shift in its core business toward digital media.  This is despite the fact that it was a Kodak engineer, Steven Sasson, who created the first digital camera in 1975.  Although Kodak released a digital SLR camera in 1991 (in partnership with Nikon and as a Nikon-branded product), at $13,000 it was hardly for the average consumer.  It would take more than a quarter century after Sasson’s original prototype before Kodak released its first mass-market digital camera in 2001.  Just after Kodak peaked in the late 80s and early 90s and began dueling with Fuji for control of the film market, the rest of the consumer electronics market had begun to move on.

Today, Kodak receives some 70% of its revenue from digital activities.  It holds the top share of the digital camera market, with nearly a quarter of the market.  Had it moved more quickly, in all likelihood it could have held a much larger share.  After all, “Kodak moment” used to be a common phrase for a moment that should be captured on film.  While the company spoke of capturing the moment, it was really focused on what it thought its business to be: chemicals.  The real problem was that people didn’t care about chemicals; they cared about the moment.  How best to capture the moment, and how to do so quickly and cheaply, was what consumers cared about.  Very quickly, as processors sped up, storage costs dropped, image sensors improved, and all of this technology became a great deal cheaper, the old model of chemicals on plastic was displaced.

The lessons of Kodak and its slow reaction to the changes in its industry should be a warning sign to those whose businesses are being affected by the switch to digital media.  Focusing only on preserving your cash-cow business could be detrimental to your long-term success and survival.  The academic journal publishing industry moved quickly to embrace online distribution.  However, in many respects there are still ties to print, and many publishers still rely on the 19th- and 20th-century revenue streams of the print-based economy.  The e-book world is even more tied to print models and distribution.  For example, the Kindle is in so many ways a technological derivative of print: much of the experience of reading an e-book on the Kindle is based on the experience of reading print.  Even more than the technical experience of reading, the business models and approaches to distributing content are completely tied to print sales streams.  There are so many new approaches that have not even been considered or tried.  If you are not paying attention, don’t be surprised when the entire market shifts under your business’s feet.

Pew Releases 3rd Report on Internet

Sunday, December 14th, 2008

The Pew Internet & American Life Project has issued the third report on The Future of the Internet. There are a lot of interesting opinions contained in the report and it is worth reviewing. In particular:

* More than 3/4 agreed that the mobile phone will be the main connection tool for accessing the internet.

* Nearly 2/3 of the experts interviewed disagreed that copyright protection will be addressed technologically.

Of particular interest for our community is the section on copyright and IP. Included in the report are some interesting perspectives on the future control of IP. The responses run the gamut of existing approaches to the issues of licensing, control, and user-generated content. I was slightly disappointed that there didn’t seem to be any truly innovative approaches to this very large problem.

Life partners with Google to post photo archive online

Wednesday, December 3rd, 2008

Life magazine, which ceased as an ongoing publication in April of 2007, has partnered with Google to digitize and post the magazine’s vast photo archive.  Most of the collection has never been seen publicly and amounts to a huge swath of America’s visual history since the 1860s.   The release of the collection was announced on the Google Blog.  The first part of the collection is now online, with the remaining 80% being digitized over the next “few months”.  Of course, this does not mean that all images in Life will be online, only those that were produced by the staff photographers (i.e., where Life holds the copyright), not the famous freelancers.

I can find no mention anywhere of money changing hands, either from Google for the rights or as a revenue stream to support the ongoing work, although one can purchase prints of the images.  From a post on this from paidContent.org:

  Time Inc.’s hopes, Life president Andy Blau explains: “We did this deal for really one reason, to drive traffic to Life.com. We wanted to make these images available to the greater public … everything else from that is really secondary.”  

While exploring the collection, I also noticed Google’s Image Labeler, a game for tagging images.  The goal of the game is to earn points by matching your tags with those of another random player when you both see the same images.  The game was launched in September of 2006.  I spent about 5 minutes using it; what is truly scary is the number of points racked up by the “all time leaders”.  As of today, “Yew Half Maille” had collected 31,463,230 points.  Considering that I collected about 4,000 points in my 5 minutes, that rate works out to nearly 40,000 minutes, or more than 650 hours, of play.  How much time are people spending doing this?

Continuing the theme of focusing on what one needs to read

Monday, November 24th, 2008

This week in Time magazine there was an interesting article, “How Many Blogs Does the World Need?” by Michael Kinsley. The crux of the article is something that I’ve touched on in a couple of recent posts: How do people separate the quality content from the diatribes, the meaningful from the inappropriate, and the groundbreaking from the debunked?

There may have been an explosion in the amount of content, which Kinsley is decrying; however, I don’t think that is the problem. Kinsley was among the leaders of the move online, having helped put Slate on the map, which makes his article all the stranger. His is only one of several voices in recent weeks complaining about the profusion of blogs and voices contributing to the public square. A similar article was published in Wired last month, again by another successful online writer at Valleywag. Limiting the voices or contributions of any number of authors (quality or not) shouldn’t be the answer. The critical need in our community is to provide structures by which people can find appropriate content, along with assessment measures and tools that readers can use to determine which content is appropriate for them.

Journals (in the scholarly world) have long played this role of vetting and editing content, and people could be moderately assured that the content in a particular journal would meet some general standards of quality. In a disaggregated world, this isn’t the case. How can you tell whether one article on the web, or in ScienceDirect, or in PLoS ONE, is any better or more appropriate to your interests before investing the time and energy to read it? Finding a replacement for the journal’s role in our open-web world will be one of the biggest challenges of the coming years.

Magazine publishing going digital only — PC Magazine to cease print

Wednesday, November 19th, 2008

Another magazine announced today that it will cease publication of its print edition. In an interview with the website paidContent.org, the CEO of Ziff Davis, Jason Young, announced that PC Magazine will cease distribution of its print edition in January.

PC Magazine is just one of several mass-market publications that are moving to online-only distribution. Earlier this week, Reuters reported that a judge has approved the reorganization of Ziff Davis, which is currently under Chapter 11 bankruptcy protection. There has been some speculation about the future of Ziff Davis’ assets.

From the story:

The last issue will be dated January 2009; the closure will claim the jobs of about seven employees, all from the print production side. None of the editorial employees, who are now writing for the online sites anyway, will be affected.

Only a few weeks ago, the Christian Science Monitor announced that it would be ending print distribution. The costs of producing and distributing paper have always been a significant expense for publishers, and in a period of decreasing advertising revenues, lower circulation, and higher production costs, we can expect more publications to head in this direction.

Within the scholarly world in particular, I expect that the economics will drive print distribution to print-on-demand for those who want to pay extra, and that print journals will quickly become a thing of the past. I know a lot of people have projected this for a long time. ARL produced an interesting report on this topic, written by Rick Johnson, last fall, and it appears we’re nearing the tipping point Rick described in that report.

This transition makes all the more critical the ongoing work on preservation, authenticity, reuse, and rights, particularly as they relate to the differences between print and online distribution.

Google Settlement gets tentative court approval

Tuesday, November 18th, 2008

Yesterday, the New York court judge overseeing the publishers/authors/Google settlement gave tentative approval to the deal.  More details are here.