
Archive for October, 2008

The future of paper

Friday, October 31st, 2008

Looking forward (and I’m not much of a futurist), I expect that one of the key technological developments of the next decade will be the improvement of electronic paper display technology. Low-cost technology for digital imaging of text and images on paper-like readers will bring to life the potential of digital content. True integration of multimedia will occur, and what we now know as the book will be altered radically. One need only think of the newspapers in the Harry Potter series to imagine where we are probably headed. It is frequently noted that the first applications of a new technology often look and feel like the old technology. We are currently in that stage with electronic books and electronic media, although this is changing slowly.

Earlier this month at the International Meeting on Information Display (IMID), Samsung demonstrated the world’s first carbon nanotube-based color active matrix electrophoretic display (EPD) e-paper. From Samsung’s press release:

“Electrophoretic displays offer inherent advantages over traditional flat panel displays due to their low power consumption and bright light readability, making them well suited for handheld and mobile applications. Since they can be produced on thin, flexible substrates, EPD’s also are ideally suited for use in e-paper applications.

“Unlike conventional flat panel displays, electrophoretic displays rely on reflected light, and can retain text or images without constant refreshing, thereby dramatically reducing power consumption.”


Of course, Samsung isn’t the only player in this market; many others are developing e-paper technology. For those not regularly involved in the display technology space, it may come as a surprise how much activity is taking place there. Since a good portion of publisher and library investments are tied to paper, changes in display technology are an area we should be watching more closely.

In September, Esquire magazine released its 75th anniversary issue with an electronic paper cover. Here’s a video of how it looked. The technology was provided by E Ink, the same company that produced the display screen for the Amazon Kindle. The digital cover added only $2.00 to the newsstand price, which I expect hardly covered the costs; Ford likely underwrote much of them with an inside front-cover ad using the same technology. USA Today also reported that the issue contained more ads than any other issue in the past 11 years. If one were looking for an exciting new trend in an otherwise depressed world of magazine and newspaper publishing, this might be a start.

What will be fascinating is watching how this new technology develops, in fits and starts I’m sure, over the next decade. It will have a profound impact on the production and sharing of information, as well as on the existing business models for selling information. The clunkiness of the current generation of e-book readers led a third of those responding to a survey Library Journal conducted at the Frankfurt Book Fair to say that “digital content would never surpass traditional books sales.” This might have something to do with another finding: “almost 60 percent of respondents said they do not currently use ebooks and e-readers at all.”

While the transition to ebooks might not take place in the next decade, I think e-paper display technology will advance quickly and the transition will come sooner than we all think.

Flickr project at Library of Congress

Thursday, October 30th, 2008

Further to the CENDI meeting held yesterday:

Deanna Marcum was the opening speaker of the meeting, and her presentation focused primarily on the report on the Future of Bibliographic Control and her response to it. One of the recommendations of that report was that libraries should invest in making their special collections available. One thing that LC has in abundance is special collections.

Deanna discussed the pilot project on Flickr to post digitized images on the service and encourage public tagging of the images. The pilot includes scans of “1,600 color images from the Farm Security Administration/Office of War Information and 1,500+ images from the George Grantham Bain News Service.” As of today the project has 4,665 items on Flickr. The group has had great success in getting thousands of people to tag and enrich the images with descriptions. In clicking through a number of images, I found that most looked like they’d received more than 2,000 views each. That translates to more than 9 million views (although I could be overshooting the total, given my very small sample size), and I know from my own account that page reloads lead to a fair amount of double-counting. Regardless, this is a terrific amount of visibility for an image collection that many wouldn’t have been able to see before it was digitized.

In glancing through the tags that have been added to the images, I expect that there is much that would concern a professional cataloger. Many of the tags conform to the odd space-less text string convention on Flickr. Also, from the perspective of making images easier to find, I’d say the results are mixed. LC will be producing a report of their results “in the next few weeks” (per Deanna).

Finally, I’m not sure that providing public-domain library content freely to commercial organizations is in the best interests of the contributing library. This follows on some further consideration of my post yesterday on Google’s settlement with the publishing and authors communities over the Google Book project.

After the meeting, I took the opportunity of being at LC to see the exhibition on Creating the United States. Yesterday was the last day of the exhibition, so unfortunately, if you hadn’t seen it already, it will be “a number of years” before LC brings the Jefferson draft of the Declaration of Independence back out of the vaults. Along with the exhibition on the American founding, they also have on display the Jefferson library collection and the Waldseemüller maps. The maps are among the most important in the history of cartography: they were the first, in 1507 and 1516, to name the landmass across the Atlantic from Europe “America.” I believe the maps will continue to be on display for some time. I encourage anyone in the area to stop in and take a look.

CENDI Meeting on Metadata and the future of the iPod

Wednesday, October 29th, 2008

I was at the CENDI meeting to speak today about metadata and new developments related to metadata. There were several great presentations during the morning, some worthy of additional attention. My presentation is here.

The presenter prior to me was Dr. Carl Randall, Project Officer from the Defense Technical Information Center (DTIC). Carl’s presentation was excellent. He spoke to the future of search and a research report that he wrote on Current Searching Methodology And Retrieval Issues: An Assessment. Carl ended his presentation with a note about an article he’d just read entitled Why the iPod is Doomed, written by Kevin Maney for portfolio.com.

The article focused on why the iPod is doomed: the author posits that the iPod’s technology is outdated and will soon be replaced by online “cloud” computing services. To paraphrase the article: the more entrenched a business is, the less likely it will be able to change when new competitors arise to challenge its existing model.

Another great quote from the article: “In tech years, they [i.e., the iPod and iTunes] are older than Henry Kissinger.”

I don’t quibble with the main tenet of the article: that services will move to the web, and that we will come to think it quaint to have purchased and downloaded individual songs and then carried them around on hard drives that store the files locally. The iPod hardware and the iTunes model of by-the-drink downloads are both likely to have limited futures. I do think that Apple, through the iPhone, is probably better placed than anyone else to transition its iTunes service to a subscription or cloud-based model. The article dismisses this as unlikely because Apple hasn’t talked about it, which ignores the fact that Apple never talks about its plans until it is ready to announce a product or service.

As we move to an era of “cloud” computing, where both applications and content are hosted on the network rather than on individual devices, it is likely that people will prefer to purchase subscription access to all content on demand rather than just the limited content they specifically purchase.

A subscription model also provides new opportunities to expose users to new content. From my perspective, despite having over 10,000 songs in my iTunes library, I’ve been reluctant to purchase new content that I wasn’t already familiar with. I have used Last.fm and other services (anyone remember FM radio?) to become acquainted with new music. Part of the reason is that the barrier for me is time rather than cost, but I expect that perceived cost is an issue for many potential users. I say “perceived” because much research and practical experience shows that consumers will pay more for ongoing subscription services than they will in one-time up-front costs.

Moving content to the “cloud” provides many opportunities for content providers to exercise a measure of control that had been lost. By hosting files rather than distributing them (streaming as distinct from downloading, for example), content providers have a greater ability to control distribution. Access becomes an issue of authentication and rights management, as opposed to DRM wrapping and other more onerous and intrusive approaches. Many of us have become quite comfortable with renting movies through Blockbuster, Netflix, or cable OnDemand services.

There are downsides for customers in moving to the cloud. There are very different rights associated with “renting” a thing (content, cars, houses, etc.) versus owning it. How willing users will be to give up those rights for the convenience of the cloud is an open question. Likely, the convenience will override the long-term interest in the rights. Frequently, people don’t realize how little control they have over the cloud until the owners of a service burn them by taking it away in some fashion. If you’ve stored all of your photos on Flickr and the company deletes your account for whatever reason, you’ll wish that you had more control over the terms of service. From my perspective, I’d rather retain ownership and control of the content I’ve purchased in those areas where I’m invested in preserving access or rights to reuse. I don’t know that the majority of users share my view on this, likely because they don’t spend much time thinking about the potential impacts.

This is something libraries, in particular, should be focused on, having outsourced the preservation of digital content to publishers and organizations like Portico.

However, I do know that these distribution models represent a huge opportunity for suppliers of content in all forms. The risk of not acting, or of failing to react, is that a new upstart provider will displace the old. I grew up in Rochester, NY, where Kodak was king of photography around the world for decades. Now Kodak is but a shadow of its former self, looking for a new business model in an era of digital imaging rather than the film and processing that were its specialty.

Google and publishers reach settlement on Google Books project

Tuesday, October 28th, 2008

Google, the Authors Guild and the Association of American Publishers announced this morning a settlement in the ongoing court case  regarding the Google book digitization project launched in 2004.  

According to the release: 

“If approved, the Settlement will authorize Google to continue to scan in-copyright Books and Inserts; to develop an electronic Books database; to sell subscriptions to the Books database to schools, corporations and other institutions; to sell individual Books to consumers; and to place advertisements next to pages of Books. Google will pay Rightsholders, through a Book Rights Registry (“Registry”), 63% of all revenues earned from these uses, and the Registry will distribute those revenues to the Rightsholders of the Books and Inserts who register with the Registry.

“The proposed Settlement also will authorize Google to provide public and higher education libraries with free access to the Books database. Certain libraries that are providing Books to Google for scanning are authorized to make limited “non-display uses” of the Books.”


In addition:

Google will make payments totaling $125 million to establish the Book Rights Registry, to resolve existing claims by authors and publishers and to cover legal fees.  Of this total, Google will pay $34.5 million for the establishment and initial operations of the Book Rights Registry.  Google will also pay a minimum of $45 million to pay rightsholders whose Books and Inserts were digitized prior to the deadline for rightsholders to opt out of the settlement.

It was reasonably clear from the outset that there would be a settlement, since the activity Google was undertaking was, from most reasonable perspectives, a violation of copyright for in-copyright works. I think it is also clear that while the information will be available to patrons of the partner libraries, the rest of us will have to pay for access to Google’s library at some point down the road. There is a lot to chew on here, and it is something we’ll be talking about for weeks and months to come.

More information from Google is here.  From AAP here.

Microsoft, Open ID and the future of authentication

Tuesday, October 28th, 2008

Microsoft announced today that the company is throwing its weight behind the OpenID system. Microsoft’s Live ID will become an OpenID with the launch of their OpenID Provider (OP), which will initially be available within Microsoft’s Community Technology Preview testing service. From the announcement: “The current Technology Preview release is for testing purposes only, and is not intended for widespread adoption at this stage. After a period of industry testing and feedback, we will be incorporating any necessary fixes and feature enhancements into the next revision, to be released to Production sometime in 2009.”
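
For readers less familiar with the mechanics, here is a minimal sketch (mine, not Microsoft’s implementation) of the first step of the OpenID 2.0 “checkid_setup” flow: the relying party builds a redirect URL that hands the user off to an OpenID Provider such as the one Microsoft is launching. The URLs below are hypothetical placeholders, and a real relying party would discover the OP endpoint via Yadis/XRDS rather than hard-coding it.

    # Sketch of an OpenID 2.0 authentication request (relying-party side).
    # The endpoint and identifiers below are hypothetical placeholders.
    from urllib.parse import urlencode

    def build_checkid_setup_url(op_endpoint, claimed_id, return_to, realm):
        params = {
            "openid.ns": "http://specs.openid.net/auth/2.0",
            "openid.mode": "checkid_setup",   # interactive login at the OP
            "openid.claimed_id": claimed_id,  # identifier the user supplied
            "openid.identity": claimed_id,
            "openid.return_to": return_to,    # where the OP sends the user back
            "openid.realm": realm,            # scope the assertion applies to
        }
        return op_endpoint + "?" + urlencode(params)

    # The relying party redirects the browser to this URL, then verifies the
    # signed response the OP appends to return_to when the user comes back.
    print(build_checkid_setup_url(
        "https://op.example.com/auth",          # hypothetical OP endpoint
        "https://alice.example.org/",           # hypothetical claimed identifier
        "https://rp.example.net/openid/return",
        "https://rp.example.net/",
    ))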

There is a site, Demand OpenID, that lists non-compliant websites where users are demanding OpenID support. The list includes some of the most recognized sites on the web, such as Google, Twitter, Facebook, Wikipedia, YouTube, and del.icio.us. It will be very interesting to see who else follows Microsoft’s lead in this area.

This action is not surprising given Microsoft’s support of open standards, which hit its stride with the ISO standardization of OOXML earlier this year. In the release, they note: “We have been tracking the evolution of the OpenID specification, from its birth as just a dream and a vision through its development into a mature, de facto standard with terms that make it viable for us to implement it now.” The fact that Microsoft waits for non-Microsoft standards to mature before throwing its weight behind them suggests that standards development will remain competitive, and that we will be living with a number of competing standards for several years.

The New York Times covered the announcement in today’s edition.  Quoting from the article:

“This move by a traditionally proprietary organization like Microsoft could be the signal that gives the market – both large and small players combined – the confidence to invest more time and energy into the widespread adoption of OpenID. That is good news for OpenID proponents. And it’s equally good news for all of us who are interested in simplifying the management of our identity across the multitude of sites we use on a day-to-day basis.” 

Microsoft has a Windows Live ID blog where more information will be posted as the testing moves forward. The library and publishing communities have been dealing with the thorny issue of authentication for a number of years. The application of OpenID solves part of the problem, but it does not address the other key aspect of authentication: certification. Considerable work remains to rationalize authentication and identity management, but making the process simpler for end users through single sign-on is a big step in the right direction.

The al dente style of standards

Monday, October 27th, 2008

Although outside of NISO’s normal remit, I thought I’d provide a bit of fun for a rainy fall Monday. ISO has just announced the publication of a standard on “state of the art” cooking of pasta. From the release:

“ISO 7304-2:2008 – Alimentary pasta produced from durum wheat semolina – Estimation of cooking quality by sensory analysis – Part 2: Routine method describes a test method for laboratories to determine a minimum of cooking time for pasta.” “This International Standard specifies a method for assessing, by sensory analysis, the quality of cooked alimentary pasta in the form of long, solid strands (e.g. spaghetti) or short, hollow strands (e.g. macaroni) produced from durum wheat semolina, expressed in terms of the starch release, liveliness and firmness characteristics (i.e. texture) of the pasta.”

I know at least one of my college roommates who could have benefited from the advice this standard provides. I am sure that the development of this standard was … tasty.

RDA draft by month’s end?

Wednesday, October 22nd, 2008

Roy Tennant is reporting on his blog at Library Journal that the full draft of RDA will be distributed by month’s end. There’s no confirmation on the Joint Steering Committee website; indeed, no minutes or agendas from any meeting since April have been posted yet. The original release had been scheduled for August, but the group was unable to hit that deadline. The group maintains that there will be a three-month public comment period, which will extend into January.

Island of San Serriffe

Tuesday, October 21st, 2008

Collecting maps is one of my hobbies. I’ve always loved how people viewed the world in years gone by and how they tried to represent those views on paper. There’s a very interesting site, Strangemaps.com, which highlights a miscellany of maps and geographical displays. Earlier this month, the site highlighted the semi-colonial state of San Serriffe, one of the great April Fools’ Day pranks by the Guardian newspaper in the UK. Those of you in the publishing world might find this amusing for the references to typography. Others in our community focused on the issue of country codes might also find a place for this island nation in ISO 3166.

Island of San Serriffe

More about the history of San Serriffe can be found here on the Museum of Hoaxes website.

ISO names Robert Steele as new Secretary General

Tuesday, October 21st, 2008

ISO has announced the appointment of a new Secretary General.  Below is the press release we received. 

ISO PRESS RELEASE / COMMUNIQUE DE PRESSE DE L’ISO (VERSION FRANCAISE CI-APRES) 

New ISO Secretary-General as of 1 January 2009

ISO (International Organization for Standardization) will have a new Secretary-General from 1 January 2009. He is Mr. Robert Steele who was appointed by the ISO Council at its meeting in Dubai on 17 October 2008, following the 31st ISO General Assembly.

Last March, the ISO Council initiated the process to select a new Secretary-General for the organization after the current Secretary-General, Mr. Alan Bryden, indicated that he was at the Council’s disposal to enable a smooth transition to a successor so that the latter would be in office from the launching next year of the consultations relating to the new ISO Strategic Plan.

MORE: http://www.iso.org/iso/pressrelease.htm?refid=Ref1168 

VIEWS and Web Services

Monday, October 20th, 2008

In a recent Ariadne article on The Networked Library Service Layer, the author stated that “NISO started a Web services initiative, VIEWS but this has lain dormant since 2004.” In actuality, the VIEWS project, after its transfer to NISO, resulted in a recommended practice. No criticism of the author, who is well-versed in standards, but this just shows how difficult it can be to keep track of developing standards that may take years to come to fruition. (I’ll save standards current awareness for a later blog entry.)

VIEWS, which stands for Vendor Initiative for Enabling Web Services, actually started as an independent vendor activity in June 2004 to leverage the use of web services for integrating library applications from disparate vendors. While the original vision had been to develop and test actual web services interoperability protocols, a survey of web services usage and a metasearch white paper led to the conclusion that such a goal might be premature due to basic infrastructure issues such as authentication and service discovery.

In September 2005, NISO was asked to take over the activity and a new working group was formed, including representatives from the original members of VIEWS and some new members that included non-vendors. The new working group focused on developing a recommended practice that would outline both actual and potential uses of web services in a library context as an alternative to a full API.

Their document, Best Practices for Designing Web Services in the Library Context (NISO RP 2006-01), was issued in July 2006. A number of library applications of web services are described and best practices are recommended in the areas of HTTP caching, filtering of user input, output formats, security, and throttling.
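
To make a couple of those areas concrete, here is a small illustrative sketch (my own, not an excerpt from the recommended practice) of a web service handler that sets HTTP caching headers and applies simple per-client throttling; the port, interval, and JSON payload are arbitrary placeholders.

    # Illustrative sketch: HTTP caching and throttling in a tiny web service.
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LAST_REQUEST = {}     # client address -> time of its most recent request
    MIN_INTERVAL = 1.0    # throttle: at most one request per second per client

    class LibraryServiceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            client = self.client_address[0]
            now = time.time()
            # Throttling: refuse clients calling more often than MIN_INTERVAL.
            if now - LAST_REQUEST.get(client, 0.0) < MIN_INTERVAL:
                self.send_response(503)
                self.send_header("Retry-After", "1")
                self.end_headers()
                return
            LAST_REQUEST[client] = now

            body = b'{"status": "ok"}'   # placeholder response payload
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            # Caching: allow clients and proxies to reuse this response for an hour.
            self.send_header("Cache-Control", "public, max-age=3600")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), LibraryServiceHandler).serve_forever()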

Since that recommended practice was issued, NISO has put its own toes in the web services water, so to speak, with the SUSHI standard protocol, which is built on a web services platform.

Posted by Cynthia Hodgson