
Archive for the ‘technology’ Category

NISO response to the National Science Board on Data Policies

Wednesday, January 18th, 2012

Earlier this month, the National Science Board (NSB) announced it was seeking comments from the public on the report from the Committee on Strategy and Budget Task Force on Data Policies, Digital Research Data Sharing and Management.  That report was distributed last December.

NISO has prepared a response on behalf of the standards development community, which was submitted today.  Here are some excerpts of that response:

The National Science Board’s Task Force on Data Policies comes at a watershed moment in the development of an infrastructure for data-intensive science based on sharing and interoperability. The NISO community applauds this effort and the focused attention on the key issues related to a robust and interoperable data environment.

….

NISO has particular interest in Key Challenge #4: The reproducibility of scientific findings requires that digital research data be searchable and accessible through documented protocols or methods. Beyond its historical involvement in these issues, NISO is actively engaged in forward-looking projects related to data sharing and data citation. NISO, in partnership with the National Federation of Advanced Information Services (NFAIS), is nearing completion of a best practice for how publishers should manage supplemental materials that are associated with the journal articles they publish. With a funding award from the Alfred P. Sloan Foundation and in partnership with the Open Archives Initiative, NISO began work on ResourceSync, a web protocol to ensure large-scale data repositories can be replicated and maintained in real time. We have also had conversations with the DataCite group about formal standardization of their IsCitedBy specification. [Todd Carpenter serves] as a member of the ICSTI/CODATA task force working on best practices for data citation, and NISO looks forward to promoting and formalizing any recommendations and best practices that derive from that work.

….

We strongly urge that any further development of data-related best practices and standards take place in neutral forums that engage all relevant stakeholder communities, such as the one that NISO provides for consensus development. As noted in Appendix F of the report, Summary Notes on Expert Panel Discussion on Data Policies, standards for descriptive and structural metadata and persistent identifiers for all people and entities in the data exchange process are critical components of an interoperable data environment. We could not agree more with this statement from the report of the meeting: “Funding agencies should work with stakeholders and research communities to support the establishment of standards that enable sharing and interoperability internationally.”

There is great potential for NSF to expand its leadership role in fostering well-managed use of data. This would include not only support of the repository community, but also the promulgation of community standards. In partnership with NISO and using the consensus development process, NSF could support the creation of new standards and best practices. More importantly, NSF could, through its funding role, encourage or even require researchers to use these broad community standards and best practices in the dissemination of their research. We note that there are more than a dozen references to standards in the Digital Research Data Sharing and Management report, so we are confident that this point is falling on receptive ears.

The engagement of all relevant stakeholders in the establishment of data sharing and management practices as described in Recommendation #1 is critical in today’s environment, at both the national and international levels. While the promotion of individual communities of practice is laudable, it presents problems when it comes to systems interoperability. A robust system of data exchange must by default be grounded on a core set of interoperable data. More often than not, computational systems will need to act with a minimum of human intervention to be truly successful. This approach does not require a single schema or metadata system for all data, which would of course be impossible and unworkable. However, a focus on and inclusion of core data elements and common base-level data standards is critical. For example, geo-location, bibliographic information, identifiers and discoverability data could all easily be standardized to foster interoperability. Domain-specific information can be layered over this base of common and consistent data in a way that maintains domain specificity without sacrificing interoperability.
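To make the layering idea concrete, here is a small, purely hypothetical sketch in Python: a core block of broadly standardized fields (identifier, bibliographic data, geo-location, discoverability keywords) with a domain-specific layer nested beside it. All field names and values here are invented for illustration and are not drawn from any actual standard.

```python
# A hypothetical layered data record: a common core that any system can
# read, plus a domain-specific layer that only specialists need to parse.
core = {
    "identifier": "doi:10.9999/example.123",    # invented DOI, for illustration
    "title": "Example sensor dataset",
    "creator": "Jane Researcher",
    "geo_location": {"lat": 35.88, "lon": -106.30},
    "keywords": ["seismology", "sensor data"],  # discoverability data
}
domain_specific = {
    "instrument": "broadband seismometer",
    "sample_rate_hz": 40,
    "units": "m/s",
}
record = {"core": core, "domain": domain_specific}

# An interoperable service can work from record["core"] alone,
# ignoring the domain layer it does not understand.
print(sorted(record["core"].keys()))
```

The point of the split is that a generic discovery service never needs to know what a "sample rate" is; it only needs the core block, while domain tools read both layers.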

One of the key problems that the NSB and the NSF should work to avoid is the proliferation of standards for the exchange of information. This is often the butt of standards jokes, but in reality it does create significant problems. It is commonplace for communities of interest to review the landscape of existing standards and determine that existing standards do not meet their exact needs. That community then proceeds to duplicate seventy to eighty percent of existing work to create a specification that is custom-tailored to their specific needs, but which is not necessarily compatible with existing standards. In this way, standards proliferate and complicate interoperability. The NSB is uniquely positioned to help avoid this unnecessary and complicating tendency. Through its funding role, the NSB should promote the application, use and, if necessary, extension of existing standards. It should aggressively work to avoid the creation of new standards, when relevant standards already exist.

The sharing of data on a massive scale is a relatively new activity and we should be cautious in declaring fixed standards at this stage. It is conceivable that standards may not yet exist to address some of the issues in data sharing, or that it may be too early in the lifecycle for standards to be promulgated in the community. In that case, lower-level consensus forms, such as consensus-developed best practices or white papers, could advance the state of the art without inhibiting the development of new services, activities or trends. The NSB should promote these forms of activity as well, when standards development is not yet an appropriate path.

We hope that this response is well received by the NSB in the formulation of its data policies. There is terrific potential in creating an interoperable data environment, but that system will need to be based on standards and rely on best practices within the community to be fully functional. The scientific community, in partnership with the library, publisher and systems-provider communities, can collectively help to create this important infrastructure. Its potential can only be helped by consensus agreement on base-level technologies. If development continues along a domain-centered path, the goal of interoperability, and delivering on its potential, will only be delayed and quite possibly harmed.

The full text PDF of the entire response is available here.  Comments from the public related to this document are welcome.

Mandatory Copyright Deposit for Electronic-only Materials

Thursday, April 1st, 2010

In late February, the Copyright Office at the Library of Congress published a new rule that expands the mandatory deposit requirement to include items published only in digital format. The interim regulation, Mandatory Deposit of Published Electronic Works Available Only Online (37 CFR Part 202 [Docket No. RM 2009–3]), was released in the Federal Register. The Library of Congress will focus its first attention on e-only deposit of journals, since this is the area where electronic-only publishing is most advanced. Very likely, this will move into the space of digital books as well, but that will likely take some time to coalesce.

I wrote a column about this in Against the Grain last September outlining some of the issues that this change will raise. A free copy of that article is available here. The Library of Congress is aware of these challenges, and will become painfully more so when this stream of online content begins to flow its way. To support an understanding of the new regulations, LC is hosting a forum in Washington in May to discuss publishers’ technology for providing these data on a regular basis. Below is the description of the meeting that LC provided.

Electronic Deposit Publishers Forum
May 10-11, 2010
Library of Congress — Washington, DC

The mandatory deposit provision of the US Copyright Law requires that published works be deposited with the US Copyright Office for use by the Library of Congress in its collection. Previously, copyright deposits were required only for works published in a physical form, but recently revised regulations now include the deposit of electronic works published only online. The purpose of this workshop is to establish a submission process for these works and to explore technical and procedural options that will work for the publishing community and the Library of Congress.

Discussion topics will include:

  • Revised mandatory deposit regulations
  • Metadata elements and file formats to be submitted

  • Proposed transfer mechanisms

Space for this meeting is very limited, but if you’re interested in participating, you should contact the Copyright Office.

    Did the iPad start a publishing revolution yesterday or not? Wait and see

    Thursday, January 28th, 2010

    For Steve Jobs, yesterday might have been a game-changing day for Apple and, by extension, the entire media world.  I’m not sure the world shook in the way that he had hoped, but it’s possible that in the future we will look back on yesterday as a bigger day than it seems today.  Such is often the nature of revolutions.

    Since very few people have had an iPad in their hands yet, talk of its pros and cons seems premature.  As with previous devices, it will prove both more and less than the hype of its debut.  As people begin to use it, and as developers push the boundaries of its capabilities, it will mature and improve.  It was wholly unrealistic to presume that Apple (or any other company launching a new product) would make the technological or political leaps necessary to create the “supreme device” that will replace all existing technology.

    A lot of people have made points about the iPad missing this or that technology.  Apple will almost certainly release an iPad 2.0 sometime in early 2011, dropping its price points and adding functionality, both as the underlying display technology becomes cheaper (interestingly, it is not an OLED display, as has been falsely reported) and, in some small ways, in response to customer demand; think of copy & paste on the iPhone.  As for some software gaps, such as the lack of Adobe Flash support, while some have attributed these to the iPhone OS, I think they are driven by a desire to lock people into apps and to inhibit free (or possibly paid) browser-based web services.  It is in Apple’s interest to lock people into proprietary software and apps written specifically for their device.

    From a standards perspective, the iPad could be either a good or a bad thing.  Again, it is too soon to tell, but very initial reactions are worrying.  That the iPad will support .epub as a file format is good on its face.  However, it is very likely that the iPad will contain Apple-specific DRM, since there isn’t at the moment an industry standard.  Getting content into (and out of, for those who want to move away from the iPad) that DRM will be the crucial question.  As far as I am aware, Apple has been publicly silent on that question.  I expect that some of the publishers who agreed to content deals discussed this in detail, but those conversations were likely limited to a very small group of executives all bound by harsh NDAs.  (I note that McGraw Hill was allegedly dropped from the announcement because of comments made by its CEO Tuesday on MSNBC.)

    Also on the standards front, there was an excellent interview last night on the public radio show Marketplace, during which author Josh Bernoff, of Forrester Research, made the point that the internet is splintering into a variety of device-specific applications.  The move toward applications in the past two years might reasonably be cause for concern, and it certainly adds to the cost for content producers of creating multiple versions of content for multiple platforms.  I can’t say that I completely agree with his assessment, however.  There are open platforms available in the marketplace, and competition is forcing developers to open up their systems, notably the Google Android phone OS and the Amazon Kindle Development Kit introduced last week.

    What is most interesting about this new product is its potential.  No one could have predicted three years ago the breadth and depth of the applications that have been developed for the iPhone.  Unleashing that creativity on the space of ebooks will very likely prove to be a boon for our community.  Specifically, this could provide publishers with an opportunity to expand the functionality of the ebook.

    Often, new technology is at first used to replicate the functionality of the old technology.  In the case of books, I’m referring to the technology of paper.  We are only now beginning to see people take advantage of digital technology’s possibilities.  Perhaps the launch of Amazon’s new development kit and the technology platform of the iPad will spur innovative thinking about how to use ebooks and enhance digital content’s ability to be an interactive medium.  The one element of yesterday’s presentation that really caught my eye in this regard is the new user interface for reading the New York Times; it seemed the most innovative application of the iPad.  Hopefully in the coming months and years we will see a lot more of that experimentation, user interface design and multimedia integration.

    If that takes place, then yesterday might have been a big day in the development of ebooks and information distribution.  If not, the jokes about the name will be all that we recall about this new reader.

    The Memento Project – adding history to the web

    Wednesday, November 18th, 2009

    Yesterday, I attended the CENDI/FLICC/NFAIS Forum on the Semantic Web: Fact or Myth hosted by the National Archives.  It was a great meeting with an overview of ongoing work, tools and new initiatives.  Hopefully, the slides will be available soon, as there was frequently more information than could be expressed in 20-minute presentations and many listed what are likely useful references for more information.  Once they are available, we’ll link through to them.

    During the meeting, I had the opportunity to run into Herbert Van de Sompel, who is at the Los Alamos National Laboratory.  Herbert has had a tremendous impact on the discovery and delivery of electronic information. He played a critical role in creating the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), the Open Archives Initiative Object Reuse & Exchange specifications (OAI-ORE), the OpenURL Framework for Context-Sensitive Services, the SFX linking server, the bX scholarly recommender service, and info URI.

    Herbert described his newest project, which has just been released, called the Memento Project.  Memento proposes a “new idea related to Web Archiving, focusing on the integration of archived resources in regular Web navigation.”  From my brief chat with Herbert, I learned that the system uses a browser plug-in to view the content of a page as of a specified date.  It does this by using the underlying content management system’s change logs to recreate what appeared on a site at a given time.  The team has also developed some server-side Apache code that handles these requests for content management systems that have version control.  If the server is unable to recreate the requested page, the system can instead point to a version of the content from around that date in the Internet Archive (or other similar archive sites).  Herbert and his team have tested this using a few wiki sites.  You can also demo the service from the LANL servers.
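As a rough sketch of the datetime-negotiation idea behind Memento (this is not the project’s actual code), a client asks a time-aware endpoint for the version of a page closest to a desired date by sending an Accept-Datetime HTTP header formatted as an RFC 1123 date. The endpoint is left out here; only the header construction is shown, and the header name reflects the Memento approach as I understand it.

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def accept_datetime(dt):
    """Render an aware datetime as the RFC 1123 string used in an
    Accept-Datetime request header."""
    return format_datetime(dt.astimezone(timezone.utc), usegmt=True)

# Headers a client might send to a (hypothetical) time-aware endpoint,
# asking for the page roughly as it looked on March 1, 2005:
headers = {"Accept-Datetime": accept_datetime(
    datetime(2005, 3, 1, tzinfo=timezone.utc))}
print(headers["Accept-Datetime"])  # Tue, 01 Mar 2005 00:00:00 GMT
```

The server would answer by redirecting to, or serving, whichever archived snapshot it holds nearest that date.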

    Here is a link to a presentation on this project that Herbert and Michael Nelson (his co-collaborator, at Old Dominion University) gave at the Library of Congress.  A detailed paper that describes the Memento solution is available on the arXiv site, and there is also an article on Memento in the New Scientist.  Finally, tomorrow (November 19, 2009 at 8:00 AM EST), there will be a presentation on this at OCLC as part of their Distinguished Seminar Series, which will be available online for free (RSVP required).

    This is a very interesting project that addresses one of the key problems with archiving web page content, which frequently changes.  I am looking forward to the team’s future work and hoping that the project gets some broader adoption.

    Upcoming Forum on Library Resource Management Systems

    Thursday, August 27th, 2009

    In Boston on October 8-9, NISO will host a 2-day educational forum, Library Resource Management Systems: New Challenges, New Opportunities. We are pleased to bring together a terrific program of expert speakers to discuss some of the key issues and emerging trends in library resource management systems as well as to take a look at the standards used and needed in these systems.

     

    The back-end systems upon which libraries rely have become the center of a great deal of study, reconsideration and development activity over the past few years.  The integration of search functionality, social discovery tools, access control and even delivery mechanisms into traditional cataloging systems is necessitating a conversation about how these component parts will work together in a seamless fashion.  There are a variety of approaches, from a fully integrated system to a best-of-breed patchwork of systems, from locally managed to software-as-a-service approaches.  No single approach is right for all institutions, and there is no panacea for all the challenges institutions face in providing services to their constituents.  However, there are many options an organization can choose from, and careful planning to find the right one can save the institution tremendous amounts of time and effort.  This program will provide some of the background on the key issues that management will need to assess to make the right decision.

     

    Registration is now open and we hope that you can join us. 

    Problems with a “Kindle in Every Backpack”

    Wednesday, July 15th, 2009

    Interestingly, on the heels of last week’s ALA conference in Chicago, the Democratic Leadership Council (DLC) has released a proposal: “A Kindle in Every Backpack: A Proposal for eTextbooks in American Schools”.  According to its website, this influential DC-based think tank promotes center-left policies related to education, trade, pro-business tax and economic reform, and health care.  The report was written by Tom Freedman, a policy analyst and lobbyist who worked as a policy adviser to the President in the Clinton administration and as Press Secretary and Policy Director for Senator Schumer (D-NY).  Unfortunately, this is the kind of DC policy report that approaches its issues from a 30,000-foot level, written by an expert who, by the looks of his client list, has no experience with the media or publishing industries, and who therefore comes to the wrong conclusion.  The result is a report that is light on understanding of the business impacts, the pitfalls of the technology at this stage, and the significant problems that would be caused by leaping at once behind a still-maturing technology.

    The report does make several good points about the value of e-texts.  The functionality, the reduction in manufacturing costs, the updatability of digital versions, and the environmental savings of digital distribution all make the move to ebooks very compelling.  I agree that this is the general direction in which textbooks are headed.  However, before we jump headfirst into handing out ebook readers (especially the Kindle) to every child, there is much more to this topic than Freedman’s report details.
    While a good idea from some perspectives, Freedman misses the trees for the forest.  First of all, while I am incredibly fond of my Kindle, it is not perfectly suited to textbooks.  Here are several concerns I have at this stage, in no particular order.  Many of these topics were themes we covered in the NISO/BISG Forum last Friday on the Changing Standards Landscape for Ebooks.  We’ll be posting video clips of the presentations later this summer.  NISO is also hosting a webinar on ebooks next month.
    The business models for ebook sales are very early in their development.  Many critical questions, such as license terms, digital rights management, file formats and identification, still need to be explored, tested and tweaked.  It took more than a decade for the business model for electronic journals to begin to mature, and ebooks are only at the outset of these changes.  Even now, the market for e-journals remains a tenuous one, still tied in many ways to print.  It will be at least a decade before these same models mature for ebooks, which is a larger and in many ways more complex market.

    While a print book might be inefficient from the perspective of distribution, storage and up-to-date content, print has the distinct advantage that it lasts a long time.  Freedman’s report notes that many school texts are outdated.  A 2008 report from the New York Library Association that Freedman cites highlights that “the average age of books in school libraries ranges from 21 to 25 years old across the six regions of the state surveyed, with the average book year being 1986.”  That NYLA report also found that “the average price of an elementary school book is $20.82 and $23.38 for secondary school books.”  So if one text were purchased once and used for 20+ years, the cost per year, per student is less than $1.00.  I seriously doubt that publishers would be willing to license texts for so little on an annual subscription basis.  That would reduce the textbook market from $6 billion per year to less than $1 billion (presuming the 56 million K-12 students were each given an e-book reader with 6 books at $2 per book per year, which is more than twice the current cost per year, per book detailed in the NYLA report).  The problem is that the textbook publishers can’t survive on this reduced revenue stream, and they know it.
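The arithmetic behind those figures can be checked directly. The inputs below are the post’s own estimates (and the $2-per-book annual license fee is a hypothetical, not a market figure):

```python
# Cost per student per year if one print text serves for ~21 years
elementary_price = 20.82   # NYLA average elementary text price
years_in_service = 21      # low end of the reported 21-25 year range
print(round(elementary_price / years_in_service, 2))  # 0.99 -- under $1/year

# Implied market size under a hypothetical $2/book/year e-license
students = 56_000_000      # K-12 students
books_per_student = 6
annual_fee = 2.00          # hypothetical, roughly 2x the print cost/year above
market = students * books_per_student * annual_fee
print(f"${market:,.0f}")   # $672,000,000 -- well under $1B, vs ~$6B today
```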
    I don’t want to quibble, but the data source that Freedman uses for his cost estimates is simply not accurate about manufacturing as a percentage of the overall cost of goods sold, so his estimate of the potential cost savings is well off the mark.  Freedman claims that the savings from moving to digital distribution would be in the range of 45%, which is simply wrong.  Anyone who has dealt with the transition from print to electronic distribution of published information knows this.  To paraphrase NBC Universal’s Jeff Zucker, the publishing world cannot sustain itself moving from “analog dollars to digital pennies.”  The vast majority of the cost of a product is not associated with the creation and distribution of the physical item.  As in most product development, most people are shocked to learn that it costs only pennies to “manufacture” the $3.00 soda they purchase at a restaurant, or only $3 to manufacture a $40 HDMI cable.
    Physical production of books (the actual paper, printing and binding) represents only about 10-15% of the retail price.  Had Freedman (or Tim Conneally, who wrote the cited article) actually understood the business, he would have seen the flaw in the following statement: “32.7% of a textbook’s cost comes from paper, printing, and editorial costs.”  The vast majority of that 32.7% is not manufacturing cost; it is editorial and first-copy costs, which do not go away in a digital environment.  Unless people are willing to read straight ASCII text, a book still needs to be edited, formatted, laid out, tagged for production, have images included, and so on.  This is especially true of textbooks.  If the industry is functioning on $6 billion in annual revenues, reducing marginal costs by even 20% would not allow it to survive on less than half of present revenues.  This is a problem that many publishers are finding with current Kindle sales, and it has been the subject of a number of posts, conjecture and controversy.
    A much better analysis of the actual costs of manufacturing a book than the one Freedman uses in his paper is available on the Kindle Review Blog.  Even though that analysis is focused on trade publishing, the cost splits are roughly equivalent in other publishing specialties.
    Device costs are another issue inhibiting the wide adoption of ebook reader technology.  Once reading devices cost less than $100, they will likely begin to become as ubiquitous as the iPod is today.  However, they will have to drop well below $100 before most school districts will begin handing them to students, and I doubt that will take place in the next 3-5 years.  The reasons are myriad.
    At the moment, the display technology of e-book readers is still developing and improving.  This is one of the main reasons that manufacturing of the Kindle was slow to meet demand even into its first year of sales.  Although the increased demand from a program such as the one proposed would significantly boost manufacturing capacity, it is still an open question whether e-ink is the best possible technology, although it is very promising and one that I’m fond of.  Would it make sense for the government, through a “Kindle for every child” program, to decide that Kindle and its technology are the most appropriate, simply because it was first to market with a successful product?  (I’m sure Sony wouldn’t agree with this point.)  Even recently, with several hundred thousand devices produced, the manufacturing costs for the Kindle are reported to be about $185 per device.  It would take a 75% reduction in direct costs to make a sub-$100 price feasible.  That won’t happen quickly, even if the government threw millions of dollars Amazon’s way.
    Other issues I have with this idea are more specific to the Kindle itself and its functionality.  The note-taking feature is clumsy and falls short when compared to writing in the margins or highlighting on paper, although it is better than on other devices (particularly the first-generation Kindle).  Devices with touch-screen technology appear to handle this better, although those too are several years from mass-market production, and even then they would likely be more costly than the current e-ink technology.
    At the moment, the Kindle is also only a black-and-white reading device.  Some color display technologies are being developed, but their power drain is too significant for anything but small, iPhone-sized devices to support long-term use (such as the week or more between Kindle charges).  A black-and-white display would pose significant problems for textbooks, where color pictures are critical, especially in the sciences.  This goes back to the underlying costs of the systems noted above.
    Also, the Kindle is a closed, proprietary system completely controlled by Amazon.  While fine for the moment in its current trade-book market space, it is unlikely that a whole new class of publishers (K-12 textbook publishers) would hand their entire business model over to a single company, one that many publishers already consider too powerful.
    The rendering of graphics, charts and forms is clumsy on the Kindle, in part because of the Kindle format, but more because the file format standards are still improving in this area.  The reflowable file format, EPUB, is still maturing, and its handling of complex formatting, such as would be required for textbooks, is still being improved.  EPUB, an open standard for publishing ebooks, isn’t even natively supported by the Kindle, which relies on its own proprietary file format.
    While a grand vision, I’m sorry to say that even the pilot described in Freedman’s paper isn’t a good idea at this stage.  The ebook market needs time to work through the technical and business model questions.  The DLC report presumes that dumping millions of dollars into a program of this type will push the industry forward more rapidly.  The energy in the ebook market is already palpable, and the market would progress regardless of any government intervention.  The unintended consequences of this proposal would radically and irrevocably alter the publishing community in a way that would likely lead to diminished service, even if it were successful.  Eventually, educational textbook publishers will move in the direction of ebooks, as will most other publishers.  Digital versions will not replace print entirely, but they will supplant it in many cases, and textbooks are likely one segment where they will.  However, it will take a lot more time than most people think.

    Kodak takes the Kodachrome away

    Thursday, June 25th, 2009

    I grew up in Rochester, NY, which is home to Kodak, the iconic film company founded by George Eastman.  Much as Detroit is a car town renowned for Ford, GM and Chrysler, Rochester was known for its film industry.  Nearly everyone I knew had some tie to Kodak, and most of my friends’ fathers were engineers of some sort at the many plants around town.  At its peak in 1988, Kodak employed more than 145,000 people; it is now down to fewer than 30,000.  The last time I was home, I was shocked by parking lots on sites that were massive manufacturing plants when I was growing up.  Many of the buildings of one industrial park were actually imploded in 2007.  It was a stark indication of just how much Kodak had changed and how far it had fallen.

    Kodak announced earlier this week that it would be ceasing production of Kodachrome film.  Kodachrome had long been recognized for its true-tone color and preservation quality.  It was great slide film and among the first mass-market color films available.  It was even memorialized in a famous Paul Simon song.  Unfortunately, like all great film products, its days have been numbered for over a decade.  Now, if you’re one of the few who still shoot with Kodachrome, there is only one facility, based in Kansas, that processes the film.  Unfortunately, despite its quality and history, that won’t save it from the dustbin of chemistry and manufacturing.

    Kodak was a company built on 19th-century technology: chemicals on plastic that captured images.  It was old-world manufacturing on a massive scale.  It was also a company that clung to its cash-cow core business well past the time when that was tenable, focused on its rivalry with other, mainly Japanese, filmmakers.  It did not see, or more likely chose not to focus on, the seismic shift in its core business to digital media.  This is despite the fact that it was a Kodak engineer, Steven Sasson, who created the first digital camera at Kodak in 1975.  Although Kodak released a digital SLR camera in 1991 (in partnership with Nikon and as a Nikon-branded product), at $13,000 it was hardly for the average consumer.  It would take more than a quarter century after Sasson’s original prototype before Kodak released its first mass-market digital camera in 2001.  Just as Kodak peaked in the late 80s and early 90s and began dueling with Fuji for control of the film market, the rest of the consumer electronics market had begun to move on.

    Today, Kodak receives some 70% of its revenue from digital activities.  It holds the top share of the digital camera market, at nearly a quarter.  Had it moved more quickly, in all likelihood it could have held a much larger share.  After all, "Kodak moments" used to be a common phrase for a moment that should be captured on film.  While the company spoke of capturing the moment, it was really focused on what it thought its business to be: chemicals.  The real problem was that people didn't care about chemicals; they cared about the moment.  How best to capture the moment, and how to do so quickly and cheaply, was what consumers cared about.  Very quickly, as processors sped up, storage costs dropped, image sensors improved, and all of this technology became a great deal cheaper, the old model of chemicals on plastic was displaced.

    The lessons of Kodak and its slow reaction to the changes in its industry should be a warning sign to anyone whose business is being impacted by the switch to digital media.  Focusing only on the preservation of your cash-cow business could be detrimental to your long-term success and survival.  The academic journal publishing industry moved quickly to embrace online distribution.  However, in many respects there are still ties to print, and many publishers still rely on the 19th- and 20th-century revenue streams of the print-based economy.  The e-book world is even more closely tied to print models and distribution.  The Kindle, for example, is in many ways a technological derivative of print: much of the experience of reading an e-book on the Kindle is modeled on the experience of reading print.  Even more than the reading experience itself, the business models and approaches to distributing content are completely tied to print sales streams.  There are so many new approaches that have not even been considered or tried.  If you are not paying attention, don't be surprised when the entire market shifts under your business's feet.

    Atlantic Records posts more digital sales than CDs

    Tuesday, December 2nd, 2008

    Late last week, one of the largest music labels announced that its sales of digital files exceeded the revenue generated by CDs.  As reported in the New York Times, Atlantic Records saw 51% of its sales generated by digital distribution.  This was significantly more than Atlantic's parent company, Warner Music Group, which reported only 27% of its total sales from digital distribution.

    It should come as no surprise that digital music is quickly replacing physical media.  One need only think of the weight and mess of thousands of CDs versus a nearly unlimited amount of music on an iPod or streamed on demand.  The question is: when will other media follow?  Some magazines are slowly getting rid of print in favor of online.  It will be some time before display technology exceeds the user experience of print on paper.  In some ways, scholarly journal publishing is already headed down this path; the rest of publishing is slower to adapt.  However, several tipping points will likely be reached fairly soon:

    * – Display technology needs to improve so that the user experience is comparable to print.
    * – Standardization around some form of reader, or at least a common file format that works across different devices.
    * – A Napster-like social movement among tech-savvy early adopters (not necessarily around free distribution) that pushes books and similar media to digital.
    * – A breadth and depth of available content that makes the purchase of a reader worthwhile.
    * – Mass production of readers so that they no longer cost $300 or more.
    * – Improved preservation strategies.


    Many of these issues are consensus-based and are awaiting either new standards or the adoption of existing standards.

    The future of paper

    Friday, October 31st, 2008

    Looking forward (and I'm not much of a futurist), I expect that one of the key technological developments of the next decade will be the improvement of electronic paper display technology.  Low-cost technology providing digital imaging of text and images on paper-like readers will bring to life the potential of digital content.  True integration of multimedia will occur, and what we now know as the book will be altered radically.  One need only think of the newspapers in the Harry Potter series to imagine where we are probably headed.  Many people note that the first applications of a new technology often look and feel like the old technology.  We are currently in that stage with electronic books and electronic media, although this is slowly changing.

    Earlier this month at the International Meeting on Information Display (IMID), Samsung demonstrated the world's first carbon nanotube-based color active matrix electrophoretic display (EPD) e-paper.  From Samsung's press release:

    “Electrophoretic displays offer inherent advantages over traditional flat panel displays due to their low power consumption and bright light readability, making them well suited for handheld and mobile applications. Since they can be produced on thin, flexible substrates, EPD’s also are ideally suited for use in e-paper applications.

    Unlike conventional flat panel displays, electrophoretic displays rely on reflected light, and can retain text or images without constant refreshing, thereby dramatically reducing power consumption.”


    Of course, Samsung isn't the only player in this market, and many others are developing e-paper technology.  For those not regularly involved in the display technology space, there is a great deal of activity taking place.  Since a good portion of publisher and library investment is in paper, the future of display technology is an area we should be watching closely.

    In September, Esquire magazine released its 75th anniversary issue with an electronic paper cover.  Here's a video of how it looked.  The technology was provided by E Ink, the same company that produces the display screen for the Amazon Kindle.  The cover price for the digital-cover issue was only $2.00 more on the newsstand, which I expect hardly covered the costs; Ford likely underwrote much of them with an inside-front-cover ad using the same technology.  However, USA Today reported that the issue contained more ads than any other issue in the past 11 years.  If one were looking for an exciting new trend in an otherwise depressed world of magazine and newspaper publishing, this might be a start.

    What will be fascinating is watching how this new technology develops, I'm sure in fits and starts, over the next decade.  It will have a profound impact on the production and sharing of information, as well as on the existing business models for selling it.  The clunkiness of the current generation of e-book readers led a third of those responding to a survey Library Journal conducted at the Frankfurt Book Fair to say that "digital content would never surpass traditional books sales."  This might have something to do with the related finding that "almost 60 percent of respondents said they do not currently use ebooks and e-readers at all."

    While the transition to ebooks might not take place in the next decade, I think e-paper display technology will advance quickly, and the transition will come sooner than we all think.