
Archive for the ‘Business models’ Category

Did the iPad start a publishing revolution yesterday or not? Wait and see

Thursday, January 28th, 2010

For Apple and Steve Jobs, yesterday might have been a game-changing day, not just for Apple but, by extension, for the entire media world. I'm not sure the world shook in the way that Jobs had hoped, but it's possible that in the future we will look back on yesterday as a bigger day than it appears today. Such is often the nature of revolutions.

Since very few people have had an iPad in their hands yet, talk of its pros and cons seems to me premature. As with previous devices, it will prove both more and less than the hype of its debut. As people begin to use it, and as developers push the boundaries of its capabilities, it will mature and improve. It was wholly unrealistic to presume that Apple (or any other company launching a new product) would make the technological or political leaps necessary to create the "supreme device" that replaces all existing technology.

A lot of people have made points about the iPad missing this or that technology. Apple will almost certainly release an iPad 2.0 sometime in early 2011, dropping its price points and adding functionality, both as the underlying display technology becomes cheaper (interestingly, the screen is not OLED, as was falsely reported) and, in some small ways, in response to customer demand; think of copy and paste on the iPhone. As for software gaps such as the lack of Adobe Flash support, some have attributed these to the iPhone OS, but I think they are driven by a desire to lock people into apps and to inhibit free, or even paid, browser-based web services. It is in Apple's interest to lock people into proprietary software and apps written specifically for its device.

From a standards perspective, the iPad could be either a good or a bad thing. Again, it is too soon to tell, but the very initial reactions are worrying. That the iPad will support .epub as a file format is good on its face. However, it is very likely that the iPad will contain Apple-specific DRM, since there isn't at the moment an industry standard. Getting content into (and, for those who want to move away from the iPad, out of) that DRM will be the crucial question. As far as I am aware, Apple has been publicly silent on it. I expect that some of the publishers who agreed to content deals discussed this in detail, but those conversations were likely limited to a very small group of executives all bound by harsh NDAs. (I note that McGraw Hill was allegedly dropped from the announcement because of comments made by its CEO Tuesday on MSNBC.)

Also on the standards front, there was an excellent interview last night on the NPR news show Marketplace, during which author Josh Bernoff of Forrester Research made the point that the internet is splintering into a variety of device-specific applications. The move toward applications over the past two years might reasonably be cause for concern, and it definitely adds cost for content producers, who must create multiple versions of their content for multiple platforms. I can't say that I completely agree with his assessment, however: there are open platforms available in the marketplace, and competition is forcing developers to open up their systems, notably the Google Android phone OS as well as the Amazon Kindle Development Kit introduced last week.

What is most interesting about this new product is its potential.  No one could have predicted three years ago the breadth and depth of the applications that have been developed for the iPhone.  Unleashing that creativity on the space of ebooks will very likely prove to be a boon for our community.  Specifically, this could provide publishers with an opportunity to expand the functionality of the ebook.

Often, new technology is at first used to replicate the functionality of the old; in the case of books, that means the technology of paper. We are only now beginning to see people take advantage of the possibilities of digital technology. Perhaps the launch of Amazon's new development kit and the technology platform of the iPad will spur innovative thinking about how to use ebooks and how to make digital content a genuinely interactive medium. The one element of yesterday's presentation that really caught my eye in this regard was the new user interface for reading the New York Times, which seemed the most innovative application of the iPad. Hopefully, in the coming months and years we will see a lot more of that experimentation, user interface design, and multimedia integration.

If that takes place, then yesterday might have been a big day in the development of ebooks and information distribution. If not, the jokes about the name will be all that we recall about this new reader.

The free Ebook “bestsellers”

Wednesday, January 27th, 2010

There is an interesting trend in the mass market for e-books, one that is new at this scale: the free book.

Certainly free book distribution has been used as a marketing tactic for decades, if not centuries. However, since the release of the Kindle, this distribution mode seems to have really taken off.

An article in the New York Times this past weekend described the growing trend.  As the Times reports, more than half of the “best-selling” e-books on the Kindle, Amazon.com’s e-reader, are available at no charge.

On Saturday afternoon, I double-checked this against Amazon Kindle's top ebook "Bestsellers" list:

Top 5: 4 free; one paid title, at $0.25, about the Kindle

Top 10: 8 of 10 free; one more paid, at $8.55 (but on the $0.95-off hardcover list)

Top 15: 11 of 15 free; two more paid, at $4.39 and $7.50

Top 20: 14 of 20 free; one more at $5.50 and the first at $9.99

Top 25: 17 of 25 free; two more at $9.99, including Dan Brown's book

Ranks 26-50: ten more paid titles, so only 18 of the top 50 (36%) are paid

Ranks 51-75: 11 more for-fee titles

Ranks 76-100: 11 more for-fee titles

In total, 60 of the top 100 "selling" titles for the Kindle are free or public-domain books. Amazon updates this list every hour, so a review of your own would probably not come up with the same results. However, it seems that at least half, and as many as two-thirds, of the list are not "sellers" at all, but only downloads.
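
As a quick check of the arithmetic, here is a minimal Python sketch that reproduces the tallies above (the cumulative free counts at each cutoff are taken from my Saturday-afternoon count):

```python
# Cumulative number of free titles at each cutoff of the Kindle "Bestsellers" list.
free_at_cutoff = {5: 4, 10: 8, 15: 11, 20: 14, 25: 17, 50: 32, 75: 46, 100: 60}

for cutoff, free in free_at_cutoff.items():
    paid = cutoff - free
    print(f"Top {cutoff:3}: {free} free / {paid} paid ({paid / cutoff:.0%} paid)")

# The top 50 works out to 36% paid, and 60 of the top 100 are free downloads.
```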

The Times article also noted an author whose giveaway theoretically "lost" 28,000 sales. An old friend of mine knows one of the authors in that story, and she told me that the referenced author actually made about $10,000 in royalties on her backlist during the free period, which is incredible.

Chris Anderson, editor of Wired magazine and author of The Long Tail, described how free works at the dawn of the internet age in his book "Free". (I should note that I took advantage of Anderson's business model and read "Free" at no cost on my Kindle.) What publishers are doing is a perfect example of Anderson's thesis: that producers can use the *nearly* free medium of digital distribution to get consumers interested in other, non-free products. You can download the first book of a series for free, but if you want the remaining volumes, you will have to pay. Note that neither Twilight nor Harry Potter employed this model. But ten years from now, will there be a kids' book series, with movie adaptations eagerly awaited, that began with its first book given away free? I don't think free distribution will change how people discover and share books, but it certainly will accelerate the process.

Open Source isn’t for everyone, and that’s OK. Proprietary Systems aren’t for everyone, and that’s OK too.

Monday, November 2nd, 2009

Last week, there was a small dust-up in the community over a "leaked" document from one of the systems suppliers about issues regarding Open Source (OS) software. The merits of the document itself aren't nearly as interesting as the issues surrounding it and the reactions from the community. The paper outlined, from the company's perspective, the many issues facing organizations that choose an open source solution, as well as the benefits of proprietary software. Almost immediately after the paper was released on Wikileaks, the OS community pounced on it as "spreading FUD" (i.e., Fear, Uncertainty, and Doubt) about OS solutions, the label OS supporters apply to corporate communications that promote proprietary solutions by sowing doubts about the alternatives.

To my mind, the first interesting issue is the presumption that any one solution is the "right" one; the sales techniques of both communities understandably presume that their approach is best for everyone. This is almost never the case in a marketplace as large, broad, and diverse as the US library market. Each approach has its own strengths AND weaknesses, and the community should work to understand, on both sides, what those strengths and weaknesses are. A clearer understanding and discussion of those qualities would do much to improve both options for consumers. There are potential issues with OS software, such as support, bug fixing, long-term sustainability, and staffing costs, that implementers of OS options need to consider. Similarly, proprietary options can have problems with data lock-in, interoperability challenges with other systems, and customization limitations. However, each has its strengths too. With OS, these include openness, the opportunity to solve problems collaboratively with other users, and nearly infinite customizability. Proprietary solutions provide a greater level of support and accountability, a mature support and development environment, and generally known, fixed costs.

During the NISO Library Resource Management Systems educational forum in Boston last month, part of the program was devoted to a discussion of whether an organization should build or buy an LRMS. There were certainly positives and downsides described for each approach. The point that was driven home for me is that each organization's situation is different, and each team brings distinct skills that could push an organization in one direction or another. Each organization needs to weigh the known and potential costs against its needs and resources. A small public library might not have the technical skills to tweak OS systems in the ways that are often needed. A mid-sized institution might have staff technically expert enough to engage in an OS project. A large library might be able to reallocate resources, but want the support commitments that come with a proprietary solution. One positive thing about the marketplace for library systems is the variety of options and choices available to management.

Last year at the Charleston Conference, during a discussion of Open Source, I commented that, yes, everyone could build their own car, but why would they? I personally don't have the skills or time to build my own; I rely on large car manufacturers to do it for me. When my car breaks, I bring it to a specialized mechanic who knows how to fix it. On the other hand, I have friends who do have the skills to build and repair cars. They save a lot of money doing their own maintenance and have even built sports cars and made a decent amount of money doing so. That doesn't make one approach right or wrong, better or worse. Unfortunately, people frequently let these value judgments color the debate about costs and benefits. As with everything in which people have a vested interest in a project's success, there are strong passions in the OS solutions debate.

What makes these systems better for everyone is that there are common data structures and a common language for interacting. Standards such as MARC, Z39.50, and OpenURL, among others, make the storage, discovery, and delivery of library content more functional and more interoperable. As with all standards, they may not be perfect, but they have served the community well and provide an example of how we, as a community, can move forward in a collaborative way.

For all of the complaints hurled at the proprietary systems vendors (rightly or wrongly), they do a tremendous amount to support the development of the voluntary consensus standards that all systems use. Interoperability among library systems couldn't take place without them. Unfortunately, the same can't be said for the OS community. As Carl Grant, President of Ex Libris, asked during the vendor roundtable in Boston, "How many of the OS support vendors and suppliers are members of and participants in NISO?" Unfortunately, the answer to that question is, as yet, "None." Given how critical open standards are to the smooth functioning of these systems, it is surprising that they haven't engaged in standards development. We certainly would welcome their engagement and support.

The other issue that is raised about the release of this document is its provenance.  I’ll discuss that in my next post.

Upcoming Forum on Library Resource Management Systems

Thursday, August 27th, 2009

In Boston on October 8-9, NISO will host a 2-day educational forum, Library Resource Management Systems: New Challenges, New Opportunities. We are pleased to bring together a terrific program of expert speakers to discuss some of the key issues and emerging trends in library resource management systems as well as to take a look at the standards used and needed in these systems.


The back-end systems upon which libraries rely have become the center of a great deal of study, reconsideration, and development activity over the past few years. The integration of search functionality, social discovery tools, access control, and even delivery mechanisms with traditional cataloging systems is necessitating a conversation about how these component parts will work together in a seamless fashion. There are a variety of approaches, from a fully integrated system to a best-of-breed patchwork of systems, and from locally managed to software-as-a-service models. No single approach is right for all institutions, and there is no panacea for all the challenges institutions face in providing services to their constituents. However, there are many options an organization can choose from, and careful planning to find the right one can save the institution tremendous amounts of time and effort. This program will provide background on the key issues that management will need to assess to make the right decision.


Registration is now open and we hope that you can join us. 

Problems with a “Kindle in Every Backpack”

Wednesday, July 15th, 2009

Interestingly, on the heels of last week's ALA conference in Chicago, the Democratic Leadership Council (DLC) has released a proposal: "A Kindle in Every Backpack: A Proposal for eTextbooks in American Schools". This influential DC-based think tank, according to its website, promotes center-left policies related to education, trade, pro-business tax and economic reform, and health care. The report was written by Tom Freedman, a policy analyst and lobbyist who worked as a policy adviser to the President in the Clinton administration and as Press Secretary and Policy Director for Senator Schumer (D-NY). Unfortunately, this is the kind of DC policy report that approaches its subject from the 30,000-foot level, written by an expert who, by the looks of his client list, has no experience with the media or publishing industries, and who therefore comes to the wrong conclusion. That perspective yields a report that is light on understanding of the business impacts, the pitfalls of the technology at this stage, and the significant problems that would be caused by leaping all at once onto a still-maturing technology.

The report does make several good points about the value of e-texts. The added functionality, the reduction in manufacturing costs, the updatability of digital versions, and the environmental savings of digital distribution all make the move to ebooks very compelling, and I do agree that this is the general direction in which textbooks are headed. However, before we jump headfirst into handing out ebook readers (especially the Kindle) to every child, there is much more to this topic than Freedman's report details.

While a good idea from some perspectives, Freedman's proposal misses the trees for the forest. First of all, while I am incredibly fond of my Kindle, it is not well suited to textbooks. Here are several concerns I have at this stage, in no particular order. Many of these topics were themes we covered in the NISO/BISG Forum last Friday on the Changing Standards Landscape for Ebooks (we'll be posting video clips of the presentations later this summer), and NISO is also hosting a webinar on ebooks next month.

The business models for ebook sales are very early in their development. Many critical questions, such as license terms, digital rights management, file formats, and identification, still need to be explored, tested, and tweaked. It took more than a decade for the business model for electronic journals to begin to mature, and even now the e-journal market remains tenuous, still tied in many ways to print. Ebooks are only at the outset of these changes; it will be at least a decade before the same models mature for ebooks, which are a larger and in many ways more complex market.

While a print book might be inefficient from the perspective of distribution, storage, and currency of content, print has the distinct advantage that it lasts a long time. Freedman's report notes that many school texts are outdated. A 2008 report from the New York Library Association (NYLA) that Freedman cites highlights that "the average age of books in school libraries ranges from 21 to 25 years old across the six regions of the state surveyed, with the average book year being 1986." That NYLA report also found that "the average price of an elementary school book is $20.82 and $23.38 for secondary school books." So if one text were purchased once and used for 20+ years, the cost per year, per student is roughly $1.00. I seriously doubt that publishers would be willing to license texts for so little on an ongoing annual subscription basis. Doing so would reduce the textbook market from $6 billion per year to less than $1 billion (presuming the 56 million K-12 students were each given an e-book reader with 6 books at $2 per book, which is more than twice the current cost per year per book detailed in the NYLA report). The problem is that the textbook publishers can't survive on this reduced revenue stream, and they know it.
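
To make that back-of-the-envelope math concrete, here is a small Python sketch using the NYLA figures above; the bundle size and the $2 annual license fee are the illustrative assumptions from this paragraph, not market data:

```python
# Print model: one elementary text bought once and used for its full life.
price_per_text = 20.82           # average elementary book price (NYLA, 2008)
useful_life_years = 21           # NYLA found books averaging 21-25 years old
print(f"Print cost per student per year: ${price_per_text / useful_life_years:.2f}")

# Ebook model: every K-12 student licensed a small bundle each year.
students = 56_000_000            # approximate US K-12 enrollment
books_per_student = 6            # assumed bundle size
annual_license_per_book = 2.00   # assumed fee, over twice the print cost/year/book

market = students * books_per_student * annual_license_per_book
print(f"Implied annual market: ${market / 1e9:.2f} billion, versus ~$6 billion today")
```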

I don't want to quibble, but the data source Freedman uses for his cost estimates is simply not accurate about manufacturing as a percentage of the overall cost of goods sold, and his estimate of the potential cost savings is therefore way off the mark. Freedman claims that moving to digital distribution would save in the range of 45%; that is simply wrong, as anyone who has dealt with the transition from print to electronic distribution of published information knows. To paraphrase NBC Universal's Jeff Zucker, the publishing world cannot sustain itself moving from "analog dollars to digital pennies." The vast majority of a product's costs are not associated with creating and distributing the physical item. As with most product development, people are shocked to realize that it costs only a fraction of a penny to "manufacture" the $3.00 soda they purchase at a restaurant, or only $3 to manufacture a $40 HDMI cable.

Physical production of books (the actual paper, printing, and binding) represents only about 10-15% of the retail price. Had Freedman (or Tim Conneally, who wrote the article he cites) actually understood the business, he would have seen the flaw in the statement that "32.7% of a textbook's cost comes from paper, printing, and editorial costs." The vast majority of that 32.7% is not manufacturing cost but editorial and first-copy cost, which does not go away in a digital environment. Unless people are willing to read straight ASCII text, a book still needs to be edited, formatted, laid out, tagged for production, supplied with images, and so on. This is especially true of textbooks. These costs persist in a digital environment and will continue to need support. If the industry is functioning on $6 billion in annual revenues, reducing marginal costs by even 20% wouldn't allow it to survive on less than half of present revenues. This is a problem many publishers are encountering with current Kindle sales, and it has been the subject of a number of posts, conjecture, and controversy.
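
A rough sketch of that point, assuming a manufacturing share at the midpoint of the 10-15% range just cited:

```python
# If only paper, printing, and binding disappear in a digital edition,
# the achievable saving is the manufacturing share, not 45%.
retail_price = 23.38             # average secondary school book price (NYLA)
manufacturing_share = 0.125      # assumed midpoint of the 10-15% range above

savings = retail_price * manufacturing_share
print(f"Savings per book: ${savings:.2f} ({manufacturing_share:.0%} of retail)")
# About $2.92, or roughly 12% of retail; editorial and first-copy costs
# remain whether the output is paper or pixels.
```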

A much better analysis of the actual costs of manufacturing a book than the one Freedman uses is available on the Kindle Review Blog. Even though that analysis focuses on trade publishing, the cost splits are roughly equivalent in other publishing specialties.

Device cost is another issue inhibiting wide adoption of ebook reader technology. Once reading devices cost less than $100 apiece, they will likely begin to become as ubiquitous as the iPod is today. However, they will have to drop well below $100 before most school districts begin handing them to students, and I doubt that will take place in the next 3-5 years. The reasons are myriad.

At the moment, the display technology of e-book readers is still developing and improving. This is one of the main reasons that manufacturing of the Kindle was slow to meet demand even into its first year of sales. Although the increased demand from a program such as the one proposed would significantly boost manufacturing capacity, it is still an open question whether e-ink, promising as it is (and fond of it as I am), is the best possible technology. Would it make sense for the government, through a "Kindle for every child" program, to decide that the Kindle and its technology are the most appropriate simply because Amazon was first to market with a successful product? (I'm sure Sony wouldn't agree.) Even now, with several hundred thousand devices produced, the manufacturing cost of the Kindle is reported to be about $185 per device. It would take roughly a 75% reduction in direct costs to make a sub-$100 retail price feasible, and that won't happen quickly, even if the government threw millions of dollars Amazon's way.
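
The arithmetic behind that 75% figure, as a quick sketch (the allowance for distribution costs and margin is my assumption):

```python
reported_unit_cost = 185.00      # reported Kindle manufacturing cost per device
unit_cost_after_cut = reported_unit_cost * (1 - 0.75)
print(f"Unit cost after a 75% cut: ${unit_cost_after_cut:.2f}")
# About $46 per device, which leaves room for distribution costs and margin
# under a sub-$100 retail price; a shallower cut would not.
```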

Other issues I have with this idea are more specific to the Kindle itself and its functionality. The note-taking feature is clumsy and falls short when compared with writing in the margins or highlighting on paper, although it is better than on other devices (particularly the first-generation Kindle). Devices with touch-screen technology appear to handle this better, but they are several years from mass-market production and would likely be more costly than the current e-ink technology.

The Kindle is also, at the moment, a black-and-white reading device. Some color display technologies are being developed, but their power drain is too significant for anything but small, iPhone-sized devices to support long-term use (such as the week or more between Kindle charges). A black-and-white display poses significant problems for textbooks, in which color pictures are critical, especially in the sciences. This goes back to the underlying system costs noted above.

The Kindle is also a closed, proprietary system completely controlled by Amazon. While that is fine for the moment in its current trade-book market space, it is unlikely that a whole new class of publishers (K-12 textbook publishers) would hand their entire business model over to a single company that many publishers already consider too powerful.

The rendering of graphics, charts, and forms is clumsy on the Kindle, partly because of the Kindle format, but more because file format standards are still improving in this area. EPUB, the open, reflowable standard for publishing ebooks, is still maturing, and its handling of the complex formatting that textbooks would require is still being improved. EPUB isn't even natively supported by the Kindle, which relies on its own proprietary file format.
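
For readers unfamiliar with the format, an EPUB is essentially a ZIP container holding a declared mimetype, a pointer file, and an OPF package that lists the book's contents. The Python sketch below is schematic and illustrative only; a fully valid EPUB 2 file also needs pieces omitted here, such as the NCX table of contents:

```python
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Sample</dc:title>
    <dc:identifier id="bookid">sample-0001</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="ch1"/></spine>
</package>"""

with zipfile.ZipFile("sample.epub", "w", zipfile.ZIP_DEFLATED) as epub:
    # The mimetype entry must come first and be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER)  # points readers at the package
    epub.writestr("content.opf", OPF)                   # metadata, manifest, reading order
    epub.writestr("chapter1.xhtml", "<html><body><p>Reflowable text.</p></body></html>")
```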

While a grand vision, I'm sorry to say that even the pilot described in Freedman's paper isn't a good idea at this stage. The ebook market needs time to work through the technical and business model questions. The DLC report presumes that pouring millions of dollars into a program of this type will push the industry forward more rapidly; to my sense, the energy in the ebook market is already palpable, and the market would progress regardless of any government intervention. The unintended consequences of this proposal would radically and irrevocably alter the publishing community in ways that would likely diminish service, even if the program were to succeed. Eventually, educational textbook publishers will move in the direction of ebooks, as will most other publishers. Digital versions will not replace print entirely, but they will supplant it in many cases, and textbooks are likely one segment where they will. However, it will take a lot more time than most people think.

Charleston Conference: Every librarian need not be a programmer too

Saturday, November 8th, 2008

Over dinner on Friday with the Swets team and their customers, I had the chance to speak with Mia Brazil of Smith College, and we had a great conversation. She told me about her frustration getting systems to work and lamented the challenge of not understanding programming; she had tried learning SQL but didn't have much luck. Learning SQL is no small feat, and I can appreciate her frustration (years ago, I helped build and implement marketing and circulation databases for publishers). Realistically, though, librarians aren't programmers and shouldn't be expected to be.

The systems that publishers and systems providers sell to libraries shouldn't require a master's in database programming to implement or use. While the larger libraries will have the resources to implement and tweak these systems to meet their own needs, the smaller college or public libraries will not have the resources to keep programmers on staff. We shouldn't expect the staff at those libraries, on top of their other responsibilities, to have to code their own system hacks to get their work done.

In a way, this was what Andrew Pace discussed in his session Friday on moving library services to the grid. Essentially, Andrew argued that many libraries should consider moving to a software-as-a-service model for their ILS, catalog, and other IT needs. Much as Salesforce.com provides an online platform for customer relationship management, or Quicken does for accounting, libraries shouldn't have to locally load, support, and hack systems to manage their work. Some suppliers are headed in that direction. While there are pros and cons to this approach, it certainly is a viable solution for some organizations. I hope for Mia's sake it happens sooner rather than later.

CENDI Meeting on Metadata and the future of the iPod

Wednesday, October 29th, 2008

I was at the CENDI meeting today to speak about metadata and new developments related to it. There were several great presentations during the morning, some worthy of additional attention. My particular presentation is here.

The presenter prior to me was Dr. Carl Randall, Project Officer at the Defense Technical Information Center (DTIC). Carl's presentation was excellent. He spoke to the future of search and to a research report he wrote, Current Searching Methodology and Retrieval Issues: An Assessment. Carl ended his presentation with a note about an article he'd just read, Why the iPod is Doomed, written by Kevin Maney for portfolio.com.

The author posits that the iPod's technology is outdated and will soon be replaced by online "cloud" computing services. To paraphrase the article: the more entrenched a business is, the less likely it is to be able to change when new competitors arise to challenge its existing model.

Another great quote from the article: "In tech years, they [i.e., the iPod and iTunes] are older than Henry Kissinger."

I don't quibble with the main tenet of the article: that services will move to the web, and that we will come to think it quaint to purchase and download individual songs and then carry them around on local storage. The iPod hardware and the iTunes model of by-the-drink downloads both likely have limited futures. I do think that Apple, through the iPhone, is probably better placed than anyone else to transition iTunes to a subscription or cloud-based model. The article dismisses this as unlikely because Apple hasn't talked about it, which ignores the fact that Apple never talks about its plans until it is ready to announce a product or service.

As we move to an era of "cloud" computing, where both applications and content are hosted on the network rather than on individual devices, it is likely that people will want to purchase subscription access to all content on demand, as opposed to the limited content that they specifically purchase.

A subscription model also provides new opportunities to expose users to new content. From my perspective, despite having over 10,000 songs in my iTunes library, I've been reluctant to purchase new content that I wasn't already familiar with. I have used LastFM and other services (anyone remember FM radio?) to become acquainted with new music. Part of the reason is that the barrier for me is time rather than cost, but I expect that perceived cost is a barrier for many potential users. I say "perceived" because much research and practical experience show that consumers will pay more for ongoing subscription services than they will in one-time, up-front costs.

Moving content to the "cloud" provides many opportunities for content providers to regain a measure of control that had been lost. By hosting files rather than distributing them (streaming as distinct from downloading, for example), content providers have a greater ability to control distribution. Access becomes an issue of authentication and rights management, as opposed to DRM wrapping and other more onerous and intrusive approaches. Many of us have already become quite comfortable with renting movies through Blockbuster, Netflix, or cable OnDemand services.

There are downsides for customers in moving to the cloud. There are very different rights associated with "renting" a thing (content, cars, houses, etc.) versus owning it. How willing users will be to give up those rights for the convenience of the cloud is an open question; likely, the convenience will override any long-term interest in the rights. Frequently, people don't realize that they have no control over the cloud until they are burned, when the owners of a service take it away in some fashion. If you've stored all of your photos on Flickr and the company deletes your account for whatever reason, you'll wish that you had more control over the terms of service. From my perspective, I'd rather retain ownership and control of the content I've purchased in those areas where I'm invested in preserving access or rights of reuse. I don't know that the majority of users share my view, likely because they don't spend much time thinking about the potential impacts.

This is something that libraries, in particular, should be focused on, having outsourced the preservation of digital content to publishers and to organizations like Portico.

I do know, however, that these distribution models are a huge opportunity for suppliers of content in all forms. The risk of not acting, or reacting, is that a new upstart provider will displace the old. I grew up in Rochester, NY, where Kodak was king of photography worldwide for decades. Now Kodak is but a shadow of its former self, looking for a new business model in an era of digital imaging rather than the film and processing that were its specialty.