
Archive for the ‘NISO’ Category

What is a book today?

Saturday, November 5th, 2011

One sign of a technology’s profound implications is the kind of questions it forces one to ask. Last week at the Internet Archive’s Books in Browsers meeting, one of the questions that kept arising concerned the impact of the move to digital distribution on our information exchange environment. Eric Hellman first posed this question in his presentation about his forthcoming service GlueJar, although several others asked versions of the same question:

“What is a book [in a digital context]?”

What made a physical thing a book in the analog context is no longer what makes a thing a book today. Certainly, there still are physical things we call books. But in our new digital age, books are much more complex things. Books are no longer just the text. Nor are they even constrained by a certain text length or form, given the potential provided by a linking environment. With semantic markup and linked data, there are no longer even constraints on the content of the item. During his talk Networked books and networked reading, Kevin Kelly described the ultimate goal of every Wikipedia entry as one where every word or phrase in the text is hyperlinked to another, richer document.

There are even more complex content forms, which might still be variations on the single digital media form of a book. The text of a file might be rendered with text-to-speech technology, so that a digital text file is now possibly also an audio book. This could be an author’s or dramatic reading included as part of the package, or simply a machine-processed reading. As translation tools improve, even a text expression of an item (to use FRBR terminology) could also be a simultaneous expression in other languages, simply rendered onto the screen. Even the idea that a book is a self-contained thing that can be packaged, distributed, and preserved is an open question, as we consider the possibility of a book that links outside to a live streaming embedded video.

All of these questions about what a book is, and about the implications of the networked multimedia experience we now face, present a variety of challenges for the information distribution community. For the editors and publishers who are lovingly and carefully creating their content, which enhancements of the content-expression should receive the most attention? This goes directly to the cost of content creation and the potential return for authors and publishers. Hugh McGuire said during his presentation that ebooks should be no more difficult to create than a website, which is fine, but of course putting together a great website isn’t an easy piece of work either.

From a cataloging and library perspective, where do those various elements get described and stored within a catalog? How best to expose those forms, and how they will be indexed and ultimately served to patrons and users, is another challenge for search and discovery services. Finally, the preservation of this digital content stream is nearly painful to consider. This is due, in part, to the fact that these different media forms are developing and changing at different timescales by different constituencies. Expertise in text preservation, video preservation, and software preservation is not usually held by the same people.

Often the technical questions drive deeper philosophical conversations about the meaning and impact of the changes at hand. It is likely that with ebooks we have entered that phase, where the questions and the answers to those questions will drive much of our forward momentum for decades to come.

I was asked at the end of a panel discussion at the Frankfurt Tools of Change meeting what will we be working on five years from now. I punted on the question, not that there aren’t a variety of things that we need to accomplish. However, I expect that we will likely be trying to answer some of these deep philosophical questions we are beginning to pose now. The answers to these questions are not obvious, but their implications are profound. In all likelihood, we’ll be working our way through these questions for many years to come.

There are so many angles and issues to address, purely from a standards and technology perspective. This does not even take into account the other vexing cultural and business challenges that ebooks present us. This is absolutely not to say that we are spinning our wheels and being unproductive. These questions and issues are difficult, complex and interwoven. Coming to some resolution will take time, energy, attention to detail and sustained commitment.

Reflecting on Mr. Jobs

Thursday, October 6th, 2011

I don’t have a cool Steve Jobs story. Just a deep love of the products that he, his partners, and his teams over the years (who are often disregarded, though no one works alone) created.

My first introduction to the computer was in Boy Scouts, when the parents of my friends all had computers. I grew up in Rochester, NY, where Kodak ruled the city and nearly everyone I knew had a father who worked for Kodak. They were mostly engineers and they tinkered with computers in their spare time over the weekends. My real love for computers began on an Apple. My fifth grade math teacher had an Apple ][ in the corner, and I recall spending far more time learning to program on the Apple than I did learning math that year. My middle school had a computer lab, where some of my geekier friends and I learned basic programming and how to have fun with the computers. About that time, the Macintosh came out, and I recall riding my bike up to the only computer store, about 4 miles from my house, to gawk at the interface and play with the mouse. Despite my prodding, my parents, who were not engineers by any stretch, refused to buy into—literally or figuratively—my desire for a computer. In retrospect, given the $1,995 price at the time (roughly $4,500 today, adjusted for inflation), I can completely understand their reluctance to purchase such a very expensive toy for their son. While I tinkered with electronics through my early high school years, I spent more time using glorified electric typewriters than I did using computers.

I flirted again with Macs in college, although my roommate (whose computer I used) and most of my friends had PCs. I worked a bit on computers, but I was too busy with other things to get deeply involved. Once I got out into “the real world”, I was often thrown tech projects and database projects because I just loved working with computers. It would be some 15 years later before I had my own Apple computer. More or less, I’ve never turned back.

Like many in the technology world, I can’t say I ever had any direct interaction with Mr. Jobs, or even secondary contact. But he and his company did have a tremendous impact on me. If I hadn’t had one of those Apple ][ computers to play with, I’d probably never have been as interested in those funny boxes, or programming, or data systems, which is where I later ended up. I might not be where I am today. I’m sure there are many others who share my appreciation. Whether we’ve lost our Edison, as some have stated, I’ll leave to history. My thinking is that our Edison was probably Steve Wozniak, who’s gotten somewhat short shrift of late. But we might have lost a Carnegie. Regardless, I’m saddened by the news. I feel like a bit of my childhood died yesterday.

Why are there so many standards?

Wednesday, July 20th, 2011

One of the most frequently quipped complaints about standards is that there are so many to choose from. Standards have a way of proliferating without control, much like summer weeds in the garden.

One of my favorite formulations of this conundrum is a quote from Connie Morella, former congresswoman and former ambassador to the Organization for Economic Cooperation and Development. She spoke at ANSI’s World Standards Day awards dinner in 2006, when she received ANSI’s Ronald H. Brown Standards Leadership Award. During her speech she said, “Standards are like toothbrushes. Everybody wants one but nobody wants to use anybody else’s.”

Thanks to @ljndawson for the pointer to this XKCD cartoon, which summarizes this problem quite well.


There are a variety of reasons for this, some more reasonable than others. One reason that I have been spending a lot of time considering lately is that different communities create their own specifications because they are not aware of developments taking place in adjacent communities, or don’t see the overlap and common goals of two (or more) related specifications. One of the things we are trying to achieve in the ebook space with the recently launched special interest group is to help foster cross-community discussion and collaboration. Hopefully, we can avoid the problem described in the cartoon.

When is a new thing a new thing?

Thursday, June 10th, 2010

I recently gave a presentation at the National Central Library in Taiwan at a symposium on digital publishing and international standards that they hosted. It was a tremendous meeting and I am grateful to my hosts, Director General Karl Min Ku and his staff, for a terrific visit. One of the topics that I discussed was the identification of ebooks. This is increasingly becoming an important issue in our community, and I am serving on a BISG Working Group to explore these issues. Below are some notes from one slide in that presentation, which covers one of the core questions: At what point do changes in a digital file qualify it as a new product? The full slide deck is here. I’ll be expanding on these ideas in other forums in the near future, but here are some initial thoughts on this question.

——-

In a print world, what made one item different from another was generally its physical form. Was the binding hardcover or softcover? Was the type regular or large-size for the visually impaired, or was it even printed in Braille instead of ink? Whether the item was a book or a reading of the book (i.e., an audio book) was about as far afield as the form question went prior to the rise of the internet in the mid-1990s. In a digital environment, what constitutes a new item is considerably more complex. This poses tremendous issues regarding the supply chain, identification, and collections management in libraries.

This is a list of some of the defining characteristics for a digital text that are distinct from those in a print environment.  Each poses a unique challenge to the management and identification of digital items.

  • Encoding structure possibilities (file formats)
  • Platform dependencies (different devices)
  • Reflowable (resize)
  • Mutable (easily changed/updated)
  • Chunked (the entire item or only elements)
  • Networkable (location isn’t applicable)
  • Actionable/interactive
  • Linkable (to other content)
  • Transformable (text to speech)
  • Multimedia capable
  • Extensible (not constrained by page)
  • Operate under license terms (not copyright)
  • Digital Rights Management (DRM)

Just some of these examples pose tremendous issues for the ebook supply chain when it comes to fitting our current business practices, such as the ISBN, into this environment.

One question is whether the file format is what defines the form of an ebook that needs a new identifier. If the publisher is distributing a single file format, say an epub file, but that item must be transformed into a different file format, that of the Kindle, in order to be displayed on a Kindle, at what point does the transformation of that file become a new thing? Similarly, if you wrap that same epub file with a specific form of digital rights management, does that create a new thing? From an end-user perspective, the existence and type of DRM could render a file as useless to the user as supplying a Braille version to someone who can’t read Braille.
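Purely as an illustration (and not anything proposed by the BISG group), the question can be framed as a policy choice about which attributes of a file count as significant. Everything in the sketch below, including the attribute names and the two sample policies, is hypothetical; the point is only that the answer flips depending on the policy, which is exactly what remains undecided.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class EbookFile:
    """One distributable ebook file (hypothetical attributes, for illustration only)."""
    text_version: str    # e.g. "Hamlet, publisher's 2011 edited text"
    file_format: str     # e.g. "EPUB", "Kindle/KF8"
    drm: Optional[str]   # e.g. None, "Kindle DRM"

def needs_new_identifier(original: EbookFile,
                         derived: EbookFile,
                         significant: set) -> bool:
    """True if the derived file differs from the original in any attribute the
    community decides to treat as significant. Which attributes belong in
    `significant` is the open policy question."""
    return any(getattr(original, attr) != getattr(derived, attr)
               for attr in significant)

epub_release = EbookFile("Hamlet, 2011 edited text", "EPUB", None)
kindle_copy = EbookFile("Hamlet, 2011 edited text", "Kindle/KF8", "Kindle DRM")

# Policy A: only the text itself matters -> same product, reuse the identifier.
print(needs_new_identifier(epub_release, kindle_copy, {"text_version"}))        # False

# Policy B: the format and DRM wrapper matter -> the Kindle file is a new product.
print(needs_new_identifier(epub_release, kindle_copy, {"file_format", "drm"}))  # True
```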

To take another, even thornier question, let’s consider location. What does location mean in a network environment? While I was in Taiwan, if I wanted to buy a book using my Kindle from there, where “am I” and where is the transaction taking place? In the supply chain, this makes a tremendous amount of difference. A book in Taiwan likely has a different ISBN, assigned to a different publisher, because the original publisher might not have worldwide distribution rights. The price might be different; even the content of the book might be slightly different, based on cultural or legal sensitivities. But while I may have been physically located in Taiwan, my Amazon account is based in Maryland, where I live and where my Kindle is registered. Will Amazon recognize me as the US-based account holder, or go by my physical location in Taiwan at the moment of purchase, even though I traveled back home a week later? And this isn’t even considering where the actual transaction takes place, which could be a server farm somewhere in California, Iceland or Tokyo. The complexity and potential challenges for rights holders and rights management could be tremendous.

These questions about when a new thing is a new thing are critically important for the identification of objects and for the registration systems that underlie them. How we manage this information, and the decisions we take now about what is important, what we should track, and how we should distinguish between these items, will have profound impacts on how we distribute information decades into the future.

Mandatory Copyright Deposit for Electronic-only Materials

Thursday, April 1st, 2010

In late February, the Copyright Office at the Library of Congress published a new rule that expands the mandatory deposit requirement to include items published only in digital format. The interim regulation, Mandatory Deposit of Published Electronic Works Available Only Online (37 CFR Part 202 [Docket No. RM 2009–3]), was released in the Federal Register. The Library of Congress will focus its first attention on e-only deposit of journals, since this is the area where electronic-only publishing is most advanced. Very likely, this will move into the space of digital books as well, but it will likely take some time to coalesce.

I wrote a column about this in Against the Grain last September outlining some of the issues that this change will raise. A free copy of that article is available here. The Library of Congress is aware of these issues, and will become painfully more so when this stream of online content begins to flow its way. To support an understanding of the new regulations, LC is hosting a forum in Washington in May to discuss publishers’ technology for providing these data on a regular basis. Below is the description of the meeting that LC provided.

Electronic Deposit Publishers Forum
May 10-11, 2010
Library of Congress — Washington, DC

The mandatory deposit provision of the US Copyright Law requires that published works be deposited with the US Copyright Office for use by the Library of Congress in its collection. Previously, copyright deposits were required only for works published in a physical form, but recently revised regulations now include the deposit of electronic works published only online. The purpose of this workshop is to establish a submission process for these works and to explore technical and procedural options that will work for the publishing community and the Library of Congress.

Discussion topics will include:

  • Revised mandatory deposit regulations
  • Metadata elements and file formats to be submitted
  • Proposed transfer mechanisms

Space for this meeting is very limited, but if you’re interested in participating in the meeting, you should contact the Copyright Office.

ISTC and Ur-Texts

Thursday, April 1st, 2010

Tuesday, I attended a meeting on the International Standard Text Code (ISTC), organized by the Book Industry Study Group (BISG) in Manhattan. The meeting was held in conjunction with the release of a white paper on the ISTC by Michael Holdsworth entitled ISTC: A Work in Progress. This is a terrific paper, well worth reading for those interested in this topic, and I commend it to you all if you haven’t seen it. The paper provides a detailed introduction to the ISTC and what role this new identifier will play in our community.

During the meeting, as I was tweeting about the standard, I got into a brief Twitter discussion with John Mark Ockerbloom at the University of Pennsylvania Library. Unfortunately, as wonderful as Twitter is for instantaneous conversation, it is not at all easy to communicate nuance. For that, a longer form is necessary, hence this blog post.

As a jumping-off point, let us start with the fact that the ISTC has a fairly good definition of what it identifies: the text of a work as a distinct abstract item that may be the same or different across different products or manifestations. Distinguishing between those changes can be critical, as is tying together the various manifestations for collection development, rights, and product management reasons.

One of the key principles of the ISTC is that:

“If two entities share identical ISTC metadata, they shall be treated as the same textual work and shall have the same ISTC.”

Where to draw this distinction is quite an interesting point. As John pointed out in his question to me, “How are works with no definitive original text handled? (e.g. Hamlet) Is there an #ISTC for some hypothetical ur-Hamlet?” The issue here is that there are multiple “original versions” of the text of Hamlet. Quoting from Wikipedia: “Three different early versions of [Hamlet] have survived: these are known as the First Quarto (Q1), the Second Quarto (Q2) and the First Folio (F1). Each has lines, and even scenes, that are missing from the others.”

In this case, the three different versions would each be assigned their own ISTC, since the text of each version is different. They could be noted as related to the other ISTCs (as well as to the cascade of other related editions) in the descriptive metadata fields. Hamlet is a perfect example of where the ISTC could be of critical value, since those who have an interest in the variances between the three versions would want to know which text is the basis of the copy of Hamlet they are purchasing, given the significant differences between them.
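To make that concrete, here is a purely hypothetical sketch: the field names are simplified stand-ins for the actual ISTC metadata elements, and the identifiers are placeholders, not real ISTCs. Each early text registers as its own work, with the relationships carried in descriptive metadata rather than in any claim about which one is the original.

```python
# Hypothetical, simplified ISTC-style records for the three early Hamlet texts.
hamlet_q1 = {"istc": "ISTC-A", "title": "Hamlet",
             "text_version": "First Quarto (Q1), 1603",
             "related_istcs": ["ISTC-B", "ISTC-C"]}
hamlet_q2 = {"istc": "ISTC-B", "title": "Hamlet",
             "text_version": "Second Quarto (Q2), 1604",
             "related_istcs": ["ISTC-A", "ISTC-C"]}
hamlet_f1 = {"istc": "ISTC-C", "title": "Hamlet",
             "text_version": "First Folio (F1), 1623",
             "related_istcs": ["ISTC-A", "ISTC-B"]}

def same_textual_work(a: dict, b: dict) -> bool:
    """The core principle: identical descriptive metadata means the same work
    (and therefore the same ISTC); any difference means a separate ISTC."""
    return all(a[k] == b[k] for k in ("title", "text_version"))

print(same_textual_work(hamlet_q1, hamlet_q2))  # False: distinct works,
print(same_textual_work(hamlet_q2, hamlet_f1))  # False: linked only via related_istcs
```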

Perhaps the most stringent solution, in keeping with the letter of the standard, would be to treat the First Quarto as the source, since it was the first known to be published: it was the first to appear in the Stationers’ Register, in 1602, although it likely was not published until summer or fall of 1603. The Second Quarto and First Folio were published later, in 1604 and 1623 respectively. Although the First Quarto is often considered “inferior” to later versions, assigning it the “Source” ISTC would be no different than if it were published today and subsequently re-published as a revision (which would be assigned a related ISTC). Controversy about the source text of Hamlet probably began not long after the day it was published and has certainly grown along with the field of Shakespeare scholarship, but for the purposes of identification and linking, does the “Ur-text” matter?

Certainly, a user would want to know which is considered the canonical version, be that the Second Quarto or the First Folio. The critical point is that we identify things differently when there are important reasons to make the distinctions. In the case of Hamlet, there is a need to make the distinction. Which copy is considered “original” and which is a derivative isn’t nearly as important as making the distinction.

It is valuable to note the description in the ISTC User’s Manual in the section on Original works and derivations. Quoting from the Manual:

7.1    What is an “original” work?

For the purposes of registration on the ISTC database, a work may be regarded as being “original” if it cannot be adequately described using one or more of the controlled values allowed for the “Derivation Type” element (specified elsewhere in this document).

A work is considered to be “original” for registration purposes unless it replicates a significant proportion of a previously existing work or it is a direct translation of the previously existing one (where all the words may be different but the concepts and their sequence are the same). It should be noted that this is a different approach from that used by FRBR, which regards translations as simply different “expressions” of the same work.

The “Source ISTC” metadata field is optional and is “Used to identify the original work(s) from which this one is derived (where appropriate). It is recommended that these are provided whenever possible.” In the case of the three Hamlet “original versions”, this field would likely be left blank, since there is no way to distinguish between the “Original” and the “Derivation”. Each of the three versions could be considered “Original”, but this would get messy if one were not noted as original. There is a “Derivation type” metadata field with restricted values, although “Unspecified” is one option. Since there isn’t necessarily value in the “original” distinction, there isn’t a point in arguing about which is original. In the real world, the “original” will likely be the first version that receives an assignment.

This same problem will likely be true of a variety of other texts, especially those from distant historical periods. A focus on core principles, namely that we distinguish what is important and that disambiguation matters, while avoiding the philosophical arguments surrounding “original” versus “derivative” (just as the ISTC community is trying to avoid arguments over “ownership” of the record), will help to serve the entire community.

There is a lot more information about the ISTC provided by NISO. Members and subscribers can read the article that Andy Weissberg, VP of Identifier Services & Corporate Marketing at Bowker, wrote in Information Standards Quarterly last summer, The International Standard Text Code (ISTC): An Overview and Status Report. For non-subscribers, Andy Weissberg also presented during the 2009 NISO-BISG Changing Standards Landscape forum prior to ALA’s Annual conference in Chicago. You can view his presentation slides or watch the video from that meeting.

The International ISTC Agency Ltd is a not-for-profit company, limited by guarantee and registered in England and Wales. Its sole purpose is to implement and promote the ISO 21047 (ISTC) standard and it is operated by representatives of its founding members, namely RR Bowker, CISAC, IFRRO, and Nielsen Book Services.

The first edition of “ISO 21047 Information and Documentation – International Standard Text Code (ISTC)” was published by ISO in March 2009. It is available for purchase in separate English and French versions either as an electronic download or printed document from ISO.

9 Ways that librarians can support standards adoption

Monday, February 15th, 2010

Last week, I was at the Electronic Resources in Libraries conference in Austin, Texas. This is the fifth ER&L meeting, and it has grown tremendously, becoming an important destination for librarians and publishers focused on electronic content. There is a growing energy around this conference that reminds me a lot of the Charleston conference back in the 1990s–or perhaps earlier, but that’s when I first attended Charleston. The organizer of the meeting, Bonnie Tijerina, Electronic Resources Librarian at the UCLA Library, is full of drive and energy, and will, I expect, continue to be a force in the library community for many years to come. So too is the team of people who stand with Bonnie in making this entire project happen, most of whom wandered about the meeting in t-shirts emblazoned with a welcoming and helpful Texas “Howdee!” in large letters across the chest.

Generally a relaxed meeting, with attendance capped at ~350 people and a tight schedule of only a few competing sessions, ER&L also involves a lot of participant engagement. Participants are encouraged to contribute to the conversation via the conference wiki and blog. Also, the first day included a lightning talk opportunity for anyone to take the stage for five minutes to discuss whatever project they wanted to share.

I took the opportunity to stand up and briefly discuss an important issue for the library community: the adoption of standards by vendors and publishers. There is often a chicken-and-egg problem with the development of systems interoperability standards. When two parties need to exchange data, both sides of that exchange need to see the value of investing in implementation. Implementation has to serve the interest of both communities. In the case of library systems, the interests of the library staff are usually tied to improving end-user access, reducing data entry, providing more efficient services, or enabling better analysis. For the vendor, the interest might simply be better customer service and keeping current customers happy, building in response to RFP requests, or possibly a competitive advantage over other systems offerings. The problem is that in an era when development resources are tight–and they are always tight, only more so now–developing interchange functionality to make the supplier’s system work with another system (generally not developed by the same supplier) doesn’t often compete well in the list of development priorities.

How can the library community engage to help this situation? During my brief talk at ER&L I listed a few ways that librarians can encourage adoption of technical standards by their vendors, such as systems suppliers and publishers:

1) Educate yourself about the different initiatives that are ongoing in the community. NISO offers a series of educational events throughout the year, ranging from webinars to in-person events. Many of these events are free, such as the Changing Standards Landscape Forum at ALA and the monthly Open Teleconference Series. Subscribing to NISO’s free Newsline or our magazine, ISQ, is another way to keep abreast of the work ongoing at NISO and elsewhere in the community.

2) Build compliance language into your RFPs and contracts. A customer never has more power over the vendor than right before she/he is about to purchase something. While price is often the first thing people think about when negotiating a contract for a system, there are other important elements tied to service levels that should also be considered. Does the system conform to existing standards, and what exactly is meant by “conformance”? Conformance can mean different things to different organizations. Being as clear as you can about your needs from the outset can avoid problems later. NISO will be updating the NISO RFP Guide later this spring, which will help in this process.

3) Regularly speak with the product managers or account executives at your suppliers. The product managers are there to provide input and feedback to their development teams. Usually, they are a solid source for the company about customer needs and expectations. They can often advocate for your needs within the company. However, you need to be realistic about what they can achieve, which is why #8 below is an important channel too!

4) Participate in user group meetings and discussion groups: Every successful company will reach out to its customers for feedback and input, especially when new products, services or platform upgrades are under consideration. Be mindful of exactly what your needs and concerns are. This is where your work on education (point #1 above) can be so valuable.

5) Serve on Library Advisory Boards: Most publishers and systems vendors have advisory boards of librarians who provide regular feedback about community conditions and development needs.

6) Open Source Development – A variety of libraries are working on the development of new systems and services using Open Source tools and methods. Building interoperability standards into these systems is a great way to leverage communities to push adoption by proprietary vendors, since open systems often need to interoperate with proprietary systems to work properly. In addition, Open Source provides a public forum for the testing and improvement of existing standards.

7) Find out if your suppliers are engaging in standards development work. All of the rosters of NISO working groups are available online. Look through them and see which of your suppliers is participating. If you find a group that you feel would benefit your library, reach out to your suppliers. Press them to engage if they are not.

8) Go to the top – Contacting the executive leadership at supplier companies is a great way to get action on your needs. Often, the product managers don’t control the development pipeline at an organization, although they are useful as a first and regular point of contact (see #3 above). The executives can often control a wide variety of resources to get a project moving forward, if you can convince them it is valuable to their customers. Reaching out to the executives is never a bad idea and can usually bring results if your requests are focused and actionable.

9) Get involved yourself – There are many ways that you can engage in standards and best practices work. You can engage directly with NISO or via any of the variety of mirror groups that exist as part of ALA, ARL, LITA, NFAIS, SLA or MLA. In addition to building your own skills, the more engaged you are, the more authoritatively you will be able to speak about your needs. It also provides an opportunity for your needs to be built into the standards or best practices from the outset. You will be amazed at how similar the issues you face are to those of others in the community.

Did the iPad start a publishing revolution yesterday or not? Wait and see

Thursday, January 28th, 2010

For Steve Jobs, yesterday might have been a game-changing day for Apple and, by extension, the entire media world. I’m not sure the world shook in the way that he had hoped, but it’s possible that in the future we may look back on yesterday as a bigger day than it seems today. Such is often the nature of revolutions.

Since very few people have had an iPad in their hands yet, the talk of its pros and cons seems to me premature. As with previous devices, it will be both more and less than the hype of its debut. As people begin to use it, and as developers push the boundaries of its capabilities, it will mature and improve. It was wholly unrealistic to presume that Apple (or any other company launching a new product) would make the technological or political leaps necessary to create the “supreme device” that will replace all existing technology.

A lot of people have made points about the iPad missing this or that technology. Apple will almost certainly release an iPad 2.0 sometime in early 2011, dropping its price points and adding functionality, both as the underlying display technology becomes cheaper (interestingly, it is not an OLED display, as has been falsely reported) and, in some small ways, in response to customer demand for functionality. In this regard, think of copy & paste on the iPhone. As for some software gaps, such as the lack of Adobe Flash support, some have made the point that this is because of the iPhone OS, but I think these are driven by a desire to lock people into apps and to inhibit free, or possibly paid, browser-based web services. It is in Apple’s interest to lock people into proprietary software/apps, which are written specifically for their device.

From a standards perspective, the iPad could be either a good or a bad thing. Again, it is too soon to tell, but very initial reactions are worrying. That the iPad will support .epub as a file format is good on its face. However, it is very likely that the iPad will contain Apple-specific DRM, since there isn’t at the moment an industry standard. Getting content into (and out of, for those who want to move away from the iPad) that DRM will be the crucial question. As far as I am aware, Apple has been publicly silent on that question. I expect that some of the publishers who agreed to content deals likely discussed this in detail, but those conversations were likely limited to a very small group of executives all bound by harsh NDAs. (I note that McGraw Hill was allegedly dropped from the announcement because of comments made by its CEO Tuesday on MSNBC.)

Also on the standards front, there was an excellent interview last night on the NPR news show Marketplace, during which author Josh Bernoff, also of Forrester Research, made the point that the internet was splintering into a variety of device-specific applications. The move toward applications in the past two years might reasonably be cause for concern. It definitely adds cost for content producers, who must create multiple versions of content for multiple platforms. I can’t say that I completely agree with his assessment, however. There are open platforms available in the marketplace, and competition is forcing developers to open up their systems; notable examples are the Google Android phone OS and the Amazon Kindle Development Kit introduced last week.

What is most interesting about this new product is its potential. No one could have predicted three years ago the breadth and depth of the applications that have been developed for the iPhone. Unleashing that creativity on the space of ebooks will very likely prove to be a boon for our community. Specifically, this could provide publishers with an opportunity to expand the functionality of the ebook.

Often, new technology is at first used to replicate the functionality of the old technology. In the case of books, I’m referring to the technology of paper. We are only now beginning to see people take advantage of the new digital technology’s possibilities. Perhaps the launch of Amazon’s new development kit and the technology platform of the iPad will spur innovative thinking about how to use ebooks and how to enhance digital content’s ability to also be an interactive medium. The one element of yesterday’s presentation that really caught my eye in this regard was the new user interface for reading the New York Times. This seemed the most innovative application of the iPad. Hopefully in the coming months and years we will see a lot more of that experimentation, user interface design and multimedia integration.

If that takes place, then yesterday might have been a big day in the development of ebooks and information distribution. If not, the jokes about the name will be all that we’ll recall about this new reader.

BISG Appoints a new Executive Director

Tuesday, January 5th, 2010

The Book Industry Study Group has just announced that Scott Lubeck has been appointed the new Executive Director of BISG. Lubeck, most recently Vice President of Technology for Wolters Kluwer Health, Professional and Education, has more than thirty years of publishing industry experience and has been heavily involved in technology and in the design and implementation of digital initiatives. Lubeck has also held executive positions with Harvard Business School Publishing and Newsstand, Inc., as well as with Perseus Books Group and National Academy Press.

Michael Healy, the previous Director of BISG, left in May to lead the forthcoming Book Rights Registry, which will form after (if?) the Google Books Settlement is approved by the courts.

BISG and NISO frequently partner on industry events and initiatives. We look forward to continuing to serve the community together and to working with Scott, and we wish him the best of luck throughout his transition and in his new role.

Best wishes for a prosperous 2010 from NISO

Friday, January 1st, 2010

Happy New Year! I’d like to take this time, on behalf of the NISO staff and the Board of Directors, to thank you for your involvement and interest in NISO over the past year and to wish you and your organization a prosperous and successful 2010. The past year at NISO has seen some challenges, but more importantly many, many successes.

Everything that we undertake is only possible through the volunteer contributions of members of the NISO community and the financial support of our members. While everyone producing, sharing, using and preserving information relies on the work that NISO undertakes, few understand the effort and time that go into standards development. Those of you who participate in the process–either directly on a working group, or on the ballot review groups, or by supporting adoption through education and outreach–understand how challenging and rewarding consensus work can be.

The coming year will see a great deal of important activity on several different fronts. We look forward to serving the needs of the community and to making information flow easier, more rapid and more reliable. All the best to each of you!