
Archive for October, 2008

Different types of peer review

Monday, October 20th, 2008

Last week, I was at the ARL fall membership meeting in Arlington. As always, the meeting was terrific in the range of topics covered, and the discussions among those who attended were first rate. During a session hosted by the Scholarly Communications Steering Committee, James J. O’Donnell, Provost at Georgetown University, gave a talk entitled “Monocles, Monographs, and Monomania: A Look Ahead”. For those of you who haven’t heard Jim speak before, the depth, range, and scope of his presentations are amazing.

In this particular presentation, he described his experience in scholarly communication as a scholar, editor, and collegiate administrator. One of the topics he touched on was the state of peer review and what he described as the four current forms of peer review in scholarly publishing. The first and most obvious is pre-publication peer review, the process most commonly meant when one mentions peer review: the editorial review that takes place prior to publication, often as an iterative improvement process. The second form is post-publication review in the form of published reviews of a work, where other scholars comment directly on the relative strengths and weaknesses of their colleagues’ work. The third form is the longer-term citation ranking of a work in the published literature. Despite its flaws, citation ranking does capture the relative impact of a work (generally journals) over time since publication. The final form that Jim covered is the less appreciated and less discussed secretive process of tenure and promotion review. Through the tenure process, outside reviewers consider the corpus of a scholar’s work when making tenure and promotion decisions, which combines many aspects of the first and second forms of review in a private setting.

During his discussion, Jim described some of the potential flaws in this system and the dangers of the diminishing quality of these review processes. Two points in the discussion stood out for me. The first dealt with the conservativeness of this process and how it might respond to a more open, collaborative approach to research. Unfortunately, I couldn’t stay for Friday’s ARL/CNI Forum on “Reinventing Science Librarianship”, which looked like a great program. One thing is certain, however: openness and sharing of data will surely be among the topics discussed. Until the promotion and tenure system at universities changes, I fear that science will be locked up by the conservative doctrine of the sole scholar toiling away at his work, completely missing the proven value of collaboration and joint projects. It will be interesting to see over the coming years how the incorporation of these new methods of scholarly communication and publication impacts promotion systems that have developed over centuries. I expect the process will not change until the senior faculty in universities are slowly replaced with junior faculty more accustomed to, and attuned to, the strengths of networked workflows.

The second point that Jim made ties to the increasing quantity of materials published. Jim argued that it is too easy to get published and too hard to know what is really good. In a way, he postulated, this owes to the decreasing quality of peer review and publishers’ (broadly speaking) willingness to publish works by scholars, which adds to the problem of information overload. While I question the premise of his first point, there is a great deal to consider in his second. There is data that refutes the first point, particularly a report published by the STM Association that correlates the growth in papers and journals published with the growing number of scholars. His second point, about knowing what is really good, is the real crux of information distribution. What is key is not what is available, but what is important and relevant to me. One of the most promising areas of information science is providing context and additional information about an item so that it can be more easily discovered and assessed. There is lots of ongoing work in ontologies and semantics (NB: links are illustrative, not comprehensive). I believe there is also a lot that the scholarly community can learn from usage analysis and the work of the Los Alamos library team working on MESUR, led by Johan Bollen. There are also other usage-based metrics of quality, such as COUNTER’s Usage Factor and the Eigenfactor.
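The intuition behind citation-eigenvector metrics like Eigenfactor can be sketched in a few lines of code: a journal scores highly when it is cited by other high-scoring journals, which amounts to finding the stationary vector of a normalized citation matrix. The sketch below shows only that core idea, not the published Eigenfactor algorithm (which adds damping, excludes self-citation, and weights by article counts), and the toy matrix is invented for illustration.

```python
def rank_journals(C, iterations=200):
    """Power iteration on a row-normalized citation matrix.

    C[i][j] = citations from journal i to journal j.
    Returns a score vector that sums to 1.
    """
    n = len(C)
    # Row-normalize: each journal distributes its outgoing citations as "votes".
    M = []
    for row in C:
        total = sum(row)
        M.append([x / total for x in row] if total else [1.0 / n] * n)
    score = [1.0 / n] * n
    for _ in range(iterations):
        # A journal's new score is the weighted sum of scores citing it.
        new = [sum(score[i] * M[i][j] for i in range(n)) for j in range(n)]
        s = sum(new)
        score = [x / s for x in new]
    return score

# Toy example: journal 2 receives the heaviest citation flow and ranks highest.
C = [
    [0, 1, 4],  # journal 0 cites journal 1 once, journal 2 four times
    [2, 0, 3],
    [1, 1, 0],
]
scores = rank_journals(C)
print(scores.index(max(scores)))  # journal 2 ranks highest
```

The same mechanism, applied to usage logs rather than citations, is close in spirit to what the MESUR project explores.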

My feeling is that lessons could also be drawn from the Web 2.0 world and user interactions in quality assessment. I don’t know whether any publisher is using something like “rate this article” in the way Amazon does for books. I tend to doubt it, given the results of the Nature trial of “Open Peer Review” a few years back. Open peer review wasn’t a success, in part because of the quality of the reviews and the low take-up. Perhaps the issue of greatest concern (and the one most limiting participation) was the fear of having one’s ideas scooped. However, post-publication commenting doesn’t present the same challenges. I am sure, though, that publicly airing one’s opinions in our community has its downsides, which could limit its success.

Recently, Elsevier launched a contest on providing new service models for articles, entitled the Article 2.0 Contest. It is being billed as an opportunity to become a publisher and provide new services based on Elsevier’s ScienceDirect journal database. From the site:
“Each contestant will be provided online access to approximately 7,500 full-text XML articles from Elsevier journals, including the associated images, and the Elsevier Article 2.0 API to develop a unique yet useful web-based journal article rendering application.” Perhaps some innovator will find a way to improve interaction and discovery through user engagement. If nothing else, Elsevier will get some good ideas and someone will win $4,000.
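As a thought experiment on what a contest entry might look like, here is a minimal sketch of an article-rendering function: take full-text XML in, produce simple HTML out. The XML structure and field names below are entirely hypothetical; the actual schema and the Article 2.0 API that Elsevier provides to contestants will certainly differ.

```python
# Minimal sketch of a "journal article rendering application".
# The XML layout here is invented for illustration -- the real
# Elsevier article schema and API are different.
import xml.etree.ElementTree as ET

SAMPLE_XML = """
<article>
  <title>On the Efficiency of Standards</title>
  <authors><author>J. Smith</author><author>A. Jones</author></authors>
  <body><p>First paragraph.</p><p>Second paragraph.</p></body>
</article>
"""

def render_article(xml_text):
    """Convert an article XML document into a simple HTML fragment."""
    root = ET.fromstring(xml_text)
    title = root.findtext("title")
    authors = [a.text for a in root.iter("author")]
    paras = [p.text for p in root.iter("p")]
    html = [f"<h1>{title}</h1>", f"<p><em>{', '.join(authors)}</em></p>"]
    html += [f"<p>{p}</p>" for p in paras]
    return "\n".join(html)

print(render_article(SAMPLE_XML))
```

A real entry would layer discovery and user-engagement features on top of this kind of rendering core.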

Investments in a downturn – why resources invested in standards are needed

Friday, October 17th, 2008

As I write this, the markets are heading up (though that could certainly change by day’s end), something of an anomaly over the past month. At the ARL membership meeting I attended this week, there was also much talk about library budgets being scaled back in the current environment. It seems that most people and organizations are considering ways to scale back their expenditures in some fashion this year. While this is a challenge we all face, it might appear an odd time to make a pitch for greater investment, but that is often what is needed at times like these. Where we need to invest our time, energy, and money is in developing efficiencies so that we are all more productive and efficient in our work. I’ve made the point repeatedly that standards are all about building efficiency.

This was the theme of my opening remarks during the Collaborative Resource Sharing meeting earlier this month. October 1st was the centennial of one of the most important and influential standardization projects in the history of manufacturing: it was the date the first Model T Ford was assembled in Detroit. The manufacture of automobiles in the early 20th century was no breakthrough; the first car (as we know it today) had been built for commercial sale some 23 years earlier by Karl Benz in 1885. Where Henry Ford was ingenious was in his ability to replicate, improve quality, reduce costs, and speed the manufacturing process, and he was able to do this through standardization of the production process. In the next 19 years, Ford would produce some 15 million Model Ts, a feat previously unimaginable. Over those 19 years, average production was more than 750,000 per year, or more than 2,100 per day. “Sure,” you might say. “Standards in manufacturing make perfect sense. But how does one translate that into libraries?”

In the same way that Ford couldn’t produce three-quarters of a million cars per year without standards, if a library needed to fulfill 750,000 ILL requests per year, there is no way it could do so without streamlining the process to reduce the transaction costs of each request and fulfillment. For perspective: according to the most recent ARL statistics, from 2005-06, Ohio State University processed the most ILL requests of any ARL library and was the only one with over 200,000 ILL transactions.

Similarly, the management of both physical and digital resources requires standards to build efficiencies so that we can do more with less. The SUSHI standard will allow librarians to reduce the amount of time they invest in gathering usage data for analysis. The SERU initiative will eliminate the time, energy, and expense of negotiating one-off contracts when appropriate. The Cost of Resources Exchange (CORE) project aims to simplify the process of calculating cost-per-use. And the Institutional Identification (I2) project is working to improve supply chain and distribution efficiencies for publishers and libraries. There are many other areas where collaboration and standardization can smooth and simplify the flow of information from creators to publishers, through libraries, to end users.
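To make the efficiency argument concrete, here is a rough sketch of the kind of request payload a SUSHI client assembles to harvest a COUNTER usage report automatically, replacing a manual visit to each vendor’s statistics site. The element names follow my reading of the SUSHI schema and should be verified against the published NISO standard before being relied on; the requestor and customer identifiers are placeholders.

```python
# Rough sketch of a SUSHI ReportRequest payload (element names are a
# best-effort recollection of the NISO SUSHI schema -- verify against
# the published standard). In practice this would be sent inside a
# SOAP envelope to the vendor's SUSHI endpoint.
import xml.etree.ElementTree as ET

def build_report_request(requestor_id, customer_id, report="JR1",
                         release="3", begin="2008-01-01", end="2008-09-30"):
    req = ET.Element("ReportRequest")
    requestor = ET.SubElement(req, "Requestor")
    ET.SubElement(requestor, "ID").text = requestor_id
    customer = ET.SubElement(req, "CustomerReference")
    ET.SubElement(customer, "ID").text = customer_id
    definition = ET.SubElement(req, "ReportDefinition",
                               Name=report, Release=release)
    filters = ET.SubElement(definition, "Filters")
    date_range = ET.SubElement(filters, "UsageDateRange")
    ET.SubElement(date_range, "Begin").text = begin
    ET.SubElement(date_range, "End").text = end
    return ET.tostring(req, encoding="unicode")

print(build_report_request("lib.example.org", "customer-42"))
```

The point of the standard is that this one request shape works against every compliant vendor, which is exactly the kind of transaction-cost reduction the Ford analogy describes.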

NISO makes these initiatives happen in our community, and engagement in the process is required from all types of organizations. Even though times are difficult, investing in standards provides real-world, measurable, cost-saving improvements to the process of managing information. The creation of standards and best practices for the community takes time, effort, and money. But we should all remember what the outcomes of these investments can be and how much larger the rewards are than the investments we make in the process.

SSP Seminar on E-journal standards

Tuesday, October 14th, 2008

The Society for Scholarly Publishing is beginning to promote its fall seminar series. One meeting in particular, E-Journal Publishing: A Critical Review of Emerging Standards and Practice, should be of interest to the information standards community. The Journal Article Versions report will be discussed in this meeting. From the description:

Digital publishing offers opportunities to add value to content. For the journal article, this can involve making available prepublication material, grey literature (raw and ancillary materials associated with an article), and publishing on an article-by-article basis prior to the completion of the journal issue, all of which in turn raise questions about the version of record. Some of these issues have been addressed through working groups on whose standards recommendations the seminar will draw: the NISO/ALPSP “Best Practices for Journal Article Versions” group, which has issued recent recommendations, and the NFAIS “Working Group on Article-by-Article Publishing”, convened in December 2007. Along with the opportunities for added value afforded by digital publishing come more decisions to be made by content providers: What is the best content format and level of functionality? Should the digital article be published when ready, prior to the full issue/print version? Is the print version required? How should articles be published on an article-by-article basis? Should selected articles be made freely available? How are these decisions handled in institutional repositories? What is the appropriate policy for our journal(s) for deposit in institutional repositories? Implementation will vary by market, by publisher, and by available resources.


The seminar speakers include publishers and librarians recounting their experiences and resulting policies and participants in the NISO/ALPSP and NFAIS working groups.


Bonnie Lawlor, NFAIS  

Philippa Scoones, Wiley-Blackwell

Cathy Eisenhower, Gelman Library, George Washington University 

T. Scott Plutchak, Lister Hill Library of the Health Sciences, University of Alabama Birmingham 


FedEx – Physical delivery and Resource Sharing – Part 4

Tuesday, October 14th, 2008

It occurred to me after writing the last post that our collective mindset about physical delivery has changed radically in the past two decades. In preparing my regular article for Against the Grain, I thought to write some more about changing user expectations regarding when people can receive things. The most obvious of the services that have radically changed our mindset about delivery is FedEx. A quick search of the web turned up a number of vintage commercials for FedEx services. It’s amazing how services that had been almost exclusively for business have become ubiquitous. Some classics: here, here, here, and here.

Managing Physical Delivery – Collaborative Resource Sharing – Part 3

Tuesday, October 14th, 2008

Much of the community’s attention has been focused on digital content over the past decade. Strategies for managing purchase, discovery, integration, and reuse have been common themes of conferences and publications since the mid-90s. Innovative work on metasearch, description, authentication, and archiving related to digital content has made great strides in ensuring that people can retrieve web-based content easily and quickly from nearly anywhere. Of course, this is only part of the story of what is taking place in libraries. Libraries continue to maintain vast collections of physical objects, and providing content in physical form has also been an area of innovation over the past decade.

Collaborative collection development among regional partnerships has been one way libraries have been working together to maximize their acquisitions budgets. During the Collaborative Library Resource Sharing meeting last week, Julia Gammon, Head of the Acquisitions Department at the University of Akron Libraries, discussed the work that OhioLink is doing to coordinate the purchasing work of libraries in Ohio. In her presentation, Julia outlined the steps OhioLink undertook so that it could manage a collaborative acquisition strategy. Among these was using a common book supplier, YBP Library Services, and their GobiTween service to manage purchasing. According to Julia, using GobiTween has allowed OhioLink to better understand purchasing decisions across most institutions in the state and make more informed acquisitions decisions regarding how many copies will be needed. While this form of collaboration isn’t going to work for every institution or in every case, it is certainly a model that more consortia should explore to improve cost management within their member libraries.

In order to make a system of collaborative purchasing effective, libraries need cost-effective approaches to sharing items among the member institutions. This was the theme of the presentation by Valerie Horton, Executive Director of the Colorado Library Consortium, entitled Moving Mountains: The Status of Library Physical Delivery Services. Valerie described eloquently the myriad ways that institutions and consortia move objects around their networks. From the USPS to commercial carriers to courier systems, libraries are investing heavily in moving objects from one place to another. One figure Valerie noted, from an ICOLC survey in 2008, is that the Prospector unified catalog service in Colorado shares some 426,000 items per year, about the same amount as other consortia listed in that report. There is now a physical delivery subgroup of the Rethinking Resource Sharing initiative that is exploring these issues to provide more efficient and effective end-user delivery services. According to Valerie, there is much that could be accomplished here, and standards could help in several areas. Some specific ideas she gave related to common circulation policies, labeling, and packaging. These are the types of standards that shippers across the country and around the globe agreed to long ago, and the library community could learn much from them.

This is a complicated issue, in particular for publishers. While the thought of libraries sharing items with each other makes perfect sense and will indeed decrease costs for institutions, there is a downside for publishers. While an item shared among libraries to serve patrons is viewed positively by the library community, it can be (and often is) viewed as a lost sale opportunity by the publisher community. One needn’t look beyond the precarious state of university presses and the challenges of their book sales model, which for decades had been predicated on the sale of scholarly monographs, generally to the library community. This issue comes into sharper relief when considering e-books, a new area for library acquisitions, and the prospect of sharing digital content. We’ll explore that in more detail in the coming weeks.

Advice from Peter Drucker – an idea from the Resource Sharing meeting – Part 2

Saturday, October 11th, 2008

During the Collaborative Resource Sharing meeting earlier this week, Adam Wathem, Interim Head of the Collections Services Department at Kansas State University Libraries, wrapped up the meeting by discussing barriers to efficiencies within libraries. It was a great presentation that brought together the threads of the conversations and presentations throughout the meeting. At one point in the presentation (available here), Adam quoted Peter Drucker in a way that summarizes one of the problems libraries face:

“There is nothing so useless as doing efficiently that which should not be done at all.”  

How much of the workflow process in our institutions is bound up in efficiently doing things that meet neither the user needs nor the expectations of today?

A focus on the end user: NISO’s Collaborative Resource Sharing Conference – part 1

Wednesday, October 8th, 2008

This week, NISO held a seminar on how the library community collaborates to share resources and provide more effective services to the end-user community. The meeting, Collaborative Library Resource Sharing, was held in Atlanta at the Georgia Tech Global Learning Center. Bringing together some of the leading experts in resource sharing and interlibrary loan, the meeting provided a window on how libraries are working to improve the end-user services they provide. Every one of the speakers offered interesting perspectives and experiences from which the community can learn, as well as potential ideas upon which new solutions can be built and spread.

One thing rang clear from many of the presentations: current library systems are more often built around managing the thing (be it books, journal articles, AV items, or digital objects) rather than responding to the needs of users. Several of the speakers compared the services that Netflix, Amazon, or Google provide against ILL request and retrieval processes. In some institutions and some digital services, accessing content can be very easy; in others, unfortunately, it can be a horrendous maze of complicated procedures and barriers. For example, Gail Wanner, Resource Sharing Manager at SirsiDynix, noted that one of the main reasons people don’t use library services is that they are concerned about late fees. Even Blockbuster has moved away from painful late fees, and Netflix has made a name for itself by allowing users to keep content as long as they wish. Another example of a customer-focused experience, highlighted at the meeting by Marshall Breeding, Director for Innovative Technologies and Research at the Heard Library at Vanderbilt University, is Amazon’s variety of service options. Not only is ordering made simple with “One-Click Ordering”, but Amazon also presents a wide variety of purchase options: new or used, shipped overnight or next week, or as an e-book or audiobook download (when available). Reader reviews add tremendous social input on the value and quality of an item, which aids discovery but also focuses attention on which item is most appropriate for the end user.

While librarians are often loath to consider their patrons “customers”, many of the service models in the business world, in particular the relentless focus on serving the customer’s needs, are concepts that libraries will need to begin adopting, or at least adapting, to retain their place among the sources of information the community relies on. If libraries fail to incorporate a vigorous focus on serving the needs of their patrons, those patrons will continue to turn to providers of similar services for the delivery of content. We spent a great deal of time talking about how libraries can do this. I’ll write in more detail about some of the specifics that were discussed later this week.

NOTE: Presentations from the event will be posted to the agenda page later this week as well.

NISO brings together Data Thought Leaders

Friday, October 3rd, 2008

We held the last of the Mellon-funded Thought Leader Meeting series on Wednesday. This meeting was on research data and explored many of the issues surrounding the use, reuse, preservation, and citation of data in scholarship. Like the three previous meetings, it was a great success. The meeting brought together a number of representatives from the research, publisher, library, and system developer communities. A list of the participants is below.

Research data is becoming increasingly critical in almost every area of scholarship. From census data to high-energy physics, and medical records to the humanities, the range of data types and the uses to which researchers put this data have expanded dramatically in the past decade. Managing, finding, accessing, and curating this data is a growing problem. A report produced by IDC earlier this year concluded that the amount of digital data created now exceeds the total available storage capacity in the world. Determining which aspects are most valuable and adding value through curation will be a tremendous project in the coming decades.

In order to be useful (in a scientific sense), data needs to be verifiable, identifiable, reference-able, and preservable, much in the way that published materials are. Obviously, this poses many questions. When referring to a data set that is constantly being updated or appended, what exactly would you be citing? What if the results are modeled from a subset? Here the data set itself isn’t as relevant to the citation as which portion of the larger set was used, along with the model and criteria used in the analysis. Additionally, the models and software applied to a specific data set would be critical to determining the validity of any results or conclusions drawn from the data. In the peer-review process of science, each of these aspects would need to be considered. Some publishers are already considering these issues and review criteria. In the future, these issues will only grow for publishers, societies, and scientists as they consider the output of science.
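One way the “what exactly was cited?” problem might be handled in software is to pin the dataset version, the subset query, and the access date into the citation itself. The format below is purely illustrative, not any published citation standard, and every name and identifier in the example (including the DOI) is hypothetical.

```python
# Illustrative sketch of composing a citation for a dynamic dataset.
# The citation format is invented for this example; real conventions
# would come from a standards or community effort.

def cite_dataset(author, title, repository, version, subset_query,
                 accessed, identifier):
    """Build a citation string that pins version, subset, and access date."""
    return (f"{author}. \"{title}\" (version {version}), {repository}. "
            f"Subset: {subset_query}. Accessed {accessed}. {identifier}")

citation = cite_dataset(
    author="Example Survey Consortium",
    title="Continuously Updated Census Extract",
    repository="Example Data Repository",
    version="2008-10-01T00:00Z",          # snapshot timestamp of the data
    subset_query="region='Midwest' AND year BETWEEN 1990 AND 2000",
    accessed="2008-10-03",
    identifier="doi:10.9999/example.1234",  # hypothetical DOI
)
print(citation)
```

Recording the query and version alongside the identifier is what would let a reviewer or later researcher reconstruct exactly the slice of data a result was drawn from.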

Another issue is the variety of life cycles for different types of data. In fields such as chemistry, the usefulness of a dataset has a much shorter half-life than it might in the humanities or social sciences. This could affect the value proposition of whether to curate a dataset. Some work done by JISC had focused on mandating deposit of materials for the purpose of preservation. Unfortunately, the project didn’t succeed and was withdrawn in 2007. One potential reason that an investment of more than $3 million turned out to be a disappointment was its focus on archiving and preserving the deposited data rather than on its reuse and application. For preservation to be deemed worth the investment, a simultaneous focus on the reuse of the data is critical to ensuring that the investment sees some form of return, apart from developing a large repository of never-accessed data.

While there was some discussion during the day about encouraging the use and sharing of research data and methodologies, technical standards will not help with what is inherently a political question. Many of the rewards and recognition in the scholarly process come back to the formalities of publication, which have developed over centuries. As with many standards-related questions, the problems are not normally related to technologies per se, but often hinge on the political or social conventions that support certain activities. That said, the development of citation structures, descriptive metadata conventions, discovery methodologies, and curation strategies will add to the growing trend of utilizing these data forms in scholarly communications. By expanding their use and ensuring that the content is preserved and citable, NISO could help encourage expanded use of data in the communication process.

The report of this meeting will be publicly available in a few weeks on the NISO website along with the other reports.  NISO’s leadership committee structure will be reviewing the recommendations and deciding which initiatives to push forward with in the coming months. 

Research Data Thought Leader Participants:

Clifford Lynch, Coalition for Networked Information 

Ellen Kraffmiller, Dataverse Network 

Paul Uhlir, National Academy of Sciences

Lars Bromley, AAAS 

Robert Tansley, Google 

Jean Claude Bradley, Drexel University

Camelia Csora, 2collab, Elsevier  

MacKenzie Smith, MIT Libraries – DSpace

Stuart Weibel, OCLC