NISO Webinar: Assessment Metrics

December 14, 2011
1:00 - 2:30 p.m. (Eastern Time)

 

Sponsored by

CrossRef


Below are the questions submitted during the December 14, 2011 webinar. Answers from the presenters will be added as they become available. Not all of the questions could be answered during the live webinar, so those that could not be addressed at the time are also included below.

Speakers:

  • COUNTER and SUSHI: What’s new with release 4 of the COUNTER Code of Practice
    Oliver Pesch, Chief Strategist, E-Resource Access and Management Services, EBSCO Information Services
  • Journal Assessment Metrics
    Robin Kear, Reference/Instruction Librarian, University of Pittsburgh
  • Using Journal Metrics for Decision-Making
    Tim Jewell, Director, Information Resources and Scholarly Communication, University of Washington and Corey Murata, Information Resources and Collection Assessment Librarian, University of Washington

Feel free to contact us if you have any additional questions about library, publishing, and technical services standards, standards development, or if you have suggestions for new standards, recommended practices, or areas where NISO should be engaged.

NISO Webinar Questions and Answers

1. With various versions of the impact factor available, do librarians prefer to have all of them at once when evaluating collections or recommending journals, or is one of them enough, since they are more or less the same?

Robin Kear: The scores are not the same, but they are similar in nature. They are derived from two distinct datasets: the Impact Factor, Eigenfactor, and Article Influence are all derived from the ISI Web of Knowledge journals and citations from Thomson Reuters, while the SJR and SNIP scores are both derived from the Scopus journals and citations from Elsevier. They all attempt to rank the importance of journals by measuring citations, but the way each one measures is different. (Please see my slides and the answer to question 2.)

2. If one of the impact factor variations is enough, which one is the preferred indicator?

Robin Kear: Each score has its own strengths. The Impact Factor from Thomson Reuters has the most longevity, is the most well-known, and is strongest in the sciences. The Eigenfactor and Article Influence scores are derived from this same dataset and are unique scores with variations in the algorithms that, in part, help address perceived impact factor weaknesses (self-citations, manipulation). The SJR and SNIP scores are derived from a larger citation set (from Elsevier) with more journals and other gray literature (proceedings, papers, etc.) from a variety of disciplines.
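
As a point of reference, the best-known of these scores has a simple definition; the standard two-year Impact Factor (this is the general Thomson Reuters definition, not a formula presented in the webinar) is

\[
\mathrm{IF}_{2011} \;=\; \frac{\text{citations received in 2011 by items published in 2009 and 2010}}{\text{citable items published in 2009 and 2010}}.
\]

The other scores start from the same kind of citation counts but adjust them: Eigenfactor and Article Influence weight citations by the influence of the citing journal, SJR weights them by the prestige of the citing journal, and SNIP normalizes them for the citation practices of the field.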

3. Question for Robin: Which of the measures being discussed are being looked at during purchase/renewal decisions?

Robin Kear: I cannot speak for all of our subject specialists and collection development specialists, so I cannot answer definitively. Our institution does have access by subscription to all of the scores I discussed (Impact Factor, Eigenfactor, Article Influence, SJR, and SNIP).

4. Question for Tim: You mentioned student demographics, but do you ever look at departments by the amount of funding they bring into the institution?

5. How do you get the Est value?

6. Is there a way libraries can make use of the actual (title-level and/or bundle-level) metrics data (the "scores") that have already been compiled, rather than each library trying to generate them again and again?

7. Can this data be used to help decide what to weed/discard/keep in storage/remove from storage?

8. Will there be a centralized source of KBART vendor title lists, or will they have to be gathered one at a time, as per COUNTER reports prior to SUSHI?

Oliver Pesch: KBART's role is to describe the format and associated rules that content providers should follow when creating their title lists for delivery to knowledge base vendors. Each content provider is responsible for creation of the lists and for making them available to knowledge base vendors via an FTP site.
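
For illustration, a KBART title list is a plain tab-delimited text file with one row per title. A minimal sketch using a few of the recommended KBART fields (the values here are invented, only a subset of the full field set is shown, and fields are separated by tabs) might look like:

publication_title	print_identifier	online_identifier	date_first_issue_online	title_url	title_id	publisher_name
Journal of Example Studies	1234-5678	2345-678X	1997-01-01	http://www.example.com/jes	jes001	Example Publishing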

9. What staff positions in your library are responsible for populating and maintaining your databases?

10. You mentioned that you organized cancellation lists by fund codes. How did you handle interdisciplinary titles that impact different funds (ultimately different departments/colleges)?

11. Does the multimedia report also cover audio-visual resources like films on demand, etc?

Oliver Pesch: Release 4 of the COUNTER Code of Practice describes the multimedia reports as covering "usage of multimedia content (audio, image, video, etc.) that is a content item in itself (i.e. not part of a Journal, Book or Reference Work)..." and defines a Multimedia Full Content Unit as "an item of non-textual media content such as an image, streaming or downloadable audio or video files. (Does not include thumbnails or descriptive text/metadata.)"

12. Where does the 'prestige metric' that SJR uses come from?

Robin Kear: The SJR prestige metric orders journals by average prestige per article. The paper that explains the score, "The SJR indicator: A new indicator of journals' scientific prestige" by Borja Gonzalez-Pereira, Vicente Guerrero-Bote, and Felix Moya-Anegon (December 2009), is freely available at http://arxiv.org/ftp/arxiv/papers/0912/0912.4141.pdf.
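
At a high level, the prestige computation described in that paper is PageRank-like: prestige flows through the journal citation network, so a citation from a high-prestige journal counts for more than a citation from a low-prestige journal, and the accumulated prestige is then divided by the journal's article output to give an average prestige per article. A simplified sketch of that kind of iteration (this omits the damping and normalization terms of the actual SJR formula) is

\[
P_i^{(k+1)} \;=\; \sum_{j} \frac{c_{ji}}{\sum_{h} c_{jh}} \, P_j^{(k)}, \qquad \mathrm{SJR}_i \;\propto\; \frac{P_i}{\text{number of articles published by journal } i},
\]

where \(c_{ji}\) is the number of citations from journal \(j\) to journal \(i\).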