NISO Two-Part Webinar: Measuring Use, Assessing Success
PART 1: Measure, Assess, Improve, Repeat: Using Library Performance Metrics
September 8, 2010

Below are the questions submitted during the first of NISO's two-part September webinar series on Measuring Use, Assessing Success. This September 8, 2010 webinar, "Measure, Assess, Improve, Repeat: Using Library Performance Metrics," aimed to answer questions such as: What performance metrics are relevant to library operations? Are performance metrics already being successfully applied in libraries? How can information standards assist with this area of growing importance to libraries, the entities they report to, and the information providers that supply them? Answers from the presenters will be added when available. Not all questions could be answered during the live webinar, so those that could not be addressed at the time are also included below.

Speakers:

Applying Performance Metrics in Libraries: Current Use
Steve Hiller, Director, Assessment and Planning, University of Washington Libraries

Looking Ahead: The Future of Performance Metrics
Martha Kyrillidou, Senior Director, Statistics and Service Quality Programs, Association of Research Libraries (ARL)

Feel free to contact us if you have any additional questions about library, publishing, and technical services standards or standards development, or if you have suggestions for new standards, recommended practices, or areas where NISO should be engaged.

NISO Performance Metrics Webinar Questions and Answers

  1. Do we have any sense whether metrics used across different institutions are comparable? In other words, does cost/download in Library A usually mean pretty much the same thing as cost/download in Library B?

  2. Do businesses that provide library services to institutions that outsource their libraries use the same measures we have been talking about?

  3. Martha: Like it or not, academic institutions like to plan in terms of national rankings. The ARL investment index only shows how expensive a library is. Are you planning any national ranking system that looks beyond $$ spent?

  4. Is MINES used with EZproxy? How?

  5. Please describe the applicability of these ARL-scaled tools to small or private libraries (e.g., Oberlin group libraries). Do they have widespread use among libraries outside ARL?

  6. To Steve and/or Martha: What are the assessment tools to evaluate information literacy teaching in libraries?

  7. On the ClimateQUAL survey, it says it is "designed to understand the impact [staff] perceptions have on service quality in a library setting." How does the survey make that connection or draw those conclusions?

  8. On the University of Washington map, how were the four perspectives identified... at ARL? at UW?

  9. For both speakers: Are there quantitative measures you would recommend libraries use to demonstrate added value to their parent institutions? And is there a good method--or any method--besides self-report user surveys to measure outcomes for things like electronic resources? (We know that Journal Y has $X cost-per-use, but how do we measure whether accessing it led to our users learning or doing something valuable?)

  10. What's the status of DigiQUAL+? Is it functional yet?
     
  11. Why do we need to quantify qualitative information?