Digital Repositories meeting: Metrics and assessment
November 2008
Yesterday and today, I’ve been at the SPARC conference on Digital Repositories. It’s been a good meeting so far. Full disclosure: NISO is a sponsor of the event, although we were not involved in the development of the program.
One topic that has been discussed repeatedly is the need for statistics and measures to assess the quality of materials deposited into IR systems. Yesterday, one of the speakers (sorry, name and hopefully link to presentation coming) noted that they had begun using the COUNTER Code of Practice to report usage from their repository. Not surprisingly, when the COUNTER rules are applied to the usage data that comes out of IRs, the figures drop quite precipitously. Raw usage numbering in the millions dropped to around 80,000 hits (actual figures will be drawn from the presentation when posted). Those in the community familiar with publisher usage data will recognize this pattern: reported usage falls sharply when COUNTER conformance is instituted. Perhaps greater application of the COUNTER code to IRs will provide a level playing field upon which people can compare IR traffic on an apples-to-apples basis.
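To give a rough sense of why COUNTER filtering shrinks the numbers so much, here is a minimal sketch (in Python) of the two filters most responsible: robot exclusion and double-click filtering. The function name, the robot strings, and the simplified log format are my own illustration, not anything from the presentation; the actual Code of Practice specifies an official robot exclusion list and per-format double-click windows (roughly 10 seconds for HTML, 30 for PDF).

    from datetime import datetime, timedelta

    # Illustrative robot substrings; COUNTER maintains an official list.
    ROBOT_AGENTS = ("googlebot", "slurp", "msnbot", "crawler", "spider")

    # COUNTER treats repeat requests for the same item by the same user
    # within a short window as a single use (simplified here to 30 s).
    DOUBLE_CLICK_WINDOW = timedelta(seconds=30)

    def counter_filtered_count(log_entries):
        """Count usage after robot exclusion and double-click filtering.

        log_entries: iterable of (user_id, item_id, timestamp, user_agent)
        tuples, assumed sorted by timestamp.
        """
        last_seen = {}  # (user_id, item_id) -> timestamp of last request
        count = 0
        for user_id, item_id, timestamp, user_agent in log_entries:
            if any(bot in user_agent.lower() for bot in ROBOT_AGENTS):
                continue  # drop robot traffic entirely
            key = (user_id, item_id)
            prev = last_seen.get(key)
            if prev is not None and timestamp - prev < DOUBLE_CLICK_WINDOW:
                last_seen[key] = timestamp  # double-click: refresh, don't count
                continue
            last_seen[key] = timestamp
            count += 1
        return count

    # Example: one robot hit plus three rapid clicks by one user = one use.
    t0 = datetime(2008, 11, 18, 9, 0, 0)
    log = [
        ("bot1", "paper42", t0, "Googlebot/2.1"),
        ("user1", "paper42", t0, "Mozilla/5.0"),
        ("user1", "paper42", t0 + timedelta(seconds=5), "Mozilla/5.0"),
        ("user1", "paper42", t0 + timedelta(seconds=12), "Mozilla/5.0"),
    ]
    print(counter_filtered_count(log))  # -> 1

Run over raw IR logs, where crawler traffic and repeated clicks dominate, filtering along these lines can easily cut reported usage by an order of magnitude or more, which is consistent with the drop described above.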
The larger question of assessment is thornier. This ties to my earlier post on metrics for article-level usage from the Charleston conference. As yet, there has been no discussion of what these measures will be. Impact Factor is frequently cited, but it is a journal-level measure. When it comes to assessing an individual article or resource, it falls short in the IR context, and there is likely a need for more item-specific measures.