The Archival Metrics Toolkit: Development and implementation

Authors


Abstract

User-based evaluation in archives and manuscript repositories lags behind that of libraries and museums. This paper discusses the development and testing of the Archival Metrics Toolkit, which is designed to support archivists in conducting user-based evaluations. The current Toolkit includes five questionnaires for assessing archival services in colleges and universities, along with instructions for administration and data analysis. The questionnaires gather feedback from (1) onsite users of the reading room, (2) students who have attended an orientation session, (3) instructors who use the archives for teaching, (4) online users of the website, and (5) online users of finding aids.

Introduction

User-based evaluation in archives and manuscript repositories lags behind that of libraries and museums. To address this gap, the authors have spent several years designing and testing assessment tools to support archivists in conducting user-based evaluations. In 2004, a preliminary meeting, "Users, Metrics, Archives," was held at the University of North Carolina and attended by representatives from different types of archives (religious, corporate, museum, and university). Participants agreed that generic assessment tools would benefit their repositories and the profession; reasons given for wanting to gather user feedback included informing administrators involved in resource allocation and better identifying constituency needs. The most significant outcome of the meeting was the overwhelming support for the metrics movement within the archival community.

Participants at that meeting also agreed that user-based evaluation tools needed to be specifically geared toward a particular type of archives. For example, colleges and universities have three main missions: teaching, research, and service. Thus, impact measures must be developed in relation to these missions, and evaluative instruments need to measure support for these activities.

Due to the need for mission-specific evaluation tools and our funder's interest in higher education, we decided to focus the next phase of the project on college and university archives and special collections. Universities and colleges hold some of North America's richest collections of original documents. Their one-of-a-kind photographs, letters, scientific logbooks, and business ledgers, protected and presented by college and university archivists, provide the grist for historical research. Faculty members at these institutions draw on these resources both for their own projects and when teaching students how to conduct original primary source research. Faculty and students are thus key users of archives, and university archivists and manuscript curators must assess their needs. The Archival Metrics project therefore developed and tested five surveys focused on assessment in colleges and universities. The surveys gather feedback from (1) reading room researchers, (2) students who attended an orientation session, (3) instructors who use the archives for teaching, (4) website users, and (5) online finding aids users. This paper outlines the process of developing and testing the surveys.

Developing the Conceptual Framework

All of the questionnaires are supported by an underlying conceptual framework that maps important aspects of archival user services to specific items in the questionnaires. The concepts were developed through two complementary processes: extensive literature reviews and 40 interviews, conducted at multiple locations, with college and university archivists and manuscript curators, instructors using primary sources in their courses, and students who had used primary sources in the context of their classes. The concepts addressed vary from survey to survey. For example, the concepts underlying the onsite reading room questionnaire are (1) quality of the interaction, (2) accessibility and access, (3) the archival/special collections information space, and (4) learning outcomes, while the online finding aids questionnaire also includes the concepts of navigation and usability.
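To make the concept-to-item mapping concrete, the sketch below shows one way such a framework might be represented. The concept names are those of the onsite reading room questionnaire described above; every item wording is a hypothetical placeholder invented for illustration, not an actual Toolkit item, and the averaging scheme at the end is our own assumption, not the Toolkit's prescribed analysis.

    # Hypothetical sketch of a concept-to-item mapping for the onsite
    # reading room questionnaire. Concept names come from the paper;
    # all item wordings are invented placeholders, not real Toolkit items.
    READING_ROOM_FRAMEWORK = {
        "quality of the interaction": [
            "The staff member who helped me was knowledgeable.",
            "Staff responded to my questions promptly.",
        ],
        "accessibility and access": [
            "I was able to obtain the materials I requested.",
        ],
        "archival/special collections information space": [
            "The reading room supported the work I came to do.",
        ],
        "learning outcomes": [
            "This visit improved my ability to do primary source research.",
        ],
    }

    # One possible analysis (an assumption): score each concept as the
    # mean of its items' ratings, e.g. on a 5-point agreement scale.
    def concept_score(ratings: dict[str, int], concept: str) -> float:
        items = READING_ROOM_FRAMEWORK[concept]
        return sum(ratings[item] for item in items) / len(items)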

Developing the Questionnaires

At the beginning of the project we envisioned developing a question bank from which archivists could create their own questionnaires by selecting questions or modules. This approach proved problematic, and the questionnaires were instead tested as complete instruments. Repositories administering the surveys are advised to change only certain aspects of the questionnaires and to refrain from rewording items. Pretesting involved several methods, including one-on-one administration followed by an interview about the questionnaire, as well as focus groups. We also conducted pilot tests at several sites, deploying one or more questionnaires and sometimes using multiple methods to assess both the questionnaires and the administration procedures.

Identifying Effective Administration Procedures

Questionnaires were pilot-tested using a variety of administration procedures. The reading room questionnaire was administered onsite during the researcher's visit, so that users could evaluate the current visit; administering the survey after the user has located and received some materials is therefore key. For the website and online finding aids surveys, the administration procedures included sending invitations to recent reading room visitors, retrospectively contacting email reference requestors, and sending a link shortly after responding to an email reference request (prospective invitations). As Table 1 shows, the retrospective email invitations generated the lowest response rates for the online finding aids survey. In addition to testing the questionnaires and survey administration procedures, we also tested the instructions for analyzing the data gathered.

Table 1. Online Finding Aids Survey: Comparison of Administration Procedures and Response Rates

Test Site (Procedure)                   | Invitations Sent | After 1st (#/%) | After 2nd (#/%) | After 3rd (#/%) | Total Responses (#/%)
A (Email Reference, Retrospective)      | 102              | 16 (16%)        | 21 (21%)        | 7 (7%)          | 44 (43%)
B (In-house Researchers, Retrospective) | 51               | 14 (27%)        | 7 (14%)         | 3 (6%)          | 24 (47%)
B (Email Reference, Prospective)        | 52               | 9 (17%)         | 11 (21%)        | 4 (8%)          | 24 (46%)
C (Email Reference, Prospective)        | 36               | 11 (31%)        | 11 (31%)        | 3 (8%)          | 25 (70%)
D (Email Reference, Retrospective)      | 162              | 24 (15%)        | 23 (14%)        | 17 (11%)        | 64 (40%)
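As a check on the arithmetic, the following sketch recomputes the per-invitation and cumulative response rates in Table 1 from the raw counts (the data layout and per-wave framing are our own; Python is our choice of language). The printed percentages agree with the table to within one percentage point, reflecting small rounding differences in a couple of cells.

    # Recompute the response rates reported in Table 1 from the raw counts.
    # Counts are taken directly from the table; the structure is ours.
    TABLE_1 = [
        # (test site and procedure, invitations sent,
        #  responses received after the 1st, 2nd, and 3rd invitations)
        ("A (Email Reference, Retrospective)",      102, (16, 21, 7)),
        ("B (In-house Researchers, Retrospective)",  51, (14, 7, 3)),
        ("B (Email Reference, Prospective)",         52, (9, 11, 4)),
        ("C (Email Reference, Prospective)",         36, (11, 11, 3)),
        ("D (Email Reference, Retrospective)",      162, (24, 23, 17)),
    ]

    for site, sent, waves in TABLE_1:
        total = sum(waves)
        per_wave = ", ".join(f"{n}/{sent} ({n / sent:.0%})" for n in waves)
        print(f"{site}: {per_wave}; total {total}/{sent} ({total / sent:.0%})")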

Conclusions

The Archival Metrics Toolkit is available free of charge at http://archivalmetrics.org. The Toolkit consists of the questionnaires along with administration and analysis instructions. We plan to analyze usage of the tools in order to improve them.

Acknowledgements

This project was made possible by funding from the Andrew W. Mellon Foundation. We also thank all of our partner institutions, and especially those repositories that allowed us to test the surveys at their sites. During this project the following students contributed to the effort: Morgan Daniels, Luanne Freund, Magia Krause, Erin Passehl, Juanita Rossiter, and Beth St. Jean.
