Reflections on Archival User Studies

Hea Lim Rhee (rhee.healim@gmail.com) is a senior researcher at the Korea Institute of Science and Technology Information and an associate professor at the University of Science and Technology in Korea.

This study is the first to focus on how developments in research trends, technology, and other factors have changed archival user studies. How have they changed in the past thirty years? How have they been conducted? This study examines and analyzes the US and Canadian literature on archival user studies to trace their past, characterize their present, and uncover the issues and challenges facing the archival community in conducting user studies. It discusses its findings and offers suggestions for future archival user studies.

The library profession has conducted many user studies since the first user study appeared in the late 1940s, but the archival profession has paid attention to archival user studies only since the 1980s. In the 1980s several archivists criticized the archival community for only impressionistically or anecdotally understanding users, and they championed a systematic approach to studying users.1 Since then, user studies have been touted as a useful tool for collecting information about users and their use, including who uses archival materials and institutions, what users need, how they locate archival materials, what kind of archival materials and access tools they prefer, and how they use gathered archival materials. Since the 1980s, not only has the archival environment changed (e.g., archival information systems, services, and access tools), but archival users and their use have changed as well. Several factors, such as changing research trends, research interests, and developing technology, have also changed archival user studies. Unfortunately, there is no study exclusively focusing on this development of archival user studies, so it is unclear how they have changed.

Most of the existing literature on archival user studies only partially describes previous literature or focuses only on user studies dealing with specific research topics. For instance, Lisa R. Coats reviewed the literature on user studies of online archival finding aids.2 Carolyn Harris reviewed literature published since the late 1990s investigating archives users in the digital era.3 Anneli Sundqvist reviewed a number of examples in the literature on how the English and Swedish archival discourses conceptualize users and use of records.4

This study answers the following research questions about the development of archival user studies themselves: How has the nature of user studies changed over the past thirty years? How have user studies been conducted? It examines and analyzes the US and Canadian literature on archival user studies to trace their past, characterize their present, and uncover the issues and challenges facing the archival community in conducting user studies.

This paper’s analysis of the development of archival user studies could help assess whether previous archival user studies have been properly conducted. It reveals issues and limitations of existing user studies and suggests ways to improve future ones and better utilize their results in archival functions and practices. Archivists reading this paper may discover informative user studies conducted in the same context as their own institutions. Ultimately, archivists can more effectively serve their institutions’ users by knowing more about them. This study aims to increase and clarify the archival community’s knowledge of the user studies that obtain this information.

Research Method

To identify valid and reliable characteristics of archival user studies, the author examined, analyzed, and synthesized publications on archival user studies. This study rests on a broad analysis of the archival literature, but many of the examined works came from four journals, each reviewed from its initiation year through December 2011: American Archivist, Archivaria, and Archival Science, the top three archival journals in the “Proposed Journal Ranking List for Archives and Records Management” (2009), and Journal of Archival Organization, which was recommended by many researchers.5 To select articles for analysis, the author reviewed these journals’ tables of contents and abstracts for the keywords “user study” and “use study,” focusing on investigations that used empirical research methods. The author scanned the full text of candidate articles, including those whose topic was not clear from the abstract or title.

In addition, the author searched bibliographical utilities (e.g., Library and Information Science Abstracts and Library Literature and Information Science). She consulted and extracted keywords from titles and abstracts of articles selected from the four journals above. She used various terms, both as keywords and subject terms, in basic and Boolean searches: “archival user study,” “archival use study,” “archives AND user study,” “archives AND use study,” “user service AND archives,” “archives AND access,” “user AND archives,” “user AND repository,” “user AND reference AND archives,” “use AND archival source,” “use AND primary source,” and “user AND historical research.” The author also checked citations and bibliographies of relevant literature and consulted the syllabi of relevant university courses (on archival access, information-seeking behavior, user studies, and human information behavior). She also received literature recommendations from professors in the fields of archival science and library and information science. As a result, the review included a number of articles published in Archival Issues (previously Midwestern Archivist), conference proceedings, and one book.

The use of several search strategies revealed a variety of publications that broadened the scope of the examined literature and captured the unexpected and diverse characteristics of user studies.

Ultimately, this study encompassed publications about archival user studies that (1) investigated not only users who visit archival institutions in person, but also remote users utilizing phone, fax, mail, or email; (2) used empirical research methods; and (3) targeted archival users in the United States and Canada.

The appendix lists all of this study’s forty-five examined publications, which are summarized in table 1. The selected pieces were examined in chronological order of publication. After reviewing all selected publications, the author determined which aspects of user studies to identify in the literature and analyze, such as research topics, research methods, subjects of investigation, and the job positions of the researchers conducting the studies. The author applied content analysis to each article to identify and count the selected aspects of user studies. The resulting tallies were entered into a Microsoft Excel spreadsheet for analysis. To address the research questions of this study, certain aspects of interest were plotted over time, and others were counted.
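For readers who want to replicate this kind of tabulation, the sketch below illustrates how coded articles could be tallied and plotted by year with Python and pandas. It is a minimal illustration only: the file name and column names (coded_articles.csv, year, researcher_position, method) are hypothetical placeholders, not artifacts of this study, which used a Microsoft Excel spreadsheet.

```python
# Illustrative sketch only: tabulating coded aspects of user studies over time.
# Assumes a hypothetical file "coded_articles.csv" with one row per publication
# and columns such as "year", "researcher_position", and "method".
import pandas as pd
import matplotlib.pyplot as plt

articles = pd.read_csv("coded_articles.csv")

# Number of published user studies per year (cf. figure 1).
per_year = articles.groupby("year").size()

# Studies by researcher position per year (cf. figure 2).
by_position = (
    articles.groupby(["year", "researcher_position"])
    .size()
    .unstack(fill_value=0)
)

# Tally of research methods across the whole set (cf. table 2).
method_counts = articles["method"].value_counts()

print(by_position)
print(method_counts)

per_year.plot(kind="bar", title="Archival user studies per year")
plt.tight_layout()
plt.show()
```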

This study does have limitations. First, it focuses exclusively on archival user studies conducted in the United States and Canada because the author had access to the relevant literature and the North American research environment. Second, this study did not examine all US and Canadian journals with articles on user studies, nor did it examine unpublished user studies. Third, this study selected literature published in English only.

In this paper, user study means an archival investigative activity that collects, analyzes, and interprets data on users and use by empirical research methods. User studies should not be confused with usability studies, which investigate only the usability of archival access systems and websites, not users and use themselves, and are outside the scope of this study.

Findings

Number of Archival User Studies over Time

Archival user studies can be said not to have emerged until the 1980s. Before then, only one user study, a 1977 investigation of historians’ use of archival finding aids, had been conducted in the archival context. Many archival institutions had collected some basic data on users and use through reference services, but most of them had neither analyzed nor interpreted the collected data.6 Many archival institutions simply counted numbers of users and uses without analysis or interpretation, and archivists relied on anecdotal evidence and their own observations of and conversations with users.7 In the 1980s, the archival community began to analyze reference data on users and use of holdings, and many increasingly insisted that it was necessary to study users and use systematically, scientifically, and synthetically, beyond just analyzing and interpreting statistical data.

In 1986 Paul Conway presented a framework for studying users that reflected the few user studies of its time.8 His model comprises five successive stages that match increasingly complex objectives of archival programs and services with research methods for gathering user information. It identifies three kinds of information archives should gain from user studies to help evaluate their programs and services: quality, integrity, and value. The model’s five stages correspond to five research methods: collection of registration forms, orientation for users, follow-up, survey, and experiments. Though Conway’s model has influenced subsequent user studies, many do not fall neatly into a specific stage of his framework.

Though the archival community has claimed that conducting archival user studies is necessary, there has been no significant increase in the number of archival user studies conducted since their rise in the 1980s (see figure 1). Even in the 2000s, several archival researchers acknowledged the scarcity of archival user studies and encouraged the archival community to conduct more.9 The archival profession has conducted—or at least published—far fewer user studies than the library profession. Two reasons, both seemingly rooted in the priorities of the archival community, may explain this gap.

First, it is the author’s experience that the archival mission of preservation seems to influence the priorities of archival institutions. The first priority of libraries is to serve their users. In contrast, the most important function of archival institutions, traditionally, has been the preservation of rare or unique materials, not user service. Archivists have long focused on preservation.10 Bruce W. Dearstyne describes preservation as the “ultimate goal of archival work.”11 Helen Tibbo claims that archives’ “love affair” with their materials, rather than with their users, has focused their policies and practices more on preservation than use.12 However, as the research topics of user studies show, archives’ interest in user service appears to be increasing.

Second, archival institutions seem not to have sufficient resources, including staff time, to conduct user studies.13 Archival institutions seem to place more value on processing and description than on conducting user studies.14

Researchers Conducting Archival User Studies

From the late 1970s through the 1980s, practitioners (mostly archivists) led the emergence of user studies, mostly by conducting studies of their own institution’s users. In the mid-1990s, the number of academicians (faculty members and graduate students) conducting user studies rose and, since 1998, has usually exceeded the number of practitioners (see figure 2).

The increase in the number of academicians’ user studies is assumed to have four causes. The first is probably active collaboration between faculty members, their peers, and their graduate students at the national and international levels. Three researchers lead all the others in the field of archival user studies: the frequent collaborators Wendy M. Duff, Helen R. Tibbo, and Elizabeth Yakel, who are affiliated with different universities in the United States and Canada.

The second reason for academic leadership of user studies may be that professors’ interests in user studies influence their students’ interest in the subject, leading professors and students to collaborate on user studies.15 The influence of professors has also led several graduate students to write course papers or master’s theses on archival users and use.16

The third assumed reason for academic leadership of user studies is that professors and graduate students have more opportunities than do practitioners to learn and use the research methods and statistics that user studies often employ. Academicians often employ the research methods of user studies for their other studies as well.

Finally, it appears that faculty members outside the archival field may also be undertaking user studies. Kris Inwood, a professor of economics and history, and Richard Reid, a professor of history, collaborated on a user study in 1993.17

As mentioned at the outset, this study does not include unpublished user studies. Though practitioners seem to conduct user studies more frequently than they publish them, the total number is probably still small.18 Not only are practitioners not required to conduct user studies, they often lack the necessary resources, especially time.

Research Topics of Archival User Studies

The research topics of user studies have diversified over time. They have been affected by several factors such as changing research trends and development of technology. Research topics of archival user studies can be broadly divided into three categories: information needs, information seeking, and information use. However, not every user study falls neatly into one of the three categories. Some user studies fall into more than one category, and some fall outside them.

Information Needs

Information needs include the subjects that archival users are investigating, users’ inquiries via reference questions, users’ presentation language, research trends and interests in a specific field, and information sources needed.19

User studies have been identifying how shifting research trends and interests have influenced users’ information needs. For example, more historians have been demanding archival material about women’s history since the 1970s, which led Diane L. Beattie to study the information needs of researchers studying this topic in archival institutions.20

User studies on information needs have also been propelled by the development of technology, especially in the digital age.21 For instance, the Northwest Digital Archives consortium examined its core users’ needs as it developed user-based digital delivery systems.22 Anne J. Gilliland-Swetland investigated K–12 users’ needs with respect to digital primary source materials.23

Another such catalyst of user studies on information needs was a grant provided by the National Historical Publications and Records Commission, which funded the Historical Documents Study in 1992. The study investigated contemporary historians’ and genealogists’ need for sources and the extent to which researchers benefit by utilizing services provided to enhance their use.24

Information Seeking

Information seeking is the most popular topic of archival user studies. Specific topics include the archival material that users seek as well as their access tools, access problems, strategies for locating archival materials, interactions with archivists, preferred format of information sources and materials, and information-seeking activities.25 Most user studies on information seeking focus on users’ information-seeking behavior while few deal with user cognition.26

The development of technology significantly affects users’ information-seeking behaviors, just as changing research trends and interests greatly affect users’ information needs. For instance, the Primarily History project examined historians’ information-seeking behavior since the advent of the World Wide Web, online finding aids, digitized collections, and the increasingly pervasive networked scholarly environment.27

One frequent research topic of user studies is the type and format of information sources and archival materials researchers prefer and use in the information-seeking process.28 Results of those studies show that, for a given project, researchers use several types and formats of information sources and archival materials, new types of which are increasingly used as time passes.

Another aspect of information seeking is access to archival materials, especially access tools.29 User studies of access tools show that researchers use both traditional tools (e.g., indexes, abstracts, and paper finding aids) and electronic tools (e.g., online finding aids, OPACs, and bibliographic utility databases).30

The advent of online finding aids with Encoded Archival Description (EAD), in particular, inspired several user studies.31 Though a few user studies show that some users have difficulty learning and using online finding aids with EAD, these aids do enhance searchability and accessibility for both users and staff of archival institutions. As time passes, more archival institutions employ online finding aids over paper finding aids.

A few user studies have identified access problems in archival institutions, such as geographic limits, political or governmental restrictions, lack of finding aids, copyright issues, and problems with difficult-to-use formats.32

Information Use

User studies of information use deal mainly with use of archival materials, use patterns such as citation patterns, and who is using archival institutions, their holdings, and specific archival materials.33 Several user studies have examined why and how certain types and formats of sources and materials are used.34

Shifting research trends and interests have led to user studies identifying information use. For instance, as social history emerged in the late 1970s and early 1980s, it drew attention to researchers’ use of archival materials on this topic. Fredric Miller, for example, analyzed the use of archival materials in 214 articles on US social history and found that use patterns varied significantly.35

Since the 2000s, some user studies on user education have been published.36 This seems to derive from the fact that, since the late 1990s, academicians, whose profession requires them to publish, have conducted user studies more often than practitioners have, as shown in figure 2. Professors and doctoral students conducted the user studies on user education. Notably, Elizabeth Yakel highlighted the necessity of user education to establish “common ground” for both better reference service and the design of more effective archival access systems.37 However, existing user studies have left many questions about archival user education unaddressed: What types of archival user education would be useful? What content should archival institutions’ websites contain for user education?

The range of research topics in user studies has broadened since the 2000s, though many user studies still focus on information needs, information-seeking behaviors, and preferred information sources, archival materials, and access tools. Relatively current user studies examine new research topics, such as user education, the interfaces of archival access systems, interactions with online finding aids, hard copy information sources converted to digital formats, and archival intelligence.

Despite this slight diversification of user study topics, many unstudied topics remain. One noteworthy topic is user cognition and cognitive approaches. Most user studies reviewed in this study focus on users’ behaviors rather than cognition, a tendency perhaps rooted in the assumption that psychological states are difficult to observe, explain, and prove scientifically. However, studies of user cognition could improve user services and information systems by identifying users’ information needs, information use, and satisfaction. User satisfaction with archives’ reference services, information systems, and websites, and the factors that shape it, is another rarely studied topic.

User Groups as Subjects of Investigation

Subjects of user study investigations can be largely divided into two categories: (1) all users of one or more archival institutions during a specific period and (2) specific types of user groups.38 User studies of the first category aim to enhance institutional administration, information services, and information systems, and they focus on identity of users, information needs, information-seeking behaviors, the effect of orientation sessions, and use of collections and access tools in specific institutional settings.39 Some of those studies indicate that institutional culture plays a significant role in users’ information behavior.

Types of archival institutions where user studies were conducted include presidential libraries, national archives, university archives, image archives, medical archives, and multi-institutional archives.40 Even within specific types of archival settings, user studies have had different research topics, research methods, and subjects of investigation. For example, in the university archives setting, William J. Maher focused on research use and researchers, while Elizabeth Yakel and Laura L. Bost studied administrative use and users.41 Conway and Goggin each conducted user studies in a specific division of the Library of Congress (the Prints and Photographs Division and the Manuscript Division, respectively).42 However, user studies conducted in corporate archives and museum archives are rare.

Ongoing institutional user studies have in fact produced benefits. In 1986, Maher contended that ongoing user studies in an institution would be a “very solid basis for analysis of trends and comparisons of types of users and types of projects.”43 Studies by Kristen E. Martin and Margaret O’Neill Adams support this claim.44 Martin analyzed reference correspondence sent to a manuscripts repository in 1995 and 1999; the findings show how email and the Internet have changed reference correspondence. Adams found changes in users and use of electronic records by analyzing administrative records collected over many years from the National Archives and Records Administration’s (NARA’s) electronic records reference program.

Some user studies have examined specific types of users. These studies have investigated historians almost exclusively, especially academic historians (faculty members, graduate students, and undergraduate students). Only three of the reviewed studies examined non-historians: two on genealogists and one on K–12 users.

In the 1970s, historians wrote a considerable body of literature on the importance of effective archival finding aids for historical research; however, they did not rigorously analyze the strategies for employing archival finding aids.45 This prompted the first archival user study to investigate how historians used finding aids in their research processes.46

User studies investigating historians, especially their information-seeking behavior, increased remarkably during the 2000s. Some user studies examined historians’ changing information needs; how they located, accessed, and used information sources and archival materials; and the transformation of information sources and archival materials in the digital age.47

Yet another type of user study examines researchers of particular topics. One such user study interviewed authors of works about the No Gun Ri massacre.48

Pugh, Conway, and Dowler each claimed that information needs and information seeking differ between user groups.49 Conway in particular says that this assertion is supported by existing user studies. The user studies on historians, genealogists, and K–12 users examined in the current study indicate that each group has different patterns of information seeking and information use. Duff has studied historians and genealogists separately and together, and she concludes that the two groups need “different types of access tools to find the information and interpret it” in archives.50 However, even Duff’s recommendations may not have gone far enough: because historians, genealogists, and K–12 users are not the only users of archives,51 archival user studies need to investigate more diverse user groups (e.g., teachers and government officials).

Archivists should pay attention not only to current users but also to the appearance of new types of users. Among the most recent and potentially significant new user groups are web users. Despite this group’s increasing numbers, few of the user studies examined here have investigated it.52 To help archivists improve their institution’s website, information systems, digital collections, online services, and advocacy, archivists need to know who uses their institution’s website, why and how they use it, and what information web users access and use.

Research Methods of Archival User Studies

Several user studies have employed a single research method (e.g., survey, interview, citation analysis, and focus group); however, many more user studies have employed multiple research methods (e.g., interview and survey; survey and observation; survey and reference question/correspondence analysis; reference question/correspondence analysis and web analytics; and survey, interview, and observation).53 In particular, observation of users in archival institutions is usually used in concert with other methods.54

Most archival researchers have borrowed research methods from other fields, particularly social sciences and library and information science, to make their own methods more effective, undertake their studies more systematically, and give validity to their research design and results. For instance, the diary has been a key information source in other fields (e.g., biography, psychology, sociology, and information science). However, archival user studies have rarely employed diaries.55

As more user studies have been conducted since the 1980s, archival researchers have diversified their research methods (see table 2).

Citation analysis has been employed since the early 1980s but rarely since the mid-1990s. Indeed, there is some disagreement about the usefulness of citation studies in the archival field.56 Nevertheless, the method has been employed to investigate how researchers actually use the archival materials they gather, to identify past and current use patterns, and to anticipate future use patterns. Studies of the use of archival materials through citation analysis fit the fourth stage of Conway’s model because they measure the impact of archival collections through citations in researchers’ publications.57

Reference question/correspondence analysis was employed continuously by several user studies from the 1980s through 2010. This method analyzes reference questions and correspondence collected via letter, facsimile, or email to identify users’ needs. For example, separate studies by David Bearman, Kristen E. Martin, and Wendy M. Duff and Catherine A. Johnson analyzed reference questions to determine what types of questions users ask and what terms users employ to express their information needs.58 Those authors claimed that archivists should know users’ own language to better meet users’ information needs, improve archival access systems, and enhance reference services.
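As a rough illustration of how such correspondence analysis might surface users’ own terms, the Python sketch below counts the non-stopword words in a handful of invented reference questions. The sample questions and stopword list are placeholders for demonstration, not data from the studies cited above, and real analyses would apply a more careful coding scheme.

```python
# Illustrative sketch only: counting the terms users employ in reference
# questions, in the spirit of the correspondence analyses cited above.
# The sample questions and stopword list are invented for demonstration.
from collections import Counter
import re

questions = [
    "Do you have letters from women's suffrage organizations in Ohio?",
    "I am looking for photographs of the 1913 flood.",
    "Where can I find census records for my genealogy research?",
]

stopwords = {"do", "you", "have", "i", "am", "for", "the", "of", "in",
             "where", "can", "find", "my", "from", "a", "an", "to", "is",
             "looking"}

terms = Counter()
for question in questions:
    for token in re.findall(r"[a-z']+", question.lower()):
        if token not in stopwords:
            terms[token] += 1

# Most frequent user-supplied terms, a rough proxy for users' own language.
print(terms.most_common(10))
```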

Survey and interview are the dominant research methods. Mail surveys have been popular since 1977, when the first user study was conducted.59 Surprisingly, the user studies reviewed here have not yet used online surveys, despite their availability and popularity in library user studies. The interview method, conducted in person in most cases, has been employed since the 1990s.60

Archival user studies have rarely employed the content classification, focus group, and critical incident methods.61 The experimental method falls into the fifth stage in Conway’s model, and it too has rarely been employed in user studies, appearing in them only since the late 1990s.62 Web analytics is the newest research method in user studies and can be used to “measure user actions, to understand some aspects of user behavior, and to initiate a program to continuously improve online services” in archival environments.63 Nevertheless, this method, too, has gone largely unused in user studies.64
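To make the idea of web analytics more concrete, the sketch below performs a minimal log-based analysis in Python, counting unique visitor addresses and requests for finding aid pages. The log file name, the /findingaids/ path convention, and the log format are assumptions made for illustration; they do not describe any tool used in the studies cited, and an institution would more likely rely on a hosted analytics service or a dedicated log-analysis toolkit.

```python
# Illustrative sketch only: a simple web-analytics pass over an access log,
# assuming a hypothetical "access.log" in Apache common log format and a
# hypothetical "/findingaids/" URL convention for online finding aids.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<date>[^\]]+)\] "GET (?P<path>\S+)'
)

finding_aid_hits = Counter()
visitors = set()

with open("access.log") as log:
    for line in log:
        match = LOG_LINE.match(line)
        if not match:
            continue  # skip non-GET requests and malformed lines
        visitors.add(match.group("ip"))
        if match.group("path").startswith("/findingaids/"):
            finding_aid_hits[match.group("path")] += 1

print("Approximate unique visitors:", len(visitors))
print("Most-viewed finding aids:", finding_aid_hits.most_common(5))
```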

Researchers and archival institutions conducting user studies should be aware of and employ new, relevant tools. For instance, archival institutions can investigate their users through web-based tools (e.g., tools for tracking web visitors and web-based user feedback/comments). The Archival Metrics project developed, tested, and validated user survey toolkits and provides them for free on the project’s website.65 Because many archival institutions have limited resources,66 adapting existing tools to conduct their own user studies may be a good strategy.

Disciplines of Literature Cited in Archival User Studies

Authors writing papers on their user studies cited literature from several disciplines in addition to archival science. Most of them cited library and information science (LIS) literature for two reasons: (1) to introduce research topics of library user studies, show how library user studies have been conducted, and report on how library user studies have progressed; and (2) to justify their research methodologies or results by citing relevant examples from the LIS field.67 Other cited disciplines include aesthetics, history, psychology, communication, philosophy, computer science, and education.68

Archival Functions as Subjects of Investigation

Many user studies have focused on specific archival functions, such as description, reference, preservation, and appraisal. This suggests that the studies’ authors consider, test, or attempt to apply user study results to archival practice.

Archival reference is one of the traditional research topics in the archival field. Data on users and use collected through reference activities is considered a prerequisite for conducting user studies.69 Archival information systems with a reference module seem to facilitate collecting and storing user information related to reference services. Analysis of reference questions and correspondence, as described above in “Research Methods of Archival User Studies,” is a popular research method in archival user studies, and several user studies on archival reference have been conducted.70

The archival profession has also paid attention to applying user studies to preservation, particularly digitization of materials. An exploratory study investigated which formats of digitally preserved objects users preferred and indicated that archivists can benefit from understanding “how user needs and preferences may inform selection of preservation methods.”71

Some archival professionals have conducted user studies on problems with archival description revealed by users’ difficulties understanding and interpreting catalog records, such as card catalogs and Online Public Access Catalogs (OPACs).72 Even though the US MAchine Readable Cataloging–Archives and Manuscripts Control format (USMARC-AMC) was developed in 1982, no studies were conducted on user comprehension of archival description until 1993, when Robert P. Spindler and Richard Pearce-Moses examined users’ understanding of USMARC-AMC records. They concluded that archivists, librarians, and other information professionals must study user interaction with descriptive systems and adapt their systems and practices to serve user communities better in an integrated information environment.73

The archival community has debated the benefits of applying user study results to archival appraisal practice, though one exploratory study reported that a few state archives utilize the results of user studies in their appraisal practice.74

A few researchers have investigated the application of user study results to the selection of materials for digitization and preservation. Commonly held beliefs in the archival community implicitly acknowledge the value of user and use information for preservation and especially reference services. For example, a recent study showed that analysis of users’ reference inquiries affects the development of a “user-driven approach to selection for digitization.”75 The applicability of user study results to archival appraisal practice, however, remains the subject of debate in the archival community. In contrast, the library community comprehensively applies user study results to help develop and manage collections, improve information systems and reference services, and enhance advocacy.

Toward a New Framework for Archival User Studies

Currently, the only archival user study framework is Conway’s, which debuted in 1986. Conway describes frameworks as “simplifications of reality–ways of reducing complexities to a set of meaningful, manageable ideas.”76 By extension, frameworks should also reflect changes in reality. Conway intended this framework “to structure a comprehensive program of user studies” and to give a direction for further archival user studies.77 However, this framework does not seem fully applicable to current user studies, nor does it seem likely to apply in the future. For example, when Conway created his framework, he could not have predicted the appearance of new user groups such as web users.

As this paper has previously indicated, most researchers conducting user studies have not rigorously followed Conway’s framework. One reason may be a shift in the goals of user studies. Conway developed his framework to help archivists study users of their own institutions in order to assess archival programs and services.78 To achieve this goal, Conway’s framework presents three objectives for archival programs and five stages of research methods, all of which progress in sophistication. Since the publication of Conway’s framework, however, researchers—mostly academicians—have often conducted user studies to investigate topics of their personal research interests, such as how historians seek information in archival institutions. In the user studies analyzed for this study, many researchers did not strictly follow the first three stages of Conway’s framework. Several academicians did not even investigate users in one particular archival institution or go through the first three stages when studying a particular type of user group.

Given the shift in the goals of user studies, the emergence of new user groups such as web users, and the fact that many researchers conducting user studies have not been following Conway’s framework, it is time to consider the development of a new framework to facilitate archival user studies. This new framework should be developed by investigating user studies, published and unpublished, and involving researchers who have conducted them.

Though developing a new framework is outside the scope of this study, its findings could inform such a development. A new framework could reflect the less structured way the user studies examined in this study have been conducted. Its structure should be simple so that as many researchers as possible—academicians and practitioners at any level of experience with user studies—can easily use it.

This study indicates that more researchers, especially academicians, conduct user studies with the goal of investigating specific research topics rather than achieving archival program objectives. The framework, then, should focus on topics likely to be of interest to researchers, both academicians and practitioners. While Conway’s framework suggests one research method for each of the five stages, this study found that researchers have studied particular topics using multiple research methods. The new framework could consist of two axes, research topics and research methods, and present a menu of applicable research methods for particular topics. A researcher could select just one research method or mix and match multiple methods appropriate to the specific research topic. The new framework, unlike Conway’s framework, might not incorporate stages, in either research topics or methods, so as to reflect the less staged research methods of recent user studies.

Conclusions

Archival institutions and libraries both have users, but the two communities seem to have different attitudes toward user studies. Archival institutions have conducted many fewer user studies than libraries, and where the library community seems to have embraced user studies, the archival community remains ambivalent. The archival community stands to gain much from user studies, but it must first understand why archival institutions often ignore or underutilize this potentially powerful tool and what can be done about it.

The archival community seems to pay less attention to conducting user studies than the library community for two reasons: the traditional archival priority of preservation and limited institutional resources, particularly time. However, developing technology could help turn archival institutions’ attention from preservation to access and use. Preservation of physical materials requires much of an archivist’s attention and time, but born-digital and digitized archival materials would somewhat relieve archival institutions of the burden of preservation. Providing digital materials allows institutions to better protect valuable and often unique physical materials while at the same time making them more accessible and available. This may allow archival institutions to spend more attention and resources on conducting user studies. Also, the Internet enables users to more conveniently access digital archives and archival institutions’ websites, so archival institutions may need to give even more attention to access and use. This would require archival institutions to better understand their users through user studies. Developing technology has had a significant impact on the archival environment and challenges both archivists and users to adapt. Users should learn new access tools such as online finding aids and OPACs, and archivists should learn about these access tools, new types of users, and users’ new needs.

When possible, archival institutions should design and conduct their own scientifically sound, intra-institutional user studies. Though archivists frequently interact with users, there is a difference between impressionistic observations and systematic user studies, which could specifically and empirically address an institution’s needs at a given time. The priority that institutions give to user studies may affect their allocation of resources, such as staff and budget for training in research methods and statistics, as well as the number, depth, and breadth of user studies.

Archival professionals should pay more attention to web-based archival services, web accessibility, and their effects on ever-changing user information needs, information-seeking, information use, and satisfaction. This study found only a small number of user studies on technological advances and their effect on archival users. More archival institutions have employed web-based tools and information systems with a reference module, all of which seems to facilitate user studies. The use of web-based tools, in particular, to conduct user studies is likely to increase as these tools advance and the number of remote users, remote reference services, electronic records, and digital collections increases. As archival information systems and access tools have become more advanced, users have been able to access more archival materials more conveniently. Networking capabilities, and especially archives’ websites, could provide the next great advance in archival user services, including digital collections and reference correspondence through email. This level of interaction is perhaps even more important for archives than for libraries because the often unique nature of archival materials makes them harder for users to find than commonly held library materials. Digitized archival collections and web-based services could obviate the need for users to make repeat trips to the archives to examine a particular holding. If archives ignore the public’s ever-increasing expectation of being able to do things online, they risk losing users. To keep pace with users’ changing expectations of archival information systems and user services, archival institutions have conducted and utilized a few user studies to understand users’ needs and level of satisfaction. However, user studies and the way archival institutions utilize them could be even more beneficial.

Researchers and archival institutions conducting user studies should be aware of and employ new tools that can study users and use with greater scientific rigor, precision, validity, and reliability. For instance, archival institutions can investigate their users through web-based tools, though few have done so. As web technology and other information technology develop, researchers and archival institutions can develop and apply new research methods for user studies.

Not every archival institution has been able to conduct its own user studies; the most likely common barrier has been lack of resources. Such institutions should consult results of user studies conducted by other institutions of the same type (e.g., university archives, government archives). To enable this, archival institutions that have conducted user studies need to share their results. Although archivists do conduct user studies in their institutions, they generally are not required to publish the results, unlike academics. If publication of user study results in peer-reviewed journals or industry magazines is impossible or burdensome, archivists could publish their user study results on their institutions’ website. This would make the information available to other institutions, publicize the institution’s performance, and enhance advocacy.

The archival community needs to apply results of archival user studies more actively. This study indicates two basic conditions that would facilitate archival institutions’ application of user studies to archival practices: the ability to conduct user studies and the availability of user study results. The archival community can consult the library community, which has successfully applied results of user studies to its practices.

In conclusion, archival professionals need to pay more attention to user studies. Since the 1980s, far fewer user studies have been conducted in the archival field than in the library field, and this is not because archival professionals already know archival users well enough. More user studies should be continuously conducted to keep up with changing users, use, and factors influencing archival environments, archivists, and users. The continued development and implementation of high quality user studies will benefit archival institutions and their users.

References and Notes

  1. See for example Clark A. Elliott, “Citation Patterns and Documentation for the History of Science: Some Methodological Considerations,” American Archivist 44 (Spring 1981): 131–42; Elsie T. Freeman, “In the Eye of the Beholder: Archives Administration from the User’s Point of View,” American Archivist 47 (Spring 1984): 112; Lawrence Dowler, “The Role of Use in Defining Archival Practice and Principles: A Research Agenda for the Availability and Use of Records,” American Archivist 51, no. 1/2 (Winter/Spring 1988): 74–86; Jacqueline Goggin, “The Indirect Approach: A Study of Scholarly Users of Black and Women’s Organizational Records in the Library of Congress Manuscript Division,” Midwestern Archivist 11, no. 1 (1986): 57–67; and Fredric Miller, “Use, Appraisal, and Research: A Case Study of Social History,” American Archivist 49 (Fall 1986): 371–92.
  2. Lisa R. Coats, “Users of EAD Finding Aids: Who Are They and Are They Satisfied?” Journal of Archival Organization 2, no. 3 (2004): 25–39. A “finding aid” is “1. A tool that facilitates discovery of information within a collection of records. - 2. A description of records that gives the repository physical and intellectual control over the materials and that assists users to gain access to and understand the materials,” in Richard Pearce-Moses, A Glossary of Archival and Records Terminology (Chicago: Society of American Archivists, 2005), s.v. “finding aid,” also available online at http://www2.archivists.org/glossary/terms/f/finding-aid.
  3. Carolyn Harris, “Archives Users in the Digital Era: A Review of Current Research Trends,” Dalhousie Journal of Interdisciplinary Management 1 (2005): 2–6.
  4. Anneli Sundqvist, “The Use of Records—A Literature Review,” Archives & Social Studies 1, no. 1 (2007): 623–53.
  5. Archival Education and Research Institutes, “Discussion: Archival Journal Ranking,” 2010, accessed May 23, 2013, http://aeri2010.wetpaint.com/thread/3891876/Archival+Journal+Ranking.
  6. Lawrence Dowler, “The Role of Use in Defining Archival Practice and Principles: A Research Agenda for the Availability and Use of Records,” American Archivist 51 (Winter/Spring 1988): 79; Richard H. Lytle, “Intellectual Access to Archives: I. Provenance and Content Indexing Methods of Subject Retrieval,” American Archivist 43, no. 1 (Winter 1980): 66.
  7. Clark A. Elliott, “Citation Patterns and Documentation for the History of Science: Some Methodological Considerations,” American Archivist 44 (Spring 1981): 131–42; Dowler, “The Role of Use in Defining Archival Practice and Principles”; Elsie T. Freeman, “In the Eye of the Beholder: Archives Administration from the User’s Point of View,” American Archivist 47 (Spring 1984): 112; Jacqueline Goggin, “The Indirect Approach: A Study of Scholarly Users of Black and Women’s Organizational Records in the Library of Congress Manuscript Division,” Midwestern Archivist 11, no. 1 (1986): 57–67; Lytle, “Intellectual Access to Archives”; Fredric Miller, “Use, Appraisal, and Research: A Case Study of Social History,” American Archivist 49 (Fall 1986): 371–92.
  8. See Paul Conway, “Facts and Frameworks: An Approach to Studying the Users of Archives,” American Archivist 49 (Fall 1986): 393–407.
  9. See for example Mary Jo Pugh, Providing Reference Services for Archives & Manuscripts (Chicago: Society of American Archivists, 2005); and Helen R. Tibbo, Learning to Love Our Users: A Challenge to the Profession and a Model for Practice, accessed October 23, 2013, www.ils.unc.edu/tibbo/MAC%20Spring%202002.pdf.
  10. James M. O’Toole and Richard J. Cox, Understanding Archives & Manuscripts (Chicago: Society of American Archivists, 2006).
  11. Bruce W. Dearstyne, “What is the Use of Archives? A Challenge for the Profession,” American Archivist 50 (Winter 1987): 77.
  12. Helen R. Tibbo, Learning to Love Our Users: A Challenge to the Profession and a Model for Practice, 10, accessed on October 23, 2013, www.ils.unc.edu/tibbo/MAC%20Spring%202002.pdf.
  13. See for example Tibbo, Learning to Love Our Users; Pugh, Providing Reference Services for Archives & Manuscripts; and Hea Lim Rhee, “Exploring the Relationship between Archival Appraisal Practice and User Studies: U.S. State Archives and Records Management Programs” (PhD diss., University of Pittsburgh, 2011).
  14. Tibbo, Learning to Love Our Users.
  15. See for example Morgan G. Daniels and Elizabeth Yakel, “Seek and You May Find: Successful Search in Online Finding Aid Systems,” American Archivist 73 (2010): 535–68; Elizabeth Yakel and Deborah A. Torres, “AI: Archival Intelligence and User Expertise,” American Archivist 66, no. 1 (Spring/Summer 2003): 51–78; and Elizabeth Yakel and Deborah A. Torres, “Genealogists as a ‘Community of Records,’” American Archivist 70 (Spring/Summer 2007): 93–113.
  16. For example, as master’s theses, there are Megan E. Phillips, “Usage Patterns for Holdings Information Sources at the University of North Carolina at Chapel Hill Manuscripts Department” (master’s thesis, University of North Carolina at Chapel Hill, 1997); and Shayera D. Tangri, “Evaluating Changes in the Methods by Which Users of the University of North Carolina at Chapel Hill Manuscripts Department Learn of the Holdings of the Department” (master’s thesis, University of North Carolina at Chapel Hill, 2000). As published articles, there are Martin, “Analysis of Remote Reference Correspondence at a Large Academic Manuscripts Collection”; and Collins, “Providing Subject Access to Images.” These two articles were originally written for courses taught by Helen R. Tibbo in the School of Information and Library Science at the University of North Carolina at Chapel Hill.
  17. See Kris Inwood and Richard Reid, “The Challenge to Archival Practice of Quantification in Canadian History,” Archivaria 36 (Autumn 1993): 232–38.
  18. See Tibbo, Learning to Love Our Users; and Rhee, “Exploring the Relationship between Archival Appraisal Practice and User Studies.”
  19. See for example David Bearman, “User Presentation Language in Archives,” Archives and Museum Informatics 3 (Winter 1989–90): 3–7; Dianne L. Beattie, “An Archival User Study: Researchers in the Field of Women’s History,” Archivaria 29 (Winter 1989–90): 33–50; Wendy M. Duff and Catherine A. Johnson, “A Virtual Expression of Need: An Analysis of E-mail Reference Questions,” American Archivist 64 (Spring/Summer 2001): 43–60; and Anne J. Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials,” American Archivist 61 (Spring 1998): 136–57.
  20. Dianne L. Beattie, “An Archival User Study: Researchers in the Field of Women’s History,” Archivaria 29 (Winter 1989–90): 33–50.
  21. See for example Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials”; and Helen R. Tibbo, “Primarily History in America: How U.S. Historians Search for Primary Materials at the Dawn of the Digital Age,” American Archivist 66 (Spring/Summer 2003): 9–50.
  22. Jodi Allison-Bunnell, Elizabeth Yakel, and Janet Hauck, “Researchers at Work: Assessing Needs for Content and Presentation of Archival Materials,” Journal of Archival Organization 9, no. 2 (2011): 67–104.
  23. Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials.”
  24. Ann D. Gordon, Using the Nation’s Documentary Heritage: The Report of the Historical Documents Study (Washington, DC: National Historical Publications and Records Commission in cooperation with the American Council of Learned Societies, 1992).
  25. See for example Wendy M. Duff and Catherine A. Johnson, “Accidentally Found on Purpose: Information-Seeking Behavior of Historians in Archives,” Library Quarterly 72, no. 4 (October 2002): 472–96; Helen R. Tibbo, “Primary History: Historians and the Search for Primary Source Materials” (proceedings presented at the 2002 ACM IEEE Joint Conference on Digital Libraries, July 14–18, 2002), accessed February 10, 2013, http://portal.acm.org/citation.cfm?doid=544220.544222; Tibbo, “Primarily History in America”; and Xiaomu Zhou, “Student Archival Research Activity: An Exploratory Study,” American Archivist 71 (Fall / Winter 2008): 476–98.
  26. User studies of user behavior, particularly information-seeking behavior, include Wendy M. Duff and Catherine A. Johnson, “Accidentally Found on Purpose: Information-Seeking Behavior of Historians in Archives,” Library Quarterly 72, no. 4 (October 2002): 472–96; Wendy M. Duff and Catherine A. Johnson, “Where Is the List with All the Names? Information-Seeking Behavior of Genealogists,” American Archivist 66 (Spring/Summer 2003): 79–95; Kristina L. Southwell, “How Researchers Learn of Manuscript Resources at the Western History Collections,” Archival Issues 26, no. 2 (2002): 91–109; Tibbo, “Primary History”; and Helen R. Tibbo, “Primarily History in America”. User studies dealing with user cognition include Barbara C. Orbach, “The View from the Researcher’s Desk: Historians’ Perceptions of Research and Repositories,” American Archivist 54 (Winter 1991): 28–43; Elizabeth Yakel, “Listening to Users,” Archival Issues 26, no. 2 (2002): 111–23; and Yakel and Torres, “AI.”
  27. Tibbo, “Primarily History in America,” 14.
  28. See for example Beattie, “An Archival User Study”; Wendy Duff, Barbara Craig, and Joan Cherry, “Finding and Using Archival Resources: A Cross-Canada Survey of Historians Studying Canadian History,” Archivaria 58 (Fall 2004): 51–80; and Wendy Duff, Barbara Craig, and Joan Cherry, “Historians’ Use of Archival Sources: Promises and Pitfalls of the Digital Age,” Public Historian 26, no. 2 (2004): 7–22.
  29. See for example Michael E. Stevens, “The Historians and Archival Finding Aids,” Georgia Archive (Winter 1977): 64–74; Paul Conway, “Research in Presidential Libraries: A User Survey,” Midwestern Archivist 11, no. 1 (1986): 35-56; Duff and Johnson, “Accidentally Found on Purpose”; and Tibbo, “Primarily History in America.”
  30. See for example Duff and Johnson, “Accidentally Found on Purpose”; Tibbo, “Primarily History in America.”
  31. See for example Daniels and Yakel, “Seek and You May Find”; Christopher J. Prom, “User Interactions with Electronic Finding Aids in a Controlled Setting,” American Archivist 67, no. 2 (Fall/Winter 2004): 234–68; Wendy Scheir, “First Entry: Report on a Qualitative Exploratory Study of Novice User Experience with Online Finding Aids,” Journal of Archival Organization 3, no. 4 (2006): 49–85.
  32. See for example Duff, Craig, and Cherry, “Finding and Using Archival Resources”; and Duff, Craig, and Cherry, “Historians’ Use of Archival Sources.”
  33. See for example Margaret O’Neill Adams, “Analyzing Archives and Finding Facts: Use and Users of Digital Data Records,” Archival Science 7, no. 1 (2007): 21–36; Duff, Craig, and Cherry, “Historians’ Use of Archival Sources”; Elliott, “Citation Patterns and Documentation for the History of Science”; Goggin, “The Indirect Approach”; and Elizabeth Yakel and Laura L. Bost, “Understanding Administrative Use and Users in University Archives,” American Archivist 57 (1994): 596–615.
  34. See for example Inwood and Reid, “The Challenge to Archival Practice of Quantification in Canadian History”; and Adams, “Analyzing Archives and Finding Facts.”
  35. Miller, “Use, Appraisal, and Research,” 371.
  36. See for example Elizabeth Yakel, “Listening to Users,” Archival Issues 26, no.2 (2002): 111–23; Yakel and Torres, “AI”; Helen R. Tibbo, “How Historians Locate Primary Resource Materials: Educating and Serving the Next Generation of Scholars” (paper presented at the ACRL Eleventh National Conference Charlotte, North Carolina, 2003), www.ala.org/ala/mgrps/divs/acrl/events/pdf/tibbo.pdf; Zhou, “Student Archival Research Activity”; Magia G. Krause, “Undergraduates in the Archives: Using an Assessment Rubric to Measure Learning,” American Archivist 73 (Fall/Winter 2010): 507–34.
  37. Elizabeth Yakel, “Listening to Users,” Archival Issues 26, no. 2 (2002): 111–23. According to Herbert Clark, common ground is “the sum of . . . mutual, common, or joint knowledge, beliefs, and assumptions.” Herbert Clark, Using Language (New York: Cambridge University Press, 1996), 93.
  38. User studies investigating a single archival institution include Karen Collins, “Providing Subject Access to Images: A Study of User Queries,” American Archivist 61 (Spring 1998): 36–55; Wendy M. Duff and Joan M. Cherry, “Archival Orientation for Undergraduate Students: An Exploratory Study of Impact,” American Archivist 71 (Fall/Winter 2008): 499–529; and Kristina L. Southwell, “How Researchers Learn of Manuscript Resources at the Western History Collections,” Archival Issues 26, no. 2 (2002): 91–109. User studies investigating multiple institutions include Bearman, “User Presentation Language in Archives”; Paul Conway, “Research in Presidential Libraries: A User Survey,” Midwestern Archivist 11, no. 1 (1986): 35–56; and Duff and Johnson, “A Virtual Expression of Need.” User studies investigating specific types of user groups include Duff and Johnson, “Where Is the List with All the Names?”; Duff, Craig, and Cherry, “Historians’ Use of Archival Sources”; and Tibbo, “Primarily History in America.”
  39. See for example James Bantin and Leah Agne, “Digitizing for Value: A User-Based Strategy for University Archives,” Journal of Archival Organization 8, no. 3–4 (2010): 244–50; Collins, “Providing Subject Access to Images”; Conway, “Research in Presidential Libraries”; Duff and Cherry, “Archival Orientation for Undergraduate Students”; Southwell, “How Researchers Learn of Manuscript Resources at the Western History Collections”; and Zhou, “Student Archival Research Activity.”
  40. For a study on presidential libraries, see Conway, “Research in Presidential Libraries.” For a study on national archives, see Paul Conway, Partners in Research: Improving Access to the Nation’s Archive: User Studies of the National Archives and Records Administration (Pittsburgh: Archives and Museum Informatics, 1994). For studies on university archives, see Bantin and Agne, “Digitizing for Value”; Maher, “The Use of User Studies”; Southwell, “How Researchers Learn of Manuscript Resources at the Western History Collections”; and Yakel and Bost, “Understanding Administrative Use and Users in University Archives.” For studies on image archives, see Collins, “Providing Subject Access to Images”; and Paul Conway, “Modes of Seeing: Digitized Photographic Archives and the Experienced User,” American Archivist 73 (Fall/Winter 2010): 425–62. For studies on medical archives, see McCall and Mix, “Scholarly Returns.” For studies on multi-institutional archives, see Bearman, “User Presentation Language in Archives”; Conway, “Research in Presidential Libraries”; and Duff and Johnson, “A Virtual Expression of Need.”
  41. Maher, “The Use of User Studies”; Yakel and Bost, “Understanding Administrative Use and Users in University Archives.”
  42. Paul Conway, “Modes of Seeing: Digitized Photographic Archives and the Experienced User,” American Archivist 73 (Fall/Winter 2010): 425–62; Goggin, “The Indirect Approach.”
  43. Maher, “The Use of User Studies,” 19.
  44. Adams, “Analyzing Archives and Finding Facts”; Martin, “Analysis of Remote Reference Correspondence at a Large Academic Manuscripts Collection.”
  45. Stevens, “Historians and Archival Finding Aids,” 64–65.
  46. See Stevens, “Historians and Archival Finding Aids.”
  47. For example, Duff, Craig, and Cherry, “Historians’ Use of Archival Sources”; Tibbo, “Primarily History in America”; and Inwood and Reid, “The Challenge to Archival Practice of Quantification in Canadian History.”
  48. Donghee Sinn, “Room for Archives? Use of Archival Materials in No Gun Ri Research,” Archival Science 10 (2010): 117–40.
  49. Conway, “Research in Presidential Libraries”; Dowler, “The Role of Use in Defining Archival Practice and Principles”; Mary Jo Pugh, “The Illusion of Omniscience: Subject Access and the Reference Archivist,” American Archivist 45 (Winter 1982): 33–44.
  50. Wendy M. Duff, “Working as Independently as Possible: Historians and Genealogists Meet the Archival Finding Aid,” in The Power and the Passion of Archives: A Festschrift in Honour of Kent Haworth, edited by Reuben Ware, Marion Beyea, and Cheryl Avery (Ottawa: Association of Canadian Archivists, 2005), 201. Her previous two studies are Duff and Johnson, “Accidentally Found on Purpose”; and Duff and Johnson, “Where Is the List with All the Names?”
  51. Much of the archival literature describes archival institutions as having user groups other than historians, genealogists, and K–12 users. See for example Maher, “The Use of User Studies”; Mary Jo Pugh, Providing Reference Services for Archives & Manuscripts (Chicago: Society of American Archivists, 2005); and Bruce Washburn, Ellen Eckert, and Merrilee Proffitt, Social Media and Archives: A Survey of Archive Users (Dublin, Ohio: OCLC Research, 2013).
  52. See for example Daniels and Yakel, “Seek and You May Find”; and Christopher J. Prom, “Using Web Analytics to Improve Online Access to Archival Resources,” American Archivist 74 (2011): 158–84.
  53. See for example Bantin and Agne, “Digitizing for Value”; Beattie, “An Archival User Study”; Conway, Partners in Research; Maher, “The Use of User Studies”; and Yakel and Bost, “Understanding Administrative Use and Users in University Archives.”
  54. See for example Conway, Partners in Research; Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials”; Yakel and Torres, “Genealogists as a ‘Community of Records’”; and Zhou, “Student Archival Research Activity.”
  55. Publications on archival user studies employing the diary method include Elaine G. Toms and Wendy Duff, “I Spent 1½ Hours Sifting Through One Large Box . . . Diaries as Information Behavior of the Archives User: Lessons Learned,” Journal of the American Society for Information Science and Technology 53, no. 14 (December 2002): 1232–38; and Catherine A. Johnson and Wendy M. Duff, “Chatting up the Archivist: Social Capital and the Archival Researcher,” American Archivist 68, no. 1 (Spring/Summer 2005): 113–29.
  56. See Elliott, “Citation Patterns and Documentation for the History of Science”; Goggin, “The Indirect Approach”; Nancy McCall and Lisa A. Mix, “Scholarly Returns: Patterns of Research in a Medical Archives,” Archivaria 41 (Spring 1996): 158–87; and Miller, “Use, Appraisal, and Research.”
  57. Conway, “Facts and Frameworks”; Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials,” 146.
  58. Bearman, “User Presentation Language in Archives”; Duff and Johnson, “A Virtual Expression of Need”; Martin, “Analysis of Remote Reference Correspondence at a Large Academic Manuscripts Collection.”
  59. See Beattie, “An Archival User Study”; Collins, “Providing Subject Access to Images”; Conway, “Research in Presidential Libraries”; Duff and Cherry, “Archival Orientation for Undergraduate Students”; Duff, Craig, and Cherry, “Historians’ Use of Archival Sources”; Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials”; Robert P. Spindler and Richard Pearce-Moses, “Does AMC Mean ‘Archives Made Confusing’? Patron Understanding of USMARC AMC Catalog Records,” American Archivist 56 (Spring 1993): 330–41; Southwell, “How Researchers Learn of Manuscript Resources at the Western History Collections”; Stevens, “The Historians and Archival Finding Aids”; Tibbo, “Primary History”; Tibbo, “Primarily History in America”; and Helen R. Tibbo, “How Historians Locate Primary Resource Materials: Educating and Serving the Next Generation of Scholars” (paper presented at the ACRL Eleventh National Conference, Charlotte, North Carolina, 2003), https://www.ala.org/ala/acrl/acrlevents/tibbo.PDF.
  60. See Conway, Partners in Research; Johnson and Duff, “Chatting up the Archivist”; Orbach, “The View from the Researcher’s Desk”; Yakel, “Listening to Users”; Yakel and Torres, “Genealogists as a ‘Community of Records’”; and Zhou, “Student Archival Research Activity.”
  61. User studies employing content classification include Inwood and Reid, “The Challenge to Archival Practice of Quantification in Canadian History.” User studies employing focus groups include Duff and Stoyanova, “Transforming the Crazy Quilt.” User studies employing critical incidents include Duff and Johnson, “Accidentally Found on Purpose”; Duff and Johnson, “Where Is the List with All the Names?”; and Yakel, “Listening to Users.”
  62. User studies employing an experimental method include Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials”; Margaret L. Hedstrom et al., “‘The Old Version Flickers More’: Digital Preservation from the User’s Perspective,” American Archivist 69 (Spring/Summer 2006): 159–87; and Prom, “User Interactions with Electronic Finding Aids in a Controlled Setting.”
  63. Christopher J. Prom, “Using Web Analytics to Improve Online Access to Archival Resources,” American Archivist 74 (2011): 161.
  64. See for example Bantin and Agne, “Digitizing for Value.”
  65. The survey toolkits are available at http://archivalmetrics.cms.si.umich.edu/node/10.
  66. See for example Dearstyne, “What Is the Use of Archives?”; and Rhee, “Exploring the Relationship between Archival Appraisal Practice and User Studies.”
  67. See for example Collins, “Providing Subject Access to Images”; Conway, Partners in Research; Elliott, “Citation Patterns and Documentation for the History of Science”; Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials”; Goggin, “The Indirect Approach”; Hedstrom et al., “‘The Old Version Flickers More’”; Spindler and Pearce-Moses, “Does AMC Mean ‘Archives Made Confusing’?”; and Yakel and Torres, “AI.”
  68. See for example Conway, “Modes of Seeing”; Gilliland-Swetland, “An Exploration of K–12 User Needs for Digital Primary Source Materials”; Hedstrom et al., “‘The Old Version Flickers More’”; Inwood and Reid, “The Challenge to Archival Practice of Quantification in Canadian History”; McCall and Mix, “Scholarly Returns”; and Yakel and Torres, “AI.”
  69. When an archival institution plans to conduct a user study, it is assumed that the institution already has reference data on its users and their use. See for example Conway, “Facts and Frameworks”; and Goggin, “The Indirect Approach.”
  70. See for example Adams, “Analyzing Archives and Finding Facts”; Duff and Johnson, “A Virtual Expression of Need”; Kristen E. Martin, “Analysis of Remote Reference Correspondence at a Large Academic Manuscripts Collection,” American Archivist 64 (Spring/Summer 2001): 17–42; and Helen R. Tibbo, “Interviewing Techniques for Remote Reference: Electronic Versus Traditional Environments,” American Archivist 58 (Summer 1995): 294–310.
  71. Margaret L. Hedstrom et al., “‘The Old Version Flickers More,’” 159.
  72. For example, Spindler and Pearce-Moses reported, “as archivists who provide reference services, we have often been confronted with inquiries that suggest patrons have misinterpreted a MARC AMC catalog record.” Spindler and Pearce-Moses, “Does AMC Mean ‘Archives Made Confusing’?,” 332. Yakel and Torres also pointed out users’ difficulties in understanding archival description in their article “AI: Archival Intelligence and User Expertise.”
  73. Spindler and Pearce-Moses, “Does AMC Mean ‘Archives Made Confusing’?,” 341.
  74. See Hea Lim Rhee, “Exploring the Relationship between Archival Appraisal Practice and User Studies.”
  75. Bantin and Agne, “Digitizing for Value,” 244.
  76. Conway, “Facts and Frameworks,” 394.
  77. Ibid., 393.
  78. “Figure 1 [the structure of Conway’s framework] depicts what archivists could learn from a comprehensive program of user studies and how they could build such a program” in Conway, “Facts and Frameworks,” 398.

Appendix. Publications on Archival User Studies Examined in This Study

Adams, Margaret O’Neill. 2007. “Analyzing Archives and Finding Facts: Use and Users of Digital Data Records.” Archival Science 7, no. 1: 21–36.

Allison-Bunnell, Jodi, Elizabeth Yakel, and Janet Hauck. 2011. “Researchers at Work: Assessing Needs for Content and Presentation of Archival Materials.” Journal of Archival Organization 9, no. 2: 67–104.

Bantin, James, and Leah Agne. 2010. “Digitizing for Value: A User-Based Strategy for University Archives.” Journal of Archival Organization 8, no. 3–4: 244–50.

Bearman, David. 1989–90. “User Presentation Language in Archives.” Archives and Museum Informatics 3 (Winter): 3–7.

Beattie, Dianne L. 1989–90. “An Archival User Study: Researchers in the Field of Women’s History.” Archivaria 29 (Winter): 33–50.

Collins, Karen. 1998. “Providing Subject Access to Images: A Study of User Queries.” American Archivist 61 (Spring): 36–55.

Conway, Paul. 1986. “Research in Presidential Libraries: A User Survey.” Midwestern Archivist 11, no. 1: 35–56.

———. 1994. Partners in Research: Improving Access to the Nation’s Archive: User Studies of the National Archives and Records Administration. Pittsburgh: Archives and Museum Informatics.

———. 2010. “Modes of Seeing: Digitized Photographic Archives and the Experienced User.” American Archivist 73: 425–62.

Daniels, Morgan G., and Elizabeth Yakel. 2010. “Seek and You May Find: Successful Search in Online Finding Aid Systems.” American Archivist 73: 535–68.

Duff, Wendy M., and Joan M. Cherry. 2008. “Archival Orientation for Undergraduate Students: An Exploratory Study of Impact.” American Archivist 71 (Fall/Winter): 499–529.

Duff, Wendy M., and Catherine A. Johnson. 2001. “A Virtual Expression of Need: An Analysis of E-mail Reference Questions.” American Archivist 64, no. 1 (Spring/Summer): 43–60.

———. 2002. “Accidentally Found on Purpose: Information-Seeking Behavior of Historians in Archives.” Library Quarterly 72, no. 4 (October): 472–96.

———. 2003. “Where Is the List with All the Names? Information-Seeking Behavior of Genealogists.” American Archivist 66 (Spring/Summer): 79–95.

Duff, Wendy M., and Penka Stoyanova. 1998. “Transforming the Crazy Quilt: Archival Displays from a Users’ Point of View.” Archivaria 45: 44–79.

Duff, Wendy, Barbara Craig, and Joan Cherry. 2004. “Finding and Using Archival Resources: A Cross-Canada Survey of Historians Studying Canadian History.” Archivaria 58 (Fall): 51–80.

———. 2004. “Historians’ Use of Archival Sources: Promises and Pitfalls of the Digital Age.” Public Historian 26, no. 2: 7–22.

Elliott, Clark A. 1981. “Citation Patterns and Documentation for the History of Science: Some Methodological Considerations.” American Archivist 44 (Spring): 131–42.

Gilliland-Swetland, Anne J. 1998. “An Exploration of K–12 User Needs for Digital Primary Source Materials.” American Archivist 61 (Spring): 136–57.

Goggin, Jacqueline. 1986. “The Indirect Approach: A Study of Scholarly Users of Black and Women’s Organizational Records in the Library of Congress Manuscript Division.” Midwestern Archivist 11, no. 1: 57–67.

Gordon, Ann D. 1992. Using the Nation’s Documentary Heritage: The Report of the Historical Documents Study. Washington, DC: National Historical Publications and Records Commission in cooperation with the American Council of Learned Societies.

Hedstrom, Margaret L., Christopher A. Lee, Judith S. Olson, and Clifford A. Lampe. 2006. “‘The Old Version Flickers More’: Digital Preservation from the User’s Perspective.” American Archivist 69 (Spring/Summer): 159–87.

Inwood, Kris, and Richard Reid. 1993. “The Challenge to Archival Practice of Quantification in Canadian History.” Archivaria 36 (Autumn): 232–38.

Johnson, Catherine A., and Wendy M. Duff. 2005. “Chatting up the Archivist: Social Capital and the Archival Researcher.” American Archivist 68: 113–29.

Krause, Magia G. 2010. “Undergraduates in the Archives: Using an Assessment Rubric to Measure Learning.” American Archivist 73: 507–34.

Maher, William J. 1986. “The Use of User Studies.” Midwestern Archivist 11, no. 1: 15–26.

Martin, Kristen E. 2001. “Analysis of Remote Reference Correspondence at a Large Academic Manuscripts Collection.” American Archivist 64 (Spring/Summer): 17–42.

McCall, Nancy, and Lisa A. Mix. 1996. “Scholarly Returns: Patterns of Research in a Medical Archives.” Archivaria 41 (Spring): 158–87.

Miller, Fredric. 1986. “Use, Appraisal, and Research: A Case Study of Social History.” American Archivist 49 (Fall): 371–92.

Orbach, Barbara C. 1991. “The View from the Researcher’s Desk: Historians’ Perceptions of Research and Repositories.” American Archivist 54 (Winter): 28–43.

Prom, Christopher J. 2004. “User Interactions with Electronic Finding Aids in a Controlled Setting.” American Archivist 67, no. 2 (Fall/Winter): 234–68.

———. 2011. “Using Web Analytics to Improve Online Access to Archival Resources.” American Archivist 74: 158–84.

Scheir, Wendy. 2006. “First Entry: Report on a Qualitative Exploratory Study of Novice User Experience with Online Finding Aids.” Journal of Archival Organization 3, no. 4: 49–85.

Sinn, Donghee. 2010. “Room for Archives? Use of Archival Materials in No Gun Ri Research.” Archival Science 10: 117–40.

Southwell, Kristina L. 2002. “How Researchers Learn of Manuscript Resources at the Western History Collections.” Archival Issues 26, no. 2: 91–109.

Spindler, Robert P., and Richard Pearce-Moses. 1993. “Does AMC Mean ‘Archives Made Confusing’? Patron Understanding of USMARC AMC Catalog Records.” American Archivist 56: 330–41.

Stevens, Michael E. 1977. “The Historians and Archival Finding Aids.” Georgia Archive 5, no. 1 (Winter): 64–74.

Tibbo, Helen R. 2002. “Primary History: Historians and the Search for Primary Source Materials.” Paper presented at the 2002 ACM/IEEE Joint Conference on Digital Libraries, July 14–18.

———. 2003. “How Historians Locate Primary Resource Materials: Educating and Serving the Next Generation of Scholars.” Paper presented at the ACRL Eleventh National Conference, Charlotte, North Carolina.

———. 2003. “Primarily History in America: How U.S. Historians Search for Primary Materials at the Dawn of the Digital Age.” American Archivist 66, no. 1 (Spring/Summer): 9–50.

Yakel, Elizabeth. 2002. “Listening to Users.” Archival Issues 26, no. 2: 111–23.

Yakel, Elizabeth, and Laura L. Bost. 1994. “Understanding Administrative Use and Users in University Archives.” American Archivist 57: 596–615.

Yakel, Elizabeth, and Deborah A. Torres. 2003. “AI: Archival Intelligence and User Expertise.” American Archivist 66, no. 1 (Spring/Summer): 51–78.

———. 2007. “Genealogists as a ‘Community of Records.’” American Archivist 70 (Spring/Summer): 93–113.

Zhou, Xiaomu. 2008. “Student Archival Research Activity: An Exploratory Study.” American Archivist 71 (Fall/Winter): 476–98.

Table 1. Publications on User Studies Examined in This Study

Publication                                          Pieces on User Studies
American Archivist                                   22
Archivaria                                           5
Archival Issues (previously Midwestern Archivist)    5
Journal of Archival Organization                     3
Archives and Museum Informatics                      2
Archival Science                                     2
Proceedings                                          2
Book                                                 1
Georgia Archive                                      1
Library Quarterly                                    1
Public Historian                                     1
Total                                                45

Table 2. Research Methods Employed in User Studies

Research Method                                      Frequency
Survey                                               16
Interview                                            15
Experiment                                           6
Reference question/correspondence analysis           6
Citation analysis                                    5
Critical incident                                    3
Observation                                          3
Document analysis                                    3
Content classification                               1
Literature analysis                                  1
Focus group                                          1
Diary                                                1
Field study                                          1
Web analytics                                        1

Note: Because many user studies employed multiple research methods, the frequencies in this table sum to more than the total number of user studies examined in this study.

Figure 1. Number of Examined User Studies by Year of Publication

Figure 2. Number of User Studies Conducted by Practitioners versus Academicians

Note: Author’s status is as of the year of publication. In the one case of an author who was both a student and a practitioner, the user study was counted in both categories.
