rusq: Vol. 51 Issue 2: p. 172
A Systematic Review of Research on Live Chat Service
Miriam L. Matteson, Jennifer Salamon, Lindy Brewster

Miriam L. Matteson is Assistant Professor, Kent State University School of Library and Information Science
Jennifer Salamon is Project Coordinator for the National Digital Newspaper Program in Ohio at the Ohio Historical Society
Lindy Brewster is a Quote Analyst at the Timken Company. Submitted for review April 14, 2011; revised and accepted for publication July 15, 2011

Abstract

The purpose of this study was to synthesize the research literature that has investigated library-based chat reference service. We define library-based chat reference as a synchronous, computer-based question answering service where users of the service ask question(s) which are answered by library employees or contracted agents. Following the methods for conducting a systematic review, we developed inclusion criteria for our data set and collected data from research on chat service dating from 1995 to January 2010. We limited our data to empirical research using established qualitative or quantitative methods. The final data set included 59 documents. We used White's (2001) digital reference service framework to guide our data analysis and unitized the data to the level of the research question(s) asked in each of the studies, resulting in 146 research questions. We focused the bulk of our analysis on the six categories of the framework where the research emphasis was strongest: parameters of the service: clients; parameters of the service: questions; question-answering process; response guidelines; staffing and training; and mission, objectives, statement of purpose.

Our aim is to analyze the literature on chat service from a broad perspective to uncover larger themes and streams of knowledge. We believe that this perspective is relevant to those who are currently engaged in chat service in some capacity—academics, librarians, managers, and IT developers. Our research presents the collective knowledge in this area and provides groundwork for researchers as they explore new questions related to chat service. It unifies for practitioners a collection of findings about chat service to enhance and improve their practice. The results suggest areas of opportunity for managers who wish to further develop chat as a library service, and the results synthesize current understandings about chat service which may be useful for IT developers to extend and innovate chat technology in libraries.


Synchronous, computer-mediated communication between library staff and library users (referred to in this paper as chat) has existed in libraries for around 15 years. In that time a fair amount of research has been conducted that investigates various aspects of chat reference. Each of these studies individually contributes to knowledge building on chat service. Since chat service has been well established in research and practice, now is an opportune time to extend that knowledge building effort by synthesizing the existing research on chat service so that stakeholders including librarians, academics, managers, and developers can further their understandings about chat services and explore untapped or underresearched areas of chat service.

Lankes, Gross, and McClure define digital reference as “human-intermediated assistance offered to users through the internet.”1 To that definition the authors add the notion of real time and have limited their analysis to research that explores synchronous, real-time assistance. Library-based chat is a synchronous, computer-based question answering service in which users ask question(s) that are answered by library employees or contracted agents.

The authors have elected to make the distinction between synchronous and asynchronous question answering services, such that e-mail answering services (asynchronous) are excluded from the study. A further boundary of this study is the focus on computer-based service, thus excluding telephone question answering service. The authors focus on library-based chat only, excluding research on personal chatting in a library, and chat services offered by other providers such as customer service chat from online retail sites, service companies, or other industries.

The synthesis is presented in the form of a systematic review of this research literature produced between 1995 and January 2010. The aim is to analyze and integrate the literature on chat reference service from 30,000 feet to uncover larger themes and streams of knowledge. Such a study brings together a large body of individual knowledge pieces and makes sense of them at a higher level; it provides the evidence to show what knowledge has been accumulated in this area; it provides groundwork for researchers in this area as they look at new questions; and it unifies for practitioners a collection of findings about chat reference to enhance and improve their practice.


LITERATURE REVIEW
A Brief History of Chat Service

Chat service has existed in libraries for roughly 15 years. Zanin-Yost cited an exploratory service from 1996 at North Carolina State University using synchronous video chat through CU-SeeMe software.2 The University of Michigan Shapiro Undergraduate Library also experimented with CU-SeeMe at about that same time.3 Sloan identified the four earliest chat reference services still in operation in 2006: those at the Silkeborg, Denmark Public Library, SUNY-Morrisville, and Temple University, all dating to 1998, and at the University of North Texas, dating to 1999.4

Chat service started to take off in the late nineties when technology companies began developing software libraries could use to manage their chat service. Some of those early players were Library Systems & Services (LSSI), purchased in 2003 by Tutor.com; LivePerson, still operating as a real-time customer engagement software company; and QuestionPoint, a collaborative digital reference venture between the Library of Congress and OCLC, also still in operation. This kind of chat software appealed to many libraries because of the many features built in, including co-browsing, pushing webpages, storing written messages to use as scripts, and statistics and report management capability. At nearly the same time, libraries were also exploring more scaled-down software options for real-time chat through instant messaging (IM) software such as America Online's AIM, Microsoft's MSN Messenger, and Yahoo Chat.5 This technology was seen as advantageous for libraries for many reasons: it was less expensive than chat software, it was thought to be easier to use, and many people in the target audience (typically students) were already using it. Because the various IM services were exclusive, requiring an account on each service, librarians found they needed multiple accounts to be able to reach their users. That situation led to the adoption of aggregator services, which bring together IM accounts from multiple providers into one interface and require no software download for the user to be able to chat with a librarian. Some examples of aggregator services include Meebo (www.meebo.com), Trillian (www.ceruleanstudios.com), and Pidgin (www.pidgin.im).

At present, the majority of libraries in the United States offer some sort of digital reference service. Data from a U.S. Department of Education report show that in the fall of 2008, 72.1 percent of all U.S. academic libraries offered reference service by e-mail or the web.6 Data from a 2009 Public Library Association report show that 31.4 percent (n = 831) of public libraries offer chat reference service.7

Reviews of Chat Research

To date, there has not been any published research that has set out to identify and integrate the collected empirical research findings on chat reference since its inception. However, two survey articles on aspects of chat reference have informed this work. From a practitioner perspective, Francoeur's survey of the number of chat services around the world in existence at the time of his study provides an informative early history of chat service.8 His large-scale research, which encompasses both the breadth of chat service and the multiple dimensions involved in providing chat service, provides preliminary groundwork for subsequent meta-reviews such as this research. Radford and Mon's chapter on reference service in academic libraries was particularly useful in its integration of research findings on a variety of aspects of reference, including chat reference, such as user satisfaction, reference accuracy, and assessment of reference service.9 Both of these studies provided support to the method presented here.

Systematic Review

A systematic review is a type of review of research literature with the goal to “assemble, critically appraise, or evaluate, and synthesize the results of primary studies in an integrative approach.”10 This method requires asking a specific research question, which is then answered by systematically collecting material for inclusion in the data set, and then synthesizing the material. Systematic reviews differ from meta-analyses in that the findings in a systematic review may be reported quantitatively or qualitatively, whereas meta-analyses use statistical tests to combine results across multiple research studies.11

In building the data set to be examined, inclusion criteria must be established appropriate to the question being asked. Steps to follow to ensure comprehensive inclusion of relevant materials include searching multiple databases using a variety of search techniques, tracking citations from relevant studies, seeking recommendations from experts, and citation pearl growing.12

In the field of library and information science, the systematic review is perhaps underused. Ankem studied the use of systematic reviews (and meta-analyses) in LIS literature and identified only seven published systematic reviews in an eleven-year period from 1996–2006, and within that set, all seven reviews were on topics in medical information.13 More recently other systematic reviews have appeared in the LIS literature on topics beyond medical information, such as Winston's review of research on ethical education and Liew's work on digital library research.14

McKibbon suggests that a systematic review is an appropriate technique to use in several different cases, such as when there is too much or too little information available for a particular area of inquiry, when there may be contradictory findings for a particular research question, or as in the present case, to identify the current state of our collective understanding of a phenomenon which results in exposing potentially fruitful areas for new research.15

Since chat service has been around in libraries for 15 years, with a growing body of published research on a wide range of aspects of chat service, the authors believe the time is right to explore that research in this systematic manner. The contribution of this work is to investigate this body of literature to synthesize what is empirically known about chat reference service.


METHODS

The following three research questions guided the study:

  1. How have researchers studied chat service in libraries?
  2. What aspects of chat service have been studied since 1995?
  3. For the aspects of chat service most heavily studied, what does the research tell us?


DATA COLLECTION

The authors established the following criteria for inclusion to build the data set:

  • Research that examined some aspect of chat service, defined as library-based question answering services employing synchronous, computer-based technologies
  • Empirical research, exhibiting standard qualitative or quantitative research methods, including formal research questions and appropriate methods to answer those questions
  • Research from 1995 through January 2010

The authors elected to establish a criterion for inclusion in the data set that focuses on a scientific approach to research on chat service. In doing so, articles that reported a single library's experience with digital reference service were not included. Although in many cases this type of article had elements of a case study approach and reported some empirically derived data, such studies were excluded from the research if they did not, a priori, ask a research question, determine the appropriate data and methods to answer that question, or base their research design on a theoretical or conceptual position. The set of articles that are descriptive case studies of chat service is extremely useful in describing how chat operates in a variety of libraries and would make a valuable data set for future work in this area.

The authors searched two comprehensive indexes of LIS literature, Library and Information Science and Technology Abstracts, and Library Literature, to identify studies for the data set. The Dissertations and Theses database was also searched to locate unpublished dissertations. Using keyword and controlled vocabulary searching, and limiting the search by publication date, the initial data set contained 101 articles. Additional articles were discovered through citation tracing. A careful review of the data set applying the inclusion criteria resulted in a final data set of 59 total papers: 52 journal articles, 6 unpublished dissertations, and 1 conference paper. Appendix A includes the complete list of papers included in the data set.

Data Analysis

In an effort to link the research with existing literature, the authors searched for an appropriate, extant framework to guide the analyses. Several frameworks that model digital reference services to some extent were considered including Luo's chat reference evaluation framework,16 Pomerantz's process model of chat-based reference service,17 and the preliminary typology of digital reference standards by Lankes, Gross, and McClure.18 Ultimately, White's digital reference service framework was determined to be the most comprehensive schema of digital reference service and one that had been validated through other research and was thus selected for this analysis.19

The framework consists of 18 categories grouped into four broad domains. No additions or subtractions were made to the framework, but some interpretive detail was added to the categories for clarity in the coding process, most noticeably in three areas. All the studies related to users’ satisfaction with chat were coded as Parameters of the Service: Clients. Although the framework breaks out digital reference into core functions, it does not have a category that addresses the chat transaction as a whole communication event. The authors thus opted to use the category Question-Answering Process for that idea. Finally, data about instruction and learning were coded in the category Response Guidelines. The framework is shown in table 1.

The authors piloted the framework with the data using the entire article as the unit of analysis, but since many of the studies examined multiple aspects of chat service, it was determined that the data needed manipulation to a more discrete level. The data were then unitized to the level of the research question by examining each article and identifying the research question(s) asked. In some cases, the research questions were not explicitly identified as such, but were referred to in such a way as to be easily interpreted as the question(s) to be investigated. The final data set included 59 papers yielding 146 research questions.

Two authors independently coded each of the 146 research questions using the framework. Inter-coder reliability was calculated at .77. All three authors discussed the research questions where there was a discrepancy in the coding to reach agreement on the assigned codes.
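The reported reliability figure can be reproduced with a standard agreement statistic. The article does not name which statistic was used, so purely as an illustration, here is a minimal sketch of Cohen's kappa, one common choice for two coders assigning categorical codes. The category labels below are hypothetical examples borrowed from the framework's numbering, not the authors' actual coding data.

```python
# Illustrative sketch: Cohen's kappa for two coders' category assignments.
# Kappa corrects raw percent agreement for the agreement expected by chance,
# given each coder's marginal distribution of codes.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length lists of category labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of marginal frequencies, summed over categories
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes for eight research questions (framework category labels)
a = ["1c", "1b", "3d", "1c", "3e", "1c", "1b", "3d"]
b = ["1c", "1b", "3d", "1b", "3e", "1c", "3d", "3d"]
print(round(cohens_kappa(a, b), 2))
```

A kappa near .77, as reported here, is conventionally read as substantial agreement, which is consistent with the authors' decision to resolve the remaining discrepancies by discussion.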


LIMITATIONS

The authors are aware of the following potential limitations in this study. In building the data set, the authors limited their searching to two LIS indexes, dissertations, and citation tracing, and thus they may have missed studies that meet the inclusion criteria. To address this, the authors iteratively searched the two databases using varied search strategies, and they used the pearl growing technique to identify other relevant citations to studies that met the criteria. Another limitation in the study is that the criteria for inclusion contained some gray area, and in some cases the authors may have erred on the conservative side, thus leaving out a relevant study. Finally, in intentionally not analyzing the descriptive case accounts of chat service, information of interest may have been omitted. The authors suggest that the body of literature describing single implementations of chat service could be mined for trends that could provide additional perspective to what is reported here.


FINDINGS

The findings are reported in two brief sections and one lengthy section, each pertaining to a stated research question. “Descriptive Findings” answers the first research question: how have researchers studied chat service in libraries? The second section, “Broad Findings,” answers the second research question: what aspects of chat service have been studied since 1995? “Specific Findings” answers the third research question: for the aspects of chat service most heavily studied, what does the research tell us?

Descriptive Findings

These findings characterize the collection of 59 papers in the data set (52 journal articles, 6 dissertations, 1 conference paper). The 52 journal articles were published in 24 different journals. Table 2 shows the number of articles per journal title.

The majority of the research examined chat service in an academic library (n = 37). Multitype library chat services were explored in 14 studies and public library consortial chat services were the focus in 8 studies.

There was a mix of data types across the data set. The majority of the studies used transcripts of chat interactions as the primary data source (n = 31). The second most frequently observed source of data came from user surveys (n = 13). Sixteen of the 59 studies employed multiple data collection and analysis methods, the most prevalent of which was the combination of analysis of chat transcripts with user surveys. Table 3 lists the range of data sources used and the number of studies in which they were reported.

The two predominant types of data analyses were (1) content analysis of the chat transcripts or of open-ended survey and interview questions (n = 38) and (2) quantitative analysis from survey questions (n = 30). Within the content analysis category, some of the more interesting approaches included linguistic, discourse, and conversational analyses. Across the studies using quantitative analysis techniques, the majority reported descriptive statistics (n = 23), with only a few employing inferential statistical analyses of significance (n = 7). One study used multivariate regression analysis to study predictors of user satisfaction in chat.

Broad Findings

Turning to findings at the level of the research question, the distribution of the data across the coding framework is shown in table 4.

The numbers in table 4 show that the areas of chat service most researched are
  • 1c. Parameters of the service: clients
  • 1b. Parameters of the service: questions
  • 3d. Question-answering process
  • 3e. Response guidelines
  • 2b. Staffing and training
  • 1a. Mission, objectives, statement of purpose

No research questions were found for seven categories of the framework:

  • 2c. Hardware and software
  • 2e. Responsibilities to the client
  • 3a. Query form
  • 3b. Acknowledgement
  • 3f. Coping with demand
  • 3g. Archiving
  • 4c. External recognition

Specific Findings

The remainder of the findings addresses the six most heavily researched categories of chat service, in descending order of percentage in the data set.

1c. Parameters of the Service: Clients

Twenty-seven research questions were coded in the client category, from which five themes were found: users’ motivations for choosing chat, users’ perceptions of chat service, users’ satisfaction with chat service, characteristics of users of chat, and what users do with the information they receive from chat service. Users’ motivations, users’ perceptions, and users’ satisfaction with chat services are discussed here.

Users’ Motivations

Two studies in particular explored what motivates a user to choose chat service.20 Though the response categories for this question were not identical across the two studies, some integration can be made.

Table 5 shows that the convenience of chat service is the main motivation for choosing it in each study. A somewhat miscellaneous category of reasons was the second highest across the two studies, including familiarity, rejection of other services, curiosity, and serendipity.

Users’ Perceptions of Chat Service

Some studies investigated users’ preferred mode of reference service.21 Cummings et al. reported that 72 percent (n = 197) of users surveyed would be willing to use chat reference service.22 In this study, users also were asked to rank their preferred mode of reference service: both users inside and outside of the library placed “chat” at the top of the list, above asking a friend, e-mail, a reference librarian, telephone, and the library website. When Lee asked distance students whether telephone reference service was preferred over chat reference service, only 8 of 34 students either agreed or strongly agreed with this statement, while 22 students either disagreed or expressed no opinion.23 Connaway et al. compared nonusers and users of chat reference, and compared “Net Gen” information seekers (born between 1979 and 1994) with adult information seekers, to look at reference service preferences. Almost all users of chat services surveyed (100 percent of Net Gen users and 95 percent of adult users, n = 137) indicated that convenience was a significant factor in deciding to use chat reference service.24 Seventy-six percent (n = 184) of the Net Gens stated that chat was the least intimidating form of reference. But 71 percent of nonchat reference users indicated that they prefer face-to-face reference service and 49 percent enjoy face-to-face interactions.25

An important finding throughout all of these studies is that many users were not aware of chat reference service. Connaway et al. reported that the main reason Net Gen students are not using chat reference service is that they did not know it existed.26 As recently as 2008, focus group interview research showed that none of the 45 subjects had heard of chat reference.27 Several studies also reported that participants expressed the belief that a range of reference service types should be available, and some should be available 24/7.28

Users’ Satisfaction with Chat Service

Ten research questions, in eight studies in the data set, explored variations on the question of users’ satisfaction with chat service. In addition to the direct question “are users satisfied with chat reference?” other aspects of user satisfaction explored included differences in satisfaction between complete answers and referrals;29 the factors that contribute to (dis)satisfaction with chat;30 user and librarian correlations with satisfaction assessments;31 satisfaction with outsourced chat service;32 user satisfaction relative to librarians’ performing Reference and User Services Association (RUSA) behaviors during sessions;33 and whether the degree of formality in the librarian's language correlates with user satisfaction with the librarian.34

For those studies that looked directly at users’ satisfaction, the data show that users were quite satisfied with chat service. Pomerantz and Luo reported 92 percent (n = 292) of the respondents were satisfied or very satisfied with chat service.35 Fifty percent (n = 46) of the students in Lee's study were very satisfied.36 Using “willingness to return” as an indirect satisfaction measure, Nilsen and Ross found that 59 percent (n = 85) of respondents indicated a willingness to return.37

Other findings related to satisfaction are summarized here. User satisfaction was highest with completely answered questions, and users whose questions were referred, or partially or not answered, were far less satisfied than those who received a complete answer.38

When user-reported satisfaction evaluations were compared with librarians’ evaluations of satisfaction with the same transactions, the average patron-reported satisfaction score was 86.7 percent, whereas the librarians’ average assessment of the transactions was 74.2 percent. Further, patron data revealed 15 “perfect” satisfaction scores, while librarian data showed no perfect scores, with the highest scores ranging from 86 to 97 percent.39

User-provided satisfaction scores based on interactions with local librarians were quite similar to those for interactions with outsourcing service providers, ranging from 85 to 94 percent for local librarians and 80 to 93 percent for outsource service providers, although users expressed greater dissatisfaction with outsource chat providers on the dimension of answering the question.40

User satisfaction was statistically significantly higher when these six RUSA behaviors were performed by library staff: used patron's name, communicated receptively and listened carefully, searched with or for the patron, provided pointers, asked if question was completely answered, and invited patron to come back. Further, the following five behaviors were statistically significant predictors of user satisfaction: receptive listening, searching with or for users, offering pointers, verifying the question was completely answered, and encouraging patrons to return.41

User satisfaction (measured by librarian helpfulness) was greater in transactions where the librarians used fewer scripted words and made frequent use of compensator devices, specifically ellipses.42

Integrating this collection of satisfaction-related findings, the data suggest that users of chat service tend to rate their experiences fairly highly in terms of satisfaction, both in general satisfaction and satisfaction toward particular dimensions of the transaction. Librarians’ assessments of chat transactions, however, were not quite so positive. Users tend to be more satisfied when they receive complete answers, and rate outsourcing chat providers lower when they are unable to fully answer a question, which may be an inherent limitation with outsourcing chat service. User satisfaction was higher in transactions where librarians performed critical behaviors that connect them to the user, including listening and communicating, demonstrating expertise in searching and offering pointers, and providing full closure to the transaction by checking that the answer was complete and inviting the user back to the service. Along these same lines, librarians who were considered to be very helpful used fewer scripted words and used ellipses during the chat transaction.

1b. Parameters of the Service: Questions

Twenty-six research questions were coded in this category. Two main subcategories emerged from analysis of the data: types of questions asked of a chat service, and outcomes from the questions (e.g., completed, referred, require subject expertise).

Types of Questions Asked

Eight studies from the data set investigated this theme as a research question. Five additional studies reported findings on the type of question asked, although it was not a stated research question for their studies. For this analysis, data from both cases were included when possible.

Taking into account the various schemas in use for categorizing the questions, no one question type was predominant. Looking at the two highest percentage question categories reported by each study, four basic question types were observed with nearly equal frequency: reference (6);43 specific search/known item (7); policy/procedure (5); and information/directional (6). The one outlier is the category for circulation-related questions, which was only observed in the single study reporting data from a public library chat service. Table 6 reports the findings on question types.

Outcomes

A few studies examined the questions asked of the chat service from the standpoint of how the question was subsequently handled by the operator. The two most commonly used outcome measures were: (1) measures of completeness of the answer, and (2) question referral. Table 7 summarizes the findings from three studies that measured the extent to which the answer provided was a complete answer.

These numbers show that in the studies included in the data set, relatively few questions received a partially or fully incomplete answer. Across the data set, the highest percentage of questions reported not answered in real time (a measure of incompleteness) came from questions handled by chat operators within a consortium rather than by local library staff; it might be expected that operators less familiar with a library's specific patrons would leave a greater share of questions not fully answered.44

The other outcome path studied in the data set was the percentage of questions that were referred by the chat operator, either to another librarian on staff, or to a users’ home library institution. Table 8 summarizes the percentage of questions that were recorded as referred to another librarian or another service.

The number of questions referred varied among the studies included, but followed predictable patterns. The Kwon and Wikoff studies that reported the largest percentage of questions referred both analyzed chat transcripts from consortial-based chat services, where not all the operators were from the same home institution as the user and thus were more likely to be unable to answer specific account questions or other local kinds of questions. Kwon reported on a public library chat consortium where circulation-related and local library information questions were the most referred (44.8 percent and 39.3 percent, respectively).45 Wikoff found that of the 33 percent of questions referred (n = 69), nearly half (15 percent of all questions, n = 32) were referred because the questions were deemed more appropriate for home libraries.46

In making sense of what the research shows about the questions asked of chat service, in nearly all of the studies that examined the type of questions asked of the chat reference service, there were not large differences among the percentages of categories of question types, indicating a fairly balanced distribution of question types. Even accounting for different manners of categorizing the questions, these data show that users are asking a range of questions through chat reference. This suggests that users of chat service do not associate this medium of communication with one or only a few types of information needs. Rather, users of the service are willing to express a variety of information needs via chat, which requires that libraries equip their operators to handle that same variety of questions.

In the case of consortial chat services, this can become problematic. The benefit of consortial chat service is obvious: greater access for many users with modest investment by any one library. But the trade-off comes with a lower degree of familiarity with local library information needs, and a lesser degree of customization of service. Libraries must consider those outcomes bearing in mind the impact on the user. If a user with a question about an overdue book reaches out to her library via chat, and through a consortial arrangement ends up interacting with an operator not from her local institution who is unable to answer the question, is she better off for having had easy access to chat service? Or is she more likely to be frustrated for not having her information need met? Making referrals in chat reference also illustrates this point. Referring a user to another person or service is in many cases the best answer a librarian can give, and in one respect is a complete answer. But from the user's perspective, does the instant access nature of chat bring the expectation of a full and final answer in real time, so that a referral might be considered a less than satisfactory answer? Kwon found that users were most satisfied when receiving completely answered questions.47 Their satisfaction with referrals was statistically significantly lower than with complete answers, and was at about the same level as those who received an incomplete answer or no answer.

3d. Question-Answering Process

In interpreting White's framework, this category was used for research questions that investigated the chat interaction as a communication event. Twenty-two questions were coded in the question-answering process category, and the majority of the studies explored interaction patterns, styles, processes, and techniques revealed through chat session transcripts. The findings from these studies are inextricably tied to context, and thus no quantitative findings can be meaningfully reported. Nonetheless, some overlaps were detected among the research questions within this coding category; broadly understood, the findings in this area speak to stylistic elements, contextual elements, and quality elements in chat service.

Stylistic Elements

Researchers explored an array of stylistic components revealed in chat session transcripts, including the use of abbreviations, contractions, acronyms, emoticons, sentence fragments, dropped words, punctuation, capitalization, font, slang, colloquialisms, and scripted messages. These features of chat reference were interpreted through different lenses: as characteristics of chat;48 as a basis with which to compare chat with F2F reference;49 as indicators of relational elements of the interaction;50 and as indicators of (in)formality in the interaction.51

A general conclusion from these analyses is that such stylistic elements affect the degree of formality of the interaction between the librarian and the user. A number of studies reported that, in general, chat users invoke stylistically informal features more frequently than librarians do, although, as the interaction progresses, the librarian may mirror the level of formality employed by the user.52 Further, the use of these features to increase the informality of the exchange can have the effect of maintaining positive face between the librarian and the client and may, for younger users, “transform the reference transaction into a more familiar form of discourse.”53 Maness explored the co-occurrence of stylistic features of chat and user satisfaction and found evidence to suggest that the use of scripted words relates negatively to user satisfaction, while the use of ellipsis (as a nonverbal compensator) relates positively to user satisfaction.54

Contextual Elements

A wide range of contextual elements were observed in chat transcripts including:

  • Showing vulnerability, excessive gratitude, self-deprecation, apologies, mild humor, group identity, and invoking library policy55
  • Greeting and closing rituals, rapport building, deference, self-disclosure56
  • Reaching shared meaning through dynamic, constructed, situation-specific processes, assumptions of trust and mutual reciprocity, indexicality (meanings conditional on the situation) and the use of classification, instruction and instructed action, and sequential organization of the interaction57

These contextual elements were studied as indicators of the degree of formality in the chat interaction;58 as indicators of facilitators and barriers in relational communication;59 and as indicators of the techniques and communication features used in the meaning making process that occurs through a chat interaction.60

A general conclusion to be made from the studies that focused on contextual elements is that the chat medium is rich in context, even without the nonverbal elements present in face-to-face communication. These studies demonstrate that just as in face-to-face interactions, communication through synchronous, virtual communication is also subject to the fundamental complexities of human interaction: creating socially constructed understandings through linguistic, semantic, syntactic, and conversational devices, under the imposition of cultural, social, institutional and individual norms.

Service Quality Elements

Two studies explored the overall service quality of chat service, and both studies supplied a list of dimensions of quality through which chat service could be evaluated. White et al. determined that an efficiency dimension could be assessed by considering the extent to which there was a focus on the main objective of the question combined with the length of the session, minus any down time during the transaction.61 Their effectiveness measure came from considering the accuracy and completeness of the answer provided. Their research also revealed a need for a measure of the quality of the experience of the chat interaction, which they propose should include the accuracy of the answer, the extent to which there was a focus on the main objective of the question, librarian traits such as patience and helpfulness, the interaction, down time, questioner frustration, time spent in the queue, lag time, and technical problems.

Pomerantz et al. evaluated the service quality of chat sessions with thirteen items that can be grouped into these categories: characteristics of the answer, librarian behaviors, user satisfaction, and librarian traits.62

Finally, Radford set out to determine what relationship might exist between the content and relational dimensions in determining the quality of chat service and concluded that service quality assessment can only truly come from asking the users of the chat service.63

This whole category of research marks a significant departure from some of the earlier research on chat reference, moving beyond counts of question types, categories of users, times of day questions come in, or lengths of chat sessions. The stylistic and contextual findings integrated here suggest a much richer, and arguably more valuable, line of inquiry that will have a much greater impact on service.

3e. Response Guidelines

Twenty-two questions were coded in the response guidelines category. This category included research that explored aspects of chat focused on the behaviors of librarians as they attempt to respond to the query. The predominant theme that emerged from this analysis was offering library instruction through chat service. This issue was explored through several questions: Is instruction provided through chat? What techniques are used to provide instruction? Do users want instruction? Do they ask for instruction? Do they believe they can learn through chat?

Is instruction provided through chat?

Two studies report that instruction is being provided in both IM and commercial chat service, 83 percent (n = 146) and 82 percent (n = 118) of the time (when possible), respectively.64 Ward reported that two instruction elements were present 78 percent (n = 72) of the time in the transcripts reviewed, and one of the two elements was found in 12 percent of the transcripts reviewed.65 In Ford's study, 53.92 percent (n = 102) of chat reference transactions included instruction.66

What techniques are used to provide instruction?

The following instructional techniques were found to be used through chat service, in varying degrees of frequency: resource suggestion;67 modeling (explaining how to find information);68 explaining how to use information sources;69 term suggestion;70 leading (explaining in step-by-step fashion while the user follows along);71 providing search tips and tricks.72

Do users want instruction? Do they ask for instruction? Do they believe they can learn through chat?

Data show that users do want instruction through a chat interaction. In a pair of studies, Desai and Graves report that 43 percent (IM, n = 146) and 52 percent (chat, n = 136) of the transcripts reviewed were occasions where users asked for and received instruction.73 Further, 50 percent (IM) and 30 percent (chat) of the transcripts showed that users did not ask for instruction but still received it.74 In only 2 percent and 3 percent, respectively, did the transcripts show that a user asked for, but did not receive, instruction.75 When users in the IM study were surveyed, 62 percent (n = 50) reported either a desire or willingness for instruction, while 30 percent were indifferent toward it.76 From the chat study user survey, 82 percent (n = 62) either wanted or were willing to have instruction, while only 15 percent were indifferent.77 Finally, 98 percent of IM users and 92 percent of chat users reported that chat was a good way to learn.78

The findings in these cases are positive toward the idea of offering library instruction via chat. They demonstrate that instruction can be provided (and evaluated) in chat service, that many chat interactions include instructional elements, and that users do ask for instruction. It is important to note that the samples for these studies all come from academic libraries, which may place more emphasis on providing instruction in any reference transaction than public library chat services do.

2b. Staffing and Training

Five themes emerged from analyzing the 17 questions coded in the staffing and training category: library staff attitudes toward and opinions of chat service; chat reference provider competencies; provider training best practices; differences in response quality based on type of provider; and time spent answering questions. The findings for the first two themes are reported here.

Library staff attitudes and opinions toward chat service

From the studies in the data set, only two sought exclusively to evaluate the staff's views of the chat service. In the Casebier study, a questionnaire was issued in 2003 and 2004, and the results show only lukewarm attitudes toward chat service.79 Results from the 2003 survey period showed that 50 percent (n = 10) believed chat was not the best tool for instructing users on database searching; 80 percent of those who regularly staffed the service did not enjoy it; and 60 percent felt it did not enhance reference services. In the next year, the outlook improved somewhat: 66 percent (n = 19) believed chat service did enhance services at the library; only 33 percent did not enjoy it; and 66 percent felt it should be used mostly for ready reference.

Huston also used a survey instrument to gather the attitudes and opinions of reference librarians regarding the uses, impact, and feasibility of chat reference.80 Of respondents, 71.8 percent (n = 103) either agreed or strongly agreed that they had the technology skills necessary to perform chat reference, and over half (63.2 percent) believed that their coworkers also had the necessary technological skills. On the other hand, 64.1 percent either agreed or strongly agreed that it was difficult to keep up with emerging technology. In general, the librarians surveyed expressed concern about insufficient staffing and training, with only 16.5 percent expressing no opinion or disagreeing that chat reference implementation had increased or would increase their workload. An interesting finding in this study is that over half (61.1 percent) of librarians did not agree that there would be an overwhelming demand for chat reference.

Comparing commercial chat service software with instant messaging (IM) software as alternative means for offering chat, Steiner and Long found that more librarians preferred IM over chat, younger librarians tended to favor IM, and 40 percent (n = 302) believed IM was insufficient for handling in-depth reference questions.81 Ward and Kern examined similar issues and reported that staff were hesitant about learning new software, but over time, many came to express a preference for IM.82 However, the increased volume of chat transactions that came as a result of adding IM as a service point caused stress among librarians.

To understand what librarians perceive as successful chat reference transactions, Ozkaramanli interviewed 40 librarians using the critical incident technique.83 The study found that successful and unsuccessful chat reference transactions were largely determined by the attitudes and behaviors of both librarians and patrons. Eleven librarians indicated that communication skills added to the success of a transaction; ten stated that knowledge of reference work and sources was also important. Twenty-five percent noted that the ability to adjust to a new format (i.e., chat) and deal with the unexpected would also contribute to a successful chat reference transaction. The reference interview and question negotiation were also important, with seven librarians indicating that overall success of the interview was helpful and 12 stating that it was important to find out exactly what the patron wants.

In this study, librarians were also asked to consider how users affected the success of a chat reference transaction.84 Ten librarians thought it was helpful when patrons were familiar and/or comfortable with chat, and eight indicated that patrons should also know what they want. Over a third of librarians interviewed (35 percent) stated that appreciative patrons also lead to a successful chat reference transaction. Twenty-five percent stated that less successful transactions were a result of communication difficulties, such as patrons being impatient (6), demanding (5), frustrated (4), or unengaged (2).85 When asked how chat service could be improved, librarians gave the following suggestions: have more librarians involved, improve hours of operation, provide formal training, and provide more practice and hands-on experience.

Chat reference provider competencies

Luo explored specific competencies for providers who offer chat reference services.86 From reviews of the literature and interviews with librarians, she arrived at a list of 30 competencies in 8 areas and then validated those competencies through a survey of other librarians. Results showed that of the 30 competencies, 21 were considered essential chat reference competencies; these were grouped into categories of general reference competencies (e.g., reference interview skills), reference competencies highlighted in chat (e.g., ability to work under pressure), and reference competencies specific to chat (online communication skills).

RUSA guidelines also served as an indicator against which to assess chat provider competencies.87 Walter and Mediavilla found that in 114 transcripts that ultimately ended in a referral to a homework help service, very few of the RUSA guidelines were observed.88 Most of the transactions analyzed showed evidence of a friendly greeting and clear communication, but analysis found that the providers did not typically probe for further information, nor did they check that the information was clearly understood. Similarly, van Duinkerken, Stephens, and MacDonald examined 1,435 chat transactions for the presence of RUSA-recommended behavior.89 Their data showed strong evidence of the behaviors associated with the RUSA categories of approachability and interest, but found mixed results in the remaining categories of listening/inquiring (i.e., rephrasing the question, using open- and closed-ended questions appropriate to the reference interview), searching (i.e., asking the user what searches they may have already tried, helping the user broaden or narrow their topics), and following up (asking if the question was fully answered, encouraging the user to return to the service, inviting the user to call or visit the library).

Harmeyer approached the RUSA guidelines from a different perspective, asking how their presence might contribute to an accurate answer.90 In this study, 333 transcripts were analyzed, with particular attention paid to the RUSA competencies of librarian interest, librarian approachability, and question negotiation. In the category of librarian interest, five variables were measured: hold time, gaps between the librarian's typed responses, service time, the librarian's response to the patron's “Are you there?” statements, and the number of URLs co-browsed by the librarian with the patron. Significant correlation with answer accuracy was shown for only two of those variables: gaps between the librarian's typed responses and total service time. On average, the gap between responses was 3.8 minutes; librarians with response gaps of 1.85 minutes or less reached statistically higher levels of accuracy. Harmeyer indicated that, though overall a short gap time is related to a higher accuracy score, the relationship is weak. In terms of service time, the mean value was 16 minutes, and transactions lasting 8.3 minutes or less reached higher levels of accuracy than those over 8.3 minutes. Again, though shorter service time is related to a higher accuracy score, the author notes that this relationship is also weak. Librarian approachability was measured by a sense of friendliness and an absence of jargon. The study showed that neither variable was significantly related to answer accuracy; in most transactions, the librarian showed friendliness (89.2 percent) and did not use jargon (95.5 percent).

In the category of question negotiation, Harmeyer considered four variables: the use of open-ended questions, the use of close-ended or clarifying questions, asking the user if the question had been answered completely (the follow-up question), and whether the librarian maintained objectivity.91 Significance was found for all variables except the last. Librarians who did not ask open-ended questions when they should have were less likely to give an accurate answer than those who did not ask this type of question and did not need to. Similarly, librarians who did not ask close-ended or clarifying questions and did not need to were more likely to provide accurate answers than those who should have asked these types of questions and did not. For 45 percent of the transcripts analyzed, the follow-up question was either not present or not appropriate. The study found that, overall, librarians who asked a follow-up question, whether or not the transaction called for it, achieved higher levels of answer accuracy than those who did not.

The strength of the systematic review as a method for integrating knowledge is illustrated when pairing some of the findings from the staffing category with the user category. Data on user satisfaction indicate that users were more satisfied when librarians communicated receptively, listened carefully, asked if the question was completely answered, and invited the patron to come back.92 Yet van Duinkerken et al. reported inconsistent use of librarians' listening/inquiring behaviors and of following-up behaviors such as asking if the question was fully answered, encouraging the user to return to the service, and inviting the user to call or visit the library.93 This highlights a potential service gap that chat service providers should examine in their local practice.

1a. Mission, objectives, statement of purpose

Fifteen questions were coded in this category. Three themes emerged within the category: chat use (i.e., “to what extent do library users make use of chat reference service”), lack of use (i.e., “why aren't students using our chat service”), and discontinuation of service (i.e., “what were the deciding factors for ending chat reference services”).

The research shows low awareness and usage of chat service: 24.7 percent (n = 194) of respondents in Cummings et al. were aware of the chat service; 3 percent (n = 276) of respondents in Johnson's study reported having used chat reference; and none of the 45 focus group participants in Naylor et al. were aware of virtual reference.94 Focus group responses from Naylor et al. and from Connaway et al. provide some reasons for not using chat reference, including: associating IM technology with social interaction rather than academic work; uneasiness about not knowing who they were chatting with; being turned off by the term “chat” because they associate it with their perception of chat rooms; a preference to search independently for information; doubts about the speed, convenience, accuracy, and capability of the service and of the librarians providing it; privacy concerns; and a preference for face-to-face interactions.95

Yet Cummings et al. reported that 72 percent (n = 364) of respondents said they would be willing to use chat service, and 35.6 percent (n = 264) of Johnson's respondents felt that chat reference would be a leading service in the future.96 Some of the focus group participants in Naylor et al.'s study showed enthusiasm for using chat when the service was demonstrated to them, in particular the personalized service.97 Focus group participants in Connaway et al.'s study reported a willingness to use chat if it were recommended by a trusted librarian, colleague, or friend.98

In addition, two other studies report upward-trending data on chat reference. A survey of academic health science libraries showed that 22 percent (n = 132) of reporting libraries had added chat reference as a service point between 2002 and 2004.99 In another study, librarians compared vendor-based chat with IM chat, and these findings add some nuance to the story.100 Over the twelve-month period of data collection, the library maintained its vendor-based chat and introduced IM chat. IM chat was used more often than vendor-based chat in all but the first month it was introduced and in two summer months when most undergraduate students were not taking classes. Combined usage of the two services was 39 percent higher than usage of vendor-based chat alone over the previous twelve months. And although vendor-based chats declined by 49 percent over the period studied, they did not disappear entirely and appear to be used by a population other than those using IM chat.

Turning from why patrons do or do not use chat to why libraries offer it, Huston explored factors that were influential and factors that were barriers to libraries' implementation of chat reference service. The most significant influences for implementing chat service were support by librarians for the new service, support by administrators for the new service, and the belief that chat service was needed. She found that the most significant barriers were lack of adequate professional staff, lack of technical support, and lack of requests by students/users for the service.101

This category also included research questions about the discontinuation of chat reference services. Dee reported 16 percent (n = 132) of the libraries she surveyed discontinued their chat service.102 Radford and Kern studied nine libraries that closed their chat service, identifying six major reasons for discontinuing:103 funding problems, low volume, low volume from the target population, staffing problems, technical problems, and institutional culture issues.104

Taking these related findings together, what might be said about the purpose of chat is that, compared to other methods of interacting with library staff, chat use is still fairly low, in no small part due to a lack of awareness of the service. From the library perspective, Huston's findings suggest that whether chat service is offered should be based on a clear, user-driven need for the service.105 But respondents, and in particular nonusers, seem moderately favorably disposed to the idea of using chat service, and in particular willing to use IM chat, which is more nimble, less expensive, more technologically stable, and perhaps more readily visible on a library website than other software solutions.


RECOMMENDATIONS FOR RESEARCH

The findings reported from this systematic review answer the question, what is known about chat service in libraries? Answering that question also brings to light gaps in the collective knowledge of chat service in libraries. This section highlights areas of chat reference for which more research is needed.

Users’ Satisfaction

Users’ satisfaction with chat service, though notably quite positive in this systematic review, is a measure that should be considered with some care. Pomerantz and Luo note that respondents tend to show a favorable bias when rating experiences that involve interaction with humans.106 Similarly, Smyth and MacKenzie make the point that the overwhelmingly positive responses from their user surveys actually make it challenging for librarians to understand the reasons for that satisfaction, as well as to identify areas of the service that need improvement.107 The use of indirect measures such as “willingness to return” or “willingness to recommend” may help to get a better read on user satisfaction.108 Deconstructing dimensions of satisfaction as Pomerantz and Luo did (e.g., speed, helpfulness, ease of use) also may improve the utility of satisfaction measures.109 Ultimately, integrating findings on the success of a chat interaction from the users’ and the librarians’ perspectives, as in Smyth and MacKenzie's study, may be the most useful strategy for librarians to more fully understand users’ satisfaction with chat service.110 Developing data collection methods to more easily collect those paired impressions would be of significant use to researchers.

Users’ Outcomes from Chat

Beyond users’ satisfaction with the chat interaction, more research could be devoted to understanding information use resulting from a chat interaction. For example, how do school-aged users who chat with their local libraries for homework help make use of the information provided? What associations do chat users subsequently form of the library based on the utility (or not) of the information provided in a chat interaction? Research that explores a deeper understanding of information use and the cognitive or affective effect on the user may provide insight into how to structure the chat service, and help guide policies and best practices for providing chat service.

Interpersonal Communication in Chat

Research in the interpersonal communication aspects of chat, such as the works of Epperson and Zemel, Mon, Radford, and Westbrook establish a rich foundation for knowledge building and project a line of inquiry for further research. For example, can it be ascertained that the two participants in a chat interaction have achieved a shared understanding of the experience? Is a shared understanding necessary for the chat to be considered a success? What elements of communication must occur in a chat-based interaction for shared meaning to be reached? Drawing on theories in the area of shared cognition may be fruitful for extending the research in this area.

Library Instruction

The findings on offering library instruction through chat show initial promise for both the capacity for teaching and students’ interest in learning, but they would benefit from further investigation. Questions that merit further exploration include: When are the most effective teachable moments for a student, and are those recognizable through chat? What are the most appropriate strategies for effective learning? The research reviewed here provides a preliminary list of observed techniques, but future research should explore which techniques are most effective for learning. For example, empirically testing Van Scoy and Oakleaf's strategies for teaching in virtual environments may be useful in furthering an understanding of effective teaching through chat service.111

Librarian Attitudes

Relatively few studies empirically and rigorously examined librarians’ attitudes toward chat service, which raises the question, why has so little research been done in this area? Perhaps it is not a research area of interest because of the ubiquity of chat service. Whether or not library staff are positively disposed to the service, many libraries offer chat service in some capacity, and librarians are embracing it along with the many other changes in their work. Another reason for the lack of research could be that in many libraries, staffing a chat service point has been an optional duty performed only by those who volunteer for it. Given the self-selecting nature of the staffing arrangement, it makes sense that those who provide the service are those whose attitudes are favorable toward it. That said, collecting staff perceptions of chat service along a variety of dimensions, such as the usefulness of chat for library instruction, the quality of chat transactions, or the affective or emotional outcomes for staff of virtual engagement with users, would open interesting lines of research.

Awareness

Finally, future research on chat service could explore what strategies are most effective in increasing awareness of chat service. As mobile technologies impact daily life more and more for many users, now may be a renewed opportunity for researching awareness and outreach through new technologies to promote chat service.


RECOMMENDATIONS FOR PRACTICE

Libraries are well beyond the initial effort of establishing a chat service. In this next era of chat service provision, attention should be turned to maximizing the value of the service to the users. Several suggestions are included here for enhancing the practice of chat service.

Developing Technology

Libraries should continue exploring technology that facilitates real-time communication. The shift from chat software programs to embedded IM widgets is a positive example of fine-tuning the technology to meet actual needs. As the mobile device industry continues to expand, libraries are investigating texting as another way of interacting with patrons; for example, KnowitNow24x7, the statewide chat service in Ohio, recently launched a texting service.112 OCLC's QuestionPoint, a cooperative virtual reference service, has also expanded to include a mobile application through which library users can connect with their library from their mobile devices.113 E-readers with Internet access also present a new experience for the user that may create new opportunities for libraries to promote their chat service.

Service Quality

Attention also should be paid to enhancing the quality of the service experience. Indeed, it is nothing short of remarkable that successful interactions are actually achieved via the chat medium. A chat transaction involves two people, quite likely strangers, connecting through a computer-mediated communication tool, without the cues provided by nonverbal, face-to-face communication, for the purpose of making sense and sharing meaning, having likely had no previous shared experience, with no one telling either party exactly what to do during the exchange.114 To improve chat service quality, effort should be made toward sharpening the interaction skills of the chat providers. Bearing in mind what the research says about users’ satisfaction vis-a-vis the RUSA behavioral guidelines is a good starting point. Libraries could go a step further by developing staff training protocols that incorporate research findings about the interpersonal dynamics of chat communication. Borrowing from research methods that explore interpersonal communication in chat, libraries could use their own transcripts to uncover positive and negative communication examples to provide more advanced training for chat operators.

Managing Expectations

In keeping with enhancing the value of chat service to library users, libraries should continue to explore the most effective means of managing user expectations of the chat experience, particularly given the sense of immediacy implied in a chat service exchange. Clearly, not all questions a user might ask through a chat service are fully answerable in a few seconds. Libraries, especially those that are part of collaboratives or that use third-party providers during certain operating hours, must continue to look for the most effective ways to clarify for their users what can be expected from the chat service in terms of speed, thoroughness of the answer, and the possibility of a referral to a more appropriate provider. Approaching this issue from the users’ rather than the libraries’ perspective may help uncover the most useful service standards, along with the language needed to communicate those standards.


CONCLUSION

Looking at the research over the last 15 years, some integrated claims about this service can be made.

  • Chat service is generally well-received and users report high levels of satisfaction.
  • Users have expectations that their questions will be answered effectively and efficiently in real time.
  • A variety of question types are posed to chat services, and in most cases analyzed in this study, a nearly complete or complete answer was provided.
  • Users of chat service are comfortable with the informal nature of chat communication.
  • The chat medium is rich in context, even without the nonverbal cues available in face-to-face interactions, and patrons and librarians make use of many stylistic and contextual devices in their interactions to build relationships and share meaning.
  • Chat users frequently ask for or are open to library instruction via chat, and librarians employ several techniques in providing instruction, such as suggesting resources or terms, explaining how to use resources, or modeling step-by-step approaches to searching for information.
  • Providing library service via chat technology requires competencies in both communication and reference skills. Professional guidelines, such as the RUSA Guidelines for Behavioral Performance of Reference and Information Service Providers, are helpful in establishing best practices for chat operators and, when used, have been shown to correlate with increased patron satisfaction.
  • Though usage statistics may be low relative to other methods of contacting the library, chat services are used regularly.

Chat service is firmly established in the collection of services offered by most libraries. In its 15-year life span, a rich and diverse body of research has been carried out, building our collective understanding of this mode of library service. This synthesis of the research should help stakeholders advance their understandings about chat service for future research and practice.


References and Notes
1. R. David Lankes, Melissa Gross, and Charles R. McClure, “Cost, Statistics, Measures, and Standards for Digital Reference Services: A Preliminary View,” Library Trends 51, no. 3 (2003): 401–13.
2. Alessia Zanin-Yost, “Digital Reference: What the Past Has Taught Us and What the Future Will Hold,” Library Philosophy & Practice 7, no. 1 (2004).
3. Karen Westwood, “Lights! Camera! Action!” American Libraries 27, no. 1 (1997): 43–45.
4. Bernie Sloan, “Twenty Years of Virtual Reference,” Internet Reference Services Quarterly 11, no. 2 (2006): 91–95.
5. Marianne Foley, “Instant Messaging Reference in an Academic Library: A Case Study,” College & Research Libraries 63, no. 1 (2002): 36–45.
6. Tai A. Phan et al., Academic Libraries: 2008 (Washington, D.C.: National Center for Education Statistics, Institute of Education Sciences, 2009).
7. American Library Association, The State of America's Libraries 2010, www.ala.org/ala/newspresscenter/mediapresscenter/americaslibraries/librariestechnology.cfm (accessed July 2, 2011).
8. Stephen Francoeur, “An Analytical Survey of Chat Reference Services,” Reference Services Review 29, no. 3 (2001): 189–203.
9. Marie L. Radford and Lorri Mon, “Reference Service in Face-to-Face and Virtual Environments,” in Academic Library Research: Perspectives and Current Trends, ed. Marie L. Radford and Pamela Snelson, 1–47 (Chicago: Association of College and Research Libraries, 2008).
10. Ann K. McKibbon, “Systematic Reviews and Librarians,” Library Trends 55, no. 1 (2006): 203.
11. Matthew L. Saxton, “Meta-Analysis in Library and Information Science: Method, History, and Recommendations for Reporting Research,” Library Trends 55, no. 1 (2006): 158–70; Julie Arendt, “How Do Psychology Researchers Find Studies to Include in Meta-Analyses?” Behavioral & Social Sciences Librarian 26, no. 1 (2007): 1–23.
12. Alison Brettle, “Systematic Reviews and Evidence Based Library and Information Practice,” Evidence Based Library and Information Practice 4 (2009): 35–40; Diana Papaioannou et al., “Literature Searching for Social Science Systematic Reviews: Consideration of a Range of Search Techniques,” Health Information & Libraries Journal 27 (2009): 114–22.
13. Kalyani Ankem, “Evaluation of Method in Systematic Reviews and Meta-Analyses Published in LIS,” Library & Information Research 32, no. 101 (2008): 91–104.
14. Chern Li Liew, “Digital Library Research 1997–2007: Organisational and People Issues,” Journal of Documentation 65 (2008): 245–66; Mark D. Winston, “Ethical Leadership and Ethical Decision Making: A Meta-Analysis of Research Related to Ethics Education,” Library & Information Science Research 29 (2007): 230–51.
15. McKibbon, “Systematic Reviews and Librarians,” 205.
16. Lili Luo, “Chat Reference Evaluation: A Framework of Perspectives and Measures,” Reference Services Review 36, no. 1 (2008): 71–85.
17. Jeffrey Pomerantz, “A Conceptual Framework and Open Research Questions for Chat-Based Reference Service,” Journal of the American Society for Information Science & Technology 56, no. 12 (2005): 1288–302.
18. Lankes, Gross, and McClure, “Cost, Statistics, Measures, and Standards for Digital Reference Services,” 401–13.
19. Marilyn D. White, “Digital Reference Services: Framework for Analysis and Evaluation,” Library & Information Science Research 23 (2001): 211–31.
20. David Ward and M. Kathleen Kern, “Combining IM and Vendor-Based Chat: A Report from the Frontlines of an Integrated Service,” portal: Libraries & the Academy 6, no. 4 (2006): 417–29; Jeffrey Pomerantz and Lili Luo, “Motivations and Uses: Evaluating Virtual Reference Service from the Users’ Perspective,” Library & Information Science Research 28 (2006): 350–73.
21. Joel Cummings, Lara Cummings, and Linda Frederiksen, “User Preferences in Reference Services: Virtual Reference and Academic Libraries,” portal: Libraries and the Academy 7, no. 1 (2007): 81–96; Lisa Sandra Lee, “Reference Services for Students Studying by Distance: A Comparative Study of the Attitudes Distance Students Have Towards Phone, Email and Chat Reference Services,” New Zealand Library & Information Management Journal 51, no. 1 (2008): 6–21; Lynn S. Connaway, Marie L. Radford, and Jocelyn D. Williams, “Engaging Net Gen Students in Virtual Reference: Reinventing Services to Meet Their Information Behaviors and Communication Preferences” (paper presented at the Fourteenth Annual National Conference of the Association of College and Research Libraries, Seattle, Wash., 2009).
22. Cummings et al., “User Preferences in Reference Services,” 88.
23. Lee, “Reference Services for Students Studying by Distance,” 14.
24. Connaway et al., “Engaging Net Gen Students in Virtual Reference,” 13.
25. Ibid., 14.
26. Ibid.
27. Sharon Naylor, Bruce Stoffel, and Sharon Van Der Laan, “Why Isn't Our Chat Reference Used More? Finding of Focus Group Discussions with Undergraduate Students,” Reference & User Services Quarterly 47, no. 4 (2008): 342–54.
28. Ibid., 348; Lee, “Reference Services for Students Studying by Distance,” 15.
29. Nahyun Kwon, “User Satisfaction with Referrals at a Collaborative Virtual Reference Service,” Information Research 11, no. 2 (2006): n.p.
30. Lee, “Reference Services for Students Studying by Distance,” 11; Kirsti Nilsen and Catherine Sheldrick Ross, “Evaluating Virtual Reference from the Users’ Perspective,” Reference Librarian 46, no. 95/96 (2006): 53–79; Pomerantz and Luo, “Motivations and Uses,” 350–73.
31. Joanne B. Smyth and James C. MacKenzie, “Comparing Virtual Reference Exit Survey Results and Transcript Analysis: A Model for Service Evaluation,” Public Services Quarterly 2, no. 2/3 (2006): 85–105.
32. J. B. Hill, Cherie Madarash-Hill, and Alison Allred, “Outsourcing Digital Reference: The User Perspective,” Reference Librarian 47, no. 2 (2007): 57–74.
33. Nahyun Kwon and Vicki L. Gregory, “The Effects of Librarians’ Behavioral Performance on User Satisfaction in Chat Reference Services,” Reference & User Services Quarterly 47, no. 2 (2007): 137–48.
34. Jack M. Maness, “A Linguistic Analysis of Chat Reference Conversations with 18–24 Year-Old College Students,” Journal of Academic Librarianship 34, no. 1 (2008): 31–38.
35. Pomerantz and Luo, “Motivations and Uses,” 360.
36. Lee, “Reference Services for Students Studying by Distance,” 11.
37. Nilsen and Sheldrick Ross, “Evaluating Virtual Reference,” 63.
38. Kwon, “User Satisfaction.”
39. Smyth and MacKenzie, “Comparing Virtual Reference Exit Survey Results and Transcript Analysis,” 94.
40. Hill et al., “Outsourcing Digital Reference,” 67–68.
41. Kwon and Gregory, “The Effects of Librarians’ Behavioral Performance,” 144–45.
42. Maness, “A Linguistic Analysis,” 36.
43. The number in parentheses refers to the number of studies in which this category was listed as either the first or second highest percentage category of questions.
44. Deborah L. Meert and Lisa M. Given, “Measuring Quality in Chat Reference Consortia: A Comparative Analysis of Responses to Users’ Queries,” College & Research Libraries 70, no. 1 (2009): 71–84.
45. Kwon, “User Satisfaction.”
46. Nora Wikoff, “Reference Transaction Handoffs: Factors Affecting the Transition from Chat to E-mail,” Reference & User Services Quarterly 47, no. 3 (2008): 230–41.
47. Kwon, “User Satisfaction.”
48. Ian J. Lee, “Do Virtual Reference Librarians Dream of Digital Reference Questions? A Qualitative and Quantitative Analysis of Email and Chat Reference,” Australian Academic & Research Libraries 35, no. 2 (2004): 95–110; Maness, “A Linguistic Analysis,” 31–38.
49. Christina M. Desai, “Instant Messaging Reference: How Does It Compare?” The Electronic Library 21, no. 1 (2003): 21–30.
50. Marie L. Radford, “Encountering Virtual Users: A Qualitative Investigation of Interpersonal Communication in Chat Reference,” Journal of the American Society for Information Science & Technology 57, no. 8 (2006): 1046–59.
51. Virginia A. Walter and Cindy Mediavilla, “Teens Are from Neptune, Librarians Are from Pluto: An Analysis of Online Reference Transactions,” Library Trends 54, no. 2 (2005): 209–27; Lynn Westbrook, “Chat Reference Communication Patterns and Implications: Applying Politeness Theory,” Journal of Documentation 63, no. 5 (2007): 638–58.
52. Maness, “A Linguistic Analysis,” 34; Radford, “Encountering Virtual Users,” 1055.
53. Walter and Mediavilla, “Teens Are from Neptune,” 221.
54. Maness, “A Linguistic Analysis,” 36.
55. Westbrook, “Chat Reference,” 647–48.
56. Lorri M. Mon, “User Perceptions of Digital Reference Services” (unpublished dissertation, Univ. of Washington, 2006); Radford, “Encountering Virtual Users,” 1049; Marie L. Radford and Lynn S. Connaway, “Screenagers’ and Live Chat Reference: Living up to the Promise,” Scan 26, no. 1 (2007): 31–39, www.oclc.org/research/publications/archive/2007/connaway-scan.pdf (accessed Apr. 4, 2011).
57. Terrence W. Epperson and Alan Zemel, “Reports, Requests, and Recipient Design: The Management of Patron Queries in Online Reference Chats,” Journal of the American Society for Information Science & Technology 59, no. 14 (2008): 2268–83.
58. Westbrook, “Chat Reference,” 638–58.
59. Radford, “Encountering Virtual Users,” 1046–59; Radford and Connaway, “Screenagers’ and Live Chat Reference,” 31–39.
60. Epperson and Zemel, “Reports, Requests, and Recipient Design,” 2268–83.
61. Marilyn D. White, Eileen G. Abels, and Neal Kaske, “Evaluation of Chat Reference Service Quality,” D-Lib Magazine 9, no. 2 (2003).
62. Jeffrey Pomerantz, Lili Luo, and Charles R. McClure, “Peer Review of Chat Reference Transcripts: Approaches and Strategies,” Library & Information Science Research 28 (2006): 24–48.
63. Radford, “Encountering Virtual Users,” 1055.
64. Stephanie J. Graves and Christina M. Desai, “Instruction Via Chat Reference: Does Co-Browse Help?” Reference Services Review 34, no. 3 (2006): 340–57; Christina M. Desai and Stephanie J. Graves, “Instruction Via Instant Messaging Reference: What's Happening?” Electronic Library 24, no. 2 (2006): 174–89.
65. David Ward, “Measuring the Completeness of Reference Transactions in Online Chats: Results of an Unobtrusive Study,” Reference & User Services Quarterly 44, no. 1 (2004): 46–58.
66. Charlotte E. Ford, “An Exploratory Study of the Differences between Face-to-Face and Computer-Mediated Reference Transactions” (PhD diss., Indiana Univ., Bloomington, 2002).
67. Desai and Graves, “Instruction Via Instant Messaging Reference,” 179; Lesley M. Moyo, “Virtual Reference Services and Instruction,” Reference Librarian 46, no. 95 (2006): 213–30; Ward, “Measuring,” 50–51.
68. Christina M. Desai and Stephanie J. Graves, “Cyberspace or Face-to-Face: The Teachable Moment and Changing Reference Mediums,” Reference & User Services Quarterly 47, no. 3 (2008): 242–54.
69. Moyo, “Virtual Reference Services and Instruction,” 225.
70. Desai and Graves, “Cyberspace or Face-to-Face,” 250; Ward, “Measuring,” 50.
71. Desai and Graves, “Cyberspace or Face-to-Face,” 249.
72. Moyo, “Virtual Reference Services and Instruction.”
73. Desai and Graves, “Cyberspace or Face-to-Face,” 247.
74. Ibid.
75. Ibid.
76. Desai and Graves, “Instruction Via Instant Messaging Reference,” 186.
77. Graves and Desai, “Instruction Via Chat Reference,” 352.
78. Desai and Graves, “Cyberspace or Face-to-Face,” 252.
79. Katherine D. Casebier, “The University of Texas at Arlington's Virtual Reference Service: An Evaluation by the Reference Staff,” Public Services Quarterly 2, no. 2/3 (2006): 127–42.
80. Celia Huston, “Reference Librarians’ Perceptions of Chat Reference: An Exploration of the Factors Effecting Implementation” (PhD diss., Capella Univ., 2009).
81. Sarah K. Steiner and Casey M. Long, “What Are We Afraid Of? A Survey of Librarian Opinions and Misconceptions Regarding Instant Messenger,” Reference Librarian 47, no. 1 (2007): 31–50.
82. Ward and Kern, “Combining IM and Vendor-Based Chat,” 424.
83. Eylem Ozkaramanli, “Librarians’ Perceptions of Quality Digital Reference Services by Means of Critical Incidents” (PhD diss., Univ. of Pittsburgh, 2005).
84. Ibid., 51.
85. The number in parentheses indicates the number of respondents who listed each communication difficulty.
86. Lili Luo, “Chat Reference Competencies: Identification from a Literature Review and Librarian Interviews,” Reference Services Review 35, no. 2 (2007): 195–209; Luo, “Chat Reference Evaluation,” 82.
87. The URLs to the RUSA guidelines cited in the Walter and Mediavilla study, the van Duinkerken, Stephens, and MacDonald study, and the Harmeyer study are no longer active links. The current RUSA guidelines can be found at www.ala.org/ala/mgrps/divs/rusa/resources/guidelines/guidelinesbehavioral.cfm.
88. Walter and Mediavilla, “Teens Are from Neptune,” 216.
89. Wyoma van Duinkerken, Jane Stephens, and Karen I. MacDonald, “The Chat Reference Interview: Seeking Evidence Based on RUSA's Guidelines,” New Library World 110, no. 3/4 (2009): 107–21.
90. Dave Harmeyer, “Online Virtual Chat Library Reference Service: A Quantitative and Qualitative Analysis” (EdD diss., Pepperdine Univ., 2007), n.p.
91. Ibid.
92. Kwon and Gregory, “The Effects of Librarians’ Behavioral Performance,” 145.
93. Van Duinkerken, Stephens, and MacDonald, “The Chat Reference Interview,” 117.
94. Cummings et al., “User Preferences in Reference Services,” 88; Casey Johnson, “Online Chat Reference: Survey Results from Affiliates of Two Universities,” Reference & User Services Quarterly 43, no. 3 (2004): 237–47; Naylor et al., “Why Isn't Our Chat Reference Used More?” 348.
95. Naylor et al., “Why Isn't Our Chat Reference Used More?” 349; Lynn S. Connaway et al., “Sense-Making and Synchronicity: Information-Seeking Behaviors of Millennials and Baby Boomers,” Libri 58, no. 2 (2008): 123–35.
96. Cummings et al., “User Preferences in Reference Services,” 88; Johnson, “Online Chat Reference,” 241.
97. Naylor et al., “Why Isn't Our Chat Reference Used More?” 349.
98. Connaway et al., “Sense-Making and Synchronicity,” 128.
99. Cheryl R. Dee, “Digital Reference Service: Trends in Academic Health Science Libraries,” Medical Reference Services Quarterly 24, no. 1 (2005): 19–27.
100. Ward and Kern, “Combining IM and Vendor-Based Chat,” 422.
101. Huston, “Reference Librarians’ Perceptions of Chat Reference,” 65, 68.
102. Dee, “Digital Reference Service,” 23.
103. Neither study named the libraries that discontinued their services, so it is possible there is overlap in those numbers.
104. Marie L. Radford and M. Kathleen Kern, “A Multiple-Case Study Investigation of the Discontinuation of Nine Chat Reference Services,” Library & Information Science Research 28 (2006): 521–47.
105. Huston, “Reference Librarians’ Perceptions of Chat Reference,” 98–99.
106. Pomerantz and Luo, “Motivations and Uses,” 361.
107. Smyth and MacKenzie, “Comparing Virtual Reference Exit Survey Results and Transcript Analysis,” 98.
108. Joan C. Durrance, “Reference Success: Does the 55 Percent Rule Tell the Whole Story?” Library Journal 114, no. 7 (1989): 31–36.
109. Pomerantz and Luo, “Motivations and Uses,” 360.
110. Smyth and MacKenzie, “Comparing Virtual Reference Exit Survey Results and Transcript Analysis,” 85–105.
111. Amy VanScoy and Megan Oakleaf, “Effective Instruction in the Virtual Reference Environment,” in Teaching with Technology: An Academic Librarian's Guide, ed. Joe Williams and Susan Goodwin (Oxford, UK: Chandos, 2007).
112. KnowitNow24x7, Texting to KnowitNow24x7, knowitnow.org/about_texting.php (accessed Apr. 14, 2011).
113. QuestionPoint, QuestionPoint Cooperative Virtual Reference, www.oclc.org/us/en/questionpoint/default.htm (accessed Apr. 14, 2011).
114. Epperson and Zemel, “Reports, Requests, and Recipient Design,” 2268–83.
APPENDIX A. JOURNAL ARTICLES

Arnold, J., and N. Kaske. 2005. Evaluating the quality of a chat service. portal: Libraries and the Academy 5(2), 177–93.

Casebier, K. D. 2006. The University of Texas at Arlington's virtual reference service: An evaluation by the reference staff. Public Services Quarterly, 2(2/3), 127–42.

Connaway, L. S., M. L. Radford, T. J. Dickey, J. D. Williams, and P. Confer. 2008. Sense-making and synchronicity: Information-seeking behaviors of millennials and baby boomers. Libri 58(2), 123–35.

Cummings, J., L. Cummings, and L. Frederiksen. 2007. User preferences in reference services: Virtual reference and academic libraries. portal: Libraries and the Academy 7(1), 81–96.

Dee, C. R. 2005. Digital reference service: Trends in academic health science libraries. Medical Reference Services Quarterly 24(1), 19–27.

Desai, C. M. 2003. Instant messaging reference: How does it compare? The Electronic Library 21(1), 21–30.

Desai, C. M., and S. J. Graves. 2006. Instruction via instant messaging reference: What's happening? The Electronic Library 24(2), 174–89.

———. 2008. Cyberspace or face-to-face: The teachable moment and changing reference mediums. Reference & User Services Quarterly 47(3), 242–54.

Devlin, F., L. Currie, and J. Stratton. 2008. Successful approaches to teaching through chat. New Library World 109(5/6), 223–34.

Epperson, T. W., and A. Zemel. 2008. Reports, requests, and recipient design: The management of patron queries in online reference chats. Journal of the American Society for Information Science & Technology 59(14), 2268–83.

Fagan, J. C., and C. M. Desai. 2003. Site search and instant messaging reference: A comparative study. Internet Reference Services Quarterly 8(1/2), 167–82.

Fennewald, J. 2006. Same questions, different venue: An analysis of in-person and online questions. The Reference Librarian 95/96, 21–35.

Goda, D., and C. Bishop. 2008. Frequency and content of chat questions by time of semester at the University of Central Florida: Implications for training, staffing and marketing. Public Services Quarterly 4(4), 291–316.

Graves, S. J., and C. M. Desai. 2006. Instruction via chat reference: Does co-browse help? Reference Services Review 34(3), 340–57.

Hill, J. B., C. Madarash-Hill, and A. Allred. 2007. Outsourcing digital reference: The user perspective. The Reference Librarian 47(2), 57–74.

Johnson, C. 2004. Online chat reference: Survey results from affiliates of two universities. Reference & User Services Quarterly 43(3), 237–47.

Kwon, N. 2006. User satisfaction with referrals at a collaborative virtual reference service. Information Research 11(2).

———. 2007. Public library patrons’ use of collaborative chat reference service: The effectiveness of question answering by question type. Library & Information Science Research 29, 70–91.

Kwon, N., and V. L. Gregory. 2007. The effects of librarians’ behavioral performance on user satisfaction in chat reference services. Reference & User Services Quarterly 47(2), 137–48.

Lee, I. J. 2004. Do virtual reference librarians dream of digital reference questions? A qualitative and quantitative analysis of email and chat reference. Australian Academic & Research Libraries 35(2), 95–110.

Lee, L. S. 2008. Reference services for students studying by distance: A comparative study of the attitudes distance students have towards phone, email and chat reference services. New Zealand Library & Information Management Journal 51(1), 6–21.

Lewis, K. M., and S. L. DeGroote. 2008. Digital reference access points: An analysis of usage. Reference Services Review 36(2), 194–204.

Luo, L. 2007. Chat reference competencies: Identification from a literature review and librarian interviews. Reference Services Review 35(2), 195–209.

———. 2008. Toward sustaining professional development: Identifying essential competencies for chat reference service. Library & Information Science Research 30, 298–311.

———. 2009. Effective training for chat reference personnel: An exploratory study. Library & Information Science Research 31, 210–24.

Lupien, P., and L. E. Rourke. 2007. Out of the question! . . . How we are using our students’ virtual reference questions to add a personal touch to a virtual world. Evidence Based Library & Information Practice 2(2), 67–80.

Maness, J. M. 2008. A linguistic analysis of chat reference conversations with 18–24 year-old college students. Journal of Academic Librarianship 34(1), 31–38.

Marsteller, M. R., and D. Mizzy. 2003. Exploring the synchronous digital reference interaction for query types, question negotiation, and patron response. Internet Reference Services Quarterly 8(1/2), 149–65.

Meert, D. L., and L. M. Given. 2009. Measuring quality in chat reference consortia: A comparative analysis of responses to users’ queries. College & Research Libraries 70(1), 71–84.

Mon, L., B. W. Bishop, C. R. McClure, J. McGilvray, L. Most, T. P. Milas, and J. T. Snead. 2009. The geography of virtual questioning. Library Quarterly 79(4), 393–420.

Moyo, L. M. 2006. Virtual reference services and instruction. Reference Librarian 46(95), 213–30.

Naylor, S., B. Stoffel, and S. Van Der Laan. 2008. Why isn't our chat reference used more? Finding of focus group discussions with undergraduate students. Reference & User Services Quarterly 47(4), 342–54.

Nilsen, K., and C. S. Ross. 2006. Evaluating virtual reference from the users’ perspective. The Reference Librarian 46(95/96), 53–79.

Pomerantz, J. 2004. Factors influencing digital reference triage: A think-aloud study. Library Quarterly, 74(3), 235–64.

Pomerantz, J., and L. Luo. 2006. Motivations and uses: Evaluating virtual reference service from the users’ perspective. Library & Information Science Research 28, 350–73.

Pomerantz, J., L. Luo, and C. R. McClure. 2006. Peer review of chat reference transcripts: Approaches and strategies. Library & Information Science Research 28, 24–48.

Pomerantz, J., S. Nicholson, and R. D. Lankes. 2003. Digital reference triage: Factors influencing question routing and assignment. Library Quarterly 73(2), 103–20.

Radford, M. L. 2006. Encountering virtual users: A qualitative investigation of interpersonal communication in chat reference. Journal of the American Society for Information Science & Technology 57(8), 1046–59.

Radford, M. L., and L. S. Connaway. 2007. Screenagers’ and live chat reference: Living up to the promise. Scan 26(1), 31–39.

Radford, M. L., and M. K. Kern. 2006. A multiple-case study investigation of the discontinuation of nine chat reference services. Library & Information Science Research 28, 521–47.

Ronan, J., P. Reakes, and M. Ochoa. 2006. Application of reference guidelines in chat reference interactions: A study of online reference skills. College & Undergraduate Libraries 13(4), 3–31.

Smyth, J. B., and J. C. MacKenzie. 2006. Comparing virtual reference exit survey results and transcript analysis: A model for service evaluation. Public Services Quarterly 2(2/3), 85–105.

Steiner, S. K., and C. M. Long. 2007. What are we afraid of? A survey of librarian opinions and misconceptions regarding instant messenger. Reference Librarian 47(1), 31–50.

van Duinkerken, W., J. Stephens, and K. I. MacDonald. 2009. The chat reference interview: Seeking evidence based on RUSA's guidelines. New Library World 110(3/4), 107–21.

Walter, V. A., and C. Mediavilla. 2005. Teens are from Neptune, Librarians are from Pluto: An analysis of online reference transactions. Library Trends 54(2), 209–27.

Wan, G., D. Clark, J. Fullerton, G. Macmillan, D. E. Reddy, and J. Stephens. 2009. Key issues surrounding virtual chat reference model: A case study. Reference Services Review 37(1), 73–82.

Ward, D. 2004. Measuring the completeness of reference transactions in online chats: Results of an unobtrusive study. Reference & User Services Quarterly 44(1), 46–58.

———. 2005. Why users choose chat: A survey of behavior and motivations. Internet Reference Services Quarterly 10(1), 29–46.

Ward, D., and M. K. Kern. 2006. Combining IM and vendor-based chat: A report from the frontlines of an integrated service. portal: Libraries and the Academy 6(4), 417–29.

Westbrook, L. 2007. Chat reference communication patterns and implications: Applying politeness theory. Journal of Documentation 63(5), 638–58.

White, M. D., E. Abels, and N. Kaske. 2003. Evaluation of chat reference service quality. D-Lib Magazine 9(2).

Wikoff, N. 2008. Reference transaction handoffs: Factors affecting the transition from chat to e-mail. Reference & User Services Quarterly 47(3), 230–41.

Dissertations

Ford, C. E. 2002. An exploratory study of the differences between face-to-face and computer-mediated reference transactions. Doctor of Philosophy, Indiana University, Bloomington, Ind.

Harmeyer, D. 2007. Online virtual chat library reference service: A quantitative and qualitative analysis. Doctor of Education in Educational Technology, Pepperdine University, Los Angeles.

Hodges, R. A. 2006. The impact of collaborative tools on digital reference users: An exploratory study. Doctor of Philosophy, Florida State University, Tallahassee, Fla.

Huston, C. 2009. Reference librarians’ perceptions of chat reference: An exploration of the factors effecting implementation. Doctor of Philosophy, Capella University, Minneapolis, Minn.

Mon, L. M. 2006. User perceptions of digital reference services. Unpublished doctoral dissertation, University of Washington, Seattle, Wash.

Ozkaramanli, E. 2005. Librarians’ perceptions of quality digital reference services by means of critical incidents. Doctor of Philosophy, University of Pittsburgh, Pittsburgh, Pa.

Conference Papers

Connaway, L. S., M. L. Radford, and J. D. Williams. 2009. Engaging net gen students in virtual reference: Reinventing services to meet their information behaviors and communication preferences. Paper presented at the Fourteenth Annual National Conference of the Association of College and Research Libraries, Seattle, Wash.


Tables
Table 1

White's Digital Reference Service Framework


Broad Area / Category
1. Purpose of the service
   a. Mission, objectives, statement of purpose
   b. Parameters of the service: questions
   c. Parameters of the service: clients
2. Structure and responsibilities to the client
   a. Administration
   b. Staffing and training
   c. Hardware and software
   d. Ease of use, instructions to the client
   e. Responsibilities to the client
3. Core functions
   a. Query form
   b. Acknowledgement
   c. Question negotiation
   d. Question-answering process
   e. Response guidelines
   f. Coping with demand
   g. Archiving
4. Quality control
   a. Quality control
   b. Evaluation
   c. External recognition

Table 2

Journals and Number of Articles in the Data Set


Journal No. of Articles
Library & Information Science Research 6
Reference & User Services Quarterly 6
The Reference Librarian 5
Reference Services Review 4
Internet Reference Services Quarterly 3
Library Quarterly 3
portal: Libraries & the Academy 3
Public Services Quarterly 3
Journal of the American Society for Information Science and Technology 2
New Library World 2
The Electronic Library 2
Australian Academic & Research Libraries 1
College & Research Libraries 1
College & Undergraduate Libraries 1
D-Lib Magazine 1
Evidence Based Library & Information Practice 1
Information Research 1
Journal of Academic Librarianship 1
Journal of Documentation 1
Library Trends 1
Libri 1
Medical Reference Services Quarterly 1
New Zealand Library & Information Management Journal 1
Scan 1

Table 3

Data Sources


Data Source No. of Studies
Chat transactions 38
Surveys—users 15
Surveys—librarians 6
Interviews—librarians 4
Focus group 3
Website log data 2
Interviews—users 2
Delphi study 1
Library websites 1
Literature review 1
Think aloud 1
Surveys—nonusers 1

Table 4

Distribution of Data in the Framework


Category No. %
1c. Parameters of the service: clients 27 18.5
1b. Parameters of the service: questions 26 17.8
3d. Question-answering process 22 15.1
3e. Response guidelines 22 15.1
2b. Staffing and training 17 11.6
1a. Mission, objectives, statement of purpose 15 10.3
4b. Evaluation 6 4.1
3c. Question negotiation 5 3.4
2a. Administration 3 2.1
2d. Ease of use, instructions to the client 2 1.4
4a. Quality control 1 0.7
2c. Hardware and software 0 0.0
2e. Responsibilities to the client 0 0.0
3a. Query form 0 0.0
3b. Acknowledgement 0 0.0
3f. Coping with demand 0 0.0
3g. Archiving 0 0.0
4c. External recognition 0 0.0
TOTAL 146 100.0

Table 5

User Motivations—Integrated


Reason Average %*
Convenience/thought it was the quickest 48.5
Other/only place I know/other means not helpful/curiosity/serendipity 29.5
Library is too far away/other reference services were not available 8.8
Heard good things about it/recommended by others 8.5
Don't like asking questions in person/personal characteristics/habits 6.0

*Average percentages were calculated by adding the relevant percents and dividing by the number of addends.


Table 6

Categories of Question Types


Study No. of Transactions Institution Type Top 2 Reported Categories %
Arnold & Kaske (2005) 351 Academic Policy and procedure 41.2
  Specific search 19.2
Desai (2003) 140 Academic Specific search 45.0
  Ready reference 38.6
Fennewald (2006) 405 Academic Reference 72.0
  Where is... 13.0
Ford (2003)* 308 Academic Obtain specific source/holdings 17.9
  Research questions 14.6
Goda & Bishop (2008) 4154 Academic Policy/card NA
  Research NA
Harmeyer 333 Academic Research questions 33.9
  Library technology 17.7
Kwon (2007) 415 Public Circulation-related 48.9
  Subject-based research questions 25.8
Lee (2004) 47 Academic Accessing databases and electronic resources 43.0
  Administrative 19.0
  Finding known item 19.0
  Research and reference 19.0
Lupien & Rourke (2007) 600 Academic Specific search 41.0
  Policy and procedure 39.4
Marsteller & Mizzy (2003) 425 Academic Directional, policy, procedure 34.0
  Known item 28.0
Ward (2005) 345 Academic Couldn't find specific book or article 27.0
  Not sure where to start research 23.0
Ward & Kern (2006) 811 Academic Subject based research 37.3
  Information/directional 31.4

*We omit from Ford's data the question type tied for most frequently observed. Her data showed that 17.9% of the chat questions asked for information about chat service in general and came from librarians outside the academic community she studied. She attributes this high percentage to the fact that the academic library she studied was an early adopter of chat service.


Table 7

How the Questions are Handled: Complete Answers


Study Answer Completeness (%)
Kwon 2007 (n = 415)
  56.4 completely answered
  29 transferred or referred
  4.8 partial or no answer
  9.8 problematic ending
Meert & Given 2009 (n = 252 questions fielded by library staff)
  11 not answered in real time
Meert & Given 2009 (n = 225 questions fielded by nonlibrary staff)
  31 not answered in real time
Ward 2004 (n = 72)
  47 complete
  32 mostly complete
  12 mostly incomplete
  6 incomplete

Table 8

How the Questions are Handled: Referrals


Study Questions Referred (%)
Kwon 2007 29 (n = 415)
Wan et al. 2009 8.7 (n = 413)
Ward 2004 3 (n = 72)
Wikoff 2008 33 (n = 210)



© 2017 RUSA