Do You Want to Chat? Reevaluating Organization of Virtual Reference Service at an Academic Library

Maryvon Côté (maryvon.cote@mcgill.ca) is Acting Head, Nahum Gelber Law Library; Svetlana Kochkina (svetlana.kochkina@mcgill.ca) is Liaison Librarian, Nahum Gelber Law Library; and Tara Mawhinney (tara.mawhinney@mcgill.ca) is Liaison Librarian, Schulich Library of Science and Engineering, McGill University, Montreal, Quebec, Canada.

Authors’ contributions were equal; names are listed alphabetically.

Since their inception, virtual reference services have evolved considerably and are now a significant component of library services in many types of library environments. The current paper reports on a study undertaken at a research-intensive academic library that analyzed and evaluated a decade-old virtual reference service. The main goal of the study was to obtain a broad and comprehensive picture of the current service, grounded in the actual day-to-day provision, usage, and organization of the service. The group of librarians involved in the study developed a feasible, efficient, and adaptable methodology for assessing and evaluating a virtual reference service. This methodology, which combines qualitative and quantitative methods, can be applied to a similar evaluation of the service in any type of library environment.

Service History

McGill University is located in Montreal, Quebec, Canada, the largest francophone city in North America and home to people of many languages and cultures. McGill University is an English-speaking, research-intensive university with a student population of 39,500 enrolled in more than three hundred programs of study that include the social sciences, sciences, medicine, law, engineering, religion, and the humanities, as well as a strong continuing education program offering hundreds of courses in various areas of interest. McGill also has the highest percentage of PhD students of any Canadian research university.1 McGill University Library offers public services primarily using a liaison librarian model, in which designated librarians are responsible for meeting the reference, instructional, and collection needs of one or more departments. All branch libraries are located on the downtown campus, with the exception of one branch library located on the Macdonald Campus on the outskirts of the city. A single service point model is used in all branches, with library support staff responsible for answering questions at front-line service points and librarians on call for questions requiring professional skills to answer. Statistics are collected during select sampling weeks throughout the year using LibAnalytics. During the most recent sampling week (February 15–21, 2016), there were 1,929 questions asked in person, via email, and by phone at library service points and directly to librarians. Additionally, 129 chat and email questions were asked via the library’s virtual reference service during the same week.

Virtual reference service has become an important component of the reference services offered at McGill University Library. It is currently offered by fifty-six public services librarians from all branch libraries providing reference assistance in English and French to students, faculty, staff, and the general public. When the service was first introduced at McGill in 2006, QuestionPoint, an OCLC product, was selected as the virtual reference platform. At the time, QuestionPoint was one of the leading products on the market, thereby ensuring expeditious implementation of the service. The primary goal of virtual reference at McGill was the extension of reference services generally offered by phone or in person at service desks. Virtual reference service offers a highly visible access point to users in real time at their point of need. When the service was implemented, it was believed that users, particularly students, would find chat useful since they were already using this form of technology to communicate among themselves. It was also considered a means of offering “ready reference” rather than in-depth subject-specific assistance. It was decided to channel reference questions received via the central library email through the QuestionPoint platform as well. All public services librarians in branch libraries across the system were involved in answering email and chat questions received through the virtual reference platform. To ensure the quality of service, an initial training program for all public services librarians was organized. The implementation of the virtual reference service led to the revision of the library website and subject guides in order to provide better support to users and enable them to find needed resources. Since its implementation, the service has evolved with the subsequent inclusion of chat widgets (called Qwidgets) in selected library resources, including library catalogues, which increased the number of access points to the service.

Between 2009 and 2011, McGill University Library also used Meebo instant messaging software as an additional method of communication. This component of the virtual reference service was discontinued because of low usage and changes in the ownership of the software. The library also experimented with co-browsing, which would have allowed librarians to share the computer screen with the user; however, this practice was discontinued due to technical incompatibilities. Other aspects of the service have been considered over the years but not implemented, such as use of a knowledge base, text messaging, and consortium membership.

Context of the Current Study

In April 2014, the Office of the Dean of Libraries created a working group to assess various aspects of the virtual reference service as part of the library’s immediate priority initiatives for the 2014–2015 academic year. The working group was comprised of two branch library heads and two liaison librarians, including the virtual reference coordinator. The group was mandated to evaluate the quality of the service and the nature and content of the questions received. Based on feedback from librarians at McGill University Library, the mandate of the group was expanded to include an assessment of the QuestionPoint software, service hours, and possible collaboration with consortial partners. After discussing methods for collecting and analyzing the data with the Assessment Librarian over the summer of 2014, a study of virtual reference transactions was conducted by the committee members to assess the service and its staffing model in the fall of 2014. The report, entitled “Virtual Reference at McGill Library,” was completed and submitted to the library administration in spring 2015.2

Research Objective

The current paper reports the results of an evaluation of a virtual reference service in a research-intensive academic library. The main purpose of the study was not to evaluate specifically the quality of the answers provided in McGill University Library’s virtual reference service, but to assess the usage of the service and the current service model, which has undergone some administrative and technical modifications since its implementation in 2006. The objective was to obtain a broad and comprehensive picture of the current service, grounded in the actual day-to-day provision, usage, and organization of the service. The goal of the evaluation was to examine the general quality of the service provided, as measured through an analysis of the hours, software, and adequacy of practitioners’ expertise, among other factors, rather than through an analysis based on the quality of responses to individual transactions. In order to attain this objective, a number of research questions were identified and grouped around two common themes: service usage (i.e., who uses the service and how) and service provision (i.e., how the service is provided) (see table 1). Another goal was to suggest possible ways to improve and expand the service.

Literature Review

The volume of literature on virtual reference services attests to their growing popularity. This analysis of the literature starts from the premise that virtual reference services are important, especially in the context of their increasing popularity and reported general user satisfaction.3 Morais and Sampson note that “chat reference service is a very popular, heavily used, and appreciated service,” and Nicol and Crook observe that some libraries are seeing increased use of virtual reference services at the same time as statistics are showing decreased or flattening reference desk use.4 A systematic review of virtual reference services published in 2011 by Matteson, Salamon, and Brewster identified fifty-nine papers on the topic, the majority of which were from academic library settings. Their analysis of the literature concludes that expectations of virtual reference services are high, that the services are well received, and that they are used regularly.5 At McGill University Library, the team of researchers involved in this study concurs that this is a popular service, and the group was tasked with examining its quality to see if major changes to service provision were warranted. These changes included extending hours, having library assistants staff the service, offering the service with more than one person at a time, and joining a consortium. The goal of the current review of the literature, which focused primarily on evaluation or assessment of virtual reference services within academic libraries, was to examine these issues and formulate research questions based on previous research in this area.

The existing body of literature on this topic employs various methods for evaluating virtual reference service, including examining individual transcripts for quality control, ensuring quality through evaluation of practices and policies, and examining transcripts to identify patterns with the goal of improving service.6 At McGill University Library, there were no immediate concerns regarding the quality of the service. The group chose not to evaluate individual transcripts for quality because such a study would be selective in nature and might not be representative of the overall quality of service provided. As for evaluating practices and policies, McGill University Library has to date operated with little written documentation of its policies, so the working group opted not to evaluate the service using this method either. Instead, the group employed the third method, used in other recent studies: analyzing transcripts for patterns in question complexity and type in order to improve quality. The current study analyzes transcripts and other software-derived metrics to identify patterns in the types of questions asked and in user type, the percentage of questions that were McGill-specific, and the adequacy of service hours.

Evaluating the types of questions posed in the virtual reference environment can help improve the quality of service and can help determine alternative staffing possibilities. For example, Matteson, Salamon, and Brewster’s systematic review provides a table with the top question types from a variety of studies.7 There are questions from a variety of different categories, but research-based questions and known-item searching figure prominently. Morais and Sampson identified that “64 percent of questions were ready reference or instructional in nature; 25 percent sought a known item; 6 percent were policy questions; and 5 percent were related to technical problems.”8 However, other studies report receiving significant numbers of questions about policy or library accounts. For example, Armann-Keown, Cooke, and Matheson report their top categories as being those related to library materials (42 percent) and library accounts and circulation services (31 percent).9 Rawson et al. concur: although 48 percent of the questions they examined were specific search questions (not known-item searches), such as students needing articles on a topic, they also report a large number of policy-related questions.10 This finding implies that librarians staffing the service must be familiar not only with research-related questions but also with those relating to library policy matters and patron account information.

The level, or difficulty, of questions in a virtual reference environment has implications for its staffing. Chow and Croxton state that there is “a general perception . . . that online chat reference is suitable mostly for simple factual and directional but not reference questions.”11 Cabaniss’s analysis discovered that at the University of Washington Libraries, the majority of questions consisted of general information and known-item searches, queries that could be answered by graduate student assistants.12 However, other studies mention the extent to which instruction is taking place within the chat environment, suggesting that, in many cases, the service moves beyond simply answering factual questions and provides an experience to users that allows them to develop new skills. For example, Matteson, Salamon, and Brewster explain that there is frequently instruction taking place within the virtual reference environment, that users are receptive to instruction, and that librarians use techniques such as walking users through the steps in order to locate information.13 Moyo and Ward report similar findings.14 In fact, Moyo emphasizes that certain features of virtual reference, such as the availability of a transcript for the user to consult after the reference transaction and the option for the librarian to provide follow-up information to the user afterward via email, are more conducive to instruction than face-to-face desk reference service or instruction in a classroom setting.15

The previous literature is divided as to whether or not the service should be staffed by professional librarians. Several studies are in favor of librarians staffing the service, while others discuss ways of staffing with students and library support staff. Bravender, Lyon, and Molaro did a cost analysis of the virtual reference service at a medium-sized liberal arts university with a small percentage of graduate students and concluded that, with less than a quarter of questions requiring a librarian to answer, having librarians staff the service was not cost effective.16 However, other studies suggest that professional librarians’ skills are well suited to offering virtual reference service. In their systematic review, Matteson, Salamon, and Brewster assert that “Providing library service via chat technology requires competencies in both communication skills as well as reference skills,” and this statement could be interpreted as an endorsement for such a service model.17 Armann-Keown, Cooke, and Matheson highlight the importance of standardized staff competencies and ongoing training to ensure a consistent level of service.18 A recent study by Maloney and Kemp on the level of complexity of questions in a virtual reference environment provides a good discussion of different staffing models for virtual reference. Analyzing the complexity of virtual reference questions at one university library, they conclude that the complexity of questions asked via chat is higher than that of questions asked in person at a desk, and that many reference questions offer an opportunity to support the research process.19 These findings provide further evidence for staffing virtual reference services with librarians. Similarly, Morais and Sampson’s content analysis of chat questions from Georgetown Law Library concluded that the “sophisticated level of questions confirms that Georgetown’s practice of having professional librarians staff chat reference [was] the right decision” for their institution.20 The type of clientele an academic library supports is a factor to consider when determining who within the library should staff the virtual reference service.

A second staffing-related question that is discussed in the literature is the use of consortium services, with studies coming to different conclusions on whether consortium-based services or individual library-based services are best. For example, according to Rawson et al., users are satisfied with outsourced chat,21 whereas several studies favor having the service staffed by local librarians. Bishop and Torrence point out that although having less quality control “is a possible disadvantage of consortium participation given the local nature of chat reference,” there are advantages to consortium-based participation such as increased collegiality among institutions.22 Noting what percentage of questions requires local knowledge may help in decisions about whether or not to use a consortial model for staffing the service. Bishop and Torrence’s study analyzed transcripts to determine what percentage of questions required local knowledge to answer and noted that 23 percent of questions were local in nature, while a study from Auburn University Libraries identified that 60 percent of questions required local information to answer.23 Meert and Given’s study comparing the quality of answers provided by the University of Alberta librarians and those in the consortium determined that the local staff met service standards 94 percent of the time, compared to 82 percent of the time for consortia librarians, and that local staff were able to answer 89 percent of questions in real time compared to the consortia librarians who were able to answer 69 percent of questions in real time.24 These findings have implications for the quality of the service. Powers et al.’s article discusses an academic library’s move from consortial to local service in part to ensure high quality service and also to build relationships with faculty and students on campus. In their literature review, they note that there are risks associated with consortium-based virtual reference service, stating that “there have been a number of articles assessing the quality of local chat reference offered within consortia, all coming to the same general consensus—quality of service for local questions is sacrificed in consortial reference.”25 Morais and Sampson’s analysis of their chat service led to a similar assertion that the service should be staffed with professional librarians familiar with the local collection.26 Bishop’s work identifies that lack of access to local information can be an impediment to quality virtual reference service in a consortial environment, but can be mitigated by modifying libraries’ policies related to sharing local information and enhancing training of consortial staff.27

Another area of interest investigated in the literature and related to staffing is the number of questions that are referred rather than responded to directly. Matteson, Salamon, and Brewster’s systematic review reports that the percentage of referred questions in the four studies that investigate referrals varies widely from 3 to 33 percent.28 Two more recent studies, not discussed in the systematic review, show the percentage of referred questions to be in this range, at 13 percent29 and 18 percent.30 The percentage of referred questions is important to investigate since a high rate of referred questions could mean that the quality of the service is not as high as would be desired, and may suggest that the expertise of staff is not adequate for answering questions. High levels of referred questions could also adversely affect the quality of the service, as referred questions likely take longer to be answered than those answered by the staff member on duty.

Methods

For the present study, several methods were chosen and used in order to answer the research questions:

  1. Analysis of a sample of reference transactions to determine the main user groups of the service, the most often used component of service (chat or email), and the effectiveness of widgets embedded in various library website pages, catalogues, and databases as additional access points to the service
  2. Qualitative analysis of the same sample of chat and email transactions to discover the level of complexity of the questions, the recurring themes of the questions, the subject areas of the questions, and the adequacy of the level of expertise of librarians staffing the service
  3. Analysis of the usage of the service to understand if the actual staffing model is adequate for the service
  4. Analysis of a sample of data automatically collected in the platform (number of questions received) to assess the adequacy of the offered virtual reference service in terms of service hours
  5. Analysis of internal policy documents related to the virtual reference service
  6. Comparison of the main features of widely used virtual reference platforms according to a predetermined set of requirements

In order to perform the first three analyses above, virtual reference transactions from July to October 2014 were sampled. The sample consisted of chat and email transactions from the second week of each month, of which there were 555 in total. After blank and duplicate questions were removed, the total number of questions to be analyzed amounted to 510. The questions were divided among four coders, who analyzed the transactions and recorded the data in a Microsoft Excel spreadsheet. The transactions were analyzed and coded using a coding scheme developed by the working group (see appendix). To ensure consistency of analysis and inter-coder reliability, a random sample of previously coded questions was recoded by another member of the group.
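The mechanics of this preparation step are simple enough to script. The following is a minimal sketch, in Python with pandas, of how the deduplication and the inter-coder subsampling could be reproduced; the file and column names are hypothetical, not those of the working group’s actual spreadsheet.

```python
# Minimal sketch of the sample-preparation step described above.
# File and column names are hypothetical; the study recorded its
# coding in a Microsoft Excel spreadsheet.
import pandas as pd

# Load the exported July-October 2014 transactions.
transactions = pd.read_excel("vr_transactions_2014.xlsx")

# Remove blank and duplicate questions (555 -> 510 in the study).
transactions = transactions.dropna(subset=["question_text"])
transactions = transactions.drop_duplicates(subset=["question_text"])

# Draw a random subsample for recoding by a second coder,
# supporting the inter-coder reliability check.
recode_sample = transactions.sample(frac=0.1, random_state=42)
recode_sample.to_excel("intercoder_recode_sample.xlsx", index=False)
```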

For each question, the researchers noted the data regarding reference transactions that were automatically collected by the software, such as means of communication (either chat or email), means of reception (web form or widget), whether or not a question was referred to another librarian or a staff member, and user type. Also, researchers analyzed the content of the transaction to determine theme, subject area, and the level of complexity (basic, intermediate, advanced) of the questions. The themes of the questions (see appendix) emerged from discussions with the librarians regularly staffing the service. The subject areas were defined according to the existing breakdown of the subjects by major disciplinary areas according to the McGill University Library website. The definitions of each level of complexity were aligned with the definitions used in the reference statistics software for recording in-person, email, and phone reference transactions, as follows:

  • basic: responds to a simple question using library information sources (catalogue, website, ready reference);
  • intermediate: assists users with intermediate-level questions or support, may require use of several information sources, and often involves user instruction;
  • advanced: responds to a user’s question using advanced expertise in the service area. Interactions are often multifaceted or interdisciplinary and subject specialists may need to be consulted.

After completing the first stage of data collection, the researchers examined the data to determine if the actual staffing model was adequate for the virtual reference service. In order to understand who should staff the service (librarians, library assistants, or student employees), the distribution of questions by level of complexity was examined. To answer the question of whether or not librarians have an adequate level of expertise to answer the majority of questions asked by library users, the number of referred questions (those reassigned to another librarian or to a service account) was compared to the number of questions answered by librarians who began the reference transaction. A high rate of referred questions could negatively affect user experience of the service and user perception of service quality, and signal a needed change in the staffing model or further training of the librarians providing the service.
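As an illustration, both measures described here can be computed from the coded spreadsheet in a few lines. The sketch below assumes hypothetical column names (“complexity” and “referred”) corresponding to the coded fields listed in the appendix.

```python
# Sketch of the staffing-model analysis; "complexity" and "referred"
# are hypothetical column names for the coded fields in the appendix.
import pandas as pd

coded = pd.read_excel("vr_transactions_coded.xlsx")

# Distribution of questions by level of complexity
# (basic / intermediate / advanced).
print(coded["complexity"].value_counts())

# Share of questions referred to another librarian, a support staff
# member, or a service account.
referral_rate = (coded["referred"] == "Y").mean()
print(f"Referral rate: {referral_rate:.0%}")
```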

To be able to determine if a consortial model of staffing the service should be considered in the future, two factors were considered:

  • the distribution of questions specific to McGill University Library resources versus general questions. If there were many general questions, this might warrant use of a consortial model.
  • the number of questions asked by members of the McGill community compared to the number of questions received from the general public. Given that the literature shows chat services to be an important form of outreach to the campus community, a high percentage of questions from within the institution could weigh against use of a consortial model.

For the analysis of the adequacy of virtual reference service hours (during the academic year, 10 a.m.–5 p.m. Monday to Friday for email and chat; 10 a.m.–6 p.m. Saturday and Sunday for email only), two typically busy months during the winter and fall terms of the academic year were sampled: February 2014 (twenty days of service) and October 2014 (twenty-two days of service). The data for the analysis were collected from the automatically generated monthly reports of transactions, with daily and hourly breakdowns of the number of requests received via both chat and email. The analysis had two goals: to determine whether a significant number of email questions and chat requests were received before and after service hours on weekdays, and whether a significant number of chat requests were received during weekends, when only the email service is provided, either of which could suggest that an extension of service offerings is warranted. The average number of emails received per hour during service hours was compared to the average number of emails received in the hours immediately preceding and following the service hours. To determine the need to extend weekend service to include chat, the total number of chat and email requests received during weekend days was calculated and compared with the average number of email and chat transactions occurring on weekdays.
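The sketch below illustrates how these comparisons could be computed, assuming a hypothetical export with one timestamped row per request; QuestionPoint’s monthly reports already supply daily and hourly breakdowns, so this is only an approximation of the same calculation.

```python
# Sketch of the service-hours analysis; assumes a hypothetical export
# with one row per request and a "received_at" timestamp column.
import pandas as pd

requests = pd.read_excel("vr_requests_feb_oct_2014.xlsx")
requests["received_at"] = pd.to_datetime(requests["received_at"])
requests["hour"] = requests["received_at"].dt.hour
requests["weekday"] = requests["received_at"].dt.dayofweek  # 0 = Monday

weekdays = requests[requests["weekday"] < 5]
# Approximate day count: days on which at least one request arrived.
n_days = weekdays["received_at"].dt.date.nunique()

# Average requests per hour during service hours (10 a.m.-5 p.m.)
# versus the hours immediately before and after.
in_hours = weekdays[weekdays["hour"].between(10, 16)]
adjacent = weekdays[weekdays["hour"].isin([9, 17])]
print("Service-hours average per hour:", len(in_hours) / (7 * n_days))
print("Adjacent-hours average per hour:", len(adjacent) / (2 * n_days))

# Total weekend traffic, to gauge demand for weekend chat service.
weekend = requests[requests["weekday"] >= 5]
print("Weekend requests:", len(weekend))
```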

To determine if the current platform serves the needs of the service, a list of requirements and desired software and platform features was established. Then, five virtual reference platforms used widely by North American academic institutions, consisting of QuestionPoint (OCLC), LibChat (Springshare), Mosio, LivePerson, and LibraryH3lp (Nub Games), were compared to determine if any of them offered distinctive advantages over the platform that is currently used by the McGill University Library (QuestionPoint), and if there would consequently be advantages in implementing a different platform. The group created an evaluation grid (see table 2) with twenty criteria to objectively analyze the chosen platforms. The grid was inspired by a similar grid used by members of the CREPUQ-REFD group (Groupe de travail sur la référence à distance de la Conférence des recteurs et des principaux des universités du Québec) but was modified to reflect the goal of the report and to integrate new developments such as mobile apps and open-source software.
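As a toy illustration of how such a grid lends itself to a simple tally, the snippet below scores three of the platforms on a small fragment of the criteria from table 2; any weighting of criteria would be the evaluator’s own choice and is not part of the working group’s method.

```python
# Toy tally over a fragment of the evaluation grid (see table 2);
# criteria are abbreviated from the table, and weighting is omitted.
grid = {
    "QuestionPoint": {"chat_email_text": True, "mobile_app": False,
                      "shared_queue": True, "co_browsing": True},
    "LibChat":       {"chat_email_text": True, "mobile_app": True,
                      "shared_queue": True, "co_browsing": False},
    "Mosio":         {"chat_email_text": True, "mobile_app": False,
                      "shared_queue": False, "co_browsing": False},
}

for platform, criteria in grid.items():
    met = sum(criteria.values())
    print(f"{platform}: {met} of {len(criteria)} criteria met")
```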

Findings and Discussion

The results of the evaluation undertaken with the methodology described above revealed several trends, most of which are in accordance with the previous literature. The main goals of the evaluation were to analyze the usage of the virtual reference service and the adequacy of the current service model. The analysis of the data automatically collected in the virtual reference platform and the qualitative analysis of the sample of chat and email transactions demonstrated the trends discussed below. This analysis allowed the group to make some recommendations with regard to future improvements. A similar analysis undertaken by other libraries providing a virtual reference service would further their understanding of the functioning, day-to-day provision, usage, and organization of the service and allow them to make recommendations for possible ways to improve and develop it.

Service Usage

The service is popular and the trend from 2006 to 2014 (see figure 1) shows an overall increase in service usage, which indicates that the virtual reference service should continue to be provided, supported, and actively promoted to incoming and continuing students and staff. The data also show a shift in the percentage of chats versus emails over time, with chat becoming increasingly important (see figure 1). This can be attributed to the implementation of additional access points to the chat service (e.g., via the Qwidget) or users’ increased levels of familiarity with chat services. The data demonstrate clear advantages of maintaining both components of the service (chat and email), as well as having additional access points to the service (widgets embedded in the catalogues and databases), and suggest possibly adding other access points to the virtual reference service. Due to the large size of the analyzed sample, these findings may be transferable to other academic libraries of similar scale and could assist them in making an informed decision on which components of virtual reference service should be implemented or retained.

Regarding the main users of the service, members of the university community (students, faculty, staff, and alumni) were responsible for the majority of the questions: 79 percent in total, with members of the general public accounting for a significantly smaller share of questions (86 questions, 17 percent) and 4 percent of questions being of unknown origin. Students constituted the largest category of service users (334 questions, 65 percent of the total number of analyzed questions), with other members of the university community being less significantly represented: faculty (39 questions, 8 percent), staff (9 questions, 2 percent), and alumni (18 questions, 4 percent). Conducting a similar analysis at any type of library would allow its librarians to evaluate how effectively the service reaches each user group and could suggest future marketing and promotion directions, for example, targeting more actively a user group that shows low levels of service usage.

Service Provision

Two factors that can be used to determine the feasibility and applicability of a consortial model for virtual reference service in a particular library are usage of the service by user type and the types of questions received. In the case of McGill University Library, the analysis of the transcripts revealed that the majority of the questions (69 percent in total) were specific to local resources and services (see figure 2). If a similarly high level of local specificity of both the user population and the themes of the questions is demonstrated by the analysis conducted at any library providing a virtual reference service, a consortial model may not be advisable, as it could have significant implications for the maintenance of service quality. It would be challenging for the staff of other libraries participating in a consortium to provide high-quality service in circumstances where the majority of both questions and users are specific to a particular institution. As discussed in the literature review, the adoption of a consortial service model may result in longer waiting times for users due to an increased number of referred questions, and possibly in a higher number of incorrect answers. These decreases in service quality could be even more significant for an institution where the main user group is from within the institutional community and a high percentage of questions are locally specific.

Since there is debate in the literature about whether librarian-level expertise is required for answering questions or whether library assistants and students could participate in delivering the service, it was important to analyze received questions to determine their level of complexity. In the analyzed sample, the level of questions showed a nearly equal distribution between 250 basic questions and 249 intermediate questions (those showing evidence of information literacy instruction or question negotiation), with only 11 advanced queries. Due to this almost even split between basic and intermediate questions, the recommendation was made to keep librarian-only staffing of the service.

This decision to keep librarian-only staffing was also corroborated by the analysis of the number of referred questions. The majority of questions were answered by the librarian who began the transaction, with only 17 percent of questions being recorded as referrals to another librarian, a support staff member, or a service email. This relatively low number of referrals suggests that librarians have a level of expertise that is more than adequate to answer most of the questions. This staffing model has the benefit of quick response time, which might not be the case if the service were staffed by students or library assistants, who might not have sufficient expertise to answer most intermediate-level questions. In the context of an academic institution, it may be more appropriate to keep librarian-only staffing, as each chat interaction can be used as an opportunity for information literacy instruction, as well as for building and strengthening relationships with faculty and students. In addition, changing the staffing model could require a significant reallocation of the financial and human resources required for the service and the establishment of an adequate training program to ensure that high service-quality standards are met, which may not be possible for academic libraries in the current economic climate.

Another finding of this study indicates that a significant number of users have difficulty locating known items, with 22 percent of questions falling into this category. Generally, these findings can be interpreted as an indication that information and instructions on how to locate known items, sometimes considered to be too basic and thus not emphasized, should be reinforced in information literacy instruction and on the library website. For example, having step-by-step instructions on known-item searching available via the library website would be one way of enhancing existing services.

The majority of the analyzed transactions (324 questions, 64 percent) pertain to a specific disciplinary area, with the rest falling into a non-attributed or generic category (see figure 3). The high level of subject-specificity of the questions could indicate the need, in many cases, for information literacy instruction to take place during chat interactions, which can be better provided by librarians than by less skilled staff. This type of analysis by subject area is useful and could be considered by library staff within the organization to help make informed decisions regarding improvements to website design, information literacy instruction, collection development, and reference services in respective disciplines.

Conducting an analysis of user requests received outside of the present service hours generates the data necessary for making an informed decision with regard to extending service hours. Extending service hours should be undertaken only if warranted by a high number of received requests and if staffing permits. In this study, the analysis of the chat and email requests received outside of the current service hours did not demonstrate any significant after-hours or weekend traffic and therefore provided no evidence that the service hours should be extended or otherwise changed.

Analysis of Internal Policy Documents

In analyzing the library’s internal policy documents related to the virtual reference service, the working group found that policy documents with explicit service quality guidelines are lacking and should be developed in order to further enhance service quality. This is not unusual, as many academic libraries are in a similar situation, as identified by Pinto and Manso, who state that “most virtual reference services lack the service and quality policies that can help them to develop efficiently.”31 The systematic review by Matteson, Salamon, and Brewster also notes that user satisfaction increases when certain Reference and User Services Association (RUSA) guidelines are adhered to, highlighting that developing policies and procedures around reference interactions is important and can improve service quality.32 Therefore, developing these policy documents, perhaps based on the RUSA document “Guidelines for Implementing and Maintaining Virtual Reference Services,” is a valuable step for any academic library providing virtual reference services.33

Analysis of Software

Five virtual reference service platforms used widely by academic libraries (QuestionPoint, LibChat, Mosio, LivePerson, and LibraryH3lp) were analyzed, employing the grid developed for this purpose by the working group (see table 2). According to the analysis, the three current leading virtual reference software platforms in the North American academic library market that provide users with options to interact with librarians via chat and email are QuestionPoint, LibChat, and LivePerson. For QuestionPoint, text messaging involves integration with separate software provided by Mosio, while LivePerson does not provide a text messaging option. LibraryH3lp and Mosio are not complete virtual reference service solutions. LibraryH3lp has some significant drawbacks, such as the lack of an integrated email service and the need for some in-house configuration. Mosio has limited appeal as a stand-alone platform because it is primarily geared toward texting and lacks some basic features available in other systems. QuestionPoint and LivePerson have existed longest on the market, although LivePerson was initially geared toward the corporate market. LibChat is newer, having launched in 2012, and provides functionality similar to QuestionPoint and LivePerson. One interesting feature of LibChat is its integration with other Springshare products, such as LibAnalytics, to collect valuable statistics on reference interactions. All of the platforms offer the possibility to integrate widgets into library catalogues and databases. Based on this analysis of software features, the working group proposed enhancing the existing virtual reference service by integrating a text messaging component into the existing range of access points to the service. Under current conditions, this could be achieved by integrating text messaging software (e.g., Mosio) within the current platform (QuestionPoint).

In general, integrating new components into the existing platform is preferable as a short-term solution for any library that would like to enhance its current virtual reference service offerings and provide more access points, as it does not require a large amount of resources. As a long-term solution for improving and developing a virtual reference service in any type of institution, regular trials of the major competitors of the platform in use should be undertaken in order to evaluate the benefits and disadvantages of their systems. However, the implementation of a different virtual reference platform, especially in a multi-branch library system, could be recommended only if a competitor offered clear advantages over the current platform, as a migration would require significant and time-consuming changes to the service.

The current analysis examined a virtual reference service in an academic library context and determined that the service provision model is meeting user needs. The current staffing model ensures that staff members covering the service are able to answer most queries, with question level being evenly split between basic and intermediate, and only 17 percent of questions being referred. Current service hours are meeting needs, with few questions coming in during non-service hours. All the elements of the current service (i.e., email and chat, as well as widgets) are being used and would be required should a new virtual reference service platform be chosen in the future. Possible areas of improvement include developing policies and procedures around reference interactions to ensure quality, providing more web or in-person instruction on known-item searching (and other areas where there are frequently asked questions), and incorporating newer technologies such as text messaging to improve the service. Improvements such as these will ensure that the service remains responsive and relevant to users in the decade to come.

Conclusion

This study examined the model of usage and provision of a non-consortium-based virtual reference service staffed by librarians from all branches of an academic library within a research-intensive university environment. There are several areas that could be examined in the future in order to gain a broader perspective on the virtual reference service in any type of library, for example, surveying users of the service. Although, as mentioned previously, many studies emphasize that user satisfaction with virtual reference services is generally very high, surveying users directly could identify specific areas for improvement that have not been identified thus far. Another further step would be to consider ways of using the data collected through virtual reference interactions to inform website design, structure, and content organization, as well as the design of new library services or the improvement of existing ones.

The findings of this study will be useful to academic libraries in considering the place of virtual reference services among their other reference services. Due to the rapidly changing nature of this field, findings of studies undertaken even five years ago might show a different picture from the present due to lower levels of awareness and uptake of the service at the time. Also, given that there is a lack of consensus in the literature with regard to the many staffing options for virtual reference services, the current study builds on the literature by providing an analysis of various factors an academic library should consider in deciding on an appropriate staffing model, such as whether or not the service should be staffed by librarians exclusively and whether or not a consortium-based system would best serve its users.

The current paper demonstrates how a current virtual reference service model can be efficiently evaluated by a local working group comprised of librarians who staff the service. The methods developed for the project can be easily adapted and applied for assessing and evaluating the service in any type of library. The current study builds on the literature by developing a new methodology for analyzing the service that combines the use of automatically collected data and a qualitative analysis of a sample of reference transactions. This method could be useful to other libraries for analyzing their own virtual reference service in order to determine the adequacy of the service provision model in relation to the type and level of questions they receive and their main user groups. Analyzing a virtual reference model of provision and service usage informs a local library community on the current state of the service, produces a document that could be used in the training of librarians or other staff participating in the service, and gives directions and recommendations for future development of the service.

Acknowledgements

Our sincere appreciation to our colleagues Chris Lyons, Lonnie Weatherby, and Robin Canuel for their valuable contributions and feedback.

References

  1. “About McGill,” McGill University, accessed July 18, 2016, www.mcgill.ca/about.
  2. Maryvon Côté et al., “Virtual Reference at McGill Library,” April 20, 2015, www.mcgill.ca/library/files/library/virtual_reference_at_mcgill_library_report_final.pdf.
  3. Miriam L. Matteson, Jennifer Salamon, and Lindy Brewster, “A Systematic Review of Research on Live Chat Service,” Reference & User Services Quarterly 51, no. 2 (2011): 172–90, http://dx.doi.org/10.5860/rusq.51n2.172.
  4. Yasmin Morais and Sara Sampson, “A Content Analysis of Chat Transcripts in the Georgetown Law Library,” Legal Reference Services Quarterly 29, no. 3 (2010): 165–78, 177; Erica C. Nicol and Linda Crook, “Now It’s Necessary: Virtual Reference Services at Washington State University, Pullman,” Journal of Academic Librarianship 39, no. 2 (2013): 161–68, 165.
  5. Matteson, Salamon, and Brewster, “Live Chat Service,” 185.
  6. María Pinto and Ramón A. Manso, “Virtual References Services: Defining the Criteria and Indicators to Evaluate Them,” Electronic Library 30, no. 1 (2012): 51–69; Vera Armann-Keown, Carol A. Cooke, and Gail Matheson, “Digging Deeper into Virtual Reference Transcripts,” Reference Services Review 43, no. 4 (2015): 656–72; Krisellen Maloney and Jan H. Kemp, “Changes in Reference Question Complexity Following the Implementation of a Proactive Chat System: Implications for Practice,” College & Research Libraries 76, no. 7 (2015): 959–74, http://dx.doi.org/10.5860/crl.76.7.959.
  7. Matteson, Salamon, and Brewster, “Live Chat Service,” 178.
  8. Morais and Sampson, “A Content Analysis of Chat Transcripts,” 166.
  9. Armann-Keown, Cooke, and Matheson, “Digging Deeper into Virtual Reference Transcripts,” 660.
  10. Joseph Rawson et al., “Virtual Reference at a Global University: An Analysis of Patron and Question Type,” Journal of Library & Information Services in Distance Learning 7, no. 1–2 (2012): 93–97, 95.
  11. Anthony S. Chow and Rebecca A. Croxton, “A Usability Evaluation of Academic Virtual Reference Services,” College & Research Libraries 75, no. 3 (2014): 309–61, 312, http://dx.doi.org/10.5860/crl13-408.
  12. Jason Cabaniss, “An Assessment of the University of Washington’s Chat Reference Services,” Public Library Quarterly 34, no. 1 (2015): 85–96, 92, http://dx.doi.org/10.1080/01616846.2015.1000785.
  13. Matteson, Salamon, and Brewster, “Live Chat Service,” 185.
  14. Lesley M. Moyo, “Virtual Reference Services and Instruction,” The Reference Librarian 46, no. 95–96 (2006): 213–30; David Ward, “Measuring the Completeness of Reference Transactions in Online Chats: Results of an Unobtrusive Study,” Reference & User Services Quarterly 44, no. 1 (2004): 46–56.
  15. Moyo, “Virtual Reference Services and Instruction,” 217.
  16. Patricia Bravender, Colleen Lyon, and Anthony Molaro, “Should Chat Reference Be Staffed by Librarians? An Assessment of Chat Reference at an Academic Library Using Libstats,” Internet Reference Services Quarterly 16, no. 3 (2011): 111–27, 125.
  17. Matteson, Salamon, and Brewster, “Live Chat Service,” 185.
  18. Armann-Keown, Cooke, and Matheson, “Digging Deeper into Virtual Reference Transcripts,” 668.
  19. Maloney and Kemp, “Changes in Reference Question,” 966–72.
  20. Morais and Sampson, “A Content Analysis of Chat Transcripts,” 176.
  21. Rawson et al., “Virtual Reference at a Global University,” 94.
  22. Bradley W. Bishop and Matt Torrence, “Virtual Reference Services: Consortium Versus Stand-Alone,” College & Undergraduate Libraries 13, no. 4 (2007): 117–27, 126.
  23. Bishop and Torrence, “Virtual Reference Services,” 122; JoAnn Sears, “Chat Reference Service: An Analysis of One Semester’s Data,” Issues in Science & Technology Librarianship 32 (2001), http://dx.doi.org/10.5062/F4CZ3545.
  24. Deborah L. Meert and Lisa M. Given, “Measuring Quality in Chat Reference Consortia: A Comparative Analysis of Responses to Users’ Queries,” College & Research Libraries 70, no. 1 (2009): 71–84, 82, http://dx.doi.org/10.5860/crl.70.1.71.
  25. Amanda C. Powers et al., “Moving from the Consortium to the Reference Desk: Keeping Chat and Improving Reference at the MSU Libraries,” Internet Reference Services Quarterly 15, no. 3 (2010): 169–88, 170.
  26. Morais and Sampson, “A Content Analysis of Chat,” 165.
  27. Bradley W. Bishop, “Can Consortial Reference Partners Answer Your Local Users’ Library Questions?” Portal 12, no. 4 (2012): 355–70, 356, http://dx.doi.org/10.1353/pla.2012.0036.
  28. Matteson, Salamon, and Brewster, “Live Chat Service,” 179.
  29. Firouzeh F. Logan and Krystal Lewis, “Quality Control: A Necessary Good for Improving Service,” Reference Librarian 52, no. 3 (2011): 218–30, 225.
  30. Armann-Keown, Cooke, and Matheson, “Digging Deeper into Virtual Reference Transcripts,” 666.
  31. Pinto and Manso, “Virtual References Services,” 64.
  32. Matteson, Salamon, and Brewster, “Live Chat Service,” 178.
  33. “Guidelines for Implementing and Maintaining Virtual Reference Services,” RUSA (Reference and User Services Association), revised 2009, accessed July 18, 2016, www.ala.org/rusa/files/resources/guidelines/virtual-reference-se.pdf.

Appendix. Question Coding Scheme

  1. Chat or email
  2. Received via Qwidget: Y/N
  3. Referred: Y/N
  4. Level of questions:
    • Basic: responds to a simple question using library information sources (catalogue, website, ready reference)
    • Intermediate: assists users with intermediate-level questions or support, may require use of several information sources, and often involves user instruction
    • Advanced: responds to a user’s question using advanced expertise in the service area. Interactions are often multifaceted or interdisciplinary and subject specialists may need to be consulted
  5. Theme of questions:
    • Availability of McGill University Library services
    • Issue with access to McGill e-resources
    • Reference/research
    • Loans/renewals of McGill borrowed items
    • Known item searching in McGill catalogues
    • Other
  6. User type:
    • McGill student
    • McGill faculty
    • McGill alumni
    • McGill staff
    • Non-McGill
    • Don’t know
  7. Subject Area:
    • Archives
    • Agriculture and environmental sciences
    • Education
    • Engineering and science
    • Health and biological sciences
    • Humanities and art
    • Law
    • Management and business
    • Music
    • Social sciences
    • Rare books and special collections
    • History of medicine
    • Don’t know/not applicable
Figure 1. Service usage

Figure 2. Question types

Figure 3. Question subjects

Table 1. Research questions. To evaluate the virtual reference service and achieve the goals outlined above, the following research questions and sub-questions were formulated.

  1. How is the service used? Are there any trends that can be discovered?
    • Which component of the service (chat versus email) is used the most?
    • Are the widgets embedded in the catalogues, website, and databases effective as additional access points to the service?
    • Who are the main users of the service?
    • What is the level of complexity of the questions?
    • Do the questions reveal any frequently repeated themes?
    • What are the disciplinary areas of the questions?
    • Are questions primarily answered by the librarians who begin the transactions or are they referred to another librarian or support staff member?
  2. Is the current service model adequate for the service?
    • Who should staff the virtual reference service?
    • Do librarians have an adequate level of expertise to be able to answer the majority of received questions, including loans-related questions?
    • Should the McGill University Library implement a consortial model?
    • Are the service hours adequate?
  3. Is the currently used virtual reference platform adequate for meeting the needs of the service?
    • Does the current platform fulfil the established set of requirements?
    • How does the current platform compare against its four major competitors?

Table 2. Comparison of software

| Main Characteristics | QuestionPoint (OCLC) | LibChat (Springshare) | Mosio | LivePerson | LibraryH3lp (Nub Games) |
| --- | --- | --- | --- | --- | --- |
| Integration of chat, email, and text messaging | Yes (texting only with Mosio) | Yes | Yes (but primarily for text messaging) | No (no text messaging) | No (no email) |
| Mobile app | No (but supported on mobile devices) | Yes | No (but supported on mobile devices) | Yes | No (but supported on mobile devices) |
| Possibility to use institutional scripts | Yes | Yes | Yes | Yes | Yes |
| Possibility to assign questions | Yes | Yes | Yes | Yes | Yes |
| Queue shared by librarians | Yes | Yes | No | Yes | Yes |
| Possibility to use widgets in databases and catalogue | Yes | Yes | Yes | Yes | Yes |
| Transcript sent to the user after the chat | Yes | Yes | No | Yes | Yes |
| Co-browsing | Yes (but has technical difficulties) | No | No | No (but has desktop sharing) | No |
| Built-in user survey capabilities | Yes | Yes | No | Yes | Yes |
| Consortia use | Yes | Yes | No | Yes | Yes |
| Need for users to download a plugin | No | No | No | No | No |
| Transaction transcripts saved | Yes | Yes | Yes | Yes | Yes |
| Availability of technical support and troubleshooting | Yes | Yes | Yes | Yes | Yes |
| Hosting on the provider’s server | Yes | Yes | Yes | Yes | Yes |
| Open source | No | No | No | No | No (but was open source until recently) |
| Possibility to generate statistical reports | Yes | Yes | Yes | Yes | Yes |
| Possibility to assign levels of access | Yes | Yes | Yes | Yes | Yes |
| Reputation on the market | Good | Good | Unclear | Good | Good |
| Longevity on the market | Since 2002 | Since 2012 | Text service since 2007, full service since 2012 | Since 1998 | Since 2008 |
| Mostly used by (public libraries/academic libraries/private sector) | Public and academic | Public and academic | Primarily private sector and medical institutions | Public, academic, and private sector | Public and academic |
