
A Controlled Vocabulary for an Electronic Resources Problem Reporting System

Creation, Implementation and Assessment

Anita K. Foster (foster.1037@osu.edu) is the Electronic Resources Officer at The Ohio State University Libraries.

Manuscript submitted July 28, 2020; returned to author for revision August 7, 2020; revised manuscript submitted October 5, 2020; accepted for publication December 8, 2020.

The Ohio State University Libraries’ Serials and Electronic Resources team tracked reports of problems with electronic resources through a ticketing system, but had not used the system’s functions to articulate the work involved in supporting such resources. When a new Electronic Resources Officer was hired, the librarian reviewed the statistics provided to management and identified an opportunity to more fully document reported problems and staff effort. With the help of team members, a mechanism was created to highlight different types of problems through the application of a controlled vocabulary developed specifically for that environment. After the vocabulary had been in use for some time, the Electronic Resources Officer evaluated the terms for efficacy and for how the vocabulary enabled analysis of the troubleshooting process. That analysis led to changes in how staff were involved in the workflow, resulting in faster responses and more consistent communication of information to patrons and vendors. This paper describes the process of developing the controlled vocabulary, the insights found following implementation, and the changes to the workflow that came from that analysis.

In an ideal world, access to electronic resources (e-resources) would be as straightforward and stable as it typically is for print materials. Unfortunately, access to e-resources can fluctuate or behave unexpectedly depending on the path a user takes to reach a site. Many factors affect the availability of a resource: Was the subscription paid on time? Were there publisher or platform changes? Were there changes on campus, such as network updates, that might affect off-campus authentication? The management of e-resources is a continual process in which vigilance is necessary to keep access available to end users. When libraries managed fewer e-resources, it was possible to monitor their performance, but as libraries invested more of their budgets in them and as portfolios of products grew, it quickly became difficult for most libraries, regardless of size, to monitor access regularly. Libraries had to consider where effort was best spent: being proactive and dedicating staff time to checking platforms regularly, as described in Mortimore and Minihan’s paper on an e-resources auditing program, or being reactive and diagnosing problems once they have been reported.1 While many libraries may try to do both, proactively focusing on the resources that regularly have problems and attending to others only when a problem is reported, the author’s experience is that most staff effort is realistically expended on being reactive, addressing problems as they are identified. The drawback of being reactive, however, is that it can be difficult to recognize when a larger underlying problem is occurring, one that, once solved, would reduce the number of individual reports.

Librarians use many methods to manage the reporting of problems with e-resources. Email is often used, but trouble ticket systems are also common. Trouble ticket systems provide ways to track the status of problems, reduce the amount of email communication, and allow trouble ticket work to be shared by a team. Additionally, trouble ticket systems have reporting features that can be used to describe the effort of the staff fixing problems. Features may include average time to completion, who managed specific tickets, and completion statistics, all of which detail the work involved in supporting e-resources. Some systems also allow tickets to be tagged by type of problem.

At The Ohio State University Libraries, the Serials and Electronic Resources (SER) team, a seven-person team consisting of five staff and two faculty librarians, uses a ticketing system to report e-resource problems, whether they involve journals, books, or databases, and to resolve any type of problem, including access, cataloging (e.g., missing or inaccurate records), or holdings coverage. Since 2013, the team has used Atlassian’s JIRA Project Management software to manage trouble tickets. When first implemented, few of the features available within JIRA were used in the troubleshooting workflow, and the previous team manager, an electronic resources librarian, used basic reporting functions to report trouble ticket activity. The number of tickets received in any quarter was reported, as was the amount of time it took to resolve tickets and the quantity of tickets closed. The reports did not include information about the types of problems solved or about interactions with other University Libraries’ units. Recognizing that much of the team’s effort was invisible beyond those immediate partners, shortly after beginning employment in 2016, the new Electronic Resources Officer investigated additional opportunities to use information collected in JIRA to illuminate the effort required to support access to e-resources, to identify areas where proactive work might happen, and ultimately to reduce barriers to resources for the university community. A controlled vocabulary consisting of types of problems would be an asset for learning those trends, but one did not exist within JIRA, so the Electronic Resources Officer determined that one would need to be created. This paper details that process and the unanticipated outcome that led to changing significant parts of the e-resources problem-solving workflow.

Literature Review

Troubleshooting e-resources is complex. According to NASIG’s document, “Core Competencies for Electronic Resources Librarians,” an important personal quality for an Electronic Resources Librarian is a tolerance for high levels of ambiguity, which is quite useful when troubleshooting e-resources.2 Being successful at troubleshooting requires experience working with e-resources, both when resources are functioning properly and when something goes wrong. Resnick, in an article on identifying core competencies in e-resources access services, discusses the need for a thorough understanding of how to resolve problems.3 Training methods for developing e-resources troubleshooting skills and processes have been reported in multiple papers, such as those by Carter and Traill and by Rathmel et al.4

Having the skills to successfully troubleshoot problems with e-resources is only part of the picture. Being able to describe reported problems helps in several ways; perhaps most important is the impact on end users and the ability to facilitate their success in finding and using resources. Having a place to receive and store information about problems can help document problems and solutions. Many libraries use ticketing systems to track a variety of factors involved with managing e-resources problems. In 2014, Samples and Healy reported that 43 percent of the respondents to the eProblem Reporting Questionnaire indicated that they used a ticketing system.5 Few e-resource management systems (ERMS) include a ticketing system, leading librarians to rely on other products, often borrowed from information technology help desks, to track problem reports. Smith provided an overview of considerations for implementing an e-resources ticketing system and described how their library used Springshare’s LibAnswers for this purpose.6 Erb discusses using LibGuides, also from Springshare, to assist with troubleshooting.7 Christman describes another experience with setting up a ticketing system when his organization transitioned from receiving reports via email to the open source product Spiceworks.8

Although there are examples in the literature about troubleshooting and using ticketing systems to manage reports, less has been written about the “next steps” of using ticketing systems. Wright discusses revising workflows following the development and utilization of a ticketing system at the University of Michigan.9 Another example of a next step is using a controlled vocabulary within a ticketing system to identify the types of problems received, and then using the vocabulary to make process changes for troubleshooting. Goldfinger and Hemhauser described the process used at the University of Maryland, College Park to code trouble tickets and develop a vocabulary that was then used to provide data to answer four questions: who reported problems, how well staff solved them, which types of problems occurred most frequently, and whether problems could be prevented through proactive work.10 The authors also described an opportunity to create canned responses to common problems to provide more consistency in answers, and identified functionality in their ticketing system that could be used to better advantage. Brett at the University of Houston replicated the process and vocabulary described by Goldfinger and Hemhauser to explore whether the same vocabulary could be used at different institutions.11 Brett concluded that the vocabulary could be transferable.

Very little information was found in the literature about using trouble-shooting trends identified using controlled vocabularies to realign staff effort. This paper fills a gap in the literature about reassessing the trouble-shooting process and staffing using quantitative information gleaned from assigning controlled vocabulary terms to trouble tickets.

Environment

The Serials and Electronic Resources (SER) group, a unit within the Electronic Resources Management Team (ERMT) at The Ohio State University Libraries, manages all aspects of the e-resources lifecycle, from acquisition to licensing to description to managing access, and is led by the Electronic Resources Officer. A core activity in which all staff in the unit are involved to varying degrees is e-resources troubleshooting. Troubleshooting at the University Libraries is a two-fold process. Most patron questions are first received by the Reference Desk, whose staff do initial triage and basic troubleshooting. Reference Desk staff report that many questions are site-navigation related (where to find the link to download an article) or “how to” focused (how to access resources from off campus, how to search a specific database). Questions come from a variety of sources, including the Report a Problem feature in Find It!, the University Libraries’ link resolver (Serials Solutions 360 Link); Springshare’s LibAnswers (chat and email); and in-person interactions. If the staff are unable to determine the problem or need to communicate information about catalog records or the configuration of resources, they open a ticket via a reporting system; tickets are received by SER staff. Another partner in the troubleshooting workflow is the interlibrary loan (ILL) unit, whose staff regularly identify serials holdings inaccuracies and difficulties finding content on journal and book sites. Library staff and patrons can also submit problem reports directly to the team through an online form or via email.

SER and others in the University Libraries use Atlassian’s JIRA system to manage tickets for multiple purposes (e.g., IT support, facilities issues). Tickets are submitted through the Electronic Resources Problem Alert (ERPA) form, and the information is available to all SER staff assigned to the project. The original process assigned staff to specific days on which they were expected to be the lead person handling any submitted tickets. Staff can escalate tickets within the group or transfer tickets to the University Libraries IT group when appropriate. Escalation within the group could mean assigning a ticket to the staff person who works with a specific vendor for order or subscription clarification; more complex problems where the cause is not readily apparent are assigned to the electronic resources access coordinator for completion. JIRA has been used for e-resources problem reporting since 2013, and it has made analysis of reports possible. Prior to the project described in this paper, statistics were collected from submitted tickets by the Electronic Resources Librarian managing the team to track information such as the number of tickets opened in a month, time to resolution grouped by number of days, and median time to resolution. While this information tells some of the story about the effort the unit expends on troubleshooting, it is incomplete. Information about the types of reports received and the average time spent on each type of problem was not captured. Such information provides a deeper picture of the troubleshooting activity managed in the SER unit and has the potential to identify regularly occurring problems that might be prevented through proactive work. This information could also affect staffing for the unit, both in terms of numbers and in terms of staff members’ core responsibilities. Ultimately, by including more information about staff effort, the unit manager could more fully detail the work handled by the unit to the program and division administrators.
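As a rough illustration of the baseline reporting described above (tickets opened per month and median time to resolution), the sketch below computes those figures from an exported ticket list. It is a minimal example, assuming a hypothetical CSV export named erpa_tickets.csv with “created” and “resolved” ISO-format timestamp columns; the file name and column names are assumptions for illustration, not details from the article or from JIRA itself.

```python
# Minimal sketch: baseline ticket statistics from a hypothetical CSV export.
# Assumed columns: "created" and "resolved" (ISO timestamps; "resolved" may be empty).
import csv
from collections import Counter
from datetime import datetime
from statistics import median


def load_tickets(path):
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


def monthly_counts(tickets):
    # Count tickets opened per month, keyed as "YYYY-MM".
    return Counter(t["created"][:7] for t in tickets)


def median_days_to_resolution(tickets):
    # Median resolution time in days; still-open tickets are skipped.
    durations = [
        (datetime.fromisoformat(t["resolved"]) - datetime.fromisoformat(t["created"])).days
        for t in tickets
        if t.get("resolved")
    ]
    return median(durations) if durations else None


if __name__ == "__main__":
    tickets = load_tickets("erpa_tickets.csv")
    print(monthly_counts(tickets).most_common())
    print(median_days_to_resolution(tickets))
```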

Project Goals

There were two goals for this project: (1) to provide additional statistics on the problem resolution work done in the unit that could be reported to division administrators to document SER’s effort in greater detail, and (2) to identify areas with more frequent trouble reports and determine how to proactively reduce or eliminate occurrences. The goals could be accomplished by reworking and adding to the types of statistics gathered for the unit’s problem-solving activities. When investigating options for creating statistics from information gathered in JIRA, a knowledge gap was identified. Time to resolution (averages and medians), the number of reports per month, and who handled how many tickets were easy to gather, and the first two were regularly reported. However, the types of problems managed and any variations in the time needed for those different types of problems could not be captured automatically with the ticket record configurations used at that time. The team did not use JIRA’s Label tagging feature, and through discussions of possibilities, identified it as a potential way to better describe the types of problem reports received by the unit. However, the Label feature lacked a vocabulary and was populated solely through free-text entry. Leaving the tags up to individual team members could lead to inconsistency in both how labels were applied and which terms were used. The solution was to develop a standard vocabulary whose term definitions everyone in SER agreed to.

The Electronic Resources Officer developed simple and clear criteria to guide the creation of a controlled vocabulary for the ERPA tickets:

  • Terms should be easy to remember, but clearly reflect the reported problem
  • Terms should be short (not more than two words)
    • Shorter terms would theoretically be easier to remember and would reduce time spent looking them up
  • The total number of terms in the vocabulary would be fewer than 15, with 10 being an ideal target
    • Fewer, more targeted terms would also reduce lookups

With those ground rules in place, the next step was to develop the list of terms. Recognizing that existing content is a good starting place, the Electronic Resources Officer analyzed problem tickets to establish a foundation for the vocabulary to be used in JIRA’s Label field.

Method

The first stage in the vocabulary development plan involved analyzing existing JIRA ERPA tickets to glean potential vocabulary terms. There are common, frequently reported issues with e-resources, such as broken URLs, holdings information conflicts, and off-campus access problems. Once an initial set of terms was identified, the entire SER unit would provide feedback and assist in refining the terms to produce a workable list.

JIRA does not purge tickets upon resolution, so the full history of reports since JIRA’s implementation was available for review. When the project started, there were 1,771 tickets available for analysis by the Electronic Resources Officer. While evaluating all tickets was considered, and a small set (thirty-seven) was evaluated in this way, the time required to examine every ticket precluded quick development of a vocabulary. Instead, a randomly selected set of 200 tickets from a specific time period was chosen. The period selected for analysis was January 2015 to early September 2016; this was judged a sufficiently long period to reveal commonly reported problems, and the SER staff primarily involved in troubleshooting had been stable during that time. The final set of tickets used for analysis was 237, the 200 randomly selected tickets plus the thirty-seven evaluated earlier.
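A minimal sketch of this sampling step is shown below, assuming the exported tickets are dictionaries with a “created” timestamp. The field name, the window boundaries, the sample size of 200, and the fixed seed are illustrative assumptions rather than details from the article.

```python
# Minimal sketch: draw a random sample of tickets from a fixed reporting window.
import random


def sample_tickets(tickets, start="2015-01", end="2016-09", n=200, seed=1):
    # Keep tickets whose "YYYY-MM" creation prefix falls inside the window,
    # then sample without replacement (a fixed seed makes the draw repeatable).
    window = [t for t in tickets if start <= t["created"][:7] <= end]
    return random.Random(seed).sample(window, min(n, len(window)))
```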

Each ticket was evaluated on its own merit and in isolation, with no initial consideration of the others in the set. This was done in part to mimic how individual team members might process tickets, but also so that information from previous and subsequent tickets would not influence the term assignment of any single ticket. Although the analysis was done by a single person and was therefore subjective, the ground rules for the vocabulary were rigorously followed, and the researcher’s experience creating other controlled vocabularies reduced subjectivity. The core problem as reported was examined and assigned a term that best fit the type of problem as a whole, not the initial report nor the outcome. Often, a term or phrase within the report captured the core issue; for example, tickets related to access problems tended to use the word “access,” and tickets for link problems tended to use the terminology “bad link” or “broken link.” The language already present in the reports helped shape the preliminary set of terms for the vocabulary. For other tickets, more time was needed to determine a single term or phrase to assign, either due to the complexity of the issue or extended comments added as the problem was being resolved; looking at the entire ticket from report to resolution was necessary to determine possible terms. The process of identifying terms in more complex reports was informative and proved useful when discussing the potential list with the problem reporting team, especially when discussing term definitions.

Once the sample set was evaluated, the terms assigned to the tickets were counted and reviewed. There were thirty-five terms in the first pass. Terms with a single instance were reviewed, and if similar terms existed, the term that best fit the problem was chosen and the other tickets were reassigned this preferred term. This led to “broken link” becoming “bad link,” and to “update catalog record” and “modify catalog record” becoming “catalog record.” The original analysis resulted in twenty-three terms, detailed in table 1.
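This consolidation pass can be expressed as a small mapping from variant wording to a preferred term, followed by a recount. The sketch below includes only the merges named in the text; the data structure itself is an illustration, not a record of how the author actually performed the analysis.

```python
# Minimal sketch: merge near-duplicate terms into preferred terms, then recount.
from collections import Counter

# Variant wording mapped to the preferred term (merges named in the text).
PREFERRED = {
    "broken link": "bad link",
    "update catalog record": "catalog record",
    "modify catalog record": "catalog record",
}


def consolidate(assigned_terms):
    # assigned_terms: one free-text term per analyzed ticket.
    return Counter(PREFERRED.get(term, term) for term in assigned_terms)


# Example: consolidate(["broken link", "bad link", "access"])
# -> Counter({"bad link": 2, "access": 1})
```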

While many records with similar terms were narrowed down to one term with a common meaning, a few terms with a single report were retained for additional review and feedback from the team. This gave the problem-reporting team an opportunity to determine whether some problem types were impactful enough, or would potentially require enough time to resolve, that a single-ticket term should be retained and counted. Keeping the single-ticket terms in the list for review also provided some control for any effects of the random selection, as an issue could occur more frequently than the sample set indicated.

Once the Electronic Resources Officer created the initial list, it was presented to the problem reporting team for review, reaction, and revision. The review revealed that the single-incident terms did not occur particularly frequently and that they could fall into the “misc_error” category. The final list of terms, numbering thirteen, is provided in table 2.

The team recognized that terms might mean different things to different members. Definitions were solicited for the terms after the initial list was created. Definitions were not included with the initial list for a few reasons: to get initial reactions to the terms, to reduce any potential influence on the development of the term list, and to avoid a slowdown if the team became bogged down with the definitions. Definitions were added to the list to facilitate a common understanding of how the terms should be used; the SER team identified that as a core factor in the successful use of the terms. Specific examples were included only for terms where a team member requested clarification beyond the definition, such as the difference between problem reports for bibliographic records and inaccurate holdings information, or when a link is broken versus when content is missing. Additionally, team members believed that examples would help differentiate similar-sounding terms such as link issue and link resolver. Recognizing that an issue can sometimes turn out to be something other than originally determined, the team agreed that when the category of a problem was unclear, “Access” could be assigned; when the true nature of an issue was determined, a more accurate label would be applied. Of course, access problems can be just that, so “Access” could still be used when it best described a problem.

The team began using the list in March 2017 with JIRA’s Labels feature, and members were encouraged to label any tickets assigned to them earlier in the year to facilitate a complete year of information. This would enable more meaningful analysis when results were presented outside the unit. When the first set of statistics was compiled in July 2017, after the SER team began using the controlled vocabulary, the person gathering statistics assigned a label to any tickets lacking one. This first set of statistics was reviewed for the team’s compliance in using the terms as well as for any initial discrepancies in term use. The first collection period results are seen in figure 1.
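Once labels are in use, per-label counts for a reporting period can be pulled with a JQL query against JIRA’s REST search endpoint. The sketch below is an assumption-laden illustration, not the unit’s actual reporting script: the base URL, project key (“ERPA”), credentials, label spellings, and date range would all need to match the local instance.

```python
# Minimal sketch: count tickets per label for one year via JIRA's REST search API.
import requests

BASE_URL = "https://jira.example.edu"  # assumed local JIRA instance
LABELS = ["Access", "link_issue", "link_resolver", "misc_error"]  # illustrative subset


def count_by_label(year=2017, auth=("username", "password")):
    counts = {}
    for label in LABELS:
        jql = (
            f'project = ERPA AND labels = "{label}" '
            f'AND created >= "{year}-01-01" AND created < "{year + 1}-01-01"'
        )
        resp = requests.get(
            f"{BASE_URL}/rest/api/2/search",
            params={"jql": jql, "maxResults": 0},  # request totals only, no issue bodies
            auth=auth,
        )
        resp.raise_for_status()
        counts[label] = resp.json()["total"]
    return counts
```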

Results

During the initial year of use, no changes were made to the list of labels. As staff familiarized themselves with the terms, there were additional discussions to refine the definitions and to clarify the specific situations in which a label would be used. No situations were identified during the first year that required adding new terms to the list.

Once a vocabulary is established, it takes time for usage patterns to emerge. That was true for this problem reporting vocabulary. Once the labels were decided and the definitions determined, the process was left to age for approximately a year. Figure 2 identifies the most frequently assigned labels during that time.

As seen in figure 2, the most frequently assigned labels were “Access,” “link_issue,” and “link_resolver.” While it was not surprising that these were commonly assigned terms, “Access” was unexpectedly high. The scale of reports with this assignment suggested two possibilities: that team members did not revisit labels after the initial assignment as agreed, or that the definition for “Access” was so vague that team members felt more comfortable using it than other, more appropriate terms. Both possibilities indicated a need to review and revise the labels and how team members determined which label to use, each of which was a training opportunity. An analysis of how the term “Access” was assigned was conducted to identify how it was being used and what training might be needed. The first analysis examined the accuracy of the assignment, for which there are four possible states: the original and resolved assignments both match the reported problem, neither matches, the original assignment does not match but the resolved one does, or the original assignment matches but the resolved one does not. Of the 243 problem reports assigned the “Access” label, the analysis determined it was just as likely to be incorrectly assigned as correct (see table 3).
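The four states can be tallied by comparing the label at first assignment and at resolution against the problem type judged correct during the review. A minimal sketch, with illustrative field names that are not taken from the article:

```python
# Minimal sketch: tally the four accuracy states for reviewed tickets.
from collections import Counter


def accuracy_states(reviews):
    # Each review dict holds the label assigned originally ("original"),
    # the label at resolution ("resolved"), and the reviewer's judgment
    # of the correct problem type ("correct"). Field names are illustrative.
    states = Counter()
    for r in reviews:
        orig_ok = "yes" if r["original"] == r["correct"] else "no"
        final_ok = "yes" if r["resolved"] == r["correct"] else "no"
        states[f"{orig_ok}/{final_ok}"] += 1
    return states  # keys like "yes/yes" and "no/no", matching table 3
```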

Further analysis of the reports where the original assignment and the resolved assignment did not match led to additional conclusions. The team had not recently discussed the labeling activity beyond reminders to tag the problems on which they were working, and it was clear that a refresher of the process was needed, particularly on revisiting tickets and reassigning labels as appropriate once a problem had been resolved. The proportion of reports that ideally would have been labeled “misc_error” highlighted a potential need to add a new label to the set. A majority of the misaligned labels involved IP addresses, which were a problem from summer through winter 2018, when the campus began using IPv6-formatted addresses in buildings and on the wireless network, a format that many vendors and publishers could not support. The Electronic Resources team often learned about the campus changes only through receipt of a problem report that initially appeared to be an access problem; once investigated, it was clearly a proxy-related problem. Other common misalignments included issues with links, site behavior, and content. Figure 3 details the misaligned labels.

Workflow Changes

The review of label use highlighted issues with the overall problem reporting process. The labels used were an accurate reflection of the types of problems received by the team and the work done to resolve them. However, evaluating the problem tickets also illuminated a known aspect of the process that had not been directly addressed: a general inconsistency in customer service when responding to problems. The review of the tickets assigned the “Access” label brought this to light. Since a team handled problem ticket resolution, some inconsistency was unavoidable. However, the range of answers to common problems and the variation in response times were a concern. The team often discussed how to resolve tickets in regular standing meetings. Although all members could review any ticket, they expressed feeling uncomfortable addressing more complex problems or those that did not clearly fit their position responsibilities. How much time team members spent on resolving tickets varied widely; for example, if a vendor needed to be contacted to resolve a problem, some team members checked regularly for an answer while others only checked for messages on their assigned days. While the label analysis was underway, the team lost members when two staff accepted other positions, one of whom was the Electronic Resources Access Coordinator. This proved to be an opportunity to make significant changes to the entire troubleshooting process.

Following the hire of a new Electronic Resources Access Coordinator, a closer look at the entire problem resolution process was taken during the Coordinator’s onboarding. After receiving training on how to identify an issue with a resource through troubleshooting and on strategies for approaching a solution (e.g., escalate to another team member, contact the vendor), the new Electronic Resources Access Coordinator was encouraged to respond to any submitted ERPAs for which they felt ready. As the Access Coordinator became more adept at resolving problems and with the JIRA system, they expressed a desire to assume greater responsibility for the problem reports, not just the complex ones, which supported the need to make changes.

The changes that were made flipped the previous model. Instead of being the person who handled the more complex problems that others on the team could not resolve, the Access Coordinator became the first person to evaluate any submitted problem report, involving other members of the team as necessary. For example, if there was a question about a subscription or the volumes held, the ticket would be transferred to an ordering specialist or to a cataloger, depending on the specific problem. Making this change quickly produced several benefits. First, it provided a more consistent voice for responding to problems, both toward the people reporting problems and with the library IT department and vendors. Staff outside the unit also reported more confidence in the process, since they knew who was managing the troubleshooting process as a whole and that tickets would be resolved quickly. This was particularly the case with strong partners such as ILL and the Reference Desk. Second, problem reports are now monitored more closely. In the previous model, team members tended to be passive, checking JIRA only on their assigned days or when an email was received from another person (vendor, etc.) helping with the resolution. With one individual managing the problems, more frequent attention is paid to resolving them, and they are therefore resolved more quickly. This ultimately leads to better end user experiences, for the initial reporter and for any users of a resource, which is a high priority for the unit. Finally, the consistency of label assignments has improved, which means a more accurate picture of the type and complexity of the work can be provided. Staff who were no longer responsible for regularly monitoring tickets but were still involved in the process reported greater confidence in their ability to answer questions, as the questions they received better matched their areas of expertise.

Conclusion

Using a ticketing system to track the resolution of e-resources problems helps ensure timely processing of reports and rapid restoration of access. Basic data about problem reports can describe part of the effort of maintaining e-resources in terms of the time it typically takes to manage a report. This data can be made more granular, and therefore more useful, by identifying the types of problems received, how they are managed, and where more staff effort is spent. Ticketing systems generally lack options for tracking types of problems, and few such options are specific to the problems seen in managing e-resources, so libraries either lose the opportunities that such information could support or need to create their own vocabularies. By creating a local controlled vocabulary for tracking e-resources problems, it is possible to clearly report and reflect on the issues managed by e-resources staff. Using the data from the vocabulary, a unit can identify pertinent data points, which can lead to refining processes and providing better end user experiences. While the vocabulary developed at The Ohio State University Libraries is short, it captures information that is useful in evaluating staff engagement and in enhancing workflow processes, leading to an overall improvement in service.

References

  1. Jeffrey M. Mortimore and Jessica M. Minihan, “Essential Audits for Proactive Electronic Resources Troubleshooting and Support,” Library Hi Tech News 35, no. 1 (2018), 6–10, https://doi.org/10.1108/LHTN-11-2017-0085.
  2. NASIG, “NASIG Core Competencies for Electronic Resources Librarians,” August 20, 2019, https://www.nasig.org/site_page.cfm?pk_association_webpage_menu=310&pk_association_webpage=7802.
  3. Taryn Resnick, “Core Competencies for Electronic Resource Access Services,” Journal of Electronic Resources in Medical Libraries 6, no. 2 (2009), 101–22.
  4. Sunshine Carter and Stacie Traill, “Essential Skills and Knowledge for Troubleshooting E-resources Access Issues in a Web-Scale Discovery Environment,” Journal of Electronic Resources Librarianship 29, no. 1 (2017), 1–15, https://doi.org/10.1080/1941126X.2017.1270096; Sunshine J. Carter and Stacie Traill, “Troubleshooting Fundamentals: A Beginner’s Guide,” Online Searcher 42, no. 4 (2018), 10–13; Angela Rathmel et al., “Tools, Techniques, and Training: Results of an E-resources Troubleshooting Survey,” Journal of Electronic Resources Librarianship 27, no. 2 (2015), 88–107, https://doi.org/10.1080/1941126X.2015.1029398.
  5. Jacquie Samples and Ciara Healy, “Making it Look Easy: Maintaining the Magic of Access,” Serials Review 40, no. 2 (2014), 105–17, https://doi.org/10.1080/00987913.2014.929483.
  6. Kelly Smith, “Managing Electronic Resource Workflows using Ticketing System Software,” Serials Review 42, no. 1 (2016), 59–64, https://doi.org/10.1080/00987913.2015.1137674.
  7. Rachel A. Erb and Brian Erb, “Leveraging the Libguides Platform for Electronic Resources Access Assistance,” Journal of Electronic Resources Librarianship 26, no. 3 (2014), 170–89; Rachel Ann Erb and Brian Erb, “An Investigation into the use of LibGuides for Electronic Resources Troubleshooting in Academic Libraries,” Electronic Library 33, no. 3 (2015), 573–89.
  8. Dennis Christman, “The New AskTech: Implementing a Ticketing System Platform for Technical Services Resource Troubleshooting,” Serials Review 44, no. 3 (2018), 193–96, https://doi.org/10.1080/00987913.2018.1542765.
  9. Jennifer Wright, “Electronic Outages: What Broke, Who Broke it, and How to Track It,” Library Resources & Technical Services 60, no. 3 (2016), 204–13, https://doi.org/10.5860/lrts.60n3.204.
  10. Rebecca Kemp Goldfinger and Mark Hemhauser, “Looking for Trouble (Tickets): A Content Analysis of University of Maryland, College Park E-Resource Access Problem Reports,” Serials Review 42, no. 2 (2016), 84–97, https://doi.org/10.1080/00987913.2016.1179706.
  11. Kelsey Brett, “A Comparative Analysis of Electronic Resources Access Problems at Two University Libraries,” Journal of Electronic Resources Librarianship 30, no. 4 (2018), 198–204, https://doi.org/10.1080/1941126X.2018.1521089.
Figure 1. Initial Reporting Period

Figure 2. Labels assigned over 2 years

Figure 3. Appropriate Labels for Reports Originally Tagged Access

Table 1. Original term list

Type of Problem | Count
access | 60
staff maintenance | 37
bad link | 36
catalog record | 25
content | 24
coverage | 7
outage report | 6
proxy | 6
site behavior | 5
expired subscription | 4
holdings | 4
link resolver | 4
database list | 3
misc error | 3
incorrect holdings | 2
journal recommendation | 2
scheduled maintenance | 2
searching | 2
branding | 1
browser | 1
certificate error message | 1
order question | 1
WorldShare | 1

Table 2. Final Term List

Code | Definition | Examples
access | problem accessing site or full text due to an undetermined reason |
catalog record | concern about information in the catalog record | URL is missing or incorrect; request to add or remove fields, locations or data from the catalog record
content | content is missing or unusable | having some but not all issues of a volume, pdf is illegible when opened; eBook is missing pages or chapters
coverage/holdings | inaccurate or incomplete holdings information in Find It or the catalog |
database list | resource is missing from the research database list or there is a request to add or remove subjects from a resource record |
expired subscription | message seen at a resource that a subscription has expired (determining whether the subscription has truly expired before assigning this label is not necessary) |
link issue | link goes to an incorrect place, is broken or does not retrieve appropriate full text |
link resolver | journal is not listed in Find It or the A to Z list or additional sources are discovered for a journal |
misc error | any error that does not fit with any other label | certificate errors, journal suggestions, questions about orders
outage report | report of an outage of a site, either scheduled notice or outage identified by user |
proxy | problems accessing resource from off-campus or access problem where solution was to update proxy information |
site behavior | suboptimal performance for a site or unexpected actions at a resource | sluggish or slow actions, unexpected messages following actions

Table 3. Accuracy of access label assignment

Original Assignment Matches | Resolved Assignment Matches | Count
no | no | 101
yes | yes | 102
no | yes | 1
yes | no | 39
