
Database Discovery

From a Migration Project to a Content Strategy

Sandra Wong (swongj@sfu.ca) is the Electronic Resources Librarian at Simon Fraser University in Metro Vancouver, Canada.

Manuscript submitted September 26, 2019; returned to author for revision December 26, 2019; revised manuscript submitted January 22, 2020; accepted for publication January 26, 2020.

After migrating to Ex Libris’s Alma and Primo for its integrated library system (ILS) and discovery layer, library staff at Simon Fraser University (SFU) maintained duplicate database information for fifteen months in a locally developed electronic resources management (ERM) system known as the CUFTS ERM. The CUFTS ERM provided the data for the library’s public-facing database list known as the CUFTS Resource Database (CRDB). A database search function had been on Ex Libris’s Primo roadmap for product development and was announced six months after the library went live with Alma and Primo. However, the new Primo database search function lacked the ability to replace the CRDB. Members of the library’s ILS Steering Committee who managed Alma and Primo were concerned about significant negative impacts on end-users if the library adopted the Primo database search function to replace the CRDB. The steering committee formed a task group to investigate options for creating a database list from Alma records to reduce duplication of staff time, effort, and system resources, and to replicate the main functions of the existing CRDB for end-user discovery and access.

Simon Fraser University (SFU) is a publicly funded research university located in Metro Vancouver, Canada, offering comprehensive undergraduate and graduate degrees. In November 2015, SFU library’s Integrated Library System (ILS) Steering Committee issued a request for proposal for a new ILS to replace their Innovative Millennium system. After months of evaluations and demonstrations, the associate university librarian for Library Technology Services and Special Collections announced the selection of Ex Libris’s Alma with Primo as its new ILS and web-scale discovery service in September 2016. Prior to implementation of Alma and Primo, SFU’s library had used a locally developed knowledge base (KB) and link resolver service and electronic resources management (ERM) system to manage its electronic resources (e-resources). These locally developed systems formed SFU library’s reSearcher suite, an alternative to commercial link resolvers and ERM services. The reSearcher suite consisted of the CUFTS KB with the GODOT OpenURL link resolver, and the CUFTS ERM system.1 The reSearcher suite was available to libraries as open source software. Libraries could download and install the software independently and obtain monthly updates from the master CUFTS KB at no cost. SFU’s library also offered hosting and support for the reSearcher suite on a cost-recovery basis, much like a commercial library service vendor.

Between 2002 and 2010, many academic libraries (primarily in western Canada) subscribed to the reSearcher suite through SFU library’s hosting service. At its peak, SFU’s library hosted and provided support to more than fifty libraries using its reSearcher suite. However, after this period of continued growth and interest in the reSearcher suite as an alternative to commercial equivalents, SFU’s library began losing reSearcher library clients in 2010. This decline seemed to coincide with the growth and adoption of web-scale discovery services like Summon, Primo, and EBSCO Discovery Service. If a library chose a discovery service from a provider other than its existing link resolver service, it usually meant managing two KBs, since a KB and link resolver service were often included with the discovery service as part of a bundled package.2 As libraries began subscribing to web-scale discovery services, they cancelled their subscriptions to the reSearcher suite to avoid managing multiple KBs. Once the decision had been made to adopt Alma with Primo, senior SFU library administration concluded that continuing reSearcher operations was no longer feasible. Alma was able to handle both print and e-resources natively without the need for additional separate services such as a KB with a link resolver and an ERM system. A formal notice to decommission the reSearcher suite was sent to all SFU-hosted library clients and known reSearcher users informing them of the decision to cease the service effective August 31, 2017. These reSearcher users were given a full year to find a replacement. By the time the reSearcher suite was decommissioned, SFU’s library was hosting twenty-three client libraries.

Although the entire reSearcher suite at SFU was shut down to external library clients on August 31, 2017, SFU library staff continued to use the CUFTS ERM because it was the source for its public-facing database list known as the CUFTS Resource Database (CRDB). Earlier, in May 2017, the library had gone live with Alma and Primo. However, Primo did not offer a public-facing database search service at the time. A database search function was on Ex Libris’s Primo roadmap for future product development and an announcement was expected shortly after SFU’s formal go-live date. That announcement came with the November 2017 release of Primo software updates. This update included a database search page derived from Alma data that allowed users to search for databases by title or to browse alphabetically.3 A review of this new feature in the sandbox environment proved disappointing. End-users would need to know the exact name of a database to use the Primo database search. Since end-users frequently do not know database names, many members of the ILS Steering Committee felt that this was not an adequate substitute for the CRDB. The ILS Steering Committee could have waited for further enhancements to this feature, but duplicating information in Alma and in the CUFTS ERM for two or more years was not a sustainable option. Managing information such as access URLs, proxy prepends for off-campus authentication, license data, and other details in the CUFTS ERM was redundant when the same details were also managed in Alma, and discrepancies between the two systems were expected to accumulate over time. However, adopting the Primo database search feature was not viable either, as doing so would have had significant negative impacts on end-users. Because the committee wanted to avoid another significant change so soon after the ILS migration, it needed a solution that balanced the library’s requirement to rationalize staff time and effort against the risk of disrupting end-user database access and discovery. Thus, in February 2018, the steering committee formed the CRDB Replacement Task Group to investigate options for creating a suitable database list from Alma data that would replicate the end-user functions of the existing CRDB and relieve staff of the need to maintain duplicate data in the CUFTS ERM.

Task group members included the Head of Library Systems, the Systems Librarian, the Systems Consultant, the Electronic Resources Librarian, and the Head of the eBranch, who is responsible for the user experience of the library’s online presence. In addition to working out the technical details and specifications for extracting data from Alma to populate a public-facing database list, the task group developed a library-wide strategy to maintain a sustainable, reliable, and useful database list to meet end-user needs. The task group created a “Database List Criteria and Guidelines” document that ultimately formed the basis for policy, practice, maintenance, and administration of this new database list. The document contained criteria for inclusion in the database list and guidelines for database descriptions, resource types, and subject headings. The document also formally assigned responsibility to the Electronic Resources Librarian for administering and interpreting the criteria and guidelines. In effect, the task group’s activities and this document formed the basis of a content strategy for SFU library’s new database list.

Literature Review

Despite the seemingly important role of e-resources and databases on academic library websites, there is very little recent literature on the topic. In a 2017 paper, Hoeppner commented on the scarcity of published literature related to a library’s public-facing database list.4 Brisbin, Parlette-Stewart, and Oldham agreed with Hoeppner’s findings in 2018.5 Hoeppner presented the results of a survey on the systems used to manage access to databases. She found that half of the respondents used LibGuides, a content management system from Springshare, to manage their public-facing database list. The remaining responses varied from web editors and content management systems such as Drupal and WordPress, to commercial products like Serials Solutions and Alma, to a combination of ILS and ERM systems. Hoeppner also provided a brief history of the evolution of database lists at the University of Central Florida, outlining the growth of entries and how the list was maintained. She concluded by offering practical tips on managing a database list.6 At the University of Guelph, Brisbin et al. emphasized the role of project management and collaboration in migrating their database list from a homegrown system written in the ColdFusion programming language to a LibGuide. The migration team held multiple workshops with librarians and library staff to assign subjects to databases and “best bets” to denote top or recommended databases. They implemented a peer-review process for writing database descriptions that conformed to a set of criteria.7 Tobias provided a case study on Michigan State University Libraries’ migration from a homegrown database list called ERASMUS to a LibGuide. In her case study, the centralization of management was essential to controlling the proliferation of entries that occurred in ERASMUS when all librarians had permission to add e-resources, including freely available websites.8 Ramshaw, Lecat, and Hodge described the technical details of creating and managing a database list after migrating from Millennium to OCLC’s WorldShare at the American University of Sharjah in the United Arab Emirates. They used OCLC’s application programming interface (API) and a Perl script to automate the populating and updating of their LibGuide database list with information from their WorldShare instance.9 A report from the 2015 NASIG annual conference described how Oberg, at Wheaton College, used CORAL, an open source ERM, and its public interface generator add-on to create a public-facing database list and to streamline workflows.10 Evelhoch studied whether the adoption of a web-scale discovery service affected webpage views of Central Washington University’s database lists by title and by subject. He monitored webpage views before and after the implementation of the Primo web-scale discovery service and found that views of the database lists by title and by subject declined after Primo adoption.11

Prior to the widespread adoption of web-scale discovery services by academic libraries after 2010, several studies on library website usability included sections on a library’s database list. Caudle and Schmitz conducted an inventory of electronic journal (e-journal) and database webpages on the websites of Association of Research Libraries (ARL) member libraries in 2005. They found that many ARL libraries were consistent in offering an A to Z list plus a database list by subject. The authors then ranked library websites subjectively for their usability, specifically whether they included library jargon or were difficult to navigate.12 Fuller et al. conducted usability tests at the University of Connecticut Libraries to improve the design of their database list, which was generated by their in-house ERM system. As a result, subject headings were no longer nested and database descriptions were rewritten to reduce the amount of text. Each subject heading presented only the top five databases instead of a long alphabetical list. A keyword search box was de-emphasized after the authors discovered that users tended to enter research topics, rather than a database name, into the search box.13 Fry and Rich conducted a 2010 usability study at Bowling Green State University to determine how students were using their database list, which was generated by their Innovative ERM system. They found that users struggled to find additional databases even when presented with a list organized by subject. Users tended to return to known and familiar database brands. In their discussion of the results, Fry and Rich hypothesized that a discovery layer, with its single search box to search all of the library’s content, would solve some of the usability issues encountered by students. The authors concluded by stating their plans to investigate alternative formatting for their database list and how to add relevancy ranking. They also recommended marketing campaigns to increase students’ recognition of database brands and awareness of their database options.14

Ho wrote about using the catalog’s built-in forms to request enhancements to bibliographic records at Texas A&M University. The library’s bibliographic records populated a separate database of e-resources, including article indexes and databases plus e-journals and e-books. She found that librarians often requested uncontrolled subject headings and alternative titles for e-resources in an effort to increase their discoverability.15 In a paper published in 2008, before discovery services were widely available, Geckle, Pozzebon, and Williams of Middle Tennessee State University (MTSU) suggested that “Cataloging electronic resources improves discovery and access.” The authors argued for a central access point for all of the library’s e-resources in addition to a separate A to Z or subject listing. As part of a website redesign, MTSU implemented an open source solution called LibData to manage their database list. What began as a clean-up project to ensure that all e-resources were properly cataloged became an ongoing activity that required policies and procedures to enable better discovery and to maintain accuracy. E-resources needed to be added to LibData before they could be cataloged. The LibData database details webpage served as the official MARC 856 access link in their catalog to minimize the need for link maintenance in two places. At MTSU, the Electronic Resources Librarian and the Acquisitions unit both ordered electronic products separately. Improved communication between the two areas helped ensure that e-resources would be added to both the catalog and LibData.16

The authors of all of these studies concur that making e-resources more discoverable by end-users is the primary goal of any database list. Discovery depends on a user-friendly, easy-to-navigate website. Novice information seekers, unfamiliar with the myriad options and layout of a library’s website, need guidance. Applying a content strategy to a small subset of the library’s website, such as the database list, can rationalize that list and promote continuity and stability among the many hundreds of e-resources made available by the library.

A special section of the January 2011 issue of the Bulletin of the American Society for Information Science & Technology introduced the concept of content strategy, written by leading content strategy practitioners, to the library literature. In that issue, Bailie described how content seemed to be a peripheral aspect of the web development process, with user experience, design, and usability serving as the drivers of website applications. Bailie outlined the problems that can occur when content is not made an equal player in a web project along with the developers who write the code and the designers who are responsible for the user experience. Form no longer follows function when content is not at the center of the project. User experience is only successful when users find relevant content; if there is no useful content, the user experience is a failure. By making content central to the project and acknowledging that content has a lifecycle, an organization increases its potential return on investment through its acceptance of content as an asset rather than “the stuff that goes into the design.”17 Preeminent content strategist Halvorson reiterated that “Content strategy plans for the creation, publication, and governance of useful, usable content.” She outlined what should be defined in a content strategy, such as key themes and messages, a description of the content purpose, metadata frameworks, and “strategic recommendations on content creation, publication and governance.”18

Content strategy in libraries is often associated with usability, and the library literature on content strategy tends to focus on library websites. Many authors provide case studies of website redesigns that include some aspect of creating and applying a content strategy or content standards. Blakiston published a case study about developing a content strategy for the University of Arizona Libraries’ website. Upon appointment as the library’s website product manager, Blakiston was inspired by Halvorson’s seminal Content Strategy for the Web and decided that her library needed a content strategy. Blakiston outlined her approach to conducting a content audit of the website that included cleaning up and deleting webpages that were redundant or outdated. She also provided detailed information on analyzing the results of the audit to define the website’s core purpose and created standards for web authors. The University of Arizona Libraries’ content strategy included the creation of a new role within library teams: content managers for each library team were responsible for general oversight and management of the webpages assigned to their team.19

Fritch and Pitts from Kansas State University (KSU) used a migration to LibGuides V2 as an opportunity to implement content standards for their LibGuide webpages and a checklist for content creators to follow. At KSU, the new standards came with “bite”: LibGuide administrators could ensure compliance by content creators through annual evaluations with supervisors.20 Greene described the development of a policy at Duke University for adding freely available and open access resources to their catalog to make them accessible, discoverable, and also manageable. Although Greene’s case study is not identified as a content strategy, it has some of the hallmarks of one: a purpose, criteria and guidelines, and a maintenance schedule.21

Demsky and Chapman outlined organizational culture problems and resistance encountered in applying content strategy principles to the University of Michigan Library’s website. They described a large, decentralized organization in which setting limits on what to produce and maintain met a mixed response. Communicating their content strategy to library staff was often regarded as criticism of a librarian’s work. The authors noted the “strong sense of ownership and attachment to content by librarians,” often cited as evidence of professional productivity, which may not align with the library’s core content strategy principles. A web content coordinator group with representatives from each library division was formed to communicate and guide the divisions in understanding and applying the new web content policies, strategies, and best practices. Reinforcement from middle managers seemed to be more effective than working with individual web content authors. Explicit support from library leadership was also important to gain acceptance for the new content strategy principles.22

Datig from Nazareth College in New York described the steps for preparing, implementing, and assessing a content strategy for a library. She suggested beginning with a content audit, creating user personas, formulating a content vision statement, and identifying a channel strategy. Implementing a content strategy involves creating an editorial calendar and preparing workflow documents and editorial standards. The assessment component includes setting goals and tracking progress; gathering user feedback and obtaining analytics from websites and social media platforms all contribute to the evaluation of the content strategy. She also described some of the efforts made toward content strategy at her own institution: Nazareth College’s librarians audited the library’s FAQs and LibGuide webpages and established workflows and guidelines for print and digital materials. Datig’s case study offers practical strategies for moving forward with a content strategy.23 Newton and Riggs documented the comprehensive planning that produced a library-wide content strategy at the University of Wollongong in Australia. The authors introduced the idea of design thinking to empathize with their users so that “user experience is at the center of decision-making.” Content strategy, design thinking, personas, and continuous evaluation contributed to the University of Wollongong’s ongoing review of its library’s content to place the user at the center of its strategic design plan.24

Buchanan, Portland State University Library’s content strategist, offers practical advice and tools for managing website content. She includes a link to a template for defining website goals, priorities, and principles, as well as links to a style guide, a website calendar, an inventory of usability test questions and scenarios, and a Google Analytics template for website reports.25

McDonald and Burkhardt published a review of content management systems used in libraries to reinforce an organization’s need for a content strategy. They stress that a content strategy is necessary “to meet the ever-increasing demands on our resources to produce timely, user-centered content that advances our missions for supporting teaching, research, and learning.”26 Content strategy, a key theme in the commercial digital industry, is transforming how libraries manage their own digital presence. Developing a content strategy for a library database list can guide users to better database discovery and ultimately to a successful and rewarding user experience.

History of Database Lists at SFU’s Library

From 1997 to 2002, SFU’s databases were listed in a series of webpages with a common format. The format included a table with four columns containing a link to a description of the database, the database name, the interface name, and access restrictions. Entries in these lists included CD-ROMs that users needed to check out, locally networked resources that required patrons to use a library computer to access the database, and telnet and web-accessible resources available through SFU’s dial-in service. In 1997, the database list contained eighty-eight entries listed under nine broad subject categories. Figure 1 is a screenshot of the database list retrieved from the Internet Archive’s Wayback Machine, captured on July 20, 1997.27

By 2002, the list of databases had grown to 191 entries organized into seventy-four subjects that generally corresponded to SFU’s academic departments, nested under nine broad subject categories. Many of the CD-ROM databases and locally networked resources were replaced by web-accessible equivalents. This increase in entries resulted in the creation of an in-house system to manage the list of databases, colloquially referred to as the database of databases (DB of DBs). The DB of DBs displayed a brief description of each database on the initial pages instead of just a link to a description. The nine broad subject categories were removed in favor of the longer list of subjects that aligned with the departments and courses offered at SFU. Figure 2 is a screenshot of the DB of DBs dated September 6, 2002, from the Wayback Machine.28 The Systems Librarian who created the system managed the DB of DBs from 2002 to 2008.

In 2008, the library introduced the CRDB when the CUFTS ERM was developed. Responsibility for managing the database list was transferred to the Electronic Resources Librarian in Collections Management. The CRDB empowered liaison librarians to write both a brief and a full description for each database entry. These descriptions included HTML to add formatting and links to additional information. Liaison librarians could add and remove databases under the various subject headings at their own discretion. Within each subject heading, entries were divided into “top” and “other,” and liaisons could rank the entries in any order within the subject through a click-and-drag procedure. Figure 3 shows the list of databases in the CRDB for chemistry with their brief descriptions, captured on December 26, 2014.29

The CUFTS ERM also contained other fields that were displayed publicly in a full CRDB record, such as resource type, links to help, title lists, and license terms. Clicking on a database name revealed the full CRDB database record. Figure 4 shows a screen capture of the full CRDB record for the Reaxys database.30

Unlike previous versions of the database lists, the CUFTS ERM served multiple purposes in addition to generating the public-facing database list. The CUFTS ERM managed details such as institutional administrator logins and account numbers, vendor contacts, order information, license information, and COUNTER usage data for the library’s e-resources. The CUFTS ERM was a welcome addition at the time to help the Electronic Resources Librarian manage the increasing number of e-resources licensed by the library. By the time the library went live with Alma and Primo in May 2017, the CRDB portion of the CUFTS ERM contained 767 records, including open access and free resources.

In the months following Alma and Primo implementation, the maintenance of duplicate database and license information in the CUFTS ERM was becoming unsustainable. The automated ingestion of CRDB records into the library’s former Millennium ILS had not been replicated for Alma in anticipation of a Primo database search function. Centralized maintenance of database information in Alma was needed to reduce the duplication of staff time and effort, and system resources. Thus, when it was clear that the new Primo database search was insufficient to meet end-users’ expectations for database discovery, SFU library’s ILS Steering Committee charged the CRDB Replacement Task Group with finding a solution and making recommendations.

CRDB Replacement Task Group

The CRDB Replacement Task Group members began by documenting all the CRDB functions and taking an inventory of every CRDB database record. They conducted an environmental scan of the library literature to determine the common practices of academic libraries in providing database access. The task group also reviewed other Alma and Primo library websites to determine how they were handling their database lists. Not surprisingly, most Alma libraries were using Springshare’s LibGuides service, as Hoeppner had found.31 One library, Swinburne University in Australia, was using Alma APIs to create its database list. With this information, the Systems Consultant investigated the Alma APIs to establish a proof of concept for a new database list. Following Alma implementation, the Electronic Resources Librarian had decided to maintain separate electronic collections for database-like access. For readers unfamiliar with Alma, e-resources are organized into electronic collections. Each electronic collection must be assigned one of three types: aggregator, selective, or database. Electronic collections may also have two levels: a collection level and a service level. The service level lists all of the full-text titles, known as portfolios in Alma. Both aggregator and selective electronic collection types contain portfolios; database-type electronic collections do not. Therefore, an e-resource containing full text can be maintained on a single electronic collection: database-like access could be provided at the collection level, and access to the individual full-text titles within the database could be provided at the service level through its portfolios. However, the Electronic Resources Librarian decided that where an e-resource contained both portfolios and a database-like search function, the database-level access would be maintained on a separate electronic collection with its type designated as database, for easier future maintenance in Alma. For example, EBSCO’s Academic Search Premier could be maintained on a single electronic collection in Alma. At SFU’s library, however, Alma has two entries for Academic Search Premier: one for database access, where the electronic collection type is set to database, and a second electronic collection that contains all of the portfolios (full-text titles) in the database, with its collection-level details suppressed, as displayed in figure 5. The top record in figure 5 is the database version with no portfolios; the bottom record contains the portfolios, or full-text titles, included in the database.
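
For readers who find the collection, service, and portfolio terminology abstract, the following minimal Python sketch models SFU’s arrangement. It is an illustration only: the class names, fields, and sample title are invented and do not correspond to Alma’s internal schema or API.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Portfolio:
        """A full-text title made available through an electronic collection's service level."""
        title: str

    @dataclass
    class ElectronicCollection:
        """Simplified model of an Alma electronic collection (illustrative; not Alma's schema)."""
        name: str
        collection_type: str  # "aggregator", "selective", or "database"
        portfolios: List[Portfolio] = field(default_factory=list)  # held at the service level
        suppressed: bool = False  # whether collection-level details are hidden from discovery

    # SFU's arrangement: two Alma entries for the same product (compare figure 5).
    asp_database_entry = ElectronicCollection(
        name="Academic Search Premier",
        collection_type="database",  # database-level access; carries no portfolios
    )
    asp_full_text_entry = ElectronicCollection(
        name="Academic Search Premier",
        collection_type="selective",  # aggregator or selective in practice; selective chosen for illustration
        portfolios=[Portfolio("Example Journal Title")],  # the full-text titles live here
        suppressed=True,  # collection-level details suppressed
    )

The two objects mirror figure 5: the database-type entry carries the discovery-facing record with no portfolios, while the suppressed second entry carries the full-text titles at its service level.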

Thus, it was easy for the Electronic Resources Librarian to create a saved logical set of Alma database-type electronic collections that the Alma API could access to create the new database list. Whenever a new electronic collection of type database was added to Alma, the set updated automatically. Using this saved logical set of Alma database records, the Systems Consultant was able to create a prototype that used Alma APIs to produce a new database list. The API retrieved information from the database electronic collection record, its linked license record, and MARC fields in the bibliographic record attached to the electronic collection. The task group then needed to determine what content from Alma and the CRDB should be added to the new database list and how to migrate data from the CRDB into Alma.
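
The Systems Consultant’s scripts are not reproduced here, but a minimal Python sketch of the harvesting step might look like the following. It assumes the Alma REST APIs for sets, electronic collections, and bibs documented on the Ex Libris Developer Network; the gateway URL, API key, set ID, and the JSON field names used below (public_name, resource_metadata, anies) are placeholders or assumptions to be checked against actual API responses, not a transcription of SFU’s code.

    import requests

    API_BASE = "https://api-ca.hosted.exlibrisgroup.com/almaws/v1"  # regional gateway; adjust as needed
    API_KEY = "REPLACE_WITH_API_KEY"  # issued through the Ex Libris Developer Network
    SET_ID = "REPLACE_WITH_SET_ID"    # ID of the saved logical set of database-type collections

    def get(path, **params):
        """GET an Alma API resource and return its JSON representation."""
        params.update({"apikey": API_KEY, "format": "json"})
        resp = requests.get(f"{API_BASE}{path}", params=params)
        resp.raise_for_status()
        return resp.json()

    def harvest_database_list():
        """Walk the saved set and build one entry per database-type electronic collection."""
        entries = []
        offset = 0
        while True:
            members = get(f"/conf/sets/{SET_ID}/members", limit=100, offset=offset)
            for member in members.get("member", []):
                collection = get(f"/electronic/e-collections/{member['id']}")
                entry = {
                    "name": collection.get("public_name"),
                    "url": collection.get("url_override") or collection.get("url"),
                }
                # The collection record points to its bibliographic (MMS) record, whose
                # local fields carry the description, subjects, and resource type.
                mms_id = collection.get("resource_metadata", {}).get("mms_id", {}).get("value")
                if mms_id:
                    entry["marcxml"] = get(f"/bibs/{mms_id}").get("anies")
                entries.append(entry)
            offset += 100
            if offset >= int(members.get("total_record_count", 0)):
                break
        return entries

In this sketch the saved logical set does the filtering: because the set contains only database-type electronic collections and updates automatically, the harvesting script needs no logic of its own for deciding what belongs on the list.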

The task group sent an informal email request to all library staff soliciting feedback on the two or three features that were most important to keep in a database list. The results confirmed the task group members’ instincts on the top features: databases by subject, an ability to rank the list of databases, and the ability to edit or annotate the descriptions. Part of the project included an inventory of the CRDB: how many entries were in the CRDB, how many entries were assigned to each subject heading, and how many “top” and “other” database entries were listed in each subject heading. The number of databases assigned to each subject heading ranged from as few as four to as many as 110. Some subjects had more than ten “top” databases, and a few had well over twenty. Some of the subject headings, such as Datasets, News Sources, and Primary Sources, were not actual subjects, so the task group decided to delete these headings and convert them to resource types instead. Subject headings that represented programs no longer offered by the university were also deleted. The task group somewhat arbitrarily decided that up to five “top” databases would be sufficient for each subject and that these would be ranked; everything labelled as “other” would be listed alphabetically. Eventually, the task group consulted with the appropriate liaison librarians to create additional subject headings to better reflect the diversity of disciplines where the original subject heading contained close to 100 or more entries. For example, History had 110 entries and English had eighty-three. New subject headings based on geographic regions or sub-genres were added to each, matching how information was presented in the corresponding subject research guides. For example, instead of a single entry for “History,” the new database list contains seven additional headings:

  • History—Asia
  • History—Canada
  • History—Europe & the United Kingdom
  • History—Middle East
  • History—Military & War
  • History—Social & Cultural
  • History—United States of America

The task group planned to migrate the CRDB subject headings, which aligned with SFU’s faculties and departments, rather than use the formal Library of Congress subject headings found in the MARC 650 fields of the bibliographic records attached to each database electronic collection in Alma. Additionally, the task group decided to use CRDB resource types written in plain text because they would be easier to manage than the complex MARC tags and notations used to designate a resource type for Primo display.

The CRDB inventory also included a full export of all of the brief and full database descriptions for each CRDB record. Upon review, task group members felt that the brief descriptions were in many cases too brief. The full descriptions were also inconsistent, containing many broken links, extensive HTML code, and obsolete information. Since the migration plan involved overlaying CRDB data into MARC fields, it seemed unlikely that any MARC field could ingest the full descriptions with all of the extra HTML code and formatting. After discussion, the task group decided to use a single succinct database description in plain text for the new database list rather than maintain both brief and full descriptions. At this point, the task group met with two cataloger librarians to determine which MARC fields could be used to record the CRDB data without adversely affecting general cataloging standards and procedures. The task group needed a means to record a single CRDB database description, multiple subject headings, a top subject designation with a ranking number, and the resource type written in plain text. The catalogers suggested MARC 592 $a for the database description, MARC 690 $a for the subject headings with 690 $g for the “top” designation and ranking number, and MARC 691 $a for the resource type. MARC 59X fields are reserved for local notes, and 69X fields are for local subject use; the 69X fields are also repeatable, accommodating multiple subject headings in a single bibliographic record. The Systems Consultant modified the CRDB MARC export to match this specification. Simultaneously, support staff reconciled all CRDB records to ensure the presence of the CUFTS ERM number in the MARC record linked to the corresponding Alma database-type electronic collection for matching. Since the library went live with Alma and Primo in May 2017, new databases had been added to the CRDB that lacked the CUFTS ERM number in their corresponding Alma MARC records. Thus, support staff confirmed that every CRDB record matched an Alma database electronic collection with a MARC bibliographic record containing the matching CUFTS ERM number. Next, the task group tested the process of exporting CRDB MARC records and importing the records into Alma to overlay the CRDB database descriptions, subject headings and rankings, and the resource types into their appropriate MARC tags. Then the Systems Consultant wrote scripts using the Alma API to pull the relevant data from Alma to populate a prototype that eventually became the new database list.
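
The full export specification appears in appendix B. As a rough illustration of how the local fields described above fit together, the following Python sketch assembles them as MARCXML using only the standard library; the description, subject headings, ranking, and resource type are invented examples, and the exact encoding of the “top” designation and ranking number in 690 $g is an assumption rather than a transcription of SFU’s records.

    import xml.etree.ElementTree as ET

    MARC_NS = "http://www.loc.gov/MARC21/slim"
    ET.register_namespace("", MARC_NS)

    def add_datafield(record, tag, subfields):
        """Append a MARC datafield with blank indicators and (code, value) subfields."""
        df = ET.SubElement(record, f"{{{MARC_NS}}}datafield",
                           {"tag": tag, "ind1": " ", "ind2": " "})
        for code, value in subfields:
            sf = ET.SubElement(df, f"{{{MARC_NS}}}subfield", {"code": code})
            sf.text = value
        return df

    record = ET.Element(f"{{{MARC_NS}}}record")

    # 592 $a: a single plain-text database description (local note field).
    add_datafield(record, "592", [("a", "Multidisciplinary index of scholarly journals, "
                                        "magazines, and newspapers, with much full text.")])

    # 690 $a/$g: repeatable local subject headings; $g carries the "top" designation and rank.
    add_datafield(record, "690", [("a", "History—Canada"), ("g", "top 2")])
    add_datafield(record, "690", [("a", "Communication")])

    # 691 $a: the resource (content) type written in plain text.
    add_datafield(record, "691", [("a", "Article index")])

    print(ET.tostring(record, encoding="unicode"))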

The task group knew that the decisions to use only one description for each database and to limit the “top” designation to five databases per subject might be met with some consternation among liaison librarians. The prospect of editing every database description was daunting for both task group members and liaison librarians. Although the new database list could have simply used the existing brief descriptions from the CRDB, task group members felt strongly that these descriptions required significant editing. Initially, liaison librarians were responsible for populating the brief and full descriptions for the databases purchased with their departmental collection budgets. However, there had been significant staff turnover since 2008, and liaison librarians’ priorities had changed: they began to focus more on scholarly communication and direct research support and less on reference and general information services. Maintaining the CRDB database descriptions therefore became less important to them, resulting in dated, obsolete descriptions and broken links. The task group did not want to migrate incorrect and outdated information into the new database list. To avoid unnecessary workflow between Alma and the CRDB, the task group required that all the edits and changes be performed by the liaison librarians in the CRDB environment. This requirement would enable a single export of all the edited CRDB MARC records for migration into Alma. Thus, the task group needed the liaison managers’ support and endorsement to ensure that the tasks required of the liaison librarians would be completed before the final migration of data from the CRDB into Alma.

The CRDB Replacement Task Group’s final report and recommendations included an appendix titled “Database List Criteria and Guidelines” that was adopted by the liaison managers and supported as a library-wide policy. This appendix consisted of a seven-page document that formed the policy, practice, administration, and maintenance for the new database list. The document outlined the purpose of the database list, the criteria for inclusion in the list, and guidelines for subject headings, resource types, and database descriptions. It also included statements on responsibility and authority for administering the database list. The section of guidelines for database descriptions was written entirely by librarians in the eBranch, who are responsible for the user experience of the library’s online presence. In addition to limiting descriptions to plain text, the eBranch added website usability principles adapted to the needs of the database list, such as keeping descriptions brief, using jargon-free language, avoiding vendor marketing terminology, and keeping descriptions evergreen by not listing specific dates or facts that could date quickly.

CRDB Replacement Task Group Results

With the endorsement of the ILS Steering Committee and the liaison managers, the task group implemented the project plan. In addition to the programming provided by the Systems Consultant to make the new database list almost identical to the CRDB, the project plan gave liaison librarians ten weeks, from mid-May to July 31, 2018, to edit the CRDB subject lists and database descriptions. Liaison librarians had to select and rank up to five “top” databases in each of their liaison subject headings, and review and edit each database description for their assigned databases. Permissions in the CRDB were altered so that liaison librarians could edit only the brief description, preventing mistakes such as editing the full description unintentionally. Knowing that some subject areas contained an overwhelming number of databases, the task group arranged with the liaison managers to provide assistance from reference librarians (contract librarians hired to help with public services and other projects) and a Master of Library and Information Studies student employed by the library at the time. The Electronic Resources Librarian and eBranch librarians handled general, multidisciplinary, and orphaned subject headings and databases. Of the 230 open access and freely available resources in the CRDB, fifteen were removed because they had ceased, disappeared, or duplicated content available in another source. Where there were separate entries for e-journal and e-book platforms from the same publisher on the same website, the journal entry was edited to accommodate both e-journal and e-book content, and the e-book platform entry was deleted.

Prior to Primo implementation, the CRDB was needed to expose license information for e-books at the platform level, which could not be accommodated in the former Millennium ILS. Separate database entries for e-book platforms to display license information were no longer necessary because Primo could display license information on each individual e-book title, which satisfied the library’s obligation to expose license permissions related to course packs and electronic reserves under the Canadian copyright environment. Indeed, by 2018, many publishers had consolidated their e-journal and e-book content under a single website. When the project began, the CRDB contained 767 entries. By August 1, 2018, the number of entries in the CRDB had decreased to 740 for import into Alma.

SFU library’s new database list went live on August 15, 2018. Dual access was maintained until the end of August, with notes on every CRDB entry warning users of the impending URL change to the new database list and asking them to update their links or bookmarks. In addition to notifying all library staff of the change, an email notice was sent separately to SFU’s Centre for Online Distance Education (CODE) to provide advance notice so that CODE staff could update links in online courses. On September 1, 2018, the CRDB was decommissioned and redirects were provided to point users to the new database list. Many links to the CRDB on library webpages were rewritten systematically by the Systems Consultant, who is also the technical administrator for the library’s public website. Any links that could not be rewritten programmatically were reviewed manually for context to determine where they should point or whether they should be deleted. In addition to mimicking the CRDB’s overall look and feel, the new database list included some functionality that the CRDB had not provided. For example, the CRDB had offered users a single drop-down menu to select a subject heading, and an additional facet could be added only after a user selected a subject. The new database list allowed users to select from three initial drop-down menus: by subject, by content type (previously referred to as resource type but renamed to distinguish it from Primo resource type facets), and by provider.

The new database list has required little attention aside from regular maintenance in Alma as part of the life cycle of managing e-resources. The API retrieves data from Alma and rebuilds the new database list daily at 1:00 am Pacific time to account for any changes made to Alma database records, such as databases added or removed, access URL changes, or revised descriptions or subject heading assignments. Because the new database list contained no significant changes to the CRDB’s main functions and had almost exactly the same look and feel as the CRDB, the change was likely imperceptible to most end-users. In the absence of any negative comments or feedback from end-users, the task group safely assumed that the transition from the CRDB to the new database list was successful.
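
The paper does not detail how the nightly rebuild is triggered or where its output is stored. The short sketch below assumes the harvested entries are written to a static JSON file that the website front end reads, with the 1:00 am Pacific schedule handled externally (for example, by cron); the file name and payload layout are assumptions for illustration.

    import json
    from datetime import datetime, timezone

    def rebuild_database_list(entries, output_path="database_list.json"):
        """Write the harvested entries plus a build timestamp to a static file for the front end."""
        payload = {"built_at": datetime.now(timezone.utc).isoformat(), "databases": entries}
        with open(output_path, "w", encoding="utf-8") as fh:
            json.dump(payload, fh, ensure_ascii=False, indent=2)

    if __name__ == "__main__":
        # In practice the entries would come from the Alma API harvest sketched earlier,
        # and this script would be scheduled to run at 1:00 am Pacific time (e.g., via cron).
        rebuild_database_list(entries=[])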

Discussion: Content Strategy for SFU Library’s New Database List

The CRDB Replacement Task Group did not begin its work with a plan to create formal criteria and guidelines for the library’s database list. As at many academic libraries, additions to the library’s list of databases were ad hoc at best and followed some very basic guidelines, such as:

  • abstracting and indexing sources;
  • online bibliographies;
  • online statistical sources;
  • the resource should be searchable or have some kind of searching component;
  • a subscription-based collection of e-books; and
  • a searchable collection of reference works (dictionaries, encyclopedias, handbooks, directories).

These basic guidelines were originally documented to justify declining requests from liaison librarians to add free websites and blogs to the CRDB. Coincidentally, as the CRDB Replacement Task Group was beginning its work, a discussion took place in March 2018 on the Electronic Resources in Libraries email list (ERIL-L) asking list members to describe their criteria for adding e-resources to an A to Z database list. Some respondents stated that they had no documented criteria. Others had similarly worded guidelines about abstracting and indexing sources and licensed or subscribed databases. A few respondents expressed a wish for more control and centralization, stating that database criteria were too inclusive at their libraries.32 The ERIL-L discussion led to the idea of creating something more formal to replace the relatively short list of bullet points that described what could be added to the CRDB.

Indeed, task group members anticipated that convincing a large, diverse group of liaison librarians, who generally managed their own professional work and time, to perform the work requested in the CRDB Replacement Task Group’s report and recommendations might be challenging. None of the task group members directly supervised the liaison librarians. Thus, it was important for the three liaison managers to publicly support the task group’s recommendations as a library-wide priority. Adding the formal “Database List Criteria and Guidelines” to the final report was a strategic maneuver to add weight to the liaison managers’ support. The task group emphasized the need for the liaison librarians’ subject matter expertise to rank the databases listed in their subject liaison areas and to rewrite or review the database descriptions so that the new database list would be current and useful to end-users. Knowing that some liaison librarians felt a sense of ownership of the databases acquired through their departmental collection budgets, the task group anticipated that the liaison librarians would accept the responsibility of rewriting the database descriptions without protest. It was a delicate matter for the task group to direct the work of other library professionals in a project where the liaison librarians had minimal input into the final report and recommendations.

From a content strategist’s point of view, the “Database List Criteria and Guidelines” form part of the content strategy for SFU library’s new database list. A copy is included as an appendix to this paper to provide a model for other library personnel who may wish to replicate, create, or modify existing database list criteria documents. The task group created a detailed inventory of all CRDB records; in content strategy terms, this inventory was the equivalent of a content audit. The task group performed a quantitative audit of all the CRDB records and their subject headings, resource types, and rankings, and completed a qualitative assessment of the CRDB database descriptions. From this audit, the task group identified where subject headings and resource types could be added or deleted. The qualitative assessment of the brief and full database descriptions helped the task group analyze the extent of outdated information and broken links in the CRDB database descriptions. The CRDB Replacement Task Group’s primary goal of centralizing database information in Alma was balanced by a strong focus on end-user discovery and access for the database list. Thus, the core content strategy for the new database list was not to centralize data in Alma, but was defined by the purpose as written in the new “Database List Criteria and Guidelines” document: “The database list provides increased discovery of the library’s list of electronic resources by subject and content type separate from the library’s main catalogue Primo. The database list is used by patrons and library staff who are looking for guidance in finding information for their research needs among the many hundreds of resources available.” Following Brain Traffic’s original content strategy quad, each database description, together with its subject assignments and ranking, forms the substance or content of the new database list. Figure 6 is a copy of Brain Traffic’s original content strategy quad with the core strategy at the center.33 Brain Traffic is a content strategy consulting firm founded by Halvorson, author of Content Strategy for the Web.

The structure of the content strategy is described by the technical specifications for the new database list detailed in the CRDB Replacement Task Group’s final report and recommendations, such as the MARC fields used for the subject headings, rankings, and database descriptions. A copy of these specifications (appendix B) is also provided with this paper for readers who are interested in the technical details. The guidelines for assigning subject headings and resource types, together with the guidelines for writing the database descriptions, formed the workflow quadrant: the standards to which content creators adhere when creating or revising content. By assigning the Electronic Resources Librarian full responsibility for administering and interpreting the “Database List Criteria and Guidelines,” the task group fulfilled the governance quadrant. Like the lifecycle of e-resources, content strategy also has a lifecycle. The Electronic Resources Librarian can use her professional experience managing the e-resources lifecycle to govern the content strategy for the new database list as part of the routine management of e-resources. As vendors report platform changes, mergers, and other changes to licensed e-resources, she can take action or direct staff or librarians to review and revise as needed. As renewals and new orders for e-resources move through the acquisitions process, the Electronic Resources Librarian is well positioned to apply the content strategy to any new content added to the database list so that it adheres to the criteria and guidelines. While licensed content can be easily incorporated into the content strategy, an editorial calendar for reviewing free and open access resources in the database list should be integrated to ensure consistency over time. No such calendar currently exists, but it is under consideration.

Conclusion

The original goals of the CRDB Replacement Task Group were achieved. Database information was centralized in Alma to reduce duplication of staff time, effort, and library system resources. The new database list replicated the main functions of the CRDB for access and discovery with no disruption to end-users. The new database list retained the subject headings that aligned with SFU’s departments and courses rather than formal Library of Congress subject headings. Up to five databases could be listed as “top” and were ranked by the subject liaison librarian; all “other” databases within a subject were listed alphabetically. Additionally, every database description was reviewed and rewritten according to a set of guidelines. The “Database List Criteria and Guidelines” document was published and adopted by the liaison managers, and now forms the policy for maintenance and administration of the public-facing database list. These criteria and guidelines can be considered the basis for a content strategy for SFU library’s new database list.

Although the CRDB Replacement Task Group did not intend to create new policies and strategies, and the phrase “content strategy” was never invoked by any task group member, the work, analyses, and final outcomes of the task group followed the practices of content strategists. Academic libraries are a significant source of scholarly content, and a library’s identity and reputation can be formed by its digital presence. Strategies adapted from key trends in the digital industry can have significant positive benefits for the academic library community, and applying content strategy in conjunction with web usability principles can provide a better user experience. An easy-to-use database list with content that ultimately leads end-users to data, sources, information, and supporting research for their academic pursuits can be achieved through continuous application of a content strategy. When users find what they are seeking, the library’s reputation as a reliable source is maintained.

References and Notes

  1. Kevin Stranack, “The ReSearcher Software Suite: A Case Study of Library Collaboration and Open Source Software Development,” Serials Librarian 55, no. 1–2 (2008): 117–39, https://doi.org/10.1080/03615260801970824; Donald Taylor, Frances Dodd, and James Murphy, “Open-Source Electronic Resource Management System: A Collaborative Implementation,” Serials Librarian 58, no. 1–4 (2010): 61–72, https://doi.org/10.1080/03615261003623039.
  2. Leanna Jantzi, Jennifer Richard, and Sandra Wong, “Managing Discovery and Linking Services,” Serials Librarian 70, no. 1–4 (2016): 184–97, https://doi.org/10.1080/0361526X.2016.1153331.
  3. Ex Libris, “Primo November 2017 Release Notes,” n.d., https://knowledge.exlibrisgroup.com/Primo/Product_Documentation/030Highlights/026Primo_November_2017_Highlights#Database_Search_for_Alma_(New_UI).
  4. Athena Hoeppner, “Database Lists A to Z: A Practitioner’s Tips and Caveats for Managing Database Lists,” Serials Librarian 73, no. 1 (2017): 27–43, https://doi.org/10.1080/0361526X.2017.1320779.
  5. Kailey Brisbin, Melanie S. Parlette-Stewart, and Randy Oldham, “A-Z List Migration: Employing Collaborative Project Management at the University of Guelph McLaughlin Library,” Collaborative Librarianship 10, no. 4 (2018): 234–50, https://digitalcommons.du.edu/collaborativelibrarianship/vol10/iss4/4.
  6. Hoeppner, “Database Lists A to Z.”
  7. Brisbin et al., “A-Z List Migration.”
  8. Christine Tobias, “A Case of TMI (Too Much Information): Improving the Usability of the Library’s Website through the Implementation of LibAnswers and the A–Z Database List (LibGuides V2),” Journal of Library & Information Services in Distance Learning 11, no. 1–2 (2017): 175–82, https://doi.org/10.1080/1533290X.2016.1229430.
  9. Veronica Ramshaw, Véronique Lecat, and Thomas Hodge, “WMS, APIs and LibGuides: Building a Better Database A-Z List,” Code4Lib Journal no. 41 (2018), https://journal.code4lib.org/articles/13688.
  10. Andrea Imre et al., “The Future Is Flexible, Extensible, and Community-Based: Stories of Successful Electronic Resources Management,” Serials Librarian 70, no. 1–4 (2016): 204–10, https://doi.org/10.1080/0361526X.2016.1147878.
  11. Zebulin Evelhoch, “Web-Scale Discovery: Impact on Library Database Web Page Views and Usage,” Journal of Web Librarianship 10, no. 3 (2016): 197–209, https://doi.org/10.1080/19322909.2016.1191048.
  12. Dana M. Caudle and Cecilia M. Schmitz, “Web Access to Electronic Journals and Databases in ARL Libraries,” Journal of Web Librarianship 1, no. 1 (2007): 3–26, https://doi.org/10.1300/J502v01n01_02.
  13. Kate Fuller et al., “Making Unmediated Access to E-Resources a Reality,” Reference & User Services Quarterly 48, no. 3 (2009): 287–301, https://doi.org/10.5860/rusq.48n3.287.
  14. Amy Fry and Linda Rich, “Usability Testing for E-Resource Discovery: How Students Find and Choose e-Resources Using Library Web Sites,” Journal of Academic Librarianship 37, no. 5 (2011): 386–401, https://doi.org/10.1016/j.acalib.2011.06.003.
  15. Jeannette Ho, “Enhancing Access to Resources Through the Online Catalog and the Library Web Site: A Collaboration Between Public and Technical Services at Texas A&M University Libraries,” Technical Services Quarterly 22, no. 4 (2005): 19–37, https://doi.org/10.1300/J124v22n04_02.
  16. Beverly J. Geckle, Mary Ellen Pozzebon, and Jo Williams, “Records for Electronic Databases in the Online Catalog at Middle Tennessee State University,” Journal of Electronic Resources Librarianship 20, no. 4 (2008): 243–55, https://doi.org/10.1080/19411260802557457.
  17. Rahel Anne Bailie, “What’s the Buzz about Content Strategy?,” Bulletin of the American Society for Information Science & Technology 37, no. 2 (2011): 19–22, https://doi.org/10.1002/bult.2011.1720370207.
  18. Kristina Halvorson, “Understanding the Discipline of Web Content Strategy,” Bulletin of the American Society for Information Science & Technology 37, no. 2 (2011): 23–25, https://doi.org/10.1002/bult.2011.1720370208.
  19. Rebecca Blakiston, “Developing a Content Strategy for an Academic Library Website,” Journal of Electronic Resources Librarianship 25, no. 3 (2013): 175–91, https://doi.org/10.1080/1941126X.2013.813295.
  20. Melia Fritch and Joelle E. Pitts, “Adding Bite to the Bark: Using LibGuides2 Migration as Impetus to Introduce Strong Content Standards,” Journal of Electronic Resources Librarianship 28, no. 3 (2016): 159–71, https://doi.org/10.1080/1941126X.2016.1200926.
  21. Bethany Greene, “Developing a Freely Accessible/Open Access Resource Management Policy at Duke University Libraries: A Case Study,” Serials Review 44, no. 3 (2018): 182–87, https://doi.org/10.1080/00987913.2018.1534537.
  22. Ian Demsky and Suzanne Chapman, “Taming the Kudzu: An Academic Library’s Experience with Web Content Strategy,” in Cutting-Edge Research in Developing the Library of the Future: New Paths for Building Future Services, ed. Bradford Lee Eden, Creating the 21st-Century Academic Library 3 (Lanham: Rowman & Littlefield, 2015), 19–25.
  23. Ilka Datig, “Revitalizing Library Websites and Social Media with Content Strategy: Tools and Recommendations,” Journal of Electronic Resources Librarianship 30, no. 2 (2018): 63–69, https://doi.org/10.1080/1941126X.2018.1465511.
  24. Kristy Newton and Michelle J. Riggs, “Everybody’s Talking but Who’s Listening? Hearing the User’s Voice above the Noise, with Content Strategy and Design Thinking” (presentation, VALA 2016: Libraries, Technology and the Future, Melbourne, Australia, February 11, 2016), www.vala.org.au/conference/vala2016-proceedings/vala2016-session-14-newton/.
  25. Sherry Buchanan, “A Toolkit to Effectively Manage Your Website: Practical Advice for Content Strategy,” Weave: Journal of Library User Experience 1, no. 6 (2017), https://doi.org/10.3998/weave.12535642.0001.604.
  26. Courtney McDonald and Heidi Burkhardt, “Library-Authored Web Content and the Need for Content Strategy,” Information Technology & Libraries 38, no. 3 (2019): 8–21, https://doi.org/10.6017/ital.v38i3.11015.
  27. “Computer Indexes and Databases at SFU Libraries: Alphabetical Listing,” July 20, 1997, https://web.archive.org/web/19970720123248/http:/www.lib.sfu.ca/kiosk/other/cmptind.htm.
  28. “SFU Library—SFU Library Databases, Alphabetical List,” September 6, 2002, https://web.archive.org/web/20020906015519/http:/www.lib.sfu.ca/researchtools/databases/dbofdb.htm?Display=List.
  29. “SFU Library—Chemistry Databases,” December 26, 2014, https://web.archive.org/web/20141226134722/cufts2.lib.sfu.ca/CRDB4/BVAS/browse?subject=563.
  30. “SFU Library—Reaxys,” September 18, 2015, https://web.archive.org/web/20150918204327/http:/cufts2.lib.sfu.ca/CRDB4/BVAS/resource/9113.
  31. Hoeppner, “Database Lists A to Z.”
  32. See the ERIL-L Archives entries for March 2018 under the thread “criteria for e-resources on A-Z databases list.” http://lists.eril-l.org/pipermail/eril-l-eril-l.org/2018-March/thread.html.
  33. Brain Traffic, “Brain Traffic Lands the Quad!,” Brain Traffic, July 6, 2017, www.braintraffic.com/blog/brain-traffic-lands-the-quad. Brain Traffic has since revised and published a new take on their content strategy quad: “New Thinking: Brain Traffic’s Content Strategy Quad,” Brain Traffic, April 26, 2018, www.braintraffic.com/blog/new-thinking-brain-traffics-content-strategy-quad.

Appendix A. Database List Criteria and Guidelines

Purpose

The database list increases discovery of the library’s licensed electronic resources by subject and resource/content type, separate from the library’s main catalogue, Primo. It is used by patrons and library staff seeking guidance in finding information for their research needs among the many hundreds of resources available. Free and open access content may be included if it meets the criteria outlined below and is considered of significant value and interest to the SFU community. Subject headings, top rankings within a subject, descriptions, and resource/content types form stable and reliable information about a database. These criteria and guidelines are intended to limit the need for frequent edits to database information in Alma.

Criteria for Inclusion in the Database List

Licensed Databases

  • Abstracting and indexing sources, full-text databases, and searchable online bibliographies.
  • Statistical sources including datasets and summarized statistics.
  • A searchable collection of datasets and/or statistical sources.
  • Searchable collections of full-text content and/or digital content.
  • Searchable collections of streaming audio and video.
  • Reference sources (encyclopedias, handbooks and directories) that can be searched and are of significant value for their subject areas.
  • Publisher websites with a search option where the SFU library has significant access entitlements and where there is evidence that users are accustomed to searching the publisher’s website directly.
  • Publisher websites that can limit the display of material to that licensed by the library.
  • A database with searching capabilities or a searchable component.

Open Access and Free Databases

  • Searchable collections of open access or free content published by the SFU Library or by another university department or group.
  • Significant collection of BC or Canadian content likely to be of interest for researchers and students at SFU.
  • Significant collection of content that would otherwise meet the library’s subject collection policies.
  • Collections of strategic value to the SFU Library or SFU.
  • Stable, reliable, and searchable sources of academic scholarly content, regularly updated and relevant for researchers and students at SFU, with a persistent URL that is not likely to change over time.

Guidelines for Subject Lists

  • Subject lists are divided by “top” and “other.”
  • “Top” databases in any subject list will be limited to a maximum of 5 databases.
  • Liaison librarians may rank the “top” 5 databases in order of importance for display in the subject listing.
  • “Other” databases in the subject list will be arranged alphabetically (see the ordering sketch after this list).
  • If a librarian wants to add a new database to the “top” ranked listing for a subject that already has 5 “top” databases listed, the librarian must select one of the existing “top” databases to remove from that subject list.
  • New subject headings will be considered if there is a demonstrated need for a new subject heading, such as a new program, new areas of research or new course offerings.
  • Ideally, there should be at least 5 databases to include under any subject list.
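
To make the intended ordering concrete, here is a minimal sketch (in Python, not SFU’s production code) of how a subject list could be ordered under these guidelines: ranked “top” databases first, followed by the remaining databases alphabetically. The record structure and field names are hypothetical.

    # Hypothetical entries: each database has a title and, optionally, a
    # "top" rank (1-5) for the subject; unranked databases sort alphabetically.
    from typing import Optional, TypedDict

    class DatabaseEntry(TypedDict):
        title: str
        top_rank: Optional[int]  # 1-5 if ranked "top" for this subject, else None

    def order_subject_list(entries: list[DatabaseEntry]) -> list[DatabaseEntry]:
        """Return ranked 'top' databases first, then the others alphabetically."""
        top = sorted((e for e in entries if e["top_rank"] is not None),
                     key=lambda e: e["top_rank"])
        others = sorted((e for e in entries if e["top_rank"] is None),
                        key=lambda e: e["title"].lower())
        return top[:5] + others  # guideline: at most five "top" databases

    # Example: a hypothetical chemistry subject list.
    chemistry = [
        {"title": "Web of Science", "top_rank": 2},
        {"title": "AccessScience", "top_rank": None},
        {"title": "Reaxys", "top_rank": 1},
    ]
    print([e["title"] for e in order_subject_list(chemistry)])
    # ['Reaxys', 'Web of Science', 'AccessScience']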

Guidelines for Resource/Content Types

  • Each database shall be given one resource/content type.
  • Where a database contains multiple content types, select the resource/content type that best describes it.
  • New resource/content types will be considered if there is a demonstrated need for an additional resource/content type.
  • Ideally, there should be at least 5 databases to include under any resource/content type.

Resource/Content Type Definitions

  • Datasets: Computer and machine-readable data files.
  • Digital collection: Objects that have been digitized, which may include text, images, audio, video, or print material. Often includes optical character recognition (OCR) to transform the original printed or written material. Generally includes content that is not born digital.
  • Ebook collection: A significant, searchable collection of ebooks from a single publisher or platform.
  • Ejournal collection: A significant, searchable collection of full-text journals from a single publisher or platform.
  • Full-text database: A searchable collection of content containing the complete text of materials, such as books, journals, transcripts, or other textual documents.
  • Geospatial: Data, software, and tools for manipulating data associated with a location or geographic place.
  • Image collection: Collections of digitized images, photographs, slides, or other visual content.
  • Index: An abstracting/indexing source that does not natively contain full-text content.
  • Major reference work: Current and regularly updated handbooks, encyclopedias, and guides that are core reference works for their subject area.
  • News sources: Full-text online sources of newspapers, newswires, news transcripts, press digests, and journalistic reports; may also serve as an index to newspapers.
  • Partial full-text database: A searchable indexing/abstracting source with multiple content types (newspapers, journal articles, books, proceedings) and full text for only a portion of the content indexed.
  • Primary sources: Documents, manuscripts, diaries, speeches, letters, minutes, interviews, news film footage, autobiographies, and official records. May also include digitized versions of original creative works: poetry, drama, novels, music, art, and visual material.
  • Statistical sources: Numeric information offered in a human-friendly, summarized, readable format, often from government and non-governmental agencies.
  • Streaming audio: Music, oral histories, and other audio files.
  • Streaming video: Video content such as films, performances, interviews, lectures, and instructional or training videos.

Guidelines for Database Descriptions

Introduction

Many researchers, especially undergraduates, browse our databases by subject area and very quickly select a database based on its description. Descriptions are important in that they allow end users to ascertain whether a database will be useful for their research.

As part of replacing the CRDB, we ask that you review and, if necessary, revise the brief description for each database assigned to your subject area that has a record in the CRDB. When revising a description, please adhere to the following guidelines.

Note that there will now be a single description for each database instead of separate brief and full descriptions. For help with writing brief descriptions, contact the eBranch.

Guidelines

Keep It Brief

The description should consist of 1–3 concise sentences.

Use Plain, Jargon-Free Language

The description should be written in plain, jargon-free language.

Outline Utility or Value of Database to End User

Descriptions should briefly and clearly explain why an end user should choose this specific database rather than a different one. The user will see a list of databases for a subject and they need help choosing one. Be sure to include the most important information about coverage in simple language, and where possible, address issues where users are likely to be confused. For example:

  • JSTOR does not include recent articles.
  • Early English Books Online (EEBO) includes works published between 1475 and 1700 [use X database for recent articles on works published during this period].

Examples of well-written descriptions:

MathSciNet

Reviews and abstracts of books, articles and conference proceedings on mathematics, statistics, and computing science.

PsycTESTS

Psychological tests, measures, scales, surveys, and other assessment tools. In most cases actual test or test items provided, but without scoring key information.

Compustat North America & Global

Detailed financial and market data covering publicly traded companies from around the world.

For databases with relatively broad appeal, write the description so it can be understood by a broad cross-section of end users; for highly specialized databases, write the description for a specialist audience.

Specialized:

Thomson Financial Ownership: 13f Institutional Holdings

CDA/Spectrum Institutional 13(f) Common Stock Holdings and Transactions.

Non-specialized:

Associations Canada

Details on Canadian organizations and international groups including industry, commercial and professional associations, registered charities, and special interest organizations.

Non-specialized:

Project MUSE Search

Humanities and social science ebooks and journals.

Keep Content Evergreen

Keep content evergreen by not stating facts or dates that may change quickly. DO NOT USE specific numbers or facts that will quickly become dated, e.g., “Contains 12,341 journals.”

Use Plain Text Only (no HTML)

Use plain text only. DO NOT USE HTML or any formatted text. Brief descriptions will no longer accept HTML.

Leave out unnecessary or unwanted text

With very rare exceptions:

  1. AVOID the words “online,” “database,” “searchable,” “web-based,” and “digital” (they are unnecessary).
  2. DO NOT repeat standard functions, such as “keyword searchable,” “allows citations to be emailed,” etc.
  3. DO NOT USE OR COPY promotional writing or “marketese,” e.g., “The most comprehensive and heavily traveled resource on the Internet.”
  4. DO NOT REPEAT words already included in the title of the database, e.g., “Computer Science Bibliographies—A collection of bibliographies in the field of computer science.”
  5. DO NOT use abbreviations or acronyms (e.g., CCICED, CHASS) without writing out the full name.

Database List Administration

Interpretation of the criteria and guidelines for the database list resides with the Electronic Resources Librarian.

When a new resource that fits the criteria for inclusion in the database list is added to the library’s collection, the Electronic Resources Librarian will ask the appropriate liaison or subject librarian(s) to select the subjects to add, to decide whether to add the database to the “top” list and update rankings (if necessary), and to provide a brief description.

Requests for changes to database descriptions, subject heading assignments, top rankings, resource/content types, the addition of new subject headings or new resource/content types shall be made to the Electronic Resources Librarian.

The Electronic Resources Librarian will provide direction to cataloguing staff for edits, updates, and changes to database descriptions, subject heading assignments, top rankings, and resource/content types, and for the addition of new subject headings and/or new resource/content types; all of these are managed in MARC fields following the Technical Specifications in appendix B.

Appendix B. Technical Specifications

CRDB MARC export and Alma import field mapping (CRDB/ERM field → MARC field):

  • ERM# → 035
  • Description brief → 592
  • Subjects → 690 a
  • Subjects: top and rank # → 690 g
  • Resource/content type → 691

CRDB MARC records are exported as MARC8.

Use MARCEdit to convert MARC8 to UTF-8 for import into Alma.
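
As an illustration only, the crosswalk above can be expressed as a simple mapping. The sketch below (in Python) assumes the CRDB/ERM export has already been parsed into a dictionary with hypothetical key names and converted to UTF-8; it is not the library’s actual export code.

    # Hypothetical crosswalk from CUFTS ERM/CRDB fields to the local MARC
    # fields listed above (035, 592, 690, 691).
    CRDB_TO_MARC = {
        "erm_number": "035",           # CUFTS ERM record identifier
        "description_brief": "592",    # brief database description
        "subjects": "690 a",           # subject heading(s)
        "subjects_top_rank": "690 g",  # "top" designation and rank number
        "resource_type": "691",        # resource/content type
    }

    def crosswalk(crdb_record: dict) -> dict:
        """Map a parsed CRDB/ERM record (hypothetical keys) onto MARC tags."""
        return {tag: crdb_record[field]
                for field, tag in CRDB_TO_MARC.items()
                if crdb_record.get(field) is not None}

    # Example:
    print(crosswalk({"erm_number": "12345", "resource_type": "Index"}))
    # {'035': '12345', '691': 'Index'}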


New database list outputs via the Alma API for https://databases.lib.sfu.ca (database list element → Alma source):

  • Database name → MARC 245
  • Database description → MARC 592
  • Subjects → MARC 690 a
  • Subjects: top and rank # → MARC 690 g
  • Resource/content type → MARC 691
  • Connect button → Electronic collection > Additional information > Level URL
  • Proxy → Electronic collection > Additional information > Proxy enabled
  • Open access note and icon → Electronic collection > Additional information > Is free?
  • Authentication note* → Electronic collection > Notes > Authentication note
  • Public note** → Electronic collection > Notes > Public note
  • License terms*** → Electronic collection > General information > Acquisitions and license information > License (Course Pack Print, Course Pack Note, Course Reserve Electronic Copy, Course Reserve Note, Interlibrary loan electronic, Interlibrary loan note)

*Authentication note is reserved for databases requiring a privacy notice.

**Public note is reserved for communicating user limits. Where a database allows unlimited simultaneous users, no public note is added.

***License terms match Primo display.
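
For context, the public list at https://databases.lib.sfu.ca is assembled from the Alma values above via the Alma REST APIs. The sketch below (in Python) shows, under stated assumptions, how an electronic collection record might be retrieved and reduced to the non-MARC display elements in the table; the endpoint path mirrors Ex Libris’s documented electronic collections API, but the regional host and the JSON keys read in database_list_entry are assumptions to be verified against the actual response, and this is not the library’s production code.

    import requests

    ALMA_API = "https://api-na.hosted.exlibrisgroup.com/almaws/v1"
    API_KEY = "..."  # read-only Alma API key (placeholder)

    def fetch_collection(collection_id: str) -> dict:
        """Retrieve one electronic collection record from the Alma REST API."""
        resp = requests.get(
            f"{ALMA_API}/electronic/e-collections/{collection_id}",
            params={"apikey": API_KEY, "format": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def database_list_entry(collection: dict) -> dict:
        """Reduce a collection record to the non-MARC elements in the table.
        The JSON keys below are assumptions about the response shape."""
        return {
            "connect_url": collection.get("url_override") or collection.get("url"),
            "proxy_enabled": (collection.get("proxy_enabled") or {}).get("value"),
            "open_access": (collection.get("is_free") or {}).get("value"),
            "authentication_note": collection.get("authentication_note"),
            "public_note": collection.get("public_note"),
        }

    # Title, description, subjects, ranks, and resource/content type (MARC 245,
    # 592, 690, 691) would be read from the linked bibliographic record via the
    # /bibs API and merged with the values above.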

Figure 1. Computer Indexes and Databases at SFU Libraries on July 20, 1997.

Figure 2. SFU Library Databases also known as the DB of DBs on September 6, 2002.

Figure 3. CRDB list of databases for chemistry on December 26, 2014.

Figure 4. Reaxys database full record from CRDB displaying additional CUFTS ERM fields.

Figure 5. Two Alma electronic collections for EBSCO’s Academic Search Premier are listed for SFU’s library.

Figure 6. Brain Traffic’s original Content Strategy Quad. Reproduced by permission from Kristina Halvorson. Brain Traffic, “Brain Traffic Lands the Quad!,” Brain Traffic, July 6, 2017, www.braintraffic.com/blog/brain-traffic-lands-the-quad.
