lrts: Vol. 56 Issue 1: p. 14
Broken Links and Failed Access: How KBART, IOTA, and PIE-J Can Help
Sarah Glasser

Sarah Glasser is Serials/Electronic Resources Librarian, Joan and Donald E. Axinn Library, Hofstra University, Hempstead, New York; sarah.glasser@hofstra.edu
The author would like to thank her contacts at each of the three initiatives, Chad Hutchins from KBART, Adam Chandler from IOTA, and Cindy Hepfer from PIE-J, for their support and feedback during the writing of this article. The author would also like to thank Howard Graves and Pamela Harpel-Burke, both of Hofstra University, for their advice, encouragement, and proofreading.

Abstract

This paper highlights three industry initiatives currently working on ways to improve access to licensed electronic content. The three initiatives are KBART, IOTA, and PIE-J. Background information on OpenURL, link resolvers, and knowledge bases, as well as detailed descriptions of the access problems the initiatives were developed to solve, is provided. Understanding these initiatives can help those involved in the electronic serials supply chain improve their own work, communicate effectively with others, and advocate for adoption of best practices. Together, these initiatives hold great promise for a future with fewer broken links and improved access for users.


Libraries today rely heavily on electronic full-text content. Users like electronic access but become frustrated when links to content do not work. The OpenURL standard ushered in a new and much improved way of linking to licensed electronic content, but despite broad adoption of OpenURL, links still fail and access to licensed content still eludes users more often than librarians would like. Even when links resolve correctly, users sometimes are unable to find what they seek because of how journal content is displayed on provider websites. This paper discusses some of the reasons behind failed access and describes in detail three industry initiatives currently working on ways to improve access to electronic content. The three initiatives are recommended practices for Knowledge Bases and Related Tools (KBART), a two-year research project aimed at Improving OpenURLs Through Analytics (IOTA), and recommended practices for the Presentation and Identification of E-Journals (PIE-J). While these initiatives will not solve all access problems, they offer solutions to specific, known causes of electronic access failure. Understanding exactly what they do can help those involved in the electronic serials supply chain improve their own work, communicate effectively with others, and advocate for adoption of best practices by publishers and other content providers. To provide a foundation for understanding the initiatives, background information on OpenURL, link resolvers, and knowledge bases is presented, along with detailed descriptions of the access problems the initiatives were developed to address. The ultimate goal of this paper is to enhance understanding of the work being done by KBART, IOTA, and PIE-J to provide those who deal with electronic access issues with the information they need to effect change and ultimately bring better service to users.


Literature Review
OpenURL Linking

The initial (version 0.1) OpenURL syntax was developed in the late 1990s by Herbert Van de Sompel, who introduced it with Oren Beit-Arie in 2001.1 The current version, version 1.0, became a National Information Standards Organization (NISO) standard in 2004 and was reaffirmed in 2010.2 OpenURL was developed to solve the “appropriate copy” problem. The appropriate copy problem refers to the need to link users to incarnations of content to which their institution subscribes.3 Electronic content may be available in more than one place (publisher website, electronic journal aggregator, etc.). End users need to be directed to the copy they have permission to access (i.e., content licensed through their institution). Before the creation of the OpenURL framework, reference linking “involved hard-coding links between one content provider and another.”4 Such linking was referred to as “non-context-sensitive” linking and was problematic because it did not take into account the context of the user who followed the link.5 As a result, users were sometimes linked to the “wrong” or “inappropriate” copy of an article, i.e., one that they did not have permission to access.

The system of Digital Object Identifiers (DOIs) was being developed around the same time as the OpenURL and led to the formation of the International DOI Foundation (IDF) in 1997.6 DOIs are persistent, unique links assigned to digital content such as electronic journal articles. Each DOI is “paired with the object's electronic address, or URL, in an updateable central directory, and is published in place of the URL in order to avoid broken links while allowing the content to move as needed.”7 While DOIs offer persistent links that resolve even when content moves (because of, for example, a publisher or platform change), the DOI system has no mechanism to select the “appropriate copy” for particular users, and is therefore subject to the same appropriate copy problem that the OpenURL framework addressed. DOIs typically link to the publisher's site, regardless of whether the user has permission to access the content on that site.8 If a user has access to content through, for example, an aggregator database but not at the publisher's website, the DOI system alone has no way of knowing this. The OpenURL framework, on the other hand, is a dynamic linking model that can perform context-sensitive linking, “whereby links are flexible and able to take into account the user's institutional affiliations and the licenses of that institution.”9 OpenURL created a linking mechanism that takes into account what the particular user is allowed to access. By doing this, OpenURL solved the appropriate copy problem.

Realizing the limitation of DOIs with regard to the appropriate copy problem, DOI developers adjusted the system to work with OpenURL.10 Today the DOI system has the ability to identify a user's institutional affiliation and, using the OpenURL framework, send the request for electronic content through the institution's local link resolver instead of to the publisher's site. This solution, offered only to library affiliate members of CrossRef, the official DOI registration agency, provides “appropriate copy” resolution of DOI links.11 While DOI was originally part of the appropriate copy problem, it now works together with OpenURL to connect users to licensed electronic content they are authorized to use.

The OpenURL standard specifies a particular syntax for the transport of content-specific metadata, such as International Standard Serial Number (ISSN), volume, issue, start page, and article title, as well as the user's institutional affiliation (to know what the user has permission to access). When a user clicks on a citation from, for example, an abstracting and indexing database, an OpenURL string is built using the bibliographic metadata from the citation to “check all of the library's holdings and retrieve the full text if a match is found.”12 Because this link is where the OpenURL linking process begins (the source), it is referred to as the “source link” or “outbound link.” The structure of an OpenURL source or outbound link is illustrated in the following URL:

http://anylibrary.anyresolver.com/?genre=article&sid=[source ID]&issn=[ISSN]&title=[journal name]&atitle=[article title]&volume=[volume]&issue=[issue]&spage=[start page]&date=[yyyy]13

The brackets indicate placeholders for specific citation data. For simplicity's sake, this example uses version 0.1 of the OpenURL syntax; version 1.0 is similar but more complex.
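
To make the syntax concrete, the following short Python sketch assembles a version 0.1 style source OpenURL from citation metadata. The resolver address and the citation values are invented for illustration; they simply stand in for the bracketed placeholders above.

from urllib.parse import urlencode

# Hypothetical link resolver address, standing in for a library's actual resolver.
RESOLVER_BASE = "http://anylibrary.anyresolver.com/"

def build_source_openurl(citation):
    """Assemble a version 0.1 style source OpenURL from citation metadata."""
    params = {
        "genre": "article",
        "sid": citation["source_id"],        # identifier of the source database
        "issn": citation["issn"],
        "title": citation["journal_title"],
        "atitle": citation["article_title"],
        "volume": citation["volume"],
        "issue": citation["issue"],
        "spage": citation["start_page"],     # start page of the article
        "date": citation["year"],
    }
    return RESOLVER_BASE + "?" + urlencode(params)

# Invented citation used only to illustrate the resulting URL string.
citation = {
    "source_id": "ExampleDB",
    "issn": "1234-5678",
    "journal_title": "Journal of Example Studies",
    "article_title": "An Example Article",
    "volume": "12",
    "issue": "3",
    "start_page": "45",
    "year": "2005",
}
print(build_source_openurl(citation))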

Two key components of successful OpenURL linking are the link resolver and the knowledge base. The OpenURL standard defines the specifications and syntax of the OpenURL (for example, “atitle” means “article title”), but it is the link resolver, together with the knowledge base, that processes the information and ultimately provides users with links to appropriate copies.

A link resolver is “a software tool that deconstructs an OpenURL, separates out the elements that describe the required article, and uses these to create a predictable link to the appropriate service(s) identified by the user's library.”14 The “link to the appropriate service” is the link that takes users to the licensed full-text content, wherever it may reside (publisher's website, aggregator, etc.). Links to target content (as opposed to links from a source citation) are referred to as “target links” or “inbound links.”

A knowledge base is “an extensive database … that contains information about electronic resources, such as title lists, coverage dates, inbound linking syntax, etc.”15 “Inbound linking syntax” refers to the information on how to construct the target or inbound link, the link to the content at the target website (e.g., publisher's website). Individual libraries customize the knowledge base so that it reflects their particular holdings. Libraries do this by activating their subscribed or licensed content within the knowledge base. Activated titles are those the library users are entitled to access; they are the “appropriate copies” for that particular library's users. Libraries must be careful to activate only the content they have licensed.16 Although the link resolver does the linking, it relies on the knowledge base for information regarding which copies are “appropriate” (those activated) and for the metadata necessary to create a successful link to content (target link). Figure 1 outlines the basics of OpenURL linking.

When a user clicks on a citation (the source), a source OpenURL is generated (see above). The link resolver then deconstructs the OpenURL, parses out the metadata (ISSN, atitle, etc.), and matches it to the information in the knowledge base (to determine whether the library has access to the content, i.e., whether there is an “appropriate copy”). If a match is found, the link resolver generates a results page with target links to the appropriate copy or copies. The link resolver creates the target links using the link-to syntax (the formula used to construct target links) and bibliographic metadata stored in the knowledge base.
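
The resolution step can be sketched in a few lines of Python, assuming a toy in-memory knowledge base and an invented link-to syntax. Real link resolvers and knowledge bases are far more elaborate, but the flow is the one just described: parse the source OpenURL, match it against activated holdings, and build a target link.

from urllib.parse import urlparse, parse_qs

# Toy knowledge base: activated holdings keyed by ISSN, with coverage dates and
# an invented link-to syntax (the template used to construct target links).
KNOWLEDGE_BASE = {
    "1234-5678": {
        "title": "Journal of Example Studies",
        "coverage_start": 1995,
        "coverage_end": None,  # None means coverage to present
        "link_to": "http://provider.example.com/{issn}/vol{volume}/iss{issue}/p{spage}",
    }
}

def resolve(source_openurl):
    """Parse a source OpenURL, match it against the knowledge base, and
    return a target link if an appropriate copy is found."""
    query = parse_qs(urlparse(source_openurl).query)
    meta = {key: values[0] for key, values in query.items()}

    holding = KNOWLEDGE_BASE.get(meta.get("issn", ""))
    if holding is None:
        return None  # no appropriate copy (or a false negative if the metadata is bad)

    year = int(meta.get("date", "0")[:4])
    last_year = holding["coverage_end"] or year
    if not (holding["coverage_start"] <= year <= last_year):
        return None  # cited year falls outside the activated coverage dates

    # Build the target (inbound) link from the knowledge base's link-to syntax.
    return holding["link_to"].format(
        issn=meta["issn"], volume=meta["volume"],
        issue=meta["issue"], spage=meta["spage"])

print(resolve("http://anylibrary.anyresolver.com/?genre=article&sid=ExampleDB"
              "&issn=1234-5678&title=Journal+of+Example+Studies&atitle=An+Example"
              "&volume=12&issue=3&spage=45&date=2005"))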

To summarize, OpenURL is a framework that specifies syntax for context-sensitive reference linking. The link resolver is the software that does all the linking. Using the specifications and syntax of the OpenURL standard, the link resolver pulls apart the OpenURL source link (created from a citation), searches for a match in the knowledge base (which is customized by individual libraries so that it reflects the particular library's exact holdings), and, if a match is found, creates a target link to the full text using metadata in the knowledge base.

With OpenURL, the actual link to electronic content is no longer hard-coded or static, but rather flexible and dynamic. It is specific to the particular user's permissions and thus the target URL will be different for different users. OpenURL linking solved the appropriate copy problem and was deemed a great breakthrough for library reference linking. Since its ratification, the OpenURL framework has been widely adopted within the scholarly information supply chain.17

Causes of Failed Access

Despite the advent and wide adoption of OpenURL and DOIs, linking problems still occur. In their study on link resolver accuracy rates, Trainor and Price found that links failed nearly a third of the time (29 percent).18 An earlier study by Wakimoto, Walker, and Dabbour found that 20 percent of the full-text link options generated by their institution's OpenURL link resolver were erroneous, “either because they incorrectly showed availability (false positives) or incorrectly did not show availability (false negatives).”19 Both false positives and false negatives are forms of link failures. The broken link is the result of a false positive. While broken links are frustrating, false negatives are more elusive and arguably more troublesome. With no link to content appearing at all, false negatives represent a kind of unknown failed access that “can be more damaging to the user.”20 False negatives also represent paid content that is not being discovered and thus not being used. This both reduces the library's return on investment and leaves users dissatisfied.21

While the Wakimoto, Walker, and Dabbour study seems to have found a lower rate of link failures than the Trainor and Price study (20 percent versus 29 percent), Trainor and Price make a compelling argument for the reassignment of the Wakimoto, Walker, and Dabbour category “Correct—required search or browse for FT [full text]” from the “correct group” to the “error group,” stating that “When the target full text item or abstract with full text links is not presented on the target page, most users and even many librarians perceive the resolver as having failed.”22 This reassignment raised the total link resolver error rate for the Wakimoto, Walker, and Dabbour dataset to 35 percent.23

Link failures can occur in various stages along the OpenURL linking chain. If the metadata from the source citation is incorrect or incomplete, the link resolver may not be able to match it to the information in the knowledge base. If the metadata in the knowledge base is incorrect or incomplete, a match will similarly fail. Moving through the OpenURL linking chain, errors also can occur at the target website. Trainor and Price refer to these three main causes of link failures as source URL errors, knowledge base inaccuracies, and target URL translation errors.24

Metadata Problems

Chandler of Cornell University documents examples of link failures caused by problematic metadata sent from the source citation to the link resolver (source URL errors).25 Source URLs are created using the metadata from the source citation (see figure 1). If these metadata are incorrect or incomplete, the link resolver cannot match the information to the metadata in the knowledge base and the OpenURL chain breaks. In an attempt to ascertain why so many of the links from a particular abstracting and indexing database failed, Chandler manually reviewed a sample set of source OpenURLs. He found numerous metadata problems such as “malformed dates, volume and issue numbers combined into one field, reliance on the pages element instead of the start page element for linking, lack of identifiers, etc.”26 These metadata problems were causing the OpenURL links to fail.
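
The kinds of deficiencies Chandler describes can be flagged automatically. The Python sketch below is not his tool; it is only an illustration of how a source OpenURL might be checked for the problems listed above (malformed dates, combined volume and issue values, a pages element used instead of a start page, and missing identifiers).

import re
from urllib.parse import urlparse, parse_qs

def check_source_openurl(openurl):
    """Flag common metadata deficiencies in a source OpenURL (illustrative checks only)."""
    meta = {key: values[0] for key, values in parse_qs(urlparse(openurl).query).items()}
    problems = []
    if not re.fullmatch(r"\d{4}(-\d{2}){0,2}", meta.get("date", "")):
        problems.append("malformed or missing date")
    if "/" in meta.get("volume", ""):
        problems.append("volume and issue apparently combined in one field")
    if "spage" not in meta and "pages" in meta:
        problems.append("pages element used instead of start page (spage)")
    if not any(key in meta for key in ("issn", "eissn", "doi")):
        problems.append("no identifier (ISSN, eISSN, or DOI)")
    return problems

# An invented, deliberately flawed source OpenURL.
flawed = ("http://anylibrary.anyresolver.com/?genre=article&sid=ExampleDB"
          "&title=Journal+of+Example+Studies&volume=12/3&pages=45-60&date=05/2011")
print(check_source_openurl(flawed))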

Another area of link failure reported in the literature concerns the accuracy of the metadata stored in the knowledge base. Knowledge base vendors obtain metadata for licensed content from content providers, usually in the form of title lists.27 The quality of the data in the knowledge base “depends on the quality of the data that are supplied by the content providers.”28 If the data are inaccurate, incomplete, or inconsistently formatted, they enter the knowledge base with these deficiencies. If not corrected or otherwise normalized by knowledge base developers, the problematic data propagate throughout the OpenURL supply chain, causing failed access to licensed electronic content.

Writing of the knowledge base “sitting behind the linking service,” Mischo and colleagues note that “keeping the metadata populating the database complete, such as accurate identifiers (ISSN, CODEN, ISBN, PubMedID, OAI), current target addresses (including URLs), and complete threshold information (full-text coverage), is critical to getting the user to the full-text resource in as succinct and efficient a manner as possible.”29 They go on to state that, “Without accurate ISSN, ISBN or other identifying numbers, the critical matching of citation information to data stored within [the] knowledgebase could not occur. Often, without the corresponding match point, the linking service displays no full-text results. There are times when this is inaccurate and full-text access is available.”30

Chen puts the blame squarely on content providers for sending inaccurate or incomplete title lists to knowledge bases and other serials management tools.31 He traces numerous link errors back to metadata deficiencies such as inaccurate title information, incorrect identifiers (ISSN, ISBN), incorrect coverage information, and embargo ambiguities, concluding that “Content providers need to realize the serious consequences of misinformation.”32 Donlan similarly bemoans the difficulty of obtaining accurate title lists from content providers, the time involved in getting the metadata corrected in the knowledge base, and general user frustration with broken links.33 She concludes that “all these problems illustrate how important it is that content providers create accurate metadata in order to generate the OpenURL.”34 In analyzing the erroneous links in their study, Wakimoto, Walker, and Dabbour found that “the vast majority of false negatives were the result of incorrectly reported holding information from database vendors.”35

A report commissioned by the United Kingdom Serials Group (UKSG) identified numerous incidences of compromised OpenURL linkage resulting from inaccurate, incomplete, and inconsistent metadata from content providers.36 The report underscored the significance of the knowledge base in OpenURL linking, noting that “it is essential that the data residing in knowledge bases is current, accurate and reliable if users are to discover and access the content that is selected and acquired for them by librarians.”37 Furthermore, the report noted a lack of understanding among some content providers of the importance of accurate metadata and, specifically, the significance of the data they send to knowledge bases, which feed the link resolvers, which in turn drive traffic to content. In concluding, the UKSG report called for the development of a “code of practice” in the knowledge base supply chain because “at the end of the day, libraries are depending on the data provided to offer a reliable service to their patrons.”38

These examples show that links fail when the metadata that fuel the OpenURL linking process are of poor quality. Accurate metadata are necessary for OpenURL linking to work. This fact exposes a limitation of the OpenURL framework. Perhaps the reason so many links fail is that the OpenURL model assumed the “metadata embedded in the OpenURLs would be inherently consistent and accurate.”39 As the examples above show, this is not always the case. KBART and IOTA, explained in detail below, focus on metadata deficiencies that have kept the OpenURL framework from reaching its full potential of providing seamless reference linking to licensed electronic content. Specifically, IOTA is working on ways to decrease source URL errors and KBART is working on ways to decrease link failures because of knowledge base inaccuracies.

Provider Website Problems

OpenURL link failures are not the only cause of failed access to licensed electronic content. Another kind of failed access relates to the way journal titles are presented on provider websites (publisher websites, electronic journal websites). In such cases, links resolve, but bring users to a webpage that is so confusing they cannot find what they are seeking. One particularly acute problem relates to the practice of listing former journal titles under the current, newer title. Hawkins and colleagues give the example of a student who finds a citation for a 1922 article in the American Journal of Hygiene.40 Further clicking brings the student to the webpage for the American Journal of Epidemiology. After a somewhat Kafkaesque journey through cyberland, the user eventually discovers that the American Journal of Epidemiology was published under the title American Journal of Hygiene before 1965. The 1922 article was available, but it was listed under the (nonexistent) 1922 volume of the American Journal of Epidemiology. This provider essentially ignored the previous journal title and placed all content under the newer title. This practice causes failed access. Reynolds and Hepfer give a similar example of user search difficulties and note that, “unless journal websites list all the titles under which content was published, user access to desired content is considerably diminished.”41 They argue further that no one wins in this situation: “not the library, the publisher, the vendor, and certainly not the researcher!”42

Cole touches on the difficulty of dealing with journal title changes in libraries, the complexity and continuous changes of the cataloging rules, and how these issues relate to the representation of title history in the electronic environment.43 He concludes that while they need not follow the catalog code, “Publishers and aggregators need to provide access to both the older and newer titles of serials that have changed titles… . What is important is the provision of access for the end user.”44

Publishers and other content providers may be unaware of the confusion and access barriers they cause by listing former titles on the webpage of the current title. From a marketing or design point of view, placing all the content under the current title may seem to be a “simpler and more elegant arrangement than breaking the content into the various pieces that placing it under multiple changed titles might entail.”45 Indeed, “In a publishing environment it makes sense that the focus for promotion and Web site design is on current titles and products.”46 While this may be true, it is problematic for the researcher. Citations to articles will refer to the journal title that was in effect when the article was published. A researcher has no way of knowing from the citation that the journal has since changed titles. This means that a researcher looking for an article that appeared under a former journal title will look for the article under the former title. If the researcher cannot find content listed under the former journal title, access is compromised.

In conclusion, while listing all content of a journal under its current title may seem to be a convenient way to provide content to users, this practice essentially ignores previous titles, buries them within the website, and ultimately causes failed access to content. PIE-J, explained in detail below, addresses this issue and is working on a best practices document for the presentation of journal titles on content provider websites.


Initiatives Addressing Failed Access

Having highlighted the access problems that KBART, IOTA, and PIE-J were created to improve, this paper will now move to a detailed explanation of each initiative. The initiatives work under the auspices of national organizations. KBART is a joint initiative between UKSG and NISO, while IOTA and PIE-J work solely under NISO. UKSG is a British organization that “exists to connect the information community and encourage the exchange of ideas on scholarly communication,” and NISO is an American standards organization that “identifies, develops, maintains, and publishes technical standards to manage information in our changing and ever-more digital environment.”47 Both organizations encourage collaboration between all sectors of the information community (content providers, libraries, software developers). Consistent with the goals and principles of these two organizations, the working group of each initiative consists of broad stakeholder representation (publishers, platform providers, aggregators, knowledge base vendors, librarians, etc.).

KBART

The creation of KBART in January 2008 was a direct result of the findings of the UKSG report Link Resolvers and the Serials Supply Chain.48 As mentioned earlier, the report underscored the significance of the knowledge base in OpenURL linking and noted a need for education about how the data provided by content providers to knowledge bases directly affect the efficiency of OpenURL linking. The report found numerous linkage errors caused by inadequate data sent from content providers and noted that no standard guidelines existed for data transfer from content providers to knowledge base vendors. KBART was formed to remedy this situation.

KBART was specifically charged with improving “the supply of data to link resolvers and knowledge bases, in order to improve the efficiency and effectiveness of OpenURL linking.”49 Since its creation, KBART has worked to alleviate “problems in the information supply chain that relate to the data supplied to knowledge bases.”50 This is a very specific goal that deals directly with the OpenURL linkage problems described earlier. By recommending best practices for the accurate and timely exchange of holdings metadata from content providers to knowledge bases, KBART strives to improve OpenURL linking and decrease the incidences of failed access.

The KBART working group completed phase 1 of its work in January 2010 with the publication of KBART: Knowledge Bases and Related Tools: A Recommended Practice of the National Information Standards Organization (NISO) and UKSG.51 The publication contains specific guidelines and instructions for enabling the accurate and timely exchange of holdings metadata from content providers to knowledge base developers. The guidelines are designed to be intuitive and easy to implement; KBART hopes that “by making some small adjustments to the format of their title lists, content providers can greatly increase the accessibility of their products,” libraries can enjoy a higher return on their investment, and users will experience fewer link failures.52

The KBART report encourages content providers to include sixteen specific fields as columns in a tab-separated metadata file and, for consistency, to use the field labels specified in the report (see table 1). In deciding on these sixteen elements, the goal was to “collect only the information that is most useful, rather than a large number of fields that become too overwhelming for content providers to support.”53 The recommendations address common metadata problems such as the reuse of ISSNs; title inconsistencies (misspellings, the incorrect use of former or subsequent titles); inaccurate or outdated coverage dates; inconsistent date and enumeration formats; inaccurate, inconsistent or missing coverage descriptions (e.g., abstracts, selected full text, exclusion of graphics); and embargo period ambiguities. The report also includes recommendations for metadata file naming as well as the method and frequency of data transfer.
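
Producing such a file requires little effort. The Python sketch below writes a single, invented journal record using the sixteen KBART field labels listed in table 1; blank last-issue fields indicate coverage to the present, as the recommendations specify. The file name shown is only a placeholder and does not follow KBART's file-naming convention.

import csv

# The sixteen KBART field labels recommended in the phase 1 report (see table 1).
KBART_FIELDS = [
    "publication_title", "print_identifier", "online_identifier",
    "date_first_issue_online", "num_first_vol_online", "num_first_issue_online",
    "date_last_issue_online", "num_last_vol_online", "num_last_issue_online",
    "title_url", "first_author", "title_id", "embargo_info",
    "coverage_depth", "coverage_notes", "publisher_name",
]

# One invented journal record; the blank "last issue" fields mean coverage to present.
record = {
    "publication_title": "Journal of Example Studies",
    "print_identifier": "1234-5678",
    "online_identifier": "9876-5432",
    "date_first_issue_online": "1995-01-01",
    "num_first_vol_online": "1",
    "num_first_issue_online": "1",
    "title_url": "http://provider.example.com/1234-5678",
    "title_id": "JES",
    "coverage_depth": "fulltext",
    "publisher_name": "Example Press",
}

# Write a tab-separated title list with the recommended column headers.
with open("example_title_list.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=KBART_FIELDS, delimiter="\t", restval="")
    writer.writeheader()
    writer.writerow(record)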

KBART offers content providers a simple metadata exchange format that is easy to follow, easy to implement, and easy for knowledge base developers to process. While many content providers already successfully exchange metadata, others are unsure how best to proceed. KBART offers “entry-level guidelines and instructions” for the timely and accurate exchange of essential holdings metadata.54 The benefits of adopting the KBART best practices span the entire electronic serials supply chain: content providers enjoy a reduction in cost of their customer service, an improved reputation, and increased traffic to their content; knowledge base developers spend less time retrieving missing metadata and reformatting data into a single normalized format; and libraries benefit by “maximizing the usage (and therefore the return on investment) of the content they license, and [improving] the experience and success rate of their users as they navigate the research network.”55

While KBART's phase 1 work focused on metadata exchanges for journals, phase 2 focuses on more advanced, complex issues such as metadata for consortia, open access content, e-books, and conference proceedings.56 Phase 2 also includes work on an information portal that will provide educational resources such as background information on the OpenURL and the serials supply chain, “how to” guides, and selected links to pertinent literature. KBART continues a robust outreach effort to educate and inform the community, and to increase the number of publishers that adopt the practices recommended in the phase 1 report. At the time of this writing, forty-seven publishers and organizations have endorsed KBART.57

IOTA

The IOTA working group was formed by NISO in January 2010 “to investigate the feasibility of creating industry-wide, transparent and scalable metrics for evaluating and comparing the quality of OpenURL implementations across content providers.”58 Like KBART, IOTA is concerned with link failures resulting from problems with the metadata that fuel the OpenURL. IOTA's goal is to measure source OpenURL quality across content providers to pinpoint problematic areas that can then be the focus of improvement efforts. By using metrics to automatically and systematically evaluate OpenURLs, IOTA strives to supply objective, empirical data on exactly where metadata problems exist so that content providers can efficiently and effectively target efforts to improve OpenURL linking.

IOTA has its origin in the study by Chandler referenced earlier.59 As mentioned above, Chandler manually reviewed a sample set of OpenURLs and found numerous typical metadata problems. These kinds of problems cause links to fail. Fixing these problems would increase successful OpenURL linkage, instantly increasing users’ access to licensed content (and decreasing their frustration with failed access). By systematically and objectively identifying precise areas of metadata deficiencies, IOTA hopes to “inform vendors about where to make improvements to their OpenURL strings so that the maximum number of OpenURL requests resolve to a correct record.”60

As of July 2011, the IOTA OpenURL reporting system contained more than 15 million OpenURLs from fifteen institutions and content providers.61 The reporting system analyzes the element frequency and patterns contained within OpenURL strings. Users can run reports that show, among other things, which elements (e.g., article title, ISSN, etc.) are present in the OpenURLs. Such a report could, for example, reveal that the OpenURLs from a particular content provider, “provider X,” do not contain a particular OpenURL element such as “spage” (the start page number for the item). This can be compared to other providers. If a high percentage of other providers include the element, its absence could be a cause of unsuccessful OpenURL links from provider X. Another kind of report can show element patterns, such as the format used for the date of an item. Date formats vary from four-digit year formats (2011) to formats that indicate the month with or without the day and with or without hyphens (e.g., 2011-06, 2011-06-20, 20110620). The IOTA reporting system can analyze the OpenURLs and show, for example, that provider X uses the date format YYYY-MM-DD (2011-06-20), while most providers use the year only (2011). This may or may not be the cause of OpenURL link failures, but equipped with this knowledge, provider X could focus its attention on this area, make any necessary changes, and improve OpenURL linkage (and thus traffic) to its content in a cost-effective way. In short, such reports allow vendors to see weaknesses in their source OpenURL strings, make targeted improvements, and thus increase access to their content (and decrease broken links).
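
The following Python sketch illustrates, in a highly simplified way, the two kinds of analysis just described: element frequency and date-format patterns across a set of logged source OpenURLs. It is not IOTA's reporting system, and the sample OpenURLs are invented.

import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

DATE_PATTERNS = [("YYYY", r"\d{4}"), ("YYYY-MM", r"\d{4}-\d{2}"),
                 ("YYYY-MM-DD", r"\d{4}-\d{2}-\d{2}"), ("YYYYMMDD", r"\d{8}")]

def openurl_report(openurls):
    """Count how often each element appears and which date formats are used."""
    elements, date_formats = Counter(), Counter()
    for url in openurls:
        meta = {key: values[0] for key, values in parse_qs(urlparse(url).query).items()}
        elements.update(meta.keys())
        date = meta.get("date", "")
        if date:
            label = next((name for name, pattern in DATE_PATTERNS
                          if re.fullmatch(pattern, date)), "other")
            date_formats[label] += 1
    total = len(openurls)
    frequency = {element: count / total for element, count in elements.items()}
    return frequency, dict(date_formats)

# Two invented source OpenURLs: only the second includes spage, and the two use
# different date formats.
sample = [
    "http://resolver.example.com/?sid=ProviderX&issn=1234-5678&volume=12&issue=3&date=2011-06-20",
    "http://resolver.example.com/?sid=ProviderY&issn=1234-5678&volume=12&issue=3&spage=45&date=2011",
]
frequency, date_formats = openurl_report(sample)
print(frequency)     # e.g., "spage" appears in only 50 percent of the sample
print(date_formats)  # e.g., {"YYYY-MM-DD": 1, "YYYY": 1}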

KBART and IOTA are both working to decrease OpenURL link failures that are caused by metadata deficiencies. IOTA works to decrease source URL errors by analyzing the data that enters the OpenURL chain from the source citation, while KBART focuses on decreasing link failures because of knowledge base inaccuracies by improving the flow and accuracy of the metadata that content providers send to knowledge bases.

PIE-J

PIE-J differs from KBART and IOTA because it is not focused on link resolver errors. Formed by NISO in 2010, PIE-J addresses access barriers that arise from the manner in which electronic journals are presented on provider websites. PIE-J's official charge is

to develop a Recommended Practice that will provide guidance on the presentation and identification of e-journals, particularly in the areas of title presentation and bibliographic history, accurate use of the ISSN, and citation practice, that will assist publishers, platform providers, abstracting and indexing services, knowledgebase providers, aggregators, and other concerned parties in facilitating online discovery, identification, and access for the publications.62

Expected to be published in the first half of 2012, the recommended practice will specifically address the issues of varying titles for different formats, accurate title history information, citation practices, and accurate use of ISSNs. The ultimate goal is to ensure that electronic content can be reliably discovered, cited, and accessed over time.

As described earlier, provider websites sometimes lead users down a confusing path by placing content that was published under former journal titles together with content published under the current title. This practice impedes access when users search for content using historically correct citations from abstracting and indexing services, bibliographies, published works, and other research tools. These citations use the title a journal carried at the time the particular article was published. Users need to be able to find the article with the citation in hand. For this to happen, electronic journal websites must accurately and uniformly present all the titles under which content was published. In short, content providers should present former journal titles “with enough prominence on the website to be easily visible and well enough indexed to be accessible via a search engine.”63

Insufficient identification of former titles also can affect current citation practices, another issue PIE-J is addressing. Many journal websites today offer online citation tools that purport to generate accurate article citations. When older content is placed on websites under the newer title, the citation tool often generates a citation using the journal's current title. This is not correct and, left unattended, will impede future access.
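
A citation tool can only produce a historically correct title if the provider exposes the journal's title history. The small Python sketch below illustrates the lookup, using the American Journal of Hygiene / American Journal of Epidemiology example discussed earlier; the data structure is invented and the starting year is approximate.

# Title history for the example discussed earlier: the journal was published as
# American Journal of Hygiene before 1965 and as American Journal of Epidemiology
# from 1965 on. (Illustrative data structure, not a provider API.)
TITLE_HISTORY = [
    {"title": "American Journal of Hygiene", "first_year": 1921, "last_year": 1964},
    {"title": "American Journal of Epidemiology", "first_year": 1965, "last_year": None},
]

def title_for_year(history, year):
    """Return the title under which a given volume year was actually published."""
    for entry in history:
        last_year = entry["last_year"] or year  # None marks the current title
        if entry["first_year"] <= year <= last_year:
            return entry["title"]
    raise ValueError("no title in the history covers year %d" % year)

# A citation tool using this lookup would cite the 1922 article correctly.
print(title_for_year(TITLE_HISTORY, 1922))  # American Journal of Hygiene
print(title_for_year(TITLE_HISTORY, 2010))  # American Journal of Epidemiology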

In the absence of any standards or guidelines for the presentation and identification of electronic journals on websites, information regarding title history, title variation, and ISSN history is not always unambiguously supplied. PIE-J's goal is to review the problem and provide guidelines, in the form of a set of NISO-recommended practices, on how providers can best mount title history, including ISSN history, on their websites to facilitate identification, access, and reliable citation practices over time. While PIE-J does not address OpenURL linkage issues, its recommendations on the accurate use of ISSNs would, if widely adopted, improve linking.


Next Steps

This paper highlights three industry initiatives that are working on solutions to specific, known causes of electronic access failure. The initiatives are supported by national organizations in the United States and United Kingdom (NISO and UKSG) and the working groups consist of representatives from all areas of the electronic serials supply chain (publishers, other content providers, knowledge base vendors, librarians, etc.). This cooperative effort indicates that stakeholders throughout the electronic serials supply chain take these issues seriously and are willing to work together toward the common goal of improving access to licensed electronic content. This is good news, because “it is crucial that the effort to develop best practices has the support and buy-in of publishers, content providers, and librarians.”64

Support and buy-in of all constituencies must continue. Armed with the knowledge of exactly what KBART, IOTA, and PIE-J do, librarians, content providers, knowledge base vendors, e-journal website designers, and others can communicate effectively about electronic access, make improvements on their end, educate others, and advocate for support or endorsement of the initiatives. Librarians should ensure that only licensed or free content is activated in their institution's knowledge base. Publishers and other content providers are encouraged to adopt the KBART best practices and remain informed about the forthcoming PIE-J best practices document. All are encouraged to add their log files to IOTA and use IOTA to check their links and those of other vendors and institutions. Only with wide adoption will these efforts effect real change.

Another cause of failed access is being discussed, though it is not yet being formally addressed. It concerns the third cause of OpenURL link failures, identified by Trainor and Price as target URL translation errors.65 These are errors that occur when the inbound or target OpenURL link does not resolve at the target (the publisher website, for example). This is the last stage in the chain of OpenURL linking. At the time of writing, working group members from IOTA and KBART were discussing a joint project to address this third area of OpenURL errors.66 This is an area to watch in the future.


Conclusion

This paper described three industry initiatives aimed at improving access to licensed electronic content. By explaining exactly what KBART, IOTA, and PIE-J do in the context of the access problems they were created to solve, this paper strives to help those struggling with access issues and encourage all affected parties (librarians, content providers, knowledge base vendors, etc.) to work toward the common goal of improving access to licensed electronic content.

KBART and IOTA focus on metadata inaccuracies that affect the efficacy of OpenURL linking. Their identification of metadata deficiencies as a cause of OpenURL link failure exposes a limitation of the OpenURL framework and explains, at least in part, why so many OpenURL links still fail. Dynamic linking requires accurate and consistent metadata to function to its full capacity. In other words, a direct relationship exists between metadata quality and link failures: when the metadata quality is low, links fail, either completely or partially. Improve the quality of the metadata that fuel the OpenURL process and linking will improve as a result. IOTA is working on improving the metadata that is sent from the source citation (the first stage of OpenURL linking) and KBART is focusing on the importance of accurate metadata within the knowledge base (the second stage of OpenURL linking). While these are not the only causes of OpenURL link errors, addressing the metadata inadequacies in these areas will result in more successful OpenURLs and less failed access. As Stevenson and Hutchens write, “Users simply want systems that work.”67 KBART and IOTA are working toward the goal of making the OpenURL framework work better.

PIE-J does not directly address linkage problems but rather focuses on access issues related to the way in which electronic journal titles are presented and identified on provider websites. PIE-J is concerned with the frequent content provider practice of placing content published under former journal titles under the current title. When journal title and ISSN history are not clearly presented, users’ ability to find and access what they seek is diminished. PIE-J is working toward the creation of a NISO-recommended practice that will guide providers on how best to present journal title and ISSN information on websites to facilitate identification and thus successful access to licensed electronic journal content. Although PIE-J does not directly address linking problems, because the ISSN is such an important element in successful OpenURL linking, improved use of ISSNs on electronic journal websites will increase the number of successful links to those websites.

All parties stand to benefit from the work being done by KBART, IOTA, and PIE-J: users get better service, librarians get a better return on their investment, and content providers get more traffic to their content, which leads to increased usage (a criterion often used in library purchasing decisions) and a better reputation. This is a win-win situation for all. The issues surrounding these initiatives are complex but important. Their exact goals and purposes are different but related. Their work is complementary. All three share the goal of increasing successful access to licensed electronic content, and together they hold great promise for a future of fewer broken links and more successful access.


References and Notes
1. Herbert Van de Sompel and Oren Beit-Arie, “Open Linking in the Scholarly Information Environment Using OpenURL Framework,” D-Lib Magazine 7, no. 3 (2001), www.dlib.org/dlib/march01/vandesompel/03vandesompel.html (accessed May 30, 2011).
2. National Information Standards Organization, “ANSI/NISO Z39.88-2004 (R2010) The OpenURL Framework for Context-Sensitive Services,” www.niso.org/apps/group_public/project/details.php?project_id=82 (accessed June 22, 2011).
3. Oren Beit-Arie et al., “Linking to the Appropriate Copy: Report of a DOI-Based Prototype,” D-Lib Magazine 7, no. 9 (2001), www.dlib.org/dlib/september01/caplan/09caplan.html (accessed May 30, 2011).
4. NISO/UKSG KBART Working Group, KBART: Knowledge Bases and Related Tools: A Recommended Practice of the National Information Standards Organization (NISO) and UKSG, NISO-RP-9-2010 (Baltimore, Md.: NISO, 2010): 3, www.uksg.org/sites/uksg.org/files/KBART_Phase_I_Recommended_Practice.pdf (accessed June 1, 2011).
5. Van de Sompel and Beit-Arie, “Open Linking in the Scholarly Information Environment.”
6. Beit-Arie et al., “Linking to the Appropriate Copy.”
7. CrossRef, “Info for Libraries,” www.crossref.org/03libraries/index.html (accessed Sept. 8, 2011).
8. Beit-Arie et al., “Linking to the Appropriate Copy.”
9. NISO/UKSG KBART Working Group, KBART: Knowledge Bases and Related Tools, 3.
10. Beit-Arie et al., “Linking to the Appropriate Copy.”
11. The DOI System, “Local Content Servers: The ‘Appropriate Copy’ Problem,” www.doi.org/doi_proxy/appropriate_copy.html (accessed Sept. 8, 2011).
12. Rafal Kasprowski, “Best Practices and Standardization Initiatives for Managing Electronic Resources,” Bulletin of the American Society for Information Science & Technology 35, no. 1 (2008): 13.
13. Ibid., 13. Figure used with permission.
14. NISO/UKSG KBART Working Group, KBART: Knowledge Bases and Related Tools, 2.
15. Ibid., 2.
16. Peter McCracken and Michael A. Arthur, “KBART: Best Practices in Knowledgebase Data Transfer,” Serials Librarian 56 (2009): 230–35.
17. NISO/UKSG KBART Working Group, KBART: Knowledge Bases and Related Tools.
18. Cindi Trainor and Jason Price, “Rethinking Library Linking: Breathing New Life into OpenURL,” Library Technology Reports 46, no. 7 (2010).
19. Jina Choi Wakimoto, David S. Walker, and Katherine S. Dabbour, “The Myths and Realities of SFX in Academic Libraries,” Journal of Academic Librarianship 32, no. 2 (2006): 133.
20. Trainor and Price, “Rethinking Library Linking,” 8.
21. Peter McCracken and Kristina Womack, “KBART: Improving Access to Electronic Resources through Better Linking,” Serials Librarian 58 (2010): 232–39.
22. Trainor and Price, “Rethinking Library Linking,” 16.
23. Ibid.
24. Ibid.
25. Adam Chandler, “Results of L'Année philologique online OpenURL Quality Investigation, Mellon Planning Grant Final Report, February 2009,” http://metadata.library.cornell.edu/oq/files/200902%20lannee-mellonreport-openurlquality-final.pdf (accessed Apr. 1, 2011).
26. Ibid., 2.
27. NISO/UKSG KBART Working Group, KBART: Knowledge Bases and Related Tools.
28. Ross MacIntyre and Lisa S. Blackwell, “Industry Initiatives: What You Need to Know,” Serials Librarian 60 (2011): 189.
29. William H. Mischo et al., “The Growth of Electronic Journals in Libraries: Access and Management Issues and Solutions,” Science & Technology Libraries 26, no. 3/4 (2006): 45.
30. Ibid.
31. Xiaotian Chen, “Assessment of Full-Text Sources Used by Serials Management Systems, OpenURL Link Resolvers, and Imported E-journal MARC Records,” Online Information Review 28, no. 6 (2004): 428–34.
32. Ibid., 434.
33. Rebecca Donlan, “Boulevard of Broken Links: Keeping Users Connected to E-Journal Content,” Library Collections & Technical Services 48, no. 1 (2007): 99–104.
34. Ibid., 102.
35. Wakimoto, Walker, and Dabbour, “The Myths and Realities of SFX in Academic Libraries,” 133.
36. James Culling, Link Resolvers and the Serials Supply Chain (Oxford: Scholarly Information Strategies, 2007), www.uksg.org/projects/linkfinal (accessed June 1, 2011).
37. Ibid., 12.
38. Ibid., 28.
39. Adam Chandler, Glen Wiley, and Jim LeBlanc, “Towards Transparent and Scalable OpenURL Quality Metrics,” D-Lib Magazine 17, no. 3/4 (2011): 2, www.dlib.org/dlib/march11/chandler/03chandler.html (accessed Aug. 24, 2011).
40. Les Hawkins et al., “Journal Title Display and Citation Practices,” Serials Librarian 56 (2009): 271–81.
41. Regina Romano Reynolds and Cindy Hepfer, “In Search of Best Practices for the Presentation of E-Journals,” Information Standards Quarterly 21, no. 2 (2009): 20.
42. Ibid.
43. Jim Cole, “E-Ventures: Notes and Reflections from the E-serials Field,” Serials Librarian 55, no. 1/2 (2008): 45–58.
44. Ibid., 56.
45. Reynolds and Hepfer, “In Search of Best Practices for the Presentation of E-Journals,” 21.
46. Les Hawkins, “Best Practices for Presentation of E-journal Titles on Provider Web Sites and in Other E-Content Products,” Serials Review 35 (2009): 168.
47. United Kingdom Serials Group, “Welcome to UKSG,” www.uksg.org (accessed June 22, 2011); National Information Standards Organization, “About NISO,” www.niso.org/about (accessed June 22, 2011).
48. Culling, Link Resolvers and the Serials Supply Chain.
49. NISO/UKSG KBART Working Group, KBART: Knowledge Bases and Related Tools: A Recommended Practice, iii.
50. Ibid.
51. Ibid.
52. Ibid., 1.
53. McCracken and Womack, “KBART: Improving Access to Electronic Resources,” 237–38.
54. NISO/UKSG KBART Working Group, KBART: Knowledge Bases and Related Tools: A Recommended Practice, 12.
55. Ibid., 7–8.
56. National Information Standards Organization, “Knowledge Base and Related Tools, KBART Phase II,” www.niso.org/workrooms/kbart (accessed May 22, 2011).
57. National Information Standards Organization, “KBART: Endorsement,” www.niso.org/workrooms/kbart/endorsement (accessed May 22, 2011).
58. National Information Standards Organization, “IOTA: Improving OpenURLs through Analytics,” www.niso.org/workrooms/openurlquality (accessed June 1, 2011).
59. Chandler, “Results of L'Année philologique online OpenURL Quality Investigation.”
60. National Information Standards Organization, “What is IOTA?” http://openurlquality.niso.org (accessed June 1, 2011).
61. The IOTA report providing vendors with information about where to make improvements to their OpenURL strings is updated regularly and can be generated using the “run report” function at http://openurlquality.niso.org.
62. National Information Standards Organization, “Recommended Practices for the Presentation and Identification of E-Journals (PIE-J),” www.niso.org/workrooms/piej (accessed June 1, 2011).
63. Reynolds and Hepfer, “In Search of Best Practices for the Presentation of E-Journals,” 22.
64. Hawkins, “Best Practices for Presentation of E-journal Titles on Provider Web Sites,” 168.
65. Trainor and Price, “Rethinking Library Linking.”
66. Adam Chandler, “NISO IOTA: Improving OpenURLs Through Analytics,” Against the Grain 23, no. 1 (2011): 30–33.
67. Liz Stevenson and Chad Hutchens, “KBART—How It Will Benefit Libraries and Users,” Against the Grain 23, no. 1 (2011): 28.

Figures

Figure 1

Overview of OpenURL linking

Source: Adapted with permission from both Rafal Kasprowski, “NISO's IOTA Initiative: Measuring the Quality of OpenURL Links” (presentation, North American Serials Interest Group Annual Conference, St. Louis, Missouri, June 2–5, 2011), www.slideshare.net/rkaspro/iota-nasig-2011-measuring-the-quality-of-openurl-links (accessed June 1, 2011), and James Culling, Link Resolvers and the Serials Supply Chain: Final Report for UKSG (Oxford: Scholarly Information Strategies, 2007), www.uksg.org/projects/linkfinal (accessed June 1, 2011).



Tables
Table 1

KBART's Field Name Recommendations for Metadata Transfer from Content Providers to Knowledge Bases


Field Title Description
publication_title Publication title
print_identifier Print-format identifier (i.e., ISSN, ISBN, etc.)
online_identifier Online-format identifier (i.e., eISSN, eISBN, etc.)
date_first_issue_online Date of first issue available online
num_first_vol_online Number of first volume available online
num_first_issue_online Number of first issue available online
date_last_issue_online Date of last issue available online (or blank, if coverage is to present)
num_last_vol_online Number of last volume available online (or blank, if coverage is to present)
num_last_issue_online Number of last issue available online (or blank, if coverage is to present)
title_url Title-level URL
first_author First author (for monographs)
title_id Title ID
embargo_info Embargo information
coverage_depth Coverage depth (e.g., abstracts or full text)
coverage_notes Coverage notes
publisher_name Publisher name (if not given in the file's title)

Source: UKSG, KBART 5.3.2.1: Data Fields and Labels, www.uksg.org/kbart/s5/guidelines/data_field_labels (accessed June 1, 2011).


