Chapter 5. The Current Landscape

Narrative descriptions of where libraries want to be relative to the reader’s experience of searching on the web are difficult, if not impossible, to find, while detailed descriptions of what some libraries are doing with web technologies are abundant. In other words, libraries are investing significantly in some dimensions of the technology, but the community’s goal of, and commitment to, the convenience of the reader is not being articulated.

There are several important moments in the movement toward change in library catalogs. One milestone was Roy Tennant’s 2002 Library Journal article “MARC Must Die,” which argued that MARC, the community’s current data carrier, could be replaced by more modern carriers designed in the age of the web.1 A more extensive treatment of the issue, and an argument for moving away from the machine-readable cataloging systems originally developed in the 1960s, appears in the report of the Library of Congress’s Working Group on the Future of Bibliographic Control, which published its recommendations in 2008. It wrote, “The library community’s data carrier, MARC, is based on forty-year old techniques for data management and is out of step with programming styles of today.”2

The Working Group’s charge was not specifically to solve the problem of raising the visibility of libraries on the web, but its work became the springboard for the central initiative around a movement in libraries to make their data more web-accessible. This became the Bibliographic Framework Initiative, and it used the Working Group’s report as a base and inspiration.

The Bibliographic Framework Initiative (BIBFRAME)

The Library of Congress activity called BIBFRAME declares in its mission the goal to enable better expression of bibliographic data on the web. Its website describes it this way: “BIBFRAME provides a foundation for the future of bibliographic description, both on the web, and in the broader networked world.”3 In practice, the work is primarily focused on the process of replacing the current MARC standard for exchanging bibliographic data between library systems. The inspiration from the Working Group report to modernize the “community’s data carrier” is very much alive in the work of the Library of Congress staff. The mission of the initiative makes that drive explicit by declaring that BIBFRAME is “a replacement for MARC” and that “a major focus of the initiative will be to determine a transition path for the MARC21 formats while preserving a robust data exchange that has supported resource sharing and cataloging cost savings in recent decades.”4

The language of the BIBFRAME mission statement and the work itself continue the tradition of seeking greater efficiency in data exchange and management.

Beacher Wiggins, the Director for Acquisitions and Bibliographic Access at the Library of Congress, extends the mission and goals to a broader purpose, saying that web visibility for library collections is “one of the topmost desires of BIBFRAME.”5 His decades of experience describing the LC’s collections provide the kind of intimacy with those collections, and awe for their depth, that leads him to describe them as an “incredibly valuable part of the nation’s intellectual and cultural patrimony.”6 However, he cautions, “There is a dormancy to the content and we render it less valuable if we don’t have ready access to it.”7 The LC’s primary mission is to serve its funder, the United States Congress, but it has long held a position of leadership in data exchange standards and the production of high-quality data to be shared among all US libraries. Given that tradition and the technical assets the LC has today, there is a natural inclination toward a focus on replacing the data exchange infrastructure.

The work of the BIBFRAME initiative is focused on creating what specialists call a vocabulary for expressing bibliographic data. The LC is also engaged in a pilot to experiment with creating BIBFRAME-native data, in parallel with the existing workflows for creating traditional MARC21 data. The goal of the pilot is to test the data creation and management tools the LC has built as part of the BIBFRAME project.

While BIBFRAME’s mission and activities do not explicitly address the convenience of the reader, BIBFRAME does have a role in contributing to some of the best practices for playing by the rules of the web—specifically, the rules around the Knowledge Card component of search engine results. Given Richard Wallis’s suggestion, mentioned in chapter 2, that “semantic properties will prove more fruitful and effective than simple words,”8 it is important to express those properties in a way that the web will recognize and reward. BIBFRAME is therefore a vocabulary for libraries to express their collections on the web in a way that is generally consistent with Semantic Web best practices. Jeff Penka, Vice President for Product Management at Zepheira, the consulting company that contracted with the Library of Congress on the first version of the vocabulary, has described it as “an industry standard for libraries that can be projected into the meaningful vocabularies on the web.”9 This doesn’t mean that BIBFRAME itself is not meaningful; it means that libraries are declaring their own dialect for expressing data on the web, a dialect that can be translated into the recommended languages of the web such as schema.org. The quality of the dialect will be measured by how well it can be translated without loss of meaning or intent. That is a subtle and highly technical judgment, and it will be made over time.
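To make Penka’s notion of projecting the library dialect into web vocabularies concrete, here is a minimal sketch of such a translation. The input record layout, its field names, and the mapping function are simplified inventions for illustration, not the actual BIBFRAME ontology or a published crosswalk; only the target vocabulary, schema.org, is real.

```python
import json

# A simplified, hypothetical "library dialect" description of a work and
# one of its instances. These field names are illustrative only; they are
# not the real BIBFRAME classes and properties.
library_dialect_record = {
    "work": {
        "title": "The Sun Also Rises",
        "agent": {"name": "Hemingway, Ernest", "role": "author"},
    },
    "instance": {"publisher": "Scribner", "date": "1926"},
}

def to_schema_org(record):
    """Translate the library dialect into schema.org JSON-LD, the
    vocabulary the search engines recommend for webpages."""
    work = record["work"]
    instance = record["instance"]
    return {
        "@context": "https://schema.org",
        "@type": "Book",
        "name": work["title"],
        "author": {"@type": "Person", "name": work["agent"]["name"]},
        "publisher": {"@type": "Organization", "name": instance["publisher"]},
        "datePublished": instance["date"],
    }

print(json.dumps(to_schema_org(library_dialect_record), indent=2))
```

How much of the source description survives a mapping like this, without loss of meaning or intent, is exactly the measure of quality described above.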

Thinking back to the practices that the search engines promote for improved relevance of content, this is the right time to raise questions about the guidance that catalog librarians use for bibliographic description. Beacher Wiggins reports that “RDA is the content standard” for the creation of bibliographic data when using the BIBFRAME vocabulary.10

Resource Description and Access (RDA) provides guidance and instruction for catalog librarians. It tells them how to decide what a title is and whether they should be concerned about the punctuation included in the title and author information on the thing being cataloged. But it also contains a set of vocabularies that can be used to express bibliographic data in a Semantic Web context. The recent history of RDA shows a transition from a ruleset focused on the traditional activities of cataloging, limited by the logistical restrictions of cataloging on physical cards and concerned with details such as the transcription of text from the title page, to a framework of instructions and Semantic Web vocabularies for recording bibliographic data. The library metadata expert Diane Hillmann calls RDA “a coordinated set of vocabularies and guidance instructions capable of capturing the rich relationships of bibliographic entities.”11 According to Hillmann, because RDA is based on sophisticated models of entity relationships such as the Functional Requirements for Bibliographic Records (FRBR) and newer Semantic Web vocabularies, it produces data that can express rich relationships that allow discovery systems to “navigate the bibliographic space.”12

This model is a departure from the legacy Anglo-American Cataloguing Rules, but it has required significant revision to approach a standard that can guide catalog librarians in creating data optimized for exposure on the web. A sharp critique of the early release of RDA came from Mikael Nilsson of the Knowledge Management Research Group, Royal Institute of Technology, Stockholm, who said the rules are “stenographic conventions for constructing value strings.”13 The implication is that the ghosts of catalog card production are haunting the work that is meant to modernize bibliographic description. But precisely because of that critique and the devastating published criticisms by Hillmann and Karen Coyle in 2007,14 the body responsible for RDA has undertaken revisions. More recently, RDA as a whole has been described by Gordon Dunsire as “a package of data elements, guidelines and instructions for creating library and cultural heritage resource metadata that are well-formed according to international models for user-focussed linked data applications.”15 This is a positive trend, and the focus on the effectiveness of RDA in producing data optimized for web exposure should continue.

Library of Congress staff are engaged in a number of activities to develop and promote the BIBFRAME vocabulary among US libraries. LC staff can be seen at professional library conferences presenting the latest changes to the vocabulary and the LC’s plans for production implementation. Full production requires substantial retooling of the programs and methods used by the LC’s cataloging teams; this is a decades-old infrastructure with significant current investment. It will likely be a long process for the LC to switch from current systems to new systems based on the vocabulary. The LC has publicly made this commitment and regularly reports on its progress.

BIBFLOW

The BIBFLOW project, whose formal title is Reinventing Cataloging: Models for the Future of Library Operations, is centered at the University of California, Davis, and is funded to reinvent

cataloging and related workflows, in light of modern technology infrastructure such as the Web and new data models and formats such as Resource Description and Access (RDA) and BIBFRAME, the new encoding and exchange format in development by the Library of Congress. Our hypothesis is that, while these new standards and technologies are sorely needed to help the library community leverage the benefits and efficiencies that the Web has afforded other industries, we cannot adopt them in an environment constrained by complex workflows and interdependencies on a large ecosystem of data, software and service providers that are change resistant and motivated to continue with the current library standards (e.g. Anglo-American Cataloguing Rules . . . and MARC).16

This mission statement captures an energetic commitment to reinventing the workflows that provide the data that describes library collections. The project’s lead, Carl Stahmer, the Director of Digital Scholarship at UC Davis, is motivated to make library data more accessible on the web, saying, “Making library collection data play on the web is crucial.” He cautions his library colleagues against maintaining the status quo: “The idea that libraries can continue to operate as a silo alongside the open web is destructive.”17

The BIBFLOW approach to remodeling library data is sophisticated in the sense that the project leaders want to move beyond a simple statement of what is available in the library to create “relational and comparative systems that allow us to ask different questions about how library data sets are the same or how they are different.”18 They expect to achieve this through a “good push toward the semantic web.”19

On the question of reinventing rulesets like RDA so that descriptions of library collections are more in line with web practices, Stahmer reports that the BIBFLOW team is explicitly avoiding the “transcription fixation” of legacy description regimes.20 BIBFLOW has not created an alternative ruleset specifically tuned to the needs of optimized webpages, but the team is committed to experimentation to establish the “rule of the street.”21 The “rule of the street” is Stahmer’s principle of favoring techniques that get results on the web over historical commitments to legacy models.

On the question of optimizing web-based catalogs for web exposure, Stahmer reports that BIBFLOW rejects the idea of a monolithic discovery system in favor of an array of discovery systems dedicated to thematic collections and tuned to the students and scholars who need them to support their research.22 This is a utilitarian approach that has a very good chance of being rewarded by the search engines. It rejects the conventional thinking that massive aggregations of data will automatically attract attention from search engines and embraces the idea that high-quality data that draws traffic from affinity websites will be indexed and that its pages will stand a better chance of being relevant to web searches. Stahmer offers the hypothetical narrative that “a graduate student in Malaysia builds a system that connects one of our dedicated collections using open web standards and connects that data set to many other like-configured systems thereby creating the ‘best’ system for research and specific queries to the data.”23 This is a bright spot in the constellation of projects around visibility on the web and reflects a sophisticated understanding of the requirements of the web.

Linked Data for Libraries and Linked Data for Production

Philip Schreur from Stanford sets the tone for the two projects Linked Data for Libraries and Linked Data for Production when he says directly, “In the future we will be working on the web.”24 To this end, he paints a vision of a distributed network of data shared by like institutions with the express goal of making it more web-accessible. This means shared databases built on commonly understood schemas such as BIBFRAME, with contributions from multiple affinity institutions pursuing the common goal of representing a wide variety of library assets in a Semantic Web framework.

Schreur is experienced enough to know that the projects do not have a documented recipe for what a distributed data management landscape will look like. He describes this experimentation as a way to feel their way toward answering his question, “How will we work on the web in a distributed way?” while acknowledging immediately that “we will not be able to control it.”25 That last comment echoes Carl Stahmer’s expectation that the most effective data will be created under the “rule of the street.” In the ideal narrative, libraries will experiment with different models for describing their data, and the most effective ones will evolve into a community standard. That is the paradoxical value of loss of control and the rule of the street. It will be a culture shift for librarians, but the benefit is alignment with the web’s effectiveness at broadcasting content.

Linked Data for Libraries (LD4L) and Linked Data for Production (LD4P) are grant-funded collaborations between libraries with a mutual interest in reinventing their bibliographic infrastructure. The participating libraries are bellwether institutions with strong technical resources, deeply knowledgeable staff, and generous funding from the Mellon Foundation. The Linked Data for Libraries project has a two-year grant for just under $1 million. Because of the participation of three prestigious institutions—Cornell, Stanford, and Harvard—knowledgeable librarians are following their efforts and watching their communications for leadership and results.26

The results that the projects predict are highly technical. As with BIBFRAME and BIBFLOW, the focus is on infrastructure. The project website for LD4L declares that “the goal of the project is to create a Scholarly Resource Semantic Information Store (SRSIS) model”27 that describes a broad spectrum of library assets and follows the rules of the Semantic Web. A subpage of the project website states the larger ambition: “Our larger goal is to encourage libraries, archives, and cultural memory institutions to think much more broadly about using structured information about their scholarly information resources to make those resources more discoverable, accessible, and interconnected.”28 The goal, therefore, is to promote the use of Semantic Web technologies in the service of making a wide variety of things more discoverable.

The project doesn’t declare any specific goals relative to the convenience of the reader and search engine results. In a discussion of the question of improving the visibility of library collections on the web, Schreur says, “[At the beginning of the Bibliographic Framework Initiative] we were told that was the goal.”29 But he emphasizes that the LD4L and LD4P projects are “not just moving to the web”; they plan to “play by the rules of the web” in making a broadly defined set of their data accessible on the web.30

The projects are notable for their broad view of library assets. This group seems more keenly aware of the principle that academic library users are interested in a wide range of things to support research and learning. The inclusive language of “scholarly information resources” abstractly hints at it, but when you talk to project leaders, the enthusiasm for a broad definition of things that they are responsible for exposing is evident. Schreur’s enthusiasm for the mandate from Stanford University is infectious, and it is shared by his colleagues at Cornell University, who are building on their success of describing the universe of Cornell scholars in the VIVO system. Cornell’s VIVO project describes not just published things, but also includes durable descriptions of the persons who authored them.31 This positive feature of the project acknowledges that a definition of the library collection such as “books” is too narrow to satisfy the academic library reader.

During the period of active funding, the project expects to create several technical and infrastructural deliverables:32 an ontology and a management system for the discovery and updating of each institution’s assets. Notably, the system will allow import from a wide variety of local systems at each institution. These include the MARC-based library catalogs; local systems containing the institution’s knowledge of its researchers—a person’s scholarly outputs, awards, specialties, and so on; and pathfinder systems, the topic- and curriculum-based lists of resources, curated by subject specialists in the libraries, that serve students and scholars interested in a given topic. This willingness to convert a wide variety of inputs into data formats that are more readily exposable on the web reveals a commitment to a broad definition of discoverable things (a sketch of such an interconnected graph follows this discussion). Finally, for the convenience of specialists at other like-minded institutions, the project will deliver the technical infrastructure that allows institutions using the Project Hydra content management system to discover the data in the project’s main database.

On the question of redefining the rules for cataloging and web discovery to optimize pages and data for search engines, the commitment is similar to BIBFLOW’s. Schreur explains that the projects are moving away from an “emphasis on transcription” and must “play by the rules” of the web.33 He acknowledges that the current rulesets were built in an environment that was “designed to represent catalog cards,” when collation and exact transcription were paramount.34 Those requirements matter less now that the structure and semantics of the webpage are rewarded or punished by the search engines.
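A small sketch can picture the kind of interconnected graph such a store might hold, with a researcher profile, a catalog record, and a pathfinder entry pointing at one another. The example.edu identifiers, titles, and field choices are hypothetical, and the types and properties borrow from schema.org for readability; this is not the project’s SRSIS model or ontology.

```python
import json

# Hypothetical, simplified records of the kind LD4L aims to combine:
# a researcher profile, a catalog record, and a pathfinder entry.
researcher = {
    "@id": "https://example.edu/person/42",
    "@type": "Person",
    "name": "Jane Scholar",
    "affiliation": {"@type": "Organization", "name": "Example University"},
}

catalog_record = {
    "@id": "https://example.edu/work/1001",
    "@type": "CreativeWork",
    "name": "Soil Ecology of the Finger Lakes",
    "author": {"@id": researcher["@id"]},  # link to the researcher profile
}

pathfinder = {
    "@id": "https://example.edu/guide/soil-science",
    "@type": "CreativeWork",
    "name": "Soil Science Research Guide",
    "hasPart": [{"@id": catalog_record["@id"]}],  # curated list points at the work
}

# One graph interlinks people, works, and curated guides, so each node
# can be exposed and crawled as linked data rather than trapped in a silo.
graph = {
    "@context": "https://schema.org",
    "@graph": [researcher, catalog_record, pathfinder],
}
print(json.dumps(graph, indent=2))
```

The point of the sketch is only that persons, works, and pathfinders become addressable nodes in one graph; the actual deliverables describe this far more richly.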

The Linked Data for Production project is a collaboration between the LD4L libraries and other institutions that share a vision of a complete transformation of their technical processes. The current academic library processes for acquiring the data for traditional catalogs and the related databases describing persons, programs, pathfinders, and so on are generally optimized for legacy data formats designed either before the web existed or without any imperative to make the data discoverable on it. This is why institutions like the Library of Congress, Harvard, Stanford, Princeton, Columbia, and Cornell are participating in an effort to redesign and retool their technical processes. Once again, the focus here is on technical processing and the efficiency of the librarian’s workflow.

Integrated Library System Vendors and Bibliographic Utilities

Since the 1980s, US libraries have relied on a set of mostly commercial providers for their enterprise systems. These providers sell locally installed and cloud-hosted software that allows the library to efficiently manage its inventory, purchasing, and reporting systems. These systems also include a discovery layer that provides a view into the library’s inventory of books and journals. Libraries are now augmenting these systems with a free-standing discovery layer that exposes the traditional collection and the articles that are so critical to the reader.

Twice a year, American librarians gather for a professional conference that features a panel discussion on BIBFRAME implementation, with representatives from the library system vendors with the biggest market shares: Ex Libris, SirsiDynix, and Innovative Interfaces. The panels also include representatives from the Library of Congress, the library cooperative OCLC, and Zepheira. The content from the library system providers affords a good description of their commitment to enhancing the visibility of libraries on the web. That content generally falls into two categories: general support for the value of linked data and BIBFRAME, and a statement that changes to their systems will be considered in future roadmaps. The enthusiasm for linked data and BIBFRAME is genuine, but the roadmap specifics tend to be vague. There are some exceptions: the academic and public library vendor Innovative Interfaces highlights its partnership with Zepheira in providing BIBFRAME orientation to libraries (what it is and what experimental tools are available) and an explicit statement that it is committed to external partnerships over changes to the local system.

OCLC is the library cooperative that offers bibliographic data and a wide range of workflow and discovery services to libraries. Its research and data science arm, OCLC Research, has distinguished itself by its experiments with transforming legacy bibliographic data in MARC format into the kind of representations that are useful in an environment where libraries are playing by the rules of the web and using global identifiers. These global identifiers refer to the things that readers want to acquire from libraries, such as bibliographic works, and to the publication history of the persons who contribute to those works.

OCLC Research has produced a linked data representation of persons, defined here as identities or corporate bodies that have written, illustrated, edited, performed, translated, or otherwise adapted bibliographic entities. It has done this by joining the authoritative descriptions from national libraries and other important bibliographic agencies throughout the world, using big-data tools and skilled data scientists to process the data into a web-accessible graph of consumable descriptions of the authors, editors, translators, and others who contribute to bibliographic works. The result, created in deep collaboration with national libraries and other sources of authoritative data, is the Virtual International Authority File (VIAF), and the identifier included in the data is considered by experts in the library Semantic Web space to be the canonical identifier for persons. This status has been earned by OCLC Research’s active management of the data and the contributors’ reputation for careful data management and high quality standards. The identifiers are already used in the web-accessible data experiments produced by the national libraries of France, Sweden, and Spain. This data is potentially valuable because it contains authoritative descriptions of persons that can be used in local and global knowledge graphs for searching and for linking the bibliographic works that those persons created or contributed to.
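As a hedged illustration of how a canonical person identifier can be used, the sketch below embeds a VIAF-style URI in local descriptions of a person and a work. The record shapes use schema.org for readability, and the numeric identifier is a placeholder rather than an actual VIAF number.

```python
import json

# A person description that points at a VIAF-style URI as its canonical
# identifier (the number here is a placeholder, not a real VIAF record).
VIAF_URI = "http://viaf.org/viaf/000000000"

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Ernest Hemingway",
    "sameAs": VIAF_URI,  # canonical identifier shared across institutions
}

# Any bibliographic work can point at the same URI, so local and global
# knowledge graphs can link works by the same person unambiguously.
work = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "A Farewell to Arms",
    "author": {"@id": VIAF_URI},
}

print(json.dumps([person, work], indent=2))
```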

The business side of OCLC provides a range of applications and traditional bibliographic data to thousands of libraries worldwide. In addition to the existing WorldCat.org site, which allows crawlers to harvest its titles and uses schema.org markup,35 it is building a strategy for enhancing its metadata services infrastructure for a BIBFRAME future. Building on a foundation created by OCLC Research, it has begun augmenting WorldCat data, including processes to model it, assign URIs, and make it suitable for use in linked data contexts. When discussing production use of its linked data assets, John Chapman, Product Manager at OCLC, explains that OCLC wants “to prove the value of the data.”36 In the fall of 2015, OCLC announced a pilot project for a new tool that allows libraries to look up data about persons. This pilot service allows producers of data—including libraries and commercial partners—to enhance their content with authoritative data about persons who have contributed to bibliographic works. Chapman points out that these persons are not limited to creators and contributors, but extend to persons named as topics or subjects of resources. He says they plan to add article authors at some point and “are aware of the need to integrate article authors into the Persons data.”37 If there is uptake on services like the Person Entity lookup service, OCLC has the opportunity to provide data to thousands of libraries and to supply the canonical identifiers that the Semantic Web requires.

Chapman says OCLC is in close contact with the libraries in the BIBFLOW and Linked Data for Libraries projects and plans to “learn from these projects so we can draw some conclusions about efficient workflows for putting linked data to use.”38

Ex Libris, an integrated library system vendor serving academic libraries, has published its principles and roadmap related to workflows and discovery. Its published information indicates a mix of workflow changes and library catalog (discovery system) enhancements. It describes its “Key Elements of Linked Data for Ex Libris Roadmaps”:

The following principles related to linked data have helped shape the roadmap of the Alma resource management solution:
  • The use of linked-data format for loading and publishing bibliographic records.
  • URI support for cataloging and technical services: identifying “things” based on URIs instead of simple identifiers.
  • Access to linked data to enrich data displayed to staff in routine workflows.
  • Support for the BIBFRAME model and ontology as they mature.
The following principles have helped shape the roadmap of the Primo discovery and delivery solution:
  • Discovery of the underlying metadata and access to it via URIs.
  • The use of linked data by non-library applications.
  • The discovery system as the key interface to make data accessible to people and computers.
  • The use of RESTful APIs to provide support for applications based on linked data.39

Ex Libris’s detail on BIBFRAME relates very specifically to library workflows:

Alma will support both the export and the import of catalog records in BIBFRAME format. Thus Alma records will be part of BIBFRAME-based record workflows outside Alma. A new option will be added to the title-level export job, so existing MARC-based bibliographic records will be exportable in BIBFRAME format. Similarly, imported catalog records in BIBFRAME format will seamlessly become part of the Alma catalog, regardless of the format in which the catalog is managed. Alma will use the metadata import framework with BIBFRAME as a source format.40

Schema.org and Schema Bib Extend

In addition to the rules for crawling and indexing described earlier, the world’s biggest search engines have declared their preference for how they want data on websites to be represented. Their preferred markup vocabulary, called schema.org, is optimized for expressions of data that emphasize Semantic Web principles, such as canonical identifiers that unambiguously represent things and the representation of “offers,” the terms of purchase or lending of inventory items or of services such as a car rental or movie showing. A group of librarians, consultants, and commercial vendors has quietly and effectively influenced this preference through collaboration and well-crafted recommendations to the schema.org editors. Led by Richard Wallis, this group, Schema Bib Extend, has taken a highly pragmatic approach to proposing changes to schema.org that make descriptions of bibliographic items more precise. As with Linked Data for Libraries, the explicit mission is technical and aimed at the quality and precision of the infrastructure. The group declares its mission to “discuss and prepare proposal(s) for extending Schema.org schemas for the improved representation of bibliographic information markup and sharing.”41
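As a minimal sketch of the markup the search engines prefer, the example below describes a book with an “offer” that represents the terms under which a library lends it. The URIs, names, and values are illustrative; the types and properties (Book, Offer, offeredBy, availability) come from the published schema.org vocabulary.

```python
import json

# Structured data for a single item page in a library catalog, expressed
# as schema.org JSON-LD. Identifiers and values here are illustrative.
book_page_markup = {
    "@context": "https://schema.org",
    "@type": "Book",
    "@id": "https://library.example.org/item/12345",
    "name": "The Old Man and the Sea",
    "author": {"@type": "Person", "name": "Ernest Hemingway"},
    "offers": {
        "@type": "Offer",
        "price": "0.00",  # lending, not sale
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "offeredBy": {"@type": "Library", "name": "Example Public Library"},
    },
}

# Embedded in a page (for example in a <script type="application/ld+json">
# tag), this is the structured data a crawler can read when judging
# relevance or assembling a Knowledge Card.
print(json.dumps(book_page_markup, indent=2))
```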

Schema Bib Extend has proposed, successfully, that schema.org add new properties that let a site declare, for example, that a work is a translation of another work or that a work is a newspaper. These are seemingly obvious declarations, but they were not available in the schema.org vocabulary, and the group used its collective knowledge and experience to recommend them, along with a small set of other changes, to the schema.org editors. Wallis describes the successes of the BibEx group:

  • Less-commercial wording—Sounds simple but was very effective (Just adding “or to loan a book” to the description of offer is a benefit for libraries)
  • Citation—Moved from an obscure place on MedicalScholarlyArticle onto the more generic and useful CreativeWork
  • Work Relationships—A lightweight version of the complex entity relationship model described by libraries
  • Periodicals—Added ability to optionally describe an article in a PublicationIssue in a PublicationVolume of a Periodical
  • Multi-volume works—Added hasPart and isPartOf to CreativeWork—much broader applicability than just multivolworks
  • Many examples of bibliographic items42

Finally, the most significant acknowledgment of the value of input from libraries was in the creation of the new addition to schema.org called bib.schema.org. It contains the specific additions from this group of experts and is a durable contribution to schema.org.
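A short sketch shows a few of these additions in use: an article placed in an issue and volume of a periodical, and a work linked to its translation. The titles and values are illustrative; the types and properties (PublicationIssue, PublicationVolume, Periodical, workTranslation, isPartOf) are drawn from schema.org as extended through the group’s proposals.

```python
import json

# An article located in an issue, a volume, and a periodical, using the
# periodical structures the BibEx group helped add. Values are illustrative.
article = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "Linked Data in Libraries",
    "isPartOf": {
        "@type": "PublicationIssue",
        "issueNumber": "2",
        "isPartOf": {
            "@type": "PublicationVolume",
            "volumeNumber": "24",
            "isPartOf": {"@type": "Periodical", "name": "Example Quarterly"},
        },
    },
}

# A work linked to its translation with the lightweight work-relationship
# properties the group proposed.
original_work = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Don Quijote de la Mancha",
    "inLanguage": "es",
    "workTranslation": {"@type": "Book", "name": "Don Quixote", "inLanguage": "en"},
}

print(json.dumps([article, original_work], indent=2))
```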

A knowledgeable observer might ask why BIBFRAME is necessary when the search engines have already declared a preference for a vocabulary. The reply from library Semantic Web experts is that it will always be necessary to have a vocabulary used within libraries to exchange data at a level of detail that isn’t useful on the web. That additional detail includes transaction data, legacy data from the old MARC systems, and anything else that is important for the efficiency of library workflows but not useful on the web.

Zepheira and Entrepreneurial Efforts

In a time of change, with challenges to familiar ways of working and perceived threats to the ongoing value of libraries, an opportunity emerges to provide commercial services around web visibility.

So far, just one company has entered the market to explicitly provide those kinds of services. Zepheira, based in Powell, Ohio, won the original contract to help the Library of Congress define the BIBFRAME vocabulary. It was chosen because it was able to demonstrate familiarity with libraries and experience with Semantic Web technologies. Zepheira’s marketing materials use language that is explicit about the issue of libraries’ visibility on the web: “The promise of moving library assets to become visible on the web is exciting. It is also a move that will be most successful with planning and foresight into the full range of a library’s operations, content, collections, and internal and external partners [sic] capabilities.”43

Zepheira has been unique in seizing the opportunity to offer services that fall squarely under the category of change management: explaining principles and demonstrating experimental tools for librarians who want to see what the future technical infrastructure will look like. Technical services librarians are comfortable with a focus on their processes and tools, and Zepheira has essentially turned that culture into a business, helping to assuage librarians’ anxiety by explaining process and tools and offering training services to fill that need.

To provide a community forum for discussion, experimentation, and learning, Zepheira founded the LibHub service. Early experiments involved Zepheira working with libraries to transform traditional library data into web-accessible formats, so that libraries could see what their data looks like in those formats and learn technical details along the way. Next, the group experimented with how search engines could crawl, index, and use the data.

Zepheira’s second line of business, called The Library Link Network, is aimed at the issue of visibility on the web. Its technical and product leads, Eric Miller and Jeff Penka, understand the technical requirements for success on the web, and they are aware of the limitations of the current library catalogs in meeting those requirements. In response, they have designed a product that takes the library’s traditional data and makes it available for crawling and indexing by the search engines. The goal is a data set that is accessible to the search engines and that is generated by algorithms attuned to the requirements of the Semantic Web. This is a strong move toward satisfying the requirements laid out for relevance in the traditional search engine results and placement in the Knowledge Card.44

In the area of training libraries to understand Semantic Web concepts and the technical details of vocabularies and other Semantic Web infrastructure, Zepheira has provided training services, and more recently, the Library Juice Academy has emerged as a source for those services.

Notes

  1. Roy Tennant, “MARC Must Die,” Library Journal 127, no. 17 (October 15, 2002), http://lj.libraryjournal.com/2002/10/ljarchives/marc-must-die.
  2. Working Group on the Future of Bibliographic Control, On the Record (Washington, DC: Library of Congress, 2008), 24, www.loc.gov/bibliographic-future/news/lcwg-ontherecord-jan08-final.pdf.
  3. “BIBFRAME,” Bibliographic Framework Initiative, Library of Congress, accessed February 11, 2016, https://www.loc.gov/bibframe.
  4. Ibid.
  5. Beacher Wiggins (Director for Acquisitions and Bibliographic Access, the Library of Congress), interviewed by Ted Fons by telephone, 9 November, 2015.
  6. Ibid.
  7. Ibid.
  8. Richard Wallis (Independent Structured Web Data Consultant), interviewed by Ted Fons by Skype, 23 October, 2015.
  9. Jeff Penka (Vice President for Product Management, Zepheira, Inc.), interviewed by Ted Fons, 24 November, 2015.
  10. Beacher Wiggins (Director for Acquisitions and Bibliographic Access, the Library of Congress), interviewed by Ted Fons by telephone, 9 November, 2015.
  11. Diane Hillmann (Partner, Metadata Management Associates), interviewed by Ted Fons by telephone, 4 February, 2016.
  12. Ibid.
  13. Diane Hillmann and Karen Coyle, “Resource Description and Access (RDA): Cataloging Rules for the 20th Century,” D-Lib Magazine 13, no. 1/2 (January/February 2007), www.dlib.org/dlib/january07/coyle/01coyle.html.
  14. Ibid.
  15. Gordon Dunsire, “RDA Data Capture and Storage,” (presentation, American Library Association Midwinter Conference, Boston, MA, January 8–12, 2016).
  16. “About,” BIBFLOW, IMLS Project of the University of California, Davis, University Library and Zepheira, accessed February 11, 2016, https://www.lib.ucdavis.edu/bibflow/about.
  17. Carl Stahmer (Director of Digital Scholarship, University of California, Davis), interviewed by Ted Fons by telephone, 17 November, 2015.
  18. Ibid.
  19. Ibid.
  20. Ibid.
  21. Ibid.
  22. Ibid.
  23. Ibid.
  24. Philip Schreur (Assistant University Librarian for Technical and Access Services, Stanford University), interviewed by Ted Fons by telephone, 6 November, 2015.
  25. Ibid.
  26. Linked Data for Libraries (LD4L), main page, DuraSpace wiki, last updated February 10, 2016, https://wiki.duraspace.org/pages/viewpage.action?pageId=41354028.
  27. Ibid.
  28. Dean B. Krafft, “Expected Outcomes,” Linked Data for Libraries (LD4L), DuraSpace wiki, last updated September 26, 2014, https://wiki.duraspace.org/display/ld4l/Expected+Outcomes.
  29. Philip Schreur (Assistant University Librarian for Technical and Access Services, Stanford University), interviewed by Ted Fons by telephone, 6 November, 2015.
  30. Ibid.
  31. Dean Krafft, Kathy Chiang, and Mary Ochs, “Enhancing the University’s Knowledge Management Using VIVO,” a presentation given at the LITA Forum, November 2014, http://connect.ala.org/node/230876.
  32. Philip Schreur (Assistant University Librarian for Technical and Access Services, Stanford University), interviewed by Ted Fons by telephone, 6 November, 2015.
  33. Ibid.
  34. Ibid.
  35. Ted Fons, Jeff Penka, and Richard Wallis, “OCLC’s Linked Data Initiative: Using Schema.org to Make Library Data Relevant on the Web,” Information Standards Quarterly 24, no. 2/3 (Spring/Summer 2012).
  36. John Chapman (Product Manager, Metadata Services, OCLC), interviewed by Ted Fons by telephone, 10 November, 2015.
  37. Ibid.
  38. Ibid.
  39. Shlomo Sanders, “Linked Library Data: Making It Happen,” Tech Blog, Ex Libris Developer Network, December 27, 2015, https://developers.exlibrisgroup.com/blog/Linked-Library-Data.
  40. Ex Libris, Putting Linked Data at the Service of Libraries: The Ex Libris Vision and Roadmap (Ex Libris, 2015), 5, www.exlibrisgroup.com/files/Publications/LinkedDataattheServiceofLibraries.pdf.
  41. “Schema Bib Extend Community Group,” W3C, accessed October 23, 2015, https://www.w3.org/community/schemabibex.
  42. Richard Wallis (Independent Structured Web Data Consultant), interviewed by Ted Fons by Skype, 23 October, 2015.
  43. “Readiness and Visibility Assessment Information Request Form,” Zepheira, accessed April 29, 2016, http://zepheira.com/assessmentinfo/.
  44. Jeff Penka (Vice President for Product Management, Zepheira, Inc.) and Eric Miller (President, Zepheira, Inc.), interviewed by Ted Fons, 24 November, 2015.
