lrts: Vol. 55 Issue 3: p. 124
Assessing the Cost and Value of Bibliographic Control
Erin Stalberg, Christopher Cronin

Erin Stalberg is Head, Metadata and Cataloging, North Carolina State University Libraries, Raleigh; serin_stalberg@ncsu.edu
Christopher Cronin is Director, Metadata and Cataloging Services, University of Chicago Library; croninc@uchicago.edu
The authors wish to thank Ann-Marie Breaux, John Chapman, Karen Coyle, Myung-Ja Han, Jennifer O'Brien Roper, Steven Shadle, and Roberta Winjum, members of the Task Force on Cost/Value Assessment of Bibliographic Control, who along with the authors, participated in the work delivered to the Heads of Technical Services in Large Research Libraries Interest Group at the American Library Association Annual Conference, Washington, D.C., June 25, 2010, on which this paper is based.

Abstract

In June 2009, the Association for Library Collections and Technical Services Heads of Technical Services in Large Research Libraries Interest Group established the Task Force on Cost/Value Assessment of Bibliographic Control to address recommendation 5.1.1.1 of On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control, which focused on developing measures for costs, benefits, and value of bibliographic control. This paper outlines results of that task force's efforts to develop and articulate metrics for evaluating the cost and value of cataloging activities specifically, and offers some next steps that the community could take to further the profession's collective understanding of the costs and values associated with bibliographic control.


The technical services community has long struggled with making sound, evidence-based decisions about bibliographic control. This has been demonstrated recently by controversy over the 2006 Library of Congress (LC) decision to change its practices for series authority control, concern over the impending implementation of Resource Description and Access (RDA), the increasing need to better integrate library bibliographic data with nonlibrary web data, and requests from library administrators to document the value of investment in cataloging operations. The ability to make evidence-based decisions has been hindered by a lack of both operational definitions of value and methods for assessing cost and value within larger institutional constructs. To date, libraries have not developed robust cost/benefit metrics, and those for bibliographic control are even further lacking. The development of cost/benefit analyses for libraries may be difficult, but faced with limited resources and an array of directions in which to move forward, libraries find that articulating the varied cost/value propositions in measured and concrete ways is increasingly necessary.

In June 2009, the Heads of Technical Services in Large Research Libraries Interest Group of the Association for Library Collections and Technical Services (ALCTS) sponsored the Task Force on Cost/Value Assessment of Bibliographic Control (hereafter referred to as “the Task Force”) to begin to identify measures of cost, benefit, and value of bibliographic control. This paper offers a literature review, outlines the work of that Task Force, explores operational definitions of value associated with bibliographic control, suggests research areas that will further the profession's understanding of the value of cataloging activities, discusses possible cost measures, and considers interdependencies between creators and consumers of bibliographic data.


Literature Review

The literature gives evidence of a lengthy dialogue about the cost and value of cataloging, often tied to a discussion about the impact of advancing technology. Of interest is how similar that dialogue has been over time. In an address to the New York State Library School in 1915 titled Cataloging as an Asset, Bishop asked his audience “of what value is a knowledge of cataloging?”1 Only fifteen years after the LC had begun distributing cards, Bishop—who at the time was superintendent of the LC's reading room—remarked,

Seventy-five per cent of the cards needed in the various libraries of the country are being supplied by the Library of Congress. It is not unnatural, in fact it is almost inevitable, that there should have come a lessening of interest in cataloging work, and even a dearth of catalogers… . The successful adaptation of a manufactured product is seldom as interesting as the making itself… . Catalogs and catalogers are not in the forefront of library thought. In fact a certain impatience with them and their wares is to be detected in many quarters. Shallow folk are inclined to belittle the whole cataloging business. And there have not been wanting persons to sit in the seat of the scornful.2

The tension between the increasing availability of what we would now call “copy” and the resulting value of employing professional catalogers was clearly palpable almost a century ago. The crux of Bishop's argument is that cataloging forms the core of the profession because the catalog itself is a valuable and essential instrument for the reader to do his or her work. Recognizing that libraries and their indexes, shelf lists, and public catalogs were growing exponentially, Bishop was concerned about the implications for cataloging, the catalog, and the values placed on them. He wrote,

We have continued to use an instrument whose value for small collections is well established, and we have built it up until it fairly threatens to break down of its own size and weight… . But we have not seemed to realize that all our skill and all our abilities are now needed to make our huge card catalogs workable. We shall need every bit of energy, vigor, and knowledge that we possess to adapt the card catalog to libraries of the future.3

Even in 1915, Bishop was keenly aware of the shared network that would eventually develop for library cataloging and record keeping, the impact that an investment in network-level operations could have on the profession, and the value of the rules developed to create cataloging. In thinking of the task of keeping track of books scattered across branches of public libraries, for example, he noted,

What a complicated thing is a modern “union shelf list,” a “combined catalog!” And how near we are to the day of union catalogs or “repertories” designed to show the resources of cities, or regions, perhaps of the entire country! Can you imagine anyone unversed in practical cataloging undertaking to supervise such records? … The future is a day of co-operation, and co-operation in most cases on the common basis of one set of cataloging rules governing a supply of contributed entries. You will begin to see something of the value of those rules.4

Cost measures, especially for technical services, have formed part of the library literature since at least the latter part of the nineteenth century.5 In his historical review of discussions of cataloging costs, Harris highlighted Congress's own complaints that it cost 22 cents per book to catalog the library that Thomas Jefferson sold to them after the original Library of Congress collection was burned by the British during the War of 1812.6 Harris surmised that this incident “yielded one of the earliest figures on cataloging in this country and probably the first recorded protest over the high cost of cataloging.”7 By 1941, Metcalf wrote that surveys of cataloging costs had become “vogue,” but that they “accomplished little, except to make us understand that costs were high and that there seemed nothing to be done about it.”8

By the 1950s, librarians sought to not just list the types of costs associated with cataloging and formulas to calculate them, but also to contribute to the interpretation of those costs. Swank wrote that calculating the per-unit cost of cataloging is not enough on its own; one also must understand the evolving context of those unit costs, which will not remain static over time.9 For instance, unit costs increase “as ever sharper distinctions must be drawn among ever larger quantities of materials. The ‘no conflict’ principle is increasingly more difficult to apply, and the definition of the relationships among books becomes more and more subtle.”10 Reading this in 2010 is eerily timely, considering the proliferation of access to information afforded by the Internet and efforts like the Google Books Project, as well as the amount of current discussion on the costs of implementing RDA, which focuses largely on describing relationships between resources.11

Swank also touched on the then-half-century mark of LC card distribution and the fact that many libraries had still not realized the economic benefits of centralized copy production; they were not accepting copy as is, but were reviewing and altering cards at length. While libraries at the time argued that they were placing a greater value on the institution's specialized needs over universal or shared economies of scale (“Uniformity at the local level may seem to be more important than conformity at the national level”), Swank observed that entrenchment may be at the root of this value compromise.12 He recognized the hidden costs of intangibles—such as morale, organizational culture, and the provision of adequate training—and encouraged moving beyond computing unit costs alone to also analyzing and measuring the work itself. Swank was dismayed that, at the time, no studies had related costs to values or results in a way that could be used by other libraries; he was even more dismayed that no studies had yet evaluated the product of cataloging efforts against the needs of the readers.

One notable, early attempt to apply cost/benefit analysis across an entire library system was performed by MIT Libraries in 1969 and reported by Raffel and Shishko.13 However, despite acknowledging that cataloging used 21.2 percent of the MIT Libraries' general and research collection budget at the time—3.4 percent higher than the purchasing budget itself—the authors dedicated only four pages of the book to analysis of cataloging operations. Reviews of the cost/benefit analysis methods applied to MIT were not positive across the profession, largely because the analysis was performed by an economist and a political scientist, not a librarian. McAnally observed that “the authors suffer from two severe handicaps—relative ignorance about the details of libraries, learning, and research, and also the absence of clear objectives and good measures of success or effectiveness in the university library world.”14 Indeed, Raffel and Shishko themselves had recognized the limitations of their method, noting that “much of the analysis that has been presented so far has relied primarily upon impressionistic judgments of the benefits associated with a given system and of comparisons among systems serving different objectives.”15 The study did not have broad impact either at MIT or elsewhere.

To Bishop, the rapidly increasing size of collections justified the investment in cataloging.16 More than seventy years after his address to the New York State Library School in 1915, the importance of working at the network level was clearly understood on the philosophical level, but not yet fully realized operationally. In the late 1980s, Lahiri wrote that the “proliferation of information has surfaced as a persistent prospect and problem to society. This proliferation further complicates the ways and means of bibliographic control and, more crucially, the justification of cost and benefit of providing access/exposure to the users.”17 Lahiri noted that automation in the 1960s attempted but failed to solve problems associated with cataloging the growing corpus of information because those efforts were largely concentrated in research libraries

with little conceptual basis which could be used for a broader national context. Instead they emphasized that the unique or special aspects of cataloging must meet the needs of their own institutions first. Even the MARC format, despite its far-reaching impact in cataloging automation at the national and international levels, was not free from such proclivity.18

Earlier, in 1981, Koel had expressed similar concern about the concentration of cataloging costs at the institutional level and promoted forming a centralized federal Agency of Bibliographic Control that would not only steward master records, but also perform cost/benefit analyses of existing practices that would ensure improved efficiencies of scale.19 Lahiri explained that even though criticisms of the “slow and costly” nature of cataloging have not declined over time, there was still

not much or any calculation, clarification, explanation, and justification for the value of an authoritative catalog. An absence of the discussion on the benefits of bibliographic control is equally conspicuous. Some believe that in order to increase widespread demand for information, bibliographic systems will have to provide their worth in dollars. To estimate that value we should use economic criteria.20

Unfortunately, even with this explicit recognition of the lack of applicable measures, few are offered.

In advocating for the idea of a national database, Lahiri was acutely aware of the same challenges we recognize even today, stating that cost/benefit analysis of cataloging is complicated by the fact that users who access information do so outside of a typical marketplace economy.21 Access, he wrote, is largely provided in a noneconomic environment, making difficult the justification of building new technologies and systems to support bibliographic data, the value of which is elusive. Lahiri's own assessment of the library literature of the time was that the concepts of “benefit” and “effectiveness” were used in interchangeable and confusing ways, and that most experts at the time felt that library services as a whole could not be measured in monetary terms. In relation to the idea of building a national integrated bibliographic database, he asserted that while devising “ways for the measurement of the value of national bibliographic control activity” is critical, most existing studies are ambiguous, ill-defined, misleading, unreliable, and “clouded with contradictions.”22 He stated that the extraneous benefits of improvements to “research, education, and social well-being” are so intangible in character “as not to be susceptible to appraisal in monetary terms.”23

Tangible measures for cost, value, and benefits of cataloging are rare in library literature. Lahiri posited that the effects of centralizing a national database could be measured by the speed at which records could be reliably used by others, that benefits could be measured in terms of increases to scholarly output, but that ultimately, quantification would be nearly impossible. Even if the benefits could be quantified,

they cannot be valued by any market criteria and are generally termed as intangibles… . Although some quantitative assessment of benefits is possible, the multiplicity of benefits and their diffusion among different aspects of life will normally be such that their precise quantification is difficult to trace … [and would be] an attempt to measure the unmeasurable.24

In 1997, the Council on Library Resources commissioned a detailed study on the value of library and information services.25 The study had two primary objectives: to analyze issues related to the value of library and information services in order to develop a conceptual structure that could serve as a theory of “use-oriented value of information and information services,” and to apply the theoretical framework to propose methods for similar studies of other information services generally.26 The report, issued in two parts, discussed the difficulty and complexity of defining value. Despite generating a strong taxonomy of values, the study did not apply those value structures directly to monetary or other economic measures, concluding, “While studies and determination of value are a difficult and involved proposition, they are only the first step in meeting a larger challenge. The challenge is to connect studies of value with some appropriate economic indicators.”27

Missingham, and separately Imholz and Arns, have reviewed and summarized studies that attempted to quantify, in monetary terms, the benefits of library services generally. These reviews did not quantify bibliographic control specifically, except to show calculations of cost savings derived from using copy cataloging records extracted from national bibliographic databases.28 A recent attempt at cost/benefit analysis within large academic research libraries is the University of Illinois at Urbana-Champaign's attempt to correlate library costs with direct monetary benefits in the form of grant funds.29 Using the results of a study on return on investment at one institution, during only one year, has obvious and acknowledged limitations in terms of wider applicability, but it does serve as an example of one approach that could be adjusted for other environments or institution-types. Again, the context of the Illinois study is the organization as a whole and the model put forth does not specifically address bibliographic control. Cornell Library's more informal approach to demonstrating return on investment simply lists how the library is used and how it generates more value than the money expended to support its operations.30 Again, however, it does not focus on bibliographic control and metadata provision.

Some studies have been careful to note that costs are not necessarily comparable across institutions when those institutions have variations in such factors as number and levels of staff, types of resources, levels of cataloging, and the number of records processed as a result. McCain and Shorten analyzed the results of a survey of academic libraries, which focused on staffing levels, the number of items processed, the presence and size of a backlog, the automation system in use, and perceptions of efficiency.31 They presented measures of efficiency and effectiveness for cataloging departments on the basis of those factors. Morris and colleagues described a longitudinal study by Iowa State University measuring cataloging time and costs, as well as the tasks that staff performed.32 Their article categorized the tasks (e.g., copy cataloging, original cataloging, authority control, recataloging, and monographic and serials cataloging) in detail, and analyzed the productivity of each task with staff. A subsequent study analyzed tasks librarywide.33

Measuring the relationships between copy cataloging, original cataloging, and partial-original cataloging, and levels of staffing has been a frequent focus of studies, perhaps because they are considered more tangible or well-understood categories of bibliographic control. Miksa recently reflected on her 2005–6 study of technical services operations in rural, urban, and suburban libraries in North Texas, which asked respondents to give the average number of hours per week dedicated to original or partial-original cataloging activities.34 Of the 103 respondents, 8 libraries reported 0 hours, 59 reported fewer than 10 hours, and only 5 reported 31 to 40 hours. Miksa also described her own anecdotal experiences with her cataloging students and library staff across Texas, many of whom have expressed concerns about the diminishing value their organizations place on cataloging. She offered authority control as an example of a cost/value compromise made by many libraries in her survey, with only 12.5 percent of respondents reporting weekly or monthly maintenance of authorities databases. While the survey does not elicit reasons behind the time spent on maintaining authority databases, Miksa reported her own impression that perhaps there is a lack of understanding of the purpose and value of authority control or a belief that outsourced records are good enough or “it may simply be rooted in the more realistic lack of funding.”35

Miksa further posited that investment in cataloging is an investment in adding value to resource description and access. She stated that poor or “dead-end” metadata is a reflection of the lack of value placed on quality cataloging, as evidenced by the decreased emphasis on cataloging in graduate library programs. She wrote, “I strongly suspect that we are seeing in our catalogs the result of the disturbing lack of knowledge of many cataloging librarians and library administrators that resulted from relegating traditional courses to the back burner over the past decade or so.”36

Hider's application of the contingent valuation method to estimate the monetary value added to a collection by the technical services operations of an Australian city public library demonstrates a recent (Hider claims the first) attempt to place a dollar figure on the value of technical services.37 Contingent valuation employs survey methods to establish value for resources and services that are nonmarket (i.e., not sold). For the study, Hider presented three scenarios to gauge the relationship between cost and the respondent's willingness to pay: a referendum was held to ask citizens to pay a monthly levy to maintain library services at present levels or the library would close; the library was converted to “self-service,” wherein the library would maintain the catalog and the collection as they exist today; and the self-service library consisted solely of the collection, with no catalog. The benefit/cost ratio for the first scenario was 1.33:1; the ratio for the second scenario was 1.8:1; the ratio for the third scenario—that is, for technical services specifically—was 2.4:1, demonstrating an especially good value provided by technical services.38 While Hider's study is of a small city public library and does not focus on bibliographic control in an academic library environment, he articulated interesting methodological issues that could translate across any library size or type.
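
The arithmetic behind such a benefit/cost ratio is simple: mean stated willingness to pay, scaled to the population served and annualized, is set against the annual cost of the service being valued. The following minimal Python sketch illustrates the calculation; the function name and every figure are illustrative placeholders, not data from Hider's study.

def benefit_cost_ratio(monthly_wtp_responses, households, annual_cost):
    # Mean stated willingness to pay, annualized and scaled to the
    # population served, divided by the annual cost of the service.
    mean_monthly_wtp = sum(monthly_wtp_responses) / len(monthly_wtp_responses)
    annual_benefit = mean_monthly_wtp * 12 * households
    return annual_benefit / annual_cost

# Placeholder data: five survey responses (dollars per month), 30,000
# households served, and a $500,000 annual technical services cost.
sample_responses = [2.50, 5.00, 0.00, 10.00, 3.00]
print(round(benefit_cost_ratio(sample_responses, 30_000, 500_000), 2))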

Gorman related the value of cataloging to the value of the resources cataloged. He posited that the two main problems with bibliographic control of electronic resources are that the majority are themselves “of no value, little value, very localized value, or temporary value,” and they are “inherently unstable and shape-shifting.”39 He stated that cataloging resources that are valueless, of limited value, or that could be changed or altered in the future is neither rational nor efficient. Instead, the value of cataloging will only be realized if resources that are actively assessed to have value and permanence are cataloged.

RDA, the proposed successor to the Anglo-American Cataloguing Rules, includes cost efficiency as one of its objectives: “The data should meet functional requirements for the support of user tasks in a cost-efficient manner.”40 Institutions participating in the U.S. national RDA test will, as a part of that process, contribute surveys for every bibliographic record created during the test, including details on how long it took to catalog a resource. The results of the test, and what might be learned from that process about the costs of implementing RDA or the values it will help realize, remain unknown.


Background

The objective of the Task Force was not to develop a complete model of costs and value for bibliographic data, but to begin to identify sound measures that can inform decisions by those engaged in the creation, exchange, and use of bibliographic data. The establishment of the Task Force was one response to the 2008 On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control, in which the working group noted that the community has inadequate measures for moving forward on sound decision-making.41 The Heads of Technical Services in Large Research Libraries Interest Group therefore charged the Task Force with identifying measures of the cost, benefit, and value of bibliographic control for key stakeholder communities, taking into account interdependencies between creators and consumers of bibliographic data, and developing a plan for implementing these measures.42 Measures of cost and value, the charge read, could be granular and relative; for example, they could address the cost/value of controlled and uncontrolled name headings in different contexts or compare the differences between descriptive practices and standards used by libraries with those commonly used by the publishing or book trade industry. The charge also stated that stakeholders should include not only the end-users of library materials, but the parties and processes involved in the management of information resources and data, such as book vendors, system vendors, and software applications. Cost and value would be considered in relation to all sizes and types of libraries (public, academic, special, school, etc.). Interdependencies between creators and consumers of bibliographic data would be identified, since the benefits of bibliographic control may be separated from the current cost centers by multiple business processes, or may be cumulative over time.

The Task Force interpreted its charge broadly, encouraged to do so by the Working Group on the Future of Bibliographic Control:

The phrase “bibliographic control” is often interpreted to have the same meaning as the word “cataloging.” The library catalog, however, is just one access route to materials that a library manages for its users. The benefits of bibliographic control can be expanded to a wide range of information resources both through cooperation and through design. The Working Group urges adoption of a broad definition of bibliographic control that embraces all library materials, a diverse community of users, and a multiplicity of venues where information is sought.43

The Task Force therefore challenged itself to consider the value of bibliographic data in a variety of contexts and from a variety of perspectives. In doing so, the Task Force sought out a useful vocabulary for discussing value in relation to bibliographic control, but ultimately found none.

The Task Force also addressed vocabulary around which to discuss cost. While one can outline elements contributing to the cost of cataloging (and work has been done in this area), evaluating those metadata costs and determining whether those costs are currently too high, without first having a clear understanding of their value, is difficult. When the LC changed its practices for series authority control in 2006, it opted for a cost-lowering technique without community metrics for assessing the value impact of that decision.44 In the course of its work, the Task Force attempted to consider cost and value separately. Separate analyses can be pursued simultaneously to a point, but one cannot simply lower costs (unless one can figure out how to achieve exactly the same outcome for less cost) without discussing what would be lost in value. Before useful and specific measures could be written, therefore, the Task Force needed to reframe its work to propose community definitions for value. The Task Force, in its final report, suggested a research agenda for the community and recommended that the Heads of Technical Services in Large Research Libraries Interest Group identify institutions within this group or solicit partners from the community who are willing to contribute to an evolving effort of applying metrics to assess cost and value of bibliographic control.45

The context for the Task Force's charge was provided by section 5.1 of On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control:

Bibliographic control occurs in a complex system of participants (contributors and users), information resources products and services, and technological capabilities. There are increasing numbers of participants, information formats and media, and information technologies. Contributors of bibliographic data and services may have different and sometimes conflicting agendas. Multiple user communities may have changing and expanding needs and expectations. In this increasingly complex environment, the actions taken by key players can have downstream impacts on others. Unfortunately, there are still inadequate measures of the costs, benefits, and value of bibliographic information and almost no information on the interdependencies within the broader bibliographic control environment, including the impact of internationalization.

Although the use of cost-benefit analysis for service organizations such as libraries is problematic, all organizations must achieve goals and provide value. Bibliographic control may be considered by many to be a public good, but it has real costs attached to it, just as, presumably, it has real value.46 [emphasis added]

With the publication of On the Record in January 2008, the ALCTS board established the Task Group on the LC Working Group Report to analyze the recommendations put forward in the report and to identify those recommendations that ALCTS is well suited to address. In April 2008, the Task Group released a set of ten recommendations for the ALCTS community.47 The ALCTS board then formed the Implementation Task Group, charged with identifying ALCTS committees and others outside of ALCTS to take responsibility for moving one or more of the ten recommendations forward.

ALCTS's eighth recommendation was to bring together key participants to agree to implement a set of measures of costs, benefits, and value of bibliographic control for each group of participants and to identify interdependencies between participants.48 At the 2009 American Library Association (ALA) Midwinter Meeting, the chair of the ALCTS Implementation Task Group proposed that the Heads of Technical Services in Large Research Libraries sponsor a task force to look at those measures of cost, value, and benefit for bibliographic control. The Heads of Technical Services in Large Research Libraries have the authority and leadership to bring key players together from their libraries and others to forge agreement on costs and benefits of bibliographic control—an effort that would not only serve research libraries, but potentially be of interest to libraries of all sizes and constituencies. The Task Force on Cost/Value Assessment of Bibliographic Control began work following the 2009 ALA Annual Conference.

The Task Force's report outlines its discussions of four fundamental questions necessary to defining metrics for value: (1) Can value be measured in ways that are non-numeric? (2) Is discussing relative value over intrinsic value helpful? (3) Does value equal use? and (4) Is it possible to define a list of bibliographic elements that are “high-value” and others that are “low-value”?49 Given the difficulty in answering these questions, the lack of research into the area of value for bibliographic control, and the Task Force's desire to advance discussions about quantifying the value of bibliographic control in an environment where the vocabulary for doing so does not yet exist, the Task Force proposed seven operational definitions of value and offered suggestions for research in these areas. While the charge was to develop measures for value, the Task Force determined that doing so would not be helpful until the community has a common vocabulary for what constitutes value and an understanding of how value is attained, and until more user research into which bibliographic elements result in true research impact is conducted. The Task Force chose to scope the problem in a way to encourage discussion about value from various perspectives and provide next steps for institutions interested in taking on these crucial questions.


Operational Definitions of Value and a Research Agenda

At the core of the Task Force's report are seven operational definitions of value with recommendations for a research agenda and strategies for advancing that research.

Discovery Success

The Task Force identified discovery success as a key element of value and proposed research into which bibliographic elements produce useful retrieval results. While research exists into which elements are used in bibliographic data (largely MARC records), this research generally speaks to inputs—what catalogers are entering, based on what the rules prescribe—but does not directly or in measurable ways speak to which elements are of value to users. The MARC Content Designation Utilization Project (www.mcdu.unt.edu) provides a wealth of statistical data on MARC tag use. Publications from that project also address the correlation between MARC tag use and cooperative cataloging guidelines and instructions. Smith-Yoshimura and colleagues have explored the implications of MARC tag use on library metadata practices.50

The Task Force suggested research in the following areas:

  • Recognizing the inadequacy of log data currently generated by MARC-based systems, use search terms from user logs to evaluate which bibliographic elements match those search terms.51 Non-MARC bibliographic systems might exist in which these data can be more easily and accurately captured. (A minimal sketch of this matching approach appears below, after the discussion of challenges.)
  • In addition to log analysis, directly watch user behavior to determine which records users clicked through to and why.
  • Test discovery success in two systems when indexing two versions of the same record with and without certain metadata fields available. How does the presence or absence of elements affect users’ ability to retrieve?
  • In projects where brief records are being upgraded, capture the initial record set pre-upgrade and compare with discovery success post-upgrade.
  • Identify delivery systems where one system indexes table of contents data and the other does not; research impacts on discovery from user log data.

Research into these areas presents challenges. Data across institutions would vary because of indexing and system design issues (such as last in, first out sorting decisions or relevancy). Assessing such data across institutions would cause the community to ask questions about whether such differences are based on indexing decisions, display decisions, the nature of the collections, and other variables. Proving correlations between trends in findings and any particular factor would be difficult if the institutions comparing results ran tests under different conditions; however, statistical techniques such as meta-analysis could identify useful similarities in value and would have the advantage of enabling analyses of the value of bibliographic data within an information ecosystem that includes systems design.
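
As a minimal sketch of the log-matching idea in the first suggestion above, the following Python fragment tallies which bibliographic fields contain the terms of queries that led users to a record. The log format, field names, and sample records are hypothetical; an actual study would draw on real transaction logs and indexed MARC or non-MARC data.

from collections import Counter

def fields_matching_query(query, record):
    # Return the bibliographic fields whose text contains any query term.
    terms = [t.lower() for t in query.split()]
    return {field for field, text in record.items()
            if any(term in text.lower() for term in terms)}

def tally_matches(query_log, records_by_id):
    # For each (query, clicked_record_id) pair in the log, count which
    # fields of the retrieved record could have produced the match.
    tally = Counter()
    for query, record_id in query_log:
        tally.update(fields_matching_query(query, records_by_id.get(record_id, {})))
    return tally

# Illustrative, hypothetical records and log entries.
records = {"b1": {"title": "Rural libraries in Texas",
                  "subject": "Public libraries -- Texas",
                  "table_of_contents": "Funding; Staffing; Cataloging"}}
log = [("texas libraries", "b1"), ("cataloging staffing", "b1")]
print(tally_matches(log, records))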

Use

Use, represented largely as circulation (which may include in-house circulation for those libraries that capture these data), is a helpful measure of value. Use, quantified by circulation counts, has been examined for collection development and maintenance purposes, but not to assess the impact of library resources on a user's research. Hit counts on metadata records in a digital library environment are problematic because they are not always considered reliable measures of the user experience. Bollen, Van de Sompel, and Rodriguez wrote that usage data has great potential for analyzing scholars’ use of resources.52 Perneger argued that hit counts are not reliable measures of actual resource use because the number only reflects the visits to the website.53 Miller wrote that hit counts are considered ambiguous because they include “all of the complex elements that are loaded separately to comprise that page as well as the Web crawlers.”54 Without standards to record and exchange the data, understanding the exact meaning of use data is difficult. Because they are numeric and quantifiable, use statistics may be a tempting but ultimately inadequate measure for articulating value. They are only one piece of a complex puzzle.

What does use mean for non- or low-circulating materials in libraries that have strong commitments to preserve the cultural record, including rarely requested primary source materials? And is value not derived from bibliographic control when a user decides from the metadata record that a particular item is not useful? Although use is clearly only part of the value equation, two questions are of critical interest. Do items with “better” records circulate more frequently or are electronic resources with “better” records more highly used? Is fuller bibliographic information valuable enough to be worth the cost?

The Task Force proposed a method for addressing these questions: where collections were shelved in open, browseable stacks before cataloging, compare circulation statistics of the same items before and after full cataloging. Criteria for choosing institutions would necessarily include running an integrated library system (ILS) that logs the catalog record's date of completion and that contains sufficient historical circulation data. Alternatively, this could be conducted as a longitudinal study going forward.
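
A minimal sketch of that before-and-after comparison follows, assuming an ILS export that supplies a full-cataloging date and a list of checkout dates for each item; the field names and sample dates are hypothetical placeholders.

from datetime import date

def circulation_before_after(item, window_days=365):
    # Count checkouts in the year before and the year after full cataloging.
    cataloged = item["full_cataloging_date"]
    before = sum(1 for d in item["checkouts"]
                 if 0 < (cataloged - d).days <= window_days)
    after = sum(1 for d in item["checkouts"]
                if 0 < (d - cataloged).days <= window_days)
    return before, after

# Hypothetical item: one checkout before full cataloging, two after.
item = {"full_cataloging_date": date(2009, 6, 1),
        "checkouts": [date(2008, 9, 3), date(2009, 8, 15), date(2010, 2, 1)]}
print(circulation_before_after(item))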

Display Understanding

Several research questions address this operational value. How much of the data that catalogers create do users understand? How frequently does a user go from a brief display to a full display? When a user does go to the full display for more information, what information is he or she seeking? When users request items from storage or through interlibrary loan (ILL), what is missing in the bibliographic display that would help them assess the usability of that item before requesting it? Assuming that some percentage of users of ILL or stored items request them to evaluate their usefulness for their research, how might the bibliographic record help improve this evaluation step?

Various research projects using user studies, including focus groups and other behavioral research, could address these questions.

  • Ask what in a particular display is not understood, and what in the display helps the user decide this item is what he or she is seeking. Test the metadata with users from multiple approaches (e.g., the presence or absence of certain metadata, the displays of certain data elements for ease of use, and the rate of use and perceived usefulness of specific metadata elements). Particular attention should be paid to the elements that are beyond basic description, such as subject access, uniform titles, and classification. Another set of questions could involve user-assigned data—what would a user add, if he or she could add something to a record, to help the next person encountering it determine whether the resource would be useful?
  • Conduct testing of two iterations of the same interface (A/B testing), displaying different metadata elements. (A minimal sketch of one such comparison appears below, after the discussion of challenges.)
  • Survey users at the point of return of storage and ILL items.

The possible research projects are not without problems. Assessing the value of metadata separately from the quality of any particular discovery interface would be difficult. Data across institutions would vary because of system design issues. Assessing such data across institutions would cause the community to ask questions about whether such differences are based on indexing decisions, display decisions, the nature of the collections, and other reasons. Proving correlations between trends in findings and any particular factor would be difficult if the institutions comparing results ran tests under different conditions. However, statistical techniques such as meta-analysis could identify useful similarities in value and would have the advantage of enabling analyses of the value of bibliographic data within an information ecosystem that includes systems design.
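
For the A/B testing suggestion above, one starting point is a simple two-proportion comparison of a single success measure, such as whether users clicked through to the full record. The sketch below uses placeholder counts; an actual study would also report effect sizes and control for interface differences.

from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    # z statistic comparing the success rates of display variants A and B.
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    standard_error = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / standard_error

# Placeholder counts: variant A shows tables of contents, variant B does not.
print(round(two_proportion_z(120, 400, 90, 400), 2))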

Ability of Library Bibliographic Data to Operate on the Open Web and Interoperate with Vendors and Suppliers in the Bibliographic Supply Chain

The question here is where libraries would derive value if library bibliographic data were more integrated with web services (separate from, or in addition to, making library data more valuable to nonlibrary entities). Certainly, the extent to which data employs a syntax that is machine processable contributes to the value of library data. Significant work has been undertaken in this area in preparation for RDA.55 The community also needs further study on how much nonlibrary entities know about and understand library data and how the use of ONIX data is affecting the library supply chain.

Suggested areas for research are:

  • Research ONIX uptake throughout the bibliographic community. With several concrete ONIX-MARC projects underway, analysis can now be done to determine the extent to which ONIX data are valuable for cataloging workflows.56
  • Select a set of ONIX records from a known publisher and track over time how those metadata are used throughout the supply chain to vendors of bibliographic data, OCLC, libraries (i.e., Program for Cooperative Cataloging upgrade) and out to the open web (Amazon, Google, LibraryThing, etc.) as a gauge of value and a measure of success in sharing data beyond library borders. (A minimal sketch of one such measurement appears after this list.)
  • Determine how much library data is currently being used outside the library ecosystem. While the potential here lies in the RDA Vocabularies as linked data, doing research on this now would give the community a baseline for comparing the extent of usability of library data in nonlibrary contexts now with what is hoped will happen when library data become more truly accessible on the open web.
  • Analyze the extent to which library bibliographic data are successfully interacting with other programs in the user's bibliographic toolset (EndNote, Zotero, etc.).
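
One way to begin the supply-chain tracking suggested above is to measure how many of a publisher's ONIX-supplied elements survive unchanged in the downstream library record for the same ISBN. In the sketch below, the element-to-field mapping, record structures, and values are placeholders, not an authoritative ONIX-to-MARC crosswalk.

# Placeholder crosswalk from ONIX elements to fields in the library record.
onix_to_library = {"TitleText": "title",
                   "Contributor": "author",
                   "MainSubject": "subject"}

def reuse_rate(onix_record, library_record):
    # Fraction of mapped ONIX elements whose values survive, unchanged,
    # in the downstream library record for the same ISBN.
    mapped = [(o, l) for o, l in onix_to_library.items() if o in onix_record]
    if not mapped:
        return 0.0
    kept = sum(1 for o, l in mapped
               if onix_record[o].strip().lower()
               == library_record.get(l, "").strip().lower())
    return kept / len(mapped)

# Hypothetical records: the title survives intact, the contributor is edited.
onix = {"TitleText": "Rural Libraries in Texas", "Contributor": "Smith, Jane"}
library = {"title": "Rural libraries in Texas", "author": "Smith, Jane R."}
print(reuse_rate(onix, library))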

Ability to Support the Functional Requirements for Bibliographic Records (FRBR) User Tasks

RDA does not explicitly address which RDA elements support which FRBR user tasks. While RDA speaks directly to the bibliographic entities (work/expression/manifestation/item), the element lists do not speak directly to the facilitation of the user tasks (find/identify/select/obtain). RDA includes discussion of the user tasks in the introductory matter for the relevant chapters, the chapters have been mapped to the FRBR user tasks, and a number of the chapter names reference a user task (e.g., “Identifying Manifestations and Items”).57 Documentation around the development of the core element list explains that core elements were determined by assessing the value of those elements according to how they support the user tasks.58 However, much of this work is buried in narrative, and a direct mapping of RDA elements to FRBR user tasks has not been issued. Surfacing these data more explicitly would be useful.

Suggestions for research into the ability of RDA to support the FRBR user tasks are:

  • Undertake a mapping of the RDA elements to the FRBR user tasks.
  • Undertake usability research to determine if, in fact, these elements do provide value towards facilitating the user tasks.

The Task Force sought to aggregate various datasets and documents to create a mapping of RDA bibliographic elements to FRBR user tasks and to illustrate a value ranking. The aggregated data are presented in an appendix that accompanied the Task Force's final report.59 The 2009 OCLC report Online Catalogs: What Users and Librarians Want calls particular attention to user desires for elements supporting delivery.60 Users also requested discovery-related data, such as the ability to preview the book, cover art, summary and abstract data, and tables of contents data. While not all of these are covered by the RDA element set, summarization of the content, for example, was rated by IFLA as “low” for the identify task and “medium” for the select task. Work has been published by the MARC Content Designation Utilization Project (www.mcdu.unt.edu) showing how catalogers code MARC tags in support of the FRBR user tasks, but it does not address value from the user perspective. The Task Force recommended that further work be done in this area to aim for a common understanding of stated value for individual bibliographic elements and to test the value of an element for a user task. (A minimal sketch of how such an element-to-task mapping might be structured for analysis appears after the following list.)

  • In conjunction with other operational definitions of value above, determine which of these elements are commonly indexed, which are commonly displayed, which users pay attention to, which users understand, etc.
  • Consider integrating an RDA-to-FRBR User Tasks mapping analysis into the RDA Toolkit (www.rdatoolkit.org). Such a resource could provide guidance to catalogers, particularly in light of the RDA Toolkit workflow functionality.
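
As referenced above, a mapping of RDA elements to FRBR user tasks could be expressed as a simple data structure amenable to the kinds of analysis suggested here. In the sketch below, only the rankings for summarization of the content reflect the values cited earlier; every other entry is a placeholder awaiting the recommended research.

# Ratings for "summarization of the content" follow the values cited above;
# all other entries are placeholders ("?") awaiting the recommended research.
element_task_value = {
    "summarization of the content": {"find": "?", "identify": "low",
                                     "select": "medium", "obtain": "?"},
    "title proper": {"find": "?", "identify": "?",
                     "select": "?", "obtain": "?"},
}

def elements_supporting(task, mapping):
    # List elements with a stated (non-placeholder) value for a user task.
    return [element for element, tasks in mapping.items()
            if tasks.get(task) not in (None, "?")]

print(elements_supporting("select", element_task_value))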

Throughput and Timeliness

The extent to which data-creation processes facilitate timeliness in resource availability is a measure of value. Users cannot access materials that are sitting (literally or digitally) in uncataloged backlogs. Additionally, the value of editing existing records over cataloging materials completely lacking description is, of course, questionable. Research into this area ideally would demonstrate the effect on a community of not having new materials made quickly available.

The following areas are suggested as appropriate for research:

  • Measure the uptake of the data created by catalogers. In cases where the resource itself is available to users both before and after release of metadata in the library's discovery systems, compare resource use before full metadata has been loaded with use (in a defined timeline) after release of the metadata.61
  • Identify older imprints newly added to WorldCat and then determine how quickly other institutions add their holdings once the record has been input. This metric would not demonstrate direct user impact, but it could show something about how quickly uptake of new cataloging occurs throughout the MARC bibliographic ecosystem. If OCLC does not retain long-term retrospective data on record edits in WorldCat, would performing a prospective rolling analysis on records newly added to the database be possible?
  • For a set of materials, analyze publication dates against the dates when items were first acquired, first cataloged, and first circulated to identify trends in resource discovery and use (a minimal sketch of this lag analysis appears after this list). While other variables that affect discovery and use would be difficult to control, having an understanding of how quickly newly cataloged materials circulate could help determine appropriate throughput expectations.
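
A minimal sketch of the lag analysis suggested in the last item above follows, assuming each item record supplies the four dates named there; the field names and sample dates are hypothetical.

from datetime import date
from statistics import median

def lags_in_days(item):
    # Days from publication to acquisition, acquisition to cataloging,
    # and cataloging to first circulation.
    return ((item["acquired"] - item["published"]).days,
            (item["cataloged"] - item["acquired"]).days,
            (item["first_circulated"] - item["cataloged"]).days)

# A single hypothetical item; a real analysis would read an ILS export.
items = [{"published": date(2009, 1, 1), "acquired": date(2009, 3, 1),
          "cataloged": date(2009, 7, 1), "first_circulated": date(2009, 8, 15)}]
stages = ("to acquisition", "to cataloging", "to first circulation")
for stage, values in zip(stages, zip(*(lags_in_days(i) for i in items))):
    print(stage, median(values))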

Ability to Support the Library's Administrative and Management Goals

The question of which bibliographic elements provide value to the library for collection development, acquisitions, auditing, and inventory purposes beyond the value they provide for discovery or use by patrons needs to be addressed. One approach would be to survey the community to understand the value of the bibliographic data elements for librarians involved in managing collections.

Value Multipliers

The Task Force discussed aspects of value that affect the operational definitions above:

  • The extent to which bibliographic data are normalized
  • The extent to which data support collocation and disambiguation in discovery
  • The extent to which data use controlled terms across format and subject domains
  • The extent to which the level of granularity matches what users expect
  • The extent to which data enable a formal and functional expression of relationships (links between resources) to find “like” items
  • The extent to which data are accurate
  • The extent to which data enhancements are able to proliferate to all derivative records

All these items contribute to how valuable library data are in conjunction with the operational definitions proposed above. These are identified as “value multipliers” because they contribute to value, but the degree of contribution cannot be assessed until further research is done on the operational definitions outlined above and the community's value goals become clearer.


Measures of Cost

The Task Force also struggled with defining a vocabulary around which to discuss cost. While elements contributing to the cost of cataloging can be outlined (work has been done in this area), evaluating those metadata costs and determining whether those costs are currently too high is difficult without first having a clear understanding of value. Broadly, the following elements contribute to cost:
  • Salary and benefits multiplied by the time for new record creation (for all bibliographic control activities, including searching for copy, original description, MARC encoding, classification, subject analysis, authority work, and local practices that vary from greater accepted practice)
  • Cataloging tools (including Cataloger's Desktop, Classification Web, OCLC, RDA Toolkit, WebDewey, etc.)
  • Database maintenance (salary and benefits multiplied by the time on bibliographic and access (URL) corrections, vended authority control services, vended record upgrade notification services, activities such as “typo of the day,” etc.)
  • Overhead (training, policy development, documentation, cooperative cataloging arrangements, the systems that they are built on, the practices that grow up around them, etc.)

While calculating cost for the creation of individual elements or even areas of cataloging (such as authority control) by doing time studies is possible, doing so is most useful when set against a value question. The level of granularity needed to make the most meaningful analysis is not clear. The community also needs to be clear on the purpose. If the purpose is to bring down costs generally, the method would be to calculate the costs listed above (in a way agreed on by the community) and work to develop systems or an infrastructure that would help lower those costs. If the purpose is to ask whether the tasks are worth the costs, then better research into the value questions above is needed first.
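
To make the cost elements above concrete, the following sketch combines direct labor per record with an even per-record allocation of tools and overhead. Every figure is a placeholder that a library would replace with locally gathered data, and the even allocation is only one of several defensible methods.

def per_record_cost(hourly_salary_and_benefits, minutes_per_record,
                    annual_tool_costs, annual_overhead, records_per_year):
    # Direct labor per record plus an even per-record share of tools and overhead.
    labor = hourly_salary_and_benefits * (minutes_per_record / 60)
    allocated = (annual_tool_costs + annual_overhead) / records_per_year
    return labor + allocated

# Placeholder figures: $35/hour fully loaded, 20 minutes per record,
# $15,000 in cataloging tools, $60,000 in training and documentation
# overhead, and 40,000 records handled per year.
print(round(per_record_cost(35, 20, 15_000, 60_000, 40_000), 2))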

The Task Force discussed how the community might capture the costs of many individual bibliographic elements and, while even small costs add up over time given the way bibliographic description is done, imagining how one might calculate the cost of creating individual bibliographic elements is difficult. This direction also puts the emphasis on initial record creation and overlooks costs of maintaining the integrity of bibliographic databases over time. Much of the cost of bibliographic control is not in the original data creation but in metadata-maintenance activities that come later in the lifecycle.

Alternatively, the Task Force considered suggesting an extremely simple solution, such as the number of volumes cataloged divided by salaries, but this type of calculation fails to illustrate true costs. The festschrift fixed-field value is an emblematic example: the act of coding the value is not where the cost lies, but that is the cost most easily captured. True costs (and true savings, if catalogers were to stop coding this value or many others) are in the overhead category of training and documentation, which are significantly harder to quantify.

Deviations from standard practices carry added cost and consume additional processing time, often in the work and always in the overhead (training, policy development, documentation, etc.). The Task Force challenged libraries to determine how much their local practices are costing and to undertake conversations about whether they are of appropriate value to their constituencies. All libraries must actively decide what they are willing to pay to support their priorities. Local practices are often brought into question by outside forces: consultants, new administration or staff, planning for vended cataloging services, and so on. These influences force articulation and assessment of local priorities. The Task Force encouraged cataloging departments to embrace a culture of continuous cost/value discussion and assessment and, when possible, to invite objective, external influences to the discussion that will elicit attentiveness to library priorities and their associated costs.

Finally, opportunity costs need to be quantified. Time spent on low-value activities (no matter which operational definition is used for “value”) is time not spent on high-value activities. Having materials sitting in technical services waiting for copy to appear while libraries edit existing records inhibits discovery and use of collections. In its final report, the Task Force considered the research conducted by R2 Consulting, which completed a report for the LC in the midst of the Task Force discussions.62 This 2009 report, Study of the North American MARC Records Marketplace, states,

Our survey results also confirm our direct observation of many “aging” backlogs in libraries. Because of their own staffing constraints, or unwillingness to bear the cost of original record creation, many libraries simply wait for another library to catalog an item they have already received. On average those items are held for three to six months, with periodic searches of OCLC to determine whether another library has blinked. While this makes sense as a way of controlling costs, it does not provide optimal service for users.63

The extent to which data-creation processes facilitate timeliness in resource availability is a crucial component of value. Additionally, the failure to contribute meaningful edits to the national community causes the community at large to repeatedly pay editing costs. While creating a metric to calculate that cost may be possible, the cost is undeniably larger than zero, and any number larger than zero is no longer sustainable.


Interdependencies between Creators and Consumers of Bibliographic Data

The final element of the Task Force's charge was to identify the interdependencies between creators and consumers of bibliographic data. The Task Force believed that this was well documented for the MARC record ecosystem by the Study of the North American MARC Records Marketplace. Appendix B of this report outlines the stakeholders in the MARC record ecosystem.64 While the Task Force noted that the R2 work was scoped to MARC, it did not identify any missing components particular to the ARL community within that scope. R2’s focus on MARC records specifically and within the realm of LC production puts a different slant on cost and value than does the Task Force's charge. The Task Force focused on the value of metadata to users, while R2 focused on the value of bibliographic data to libraries as organizations (as measured by cost reduction). The Task Force felt that, if significant metadata-production changes are made at the LC, these will affect the methods the community must develop. The non-MARC bibliographic marketplace is significantly less defined, but the creators and consumers of bibliographic data in that ecosystem can be placed in the R2 context as well. Libraries are creating original non-MARC data, vendors are creating (and selling) original non-MARC data, and aggregators (commercial and noncommercial) are creating non-MARC data through services such as OAI.

The Task Force therefore believed that appendix B of the R2 report captures the stakeholder relationships for the ARL community in the MARC ecosystem and may be extended to encompass non-MARC metadata-creation partners as well.


Conclusion

In 1956, in an article titled “Cataloging Cost Factors,” Swank wrote,

For purposes of evaluation, studies of the use of the catalog would be helpful if related to costs. If we could know, for example, the utility of various added entries and could tell the difference in cost if they were or were not made, we might be able to pass reasonable judgment. But even studies of the use and cost of the catalog would leave much to the imagination, because they would still fail to inform us about the relations of the catalog to other kinds of bibliography. Even though it were demonstrated that a job needs to be done and could be done at reasonable cost in the catalog, there would still be the possibility that the same job might be done better at less cost in some other way. The most valuable single kind of study that could be made at this time, I believe, would be case studies of the experience of readers in using the entire range of a library's bibliographical services studies that could then be related to analyses of the costs of the entire range of services… . The whole area is a great maze which will never be untangled until (a) adequate studies of readers’ needs have been made, (b) the most economical bibliograpical [sic] means of satisfying those needs have been determined, and (c) the role of the catalog as one of those means has been established. This is a big order, perhaps an impossible one.65

Fifty-four years later, these unknowns persist. In the current economic climate, the community must strive to untangle the maze. The Task Force found its charge difficult, but believed that—with more research into value—developing measures of cost and value that communities could agree on is possible. The Task Force submitted its report with the hope of prompting conversation about what constitutes value for bibliographic control. The Task Force outlined seven operational definitions of value:

  • Discovery success
  • Use
  • Display
  • Ability of library bibliographic data to operate on the open web and to interoperate with vendors and suppliers
  • Ability to support FRBR user tasks
  • Throughput and timeliness
  • Ability to support the library's administration and management goals

These definitions were offered as a means to frame the problem, to encourage discussion of value from various perspectives, to scope the value questions (which seem overwhelmingly large at first pass) into segments that are more manageable to undertake, and to provide next steps for institutions interested in engaging with these crucial questions.

As representatives of the large research library community, the Task Force submitted its report to the Heads of Technical Services in Large Research Libraries Interest Group, but it also felt that ownership of these questions is shared by entities of all sizes and types within the library community, including vendors of bibliographic data. The Task Force hoped that the community would amass enough data to begin analysis. While some strategies would best be undertaken by a single, centralized entity, individual institutions could do much of this work on a smaller scale, in line with their institutional missions, and begin to pool that information in search of aggregate commonalities and differences. Although the Task Force directed the recommendations in its final report to the Heads of Technical Services in Large Research Libraries Interest Group, any institution with an interest in addressing any of those recommendations is encouraged to do so.


References and Notes
1. William Warner Bishop, Cataloging As an Asset: An Address to the New York State Library School, May 15, 1915 (Baltimore: Waverly, 1916), 4.
2. Ibid., 7.
3. Ibid., 8.
4. Ibid., 16–17.
5. Early attempts at discussing, applying, and debating cost measures include Charles A. Cutter, “Dr. Hagan's Letter on Cataloging,” Library Journal 1 (1877): 219; James L. Whitney, “On the Costs of Cataloging,” Library Journal 10 (1885): 214–16; William Warner Bishop, “Some Considerations on the Cost of Cataloging,” Library Journal 30 (1905): 10–14; Aksel G.S. Josephson, “Committee on Cost and Method of Cataloging,” Library Journal 39 (1914): 598–99; Aksel G.S. Josephson, “The Cataloging Test: Results and Outlook,” ALA Bulletin 10 (1916): 242–44; “Plan for an Investigation into and Report on the Cost of Cataloging,” Bulletin of the American Library Association 19 (1925): 278–86; Adah Patton, “The Cost of Cataloging,” Library Journal 51 (1926): 140–41; “The Cost of Cataloging,” Library Journal 32 (1927): 239; Fremont Rider, “Library Cost Accounting,” Library Quarterly 6, no. 4 (1936): 331–81; Robert A. Miller, “Cost Accounting for Libraries: Acquisition and Cataloging,” Library Quarterly 7, no. 4 (1937): 511–36; Jerrold Orne, “We Have Cut Our Cataloging Costs!” Library Journal 73 (1948): 1475–87; Felix Reichmann, “Cost of Cataloging,” Library Trends 2 (1953): 290–317.
6. George Harris, “Historic Cataloging Costs, Issues, and Trends,” Library Quarterly 59, no. 1 (1989): 1–21.
7. Ibid., 1.
8. Keyes D. Metcalf, “Attitude of the Library Administrator Toward Cataloging,” ALA Bulletin 35, no. 8 (Sept. 1941): 48.
9. R. C. Swank, “Cataloging Cost Factors,” Library Quarterly 26, no. 4 (1956): 303–17.
10. Ibid., 307.
11. Joint Steering Committee for Development of RDA, Resource Description and Access: RDA (Chicago: ALA, 2010).
12. Swank, “Cataloging Cost Factors,” 309.
13. Jeffrey A. Raffel and Robert Shishko, Systematic Analysis of University Libraries: An Application of Cost–Benefit Analysis to the MIT Libraries (Cambridge, Mass.: MIT Press, 1969).
14. Arthur McAnally, review of Systematic Analysis of University Libraries, by Jeffrey A. Raffel and Robert Shishko, Library Quarterly 40, no. 3 (1970): 355.
15. Raffel and Shishko, Systematic Analysis of University Libraries, 46.
16. Bishop, Cataloging As an Asset.
17. Amar K. Lahiri, “Toward a Bibliographic Common Cause,” Cataloging & Classification Quarterly 8, no. 1 (1987): 66.
18. Ibid.
19. Ake I. Koel, “Bibliographic Control at the Crossroads: Do We Get Our Money's Worth?” Journal of Academic Librarianship 7, no. 4 (1981): 220–22.
20. Lahiri, “Toward a Bibliographic Common Cause,” 70.
21. Ibid.
22. Ibid., 71–72.
23. Ibid., 73.
24. Ibid., 74–76.
25. Tefko Saracevic and Paul B. Kantor, “Studying the Value of Library and Information Services. Part I. Establishing a Theoretical Framework,” Journal of the American Society for Information Science 48, no. 6 (1997): 527–42; Saracevic and Kantor, “Studying the Value of Library and Information Services. Part II. Methodology and Taxonomy,” Journal of the American Society for Information Science 48, no. 6 (1997): 543–63.
26. Ibid., 527.
27. Ibid., 562.
28. Roxanne Missingham, “Libraries and Economic Value: A Review of Recent Studies,” Performance Measurement and Metrics 6, no. 3 (2005): 142–58; Susan Imholz and Jennifer Weil Arns, Worth Their Weight: An Assessment of the Evolving Field of Library Valuation (New York: Americans for Libraries Council, 2007), www.bibliotheksportal.de/fileadmin/0themen/Management/dokumente/WorthTheirWeight.pdf (accessed Sept. 7, 2010).
29. Judy Luther, University Investment in the Library: What's the Return? A Case Study at University of Illinois at Urbana-Champaign, Library Connect White Paper (San Diego: Elsevier, 2008), http://libraryconnect.elsevier.com/whitepapers/lcwp0101.pdf (accessed Sept. 1, 2010); Paula T. Kaufman, “The Library as Strategic Investment: Results of the Illinois Return on Investment Study,” Liber Quarterly 18, no. 3/4 (2008): 424–36.
30. Cornell University Library, Cornell University Library Research & Assessment Unit: Making Data Make Sense, “Library Value Calculations,” 2010, http://research.library.cornell.edu/value (accessed Sept. 1, 2010).
31. Cheryl McCain and Jay Shorten, “Cataloging Efficiency and Effectiveness,” Library Resources & Technical Services 46, no. 1 (2002): 23–31.
32. Dilys E. Morris et al., “Cataloging Staff Costs Revisited,” Library Resources & Technical Services 44, no. 2 (2000): 70–83.
33. Dilys E. Morris et al., “Where Does the Time Go? The Staff Allocations Project,” Library Administration & Management 20, no. 4 (2006): 177–91.
34. Shawne D. Miksa, “You Need My Metadata: Demonstrating the Value of Library Cataloging,” Journal of Library Metadata 8, no. 1 (2008): 23–36.
35. Ibid., 27.
36. Ibid., 34.
37. Philip Hider, “How Much Are Technical Services Worth? Using the Contingent Valuation Method to Estimate the Added Value of Collection Management and Access,” Library Resources & Technical Services 52, no. 4 (2008): 254–62.
38. Ibid., 258.
39. Michael Gorman, Our Enduring Values: Librarianship in the 21st Century (Chicago: ALA, 2000), 114.
40. Joint Steering Committee for Development of RDA, Resource Description and Access: RDA, Objective 0.4.2.2.
41. Library of Congress Working Group on the Future of Bibliographic Control (LC Working Group), On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control, Jan. 9, 2008, www.loc.gov/bibliographic-future/news/lcwg-ontherecord-jan08-final.pdf (accessed Aug. 31, 2010).
42. The charge and membership of the group are included in the Final Report of the Task Force on Cost/Value Assessment of Bibliographic Control, submitted to the Heads of Technical Services in Large Research Libraries Interest Group, June 18, 2010, http://connect.ala.org/node/106017 (accessed Sept. 1, 2010).
43. LC Working Group, On the Record, 10.
44. Library of Congress, “Series at the Library of Congress,” June 1, 2006, www.loc.gov/catdir/cpso/series.html (accessed Nov. 14, 2010).
45. Final Report of the Task Force on Cost/Value Assessment for Bibliographic Control.
46. LC Working Group, On the Record, 37–38.
47. ALCTS Task Group on the LC Working Group Report, “On the Record: Report of the LC Working Group on the Future of Bibliographic Control: Ten Actions for ALCTS,” www.ala.org/ala/mgrps/divs/alcts/ianda/bibcontrol/lcwgtop10.cfm (accessed Aug. 31, 2010).
48. Ibid.
49. Final Report of the Task Force on Cost/Value Assessment for Bibliographic Control, 5–6.
50. Karen Smith-Yoshimura et al., Implications of MARC Tag Usage on Library Metadata Practices (Dublin, Ohio: OCLC Research, 2010), www.oclc.org/research/publications/library/2010/2010-06.pdf (accessed Aug. 31, 2010).
51. Ibid., 15–16.
52. Johan Bollen, Herbert Van de Sompel, and Marko A. Rodriguez, “Towards Usage-Based Impact Metrics: First Results from the MESUR Project,” Apr. 23, 2008, http://arxiv.org/abs/0804.3791 (accessed Sept. 2, 2010).
53. Thomas V. Perneger, “Relation between Online ‘Hit Counts’ and Subsequent Citations: Prospective Study of Research Papers in the BMJ,” BMJ 329, no. 7465 (Sept. 4, 2004): 546–47.
54. Rush G. Miller, “Shaping Digital Library Content,” The Journal of Academic Librarianship 28, no. 3 (2002): 100.
55. Diane Hillmann et al., “RDA Vocabularies: Process, Outcome, Use,” D-Lib Magazine 16, no. 1/2 (Jan./Feb. 2010), http://dlib.org/dlib/january10/hillmann/01hillmann.html (accessed May 15, 2011); Open Metadata Registry, “The RDA (Resource Description and Access) Vocabularies,” http://metadataregistry.org/rdabrowse.htm (accessed Aug. 31, 2010).
56. Projects underway include Next Generation Cataloging, www.oclc.org/partnerships/material/nexgen/nextgencataloging.htm (accessed Sept. 7, 2010); OCLC Metadata Services for Publishers, http://publishers.oclc.org/en/metadata/default.htm (accessed Aug. 31, 2010); Library of Congress, ONIX Pilot, http://cip.loc.gov/onixpro.html (accessed Aug. 31, 2010); research being conducted by Myung-Ja Han at University of Illinois at Urbana-Champaign; and Carol Jean Godby, Mapping ONIX to MARC (Dublin, Ohio: OCLC, 2010), www.oclc.org/research/publications/library/2010/2010-14.pdf (accessed Sept. 2, 2010).
57. Tom Delsey, RDA Editor, “RDA, FRBR/FRAD, and Implementation Scenarios,” memorandum to Joint Steering Committee for Development of RDA, Jan. 23, 2008, www.rda-jsc.org/docs/5editor4.pdf (accessed Aug. 31, 2010).
58. Deirdre Kiorgaard, “RDA Core Elements and FRBR User Tasks,” memorandum to Joint Steering Committee for Development of RDA, 2008, www.rda-jsc.org/docs/5chair15.pdf (accessed Aug. 31, 2010).
59. “RDA Value Matrix,” Task Force on Cost/Value Assessment of Bibliographic Control, appendix to Final Report of the Task Force on Cost/Value Assessment for Bibliographic Control, June 18, 2010, http://connect.ala.org/node/106017 (accessed Feb. 5, 2011).
60. OCLC, Online Catalogs: What Users and Librarians Want (Dublin, Ohio: OCLC, 2009), www.oclc.org/reports/onlinecatalogs/fullreport.pdf (accessed Feb. 5, 2011).
61. Existing research demonstrates increased use of e-books when MARC records are integrated into the online public access catalog, but it does not consider the importance of timeliness in record creation and integration. See, for example, Jacqueline Belanger, “Cataloguing E-Books in UK Higher Education Libraries: Report of a Survey,” Program: Electronic Library & Information Systems 41, no. 3 (2007): 203–16; Dennis Dillon, “E-Books: The University of Texas Experience, Part 2,” Library Hi Tech 19, no. 4 (2001): 350–62; Susan Gibbons, “NetLibrary eBook Usage at the University of Rochester Libraries, Version 2,” Sept. 27, 2001, www.lib.rochester.edu/main/studies/analysis.pdf (accessed Aug. 31, 2010).
62. R2 Consulting, Study of the North American MARC Records Marketplace, Oct. 2009, www.loc.gov/bibliographic-future/news/MARC_Record_Marketplace_2009-10.pdf (accessed Aug. 31, 2010).
63. Ibid., 27.
64. Ibid., appendix B.
65. Swank, “Cataloging Cost Factors,” 312–13.
