Notes on Operations: Patron Driven Acquisitions: Determining the Metrics for Success

Jason C. Dewland (jasondewland@email.arizona.edu) is Assistant Librarian, Research, Instruction, and Outreach in the University of Arizona Libraries. Andrew See (andrew@email.arizona.edu) is a Library Information Analyst in the University of Arizona Libraries.

Submitted March 26, 2014; returned to author for revisions June 24, 2014; revised manuscript submitted August 21, 2014; paper accepted for publication September 18, 2014.

Patron Driven Acquisition (PDA) programs have been established in many libraries, but there is no agreed upon set of metrics to evaluate the programs’ performance. With that in mind, the University of Arizona (UA) formed the On-Demand Information Delivery (ODID) Metrics Team in January 2012 to establish metrics to evaluate their PDA program. This paper examines the results of the team’s findings and provides an extensive analysis of the purchases by Library of Congress (LC) classification, publisher, format, etc. The discussion includes an analysis of the process and challenges of measuring a PDA program based on UA’s experience. This paper also provides a list of key metrics that the authors argue every library with a PDA program should monitor.

Patron Driven Acquisition (PDA) as a collection development tool has become increasingly common for libraries, but there has not been much discussion regarding what constitutes a successful PDA program. What metrics should libraries use to evaluate how the program is working? What metrics should libraries monitor to judge the effects that a PDA program is having on a library collection?

The authors provide an overview of the project initiated by the University of Arizona (UA) libraries to determine what metrics should be used to evaluate their PDA program. This paper examines how the project team identified, crosswalked, and normalized the data that was needed to build a profile of the program. The examination includes analysis of the difficulties that were encountered due to the different e-book platforms, organization of the data, and the sometimes lax data integrity. Finally, the paper reviews initial statistics of the program’s purchases and discusses some possible next steps for the program. It is the authors’ hope that UA’s experience will be of benefit to other libraries that wish to gain a better understanding of their PDA programs.

Literature Review

PDA has its roots in the Just in Time (JIT) and Vendor Managed Inventory (VMI) movements that took place in the early to mid-1980s. JIT was developed by Japanese automotive manufacturers who could not stock large inventories due to limited natural resources such as minerals and iron.1 In the late 1970s and early 1980s, Japanese manufacturing plants that practiced JIT and VMI were opening in the United States. Perhaps buoyed by their competitors’ success, domestic automakers in North America began to develop supply chain partnerships in the form of VMI.2

The first successful retail supply chain integration was the partnership between Procter & Gamble and Wal-Mart in the late 1980s.3 The partners shared access to each other’s inventory systems and projected demand estimates, which resulted in significant decreases in the cost of goods sold and in inventory as a percentage of revenue.

The supply chain integration model found in retail was not a good fit for many libraries due to the mission of preservation of collections and the limited availability of small presses’ print runs. That began to change in the early 2000s when libraries began experimenting with PDA, but efforts were modest, with universities like Purdue adding only 10,000 volumes over a decade.4 Successive financial crises coupled with increasing calls to demonstrate an academic return on investment (ROI) and the adoption of the e-book by the consumer ushered in the modern purchase on demand model for e-books.5

Several studies have compared the cost per use (CPU) of titles purchased through PDA programs to the CPU of titles purchased using traditional selection methods. For example, Herrera found that the CPU was significantly lower for titles purchased using a PDA model.6 Both Herrera and Lannon found that the CPU of PDA titles was significantly better than that of many e-book subscription packages.7 Other studies have examined which subject areas generate the most purchases. An early test of patron-initiated purchases by OhioLINK found that half of all purchases occurred in the health sciences, business/economics, psychology, education/physical education, and engineering subject areas.8 Delivery time of materials has also been examined to determine whether slow interlibrary loan delivery times would decrease the demand for patron-initiated consortial borrowing. In her study, Curl found that slow delivery time did not significantly decrease patron satisfaction with the program.9

As libraries attempt to meet customer demand, offer more resources, and maintain relevant collections, many are using PDA programs to manage their collections. These programs have demonstrated higher circulation than traditionally acquired resources and allow resource managers to get past Trueswell’s famous 80/20 rule, which suggests that the top 20 percent of a library’s circulating material accounts for 80 percent of its overall circulation.10 At least one study has shown a higher rate of interdisciplinary selections made by users than by traditional methods.11

Project Overview

The UA Libraries implemented a unique PDA program in the summer of 2011. Known as On Demand Information Delivery (ODID), the program expands on the traditional PDA method by acting as the main driver of the collection. While popular PDA practices generally focus on the collection of electronic resources, the UA Libraries use the ODID program as the primary acquisition method for both electronic and print content. E-book content is exposed through the library catalog and discovery layer, and a purchase is automatically triggered after a set number of uses. Selection records for print material are also available in the library catalog and discovery layer, and an embedded link enables patrons to place a direct order. The resources available to the user are filtered by vendor profiles, allowing the UA Libraries to ensure that content being ordered meets the general collection development criteria. These combined measures have allowed the UA Libraries to expand the discoverability of content to our users, significantly reduce acquisitions spending, and deliver a lower cost per use for titles purchased through the program.

Since July 2011, the UA Libraries have added discovery records for more than 594,000 electronic and 46,000 print titles to the collection. With a focus on providing access to resources to a greater number of users, the ODID program defaults to e-books when possible. The UA Libraries established profiles with our vendor to exclude print records from the OPAC if an electronic version will be published within six months. Additional filters ensure that titles are current scholarly material (five years old or newer) and are not textbooks, popular fiction, or manuals. Because of these filters, the UA Libraries can be confident that PDA titles selected by our users fit within the scope of our collection. Koch and Welch’s article, which outlines a very similar PDA program at Drake University’s Cowles Library, discusses comparable filters, which have shown good results.12

The selection records in the Online Public Access Catalog (OPAC) and in the current discovery tools (WorldCat Local and Summon) greatly expand the discoverability of content. Where the UA Libraries were previously constrained by budget limitations in acquiring titles, the ODID program lets the libraries expose users to far more content while acquiring only needed materials. The program has drastically decreased acquisitions spending, since roughly 10 percent of the e-books exposed and 14 percent of the print books exposed have been purchased. The e-book acquisitions statistics parallel those of East Carolina University’s Joyner Library, where slightly less than 8 percent of e-books were purchased through their comparable PDA program.13

The e-book selection records, supplied by Ingram on its MyiLibrary platform, provide seamless access to the library’s users. Purchase triggers vary with each vendor, and once a trigger event occurs, the UA Libraries automatically purchase the title. This is perhaps the easiest and most convenient iteration of PDA, as content is immediately available to the user whether they are viewing in preview mode or the title has been purchased. The print iteration of PDA is somewhat different in that the catalog records contain embedded order links in the MARC 856 field (where the UA Libraries normally provide a link to full text in traditional electronic resource records). These links connect to the Ingram application programming interface (API) and create a rush firm order. The print book is then sent to the library, shelf ready, and placed on hold for the user.

With any strategy, the live implementation often differs from how it was originally planned and may produce unintended consequences. To address this, the UA Libraries created the ODID User Group. The group was charged with ensuring that the ODID process, from discovery to delivery, was as seamless as possible. As new challenges were discovered after launching the program, the ODID User Group reshaped processes to best meet our users’ needs and expectations.

In terms of the user experience for PDA, the current process for customers ordering print titles involves the following: when users select an order link in the catalog, they authenticate using their university ID (NetID), and the request is sent to the vendor’s API to determine whether the book is in stock. The user is brought to a landing page indicating that the order has been placed and providing an estimated delivery time (see figure 1). Because it takes roughly twenty-four hours for the UA Libraries’ catalog to update with the vendor-supplied bibliographic and order record (replacing the original order link), users could potentially click an order link for a title that has already been ordered. If this happens, they are brought to a landing page that alerts them to the duplicate order (see figure 2). When the library overlays the order records with the full bibliographic records supplied by the vendor, Technical Services staff use patron information (including name and e-mail address) embedded in a hidden MARC 961 field (which is generally used by vendors for order information) to place a hold on the book. After the hold is placed, the individual’s identifying information is deleted from the record. When the item ships from the vendor, users receive an e-mail that includes UPS tracking information (see figure 3). Providing tracking information directly from UPS allows users to get the most up-to-date information regarding when the book will arrive. When the shipment arrives, all books are checked in, which triggers the “hold-available notice” to users. The item is then placed on the hold shelf, where it is held for seven days. If the item is not checked out during that period, it is removed from the hold shelf and shelved in our regular book stacks.

Charge to the Team

The ODID Metrics Project Team was charged with coordinating the design and implementation of the data gathering processes to evaluate the ODID program’s effectiveness. It consisted of five members: a librarian and library analyst from the Research Services Team (the team that oversees collection development), a library information associate (LIA) from the Delivery, Description, and Acquisitions Team (the team that handles most of the back-end operational duties for the program), an LIA from the Library Infrastructure Team (the team that handles the physical maintenance of our collection), plus the materials budget, procurement, and licensing librarian. The project addressed two main criteria: (1) the library should have a balanced and efficient set of metrics and processes to assess the ODID program, and (2) librarians should have established processes that result in readily available data and analyses to inform the ODID decision-making process. Decision making for the ODID program required a balanced and efficient set of metrics and data collection processes that incorporated factors such as the evaluation of the quality of resources exposed to the libraries’ customers, the amount of use of those resources, the amount of use seen after the purchase of the resources, the cost effectiveness of the program, and the overall customer satisfaction with the program.

Readily available data and analyses were defined to aid in the decision-making process of when to buy versus when to borrow resources. The data would also support the assessment of the ODID program, assist with identifying areas that might require further refinement, and support the assessment of remaining approval plans. These metrics would need to be provided to the key internal user groups in a dashboard setting that would focus on key performance indicators. The indicators would need to be chosen from a large number of metrics based on a clear decision-making process that users could easily understand.

From Metrics to Key Performance Indicators

Phase I of the ODID Project provided the scope, system, and the implementation of the PDA program at the UA Libraries, but it did not develop the assessment metrics and data collection processes. The Phase I team provided a laundry list of potential metrics (more than one hundred suggestions) that could be collected. This list was neither exhaustive nor prescriptive. The key for the metrics project team was to reduce the list of possible metrics down to key performance indicators (KPI) that would define the metrics to best measure the program’s outcomes. “KPIs are financial and non-financial metrics used to help an organization define and measure progress toward organizational goals.”14 If the ODID Metrics Team was not successful in determining the proper KPIs, it could lead to diminished patron satisfaction and failure of the program.

As the first step in the process of identifying the KPIs for the ODID program, the metrics team grouped like metrics to make the list more manageable. For example, the metrics regarding expenditures by subject, publisher, LC Class, and date published were combined into one metric since they require the same source for the data. Combining like statistics and removing items that were deemed outside of the scope of the project reduced the list to twenty metrics.

The list of metrics was consolidated into five main categories to provide additional clarity and structure. The categories were financial metrics, patron metrics, performance metrics, usage metrics, and resource metrics. Financial metrics were analyzed by breaking down the costs associated with the program by such factors as cost per use, cost per use by LC subject classification, etc. (see the appendix for the final list of KPIs). Patron metrics focused on customer satisfaction and differences in customer behavior by discipline and patron type. Performance metrics examined how suppliers met the agreed upon performance standards, average delivery time, and out-of-stock metrics. Usage metrics were defined as those that measured usage, such as circulation and in-house use of print books and e-books. Resource metrics aimed at providing the library with an understanding of the characteristics of the selection pool and the relationship of purchases made to the selection pool.

When the metrics were divided into these categories, the project team ranked and evaluated the metrics based on their importance to understanding the program and the difficulty of producing the statistics. This focused the team’s efforts on the statistics that would provide the biggest impact to the library with the least amount of effort. Both the importance of the metric and the difficulty of producing the statistics were assigned a one to three ranking. These two numbers were then multiplied together, which resulted in a blended rank for each metric from one to nine, with one being the most important.

Metrics with a ranking of one were seen as KPIs and were critical in determining the program’s success. Metrics with a ranking of two were viewed as primarily descriptive and could be used to determine the program’s success. Metrics with a ranking of three did not provide valuable information to the decision makers but may have limited value to understanding the project. These rankings are provided in the impact column in the appendix.

The effort needed to produce each metric was also analyzed. Effort was divided into categories ranked from one to three, with one assigned to metrics where accessing the data was easy or was already being done. A two was assigned to metrics that required a new process to be created. A ranking of three was assigned to metrics for which the data did not exist or for which it would be extremely difficult to collect, crosswalk, and normalize the data into a usable format. These rankings are shown in the cost effort column in the appendix.

The resulting ranked list of the metrics determined the team’s workflow and priorities. The metrics with the lowest scores became the top priority for the team, while the metrics with a ranking of nine were not pursued due to lack of relevance and the time required to collect and analyze the responses. The combined rank of the metrics is available in the rank column in the appendix.
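To make the scoring concrete, the short sketch below computes blended ranks for a few metrics using the impact and cost effort values listed in the appendix. The metric labels are abbreviated, and the code is only an illustration of the team’s manual scoring process, not a tool the team actually used.

```python
# Blended rank = impact (1-3, 1 = most important) x cost effort (1-3, 1 = easiest),
# yielding a score from 1 (top priority) to 9 (not pursued).
# Example impact/effort pairs are taken from the appendix priority matrix.
metrics = {
    "Cost per use": (1, 1),
    "Expenditures by subject, publisher, LC class, date": (2, 1),
    "Print PDA customer satisfaction survey": (2, 3),
    "Download/print limits vs. industry benchmarks": (3, 3),
}

for name, (impact, effort) in sorted(metrics.items(), key=lambda kv: kv[1][0] * kv[1][1]):
    print(f"blended rank {impact * effort}: {name}")
```

Run in priority order, this prints ranks of 1, 2, 6, and 9 for the four sample metrics, matching the rank column in the appendix.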

The final list of metrics (see the appendix) provided a total of ten metrics each for electronic and print format. Customer satisfaction surveys were not provided since the work would have required changes in the work processes by other groups in the library. Circulation reports by LC classification for print materials prior to the implementation of the PDA program were determined to be outside of scope of the project team’s charge.

Challenges

After defining the metrics to measure the overall success of the ODID program, the group was tasked with developing data collection workflows for key stakeholders, which included the Research Services Team, the Delivery, Description, and Acquisitions Team, and the Information and Access Oversight Management Group. Part of this deliverable was to design a Microsoft Access database that staff could easily populate with data collected both quarterly and annually. The goal was to create an easy-to-use data analysis tool for resource managers and administrative personnel.

The main sources of data were the integrated library system (Innovative Interfaces’ Millennium), Ingram’s OASIS for print resources, and Ingram’s MyiLibrary platform for e-books. Other sources of data included information pulled from the library’s interlibrary loan system (OCLC’s ILLiad) and qualitative data that would be collected from ODID users with surveys delivered at the point of order.

The most challenging aspect of implementing the data collection process was integrating data from the three previously noted sources into a single database. Data collection is a universal challenge for librarians. In a recent study, Fleming-May and Grogg indicated that manual usage data collection is the biggest challenge for librarians at Association of Research Libraries (ARL) institutions.15 It became apparent early in the process that each system generated data in formats that did not integrate well with data from other systems. For example, a list of ISBNs from Ingram contained dashes in the thirteen-digit numbers, while the ISBN format in the library’s ILS lacked dashes. One database had ten-digit ISBNs associated with specific titles, and another contained thirteen-digit ISBNs. While these particular instances were not too difficult to remedy, they were indicative of the data normalization challenges the project required.

The data being input into Microsoft Access required significant normalization (removing spaces after numbers, eliminating all punctuation, and removing non-ISBN elements from the MARC 020 field). It was a difficult task to retrieve the data and clean it up to ensure that information from different systems could be combined to create reports. Microsoft Excel was used primarily to normalize the data, which correlates with the findings of Wical and Kishel, who, in a recent statewide study conducted in Wisconsin, found that 66 percent of academic librarians used Excel for collection management data.16 Most of the data normalization was accomplished using the painstaking find and replace method, for example using CTRL + F to locate all instances of semicolons and delete them. This process was repeated several times to catch all data incongruences. As an experiment in time-saving measures, OpenRefine (http://openrefine.org), an open source tool for data normalization, was used to attempt some normalization. After experimentation, the team discovered that the data sets were not consistent enough for OpenRefine to be effective. The tool was also used to try to normalize publishers’ names, with limited success.
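The same cleanup that the team performed with find and replace in Excel can also be expressed as a short script. The sketch below is illustrative only; the function name and field-handling rules are the authors’ assumptions rather than part of the UA workflow. It strips dashes, spaces, and trailing non-ISBN elements from a MARC 020 value and converts ten-digit ISBNs to thirteen digits so that records from different systems can be matched on a single key.

```python
import re

def normalize_isbn(raw_020):
    """Return a thirteen-digit ISBN string, or None if no usable ISBN is found.
    A sketch only; real MARC 020 fields would need locally tuned cleanup rules."""
    if not raw_020.strip():
        return None
    # Take the first token (drops trailing notes such as "(pbk.)" or a stray year),
    # then keep only digits and a possible ISBN-10 check digit of X.
    token = raw_020.split()[0]
    cleaned = re.sub(r"[^0-9Xx]", "", token).upper()
    if len(cleaned) == 13:
        return cleaned
    if len(cleaned) == 10:
        # Convert ISBN-10 to ISBN-13: prefix 978, drop the old check digit,
        # and recompute the check digit with alternating 1/3 weights.
        core = "978" + cleaned[:9]
        total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(core))
        return core + str((10 - total % 10) % 10)
    return None  # not a usable ISBN; flag the record for manual review

# Example: both forms resolve to the same match key.
print(normalize_isbn("978-0-306-40615-7 (pbk.)"))  # 9780306406157
print(normalize_isbn("0306406152"))                # 9780306406157
```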

The team encountered another substantial problem: inconsistent metadata and a lack of authority control in bibliographic records. There were several instances, even within the same integrated library system, of records that used different iterations of authors’ names, publishers, and other descriptive metadata elements. Normalizing inconsistent descriptive metadata became an enormous undertaking as the find and replace strategy used to remove spaces after numbers and dashes within ISBNs evolved into a much more complicated task. Detecting the different iterations of the descriptive metadata for Taylor & Francis, for example, led to the discovery that the publisher was also described in bibliographic records as “Taylor and Francis” and “Taylor/Francis,” among other variations. There were also instances where descriptive metadata was combined with other data (the publication year appended to the ISBN in the MARC field, for example). The challenges encountered when drafting the data collection processes for the metrics emphasize the need for proper authority control and metadata integrity when handling bibliographic data.
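A sketch of the kind of publisher-name cleanup the team attempted with OpenRefine is shown below. The variant map is illustrative only (it echoes the Taylor & Francis example above) and is not an actual authority file or part of the UA workflow.

```python
import re

# Illustrative map from a normalized key to a preferred publisher form.
PREFERRED_FORM = {
    "taylor francis": "Taylor & Francis",
}

def normalize_publisher(name):
    # Lowercase, replace "&", "/", punctuation, and the word "and" with spaces,
    # then collapse whitespace so different iterations converge on one key.
    key = re.sub(r"\band\b|[&/.,;:]", " ", name.lower())
    key = re.sub(r"\s+", " ", key).strip()
    return PREFERRED_FORM.get(key, name)  # fall back to the original string

for variant in ("Taylor & Francis", "Taylor and Francis", "Taylor/Francis"):
    print(normalize_publisher(variant))  # each prints "Taylor & Francis"
```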

Part of the process for using our metrics was the multidepartmental retrieval of data representing the user experience and user behavior (e.g., turnaround time and its impact on use, correlations between ODID and ILL use, and the overall user experience). These data collection processes, as with the collection of any qualitative information, became challenging as the correlation between two different services, ODID and ILL, was explored. Several studies have used ILL management software to measure PDA success, since the two library services have similarities in meeting patron demand for resources. The University of Mississippi recently conducted a study on PDA titles using the same data management system that UA uses for ILL processing (ILLiad). Their study indicated a positive correlation between PDA purchases and titles that had initially been requested through ILL.17 The ODID user group discovered that collecting corroborating evidence on user behavior could be challenging. For example, ILL use is declining as the use of ODID titles is increasing, but demonstrating a correlation between the two is difficult. ILL staff cancel requests for books that can be obtained through the ODID program with a special cancellation method that enables tracking the frequency of ILL requests for ODID titles. The process does not, however, enable us to determine whether the user then proceeds to request the book via the ODID process.

The delivery time of ODID titles can be monitored, as our books are checked in upon delivery and immediately placed on the hold shelf. Determining whether turnaround time affects the probability of the user actually checking out the book is more challenging. These data would require directly surveying users, and to date a survey process to gather qualitative feedback from our users has not been initiated. This task was assigned to the ODID user group, which is developing a best practices model to launch the survey.

The charge of the ODID metrics group was to draft metrics that would be used to measure the program’s overall success, and our findings indicate that the program has been successful. The authors hope that as more libraries adopt similar PDA programs, this particular collection management strategy will become the norm for libraries and future ILSs will yield more streamlined approaches to gathering metrics.

Analysis of Results

The results are analyzed in two sections by format type: print and electronic. Due to the differences inherent in the two formats and how their usage is calculated, only comparisons of relative rankings (for example, what subjects were more heavily used in print versus electronic format) will be analyzed, and no attempt will be made to compare raw usage data between the formats. All the results provided below cover the period from the beginning of the program in July 2011 through December 2013.

Print Format

Books ordered via the print PDA program had a significant amount of use when compared to traditionally acquired books at UA Libraries. Prior to the implementation of the program, around 60 percent of the print books circulated at least once. Since the program started in late 2011, 6,744 print books were purchased in 155 different LC subject categories for a total of $324,617, an average of $48.13 per book. The total circulation plus renewals of the 6,744 print books during this time period was 17,798, or 2.6 uses per book at a cost of $18.24 per use. The total usage by LC subcategory shows the heaviest use of the print PDA in the social sciences and the humanities, with eighteen of the top twenty LC subcategories coming from those areas. See table 1 for details.
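For readers who want to reproduce these averages, the arithmetic is simply total cost divided by titles, total uses divided by titles, and total cost divided by uses. The short check below uses the print PDA figures reported above; it is a convenience sketch, not part of the team’s actual reporting workflow.

```python
# Print PDA figures reported in the text (July 2011 through December 2013).
titles = 6744
total_cost = 324617.0   # dollars
total_uses = 17798      # circulations plus renewals

print(f"average price per book: ${total_cost / titles:.2f}")     # $48.13
print(f"uses per book:          {total_uses / titles:.1f}")      # 2.6
print(f"cost per use:           ${total_cost / total_uses:.2f}") # $18.24
```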

Electronic Format

Electronic usage was heaviest in the language and literature and sciences categories, and business, engineering, and history titles also had high usage. Based on feedback UA Libraries have received from history faculty, history and language disciplines are not typical candidates for high PDA usage, yet these disciplines consistently show high use. Since the beginning of the program, the library has purchased 4,952 titles at a cost of $710,214, which resulted in 1,076,717 total Counting Online Usage of NeTworked Electronic Resources (COUNTER) section requests, for an average cost of $0.66 per section request.

Language and literature titles were the most heavily used part of the collection and accounted for 21.2 percent of the collection’s usage. Science accounted for 16.4 percent, and business rounded out the top three with 9.6 percent of the total usage. See figure 5 for the most heavily used e-books by LC subclass.

Cost per use varied by LC class from $0.11 per use for anthropology titles to $5.28 per use for geography titles. The greatest sum was spent on language and literature ($142,970), sciences ($82,683), business ($64,553), and engineering ($63,233). Each category generated a cost per use of $0.63, $0.47, $0.63, and $0.75 respectively.

Next Steps

Since the ODID program began, the UA Libraries have seen a considerable ROI, particularly for PDA e-book titles. Due to ongoing, sustained use and a substantially reduced cost per use for PDA e-books, the program has been expanded to four e-book platforms with the addition of YBP as an additional ODID e-book vendor. YBP provides both the EBL and Ebrary platforms, and the UA Libraries have recently added EBSCO content through Ingram. In the near future, ProQuest plans to merge EBL and Ebrary into a single integrated platform. Methods to generate and analyze usage statistics and purchase data for these additions will later be incorporated into the ODID metrics program.

The National Information Standards Organization (NISO) has drafted best practices for the Demand-Driven Acquisition (DDA) of monographs, which should lay the framework for libraries to adopt similar data and collection management strategies. The best practices draft was available for public comment in spring 2014.18

The UA Libraries are currently implementing a new discovery tool, Summon, and whether records will be added directly in Summon or will continue to be added to the OPAC is a major question. The benefit of adding them into Summon is that there will be time/cost savings by relieving technical services staff from having to manage these records in the local system.

The libraries are also in the process of identifying a next generation library management system (NGLMS). A large-scale cleanup of bibliographic records is underway. As both the ODID Metrics Team and another working group created to implement the NGLMS have discovered, new guidelines are needed so that local records are flexible, scalable, and can be exported to work with various data analysis tools.

Currently, there is no program in place to remove old and unordered ODID records from the catalog. The program is still relatively new and this has not posed a major problem to date. Some of the print order records may become problematic as books go out of print or newer editions are printed. The UA Libraries will investigate strategies to address this issue, which could negatively affect the user experience. Collection managers have not yet examined unordered records for print monographs to determine if titles should be added to the collection regardless of patron demand. Though our primary goal is to support UA’s research needs, another function of libraries is to record and archive the world’s scholarly record. In terms of maintaining a collection that meets this secondary need, some resources may need to be purchased beyond the immediate demand of our users.

Lastly, information resource managers and library administrators will examine current data collection practices and will discuss the viability of sustaining such a large data gathering process. As with many libraries, staff time is stretched to capacity, and allocating time to collect data that does not directly support critical strategic priorities is not sustainable. They will determine which data collection can be scaled back, ensuring that only actionable data are captured.

Conclusion: Implications for Other Libraries

As PDA programs have been widely implemented across all types of libraries, the need to measure the effectiveness of such programs has never been more important. Measuring the effectiveness of our collection development programs is critical to good information resource management. Since libraries have operated PDA programs for well over a decade, there is an abundance of data, which should allow libraries to gain insight into who their customers are and what their buying habits are. In academic libraries, we can use demographic data acquired at the point of purchase to determine how different academic disciplines are shaping our collections. This information provides an open line of communication with academic departments and administrators whose faculty and students are heavy users of PDA titles. As a result, academic libraries have the potential to gain some leverage in budget discussions and larger strategic planning initiatives where the libraries’ importance to academia is in question.

A good metrics program will allow libraries to better oversee the authority control of their data. As was discovered in our study, a lack of authority control over bibliographic data can be a serious roadblock to a library’s ability to measure the success of its program. Of course, the importance of maintaining authority control over bibliographic data is not a new concept; the authors are simply providing another example of why it is so critical in the information management industry.

Lastly, a well thought out and carefully crafted metrics program will go a long way toward allowing libraries to ultimately provide better customer service. As the UA Libraries discovered, promises made to our customers with regard to turnaround time were not being honored. Though a small fraction of customers may report problems with a particular service, the vast majority of customers do not, and without some way to measure how well delivery systems are working, libraries are working in the dark. The UA Libraries were able to successfully use data collected through their metrics to provide substantiated proof to the vendor and, as a result, improve the service for customers.

References

  1. T.C. Edwin Cheng and Susan Podolsky, Just-in-Time Manufacturing: An Introduction (London: Chapman & Hall, 1993); Kee-hung Lai and T. C. Edwin Cheng, Just-in-Time Logistics (Farnham, England: Gower Publishing Limited, 2009).
  2. Ronald K. Ireland with Colleen Crum, Supply Chain Collaboration: How to Implement CPFR and Other Best Collaborative Practices (Boca Raton, FL: J. Ross Publishing, 2005).
  3. Ireland and Crum, Supply Chain Collaboration.
  4. Kristine J. Anderson et al., “Buy, Don’t Borrow: Bibliographers’ Analysis of Academic Library Collection Development through Interlibrary Loan Requests,” Collection Management 27, no. 3–4 (2002), 1–11, http://dx.doi.org/10.1300/j105v27n03_01.
  5. Carol Tenopir, “New Directions for Collections,” Library Journal 135, no. 10 (2010): 24.
  6. Gail Herrera, “Deliver the eBooks Your Patrons and Selectors Both Want! PDA Program at the University of Mississippi,” Serials Librarian 63, no. 2 (2012): 178–86.
  7. Amber Lannon and Dawn McKinnon, “Business E-books: What Can Be Learned from Vendor Supplied Statistics?” Journal of Business & Finance Librarianship 18, no. 2 (2013): 89–99.
  8. Dracine Hodges, Cyndi Preston, and Marsha J. Hamilton, “Patron-Initiated Collection Development: Progress of a Paradigm Shift,” Collection Management 35, no. 3–4 (2010): 208–21.
  9. Margo Warner Curl, “Delivery Time of Materials and Patron Satisfaction with Patron-Initiated Borrowing in a Library Consortium,” Collection Management 29, no. 2 (2005), 15–31.
  10. Judith Nixon and E. Stewart Saunders, “A Study of Circulation Statistics of Books on Demand: A Decade of Patron-Driven Collection Development, Part 3,” Collection Management 35, no. 3–4 (2010): 151–61; Richard L. Trueswell, “Some Behavioral Patterns of Library Users: The 80/20 Rule,” Wilson Library Bulletin 43, no. 5 (1969): 458–61.
  11. Anderson et al., “Buy, Don’t Borrow.”
  12. Teri Koch and Andrew Welch, “Patron-Driven Acquisitions: Integrating Print Books With eBooks,” Against the Grain 24, no. 6 (2013): 26, 28.
  13. William Joseph Thomas, Heather Racine, and Daniel Shouse, “eBooks and Efficiencies in Acquisitions Expenditures and Workflows,” Against the Grain 25, no. 2 (2013): 14, 16, 18.
  14. Nils H. Rasmussen, Manish Bansal, and Claire Y. Chen, Business Dashboards: A Visual Catalog for Design and Deployment (Hoboken, NJ: Wiley, 2009).
  15. Rachel Anne Fleming-May and Jill E. Grogg, The Concept of Electronic Resource Usage and Libraries (Chicago: ALA TechSource, 2010).
  16. Stephanie H. Wical and Hans F. Kishel, “Strategic Collection Management through Statistical Analysis,” Serials Librarian 64, no. 1–4 (2013): 171–87.
  17. Gail Herrera and Judy Greenwood, “Patron-Initiated Purchasing: Evaluating Criteria and Workflows,” Journal of Interlibrary Loan, Document Delivery & Electronic Reserve 2, no. 1–2 (2011): 9–24, http://dx.doi.org/10.1080/1072303X.2011.544602.
  18. National Information Standards Organization, “Demand-Driven Acquisition (DDA) of Monographs,” accessed September 18, 2014, www.niso.org/workrooms/dda.

Appendix. Priority Matrix

Metric: Costs of items purchased/cost per use, $ expended per title exposed, overall level of purchases by subject, by publisher, by LC, by published date
Group: Financial Metrics
Definition: Actual invoice amount for each title obtained from the supplier’s invoices.
Purpose: To track costs of the ODID program
Impact: 1 | Cost Effort: 1 | Rank: 1

Metric: Cost per use
Group: Financial Metrics
Definition: Actual invoice amount for each title obtained from the supplier’s invoices divided by the use for each item as supplied in use reports (COUNTER reports preferred). Use for print items will be derived from III circulation reports. (a) Cost/use data should include a time factor such as total, per year, and after year 1; (b) for print materials, include both external circulation and in-house use fields.
Purpose: To track costs of the ODID program
Impact: 1 | Cost Effort: 1 | Rank: 1

Metric: Expenditures by subject, publisher, LC class, date published
Group: Financial Metrics
Definition: Actual invoice amount for each title obtained from the supplier’s invoices, aggregated by each area (subject, publisher, LC class, date). Obtain from invoices or reports.
Purpose: To track costs of the ODID program
Impact: 2 | Cost Effort: 1 | Rank: 2

Metric: Cost analysis; correlation of the ODID PDA program and the reduced cost of ILL borrowing. Is this saving us money?
Group: Financial Metrics
Definition: Total costs of the ODID books purchased compared to total costs of books purchased in previous models (approval, firm order, etc.). Track ILL levels and see if there are decreases in volume/costs that might be attributable to items being supplied through the ODID program. Track cancellations duplicated in the ODID program.
Purpose: To track costs of the ODID program
Impact: 2 | Cost Effort: 2 | Rank: 4

Metric: What savings did the institution experience? Savings on book costs? Amount ($) of approval plan reduced vs. PD costs
Group: Financial Metrics
Definition: Total costs of the ODID books purchased compared to total costs of books purchased in previous models (approval, firm order, etc.).
Purpose: To track costs of the ODID program
Impact: 2 | Cost Effort: 2 | Rank: 4

Metric: Review LibQual and see if there are questions that need to be added (phase 2?)
Group: Patron Metrics
Definition: We want to make sure that we are capturing the user experience of the program at different times: first when users make the request, and then at some point in the future after they have had time to reflect on their overall experience. Use existing LibQual questions.
Purpose: Customer satisfaction
Impact: 3 | Cost Effort: 1 | Rank: 3

Metric: Print PDA: what is the level of customer satisfaction?
Group: Patron Metrics
Definition: A short multiple choice survey to gauge satisfaction with the transaction, with brief demographic questions.
Purpose: Customer satisfaction
Impact: 2 | Cost Effort: 3 | Rank: 6

Metric: Was there a difference in patron activity or library response based on patron type (faculty, grad, undergrad, dept. affiliation, etc.)?
Group: Patron Metrics
Definition: Would like to develop a profile of searching behavior and usage by user type and field of study. May be able to set up a look that would help guide users to their preferred research based on their demographics.
Purpose: Gain a better understanding of how our patrons are using our products
Impact: 2 | Cost Effort: 3 | Rank: 6

Metric: Look for changes in trends in user behavior
Group: Patron Metrics
Definition: Want to know how the program is affecting other areas of the library. The program should reduce the need for ILL and local document delivery and may increase or decrease holds and circulation. Monitor heavy selection activity of materials that have not circulated.
Purpose: Use financial data, circulation data, and patron usage to determine if the program has contributed to increased relevance and higher use of the collection, and its effect on other library services.
Impact: 2 | Cost Effort: 3 | Rank: 6

Metric: Track print PDA delivery to be sure fulfillment and speed of delivery meet the established quality standard. Materials will be processed and delivered to the UAL in an average of no more than 5 (desired) days.
Group: Performance Metrics
Definition: Will determine whether the vendor is meeting its quality standard of delivering resources within 3–7 business days.
Purpose: Determine if the vendor is meeting the terms of its license agreement.
Impact: 1 | Cost Effort: 2 | Rank: 2

Metric: Track the amount of time between UA delivery (i.e., when the item arrives) and availability to the customer.
Group: Performance Metrics
Definition: Will determine if we are making resources available to the customer in a timely manner.
Purpose: Information resources should be made available to patrons in a timely manner.
Impact: 1 | Cost Effort: 2 | Rank: 2

Metric: Ability to download or print a reasonable amount of content for personal use, consistent with best industry benchmarks for such services.
Group: Performance Metrics
Definition: Electronic resources should allow a reasonable amount of downloadable content based on industry best practices.
Purpose: Resources should be customer centered; as a result, aggregators should allow for the most flexibility in their DRM.
Impact: 3 | Cost Effort: 3 | Rank: 9

Metric: What type of material was requested: by subject, by publisher, by LC, by published date
Group: Resource Metrics
Definition: This metric will determine what types of On Demand materials (print or electronic format) and firm orders are being requested by patrons by subject, publisher, publishing date, etc.
Purpose: The library will be able to “sort” requested materials by subject, by publisher, by published date, etc. to determine what impact these items are having on the overall collection or to determine scholarly trends.
Impact: 2 | Cost Effort: 1 | Rank: 2

Metric: % of items purchased that were print vs. electronic vs. titles exposed
Group: Resource Metrics
Definition: This metric will allow the library to determine what percentage of print vs. electronic items are being purchased from discoverable OD records.
Purpose: To determine what percentage of discoverable items patrons request (or which format patrons prefer) in print vs. electronic format.
Impact: 3 | Cost Effort: 1 | Rank: 3

Metric: Measure the time between placement of orders and the original ingest date for the selection record, and identify records that have never been requested.
Group: Resource Metrics
Definition: Will compare the order date and the ingest date to determine if date of publication is a significant factor in whether a book is ordered or not ordered.
Purpose: To determine how long we should keep the record in the catalog.
Impact: 2 | Cost Effort: 2 | Rank: 4

Metric: % of items selected relative to available titles, by subject, by publisher, by LC, by published date (both print and e-book PDA). What % of added PDA titles were selected by customers?
Group: Resource Metrics
Definition: This metric will help the library determine the percentage of discoverable items that were then purchased.
Purpose: To determine if there were blind spots in the PDA process that prevent patrons from requesting items in specific subject areas, by publishers, by publishing dates, etc., and to determine how big an impact this has on the library’s collection.
Impact: 2 | Cost Effort: 2 | Rank: 4

Metric: Circulation/use of all items (approval, print PDA, e-PDA), especially subsequent use after purchase
Group: Usage Metrics
Definition: The usage data (circulation statistics) of approval items and of both print PDA and e-PDA, including any subsequent circulation after initial purchase.
Purpose: This data will show what is circulating from the approvals and will also help determine the effectiveness of the approval plan.
Impact: 1 | Cost Effort: 2 | Rank: 2

Metric: Comparing collection circulation statistics between now, a year ago, and 5 years ago by LC classification
Group: Usage Metrics
Definition: Set up baseline circulation data and then do comparison analysis in a year and in 5 years using the LC classification.
Purpose: To show usage and usage patterns over time.
Impact: 2 | Cost Effort: 3 | Rank: 6

Metric: Is there any correlation between the time ordered and the time that the item is available to the patron and usage for print PDA?
Group: Usage Metrics
Definition: To analyze the potential correlation between lead time of print PDA and whether the length of the lead time will prohibit usage. Examine any correlation between books with holds placed on them and books without holds.
Purpose: To identify if the length of lead time for print PDA has a negative impact on actual usage.
Impact: 2 | Cost Effort: 3 | Rank: 6

Figure 1. Order Acknowledgment Page

Figure 2. Acknowledgment of Duplicate Order Page

Figure 3. Order Tracking Information Page

Table 1. Print PDA Purchases

LC Category | Class | Number of Titles | Sum of Cost | Use | Average Price of Book | Cost Per Use | Use Per Book
History of the Americas | E | 87 | $3,113.72 | 66 | $35.79 | $47.18 | 0.76
Theory and practice of education | LB | 91 | $3,845.97 | 57 | $42.26 | $67.47 | 0.63
Literature (General) | PN | 78 | $2,953.96 | 51 | $37.87 | $57.92 | 0.65
Industries. Land use. Labor | HD | 69 | $3,111.68 | 50 | $45.10 | $62.23 | 0.72
Philology. Linguistics | P | 40 | $2,594.23 | 49 | $64.86 | $52.94 | 1.23
American literature | PS | 54 | $1,861.99 | 42 | $34.48 | $44.33 | 0.78
Mathematics | QA | 48 | $4,278.18 | 41 | $89.13 | $104.35 | 0.85
The Family. Marriage. Women | HQ | 56 | $2,818.04 | 40 | $50.32 | $70.45 | 0.71
History of the Americas | F | 38 | $1,290.89 | 37 | $33.97 | $34.89 | 0.97
Sociology (General) | HM | 34 | $1,926.55 | 34 | $56.66 | $56.66 | 1.00

Table 2. E-Book PDA Purchases

LC Category | Class | No. of Titles | Sum of Cost | Use | Cost per Use
Mathematics Computer Science | QA | 202 | $23,293.95 | 66,063 | $0.35
Linguistics | P | 189 | $29,002.48 | 55,515 | $0.52
Economic History | HD | 163 | $15,675.68 | 36,350 | $0.43
Physics | QC | 85 | $13,419.08 | 31,334 | $0.43
Science | Q | 45 | $3,989.99 | 19,354 | $0.21
Literature | PN | 175 | $16,083.70 | 16,369 | $0.98
Asia | DS | 94 | $9,312.55 | 15,487 | $0.60
Electrical Engineering | TK | 67 | $8,392.64 | 14,210 | $0.59
Civil Engineering | TA | 47 | $7,155.34 | 12,548 | $0.57
US History | E | 83 | $6,950.02 | 11,390 | $0.61
