Notes on Operations: Combining Citation Studies and Usage Statistics to Build a Stronger Collection

Stephanie H. Wical (wicalsh@uwec.edu) is Assistant Professor, Periodicals and Electronic Resources Librarian in the McIntyre Library, University of Wisconsin-Eau Claire. R. Todd Vandenbark (vandernt@uwec.edu) is Assistant Professor, Research and Instruction Librarian in the McIntyre Library, University of Wisconsin-Eau Claire.

Submitted October 23, 2013; returned to authors for revision February 10, 2014; revisions submitted April 11, 2014; returned to authors for additional revisions August 11, 2014; revised manuscript submitted August 29, 2014; accepted for publication September 23, 2014.

The authors wish to thank their valued colleague, Associate Professor Hans F. Kishel, for his assistance with data collection and organization.

Citation studies and analyses of usage statistics are two approaches academic librarians take to determine if their journal collections support the needs of research faculty. Librarians at a small, regional liberal arts university compiled a list of faculty journal publications covering a thirteen-year span from four academic departments—nursing, chemistry, biology, and mathematics—and, from these publications, generated a list of the journals that were cited. As expected, this university’s faculty members publish in many of the same journals that they cite. However, faculty members cite a wide range of sources. Wiley journal usage statistics from 2011 and 2012 were examined to determine whether the numbers of PDF downloads for the Wiley journals that faculty published in and cited were higher than the average number of PDF downloads across the Wiley package. Combining an analysis of usage statistics with citation analysis provides a more strategic way to look at a Big Deal package. This information is of interest to the departments represented and other stakeholders, and the implications for collection development are addressed.

Academic librarians managing electronic resources have used different approaches to evaluate journal collections to better serve the research needs of their parent institutions. Citation studies and analyses of usage statistics are two different approaches for assessing the value of journal collections and are well-established in the professional literature. Individually, each method cannot fully address questions about the potential value of the collection. Looking at citations in conjunction with usage statistics may provide better insight into how well a library supports faculty research interests. This study represents an effort to combine the two approaches in a meaningful way in an attempt to answer the following questions:

  • In what journals are faculty publishing?
  • What journals are faculty citing?
  • Does the library subscribe to these journals?
  • What level of access to each journal is currently provided?

The current study is a “proof of concept” that can be applied to large journal collections.

Literature Review

Citation studies provide a way for researchers to observe trends and patterns in research output. Garfield is known for his pioneering work in early citation studies. He first mentioned the idea of an impact factor in “Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas” in Science in 1955.1 An experimental Genetics Citation Index was published and this evolved into the Science Citation Index in 1961.2 Since this time, many studies have examined what could be considered core journal collections to discern what researchers need to add to the growing body of knowledge in their fields. Echezona, Okafor, and Ukwoma found that the journal cited most often by library and information science postgraduates at the University of Nigeria Nsukka was College & Research Libraries. They attributed this to the fact that College & Research Libraries was available in the university’s library, highlighting a critical shortcoming of citation analysis: journal use is influenced by availability. A lack of research in some subjects may be due, in part, to a lack of resources in those areas. To be beneficial, citation studies should be combined with other methods.3

In contrast, reliable electronic journal usage statistics have only been available since Project COUNTER (Counting Online Usage of NeTworked Electronic Resources) implemented its original goals in 2003. Project COUNTER (www.projectcounter.org) is an international initiative of librarians, publishers, and aggregators to bring consistency and reliability to the measures used to evaluate library electronic resources. Usage statistics have undergone further refinements since the first release of the COUNTER Code of Practice in 2002. Prior to COUNTER, usage statistics did not allow for easy comparisons across platforms, and some librarians today would contend that cross-platform comparisons are still not advisable because of interface issues that inflate counts for some platforms.4 A method that combines citation studies and usage analysis is needed to provide additional information that could help inform subscription decisions.

For over two decades, academic libraries have been in transition from print journals to electronic access. As Xu observed, the tools and methods developed in the twentieth century for collection analysis were not created with modern serials collections in mind, whether those collections are evaluated in subject-focused subsets or as a whole, or across formats such as serials and monographs.5 In studying the relationship between print and electronic journal (e-journal) use and e-journal discovery, McDonald found that both print use and e-journal use were significant predictors of local citation rates, with print use predicting local citation rates with a two-year delay.6 De Groote et al. found a high correlation between vendor data and link-resolver data, demonstrating that vendor usage statistics provide a statistically valid substitute for this local access measure.7 Additionally, usage statistics from either source can predict local citation rates for journals. Counting full-text downloads may seem a reliable measure of use, but publishers are still working to perfect how this activity is measured. Moreover, publishers have economic incentives to “over-report” such statistics.8

Studying usage based on link server reports is a twenty-first-century approach. Bollen and Van de Sompel examined how usage patterns obtained from the link resolver SFX (www.exlibrisgroup.com/category/SFXOverview) at nine major institutions in the California State University system in 2004 correlated with the Institute for Scientific Information Impact Factor (ISI IF), obtained from the 2004 Journal Citation Reports (JCR). They studied full-text download requests for articles from 2002 and 2003 and observed a “negative correlation between the CSU UIF [California State University Usage Impact Factor] and the ISI IF” over a period of eight years, ranging from -0.159 to -0.207.9 This finding contradicts previous studies that showed a positive correlation between the ISI IF and either journal downloads and citations or article downloads and citations.10 Their findings suggest that librarians might attach too much importance to something like impact factor when local needs dictate otherwise, which is consistent with Duy and Vaughan’s findings.11 In their three-month study of the usage of 3,465 journals indexed by MEDLINE, Gallagher et al. found that usage data captured by link servers represents less than 10 percent of e-journal usage when compared to vendor usage data.12 Consequently, while there is a correlation between usage derived from link resolvers and that provided by vendors, link resolver statistics may not provide enough information for local decisions.

Because of shrinking budgets and the ongoing task of managing collections, evaluation of electronic resources could not wait for the tools to mature, and librarians are applying different methodologies. To assist in critical decision making with regard to journal subscriptions, some libraries are developing their own charts or checklists of data.13 As part of a cleanup project designed to eliminate encumbrances that were never expended, Smulewitz broke down large journal packages by title, applying a fund code and subject identifier to each. Adding usage statistics to calculate cost per use, Smulewitz was able to look at cost and use per title across packages and years, allowing for better-informed decisions on renewals and cancellations.14 Usage statistics are often consulted in reaction to a crisis, such as dealing with a budget cut or shortfall.15 To evaluate large journal collection package purchases as a whole and by title, Blecic and colleagues created metrics that combined Successful Full-Text Article Request (SFTAR) data for three years, subscription status for each journal, and cost. Though this approach is a step above single-measure comparison while remaining less complex than other methods, Blecic et al. caution that an electronic resources librarian must use thoughtful consideration to ensure fair and even access to journals across subject areas needed by the library’s stakeholders.16

One way to compile a list of core journals in a given subject area is to focus on the citations in several leading journals and determine which journals are cited most often.17 Tsay studied all scholarly articles published from 1998 to 2008 in the Journal of the American Society for Information Science & Technology, Information Processing & Management, the Journal of Information Science, and the Journal of Documentation. Analyzing a total of 2,913 research articles, Tsay found that these four journals cited 105,063 references, with journal literature topping the list. The four journals accounted for 50.3 percent of the citations. The top thirty most cited journals accounted for nearly 50 percent of all journal citations, but interestingly half of the cited journals were cited only once.18 Kimball et al. used a traditional citation study to indicate “that the collection development practices for that portion of the collection are effective.”19 Yet, if half of the journals were cited only once, perhaps a new, big picture approach is warranted.

Another approach to define the core journals in a given collection is to apply the 80/20 rule, also known as the Pareto Principle. Simply put, this principle states that for most occurrences in any given area, about 80 percent of the events were triggered by 20 percent of the causes. For example, a likely scenario would be that 80 percent of journal use is attributable to 20 percent of the journals being accessed by users. According to Nisonger, the “basic 80/20 pattern provides a valid approach to operationalizing the core journal concept and is applicable to collection management decision making.”20 Gallagher et al. found that “20 percent of print titles accounted for 77.8 percent of print use, while 20 percent of e-journals accounted for 73.8 percent of use” at Yale University’s Cushing/Whitney Medical Library.21 In examining the University of Illinois at Chicago’s (UIC) COUNTER data, De Groote and colleagues found that 80 percent of successful full-text requests were concentrated in 24 percent of the titles.22 Although Nisonger admits that the percentages do not exactly match the 80/20 rule, ideally the majority of an academic library’s journals budget should be allocated for resources that get the majority of use.
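
Applying the 80/20 heuristic to a usage report reduces to a cumulative-share calculation. The following Python sketch, using invented titles and download counts rather than any data from this study, shows one way to check what share of total use the most heavily used fifth of titles accounts for.

```python
# Minimal sketch of an 80/20 (Pareto) check on COUNTER-style download counts.
# Titles and counts below are hypothetical, not data from this study.
def top_share(downloads: dict[str, int], top_fraction: float = 0.20) -> float:
    """Share of total use accounted for by the top `top_fraction` of titles."""
    counts = sorted(downloads.values(), reverse=True)
    n_top = max(1, round(len(counts) * top_fraction))
    total = sum(counts)
    return sum(counts[:n_top]) / total if total else 0.0

usage = {"Journal A": 950, "Journal B": 400, "Journal C": 120,
         "Journal D": 60, "Journal E": 15, "Journal F": 5}
print(f"Top 20% of titles account for {top_share(usage):.0%} of use")
```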

Taking a different approach, Ke used Elsevier’s SCOPUS database (www.elsevier.com/online-tools/scopus) to analyze the citations in papers published by the University of Houston’s psychology faculty to determine if the library was meeting their needs and to gauge how psychology faculty use information beyond their stated uses. Questions asked included the following: What journals were cited, and how often? Does the library subscribe to the journals that researchers cite? Ke found that her library subscribed to 100 percent of the journals that were cited more than one hundred times and 92 percent of the journals that were cited twenty-one times or more. She also sought to show whether there was a connection between the number of times a journal was cited in 2012 and the number of times it was downloaded during that year. The Journal of Applied Psychology was downloaded nearly six thousand times in 2012 and was cited more than fifty times in 2012 journal publications indexed in Scopus. Ke concluded that citation analysis can be used to demonstrate that the library effectively supports campus research in the area of psychology.23 Similarly, Whiting and Orr sought to determine how well their library’s collection supported the research needs of doctorate of nursing practice students at the University of Southern Indiana. They found that Rice Library could have provided at least 71 percent of the total items cited in student papers and 81 percent of the journal articles cited.24 These approaches are attainable ways to demonstrate the library’s effectiveness in supporting faculty and student research.

Return on investment (ROI) is one way to demonstrate the library’s role in teaching and research to high-level administrators. Determining an ROI is a challenge because there are often costs, such as consortia fees, that are not apparent to people outside the library. These costs may be detected by an experienced electronic resources librarian who knows where to look for them. Local collections are specialized, and electronic resources librarians must know their collections and their respective histories. For that reason, calculating overall electronic product expenditures requires knowledge of current and previous subscriptions and the ability to work with the available tools. How items are counted, or what counts as a use, is a vital question to ask. As Hulbert, Roach, and Julian noted, “Decisions must be made locally as to how to count usage and costs.”25 Cost per use is their libraries’ indicator of the value of a title to the collection. When determining cost per use, Hulbert, Roach, and Julian recommend the following: “keep it simple; be consistent, and document decisions,” which is sound advice for any library.26
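
Cost per use itself is simple arithmetic; the difficulty Hulbert, Roach, and Julian describe lies in deciding what to count. The sketch below illustrates the basic calculation with hypothetical figures, folding less visible charges such as consortia fees into a single number.

```python
# Minimal cost-per-use sketch; the dollar amounts and use count are hypothetical.
def cost_per_use(subscription_cost: float, consortia_fees: float, uses: int) -> float:
    """Total annual cost divided by total reported uses for the same period."""
    if uses == 0:
        return float("inf")  # flag a title or package with no recorded use
    return (subscription_cost + consortia_fees) / uses

print(f"${cost_per_use(5200.00, 300.00, 1375):.2f} per use")
```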

Even with standards that were created to simplify and streamline the process of collecting electronic resource usage statistics, this data is not as clear and easy to delineate as one would hope.27 To illustrate this, Davis and Price gathered COUNTER JR1 reports (the number of successful full-text article requests by month and journal) for Cornell University journal subscriptions with six publishers in 2004. From a possible 1,590 titles, 818 remained for analysis after eliminating titles that provided only one version of the full text. They also looked at EMBO Journal because it was hosted on both the Nature and HighWire Press platforms, and thirty-two research institutions had access to it on both platforms. Looking at the number of full-text downloads, Davis and Price found that ratios of PDF-to-HTML downloads, while consistent for a given publisher, vary significantly across publishers, even when controlling for content.28 Some publishers’ interfaces inflate their journal usage statistics by requiring users to access HTML versions of articles before accessing the PDF versions.29 Such findings “refute the notion that all COUNTER-compliant publishers are reporting comparable numbers.”30
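
The interface effect Davis and Price describe can be made visible by computing the PDF-to-HTML ratio per publisher from JR1-style totals, as in the brief sketch below; the publisher names and counts are invented for illustration.

```python
# Hypothetical JR1-style totals; a low PDF:HTML ratio can signal an interface
# that routes users through HTML pages before the PDF, inflating HTML counts.
jr1_totals = {
    "Publisher A": {"pdf": 12000, "html": 4000},
    "Publisher B": {"pdf": 9000, "html": 18000},
}
for publisher, t in jr1_totals.items():
    print(f"{publisher}: PDF:HTML ratio = {t['pdf'] / t['html']:.2f}")
```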

University administrators need a solid understanding of usage data and an awareness of the limitations of relying solely on quantitative data. Price and Fleming-May noted, “Administrators’ thorough understanding of use is essential in measuring and evaluating the library’s effectiveness in the campus community.”31 It is important to demonstrate to administrators how deep budget cuts will adversely impact teaching and research at their academic institutions. While teaching faculty are creative and can find ways to work around limited access to resources, Bradley and Soldo note that “limiting access to the scholarly record puts students at a disadvantage by restraining what their instructors can freely expose them to via accessible course readings due to both cost and copyright restrictions.”32

Often administrators see the large price tag of a Big Deal journal package and question whether the library needs to have a bundled collection. This type of approach does not take into consideration how faculty are using titles that are part of a Big Deal. De Groote et al. concluded that “citation data as a subset may tell the library which journals are most used for research by faculty, while vendor or publisher statistics and link-resolver data reflect all types of use, including educational and clinical.”33 Both citation data and usage data can be used to inform decisions related to the retention of expensive journal collections. For Gallagher et al., “Analyzing e-journal statistics by vendor and package will provide libraries with useful information to better determine the true value of each package deal.”34 Additionally, considering how well a particular collection meets the needs of an academic department only enhances the analysis of a journal package’s value to the institution as a whole. Interest in how faculty citation and publication data are reflected in vendor-provided full-text download statistics provided the impetus for this research.

Method

The University of Wisconsin–Eau Claire (UWEC) is a small, regional liberal arts university with a student full-time equivalent (FTE) of 9,857 located in western Wisconsin, approximately ninety minutes east of the Twin Cities of Minneapolis and St. Paul. To find a new and meaningful way to determine the level of coverage provided by current journal subscriptions, this research sought answers to the following questions:

  1. In what journals are faculty publishing?
  2. What journals are faculty citing?
  3. Does the library subscribe to these journals?
  4. What level of access to each journal is currently provided?

Publications from four academic departments at UWEC were examined: nursing, chemistry, biology, and mathematics. Faculty in these areas who were on the university’s official list for the 2011–12 academic year were included, across all levels of academic rank: nursing, twenty-two faculty; chemistry, nineteen faculty; biology, nineteen faculty; mathematics, thirty-three faculty.

The university’s Office of Research and Sponsored Programs (ORSP) tracks scholarly publications, faculty/student collaborations, creative achievements, and external grant awards for faculty and academic staff and publishes this information in an annual report. Historically, these reports covered the academic year from 1987–88 through 2008–9. ORSP switched to a calendar year interval beginning in 2010. At the time data was being collected, only reports from 1998–99 through 2010 were available in a digital format (PDF). These reports served as additional resources for locating faculty publications to be included in the first lists. Author searches were performed for each faculty member in databases appropriate to a given discipline:

  • CINAHL: Nursing
  • Web of Science: Chemistry and Biology
  • MathSciNet: Mathematics

Each publication found in the search results was added to the appropriate departmental list.

Four lists of publications were created, one for each discipline. Since the focus was on journal articles published by department, articles with two or more faculty authors were counted as follows:

  • If they worked primarily in the same department, the article was included once.
  • If they were from different departments, the article was included once for each department involved.

Nursing faculty published in forty-four journals, chemistry in sixty-two, biology in fifty-eight, and mathematics in thirty-nine. These faculty publication lists were used to determine which journals faculty cited in their research. A total of 408 articles were published by UWEC faculty from the four departments examined. For each published article, bibliographic citations were taken from an electronic, full-text version of the article itself when available, or from a database appropriate to the discipline, such as those mentioned previously.

Once created, the publication lists were used to discover which journals the faculty cited in their publications. The resulting citations lists were grouped by academic discipline (nursing, chemistry, etc.), each in a separate spreadsheet, with 589 items for nursing, 782 for chemistry, 855 for biology, and 354 for mathematics. Both sets of lists, the publication lists and the citation lists, were verified using the appropriate databases previously mentioned.

The lists were sorted alphabetically by publication title. To determine whether the library provided access to journals on these lists, the authors (both librarians) reviewed each list separately and checked each publication title using the library’s SFX knowledgebase, eliminating titles that were not journals (books, book chapters, monograph series, etc.). Title searching was made easier since both print and online holdings show up in UWEC’s SFX searches. For journal title changes, splits, and mergers, the citation with the most recent version was retained in the list and others were treated as duplicates and eliminated. Duplicate titles and items that could not be verified as journals were also removed, resulting in 441 journal citations for nursing, 584 journal citations for chemistry, 623 for biology, and 269 for mathematics.
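
Deduplication of this kind depends on normalizing titles before comparing them so that casing and stray punctuation do not produce spurious "new" journals. The following sketch shows one plausible normalization; the rules are assumptions for illustration, not the authors' exact procedure, which relied on the SFX knowledgebase.

```python
# Sketch of title normalization and deduplication (assumed rules, not the
# authors' exact procedure).
import re

def normalize_title(title: str) -> str:
    title = title.lower().strip()
    title = re.sub(r"[^a-z0-9 ]+", " ", title)  # drop punctuation
    return re.sub(r"\s+", " ", title)           # collapse repeated whitespace

def dedupe(titles: list[str]) -> list[str]:
    seen, unique = set(), []
    for t in titles:
        key = normalize_title(t)
        if key not in seen:
            seen.add(key)
            unique.append(t)
    return unique

print(dedupe(["Israel Journal of Mathematics", "ISRAEL JOURNAL OF MATHEMATICS."]))
```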

Each of the 1,917 items was then coded to indicate access to the journal:

  • “Current access” meant that the collection offered print or electronic access to the most recent content of the journal without an embargo period.
  • “Some access” covered various types of access ranging from three-month embargo barriers for titles in full-text databases to shorter runs or limited access due to a subscription cancellation.
  • “None” or “No Access” meant that the library did not have access to any version of the journal and that UWEC faculty would have had to obtain the content through ILL.

The authors compared lists and, where discrepancies were found, conducted verification searches using SFX, WorldCat, ResearchGate (www.researchgate.net), and Google. A journal was defined as a publication that had an International Standard Serial Number (ISSN); publications with both an ISSN and an International Standard Book Number (ISBN) were deemed to be books and removed from the lists.
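
The ISSN/ISBN rule reduces to a small filter: keep items with an ISSN and discard anything that also carries an ISBN. A minimal sketch follows; the record structure, field names, and identifiers are hypothetical.

```python
# Sketch of the journal-versus-book rule: an ISSN with no ISBN marks a journal.
def is_journal(record: dict) -> bool:
    return bool(record.get("issn")) and not record.get("isbn")

records = [
    {"title": "Example Journal", "issn": "1234-5678"},
    {"title": "Example Edited Volume", "issn": "2345-6789", "isbn": "978-0-00-000000-0"},
]
print([r["title"] for r in records if is_journal(r)])  # only the journal remains
```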

Determining the correct title for a publication based on abbreviations provided proved to be a challenge. The following is an example of how much citations can vary depending on the database source:

  • ISR J MATH Volume: 7 Pages: 325-349 DOI: 10.1007/BF02788865 Published: 1969
  • Israel J. Math. 8, 273–303 (1970). MR0271721 (42 #6602)
  • ISRAEL JOURNAL OF MATHEMATICS Volume: 22 Issue: 2 Pages: 138-147 DOI: 10.1007/BF02760162 Published: 1975

While these are different citations, the journal title, Israel Journal of Mathematics, may not be obvious when it appears as “ISR J MATH.” Searches using OCLC’s WorldCat and Google helped to confirm that these abbreviated titles were in fact journals. Truncated searches in WorldCat required at least three letters per search term. Sometimes, but not always, searching Google for the abbreviated title led to the official journal site, allowing for easy verification. Contending with varying citation styles was a challenge that was often alleviated by the inclusion of a Digital Object Identifier (DOI) in the citation. When a DOI existed for a vague or confusing abbreviation, using it allowed for searches to find the preferred version of the title in the SFX knowledgebase.
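
Where a DOI is present, the title lookup can in principle be automated. The authors used the SFX knowledgebase for this step; purely as an illustration of the idea, the sketch below queries the public Crossref REST API for the journal (container) title behind a DOI.

```python
# Hypothetical alternative to a manual knowledgebase lookup: resolve a DOI via
# the public Crossref REST API and read the container (journal) title.
import requests

def journal_title_from_doi(doi: str) -> str | None:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("container-title", [])
    return titles[0] if titles else None

# DOI taken from the abbreviated citation example above ("ISR J MATH").
print(journal_title_from_doi("10.1007/BF02788865"))
```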

Another issue to contend with was journal title changes, for example:

Old:
HOSPITAL AND COMMUNITY PSYCHIATRY Volume: 41 Issue: 5 Pages: 549–551 Published: MAY 1990
New:
PSYCHIATRIC SERVICES Volume: 57 Issue: 8 Pages: 1153–1161 DOI: 10.1176/appi.ps.57.8.1153 Published: AUG 2006

Rather than coding each version of the journal as a separate title, the most recent or current title was used because that is what most of the content providers use when they provide usage statistics reports. Progeny of a parent journal were counted as individual journals, while the parent (with a superseded version of the title) was treated as a duplicate. After coding the titles on the shorter lists of faculty publication and on the longer lists of articles that faculty cited, the authors compiled the results to study trends. This entire process is diagrammed in the flowchart in figure 1 for easy replication.

Since statistics in COUNTER journal reports are not consistent across platforms, a single vendor platform needed to be selected. Statistics for the EBSCO databases were considered initially because EBSCO is a major provider of the library’s content, but the “Some Access” category was not granular enough to provide data that could be compared across titles, so the authors chose to focus on “Current Access” titles available from Wiley. While creating the publications and citations lists, the Wiley journal package repeatedly provided current access to titles in both lists that were embargoed or unavailable through UWEC’s other databases. This package provides COUNTER reports for the 1,218 Wiley e-journals to which UWEC subscribed in 2011 and the 1,224 Wiley e-journals to which UWEC subscribed in 2012. These reports include data on the number of PDF downloads by journal title and for the entire Wiley package. All four departments in this study published in, as well as cited, Wiley journals. Wiley 2011 and 2012 COUNTER JR1 reports were pulled, and the numbers of PDF downloads were examined for the journals that UWEC faculty published in and cited. Titles published in, or cited by, nursing faculty were grouped together, and the average number of PDF downloads was calculated. This was repeated for the other three academic disciplines.
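
The per-department averages described here amount to matching each department's list of Wiley titles against the JR1 report and averaging the PDF download counts for the matched titles. A minimal sketch of that grouping step follows; the titles and counts are hypothetical.

```python
# Sketch of the grouping step: average JR1 PDF downloads over a department's
# list of Wiley titles. Titles and counts below are hypothetical.
def average_downloads(jr1_pdf: dict[str, int], department_titles: list[str]) -> float:
    matched = [jr1_pdf[t] for t in department_titles if t in jr1_pdf]
    return sum(matched) / len(matched) if matched else 0.0

jr1_pdf_2011 = {"Wiley Journal A": 210, "Wiley Journal B": 95, "Wiley Journal C": 64}
nursing_cited = ["Wiley Journal A", "Wiley Journal B"]
print(f"{average_downloads(jr1_pdf_2011, nursing_cited):.2f} PDF downloads per cited title")
```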

Results

A total of 408 journal articles published by the university faculty across the four disciplines were included in this study, and together they cited 1,785 different journals. Looking first at access to the journals where faculty had published their papers, the library provides current or some access to 60 percent or more of the publication list titles, with nursing having the best access at 86 percent. Table 1 provides a comparison of access levels by discipline. When gauging access to journals cited in faculty articles, the library offers current or some access to over 50 percent of the citation list titles, again with nursing holding the top spot at 76 percent.

From the group of journals that faculty in each department both cite and publish in—the overlap of the two lists—the library provides current access to nearly three-quarters of the nursing journals, while current access for the other departments is at less than half (see table 2). Additionally, only members of the chemistry department cited from all twenty-eight of the journals in which they also publish. Nursing faculty published in three journals they did not cite (Journal of Obstetric, Gynecologic and Neonatal Nursing; Luso-Brazilian Review; and Nursing Education Perspectives), biology faculty published in three journals they did not cite (American Biology Teacher, Journal of Animal Breeding and Genetics, and Journal of Nematology), and mathematics faculty published in only two that were not cited (Chemistry and Biodiversity and Electronic Journal of Combinatorics). No single journal was cited more than six times. Faculty in chemistry, biology, and mathematics each published twice in exactly one journal, whereas nursing faculty published more than once in two different journals (see table 3).

Wiley Usage Statistics

UWEC subscribed to 1,218 Wiley e-journals in 2011 and 1,224 Wiley e-journals in 2012. According to 2011 and 2012 COUNTER reports, the average number of Wiley full-text PDF downloads per title each year was 6.79 and 9.16, respectively. The average number of PDF downloads for journals in the Wiley package cited by nursing faculty was 49.65 in 2011 and 39.78 in 2012, exceeding the overall package average by a factor of seven and four, respectively. For the Wiley journals in which the nursing faculty published, the average number of PDF downloads was 60.17 in 2011 and 61.67 in 2012, again substantially exceeding the package averages of 6.79 and 9.16, respectively (see table 4).
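
The multiples reported above follow directly from the averages in table 4; as a quick check:

```latex
% Nursing citation-group average relative to the Wiley package average
\[
  \frac{49.65}{6.79} \approx 7.3 \ \text{(2011)}, \qquad
  \frac{39.78}{9.16} \approx 4.3 \ \text{(2012)}
\]
```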

Regarding journals cited by chemistry faculty, the average number of PDF downloads was 12.18 in 2011 and 16.56 in 2012, surpassing the overall Wiley package averages as noted above. For Wiley journals in which the chemistry faculty published, the average number of PDF downloads was 68.50 in 2011 and 75.50 in 2012. Journals cited by biology faculty from the Wiley package averaged 7.07 PDF downloads in 2011 and 11.02 in 2012, noticeably closer to the Wiley package average as a whole. The average number of PDF downloads was 20.20 in 2011 and 24.10 in 2012 for those Wiley journals where biology faculty published. The average number of PDF downloads was only 3.90 in 2011 and 8.70 in 2012 for those Wiley journals cited by mathematics faculty. These faculty averaged 7.00 PDF downloads in 2011 and 21.75 in 2012 for the journals where they published.

For the Wiley journals cited, three of the four academic disciplines had a higher average number of PDF downloads per journal than the package as a whole, with nursing holding the top spot; only the mathematics citation average fell below the package average. For the Wiley journals in which university faculty published, all four disciplines matched or exceeded the package average, with chemistry averaging roughly eight to ten times the package figure.

Discussion

This research is important because faculty often view the library as a purchasing agent.35 While an academic library cannot offer current access to all journals cited by faculty, focusing on titles that appear in both the publications and citations lists can serve as an indicator of how well the library supports core areas of faculty research. Faculty in the four departments examined in this study published in and cited a range of publications, including journals outside their traditional discipline’s areas. Moreover, citations were not as concentrated as in the findings of Maharana and colleagues.36 All four departments had current or partial access to more than half of the journals published in and cited, with nursing having the highest levels of coverage (86 and 76 percent, respectively). Narrowing the focus to titles overlapping these lists, this study highlighted that while a majority of nursing journal titles offer current access, all other departments have current access to less than half of the titles on their lists. A higher level of such coverage can serve to demonstrate the library’s successful support of research, while lower levels can provide an additional incentive to negotiate for and acquire current access to additional titles as part of journal package purchases.

Because publishers typically group journal and database subscriptions in packages, and renewal statements for journal packages are received at different times (even when a subscription agent is used), this approach offers a more comprehensive picture of the merits of a particular journal package or database subscription. The findings of this proof-of-concept study provided enough data to support the decision to renew the Wiley journal package, which provides considerable current access to titles that are otherwise subject to an embargo in full-text databases. Within any given package, comparing one title’s usage against the average use of journals as a group does not take into consideration the complex constellation of how academic libraries are billed for database and journal subscriptions. Much attention has focused on cost per use, but if a highly downloaded title does not make it into the literature, or if local faculty choose to publish in other titles, stakeholders will not get a complete picture unless they look at where faculty publish, which titles they cite, and what is downloaded. Experienced electronic resources librarians and subject liaisons know that certain journal packages serve various departments better than others, and this method provides a way to measure and confirm this knowledge.

Whereas some of the usage can be attributed to students and faculty outside these four departments, it is not unreasonable to assume that students are utilizing resources that their professors also use. This approach allows academic librarians to see whether a journal package or full-text database subscription serves a department as a whole. Sharing this information could possibly prevent a situation where an administrator may cut a library budget, thinking that a big-ticket journal package is unnecessary. Comparing the publication and citation rate averages of a department to the overall average for a journal package provides a measure that can be taken to departmental faculty to get their support for a particular course of action.

Combining usage data and citation data, this study’s findings do not show that usage and citation fall into an 80/20 distribution as might be expected. This will make deselection more difficult. It also makes it necessary to collect more data to determine a core list of journals for the four departments at this university. Moreover, the initial findings may not be true for other departments at UWEC, like music and theater arts, where the departmental evaluation plan for faculty states explicitly which journals are examples of acceptable peer-reviewed journals for promotion and tenure. Other departmental evaluation plans leave the selection of journals in which to publish more open, giving faculty a broader array of options.37

After examining citations and publications that spanned a dozen years, the authors suggest that future research may focus on specific departments over a shorter time span, perhaps three years. This would be less cumbersome, allowing more time to analyze multiple platforms, journal collections, and databases. This approach could be more useful to research faculty who want to know how a package compares to other packages in their discipline. However, this information needs to be put in context. Departmental research needs often change with personnel changes, and it will be interesting to see how average use, citation, and publication rates change over time. Measuring publication, citation, and usage rates provides compelling information on how a collection is used. The authors’ experience is that faculty members believe that the library can cancel certain journals when they are bundled, even when they are explicitly told that bundled titles are not cancellable. It is important to resist assigning a value to an individual title unless the cost of the package does not depend on that individual title. A better solution is to provide the cost of the entire bundle or package, including any additional costs or fees required to provide access to the full package, along with recent full-text download counts.

Considering PDF downloads rather than the total for full-text (HTML and PDF) downloads in COUNTER reports eliminates the usage inflation issue that different interfaces bring to usage measures. Since this study was limited to Wiley, the next step is to apply this method to other journal packages, including Oxford, Sage, and Elsevier and full-text databases from EBSCO and other providers. This approach will be used with other departments’ publications, including psychology and women’s studies, to see how well the library’s collections meet the faculty’s research needs. Lastly, another step is to share this information with other stakeholders, if appropriate, to gain their support.

Although this small-scale study yielded some practical information, the research was limited by the available usage data and the need to manage the complexity of holdings information. Straightforward comparisons between database usage and vendor-supplied usage statistics could not be made in any meaningful way. The poor quality of available citations was also a challenge, and the process of gathering and checking the citations was extremely time consuming. Further research is needed to expose how limiting the sample to article citations on hand, either electronically or in print, could skew results. However, examining the results of this part of the study allowed the authors to gauge how well their journal collections in general, and Wiley journals in particular, support UWEC faculty research. This is important because UWEC’s McIntyre Library spends a considerable amount of money on journals relative to the overall budget. Plans include examining other journal and database packages as they relate to the nursing department and considering journals and databases in relation to the women’s studies program and the psychology department, which has a strong research component.

Conclusion

Usage statistics and citation analysis can be combined in a meaningful way. This study demonstrated an approach that offers a more targeted and strategic examination of usage statistics by filtering those statistics through the lens of the journals that are important to academic departments. The approach allows for evaluation of usage statistics in a way that is meaningful both to faculty outside the library and to those within it, and it encourages stakeholders to think differently about evaluating usage statistics. Cost per use means nothing without the proper context and perspective. This approach is labor intensive, but perhaps justifiably so when the amount of money academic libraries spend on electronic journal and database subscriptions is taken into consideration. In conclusion, combining measures of citations and publications with usage data provides a better view of the relative merits of an electronic resource package.

References

  1. Eugene Garfield, “Citation Indexes for Science,” Science 122, no. 3159 (1955): 108–11.
  2. Eugene Garfield, “The History and Meaning of the Journal Impact Factor,” Journal of the American Medical Association 295, no. 1 (2006): 90–93; Eugene Garfield, “Genetics Citation Index: Experimental Citation Indexes to Genetics with Special Emphasis on Human Genetics,” Essays of an Information Scientist 7 (1984): 515–22.
  3. R. I. Echezona, V. N. Okafor, and Scholastica C. Ukwoma, “Information Sources Used by Postgraduate Students in Library and Information Science: A Citation Analysis of Dissertations,” Library Philosophy and Practice 7 (2011), accessed August 25, 2014, http://digitalcommons.unl.edu/libphilprac/559.
  4. Philip M. Davis and Jason S. Price, “eJournal Interface Can Influence Usage Statistics: Implications for Libraries, Publishers, and Project COUNTER,” Journal of the American Society for Information Science & Technology 57, no. 9 (2006): 1243–48.
  5. Fei Xu, “Implementation of an Electronic Resource Assessment System in an Academic Library,” Program: Electronic Library and Information Systems 44, no. 4 (2010): 374–92.
  6. John D. McDonald, “Understanding Journal Usage: A Statistical Analysis of Citation and Use,” Journal of the American Society for Information Science & Technology 58, no. 1 (2007): 39–50.
  7. Sandra L. De Groote, Deborah D. Blecic, and Kristin E. Martin, “Measures of Health Sciences Journal Use: A Comparison of Vendor, Link-Resolver, and Local Citation Statistics,” Journal of the Medical Library Association 101, no. 2 (2013): 110–19.
  8. McDonald, “Understanding Journal Usage,” 44.
  9. Johan Bollen and Herbert Van de Sompel, “Usage Impact Factor: The Effects of Sample Characteristics on Usage-Based Impact Metrics,” Journal of the American Society for Information Science & Technology 59, no. 1 (2008): 146.
  10. Thomas V. Perneger, “Relation between Online ‘Hit Counts’ and Subsequent Citations: Prospective Study of Research Papers in the BMJ,” British Medical Journal 329, no. 7465 (2004): 546–47; Henk F. Moed, “Statistical Relationships between Downloads and Citations at the Level of Individual Documents within a Single Journal,” Journal of the American Society for Information Science & Technology 56, no. 10 (2005): 1088–97; Stefan J. Darmoni et al., “Reading Factor: A New Bibliometric Criterion for Managing Digital Libraries,” Journal of the Medical Library Association 90, no. 3 (2002): 323–27.
  11. Joanna Duy and Liwen Vaughan, “Usage Data for Electronic Resources: A Comparison between Locally Collected and Vendor-Provided Statistics,” Journal of Academic Librarianship 29, no. 1 (2003): 16–22.
  12. John Gallagher, Kathleen Bauer, and Daniel M. Dollar, “Evidence-Based Librarianship: Utilizing Data from All Available Sources to Make Judicious Print Cancellation Decisions,” Library Collections, Acquisitions, & Technical Services 29, no. 2 (2005): 169–79.
  13. Nancy Beals and Marcella Lesher, “Managing Electronic Resource Statistics,” Serials Librarian 58, no. 1–4 (2010): 219–23.
  14. Gracemary Smulewitz, “Analyze This: Usage and Your Collection—Building an Investigative Culture and a Meaningful Tool,” Against the Grain 24, no. 6 (2012): 80–82.
  15. Mary Ann Trail, Kerry Chang-FitzGibbon, and Susan Wishnetsky, “Using Assessment to Make Difficult Choices in Cutting Periodicals,” Serials Librarian 62, no. 1–4 (2012): 159–63.
  16. Deborah D. Blecic et al., “Deal or No Deal? Evaluating Big Deals and Their Journals,” College & Research Libraries 74, no. 2 (2013): 178–94.
  17. Bulu Maharana, Smaranika Mishra, and Bipin Bihari Sethi, “Evaluation of Chemistry Journals at IIT Kharagpur, India: Use and Citation Analysis,” Library Philosophy & Practice no. 1 (2011), accessed August 25, 2014, http://digitalcommons.unl.edu/libphilprac/587/.
  18. Ming-yueh Tsay, “Knowledge Input for the Domain of Information Science: A Bibliometric and Citation Analysis Study,” Aslib Proceedings 65, no. 2 (2013): 203–20.
  19. Rusty Kimball et al., “A Citation Analysis of Atmospheric Science Publications by Faculty at Texas A&M University,” College & Research Libraries 74, no. 4 (2013): 356–67.
  20. Thomas E. Nisonger, “The ‘80/20 Rule’ and Core Journals,” Serials Librarian 55, no. 1–2 (2008): 78.
  21. Gallagher, “Evidence-Based Librarianship,” 178.
  22. De Groote, “Measures of Health Sciences Journal Use.”
  23. Irene Ke, “Using Scopus to Study Citing Behavior for Collection Development” (presentation, Association for Library Collections and Technical Services, Collection Evaluation and Assessment Interest Group, American Library Association Annual Conference, Chicago, June 30, 2013).
  24. Peter Whiting and Philip Orr, “Evaluating Library Support for a New Graduate Program: Finding Harmony with a Mixed Method Approach,” Serials Librarian 64, no. 1–4 (2013): 88–99.
  25. Linda Hulbert, Dani Roach, and Gail Julian, “Integrating Usage Statistics into Collection Development Decisions,” Serials Librarian 60, no. 1–4 (2011): 158–63.
  26. Ibid., 159.
  27. Pat Hults, “Electronic Usage Statistics,” in Electronic Resource Management in Libraries: Research and Practice, ed. Holly Yu and Scott Breivold (Hershey, PA: Information Science Reference, 2008): 1243–48.
  28. Philip M. Davis and Jason S. Price, “eJournal Interface Can Influence Usage Statistics: Implications for Libraries, Publishers, and Project COUNTER,” Journal of the American Society for Information Science & Technology 57, no. 9 (2006): 1243–48.
  29. Hults, “Electronic Usage Statistics.”
  30. Davis, “eJournal Interface Can Influence Usage Statistics,” 1245; Hults, “Electronic Usage Statistics.”
  31. Amanda N. Price and Rachel Fleming-May, “Downloads or Outcomes? Measuring and Communicating the Contributions of Library Resources to Faculty and Student Success,” Serials Librarian 61, no. 2 (2011): 197.
  32. Lauren Bradley and Brian Soldo, “The New Information Poor: How Limited Access to Digital Scholarly Resources Impacts Higher Education,” Serials Librarian 61, no. 3–4 (2011): 366–76.
  33. De Groote, “Measures of Health Sciences Journal Use,” 117.
  34. Gallagher, “Evidence-Based Librarianship,” 178.
  35. Roger C. Schonfeld and Ross Housewright, “Faculty Survey 2009: Key Strategic Insights for Libraries, Publishers, and Societies,” 2010, accessed August 16, 2013, www.sr.ithaka.org/sites/default/files/reports/Faculty_Study_2009.pdf.
  36. Maharana, Mishra, and Sethi, “Evaluation of Chemistry Journals at IIT Kharagpur, India.”
  37. Stephanie H. Wical and Gregory J. Kocken, “A Look at Department and Program Evaluation Plans at a Liberal Arts University: Do They Support Open Access?” (unpublished manuscript, University of Wisconsin Eau Claire, 2014).
Figure 1. Citation Study Process Flowchart

Table 1. Journal Access by Department

                         Nursing      Chemistry    Biology      Mathematics
Journals Published In
  Current                75% (33)     45% (28)     40% (23)     38% (15)
  Some                   11% (5)      23% (14)     40% (23)     23% (9)
  None                   14% (6)      32% (20)     21% (12)     38% (15)
  Total                  44           62           58           39
Journals Cited
  Current                51% (224)    31% (183)    34% (212)    31% (84)
  Some                   25% (112)    27% (155)    23% (143)    25% (67)
  None                   24% (105)    42% (246)    43% (268)    44% (118)
  Total                  441          584          623          269

Table 2. Overlap Coverage by Department

Access       Nursing    Chemistry    Biology    Mathematics
Current      30         28           20         13
Some         5          12           22         7
None         6          20           11         10
Totals       41         60           53         30

Table 3. Maximum Repeated Citations From, or Publications in, the Same Journal

Academic Department    Nursing    Chemistry    Biology    Mathematics
Citations              6          6            4          4
Publications           2          1            1          1

Table 4. Wiley Usage Statistics by Department: Average Number of PDF Downloads

                          2011     2012
Entire Wiley package      6.79     9.16
Journals Cited
  Nursing                 49.65    39.78
  Chemistry               12.18    16.56
  Biology                 7.07     11.02
  Mathematics             3.90     8.70
Published In
  Nursing                 60.17    61.67
  Chemistry               68.50    75.50
  Biology                 20.20    24.10
  Mathematics             7.00     21.75
