Library Technology Reports, vol. 46, issue 6, p. 17
Chapter 3: Improving Understanding of Electronic Resources Usage: Beyond Logons and Downloads
Rachel A. Fleming-May
Jill E. Grogg

Abstract

Logons and downloads offer a glimpse into user behavior, but they present only part of the picture. To create a fuller understanding, initiatives such as Project MESUR and the Eigenfactor, as well as user-oriented models and ROI studies, have emerged.


We've all seen signs like the one in figure 10: as access to Web-based resources has improved, libraries have broadcast to patrons that library-provided resources are available to them, 24/7, in the comfort of their home, office, or dorm room. It seems that patrons have gotten that message loud and clear; while academic and public libraries report that door counts have increased significantly from the dark days of the late 1990s, some statistics, such as reference requests, have never fully rebounded. As a result, libraries have shifted energy and financial resources to realizing the potential of electronic access to increase and improve service to patrons, leading some to speculate that “electronic use is replacing physical use.”1 Researchers investigating remote library use frequently must make do with data about the number and duration of logons to specific databases. This approach to measuring use is arguably little more than the virtual equivalent of door counts and circulation statistics, and it usually does little to clarify our understanding of the role of the library and information sources in the life of the user.

Librarians recognize the need to create a deeper understanding of electronic resources usage but are hampered by the Three Billy Goats Gruff of librarianship: lack of time, lack of financial resources, and lack of technical capability. Few of the electronic resources librarians who responded to an informal survey (see chapter 4) reported that they assess electronic resources usage beyond reviewing COUNTER-generated statistics, although many expressed frustration at not being able to do so. The need for improved vendor support and skepticism about the accuracy of statistics—even in reports issued by COUNTER-compliant products—were frequently cited as impediments. In the words of one respondent, “we do keep track of sessions and searches, but have not gone further into the data than the basic numbers. Although there may be valuable information within that data, I do not have the time to mine it.”

Although the LIS literature features regular assertions that there is much to be learned about patron use behavior from database statistics, little is reported on this topic beyond information about the number and nature of database logons and article downloads. Although download-level statistical analysis remains the dominant approach, several models in various stages of development offer a promising glimpse of the future of electronic resource evaluation; a number of them were discussed at a December 2009 workshop entitled “Scholarly Evaluation Metrics: Opportunities and Challenges,” sponsored by the National Science Foundation (NSF). While speakers focused more specifically on alternatives to relying on citation as a gauge of scholarly research influence, several of the approaches discussed have implications for improving understanding of library-provided electronic resources.


Alternatives to Download Statistics: Citation

Citation—the act of making reference to a journal, a particular work, or individual or collected works by a specific author—has traditionally been treated as a proxy for scholarly influence or importance. According to Wilson, “the main strategy for determining what information has actually been used over the past fifty years has been citation analysis.”2 Kurtz and his colleagues called citation “the primary bibliometric indicator of the usefulness of an academic article.”3 If we agree that an article or book that has been cited has been determined to be useful by the person making the citation, can we also assume that (a) the cited work's content has been used and (b) the citing author considers the cited work to be of high quality or importance?

Not necessarily. While citing a work indicates that the person doing the citing has engaged in usage beyond downloading the item, citation may serve purposes other than acknowledging the source of ideas and research that have been referenced. Sandstrom identifies two additional motivations for citation: persuasion by indicating a preponderance of evidence, and displaying allegiance to a particular individual or school of thought.4 Citations of either description certainly demonstrate uses of a work, but these uses differ from the use implied by a work's having been downloaded from a database. Eugene Garfield, founder of the Institute for Scientific Information (ISI), identified fifteen reasons to provide citations to other works:

  1. Paying homage to pioneers
  2. Giving credit for related work
  3. Identifying methodology, equipment, etc.
  4. Providing background reading
  5. Correcting one's own work
  6. Correcting the work of others
  7. Criticizing previous work
  8. Substantiating claims
  9. Alerting researchers to forthcoming work
  10. Providing leads to poorly disseminated, poorly indexed, or uncited work
  11. Authenticating data and classes of fact—physical constants, etc.
  12. Identifying original publications in which an idea or concept was discussed
  13. Identifying the original publication describing an eponymic concept or term
  14. Disclaiming work or ideas of others (negative claims)
  15. Disputing priority claims of others (negative homage)5

While many of Garfield's reasons for citing a work reflect a “use” of that work, several may not (e.g., “alerting researchers to forthcoming work”). Frost noted that citation is an action with various “motives, purposes, and functions [that] must be inferred from the context in which the citations appear”6 and identified two purposes for citation—neither of which requires the work to actually have been “used”—that Garfield didn't include: providing evidence of personal allegiances and ambitions, and serving as “window dressing” to establish the author's scholarly bona fides or to impress readers.7 Peritz pointed out that “citation of a study because of its connection with the subject matter of the citing paper may be qualitatively different from a citation indicating its use or application” and that the two types of citation should be weighted differently in any assessment of citation (emphasis original).8 Hooten agreed that although citation is frequently treated as an “objective” activity and a measure of the quality of the cited work, it is, in fact, a highly subjective and variable activity that may serve different functions depending on the citing author, the placement of the citation within the citing work, or the discipline within which the citing work is situated.9

Additionally, authors may omit citations to works that have actually been used. Though Peat advocated for examining citations in scholarly publications to assess use levels, she noted that citation does not account for consultation of numerous sources that are deemed, eventually, to be irrelevant. This, Peat acknowledged, is “very important use” of information resources, and therefore, “any study that focuses on the published result will invariably understate use.”10 White and Wang's study of the citation behavior of economists raised similar concerns: they found that citations underrepresented the amount of literature that was actually used. In many cases, documents perceived to be of poor quality or of specific material types were not cited in spite of having contributed to the work.11 Equally problematic is the variety of methods with which citations can be assessed. It is possible to assess raw use, or the simple number of citations to a specific work, author, or journal; or to adjust for impact or density of use by considering the number of citations in the context of the total number of items available for citation. Adjusting for density of use, said Sandison, gives a more accurate depiction of the “heaviness of use” of a particular idea or item, while considering only raw use data can be “dangerously misleading.”12

Although the Normative Theory of Citation holds that authors “give credit where credit is due,”13 having too great an influence in one's field can actually prevent an author or work from being cited, as authors frequently neglect to cite works because they consider the subject matter to be “common knowledge” to readers. For reasons not apparently tied to date of publication or any other discernible variable, MacRoberts and MacRoberts also found that while some works are consistently cited directly, many individual works are cited only through a secondary source. Other works, they found, are either never cited or cited only rarely in spite of their clear influence on a particular piece of research.14 Additionally, bibliometricians have noted that scholarly literature includes a suspiciously and disproportionately low number of citations to practitioner- or lay-oriented or newsletter publications, which are certainly read. Instead, citations in scholarly works tend to be to other scholarly works.

Despite these concerns, citation analysis is frequently applied in collection management decisions for both print and electronic titles. In addition to tracking citations to individual articles and books, the ISI calculates an Impact Factor for the journals it reviews. Essentially, a journal's Impact Factor is the number of citations the journal receives in a given year to the articles it published in the previous two years, divided by the total number of citable articles published in those two years.15 Although a journal's Impact Factor is considered an important metric for evaluating its quality, there are significant criticisms of the Impact Factor, both conceptual and practical. Among these are concerns that ISI indexes a relatively small number of journals and has been slow to add open access journals to its collection; that a few heavily cited articles—especially review articles—can artificially boost a journal's Impact Factor; and that authors have figured out how to “game the system” through self-citation in order to boost their own citation counts, which can skew a journal's Impact Factor. These issues have led to the development of several alternative models for assessing journals, some of which may also inform e-resource usage assessment for purposes of collection management and resource allocation.
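
As a hedged illustration of that arithmetic, the minimal sketch below (in Python, with figures we have invented rather than drawn from any actual journal or from ISI) works through a two-year Impact Factor calculation:

# Two-year Impact Factor: citations received in year Y to items published in
# years Y-1 and Y-2, divided by the citable items published in those two years.
# All figures below are invented for illustration only.
citations_2009_to_2007_2008 = 350   # citations made in 2009 to 2007-2008 articles
citable_items_2007 = 120
citable_items_2008 = 130

impact_factor_2009 = citations_2009_to_2007_2008 / (citable_items_2007 + citable_items_2008)
print(f"2009 Impact Factor: {impact_factor_2009:.3f}")   # 350 / 250 = 1.400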

The Eigenfactor

While the Eigenfactor (figure 11) utilizes the same data that forms the basis for ISI's Impact Factor, its creators claim that the approach they use in calculation remedies many of the complaints about the Impact Factor. Rather than relying strictly on citation counts, Eigenfactor takes into consideration the relative influence of a citing journal within the field in recognition of “the fact that a single citation from a high quality journal may be more valuable than multiple citations from peripheral publications.”16 Eigenfactor calculations employ an algorithm similar to Google's PageRank approach. It should also be noted that Eigenfactor calculations are based on five years’ citation data, while ISI's Impact Factor uses only two.
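
To give a sense of the PageRank-style idea (citations from influential journals count for more because influence propagates through the citation network), here is a minimal sketch over an invented three-journal citation matrix. It is not the Eigenfactor algorithm itself, which additionally excludes self-citations, weights by article counts, and uses a damping term; it shows only the core iteration.

# Hedged sketch: power iteration over a tiny, invented journal citation network.
# cites[i][j] = number of citations from journal i to journal j.
journals = ["A", "B", "C"]
cites = [
    [0, 4, 1],
    [2, 0, 3],
    [1, 1, 0],
]

n = len(journals)
influence = [1.0 / n] * n          # start with equal influence
for _ in range(100):               # iterate until the scores settle
    new = [0.0] * n
    for i in range(n):             # journal i passes its influence to the journals it cites
        out = sum(cites[i])
        for j in range(n):
            new[j] += influence[i] * cites[i][j] / out
    influence = new

for name, score in zip(journals, influence):
    print(f"Journal {name}: influence {score:.3f}")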

Although Davis's study showed that variation between the Eigenfactor, total citations, and Impact Factor of a collection of journals wasn't especially dramatic,17 this approach does add to the librarian's assessment toolbox. That a journal's Eigenfactor and cost-effectiveness, based on influence, can be calculated—at no cost—online is an added benefit.18 Because Eigenfactor calculations rely on ISI data, however, concerns about the relatively small number of journals evaluated by ISI apply to Eigenfactor calculations as well.

Project MESUR

Project MESUR (MEtrics from Scholarly Usage of Resources) takes an additional step away from the Impact Factor. Because MESUR investigators consider citation to be just one type of usage event, “the formal end-result” in the life of a scholarly work,19 they have expanded their model for calculating influence to include other types of “usage events,” including downloading, reading, and other consultation (figure 12). Johan Bollen, MESUR's principal investigator, considers usage data superior to citation counts for several reasons. First, usage data provides a greater level of granularity—leading to an improved understanding of what users are actually doing—than citation, which tracks one action. The automated nature of usage data collection also provides access to a much greater volume of data than citation, which is available on a smaller scale. Bollen and Van de Sompel also emphasize that usage information is not hampered by the time lag necessary for citations to a work to be published and harvested by a publisher like ISI.20 In the sciences especially, this is an important benefit.

The MESUR team has collected and analyzed a wide variety of longitudinal usage data from libraries (University of Texas's nine campuses, six health institutions, and California State University's twenty-three campuses) and vendors such as Thomson Scientific (Web of Science), Elsevier (Scopus), JSTOR, and Ingenta. Collecting a few pieces of information about each “request” (date and time of the request, session identifier, article identifier, and request type) allowed researchers to recreate individual search sessions and construct a complex model of influence and communication within scholarly networks.21 In so doing, they have developed a more information-rich ontology for analyzing usage events based on the following elements:

  1. Agent: authors, users, institutions, etc.
  2. Document: articles, journals, conference proceedings, books, etc.
  3. Context: Uses, Citation, Metric, CoAuthors, etc.22

This model allows the MESUR team to analyze usage events in context in order to chart relationships between authors, works, and titles at the article or journal level, as well as to predict the probability that a specific journal will be cited and to gauge the “centrality” of a specific journal to other journals in a network (as calculated from connections made from that journal to other titles within a session). MESUR represents a significant departure from ISI's “author-generated, frequentist” approach to calculating journal impact toward a “reader-generated,” social network–oriented approach (figure 13) in which journal titles can be recognized for playing essential roles beyond citation.23
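
To make the request-level data concrete, here is a hedged sketch (our own illustration, not MESUR's schema or code) of how records carrying only the four fields described above (timestamp, session identifier, article identifier, and request type) can be regrouped into sessions and then into journal-to-journal “clickstream” pairs of the kind used to chart relationships between titles. The sample records and the journal_of helper are invented.

from collections import defaultdict

# Invented sample requests; each carries only the four fields described above.
requests = [
    {"time": "2009-03-01T10:02", "session": "s1", "article": "jclim-123", "type": "fulltext"},
    {"time": "2009-03-01T10:07", "session": "s1", "article": "jphys-456", "type": "abstract"},
    {"time": "2009-03-01T10:11", "session": "s1", "article": "jphys-789", "type": "fulltext"},
    {"time": "2009-03-01T11:30", "session": "s2", "article": "jclim-321", "type": "abstract"},
]

def journal_of(article_id):
    # Invented helper: map an article identifier to its journal.
    return article_id.split("-")[0]

# 1. Reconstruct sessions: group requests by session identifier, in time order.
sessions = defaultdict(list)
for r in sorted(requests, key=lambda r: r["time"]):
    sessions[r["session"]].append(r)

# 2. Derive journal-to-journal "clickstream" pairs: within a session, each
#    consecutive pair of requests links the two journals involved.
pair_counts = defaultdict(int)
for reqs in sessions.values():
    for a, b in zip(reqs, reqs[1:]):
        pair_counts[(journal_of(a["article"]), journal_of(b["article"]))] += 1

print(dict(pair_counts))   # e.g., {('jclim', 'jphys'): 1, ('jphys', 'jphys'): 1}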


PLoS: The Public Library of Science

Arguing that scholars are more likely, in the online world, to find articles through a search engine than by browsing a journal, Mark Patterson, PLoS ONE director of publishing, wonders why “researchers and their paymasters remain wedded to assessing individual articles by using a metric (the impact factor) that attempts to measure the average citations to a whole journal?”24 According to Patterson, PLoS ONE, “an international, peer-reviewed, open-access, online publication” publishing “original research from all disciplines within science and medicine,”25 believes that we've continued to rely on an anachronistic measure of influence or importance because other options have not been available. PLoS takes an alternative, article-level approach: each article published in a PLoS journal is accompanied by a collection of metrics (figure 14), some traditional; others, less so:

  • Article usage statistics—HTML page views, PDF downloads, and XML downloads
  • Citations from the scholarly literature—currently from PubMed Central, Scopus, and CrossRef
  • Social bookmarks—currently from CiteULike and Connotea
  • Comments—left by readers of each article
  • Notes—left by readers of each article
  • Blog posts—aggregated from Postgenomic, Nature Blogs, Bloglines, and ResearchBlogging.
  • Ratings—left by readers of each article26

According to the editors, this information helps readers “determine the value of that article to them and to the scientific community in general. Importantly, they provide additional and regularly updated context to the article.”27 Web-based article-level metrics, like the Impact Factor, have drawbacks, which PLoS acknowledges. Specifically, clicks on articles by automated “robots” artificially increase an individual article's access statistics. The editors say that PLoS has made an effort to exclude known robots from accessing their servers, but concede that no list could ever be exhaustive.


User-Oriented Models

While MESUR's and PLoS's approaches each represent a shift in thinking about how the importance or influence of a resource should be assessed, the article or journal is still the subject of importance in these models. Other approaches utilize a variety of methods to improve understanding of resource usage by users.

Log Analysis

Peters defines log analysis as the “study of electronically recorded interactions between online information retrieval systems and the persons who search for the information found in those systems.”28 Log analysis augments the data reported to COUNTER with session-level data, such as records of actual patron database searches. In this way, Project MESUR could be considered a log analysis project; however, while MESUR focuses on the research object, log analysis can provide insight into user behavior. For example, Eason, Richardson, and Yu analyzed e-journal search log files from an aggregator service. The authors classified users’ access behavior based on the range of journals consulted in terms of title and age; frequency of use based on the number of sessions and the length of each session; depth of use measured by percentage of results consulted at the article citation, abstract, or full-text level; and the function of use: browsing electronic tables of contents, printing articles, or searching. The authors used this data to create a taxonomy of user types: enthusiastic, forced, regular, specialized, occasional, and restricted users. Low-level users were classified as lost users, who began the project enthusiastically, then dropped off; exploratory users, who began somewhat tentatively, then dropped off; tourists, who used the service minimally; and searchers, whose only use activity on the service was searching. In spite of some acknowledged shortcomings with their approach to collecting data, the authors noted that it was “possible to see the influence of the tasks, status, and disciplines of users, the content, function and delivery” on the users’ behavior.29
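
The classification logic can be imagined as a set of thresholds applied to per-user measures derived from the logs. The sketch below is our own simplification for illustration, not Eason, Richardson, and Yu's published procedure; the measures, cut-off values, and category rules are invented.

# Hedged sketch: a toy classifier over log-derived measures (session count,
# breadth of titles, depth of use). Thresholds and rules are invented.
def classify_user(sessions, titles_consulted, fulltext_share):
    # sessions         -- number of sessions during the study period
    # titles_consulted -- distinct journal titles the user touched
    # fulltext_share   -- fraction of viewed items opened at the full-text level
    if sessions == 0:
        return "non-user"
    if sessions >= 20 and titles_consulted >= 10 and fulltext_share >= 0.5:
        return "enthusiastic"
    if sessions >= 20 and titles_consulted < 3:
        return "specialized"
    if sessions >= 5:
        return "regular"
    if fulltext_share == 0.0:
        return "searcher"        # searched but never opened an item
    return "occasional"

print(classify_user(sessions=25, titles_consulted=12, fulltext_share=0.7))  # enthusiastic
print(classify_user(sessions=2, titles_consulted=1, fulltext_share=0.0))    # searcher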

Nicholas and collaborators at the Centre for Information Behaviour and the Evaluation of Research (CIBER) believe logs can inform our understanding of users by providing “a direct and immediately available record of what people have done: not what they say they might, or would, do; not what they were prompted to say, not what they thought they did.”30

CIBER, based at University College London, has invested a great deal of research energy in the log analysis approach. Recently CIBER was part of the three-year, Institute of Museum and Library Services (IMLS)–funded project Maximizing Library Investments in Digital Collections Through Better Data Gathering and Analysis (MaxData). CIBER partnered with Carol Tenopir at the University of Tennessee to conduct an in-depth study of the long-term impact of “Big Deal” subscriptions on user information behavior. One contribution of MaxData was the development of Deep Log Analysis (DLA), a procedure Nicholas describes as a “more sophisticated form of transactional log analysis.”31 Instead of relying on data as packaged by vendors or an ILS, DLA works with raw search data, allowing “more accurate, detailed, and panoramic pictures of digital information seeking behavior” to be produced.32 The MaxData investigators acknowledge that the log data available to them provided little information about the searchers themselves, but note that it's possible to make certain generalizations about some aspects of information behavior typical of students and faculty in certain disciplines on the basis of the type of database searched. Among the session-level data analyzed were page views, length of time spent on a specific article, and methods of “bouncing” from one item to another.
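
As an illustration of why raw logs are richer than vendor-packaged statistics, the sketch below (ours, not CIBER's DLA tooling) groups raw request lines into crude sessions and derives two of the session-level measures mentioned above: page views and time spent on an item, estimated from the gap to the next request. The log lines and the thirty-minute session cut-off are invented.

from datetime import datetime

# Invented raw log lines: (IP address, timestamp, URL requested).
raw_log = [
    ("10.0.0.5", "2007-04-02 09:00:10", "/article/101"),
    ("10.0.0.5", "2007-04-02 09:15:40", "/article/102"),
    ("10.0.0.5", "2007-04-02 09:18:05", "/article/103"),
]
SESSION_GAP_MINUTES = 30  # invented cut-off: a longer silence starts a new session

events = [(ip, datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), url) for ip, ts, url in raw_log]
events.sort(key=lambda e: (e[0], e[1]))

sessions = []
for ip, ts, url in events:
    same_user = sessions and sessions[-1]["ip"] == ip
    recent = same_user and (ts - sessions[-1]["last"]).total_seconds() <= SESSION_GAP_MINUTES * 60
    if recent:
        sessions[-1]["views"].append((ts, url))
        sessions[-1]["last"] = ts
    else:
        sessions.append({"ip": ip, "last": ts, "views": [(ts, url)]})

for s in sessions:
    print(f"{s['ip']}: {len(s['views'])} page views")
    # Time on each item, estimated from the gap to the next request in the session.
    for (t1, url), (t2, _) in zip(s["views"], s["views"][1:]):
        print(f"  {url}: viewed for about {(t2 - t1).total_seconds() / 60:.0f} minutes")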

User Surveys

Of course, this approach provides insight into the “what” of information behavior, but little into why an individual spent fifteen minutes looking at one article but only three looking at another. Similarly, IP addresses provide little information about who a user is … beyond possibly distinguishing on-campus users from those logging in from a remote location. These questions were answered more completely through a survey project led by Tenopir as part of MaxData. In addition to basic demographic information, students and faculty at five universities were asked to provide information about their research habits based on the critical incident method, “a set of procedures for collecting direct observations of human behavior in such a way as to facilitate their potential usefulness in solving practical problems.”33 Tenopir and her longtime research collaborator, Donald W. King, apply the critical incident approach by asking respondents to remember and describe their “last incident of reading” an information resource.34 Through this approach Tenopir and King have made considerable contributions to researchers’ and practitioners’ understanding of user and practitioner perspectives on articles and journals—both print and electronic.

While this type of research can be extremely valuable in augmenting understanding of user needs and behavior, Tenopir acknowledges that conducting surveys is not without difficulty. Tenopir says one of her biggest challenges in conducting user-focused surveys is securing participation from busy students and faculty. Her prospective subjects receive so many requests to participate in surveys—and so much e-mail—she says, that individual requests sometimes get lost. As e-mail volume increases, e-mail–solicited survey response rates fall—in the past, paper-based surveys have gotten better response rates. She also adds that having the invitation to participate in a survey come from a prospective respondent's own campus—ideally the provost's or dean of the library's office—is tremendously helpful as it immediately lends authority and name recognition to the request.

Further complicating matters, Tenopir says that as online access becomes more seamless and transparent—in other words, better—patrons are becoming less aware that they are using library-provided resources. This increased transparency in the information-retrieval process has made it more difficult for her respondents to accurately identify, for example, the last time they accessed an article through a library-provided e-journal subscription. After all, when one is able to move seamlessly from a Google Scholar search to an article PDF—with no apparent interchange with one's home library—tracking which resources are provided by the library and which are freely available is challenging.35

The Association of Research Libraries’ Measuring the Impact of Networked Electronic Services (MINES for Libraries)36 program assesses user behavior and needs at the article level (figure 15). MINES is a brief Web-based survey that appears when a user clicks on a library-subscribed resource. In order to proceed to the resource, the user must provide limited demographic data as well as location at the time of access and reason for accessing that particular resource. While instruments like MINES can be extremely effective in collecting data from a large group of respondents with a minimum of effort, they should be deployed with caution lest librarians run the risk of irritating patrons.

Mixed Methods: Survey, Observation, Statistical Analysis

In 2006–2007, ProQuest initiated a large-scale, multiphase study of undergraduate students’ interaction with information resources. Over the course of the study, researchers observed students’ information behavior in connection with a school assignment, both in person and through a remote screen-viewing program. In an effort to provide a research environment that was as naturalistic as possible, students worked in their homes, coffee shops, and other locations of their choosing. Findings from these observations were augmented with a survey of 10,000 students in which respondents were asked questions related to the role of Google and library resources in their schoolwork. In aggregate, findings from the three phases of the project indicated that students experienced significant barriers in accessing information resources through the library. In a presentation of the project at the 2008 VALA conference, John Law, ProQuest's Director of Strategic Alliances and Platform Development, asserted that increased access to Web-based information had “shifted the balance of power in libraries to end-user researchers”37 and emphasized that in order to compete with free, Web-based information resources, libraries and vendors would need to improve both discovery of and access to the information resources they provide.

The Joint Information Systems Committee (JISC) conducted a similar large-scale study of e-book usage in the United Kingdom. During 2008–9, researchers collected a wide variety of data related to e-book usage in higher education through user surveys, server logs, circulation and sales statistics, and focus groups. In addition to assessing user attitudes and practices related to e-book usage, the JISC team was interested in exploring the financial viability of libraries’ moving toward creating larger e-book collections. In order to do this, they compared sales and circulation data for e-books and print monographs. Perhaps unsurprisingly, findings indicate that electronic-format textbooks are exceptionally popular with both students and faculty.38


Cost, Investment, and Value

According to Carol Tenopir, one of the purposes of usage assessment is to provide data for collection management: in this regard, COUNTER reports go a long way in providing necessary data for decision making and internal improvement of services and resource management. Usage as defined by COUNTER, however, speaks only to the “implied value of a resource. Libraries also must assess resources’ explicit value: ‘as a result of using/reading/accessing this resource, I was able to accomplish this action that furthers the university's mission.’” While this type of usage may constitute a relatively small percentage of overall activity, Tenopir says it is still important to assess. Database vendors frequently provide “cost-per-use” data, but beyond demonstrating that resources are being accessed, such data does little to prove actual benefits.39
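
A cost-per-use figure of the kind vendors supply reduces to a single division over a subscription price and a COUNTER-style download count, which is exactly why it demonstrates access but not benefit. A minimal sketch with invented figures:

# Invented figures; the download count stands in for a COUNTER full-text total.
annual_subscription_cost = 4500.00
fulltext_downloads = 1800

cost_per_use = annual_subscription_cost / fulltext_downloads
print(f"Cost per use: ${cost_per_use:.2f}")   # $2.50 -- says nothing about outcomes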

Demonstrating value and return on investment in library services and resources is difficult, but academic libraries are beginning to realize the need to develop models for doing so. Although special and public libraries have been involved in this type of work for some time, models developed to assess ROI in those settings generally focus on financial return (special libraries) or benefits derived from taxpayer investment (public libraries). Recently, however, research projects have been designed to assess financial return on investment in library resources. Judy Luther's recent white paper on the topic describes the development of a model to assess return on investment in electronic resources in terms of grant dollars generated. She and Paula Kaufman, dean of libraries at the University of Illinois, Urbana–Champaign, in consultation with Carol Tenopir and Donald W. King, created a survey to administer to UIUC faculty. One of the basic arguments of the study, articulated by Kaufman, was that the availability of electronic resources increased faculty efficiency, enabling them to write grant proposals more quickly. This, in turn, results in increased revenue at the university level.40 Upon completing the analysis, the researchers found a significant correlation between investment in electronic resources and successful grant applications. Luther also reports significant qualitative support of the value of electronic resources. Faculty survey respondents remarked consistently that the availability of electronic resources had increased their efficiency and productivity and changed the way they conduct research … for the better.41
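
The general shape of such a calculation (grant income that survey responses attribute to the use of library-provided e-resources, divided by the library's investment) can be sketched as below. Every quantity and the attribution rate are invented, and this is only the general shape; the model described in Luther's white paper is considerably more detailed.

# Hedged sketch of a grant-income ROI ratio; all numbers are invented and this
# is not the published UIUC model, only its general shape.
total_grant_income = 300_000_000.00        # grant income awarded in a year
share_attributed_to_library = 0.60         # survey-derived share of proposals that
                                           # relied on library-provided e-resources
library_investment = 25_000_000.00         # annual library budget

roi = (total_grant_income * share_attributed_to_library) / library_investment
print(f"Grant income returned per dollar invested in the library: ${roi:.2f}")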

While these results are significant and encouraging, researchers on the UIUC project (and a subsequent expansion of the model for testing on multiple campuses, in press) acknowledge that return on investment in grant funding is not a realistic metric for all, or most, schools, nor is it the only measure of value in larger research institutions. Kaufman and Tenopir are building on these projects in the current Value, Outcomes, and Return on Investment of Academic Libraries (or “Lib-Value”) project, funded by the Institute of Museum and Library Services (IMLS). The objective of Lib-Value is to identify models for assessing and demonstrating the value of or return on investment in academic libraries’ resources, facilities, and services.


Conclusions and Future Steps

The world economy is precarious. State governments are facing dramatic budget shortfalls and resorting to drastic measures to maintain basic services … a category that, for many, seems not to include the library. In the early months of 2010, the dire financial straits of communities nationwide have been painfully evident: public library branches have closed, librarians and staff have faced mass layoffs and furloughs, and collection budgets have been slashed to the quick. Meanwhile, publishers and vendors continue to increase subscription prices, and the needs of constituents of all types of libraries—public, special, school, academic—are greater than they have been in recent memory. Libraries can no longer “justify [their] existence in terms of the extent of resources available, emulating the Alexandrian ideal.”42 Librarians are being called upon to demonstrate the good they do in both the short and the longer term. It is essential that librarians investigate new methods for demonstrating the quality of their services and resources. Usage statistics as reported by COUNTER are nothing more than inputs—the number of people who logged on—and outputs—the number of articles they downloaded. Johan Bollen, Project MESUR principal investigator, emphasizes that usage statistics should not be confused with usage data, which provides information about “where users came from before they interacted with a particular resource, where they went to after that interaction, at what time they interacted with the resources, what type of interactions they engaged in (full text download, abstract view, etc.), and many other very important structural features of your actual usage that will help you better understand your users and their needs. With usage statistics you're throwing all of that information away, to arrive at simple indicators like total usage per journal per month that may be quite useful but really only a mere shadow of what could be possible.” Even if libraries lack the resources available to a company like ProQuest for conducting research, it may be possible to consult the raw usage data from which usage statistics are gathered, and which “contains very important information that you are discarding when you rely on usage statistics.”43

We hope that presenting these models might inspire readers to consider additional pathways to assessing e-resource usage. Lest the prospect seem overwhelming, it's important to note that large-scale research projects needn't be undertaken on a continual, or even yearly, basis. It also may be possible to partner with local students (regardless of the type of library) to design and conduct e-resource usage assessment. We realize that electronic resources librarians constitute our likely audience for this publication and that you're already asked to do far too much.44 We want to emphasize how important it is for the library in its entirety to be involved in this kind of assessment. After all, for many of today's users, the electronic library is the library. In conclusion, we'd like to add these words from David Nicholas of CIBER: “we are desperately in need of outcomes data, hard information which says that, if you attend this literacy programme, if you really search the library's databases, and don't just use Google, it will make a difference and you will end up with a higher grade. We need this data because you are not going to get funding for resources just because it sounds like a good idea.”45 It's helpful, though, to consider this mandate a little differently … perhaps with a positive spin? Nicholas goes on to say that “this is the sort of research that could make librarians very useful, empower them.”46


Notes
1. Charles Martell, “The Elusive User: Changing Use Patterns in Academic Libraries 1995 to 2004,” College and Research Libraries 68, no. 5(Sept. 2007): 435.
2. Thomas D. Wilson, “Information Needs and Uses: Fifty Years of Progress?” in Fifty Years of Information Progress: A Journal of Documentation Review, ed. B.C. Vickery, 15–51 (London: Aslib, 1994), consulted online at http://informationr.net/tdw/publ/papers/1994FiftyYears.html (accessed June 22, 2010).
3. Michael J. Kurtz, Guenther Eichhorn, Alberto Accomazzi, Carolyn Grant, Markus Demleitner, Edwin Henneken, and Stephen S. Murray, “The Effect of Use and Access on Citations,” Information Processing & Management 41, no. 6 (Dec. 2005): 1396.
4. Pamela Effrein Sandstrom, “An Optimal Foraging Approach to Information Seeking and Use,” Library Quarterly 64, no. 4 (Oct. 1994): 422.
5. Eugene Garfield, “When to Cite,” Library Quarterly 66, no. 4 (Oct. 1996): 451–452.
6. Carolyn O. Frost, “The Use of Citations in Literary Research: A Preliminary Classification of Citation Functions,” Library Quarterly 49, no. 4 (Oct. 1979): 400.
7. Ibid., 401.
8. Bluma C. Peritz, “Opinion Paper: On the Objectives of Citation Analysis: Problems of Theory and Method,” Journal of the American Society for Information Science 43, no. 6 (July 1992): 449.
9. Patricia A. Hooten, “Frequency and Functional Use of Cited Documents in Information Science,” Journal of the American Society for Information Science 42, no. 6 (July 1991): 398.
10. W. Leslie Peat, “The Use of Research Libraries: A Comment About the Pittsburgh Study and Its Critics,” Journal of Academic Librarianship 7, no. 4 (1981): 231.
11. Marilyn Domas White and Peiling Wang, “A Qualitative Study of Citing Behavior: Contributions, Criteria, and Metalevel Documentation Concerns,” Library Quarterly 67, no. 2 (April 1997): 197.
12. Alexander Sandison, “Densities of Use, and Absence of Obsolescence, in Physics Journals at MIT,” Journal of the American Society for Information Science 25, no. 3 (May/June 1974): 172.
13. M. H. MacRoberts and B. R. MacRoberts, “Another Test of the Normative Theory of Citing,” Journal of the American Society for Information Science 38, no. 4 (July 1987): 305.
14. Ibid.
15. Eugene Garfield, “The History and Meaning of the Journal Impact Factor,” JAMA 295, no. 1 (Jan. 4, 2006): 90.
16. Carl Bergstrom, “Eigenfactor,” College & Research Libraries News 68, no. 5 (May 2007): 314.
17. Philip M. Davis, “Eigenfactor: Does the Principle of Repeated Improvement Result in Better Estimates Than Raw Citation Counts?” Journal of the American Society for Information Science and Technology 59, no. 13 (Nov. 2008): 2186–2188.
18. Eigenfactor Website, http://eigenfactor.org (accessed June 18, 2010).
19. Marko A. Rodriguez, Johan Bollen, and Herbert Van de Sompel, “A Practical Ontology for the Large-Scale Modeling of Scholarly Artifacts and Their Usage,” in Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries, ed. Ray Larson, Edie Rasmussen, Shigeo Sugimoto, and Elaine Toms, 278–287 (New York: ACM, 2007), consulted online at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.7660&rep=rep1&type=pdf (accessed June 22, 2010).
20. Johan Bollen and Herbert Van de Sompel, “Usage Impact Factor: The Effects of Sample Characteristics on Usage-Based Impact Metrics,” Journal of the American Society for Information Science and Technology 59, no. 1 (2008): 136–149.
21. Johan Bollen, Herbert Van de Sompel, Aric Hagberg, and Ryan Chute, “A Principal Component Analysis of 39 Scientific Impact Measures,” PLoS ONE 4, no. 6, e6022 (June 29, 2009): 2–3, http://dx.doi.org/10.1371/journal.pone.0006022 (accessed June 18, 2010).
22. Rodriguez, Bollen, and Van de Sompel, “A Practical Ontology.”
23. Johan Bollen, Herbert Van de Sompel, Joan A. Smith, and Rick Luce, “Toward Alternative Metrics of Journal Impact: A Comparison of Download and Citation Data,” Information Processing & Management 41, no. 6 (Dec. 2005): 1420.
24. Mark Patterson, “PLoS Journals: Measuring Impact Where It Matters,” PLoS Blog, July 13, 2009, Public Library of Science website, https://www.plos.org/cms/node/478 (accessed June 18, 2010).
25. “ PLoS ONE Journal Information,” Public Library of Science website, www.plosone.org/static/information.action (accessed June 17, 2010).
26. “Article-Level Metrics Information,” Public Library of Science website, www.plosone.org/static/almInfo.action (accessed June 17, 2010).
27. Ibid.
28. Thomas A. Peters, “The History and Development of Transaction Log Analysis,” Library Hi Tech 11, no. 2 (1993): 43.
29. Ken Eason, Sue Richardson, and Liangzhi Yu, “Patterns of Use of Electronic Journals,” Journal of Documentation 56, no. 5 (2000): 501.
30. David Nicholas, Paul Huntington, Tom Dobrowolski, Ian Rowlands, Hamid R. Jamali, and Panayiota Polydoratou, “Revisiting ‘Obsolescence’ and Journal Article ‘Decay’ through Usage Data: An Analysis of Digital Journal Use by Year of Publication,” Information Processing & Management 41, no. 6 (Dec. 2005): 1445.
31. David Nicholas, “If We Do Not Understand Our Users, We Will Certainly Fail,” in The E-Resources Management Handbook (Newbury, UK: UKSG, Feb. 29, 2008): 122, http://uksg.metapress.com/app/home/content.asp?referrer=contribution&format=2&page=1&pagecount=8 (accessed June 17, 2010).
32. David Nicholas, Paul Huntington, Hamid R. Jamali, and Carol Tenopir, “What Deep Log Analysis Tells Us about the Impact of Big Deals: Case Study OhioLINK,” Journal of Documentation 62, no. 4 (2006): 486.
33. John C. Flanagan, “The Critical Incident Technique,” Psychological Bulletin 51, no. 4 (July 1954): 327.
34. Alex Lankester, “Tenopir: Top Tips on User Surveys,” Library Connect 4, no. 1 (Jan. 2006): 2.
35. Carol Tenopir, interview by the authors, May 15, 2010.
36. Brinley Franklin and Terry Plum, “Successful Web Survey Methodologies for Measuring the Impact of Networked Electronic Services (MINES for Libraries),” IFLA Journal 32, no. 1 (2006): 28–40.
37. John Law, “Observing Student Researchers in Their Native Habitat,” in VALA 2008: Libraries/Changing Spaces, Virtual Places: Conference Proceedings, 14th Biennial Conference & Exhibition, 5–7 February 2008, Melbourne Convention Centre, Australia (Croydon, Vic., Australia: VALA Libraries, Technology and the Future Inc., 2008), 1.
38. JISC, JISC National e-Books Observatory Project: Final Report (London: Joint Information Systems Committee, 2009), 52, http://www.jiscebooksproject.org/reports/finalreport.
39. Tenopir interview.
40. Paula T. Kaufman, “The Library as Strategic Investment: Results of the Illinois Return on Investment Study,” Liber Quarterly: The Journal of European Research Libraries 18, no. 3/4 (2008): 424–436.
41. Judy Luther, University Investment in the Library: What's the Return? A Case Study at the University of Illinois at Urbana–Champaign, White Paper 1 (San Diego: Library Connect, 2008), 10–11.
42. Martha Kyrillidou, “From Input and Output Measures to Quality and Outcome Measures, or, from the User in the Life of the Library to the Library in the Life of the User,” Journal of Academic Librarianship 28, no. 1/2 (Jan./Feb. 2002): 43.
43. Johan Bollen, e-mail conversation with the authors, June 6, 2010.
44. Rachel A. Fleming-May and Jill E. Grogg, “Finding Their Way: Electronic Resources Librarians’ Education, Training, and Community” (paper presented at the Electronic Resources & Libraries conference, Austin, TX, Feb. 1, 2010).
45. Margaret Adolphus, “An Interview with David Nicholas,” Emerald website, available to Emerald members at http://info.emeraldinsight.com/librarians/info/interviews/nicholas.htm (accessed June 22, 2010).
46. Ibid.

Figures

[Figure ID: fig10]
Figure 10 

The logo for Ask Us 24/7, a virtual chat “service of cooperating New York State libraries and library systems, including the New York 3Rs Library Councils.” www.askus247.org.



[Figure ID: fig11]
Figure 11 

College & Research Libraries’ detailed Eigenfactor Report for 2008. In addition to providing basic information about the journal (such as publisher and first year of publication), the detailed report provides the Eigenfactor Score, the Article Influence Score, and the ISI Impact Factor. http://eigenfactor.org/detail.php?year=2008&jrlname=COLL%20RES%20LIBR&issnnum=0010-0870.



[Figure ID: fig12]
Figure 12 

Diagram of Project MESUR's process for “extraction of journal clickstream data from article level log data.” (Johan Bollen, Herbert Van de Sompel, Aric Hagberg, Luis Bettencourt, Ryan Chute, Marko A. Rodriguez, and Lyudmila Balakireva, “Clickstream Data Yields High-Resolution Maps of Science,” PLoS ONE 4, no. 3, e4803 (March 11, 2009): figure 2, http://dx.doi.org/10.1371/journal.pone.0004803 [accessed June 22, 2010]).



[Figure ID: fig13]
Figure 13 

Diagram of Project MESUR's “Map of Science Derived from Clickstream Data.” Each circle represents an individual journal; colors map to subject classifications derived from the Getty Institute's Art and Architecture Thesaurus (AAT), which the MESUR team chose to use in order to resolve discrepancies between ISI's Journal Citation Reports Classifications and Dewey Decimal Classifications. Lines between circles represent “clicks,” or searcher movement between journals (Johan Bollen, Herbert Van de Sompel, Aric Hagberg, Luis Bettencourt, Ryan Chute, Marko A. Rodriguez, and Lyudmila Balakireva, “Clickstream Data Yields High-Resolution Maps of Science,” PLoS ONE 4, no. 3, e4803 (March 11, 2009): figure 5, http://dx.doi.org/10.1371/journal.pone.0004803 [accessed June 22, 2010]). Johan Bollen, MESUR principal investigator, had this to say about the map in a June 6, 2010, e-mail conversation with the authors: “Users behave in ways that may diverge quite strongly from preconceptions of what they ‘should’ do. Our maps demonstrate this phenomenon quite clearly…. If users believe mathematics (as a domain) is closer to statistics and other social sciences than it is to physics, then that belief will be manifested in their usage and thus your usage data. When you organize your resources or services, the question then becomes: will you do so according to what *you* think should be or what your users are actually telling you?”



[Figure ID: fig14]
Figure 14 

Screenshot of an article from PLoSMedicine showing article-level metrics including page views, downloads, citations, and ratings by readers.



[Figure ID: fig15]
Figure 15 

Screenshot of StatsQUAL, which is a gateway to library assessment tools that describe the role, character, and impact of physical and digital libraries. www.digiqual.org.



