Chapter 3. Issues, Controversies, and Opportunities for Altmetrics

For academic librarians attempting to assess the potential of the present-day altmetrics landscape, it is just as important to consider the larger discussions that have emerged surrounding the field of altmetrics as it is to evaluate the strengths and weaknesses of specific altmetrics tools.

As mentioned briefly in chapter 1, the general altmetrics movement has alternately suffered and benefited from a number of exaggerations that have circulated about its aims and goals. While misunderstandings are inevitable in any effort to change the way that academe approaches sensitive topics like impact, promotion, funding, and tenure, some of these criticisms have pointed toward genuine weaknesses that the altmetrics movement has struggled to address, or toward unique strengths on which it is attempting to capitalize.

In this chapter, we look at some of the most important issues to come out of the last five years of altmetrics discussion, including the controversies and opportunities that are most poised to affect its ultimate adoption, negatively or positively, across the wider expanse of higher education.

Controversies Surrounding Altmetrics

Gaming

Of all the criticisms that the altmetrics field has had to weather since its 2010 introduction, the most common by far is the suggestion that it is highly susceptible to “gaming” (see figure 3.1) and thus is a poor match for the rigorous standards of academic evaluation.

Gaming in this context refers to the practice of unscrupulously manipulating a system or set of data in order to produce results that fit a user’s desired outcome. Because altmetrics are based explicitly on the collection of web-based data, which may include interactions between research and the general public, critics have accused the field of lacking the security of citation-based approaches to calculating academic impact, which are inevitably more limited in scope and slower to accumulate value.

To the credit of such critics, it’s indisputably true that gaming does occur across the Social Web, from small disingenuous “Like” exchanges among well-meaning friends and family to the wholesale purchase of fake followers (figure 3.2), kudos, ratings, or other indicators of online social capital. One need only think back as far as December 2014, when Instagram instigated a massive purge of spam accounts and bots, costing à la mode celebrities like Justin Bieber (3.5 million followers) and Kim Kardashian (1.5 million followers) huge portions of their audiences.1 Those in the business of social media have openly acknowledged how common the purchase of fake followers is, particularly on sites like Twitter, where 1,000 new followers can be had for as little as a few dollars.2

Thus, from a general information perspective, there is always a definite risk in assuming the validity of information gleaned from social portions of the Internet, especially when user interactions can be translated to some form of real-world profit. However, the gaming of altmetrics is arguably a topic that requires a slightly more nuanced perspective on the credibility of online information. For instance, we might ask ourselves, are researchers really as likely as celebrities to manipulate metrics in order to promote themselves? What examples do we have of researchers doing this to date? And to the extent that these incidents do or can happen, what measures, if any, have altmetrics product developers taken to combat interference in their ultimate calculations?

As it turns out, attempts to game altmetrics—that is, to increase the perceived impact of research outputs or researchers via the Social Web—are both much less common and more difficult than many critics have assumed. In fact, most of what can be found today on the topic of gaming altmetrics comes directly from altmetrics advocates, who seem to discuss the issue regularly as part of explaining their respective approaches to gathering and measuring online activity (figure 3.3). For instance, Jennifer Lin of PLOS writes in a paper given in 2012 at the altmetrics12 ACM Web Science Workshop:

In our [article-level metrics] advocacy efforts, we have learned that gaming is a widespread concern of researchers, institutional decision-makers, publishers, and funders. Indeed, one of the hallmark features of altmetrics is in fact the difficulty of gaming a system comprised of a multi-dimensional suite of metrics, setting it apart from the impact factor’s vulnerabilities.3

In a 2013 company blog post, appropriately titled “Gaming Altmetrics,” Euan Adie, founder of Altmetric, also situates the idea of gaming altmetrics in the context of general efforts by a small number of researchers to game academic metrics:

Given that we know a small minority of researchers already resort to manipulating citations, it’s not much of a leap to wonder whether or not an unscrupulous author might spend $100 to try and raise the profile of one of their papers without having to do any, you know, work. How much of this goes on? How can we spot it? What should our reaction be?4

The primary defense of altmetrics against accusations of gaming vulnerability therefore comes down to three main points. First, efforts to game the system of academic merit are already a part of the culture of higher education and include the same players who already try to inflate citation counts to boost their Impact Factors and other bibliometrics credentials. Second, the number of researchers who actually do this is relatively small—nowhere near what we see happening across Instagram and Twitter in general, when cultural capital is really on the line. And third, rather than ignore these warnings and possibilities, most altmetrics providers are taking pains to create safeguards within their already complex systems for assigning relative impact.

This third reason is precisely why altmetrics harvesters are very open about the data sources they include in their calculations and why they include them (e.g., highly auditable or scholarly information). It’s also worth noting that in gathering so much data about researchers’ online activity, altmetrics providers have become adept at identifying unusual patterns that suggest intentional or unintentional gaming.5 This knowledge, combined with the availability of new technology to detect spam accounts, bots, and fake reviews, has reduced the gaming criticism of altmetrics from a major topic of discussion to a reasonably small acknowledgement of risk.6 Arguably the greater concern for the future of altmetrics is the encouragement of scholarly activities that do not game the system—such as opening up honest conversations about the ways researchers can consciously yet scrupulously promote their work in online social spaces like Mendeley, SSRN, ResearchGate, science blogs, and, yes, public social networks like Twitter, too.7
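
Providers do not publish the details of these safeguards, but a deliberately simplified, hypothetical sketch can make the general idea concrete. In the example below, all field names and thresholds are invented; the point is only that a burst of mentions from brand-new, low-reach accounts looks different from organic attention.

```python
# A minimal, hypothetical sketch of the kind of pattern check a harvester
# might run; real providers do not publish their detection rules, and the
# names and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Mention:
    account_age_days: int  # age of the account that mentioned the paper
    follower_count: int    # reach of that account

def looks_gamed(mentions: list[Mention],
                min_age_days: int = 30,
                min_followers: int = 10,
                threshold: float = 0.5) -> bool:
    """Flag a paper when most of its mentions come from very new,
    low-reach accounts -- a pattern consistent with purchased activity."""
    if not mentions:
        return False
    suspect = sum(1 for m in mentions
                  if m.account_age_days < min_age_days
                  and m.follower_count < min_followers)
    return suspect / len(mentions) > threshold
```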

Correlation with Bibliometrics

Another area in which altmetrics has faced controversy is its correlation with bibliometrics, or more specifically, the lack thereof. As reviewed in chapter 1, bibliometrics and altmetrics share many of the same intentions in seeking to analyze scholarship quantitatively, although their definitions of scholarship and methods of analysis diverge significantly. Nevertheless, with altmetrics offering a much more immediate picture of scholarly impact than citation-based bibliometrics, researchers have naturally been curious about whether altmetrics can be used as a predictor of future citations, which are obviously desirable as a longer-term metric of relative scholarly success.

Several studies have been conducted over the years to explore this question, most of which have proved frustratingly inconclusive, contradictory, or unpromising. For instance, a 2013 study of articles from the medical and biological sciences conducted by Thelwall and his colleagues found that six out of eleven altmetrics (Tweets, Facebook wall posts, research highlights, blog mentions, mainstream media mentions, and forum posts) were associated with citation counts, but that “the methods used do not shed light on the magnitude of any correlation between the altmetrics and citations (i.e., the correlation effect size is unknown).”8 By contrast, a 2014 study of 20,000 Web of Science articles, conducted by Zahedi, Costas, and Wouters and published in Scientometrics, found a moderate correlation between Mendeley readership metrics (figure 3.4) and citation indicators (r = 0.49) but also concluded that other altmetrics provided only “marginal information.”9 Moreover, the sheer absence of altmetrics data for many articles—often because those articles never surface in key altmetrics-generating networks or databases—has led many studies on this subject to treat any attempt at finding correlation between altmetrics and bibliometrics as largely premature.10
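
For readers curious about what such an analysis looks like in practice, the toy example below computes the same kind of rank correlation these studies report. The reader and citation counts are invented; only the method (Spearman’s rho, via SciPy) reflects common practice in the literature.

```python
# Toy correlation analysis: do articles with more Mendeley readers also
# tend to have more citations? All counts below are invented.

from scipy.stats import spearmanr

mendeley_readers = [12, 45, 3, 88, 20, 0, 51, 7]
citations        = [ 5, 30, 1, 61, 11, 2, 25, 4]

rho, p_value = spearmanr(mendeley_readers, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A rho near 0.5, like the Mendeley result reported by Zahedi, Costas,
# and Wouters, indicates a moderate rather than strong relationship.
```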

Possible limits and explanations aside, the fact that many altmetrics indicators do not seem to correlate with citation indicators has led to uncertainty among some researchers, who continue to feel pressure to provide citation-based evidence of impact to evaluators, yet who may not have sufficient time to let such impact manifest before facing an important deadline. The realization that altmetrics cannot precisely fill this gap may thus be interpreted by some as a failure on the part of the movement. However, the truth is almost certainly more complicated, rooted in the inherent differences between how the altmetrics field understands scholarly impact and how the citation-based methods of bibliometrics implicitly define it. As Priem, Piwowar, and Hemminger suggested as early as 2012 in the conclusion to an article that examined 24,000 articles from PLOS, “Correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone.”11 Accepting this argument requires both scholars and evaluators to endorse a profound shift in the way that academia has looked at scholarly impact metrics for decades. It is a change that is coming, but coming so slowly that, without further help, it puts the near-term adoption of altmetrics in critical circles like higher administration at risk.

Inclusion of Metrics from Public Social Media

The third major issue over which altmetrics has encountered significant challenges is its typical inclusion of metrics from nonscholarly social media tools, such as Twitter, Facebook, and YouTube (figure 3.5), in addition to metrics derived from more academically aimed peer networks like Mendeley, ResearchGate, and SSRN.

As stated in chapter 2, nonacademic social media statistics are currently used in altmetrics because of the potentially valuable connections they offer between research, researchers, and the general public. However, critics of their inclusion have pointed out a problem: although many young and media-savvy researchers are active on these networks, a large number of influential researchers are not—an absence that could have a detrimental effect on the altmetrics associated with their research, or with research in certain areas of expertise. This observation leads to a perhaps even more relevant criticism of metrics drawn from non-academic-peer networks: the members of networks populated primarily by the general public are much less likely to take interest in esoteric fields of research than in research that connects to popular topics of discussion like climate change or weight loss.

A 2014 study published in the medical journal Circulation would seem on its face to add weight to this criticism. In it, researchers tracked the thirty-day page views of 243 Circulation articles while specifically attempting to promote the findings of about half the articles (randomized) via the journal’s Facebook and Twitter accounts. The authors concluded that there was “no difference in median 30-day page views” between the articles that were specifically promoted via their social media strategy and the articles in the control group.12 The Circulation study is particularly interesting, as it contradicts the results of previous studies that tracked the effects of promotion on the altmetrics of nonrandomized articles and found a positive relationship between the two, a fact noted by The Scholarly Kitchen blog contributor Phil Davis in a post about the study.13 However, in the same post, Davis also astutely notes that “Cardiovascular researchers (and other bench and clinical researchers) are very different than computational biologists, social media researchers, and those who spend their days glued to their chairs and computers.”

This observation—that public social media metrics are likely more relevant to fields with compatible communication habits, methods, or researcher demographics—is both a convincing retort to, and a valid critique of, the continued use of nonacademic metrics in altmetrics calculations and reports. Either way, it points to the need for more carefully refined altmetrics research. As Davis writes in another part of his post, “[The study’s conclusion] questions whether prior studies were successful in isolating and measuring the effects of social media.”14 In the future, we are likely to see more intense discussions about the appropriate context for using public social media metrics alongside other altmetrics, as well as more sophisticated research into the effects of promotion on the metrics derived from non-scholarly-peer networks and into the changing demographics of social media users within the world of academia (see figure 3.6).

Opportunities Surrounding Altmetrics

Despite the degree of attention paid thus far to the criticisms and controversies around altmetrics, it’s fair to say that much, if not most, of the buzz around the field for the last few years has been both positive and promising. Indeed, for academics, administrators, and funders in many areas, the field of altmetrics continues to present a significant and unique opportunity to fill gaps in scholarly impact assessment that have long been in need of attention and that have disadvantaged scholarly outputs that do not fit the mold of citation-based impact. In this section, we look at three of the most notable opportunities presented by altmetrics and the progress of developers and users in making each one a reality.

Article-Level Impact

Arguably one of the most important opportunities opened up by altmetrics for researchers and, indeed, administrators is the uncoupling of the scholarly article from the constraints of the scholarly journal—at least in terms of impact (figure 3.7).

From a bibliometrics perspective, for instance, journal articles are almost always evaluated based on three factors: times cited (i.e., by other articles), journal Impact Factor, and qualitative reviews. However, because published articles typically take at least two years to start generating citation momentum and because fewer articles are reviewed in depth than are published each year by scholars, Impact Factor often becomes the primary substitute for article “quality” in evaluations—this despite the fact that Impact Factor makes no more claims to measure quality than do altmetrics. To base the determination of a specific article’s quality, or even just its importance, mostly on a metric for the average number of citations generated by articles published by the same journal over the past two years is a questionable practice on many levels and has led to widespread criticism of the use of Impact Factor in researcher evaluations (figure 3.8).
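
Because this venue-level calculation sits at the heart of the critique, it may help to see it spelled out. The sketch below works through the standard two-year formula with invented numbers; it is an illustration of the arithmetic, not a reproduction of any journal’s actual figures.

```python
# The two-year Impact Factor for year Y: citations received in Y to items
# the journal published in Y-1 and Y-2, divided by the number of citable
# items published in those two years. Numbers here are invented.

citations_in_2014_to_2012_2013_items = 600
citable_items_published_2012_2013 = 150

impact_factor_2014 = (citations_in_2014_to_2012_2013_items
                      / citable_items_published_2012_2013)
print(impact_factor_2014)  # 4.0 -- an average, not a per-article measure
```

Note that nothing in this calculation describes any individual article, which is precisely the objection.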

Into this debate enter article-level metrics (ALMs), or the array of metrics collected around articles in order to show how interest in a specific article builds over time. Although the concept of ALMs predated the birth of altmetrics by several years, ALMs are related to altmetrics in that they include data sources that go beyond traditional limits, such as usage statistics, comments, ratings, social media mentions, and appearances on notable scientific blogs. To use the explanation offered by the online primer on ALMs published by SPARC, “The attempt to incorporate new data sources to measure the impact of something, whether that something is an article or a journal or an individual scholar, is what defines altmetrics. . . . ALMs are about the incorporation of altmetrics and traditional data points to define impact at the article level.”15
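
To make this definition concrete, the hypothetical sketch below shows the kinds of data points an ALM record might bring together for a single article. The field names, values, and placeholder DOI are invented for illustration and do not reflect any publisher’s actual schema.

```python
# Hypothetical ALM record combining traditional and altmetric data points
# for one article. All names and numbers below are invented.

article_alm = {
    "doi": "10.1234/example.0001",      # placeholder identifier
    "traditional": {
        "citations": 14,                # e.g., counts from a citation index
    },
    "usage": {
        "html_views": 2350,
        "pdf_downloads": 410,
    },
    "altmetrics": {
        "mendeley_readers": 97,
        "tweets": 33,
        "blog_mentions": 2,
    },
}

# A report might summarize social attention alongside citations:
total_social = sum(article_alm["altmetrics"].values())
print(f"{article_alm['doi']}: {article_alm['traditional']['citations']} "
      f"citations, {total_social} social mentions")
```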

With their attractive combination of metrics from the print and online worlds, ALMs have helped pioneer the idea that a research output’s impact can and should be measured primarily by its own quantitative information, not by that of the venue in which it appears. The success of this vision can be seen not only in the growth of ALM-friendly journals, like those published online by PLOS, but also in the proliferation of ALM-generating archives, such as the Cornell-based arXiv.org (figure 3.9), that make pre- and post-publication articles accessible. By allowing researchers to gather feedback and additional information about the use and distribution of their written work, these online repositories have already expanded researchers’ options for understanding the near-term impact of their articles—all without having to rely on the crutch of venue-based citation averages. The result is a form of scholarly independence on which the field of altmetrics itself has capitalized by promoting metrics for works outside the journal article format that can still garner interactions similar to online articles.

(Multi-)Disciplinary Altmetrics

As mentioned in the section above, another opportunity for which altmetrics has been widely touted is its applicability to a wide variety of scholarly outputs, which makes it theoretically suitable for measuring impact across the disciplines in ways previously frustrated by bibliometrics.

As many tenure-track faculty can attest, impact is a tricky topic to pin down within a given field of study, let alone across multiple fields or disciplines. Consequently, attempts to define impact quantitatively have been unpopular with scholars in many nonquantitative fields, particularly in the arts and humanities, but also in some of the social sciences and theoretical sciences. Still, pressure on university campuses and from funding organizations to present “objective” data regarding researcher impact in addition to standard qualitative evidence has made it difficult for scholars undergoing evaluation to fully ignore the question of quantitative impact measurement.

To make things even more difficult, the academic fields that resist quantitative methods of measuring impact are also typically those that put the least emphasis on the production of journal articles as a standard of researcher productivity. Instead, these areas emphasize outputs like monographs, performances, edited works, and digital research projects. And while this emphasis is entirely valid from a general scholarly standpoint, it nonetheless results in a “weak citation culture” for the fields in question, which frustrates scholars in those fields who are in search of meaningful citation-based metrics. By contrast, researchers in fields with “strong” citation cultures, like engineering and the biomedical sciences, find themselves not only with greater availability of citation-based metrics like Impact Factor, but also with higher numbers of citations for their articles on average. Thus, the difference between a “good” and a “bad” Impact Factor for a researcher in genetics may be up to 20 points, while for a scholar in history, the difference may be as little as 1 or 0.5 (figure 3.10).

The opportunity here, of course, is that altmetrics is not exclusively concerned with definitions of impact that can be measured only through the analysis of article citations. By operating on a level that transcends the idea of citation culture, altmetrics opens up a path to quantitative impact for any scholar whose work can be represented in some capacity on the web. For qualitative researchers, this can mean anything from views, downloads, and saves of textual scholarship (e.g., articles, book chapters, essays, slide decks) to external Tweets, comments, and ratings of scholarly events (e.g., performances, presentations, exhibitions). What’s more, as we saw in chapter 2, altmetrics can also cover works of special relevance to researchers who are already part of strong citation cultures, for example, by collecting information about the use of datasets, code, and pre-publication article drafts.

The opportunity for altmetrics to corner the market on metrics for researchers in the arts, humanities, and interdisciplinary areas while at the same time serving unmet needs for researchers in the sciences and social sciences is one of its greatest selling points. Still, in practice, the field of altmetrics has struggled with some of the same problems as bibliometrics in getting qualitative scholars to participate sufficiently in the movement’s culture and practices. For instance, in a 2014 study of “humanities-oriented articles and books published by Swedish universities during 2012,” Swedish researcher Björn Hammarfelt found that coverage of humanities publications remained substantially lacking in key altmetrics-endorsed peer networks, with only 61 percent of the outputs represented via Mendeley readership and 20 percent via Twitter mentions.16 Another study conducted the same year by Mohammadi and Thelwall, which looked specifically at Mendeley coverage of social sciences and humanities publications from 2008 (as pulled from Web of Science), was even less optimistic. It found that 44 percent of social science articles published in 2008 were represented via Mendeley readership, versus only 13 percent of humanities articles from the same period.17

While some of these gaps in humanities coverage might be explained by the dates of the articles examined—from 2008, in the second study—or by the country of publication—Sweden, in the first—both studies nevertheless point to a problem in the adoption of seemingly discipline-agnostic academic peer networks like Mendeley by scholars outside of the sciences and social sciences. Additionally, for all the touting of altmetrics as a means of getting beyond the journal article format, instances of altmetrics actually being used productively for impact measurement and evaluation still tend to focus heavily on articles. As Hammarfelt concludes in his 2014 article, “The possibilities that altmetric methods offer to the humanities cannot be denied but, as shown in this paper, there are several issues that have to be addressed in order to realize their potential.”18 Among these issues is the need for more liaisons and advocates to bring awareness of altmetrics to researchers across the full disciplinary spectrum, as we will discuss in chapter 4.

Public Funding Agencies and Altmetrics

Funding is a third major area in which altmetrics have had an opportunity to shine, in that their short-term, web-based measures of impact have the potential to be highly attractive to agencies that are connected to the interests of the general public. Evidence of funding agencies’ growing interest in the power of altmetrics can be seen in several areas of the field, starting with the receipt of major grants by multiple altmetrics organizations, including the founders of Impactstory (National Science Foundation and Alfred P. Sloan Foundation); the partnership of the University of California Curation Center, PLOS, and DataONE (National Science Foundation); and the researchers behind NISO’s Altmetrics Initiative (Alfred P. Sloan Foundation), to which we will return later.19

More attractive to most researchers, however, is the suggestion that altmetrics can be useful both in applying for major grants and in justifying requests for new or renewed funding. In January 2013, for instance, the NSF changed the biographical sketch portion of its new grants application (see figure 3.11) to allow principal investigators to list their research “products”—a term that would seem to open the door to outputs beyond the standard scholarly article. In a short editorial written for Nature in the same month, well-known altmetrics advocate Heather Piwowar pointed out the potential relationship between this decision and the use of altmetrics in funding requests: “Even when applicants are allowed to include alternative products in grant applications, how will reviewers know if they should be impressed? . . . Many altmetrics have already been gathered for a range of research products.”20

Because tracking the use of altmetrics on grant applications is naturally difficult, it is hard to say how many researchers have taken Piwowar’s advice to heart and incorporated altmetrics into their applications for new or renewed funding. That said, a small number of researchers have recently begun to speak up about the practice, like Spanish ecologist Fernando Maestre, in a November 2014 post on his blog Maestre Lab titled “How I Use Altmetrics Data in My Proposals.”21 Maestre describes using evidence of his research impact from Altmetric, Faculty of 1000, and academic blogs alongside citation-based data from ISI Web of Science and Google Scholar. His example mirrors advice offered to researchers in an essay published almost simultaneously in the online journal PLOS Biology by three members of the funding organization Wellcome Trust. “ALMs and altmetrics offer research funders greater intelligence regarding the use and reuse of research, both among traditional academic audiences and stakeholders outside of academia,” explain the authors, two of whom are members of Wellcome Trust’s evaluation team. “While conventional citation data will continue to play a major role in research evaluation, the new metrics have the potential to provide a valuable complement to the insights revealed by traditional bibliometric indicators.”22

The potential for altmetrics to show connections between academic research and nonacademic populations therefore holds strong appeal for funders whose own evaluations often stress the bigger-picture impact of their awarded grants. Nevertheless, with the exact nature of the connection between altmetrics and wider audiences imprecise at best, it’s important to stress that funders will almost certainly continue to require significant additional evidence of a strong public connection before altmetrics can become more than an ancillary bonus in the competition for research funding. What altmetrics can do, however—as we saw in chapter 2—is aid in the discovery of that evidence, for example, by surfacing specific comments or blog posts in the course of providing a quantitative perspective on online engagement. This again is a point of clarification that can and should be passed along to researchers, both to encourage the greater use of altmetrics in funding applications and to temper expectations of what altmetrics information can accomplish by itself. Funders, too, will need to become a more vocal part of the conversation for this opportunity to be fully realized—something that may grow more likely as altmetrics appear on more applications or as pressure mounts from influential leaders at the junction of the research and altmetrics communities.

The Future of Altmetrics: Standards and Institutions

Having now reviewed some of the major controversies and opportunities at play in the current landscape of altmetrics, two questions inevitably arise: First, what’s next for the future of this developing field, and second, what is being done to shift the balance of issues away from the risks of altmetrics and toward their proposed rewards?

Speculating about the future of altmetrics is itself a bit risky—but based on the facts at hand, it seems probable that altmetrics will continue to fight hard on certain issues for the next several years, such as the onboarding of more researchers outside of the sciences and social sciences and the demographic problems inherent in investing in technologies that favor certain tools and privileged groups, à la the digital divide. In addition, despite the gigantic leap that the field of altmetrics has made in developing new products and garnering interest from key groups like funders and institutions, it remains strangely unclear whether altmetrics is still operating somewhere within the Peak of Inflated Expectations, the second phase of the famous Gartner Hype Cycle.23 Is the Trough of Disillusionment still to come? Or are we through the worst and now working up the slow Slope of Enlightenment? The answer is hard to guess.

Yet for all these predictions of continued uncertainty, the future does seem quite bright for altmetrics with regard to many of its other gaps and weaknesses. Indeed, one particular movement within the field is already helping to address what are almost certainly the problems at the heart of most criticisms of altmetrics: the lack of consistency across the field and the absence of authoritative recommendations for their practical academic use. As mentioned earlier, the National Information Standards Organization (NISO; see figure 3.12) was awarded a two-year Sloan Foundation grant in 2013 to study and develop “Community-Based Standards or Recommended Practices in the Field of Alternative Metrics.”24 As standardization is arguably the biggest roadblock to the widespread acceptance of altmetrics by administrators and university evaluators, the existence of the NISO Altmetrics Initiative is on its own excellent news for the future of altmetrics, regardless of the fact that it’s still in progress.

Luckily, the progress made to date on the initiative has been extremely positive, as evidenced by the white paper released upon the completion of the project’s first phase in June 2014. In it, NISO explains how it held three in-person meetings and conducted thirty in-person interviews with key stakeholders in the future of altmetrics, whom the authors identify as researchers, institutional administrators, librarians, funders, publishers, and members of the general public.25 Using the information gleaned from these meetings, in addition to a separate online altmetrics survey open to the general public, the project’s leaders identified a number of specific objectives for the initiative’s second phase, to be completed by November 2015. These objectives include not only the development of a specific definition for what constitutes an alternative assessment metric, but also “definitions for appropriate metrics and calculation methodologies for specific output types,” “development of strategies to improve data quality through source data providers,” “promotion and facilitation of use of persistent identifiers in scholarly communications,” and “descriptions of how the main use cases apply to and are valuable to the different stakeholder groups.”26 Taken together, these projects constitute something of a Holy Grail for altmetrics development: completing them would substantially increase the clout of the movement and make new strides possible in the use of altmetrics by government agencies, research groups, and educational institutions. Bringing altmetrics together with the push for better use of persistent identifiers like the DOI and ORCID would also improve the accountability of online scholarship in general, a win that would help address important areas of confusion like multiple versions of online publications and other cases of unnecessary duplication.
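
A small, hypothetical sketch illustrates why persistent identifiers matter here: before a harvester can merge mentions of a publisher page, a repository record, and a preprint into one tally, it has to recognize that their identifiers are the same. The helper function below is illustrative only, not part of any actual harvester.

```python
# Normalizing common surface forms of a DOI so that mentions of the same
# article can be merged into a single count. Illustrative sketch only.

def normalize_doi(raw: str) -> str:
    """Reduce common DOI representations to one canonical form."""
    doi = raw.strip().lower()
    for prefix in ("https://doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
            break
    return doi

# All three variants below collapse to the same identifier:
variants = [
    "doi:10.1371/journal.pbio.1002003",
    "https://doi.org/10.1371/journal.pbio.1002003",
    "10.1371/journal.pbio.1002003",
]
assert len({normalize_doi(v) for v in variants}) == 1
```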

Finally, in imagining the future of altmetrics, it’s important to acknowledge that when all is said and done, the altmetrics of tomorrow may look very different from the altmetrics we are discussing and debating today. Between the development of new types of networks and harvesters and the proposal of new methodologies for understanding the impact of different types of scholarly outputs, the altmetrics of the future may indeed be something much less “alternative” and instead appear closer to the formal approach to analysis seen in the world of bibliometrics. Even now, two scholars at the United Kingdom’s Open University are proposing a new movement of “Semantometrics,” which would use full-text semantic analysis of publications to determine their level of contribution across a network of citations.27

Thus, what the next phase of altmetrics will be is largely up to the actions, endeavors, and practices of today’s advocates and innovators. In the next chapter, we consider what it means for librarians to be one of these catalysts and how some libraries are already making investments in the future of altmetrics, locally and on the grander stage.

Further Reading

“NISO Alternative Assessment Metrics (Altmetrics) Initiative.” National Information Standards Organization. www.niso.org/topics/tl/altmetrics_initiative.

The home portal for NISO’s high-profile Altmetrics Initiative, a two-year project set to complete in late 2015 that seeks to produce a set of standards and best practices around the use of altmetrics by academics.

Adie, Euan. “Gaming Altmetrics.” Altmetric blog, September 18, 2013. www.altmetric.com/blog/gaming-altmetrics.

An insightful blog post written by Altmetric founder Euan Adie in response to discussions about gaming across altmetrics.

Woolston, Chris. “Funders Drawn to Alternative Metrics.” Nature 516 (December 10, 2014): 147. www.nature.com/news/funders-drawn-to-alternative-metrics-1.16524.

A brief but useful discussion of the early use of altmetrics by researchers submitting grant applications and the positive potential that some funders see in altmetrics-gleaned information.

Notes

  1. Hannah Jane Parkinson, “Instagram Purge Costs Celebrities Millions of Followers,” Guardian, December 19, 2014, www.theguardian.com/technology/2014/dec/19/instagram-purge-costs-celebrities-millions-of-followers.
  2. Julie Keck, “Buying Fake Twitter Followers Will Leave You Tweeting to Mannequins,” MediaShift, PBS, September 16, 2014, www.pbs.org/mediashift/2014/09/buying-fake-twitter-followers-will-leave-you-tweeting-to-mannequins.
  3. Jennifer Lin, “A Case Study in Anti-gaming Mechanisms for Altmetrics: PLoS ALMs and DataTrust” (paper presented at the altmetrics12 workshop, ACM Web Science Conference, Evanston, IL, June 21, 2012), http://altmetrics.org/altmetrics12/lin.
  4. Euan Adie, “Gaming Altmetrics,” Altmetric blog, September 18, 2013, www.altmetric.com/blog/gaming-altmetrics.
  5. Ibid.
  6. Stacy Konkiel and Jason Priem, “What Jeffrey Beall Gets Wrong about Altmetrics,” Impactstory Blog, September 9, 2014, http://blog.impactstory.org/beall-altmetrics.
  7. For a slightly longer rumination on this rich topic, we recommend David Crotty’s 2013 post and its resulting comments on The Scholarly Kitchen blog: David Crotty, “Driving Altmetrics Performance through Marketing—A New Differentiator for Scholarly Journals?” The Scholarly Kitchen (blog), October 7, 2013, http://scholarlykitchen.sspnet.org/2013/10/07/altmetrics-and-the-value-of-publicity-efforts-for-journal-publishers.
  8. Mike Thelwall, Stefanie Haustein, Vincent Larivière, and Cassidy R. Sugimoto, “Do Altmetrics Work? Twitter and Ten Other Social Web Services,” PLOS ONE, May 28, 2013, doi:10.1371/journal.pone.0064841.
  9. Recent examples of these studies include the previously mentioned study by Thelwall, Haustein, Larivière, and Sugimoto (ibid.) and Rodrigo Costas, Zohreh Zahedi, and Paul Wouters, “Do ‘Altmetrics’ Correlate with Citations? Extensive Comparison of Altmetrics Indicators with Citations from a Multidisciplinary Perspective,” Journal of the Association for Information Science and Technology, first published online July 28, 2014, doi:10.1002/asi.23309.
  10. Zohreh Zahedi, Rodrigo Costas, and Paul Wouters, “How Well Developed Are Altmetrics? A Cross-Disciplinary Analysis of the Presence of ‘Alternative Metrics’ in Scientific Publications,” Scientometrics 101, no. 2 (2014): 1491–1513.
  11. Jason Priem, Heather Piwowar, and Bradley Hemminger, “Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact,” arXiv:1203.4745, March 20, 2012, http://arxiv.org/abs/1203.4745.
  12. Caroline S. Fox, Marc A. Bonaca, John J. Ryan, Joseph M. Massaro, Karen Barry, and Joseph Loscalzo, “A Randomized Trial of Social Media from Circulation,” Circulation 131 (2015): 28, doi:10.1161/CIRCULATIONAHA.114.013509.
  13. Phil Davis, “Social Media and Its Impact on Medical Research,” The Scholarly Kitchen (blog), January 14, 2015, http://scholarlykitchen.sspnet.org/2015/01/14/social-media-and-its-impact-on-medical-research.
  14. Ibid.
  15. “Article-Level Metrics,” SPARC website, accessed January 16, 2015, www.sparc.arl.org/initiatives/article-level-metrics.
  16. Björn Hammarfelt, “Using Altmetrics for Assessing Research Impact in the Humanities,” Scientometrics 101, no. 2 (2014): 1419–30, www.diva-portal.org/smash/get/diva2:703046/FULLTEXT01.pdf.
  17. Ehsan Mohammadi and Mike Thelwall, “Mendeley Readership Altmetrics for the Social Sciences and Humanities: Research Evaluation and Knowledge Flows,” Journal of the Association for Information Science and Technology 65, no. 8 (August 2014): 1631, doi:10.1002/asi.23071.
  18. Hammarfelt, “Using Altmetrics,” 1429.
  19. Heather Piwowar and Jason Priem were originally awarded $125,000 by the Sloan Foundation in 2012 to develop the product that later became Impactstory. In 2013, the Sloan Foundation awarded them an additional $500,000 to further develop the scalability of Impactstory. That same year, Impactstory received a separate $300,000 Early Concept Grants for Exploratory Research (EAGER) grant from NSF. See Heather Piwowar, “ImpactStory Awarded $500k Grant from the Sloan Foundation,” Impactstory Blog, June 17, 2013, http://blog.impactstory.org/sloan, and Heather Piwowar, “ImpactStory Awarded $300k NSF Grant!” Impactstory Blog, September 27, 2013, http://blog.impactstory.org/impactstory-awarded-300k-nsf-grant. The partnership of the University of California Curation Center, PLOS, and DataONE was awarded an approximately $300,000 EAGER grant by NSF in September 2014, with reference to a project that would help develop data-level metrics (DLMs). See National Science Foundation, “Making Data Count: Developing a Data Metrics Pilot,” award abstract 1448821, December 19, 2014, www.nsf.gov/awardsearch/showAward?AWD_ID=1448821&HistoricalAwards=false. The Sloan Foundation awarded NISO $207,500 in 2013 in order to help develop standards and recommended practices for altmetrics. See “NISO Awarded Sloan Foundation Grant to Develop Standards and Recommended Practices for Altmetrics,” NR [NISO Reports], Information Standards Quarterly 25, no. 2 (Summer 2013): 40, www.niso.org/apps/group_public/download.php/11276/NR_Altmetrics_Sloan_isqv25no2.pdf.
  20. Heather Piwowar, “Altmetrics: Value All Research Products,” Nature 493 (January 10, 2013): 159, www.nature.com/nature/journal/v493/n7431/full/493159a.html.
  21. Fernando Maestre, “How I Use Altmetrics Data in My Proposals,” Maestre Lab blog, November 26, 2014, http://maestrelab.blogspot.com/2014/11/how-i-use-altmetrics-data-in-my.html.
  22. Adam Dinsmore, Liz Allen, and Kevin Dolby, “Alternative Perspectives on Impact: The Potential of ALMs and Altmetrics to Inform Funders about Research Impact,” PLOS Biology, November 25, 2014, doi:10.1371/journal.pbio.1002003.
  23. The five phases of the Gartner Hype Cycle are Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity. To learn more about the Gartner Hype Cycle, see www.gartner.com/technology/research/methodologies/hype-cycle.jsp.
  24. From the title of the 2013 grant proposal authored by Todd Carpenter and Nettie Lagace of NISO. See Todd Carpenter and Nettie Lagace, “Proposal to Study, Propose, and Develop Community-Based Standards or Recommended Practices in the Field of Alternative Metrics,” March 19, 2013, www.niso.org/apps/group_public/download.php/11012/niso-altmetrics-proposal_public_version.pdf.
  25. “Alternative Metrics Initiative Phase 1 White Paper,” NISO, June 6, 2014, www.niso.org/apps/group_public/download.php/13809/Altmetrics_project_phase1_white_paper.pdf.
  26. “Phase 2 Projects,” in “NISO Alternative Assessment Metrics (Altmetrics) Initiative,” NISO website, accessed January 16, 2015, www.niso.org/topics/tl/altmetrics_initiative/#phase2.
  27. Petr Knoth and Drahomira Herrmannova, “Towards Semantometrics: A New Semantic Similarity Based Measure for Assessing a Research Publication’s Contribution,” D-Lib Magazine 20, no. 11/12 (November/December 2014), www.dlib.org/dlib/november14/knoth/11knoth.html.
Figure 3.1. A chart, created by Euan Adie of Altmetric, that illustrates the differences in value and intention between “gaming” and acceptable self-promotion of research. Source: Euan Adie, “Gaming Altmetrics,” Altmetric blog, September 18, 2013, www.altmetric.com/blog/gaming-altmetrics.

Figure 3.2. Several services exist to allow anxious social media users to buy large quantities of followers, views, plays, and other forms of online interaction. While the use of these services may not be common, they are nevertheless an acknowledged part of the public market for online attention. This screenshot shows the home page of one such “follower” service.

Figure 3.3. As this 2010 Science News article about the dangers of citation inflation demonstrates, concerns about gaming and bias have long existed in reference to bibliometrics like Impact Factor as well. Source: Janet Raloff, “Citation Inflation,” Science & the Public (blog), Science News, June 15, 2010, https://www.sciencenews.org/blog/science-public/citation-inflation.

Figure 3.4. Mendeley readership metrics, such as those in this screenshot, are often credited with having the highest correlation of any altmetrics indicator with the bibliometrics standard Times Cited.

Figure 3.5. The overlap between nonacademic social networks like Twitter and academic users can be complicated. For instance, in addition to the growing percentage of researchers who report using Twitter for teaching or scholarship, a large number of academic publishers have taken to Twitter to promote new research on behalf of their authors. This January 2015 screenshot of the Twitter home of Oxford Journals is a telling example, with its 18,100 followers and nearly 8,000 Tweets. https://twitter.com/oxfordjournals.

Figure 3.6. Social media demographics have become extremely important when trying to understand the value of altmetrics for particular academic audiences. For instance, according to a survey conducted by the Pew Research Internet Project, 74 percent of all online adults used social networking sites as of January 2014. However, for respondents over age 50 the percentage was much lower: 65 percent for those ages 50 to 64, and less than 50 percent for those 65 and older. These statistics, and related statistics based specifically on the use of social networking sites by researchers, can be useful when considering the inclusion of nonacademic social media metrics in academic contexts. Pew Research Center, “Social Networking Fact Sheet,” accessed January 16, 2015, www.pewinternet.org/fact-sheets/social-networking-fact-sheet.

Figure 3.7. A chart, offered by PLOS, that suggests various benefits of ALMs throughout the research process. PLOS has been a long-time supporter of ALMs and offers them across its seven peer-reviewed open-access journals. http://article-level-metrics.plos.org/researchers.

Figure 3.8. An adaptation of a slide from a recent presentation given by the authors on research impact. This image shows a small sampling of the many articles that have expressed criticism of the use of Impact Factor as a tool for evaluation. www.slideshare.net/Plethora121/beyond-bibliometrics-au-librarys-scholar-communication, slide 7 of 18.

Figure 3.9. The bare-bones home page of arXiv.org, currently one of the most popular e-print article archives for scholars in the sciences. In December 2014, arXiv announced that it had passed the milestone of one million article uploads.

Figure 3.10. A combined list of genetics and history journals for 2013, created using the new InCites Journal Citation Reports tool and showing journals ranked according to their Impact Factor. Note that the history journal with the highest Impact Factor for the year, American Historical Review (Impact Factor: 1.293), ranks beneath the 138th highest genetics journal. By contrast, the top genetics journal, Nature Reviews Genetics, is listed as having an Impact Factor of 39.794.

Figure 3.11. In January 2013, the NSF changed the language in its grants proposal application to allow for the submission of up to ten “research products” with regard to principal investigators’ biographical sketches. While the section in question still requires products to be “published” (i.e., no invited lectures), it also explicitly allows for works that go beyond traditional print-based scholarship.

Figure 3.12. The NISO Altmetrics Initiative began in June 2013 and is set to complete in November 2015, at which point NISO says it will publish its final standards/recommended practices and any related trainings.
