Why Elsevier is taking a new approach to altmetrics – poster at 2:AM Amsterdam

Last week, I had the opportunity to present a poster on the new Scopus Article Metrics module at 2:AM Amsterdam (2nd Altmetrics Conference). The poster, “Why Elsevier is taking a new approach to article and alternative metrics”, is now available for download from the Mendeley Data sharing platform at: http://dx.doi.org/10.17632/47y8drx8bk.1

Poster - Why Elsevier is taking a new approach to article and alternative metrics

The same day, an accompanying article was featured on Elsevier’s Reviewers’ Update: To read or not to read? New Scopus Article Metrics can help you decide


In addition to all the feedback from 2:AM and the altmetrics15 conference that followed, some great feedback also came in from Lizzy Sparrow, who blogged her First impression of Scopus Article Metrics.

This year marked the 5th anniversary of the altmetrics manifesto, which prompted a lot of reflection at both conferences on how far we have come. In general, while there is still a long way to go, I was impressed with the maturity of the market and community compared to where we were five years ago. For a good summary of the conferences, I suggest my colleague Paul Groth’s Trip Report: 5 years of altmetrics #2amconf #altmetrics15.

 

New Scopus Article Metrics: A better way to benchmark articles | Elsevier Scopus Blog


Altmetrics were never meant as a replacement for established metrics, but rather as a way to augment mainstream bibliometrics. With the release of the new Article Metrics module on Scopus, we hope to bring altmetrics one step closer to dropping the “alt.”

The Scopus Article Metrics module was designed in accordance with 12 principles for the responsible use of metrics [1] and includes new metrics based on four alternative metrics categories [2] endorsed by the Snowball Metrics project (a small illustrative sketch follows the list):

  • Scholarly Activity — Downloads and posts in common research tools such as Mendeley and CiteULike
  • Social Activity — Mentions characterized by rapid, brief engagement on platforms used by the public, such as Twitter, Facebook and Google+
  • Scholarly Commentary — Reviews, articles and blogs by experts and scholars, such as F1000 Prime, research blogs and Wikipedia
  • Mass Media — Coverage of research output in the mass media (e.g., coverage in top-tier media outlets)
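For readers who think in code, here is a minimal, purely illustrative sketch of that grouping: a simple lookup that buckets raw altmetric sources into the four Snowball categories. The source names just mirror the examples in the list above; the mapping and helper function are assumptions for demonstration, not Scopus’s actual implementation.

    # Illustrative only: group altmetric sources into the four Snowball categories.
    SNOWBALL_CATEGORIES = {
        "Scholarly Activity": {"mendeley", "citeulike"},
        "Social Activity": {"twitter", "facebook", "google+"},
        "Scholarly Commentary": {"f1000 prime", "research blogs", "wikipedia"},
        "Mass Media": {"news outlets"},
    }

    def categorize(source: str) -> str:
        """Return the Snowball category for a raw altmetric source, if known."""
        name = source.strip().lower()
        for category, sources in SNOWBALL_CATEGORIES.items():
            if name in sources:
                return category
        return "Uncategorized"

    print(categorize("Mendeley"))  # Scholarly Activity
    print(categorize("Twitter"))   # Social Activity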

“We believe no single metric tells the whole story — you need a basket of metrics to make informed decisions,” says Michael Habib, Scopus Senior Product Manager, Elsevier. “By combining citation and alternative metrics, this new Article Metrics module will provide a comprehensive view of both the impact of and community engagement with an article.”

In addition to applying the same rigor to altmetrics that we apply to citation metrics, the new module was designed (and user-tested) to aid Scopus users, primarily scholarly researchers, in two key tasks:

  1. Determining which articles to read
  2. Gaining deep insight into how an article (perhaps one’s own) compares with similar articles

Using the Article Metrics Module

On the Scopus document details (article) page, a sidebar highlights the minimal number of meaningful metrics a researcher needs to evaluate both citation impact and levels of community engagement. This can help a researcher to determine how others have received the article and, along with reading the abstract, to decide whether to read the full article.
The module displays the following for each article (a rough data sketch follows the list):

  • Citation count and percentile benchmark
  • Field-Weighted Citation Impact (FWCI)
  • Mendeley readership count and benchmark
  • Count of one type of scholarly commentary (e.g., blog posts, Wikipedia)
  • Count and benchmark of one type of social activity (e.g., Twitter, Facebook)
  • Total count of additional metrics and link to see breakdown by source
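As a rough mental model of what the sidebar surfaces (not the actual Scopus data model; the field names are my own), the per-article record might look like this:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ArticleMetricsSidebar:
        """Hypothetical per-article record mirroring the sidebar items above."""
        citation_count: int
        citation_percentile: Optional[float]   # percentile benchmark vs. similar articles
        fwci: Optional[float]                  # Field-Weighted Citation Impact: roughly,
                                               # citations received relative to the average
                                               # for similar publications
        mendeley_readers: int
        mendeley_percentile: Optional[float]
        scholarly_commentary_count: int        # e.g., blog posts, Wikipedia
        social_activity_count: int             # e.g., Twitter, Facebook
        social_activity_percentile: Optional[float]
        other_metrics_total: int               # additional metrics; breakdown by source via link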

In addition to displaying these metrics, Scopus is introducing new percentile benchmarks [3] to show how an article’s citations or activity compare with the averages for similar articles, taking into account the following (a sketch of the calculation follows this list):

  • Date of publication
  • Document type
  • Disciplines associated with its source
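Footnote 3 below gives the constraints Scopus applies (a cohort of at least 2,500 similar articles, an 18-month window for citation and scholarly commentary benchmarks, and a two-month window for social activity). Purely as a hedged sketch of the idea, and not Scopus’s actual algorithm, a percentile benchmark against such a cohort could be computed along these lines:

    from bisect import bisect_left

    MIN_COHORT_SIZE = 2500  # per footnote 3: at least 2,500 similar articles are required

    def percentile_benchmark(article_count, cohort_counts):
        """Percentile of an article's count within its cohort of similar articles.

        `cohort_counts` holds the same metric (e.g., citations within an 18-month
        window) for articles sharing the focal article's publication date, document
        type, and source disciplines. Returns None when the cohort is too small.
        """
        if len(cohort_counts) < MIN_COHORT_SIZE:
            return None
        ranked = sorted(cohort_counts)
        below = bisect_left(ranked, article_count)  # articles with a strictly lower count
        return round(100 * below / len(ranked))

    # Example with synthetic data: an article with 12 citations against a cohort of 3,000.
    cohort = [i % 20 for i in range(3000)]
    print(percentile_benchmark(12, cohort))  # 60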

From the sidebar, clicking “View all metrics” opens the full Article Metrics module, providing an overview of all available metrics and the underlying content for further analysis and understanding.


[1] Outlined in the “Response to HEFCE’s call for evidence: independent review of the role of metrics in research assessment.”

[2] CiteULike, Scholarly Commentary, Social Activity, and Mass Media data provided by altmetric.com.

[3] A minimum set of 2,500 similar articles is required to calculate a benchmark. Citation benchmarks and scholarly commentary benchmarks use an 18-month window, and social activity benchmarks are calculated with a two-month window.

Release Date: July 29, 2015

Source: New Scopus Article Metrics: A better way to benchmark articles | Elsevier Scopus Blog

Scopus panel on authors and altmetrics at ALA-Midwinter

At ALA-Midwinter 2014 in Philadelphia, I had the opportunity to organize and moderate a Scopus-sponsored panel at the Elsevier booth in celebration of Scopus’ 10th Anniversary. Instead of reminiscing about all that has been accomplished over the past 10 years, we took the opportunity to look forward and explore the intersection of some emerging trends that we see shaping the future.

Photos by Rebecca Brown

The panel was titled “New Possibilities in Evaluation Metrics: Authors + Altmetrics = ?”. In organizing the panel, I tried to arrange a cross-section of speakers who had experience dealing with both author identifiers and altmetrics. All three of our panelists are frequent speakers on both topics independently of one another, and I thought it would be interesting to hear their thoughts on the relationship between these trends.

In his role as Head of Academic Outreach at Mendeley, our first speaker, William Gunn (http://orcid.org/0000-0002-3555-2054), is a major proponent of Mendeley readership statistics as a key new metric. Mendeley is also a major scholarly platform for user-generated researcher profiles, and it has been at the center of these trends since the beginning. For example, one of the earliest experiments in author-level altmetrics was Dario Taraborelli’s ReaderMeter, which was based on Mendeley data.

William kicked off by outlining three key benefits of altmetrics: “Get better data on researcher engagement with research; Get it faster; Serve all the stakeholders in research.” He then explored each of these in turn. William used the example of Mendeley to explain how altmetrics are the outputs of researcher (and public) engagement with research: a user storing a document in their Mendeley collection to read also produces a measure of engagement with that document. The Web means that not only are many more forms of communication and interaction with scholarly content possible than ever before, but they are now also measurable, so we can capture the broadest impact of a researcher’s traditional and non-traditional works. William spent some time discussing the new PubMed Commons as one place he sees new forms of discussion beginning to flourish, and he also touched on the value of altmetrics for improving discovery platforms. His talk proved an excellent introduction to the topic and laid a good foundation for the discussion.

 

Next up was Kristi Holmes (http://orcid.org/0000-0001-8420-5254), who, as a bioinformaticist at the Becker Medical Library, is co-creator of the Becker Model for Assessment of Research Impact. Additionally, Kristi is VIVO Director of Outreach and serves on the ORCID Outreach Committee. Kristi’s talk was titled “Author identifiers & research impact: A role for libraries”. She began by discussing the growing importance of research assessment and posed the question, “How do we measure what matters?” She proceeded to outline the Becker Model and some of the meaningful things that should be measured but aren’t yet. She then spoke about how researcher networking tools and research information systems like SciVal Experts and VIVO are creating the foundation to begin tracking these new forms of impact. She followed this up with the example of VIVO and finished with a description of VIVO’s efforts to integrate with ORCID. Kristi’s primary focus was on practical advice for libraries.

Martin Fenner was last on the agenda. Martin is currently Technical Lead for PLOS’ Article-Level Metrics and ORCID integration. He is also working as a consultant on the NISO Alternative Metrics Project and is a former ORCID board member. Martin created one of the first author-level altmetrics platforms, ScienceCard, back in 2011.

Martin’s presentation focused on the big picture and a look toward the future. He began by pointing out that altmetrics have two goals: to improve discovery and to improve assessment. He discussed the importance of linking authors with key institutions and funders, as well as with their many forms of output, such as articles, datasets, and software. He touched on ORCID, ISNI/Ringgold, FundRef, DataCite, and the many other identifiers and platforms that are working together to enable this. One of the highlights of the panel was a table Martin introduced to argue that different metrics serve different use cases.

Slide 7 from Martin Fenner’s panel presentation
From https://speakerdeck.com/mfenner/connecting-research-and-researchers

Following the presentations, we had time for some questions. One topic discussed was how to track the impact of research data; the ODIN (ORCID and DataCite Interoperability Network) project was brought up as a great example of ORCID being used in a way that will enable altmetrics. The topic of using altmetrics for improved discovery also came up. Each of the panelists had a chance to share their vision for what things might look like 10 years on, and Martin closed with a very positive outlook: he believes that we will see significant transformations in discovery and assessment over the next five years as altmetrics are enabled by the uptake of author identifiers.

Following the panel, there was a Scopus 10th Anniversary reception at the booth to wrap things up.

Summary + Slides for 2013 SSP Panel – Measure for Measure: The role of metrics in assessing research performance

Photo from @Editage

At the 2013 SSP (Society for Scholarly Publishing) Annual Conference, I was on a panel about research assessment and metrics. As part of my presentation, I shared some findings from a large survey conducted by Elsevier’s Research and Academic Relations group (methodology in the slides).

One finding the Twitter back-channel picked up on was the surprising statistic that, in this random sample, only 1% of academics were familiar with altmetrics. I followed this up with a more optimistic statistic showing that researchers under 35, as well as researchers from developing nations, were more likely to view various types of potential altmetrics as useful. My primary point in this section of the talk was that we need to focus on raising awareness among these groups if altmetrics are to gain legitimacy in the researcher community.

Also discussed were DORA, Journal Metrics (SNIP, SJR), Snowball Metrics, and more. I summarized the primary takeaway points as follows:

  1. Choose methods + metrics appropriate to level and impact type being assessed (DORA)
  2. Don’t confuse level with type (ALMs ≠ altmetrics) – Tip: Embed free Scopus Cited-By counts at the article level
  3. Awareness of metrics correlates with acceptance, so raising awareness matters
  4. APAC + younger researchers open to new metrics
  5. Don’t use just one metric, promote a variety of metrics – Tip: Embed free SNIP/SJR on journal pages
  6. Choose transparent and standard methods + metrics – Tip: Learn best practices from Snowball Metrics free ebook