
New Scopus Article Metrics: A better way to benchmark articles | Elsevier Scopus Blog


Altmetrics were never meant as a replacement for established metrics, but rather as a way to augment mainstream bibliometrics. With the release of the new Article Metrics module on Scopus, we hope to bring altmetrics one step closer to dropping the “alt.”

The Scopus Article Metrics module was designed in accordance with 12 principles for the responsible use of metrics [1] and includes new metrics based on four alternative metrics categories [2] endorsed by the Snowball Metrics project:

  • Scholarly Activity — Downloads and posts in common research tools such as Mendeley and CiteULike
  • Social Activity — Mentions characterized by rapid, brief engagement on platforms used by the public, such as Twitter, Facebook and Google+
  • Scholarly Commentary — Reviews, articles and blogs by experts and scholars, such as F1000 Prime, research blogs and Wikipedia
  • Mass Media — Coverage of research output in the mass media (e.g., coverage in top-tier media outlets)

“We believe no single metric tells the whole story — you need a basket of metrics to make informed decisions,” says Michael Habib, Scopus Senior Product Manager, Elsevier. “By combining citation and alternative metrics, this new Article Metrics module will provide a comprehensive view of both the impact of and community engagement with an article.”

In addition to applying the same rigor to altmetrics that we apply to citation metrics, the new module was designed (and user-tested) to aid Scopus users, primarily scholarly researchers, in two primary tasks:

  1. Determining which articles to read
  2. Gaining deep insight into how an article (possibly one’s own?) compares with similar articles

Using the Article Metrics Module

On the Scopus document details (article) page, a sidebar highlights the minimal number of meaningful metrics a researcher needs to evaluate both citation impact and levels of community engagement. This can help a researcher to determine how others have received the article and, along with reading the abstract, to decide whether to read the full article.
The module displays the following (available for each article):

  • Citation count and percentile benchmark
  • Field-Weighted Citation Impact (FWCI)
  • Mendeley readership count and benchmark
  • Count of one type of scholarly commentary (e.g., blog posts, Wikipedia)
  • Count and benchmark of one type of social activity (e.g., Twitter, Facebook)
  • Total count of additional metrics and link to see breakdown by source
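The Field-Weighted Citation Impact listed above is commonly defined as an article’s citation count divided by the average citations of comparable articles (same publication year, document type, and discipline). As a rough illustrative sketch only (the function and inputs below are hypothetical, not a Scopus API):

```python
def fwci(article_citations, similar_citations):
    """Field-Weighted Citation Impact: actual citations divided by the
    average citations of comparable articles. A value of 1.0 means the
    article is cited exactly as expected for its field, age, and type."""
    expected = sum(similar_citations) / len(similar_citations)
    return article_citations / expected

# An article with 12 citations, where comparable articles average 8:
print(fwci(12, [4, 8, 12]))  # 12 / 8 = 1.5
```

A value above 1.0 indicates the article is cited more than expected for similar articles; below 1.0, less than expected.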

In addition to displaying these metrics, Scopus is introducing new percentile benchmarks [3] to show how article citations or activity compare with the averages for similar articles, taking into account:

  • Date of publication
  • Document type
  • Disciplines associated with its source
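A percentile benchmark of this kind can be illustrated by ranking an article’s citation (or activity) count against the counts of its comparison set. The sketch below is a simplified illustration, not the actual Scopus implementation; note that Scopus requires a comparison set of at least 2,500 similar articles (see footnote 3):

```python
def percentile_benchmark(article_count, similar_counts):
    """Percentage of comparable articles whose citation or activity
    count is lower than this article's count (a higher percentile
    means stronger performance relative to similar articles)."""
    below = sum(1 for c in similar_counts if c < article_count)
    return 100 * below / len(similar_counts)

# An article with 30 citations, compared against four similar articles:
print(percentile_benchmark(30, [5, 10, 20, 40]))  # 75.0
```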

From the sidebar, clicking “View all metrics” opens the full Article Metrics module, providing an overview of all available metrics and the underlying content for further analysis and understanding.

[1] Outlined in the “Response to HEFCE’s call for evidence: independent review of the role of metrics in research assessment.”

[2] CiteULike, Scholarly Commentary, Social Activity, and Mass Media data provided by

[3] A minimum set of 2,500 similar articles is required to calculate a benchmark. Citation benchmarks and scholarly commentary benchmarks use an 18-month window, and social activity benchmarks are calculated with a two-month window.

Release Date: July 29 2015


Scopus panel on authors and altmetrics at ALA-Midwinter

At ALA-Midwinter 2014 in Philadelphia, I had the opportunity to organize and moderate a Scopus-sponsored panel at the Elsevier booth in celebration of Scopus’ 10th anniversary. Instead of reminiscing on all that has been accomplished over the past 10 years, we took the opportunity to look forward and explore the intersection of some emerging trends that we see shaping the future.

Photos by Rebecca Brown

The panel was titled “New Possibilities in Evaluation Metrics: Authors + Altmetrics = ?”. In organizing the panel, I tried to arrange a cross-section of speakers who had experience dealing with both author identifiers and altmetrics. All three of our panelists are frequent speakers on both topics independently of one another, and I thought it would be interesting to hear their thoughts on the relationship between these trends.

Our first speaker, William Gunn, Head of Academic Outreach at Mendeley, is a major proponent of Mendeley readership statistics as a key new metric. Mendeley is also a major scholarly platform for user-generated researcher profiles and has been at the center of these trends since the beginning. For example, one of the earliest experiments in author-level altmetrics was Dario Taraborelli’s ReaderMeter, which was based on Mendeley data. William kicked off by outlining three key benefits of altmetrics: “Get better data on researcher engagement with research; Get it faster; Serve all the stakeholders in research.”

He then explored each of these in turn. William used the example of Mendeley to explain how altmetrics are the outputs of researcher (and public) engagement with research: a user saving a document to their Mendeley collection to read also yields a measure of engagement with that document. The Web means not only that many more forms of communication and interaction with scholarly content are possible than ever before, but also that they are now measurable. We can thus capture the broadest impact of a researcher’s traditional and non-traditional works. William spent some time discussing the new PubMed Commons as one place he sees new forms of discussion beginning to flourish, and also touched on the value of altmetrics for improving discovery platforms. His talk proved an excellent introduction to the topic and laid a good foundation for the discussion.

Next up was Kristi Holmes, who, as a Bioinformaticist at the Becker Medical Library, is co-creator of the Becker Model for Assessment of Research Impact. Additionally, Kristi is VIVO Director of Outreach and serves on the ORCID Outreach Committee.

Kristi’s talk was titled “Author identifiers & research impact: A role for libraries”. She began by discussing the growing importance of research assessment and posed the question, “How do we measure what matters?” She proceeded to outline the Becker Model and some of the meaningful things that should be measured but aren’t yet. She then spoke about how researcher networking tools and research information systems like SciVal Experts and VIVO are creating the foundation to begin tracking these new forms of impact. She followed this up with the example of VIVO and finished with a description of VIVO’s efforts to integrate with ORCID. Kristi’s primary focus was on practical advice for libraries.

Martin Fenner was last on the agenda. Martin is currently Technical Lead for PLOS’ Article-Level Metrics and ORCID integration, is working as a consultant on the NISO Alternative Metrics Project, and is a former ORCID board member. He created one of the first author-level altmetrics platforms, ScienceCard, back in 2011.

Martin’s presentation focused on the big picture and a look toward the future. He began by pointing out that altmetrics have two goals: to improve both discovery and assessment. He discussed the importance of linking authors with key institutions and funders as well as with their many forms of output, such as articles, datasets, and software. He touched on ORCID, ISNI/Ringgold, FundRef, DataCite, and many more identifiers and platforms that are working together to enable this. One of the highlights of the panel was a table Martin introduced to argue that different metrics serve different use cases.


Following the presentations, we had time for some questions. One topic discussed was how to track the impact of research data. The ODIN (ORCID and DataCite Interoperability Network) project was brought up as a great example of ORCID being used in a way that will enable altmetrics. The topic of using altmetrics for improved discovery was also discussed. Each of the panelists had a chance to share their vision for what things might look like 10 years on. Martin closed with a very positive outlook: he believes we will see significant transformations in discovery and assessment over the next five years as altmetrics are enabled by the uptake of author identifiers.

Following the panel, there was a Scopus 10th Anniversary reception at the booth to wrap things up.

Scopus 10th Anniversary Edition overview

Full release notes are available here (.pdf). Below is the full script used for this video.

On February 1st, to commemorate Scopus’ 10th year, we are releasing the most extensive redesign since Scopus launched in 2004.

In this video, we will introduce the key improvements you will see in the Scopus 10th anniversary edition.

The primary goal of the redesign is to optimize core workflows. This means that effort has been made to minimize the number of steps a user needs to take to accomplish the task at hand. Specifically, we have focused on streamlining our interface to support the most common use cases.

Changes to the homepage are relatively minimal as this page was already fairly streamlined.

The main change to note is the addition of “Browse Sources” and “Analyze Journals” links. Previously, these were in the main menu bar as “Sources” and “Analyze”. These links were moved because they were underused in the main menu, where their names were ambiguous and they were less visible. In the new location, “Browse Sources” is now equal in stature to the other main content types: Documents, Authors, and Affiliations.

Analyzing journals is the only analysis feature in Scopus that can be used out of context as a standalone tool. As such, it too is called out on the home page as the entry point into a dedicated workflow.

Let’s begin with a quick search.

The document results page has been extensively redesigned around the primary use case of Scopus: finding documents to read, then locating the full text or exporting to a reference manager.

The results set has been optimized for quick and easy scanning of results. For example, the font has been chosen specifically for its readability.

Scanning through the results, you can quickly see where outward links to full text are available, including image-based links such as the “Find at Your Library” button.

On hover, you can see lesser-used links such as “Show abstract” and “Related documents”; the links also turn blue only on hover. Displaying these only at the time of need greatly reduces visual noise, enabling the user to focus on the results content with minimal distraction.

These updates were made only after multiple rounds of usability testing with two generations of a prototype. The testing ensured that users could still readily locate the hidden links when needed.

Another place you will notice this will be the “View more” links in the “Refine results” panel. It is important to note that all of the existing functionality is maintained both in the refine panel and the results list.

One example of where we minimized the number of steps needed to accomplish primary workflows is the Sort menu. Previously a user had to open a dropdown menu to see the available options and pick from the many available. Most users however only need three sort options:

“Newest first” is still the default, as the number one use case is keeping up with new research literature.

The second most common use case is trying to determine which research is impactful, by sorting on top cited.

And the third most common use case is trying to get an overview of a new field of study. For that, users need to sort by most relevant.

These three sort options accomplish over 90% of the tasks users set out to do. The new UI makes them more visible and enables easy one-click toggling between them.

For advanced users, the other sort options are of course still available.

Another change you will notice throughout Scopus is that some buttons and links are deactivated until a necessary action is taken. One example is the “Limit to” and “Exclude” buttons. This update is designed to minimize the number of error messages caused by trying to use these buttons before having selected anything.

Once a user finds a document or documents they are interested in, one of the common next steps is exporting the citation information to a reference manager. One of the most requested features in Scopus has been one-click export. To enable this, we rethought this workflow from the bottom-up.

Previously, when a user clicked export, they were sent to an entirely new page where they had to make selections each time they exported. Now a menu opens instantly in a pop-up window.

Logos have been added to make the needed format instantly identifiable. For example, an EndNote user will no longer have to think about which format to use when making a selection.

You will also notice an entirely new export option: with this release, we added direct export to Mendeley. As you can see, this opens the Mendeley Web Importer overlay populated with all the selected documents, which you can then save directly to your Mendeley library.

This is where it gets interesting. Notice the message at the top saying my default settings have been saved for this session. Notice also that the Export link has changed to an “Add to Mendeley” link. This link changes to whichever format was chosen; if I had chosen RIS, it would say “RIS export”.

Now, clicking this link instantly performs the export action: in other words, one-click export.

When I am logged in, this setting is saved for the next time I return to Scopus. Remember, if you register for Scopus, this preference will be saved for you.

We recognize that most researchers use only one reference management tool, so we designed this feature with that in mind. Let’s quickly view the first document in our results.

You will notice minimal changes on this page. However, one important change has been made to the sidebar boxes: previously, only two documents were displayed in the cited-by and related documents boxes.

These links were very popular, so we now display three. While it doesn’t seem like a huge change, this 50% increase in the number of links shown should save users considerable time.

Let’s now return to the homepage and run an Author search.

You will note that the author search pages have been similarly redesigned. One change worth noting is the addition of a filter to view only exact matches. Previously to do this a user needed to run a whole new search. Now this can simply be toggled on or off.

Let’s view an author profile.

While export was the most extensively redesigned feature in Scopus, the most extensively redesigned page is the Author Profile. This is the first redesign of the author profiles since they were released.

The old version had very little information and was not particularly engaging. It is now closer to the traditional CV format, and key interactions have been made more obvious.

The top right-hand corner now features a prominent Follow button encouraging the user to sign up for email updates when Scopus indexes a new document by this author. This is exactly the same functionality as the existing “Set alert” link; however, our testing shows that the value of this feature is significantly more apparent to users when the “Follow” wording is used. More researchers will now sign up for alerts to keep up with their competitors’ latest work.

The other primary actions are clustered in this area: most notably, setting a citation alert and the “Add to ORCID” link.

Links to the three author-focused analysis tools are also now clustered together, calling attention to those options.

The 20 most recent documents are now shown, rather than just the two shown previously. Outward links to full text have also been added. This should save users significant time by reducing the need to continue on to a further search page.

Similar to the document details page, three citing documents are now shown in the sidebar, rather than the previous two.

One brand-new feature added to this page is the option to export all. Here, because Mendeley is set as my default reference manager, you see the “Save all to Mendeley” link. With one click I can then open the Mendeley Web Importer and import my entire document list to my Mendeley profile.

This example demonstrates how the redesigned one-click export, the new Mendeley direct export, and the redesigned author profile page come together to support a whole new previously unsupported workflow.

The affiliation details and source details pages have received updates similar to the author profile page. While I will not cover those and the other redesigned pages today, I encourage you to explore on your own once the release is live.

Thank you very much for your time.

Summary + Slides for 2013 SSP Panel – Measure for Measure: The role of metrics in assessing research performance

Photo from @Editage

At the 2013 SSP (Society for Scholarly Publishing) annual conference, I was on a panel about research assessment and metrics. As part of my presentation, I shared some findings from a large survey conducted by Elsevier’s Research and Academic Relations group (methodology in the slides).

One finding the Twitter back-channel picked up on was the surprising statistic that, in this random sample, only 1% of academics were familiar with altmetrics. I followed this up with a more optimistic statistic showing that both researchers under 35 and researchers from developing nations were more likely to view various types of potential altmetrics as useful. My primary point in this section of the talk was that we need to focus on raising awareness among these demographics if altmetrics are to gain legitimacy in the researcher community.

Also discussed were DORA, Journal Metrics (SNIP, SJR), Snowball Metrics, and more. I summarized the primary takeaway points as follows:

  1. Choose methods + metrics appropriate to level and impact type being assessed (DORA)
  2. Don’t confuse level with type (ALMs ≠ altmetrics) – Tip: Embed free Scopus Cited-By counts at article-level
  3. Awareness of metrics correlates to acceptance, raising awareness matters
  4. APAC + younger researchers open to new metrics
  5. Don’t use just one metric, promote a variety of metrics – Tip: Embed free SNIP/SJR on journal pages
  6. Choose transparent and standard methods + metrics – Tip: Learn best practices from Snowball Metrics free ebook