The Limitations of Journal-Based Metrics

Article
Monday, July 15, 2024
If business schools want to fulfill AACSB’s call for research impact, they must break away from using journal lists to evaluate faculty scholarship.
  • Journal lists that rate the quality of academic publications were created to help libraries prioritize their journal purchasing decisions, not to measure the value of individual intellectual contributions.
  • Even so, research-focused business schools increasingly rely on journal lists to evaluate faculty intellectual contributions, compelling faculty to focus more on where they publish than on what they publish.
  • To produce research with true societal impact, business schools must abandon one-size-fits-all journal list metrics in favor of diverse, personalized, mission-driven research objectives for each faculty member.

 
Research-oriented business schools face a major challenge: How can they measure faculty research performance in ways that are fair, transparent, mission-aligned, and representative of the impact of each faculty member’s contributions?  

Many institutions, especially those where research is an important part of their missions, address this challenge by rewarding only a narrow set of intellectual contributions, leaving faculty to pursue scholarship outside these parameters at their own professional risk. This reality affects deans, tenured professors, tenure-track faculty, doctoral students, and any other students or staff who conduct research as part of their jobs or degree programs. 

In many ways, business schools are stuck in a trap that they have made for themselves. In exchange for a straightforward way to measure faculty intellectual contributions, they willingly choose to evaluate faculty based on a journal’s average citations across all articles, not on an individual article’s quality and impact. As a result, our collective discourse about the value of research has shifted to focus more on journal list rankings than on the quality, value, and impact of research.

But this system is not sustainable. For business schools to secure their long-term future and demonstrate their value outside a narrow academic world, it is essential that they break out of this self-perpetuating trap. But doing so will require courage, nonconformity, and systematic effort. 

A Faulty Methodology 

Over the past few decades, business education and other disciplines have shifted to relying on journal lists, sometimes exclusively, to evaluate the scholarly performance of their faculty members and departments. The practice has become so deeply embedded in higher education that it has become costly for schools and faculty members to end, or even reduce, their reliance on journal lists. 

The journal lists most commonly used for this purpose take several forms. Some categorize journals into quartiles, grades, or similar tiers, such as the Academic Journal Guide compiled by the Chartered Association of Business Schools (CABS) in the U.K. or the lists that are part of the Social Sciences Citation Index from Clarivate Analytics.  

Other resources offer a single list of acceptable journals, such as the Financial Times Top 50 Journals list. There is also ShanghaiRanking’s Global Ranking of Academic Subjects, which selects 50 journals each in the disciplines of business administration, management, and finance, as well as 25 each in public administration and in hospitality and tourism management.   

Most lists measure journal quality by the average number of citations their articles receive, a metric originally intended to help libraries prioritize their journal purchasing decisions. Many educational institutions, however, have repurposed journal lists as a way to measure the value of individual intellectual contributions. Because citation counts vary enormously from article to article within the same journal, journal lists are an extremely unreliable way to judge the quality of a single paper or an individual scholar.

In other words, using journal-level analytics to judge individual-level performance breaks the rules of good methodology. 
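The statistical flaw is easy to see in a quick simulation. The sketch below uses synthetic data only, with an assumed lognormal citation distribution and illustrative parameters that are not drawn from any real journal; it shows why a journal's mean citation count says little about the typical article in it:

```python
import random
import statistics

# Illustrative only: simulate citation counts for 200 articles in one
# hypothetical journal. Citation distributions are heavily right-skewed;
# a lognormal is a common stylized model (the parameters below are
# assumptions, not estimates from any real journal's data).
random.seed(42)
citations = [int(random.lognormvariate(1.5, 1.2)) for _ in range(200)]

mean_cites = statistics.mean(citations)      # what an impact-factor-style metric reports
median_cites = statistics.median(citations)  # what a typical article actually receives

below_mean = sum(c < mean_cites for c in citations)
print(f"journal mean citations: {mean_cites:.1f}")
print(f"median article's citations: {median_cites:.1f}")
print(f"articles cited less than the journal mean: {below_mean} of 200")
```

Because a few highly cited articles pull the mean far above the median, the majority of articles in this simulated journal are cited less often than the journal-level average suggests, which is exactly why the journal metric is a poor proxy for any one article.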

AACSB accreditation standards recognize a wide range of intellectual contributions that qualify faculty members as Scholarly Academics. Even so, business schools with research-oriented missions use journal lists to govern their faculty appointments, faculty classification, promotion and tenure decisions, and contract renewals. Hence, adherence to journal lists has become a matter of personal survival for many tenure-track faculty.  

I saw this at my own institution, Nottingham University Business School China at the University of Nottingham Ningbo China (UNNC), where the focus on journal lists has been driven partly by university-level policies that increasingly emphasize publication in highly rated journals on a few lists, such as the CABS list, as well as on more science- and technology-oriented lists.   

These policies have permeated not only promotion decisions, but also contract renewals (most academics at UNNC are employed on five-year renewable contracts). In practice, the career progression of faculty is based more on the rigid application of the “hard” criteria of journal list metrics and less on “soft” criteria such as teaching quality or the impact and originality of research. As a result, when I served as the dean at the business school, I was forced to let go of some excellent teachers whose publications did not meet these narrow scholarly criteria. 

The Dangers of a ‘Journal Metric’ Mindset 

Of course, the use of journal lists as a performance metric makes some sense, because they can serve as a leading indicator of research quality. After all, a poorly researched article is unlikely to be accepted by a quality journal.

However, many highly ranked journals evaluate articles based not only on the objective quality of research, but also on several self-perpetuating metrics. For instance, the publication criteria of these journals tend to prioritize sophisticated research methods and conventional conceptual approaches.

This encourages scholars to focus on incremental development of existing theories through quantitative research using rigorous statistics. Unfortunately, these statistics can be based on variables that are questionable proxies for what they are supposedly measuring.  

Moreover, highly ranked journals can have lengthy publication cycles. This means that research on current developments in a fast-changing business world will likely be out of date by the time it is published.   

Perhaps the most troubling consequence of using journal metrics to evaluate individual performance is that the discourse of research has become based more on a journal’s ranking than on the published research itself. Even the authors of so-called systematic literature reviews tend to use journal rankings rather than paper originality to select which results to include, which calls into question how comprehensive these reviews really are.

This mindset is being socialized into current and future generations of faculty members, affecting them at all levels of their careers. It starts in doctoral programs, where PhD students are told to focus on journal rankings when selecting what to read. As they look to publish their dissertation results, doctoral candidates are encouraged to submit their work to journals on the lists used by the universities where they want to apply for jobs, whether or not those outlets are appropriate for their research topics.   

So, research-oriented academics must play an optimization game in which they balance quantity and quality. They must publish enough articles to get jobs, have their contracts renewed, gain tenure, and avoid performance pressure. Not surprisingly, if they work at institutions that give equal credit for joint authorship of articles, the number of authors per paper inevitably rises.   

Overall, the research culture of a business school has become more like that of a sales department looking to hit its targets than of an academic community looking to advance knowledge. 

The Disconnect Between Research and Impact 

The implications extend into the real world, where few managers read business research because it is hidden behind paywalls and written in language that is inaccessible to nonacademics. As a result, very few studies in business and management have practical significance, either to businesses or to business classrooms.  

Within business education, many schools differentiate between faculty who focus on research and those who focus on teaching. In principle, this approach should recognize the different strengths required to produce high-quality research and to deliver excellent education to students at all levels.  

In reality, research-oriented faculty often view teaching-oriented faculty as second-class, and it can be difficult for excellent teachers to be promoted, let alone to have their contributions fully recognized.   

Even research-focused faculty can be negatively affected if their work does not fit with the dominant pattern of academic journal publication. They might not receive the same recognition and workload allowances as their more conventional colleagues. So, while it’s common for business schools to say they value teaching-oriented staff, it’s disappointingly rare for schools to have sound, consistent strategies to reward these individuals.  

What Can Be Done 

I strongly believe that reliance on journal lists to measure the quality of faculty research is inappropriate. Moreover, given that AACSB and other associations now require schools to demonstrate the impact of their research, the use of journal lists is of little value in this regard—and is even counterproductive.  

Unfortunately, there is no simple solution to this systemic problem. But the first step is to recognize the limited validity and pervasive negative impact of current research metrics. As the 2012 Declaration on Research Assessment (DORA) states, academic institutions should “not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”  

The second step is to apply this recommendation, not just sign the declaration. It’s a step that a growing number of academic institutions are starting to take. 

As part of this process, business schools can take the following measures: 

  • Set targets for research diversity that recognize different research styles, as well as intellectual contributions that go beyond academic journal publications. 
  • Abandon one-size-fits-all journal list metrics in favor of more diverse and personalized research objectives for each faculty member.  
  • Emphasize research with multidimensional impact, rather than taking a unidimensional “box ticking” approach with journal lists. 
  • Change faculty classification methods to incorporate a wider range of scholarly outputs. 
  • Change explicit and implicit recognition systems, so that Scholarly Academics are no longer viewed as the most valued contributors to the school’s mission. For example, give the highest prestige to those who effectively combine academic and professional engagement and set minimum target levels for professionally engaged staff, not just Scholarly Academics.  

Bodies like AACSB are already taking the lead in emphasizing research impact, responsible research, and diverse intellectual contributions. But, ultimately, business schools must choose to take courageous action to develop better systems of research measurement. Otherwise, they will find themselves in a funding crisis as their target audiences turn to institutions whose research has greater relevance beyond academic circles. 

This article represents my personal views and not those of any academic institution.

The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.