July 15, 2016

Parasites, predators, gatekeepers



Academic publishing in turmoil: Lessons for open science

Katja Mayer (University of Vienna, Austria) 

The following text is a short version of my lecture given in Prague in June 2016 at the conference Parasitic Relations in Academic Publishing.

In recent years, the role of scientometrics has changed substantially: once developed as descriptive methods, these approaches now come in a multitude of forms designed for normative intervention. This performativity of impact measuring is an apt example of how intervention strategies can conform to a regime's inner logic to the point of subverting it.



Ongoing debates on the hegemonic structures of academic publishing must thus be regarded in the broader context of scientific knowledge production and its valuation. Increasingly, we see resistance emerging against the monopolization of both quality measurement indicators and high-impact publishing. My contribution briefly shows how the open movement creates opportunities to trigger further change and break free from sclerotic forms of scholarly communication and evaluation.


"What science becomes in any historical era depends on what we make of it"
Sandra Harding

Gaming the system

The Leiden Manifesto (http://www.leidenmanifesto.org/) has literally manifested what has been rumbling for several years: the bibliometrics community is fed up with the misapplication of quantitative research evaluation and the general obsession with impact factors. Its authors call for metrics-based research assessment that allows both evaluators and researchers to be held to account. Since scientometrics have become an integral part of academic life at all levels, they form the basis for a wide range of decisions in the context of research quality assessment. The metric tide (https://responsiblemetrics.org/the-metric-tide/) is already built into the very core of academic meritocracy and its instruments of rewards and incentives. As such, it is integrated into knowledge databases such as Web of Science and Scopus, and into journal and university rankings, hence co-shaping the evaluation of scholarly output and careers.

Even though their shortcomings and limits have been demonstrated many times (e.g. Archambault & Larivière 2009), journal impact factors and similar indicators are still applied uncritically today. Since they were originally developed not as tools for research evaluation but as selection aids for librarians in a US context, they are not only faulty and partial measures of scientific quality; they also provoke optimisation strategies by publishers, journal editors and authors alike, which have been observed for nearly 20 years (e.g. Smith 1997). Quite a few quantified academic selves (http://blogs.lse.ac.uk/impactofsocialsciences/2014/01/13/the-academic-quantified-self/) have successfully gamed the system by establishing citation and peer-review cartels, by inventing new document types and editorial styles, and by encouraging self-citation, to name just a few optimisation strategies. Ever-new metrics and citation-pattern algorithms are created to counter these tendencies, and it does not stop there. Paper mills, counterfeit predatory journals, fake peer review and vanity presses on the one hand, and bibliometric training in "career crafting" and "life course management" in higher education on the other, round out the strategies for survival in the competitive business (or sport) of science.
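To see why the indicator is so easy to game, recall how the two-year journal impact factor is computed: citations received in a given year to items published in the two preceding years, divided by the number of "citable items" published in those two years. The following sketch (in Python, with purely hypothetical numbers) illustrates two of the strategies mentioned above: self-citation inflates the numerator, while reclassifying articles as non-citable material shrinks the denominator.

def impact_factor(citations, citable_items):
    """Two-year journal impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 200 citable items published over the two prior
# years, receiving 300 citations this year.
print(impact_factor(300, 200))        # baseline: 1.5

# Gaming the numerator: 100 encouraged or coerced self-citations.
print(impact_factor(300 + 100, 200))  # inflated to 2.0

# Gaming the denominator: reclassify 50 articles as editorial material,
# so they no longer count as citable items while their citations still do.
print(impact_factor(300, 200 - 50))   # inflated to 2.0

Neither manipulation changes the underlying science, yet each one inflates the headline figure by a third.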

"Everyone is playing the game, publishers, researchers and funders, but that doesn’t mean that all the players have the same freedom to change it. It is only the research community itself that can change the rules. If institutional leaders chose to change the game, the world would shift tomorrow." Cameron Neylon


Parasitic gatekeeping

Simplistic, performance-driven, metrics-based evaluation of scientific quality reinforces the hegemonic structure of scientific publishing and scholarly communication. There are currently three knowledge databases serving as the main guardians of citation counting and impact measuring: Web of Science (recently sold by Thomson Reuters to a group of private equity funds), Scopus (Elsevier), and Google Scholar. Besides gathering information about scientific publications and citations, all of them are either involved in providing journal publishing infrastructure and information products, in publishing journals, or in making content searchable and findable; in short, they are providers of access. This makes the entanglement, indeed the dependency, of access to knowledge and its evaluation alarmingly apparent. It is logical that only those with the technical means to store this vast array of information could develop the means first to order it and then to label it with a value, but we have to consider the concentration of power in such a setting. It makes the system especially vulnerable to trickery and subversion. Besides citation scams, WoS and Scopus are also prone to infiltration by predatory journals (https://scholarlyoa.com/2014/07/22/life-science-journal-delisted-from-scopus/). This setting thus reveals the deficiency of metrics based on partial data and biased collections, suggesting that such a universalist approach to measuring scientific quality might be the wrong approach altogether.

On top of this, we witness today an oligopoly of five publishing corporations (including Elsevier) controlling 50 percent of all journal articles and enjoying high profit margins (Larivière et al. 2015). In recent years they have substantially increased the fees for access to their portfolios without adding much benefit for the authors, research institutions and taxpayers who largely fund their business. Hence, unrest is growing about the legitimacy of these monopolised and opaque flows of capital from public to private hands (Lawson et al. 2015), and about the parasitic behaviour of publishers bleeding their hosts.

A flawed basis for evaluation, high access prices, and few services and benefits for authors and funders: these form the basis for rampant discontent among academic communities and policy makers, especially since the Open Access movement has already proven that there is a way to start disentangling access to knowledge from its evaluation.

Alternative visions for open scholarly communication

Basically, open access (OA) means making research outputs available online free of charge and free of most restrictions on copying and reuse. Since the publishing industry (including predatory publishers) discovered open access as a business model in a world of "publish or perish", its radiance has somewhat suffered. High article processing charges have made some scientists aware of the cost of publishing for the first time: we will have to apply for additional funding, they think; this adds bureaucracy and additional pressure, they think. Furthermore, and especially where fake or predatory publishers piggyback on the metric tide, OA is often conflated with a lack of peer review (http://blogs.lse.ac.uk/impactofsocialsciences/2013/10/07/whos-afraid-of-open-access/), with low impact factors, or even with copyright infringement. In communities not already fond of OA publishing, and in institutions lacking a strong open policy, much effort is still needed to throw light on the positive aspects of OA. Gold OA as a business model, on the other hand, has accelerated negotiations between policy makers, academia and industry, and has led to widespread implementation of OA policies (e.g. in US and EU funding schemes).

However, what is still missing in many regards is the understanding that OA is not an end but only the beginning. To foster this understanding, we need to start training ourselves in open skills and in ideas of sharing that go beyond the traditional dissemination of research output, and also beyond technocratic liberation narratives that focus mainly on change via technology (http://blogs.lse.ac.uk/impactofsocialsciences/2016/07/26/how-can-we-build-a-human-centered-open-science/). We need more than the open-washing of traditional models of knowledge production and dissemination.

Consortially funded OA (like the Open Library of Humanities, https://www.openlibhums.org/, or SCOAP3, https://scoap3.org/) is a good example of innovative concepts for sharing knowledge and risk, and another important step in the creation of alternative publishing markets that further disentangle evaluation from access to knowledge.

Transition phases to OA by default will need strong visions, concrete open policies, and monitoring of cost transparency and negotiations (https://zenodo.org/record/34079?ln=en). It is vital that policy makers, academia and industry understand the complementary opportunities of commercial markets and knowledge commons. At the same time, stakeholders in the transition process need to consider innovative forms of quality and impact assessment that go beyond simplistic or flawed metrics, inspired by the many shades of openness of science in society.

Despite an alleged crisis in academic publishing due to cuts in university library budgets and the rise of open access publishing, we have witnessed continuously growing publishing output since the advent of the digital era. The academic publishing industry is adapting to changes in the market by shifting its focus to other regions (BRIC and SAMP), monetizing smaller chunks of content, harvesting research data and metadata, and servicing the whole scientific discovery process. The application of quality metrics to all new products is vital for the industry, since it adds value to such new forms of content. Where such indicators prove robust for industry objectives, we should look very closely at their construction and impact (e.g. Kraker & Lex 2015, https://zenodo.org/record/35401).

Industry is fast learning from (or harvesting) the ideas of the open science movement. The time is now to think about relevant reward and business models, beyond mimicking existing models such as "data papers" to make research data citable. For example, we will have to discuss open licences that provide scalable approaches to evaluating contributions in systems of mass collaboration, respecting community norms as well as the legal implications of enforced copyright legislation. Authors, researchers and stakeholders alike should participate in defining what added value means in open science realms, and what it takes to turn parasites into collaborators.

What do we want science to become? The Vienna Principles are an example of a shared vision for the future of scholarly communication, designed to provide a coherent frame of reference for the debate on how to improve the current system. Comments are welcome via viennaprinciples.org.



Archambault, É., & Larivière, V. (2009). History of the journal impact factor: Contingencies and consequences. Scientometrics, 79(3), 635-649.

Kraker, P., & Lex, E. (2015). A critical look at the ResearchGate score as a measure of scientific reputation. In Proceedings of the Quantifying and Analysing Scholarly Communication on the Web workshop (ASCW’15), Web Science conference.

Larivière, V., Haustein, S., & Mongeon, P. (2015). The oligopoly of academic publishers in the digital era. PLoS ONE, 10(6), e0127502.

Lawson, S., Gray, J., & Mauri, M. (2015). Opening the Black Box of Scholarly Communication Funding: A Public Data Infrastructure for Financial Flows in Academic Publishing? Available at SSRN.

Smith, R. (1997). Journal accused of manipulating impact factor. British Medical Journal, 314(7079), 463.
