Scholarly publishing has been shaped by the pressure of a liquid economy to become an exercise in branding more than a vehicle for the advancement of science. The current revolution in artificial intelligence (AI) is poised to make matters worse. The new generation of large language models (LLMs) has shown impressive capabilities in text generation, and these models are already being used to write papers, grants, peer review reports, and code for analyses, and even to perform literature reviews. Although these models can be used in positive ways, the metrics and pressures of academia, along with our dysfunctional publishing system, stimulate their indiscriminate and uncritical use to speed up research outputs. Thus, LLMs are likely to amplify the worst incentives of academia, greatly increasing the volume of scientific literature while diluting its quality. At present, no effective solutions are evident to overcome this grim scenario, and nothing short of a cultural revolution within academia will be needed to realign the practice of science with its traditional ideal of a rigorous search for truth.
Introduction
If you see a gathering of scientists having a heated conversation, taking turns to declaim indignant complaints while the others nod in silent agreement, chances are they are talking about the scientific publishing business. Scientists have a love–hate (mostly hate) relationship with scientific publishing and with scientific publishers in particular. Ask any researcher what they think about the current publishing system, and you are in for a long rant. But at some point, the rant will end because the scientist is busy writing papers to submit to the publishers. Discussions about the evils and unethical aspects of scientific publishing abound and proposals for reform are plenty, though most of them never break free from the page into the real world and those that do never seem to shake the big publishers.
While the situation appears dire now, we seem to be heading toward an even worse scenario. The pivot of this change is the ongoing revolution in artificial intelligence (AI), particularly the rise of large language models (LLMs), such as OpenAI’s ChatGPT. While the potential positive uses of these technologies in science are enormous, their usefulness in gaming the publishing system for career purposes has put us on the highway to scientific hell. Never before was it so easy to “write” a shoddy paper, get it published, and then add a new line to the CV (usually after paying the article processing charges). The expected result is the same as we are starting to see for the Internet as a whole: the scientific literature is on track to explode in volume while being diluted in quality. The idea that scientists may engage in such practices should shock us, but we doubt that any scientist reading this paper will be shocked.
How did we get to build and maintain such a publishing system when it is clearly not fit for purpose?1 Scientific publishing has been shaped by the same social trends that influence our culture and economy as a whole, and understanding these trends can help clarify how the system got to be the way it is, and why the advent of AI will likely synergize with it to damage the scientific literature (and very likely increase the publishers’ profits in the process).
The scientific publishing industry in context
The liquid world of intangible products
At the turn of the 21st century, two books were published that captured the spirit of our times, exposing trends that are still shaping our culture today: No Logo by Naomi Klein (1999) and Liquid Modernity by Zygmunt Bauman (2000). In No Logo, Klein argues that our way of consuming goods and services was undergoing a profound transformation, in which brands acquired disproportionate importance. Klein defined this transformation as a form of social hallucination, since the materiality of products started to matter less than the ideas or concepts they evoked. This shift in the attribution of value from material to immaterial occurs within the broader cultural change described by Bauman (2000), where impermanence, disorienting speed, and fluidity define the new zeitgeist.
In this liquid world, where consumers are attracted by “experiences” and not by material reality, a new, intangible economy flourished, made distinct by its immaterial productive capital (Crouzet et al., 2022). This intangible economy is also characterized by the speed and ease of buying and selling products on the Internet. As Liu and Wang (2019) noted, “[f]rom the temporal dimension, e-communication [...] has tremendously accelerated the speed of commercial and monetary transactions, favoring borderless fluidity”—a phrase that could have been written by Bauman himself. In other words, the past decades have seen the rise of an economy largely dependent on products’ capacity to evoke perceptions of immaterial qualities in consumers, potentiated by the ease of selling these products. If that business model sounds familiar, it is because, as intangible businesses go, scientific publishing is something of an epitome.
The intangible economy in the (liquid) world of science publishing
Publishing scientific journals is an excellent business. The profit margins obtained by publishers such as Elsevier range between 36% and 40%, surpassing leading companies in the technology sector such as Apple, Amazon, and Google (Buranyi, 2017; Walter and Mullins, 2019). The open secret behind this success is that the business model of scientific publishers takes advantage of a substantial cost reduction and a circular production process. Funds, often from government agencies and universities, pay for the supplies, equipment, and salaries of scientists to carry out research. The results are then used to prepare manuscripts that may be published in journals, depending on the evaluation of peer reviewers, who work for free. The final product, the journal article, is sold to libraries that are often financed by governments or private funding organizations, closing a circle that is virtuous for scientific publishers but deeply onerous for scientists and the organizations that support them.
Alternatively, researchers may also pay from a few hundred to a few thousand dollars in article processing charges to the journals to make their papers Open Access for readers. This Open Access model has been gaining traction in recent years and now amounts to a significant portion of publishers’ profits (Butler et al., 2023). This system is usually presented as a better alternative to standard academic publishing and has the support of a number of important scientific and funding organizations. However, the final effect remains the same as in the traditional publishing system: the flushing of (often taxpayers’) money to publishers. To make matters worse, this mode of publishing gives publishers an even stronger incentive to maximize quantity rather than quality, spawning an industry of predatory Open Access journals.2 It also creates difficulties for researchers in developing nations, making participation in science more difficult, a problem likely to be exacerbated as publishing fees increase faster than overall inflation rates (Khoo, 2019).
Notably, publishers do not take any risks during the entire process. For example, if resources are invested in a line of inquiry that turns out to be a dead end, this will be a problem only for researchers and their institutions, not for journals. And, as we noted, publishers do not pay the main actors in this productive system (scientists who produce the research and review the manuscripts). If the intangible economy is marked by precarious labor relations (Liu and Wang, 2019), academic publishing takes this to a qualitatively different level. As Walter and Mullins (2019) argued, the transition from the time when journals were published by scientific societies to the current moment, dominated by profit-oriented scientific publishers, marks an involution of the scientific system, going from a symbiotic relationship to a parasitic one.
A study by LeBlanc et al. (2023) estimated the value generated by the free labor of peer reviewers at between US$1.1 and US$1.7 billion per year, which certainly helps to explain the high profit margins at which scientific publishers operate. Furthermore, LeBlanc et al. (2023) also commented on the absence of information in journals about the volume of work and time dedicated to peer review by volunteer researchers. Such an absence makes it harder to estimate how much of the publishers’ profits are due to free labor. In this system, manuscript review is essentially subsidized by the public or private employers who actually pay scientists.
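To make the arithmetic behind such estimates concrete, a minimal back-of-the-envelope sketch follows. The specific inputs (number of review reports per year, hours per review, and hourly wage) are our own illustrative assumptions, not parameters taken from LeBlanc et al. (2023).

```python
# Back-of-the-envelope value of unpaid peer-review labor.
# All inputs below are illustrative assumptions, not figures
# from LeBlanc et al. (2023).

def review_labor_value(reviews_per_year: float,
                       hours_per_review: float,
                       hourly_wage_usd: float) -> float:
    """Yearly value (in USD) of volunteer peer-review work."""
    return reviews_per_year * hours_per_review * hourly_wage_usd

# Assumed: ~8 million review reports per year, ~5 h each, at ~$40/h.
estimate = review_labor_value(8e6, 5, 40)
print(f"~${estimate / 1e9:.1f} billion per year")  # ~$1.6 billion
```

Even with fairly conservative inputs, the total lands in the billions of dollars per year, consistent with the order of magnitude reported by LeBlanc et al. (2023).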
The advent of the Internet has also benefited publishers, not only in terms of distributing papers but also in terms of operational costs. Currently, scientific publishers sell products that are literally immaterial, since most journals no longer print papers.3 In a sense, publishers simply follow the global trend of intangible economies (Crouzet et al., 2022), but they play that game better than most other businesses.
Collective delusion of scientists in the (intangible) world of scientific publishing
For journals and publishers to effectively surf the intangible economy, certain attributes are required to transform their products into a brand to be consumed by scientists. In general, massive advertising campaigns are needed to construct a brand (Klein, 1999), and this principle also holds, in its own way, for scientific journals. Buranyi (2017) quotes Robert Maxwell, one of the founders of Pergamon Press, saying that “We do not compete on sales, we compete on authors.” Behind such a maxim is the understanding that the best-known authors must be enticed to publish in specific journals to create prestige, and to construct and maintain a brand.
This setup works thanks to several siren songs built into the process, such as the positive feedback loop between journal prestige, which is usually measured by the journal’s impact factor, and the career value that these metrics bring to scientists. One could be excused for believing that scientists, in all their rationality and devotion to scientific ideals, would resist the stratagems used for brand construction. Alas, scientists are all too human. It is common to hear opinions that so-and-so is a top scientist because s/he has published in top journals. This is not to say that reputable journals do not have standards for scientific quality (although, in practice, such standards are far from perfect, with some evidence suggesting a weak link between studies’ methodological quality and journal impact factor [Saginur et al., 2020; Dougherty and Horne, 2022]). But the point here is that, in such discussions, it is not rare for the actual content of those papers to matter less than the journal in which they were published. It’s the brand, not the product. Grant and hiring committees are unlikely to look past these labels as well.
In fact, one of the great advantages of branding papers based on the journal in which they were published is that it provides a heuristic for decision-making amid the deluge of information. For researchers deciding which papers to read among several options dealing with the same topics, this heuristic may simplify the allocation of their time. For those sitting on grant or hiring committees, looking at the journals in which candidates published is a shortcut to evaluating their production without having to actually read and assess the papers from all candidates. In other words, the brand stands in for the work’s quality. Implicit in this use of journal brand is the assumption that choices made by top journals are based solely on scientific quality and rigor, although we wonder how many scientists actually believe that.
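Given how much weight this heuristic carries, it is worth recalling how simple the underlying metric is. The sketch below computes the standard two-year impact factor; the journal and all figures in it are made up for illustration.

```python
# Two-year journal impact factor: citations received in year Y to items
# published in years Y-1 and Y-2, divided by the number of "citable items"
# the journal published in those two years.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 3,000 citations in 2024 to its 2022-2023 content,
# of which 500 items count as "citable."
print(impact_factor(3000, 500))  # 6.0
```

A single ratio of this kind, averaged over an entire journal, ends up standing in for the quality of every individual paper the journal contains, which is precisely the branding logic at work.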
Ironically (and true to the publishers’ commitment to embracing free labor), most of the work in journal brand construction is provided by scientists, for free. Nowadays, this activity usually takes the form of researchers advertising their papers on social media—a practice that many journals explicitly (and sometimes forcefully) encourage (Box 1), even though its effects in terms of commonly used impact measures are questionable (Box 2). Of course, there is nothing wrong with promoting one’s work on social media. There is certainly something in it for the researcher: the more people view the work on social media, the higher the chances of the paper being read, and it may also increase the chances of it being used and cited by other researchers (but see Box 2). But of course, more views and citations also translate into a higher impact factor for the journal, attracting more submissions and publishing/subscription fees. This raises the question of who we are really working for when we freely spend our time and effort publicizing papers that we either paid journals to publish or for which the journals own the copyright.
Scientific marketing
Publishers are making increasing efforts to encourage scientists to disseminate their papers on social media platforms. This practice has “evolved” from something that depended largely on the authors’ willingness and a priori affinity for social media into something that is increasingly incorporated into the publishing process. Springer Nature (2024), for example, encourages authors to increase their articles’ visibility by offering several tips, including choosing a catchy title, engaging the authors’ institutions to produce press coverage, and using social media effectively. Some journals are going even further, as in the case of Wiley’s Annals of Neurology, which requires a “Summary for Social Media” from authors (https://onlinelibrary.wiley.com/page/journal/15318249/homepage/ForAuthors.html). The expressions used in such calls for active dissemination (e.g., “promotion,” “visibility,” and “make your post more appealing”) seem clearly more oriented toward creating hype and constructing a brand than toward stimulating critical engagement with the work. It remains to be determined how beneficial such strategies are in terms of increasing citations and journal submissions (see Box 2), but journals have nothing to lose, as they invest very little in such advertising campaigns, which are especially attractive to mid-tier journals.
Perhaps a more important question concerns the impact of such social media strategies on the way researchers engage with the scientific literature. In other words, do they promote careful reading and assessment of scientific information, or do they simply amplify “hype” around some fields and opinions, creating echo chambers and ephemeral engagement around “hot takes”? Our own experience with social media has given us the impression of something much closer to hyperbolic news and opinion cycles than to a serious and nuanced scientific conversation.
Costs and rewards of promoting papers on social media
In an editorial published in the journal Research and Practice in Thrombosis and Haemostasis, Cormier and Cushman (2021) described the benefits of disseminating papers on social media platforms such as X/Twitter, claiming that 7.5% of the Internet traffic the journal received came from that platform.
This trend has certainly been detected by other journals as well, as is evident from their investments in promoting social media engagement. However, the relationship between the use of X/Twitter to advertise scientific papers and the citations they obtain is uncertain. Chan et al. (2023) estimated that, in the field of economics, papers with at least one tweet received 16–25% more citations than papers without any. However, in the field of ecology, Branch et al. (2024) performed an experiment designed to test the causality between the impact of scientific papers on X/Twitter and their citations. They found no statistical differences in citations in Web of Science or Google Scholar but registered a significantly higher Altmetric attention score (as expected given their “intervention”). The authors concluded that social attention (Altmetric score) and our metrics of impact (imperfect as they are) travel along separate paths.
At this point, if no clear relationship between the use of social media and rewards (citations) is observed, why do scientific editors encourage the authors of their journals to disseminate their findings? It seems meaningless to put so much effort into something that ultimately does not even work, but maybe we are just looking at it in the wrong way. After all, if the goal is marketing to attract more submissions, getting free social media exposure is a good strategy on its own, and it is certainly worth “investing” all that free labor.
Artificial intelligence and the final countdown in the scientific world
The field of AI is seeing nothing short of a revolution with the new generation of LLMs, OpenAI’s ChatGPT being its poster child. These models are having a significant impact on many areas of human activity, and science is no exception. Tellingly, we can understand a significant part of how these tools are being used in science without the need to assess their actual value for advancing science.
These new LLMs were dropped onto a scientific culture already dealing with excess information in its literature. Worse still, this excess of scientific papers cannot simply be taken at face value but requires careful analysis to be used properly. Simultaneously, every researcher is under pressure to add more papers to the literature as quickly as possible—with a preference for contributing good papers, sure, but avoiding an empty CV being a more pressing need. So no one was surprised when a paper by Zhang et al. (2024) recently started making the rounds on social media after some users posted a screenshot of the paper’s Introduction section containing the phrase “Certainly, here is a possible introduction for your topic”—a sentence that clearly suggested the use of an LLM during the writing of the manuscript.4 This was far from an isolated case, and soon several other examples were being circulated in response.
While these examples illustrate sloppy uses of AI in writing research, there is evidence that LLM use in writing papers is much more widespread than explicitly acknowledged by authors (Gray, 2024, Preprint). In fact, the practice of using LLMs when writing papers and grants is actually gaining acceptance (Nordling, 2023; Requarth, 2023; Tregoning, 2023; Gruda, 2024), and you can even find guidelines for it (e.g., Seckel et al., 2024; and https://www.thetransmitter.org/from-bench-to-bot/). But regardless of how responsibly AI is used in the production of papers and grants, the underlying motivation often lies, as Messeri and Crockett (2024) put it, in “the impulse to produce more science, more quickly and more cheaply.”5
Although there are important shortcomings in current LLMs (not least their propensity to produce so-called hallucinations6), the restraint that such limitations should inspire is overridden by the models’ enormous potential to accelerate the production of papers. They are pitched (and already used) not only for writing papers after research is done, but also for writing grant proposals and code for analyses (Owens, 2023; Prillaman, 2023; Seckel et al., 2024). They are also being used for peer review and for carrying out literature reviews as part of the research process (Conroy, 2023; Nordling, 2023; Donker, 2023). Soon, we may have papers written by AI, based on research planned with the help of AI, with funding obtained using an AI-written grant, which is then read and evaluated by AI. At some point, we should start asking ourselves what is being lost in this process of automation and what is the point of further increasing speed while further reducing reflection. But right now, we seem too busy feasting on our newfound productivity.
And what about scholarly publishers—how are they impacted by LLMs? There have been expressions of concern here and there, as journals fear the explosion of submissions beyond what they can handle, with an increase in shoddy, low-effort papers (or even outright frauds) that nonetheless require high effort to parse, and the consequent increase in the (mis)use of AI for peer review (Conroy, 2023; Box 3). There is, of course, the expectation that scientists should put in some extra free labor and share the responsibility of dealing with this mess to keep journals going. In the words of Bernd Pulverer, head of scientific publications at EMBO Press, “This [emerging problem with AI] is not something that can be delegated entirely to journals” (quoted in Conroy [2023]). No mention was made about paying for the work required to address these problems, as it seems profits should be delegated entirely to journals.7
The potential (mis)uses of AI in science
A recent paper stated that “the perceived cognitive and material limitations of scientists [...] makes trusting AI tools and welcoming them into our communities of knowledge deeply appealing” (Messeri and Crockett, 2024). Such limitations will seem increasingly stark as we deal with the products of AI in the scientific literature: as the problem of literature volume worsens, AI tools will be in greater demand, creating a vicious circle. Or a virtuous circle, if you are a publisher or an AI company. Because it seems the solutions envisioned for the problems AI is aggravating all boil down to (you guessed it) using more AI.
Messeri and Crockett (2024) provide a good summary of the current visions (and their accompanying illusions) for the possible roles of AI in science. Two of the possible visions they discuss are particularly relevant in their potential interactions with the publishing industry: AI as Oracle and AI as Arbiter. The use of AI as an Oracle is a natural consequence of the huge volume of information produced every year, making it impossible for scientists to keep pace even within their specific topics of expertise. The Oracle function of AI will supposedly fix this problem because it can process, summarize, and predigest information from a large number of papers. In fact, reviewing the literature is already one of the activities being delegated to LLMs. Behind this trend is the implicit assumption that whole swaths of literature describing experiments and results can be compressed into short summaries without losing important nuance and that we can actually advance knowledge without bothering to engage with the complexity of available evidence.
Consider now the use of AI as an Arbiter, another category proposed by Messeri and Crockett (2024). While the Oracle function of AI allows for the generation of hypotheses and the summarizing of previous literature, the Arbiter persona evaluates research results and proposals. In a business model where quantity is of the essence, both for researchers trying to fatten their CVs and for journals that increasingly rely on Open Access models, boosting production is paramount. However, there is a bottleneck in this system: peer review. The motivation behind peer review as a moral duty of the scientific community, aiming to keep the wheel of science spinning, is quickly losing its force. In this scenario, the use of AI during the review process appears to be just a matter of time. And by “use” we mean the official use of AI—as its unofficial use is already happening (Chawla, 2024).
Both the Oracle and Arbiter personas of AI have the potential to produce what Messeri and Crockett (2024) called “monocultures of knowing,” in which “scientists falsely believe they are exploring the full space of testable hypotheses, whereas they are actually exploring a narrower space of hypotheses testable using AI tools.” This is a problem that, before the use of AI, was linked to the formation of homogeneous groups of researchers (in terms of, for example, sex, ethnicity, and world regions), which led to illusions of objectivity grounded in a lack of diverse perspectives. We are now on track to automate this process as well. The monocultures of the future may be those set by AI tools, whose views and judgments reflect the standpoint of their training data. This problem is exacerbated by the lack of transparency of AI companies regarding their training datasets, and it will likely worsen as models start to be trained on their own output—something that will likely happen sooner or later (Villalobos et al., 2022, Preprint).
How will these potential biases effectively influence manuscript evaluation by tireless AI reviewers? Can we expect a judgment bias of Arbiter AI that would worsen publication bias? Maybe it will pave the way for a more efficient and fair evaluation system. Or maybe our naïve fascination with and careless application of these new AI tools will only come back to embarrass us. From outside the hype, we scientists look like Mickey Mouse as the sorcerer’s apprentice in Disney’s classic film Fantasia (1940), making a mess while wielding the sorcerer’s magic without understanding what it can and cannot do, and what may come of its use.
The situation may seem dire for top journals, as they have a brand to maintain without which they lose their value. However, for publishers as a whole, the rise of LLMs brings a cash cow. Think of all the article processing charges they will collect from the avalanche of half-baked papers to be submitted even to the most predatory journals. At the same time, if top journals succeed in keeping their heads above water, their human curation activity—backed by the aura of exclusivity provided by their ever-higher rejection rates—will be in ever higher demand.
Grim vistas: spirits in the (in)material world
The future looks bleak for the scientific literature. For a long time, we have dealt with a viciously designed publishing system that is kept alive by the requirements of our careers and by our hunger for status. Both these motivations lead to a culture that often ends up valuing productivity for its own sake, seeing papers as ephemeral products whose branding (based on the journal they are published in) matters more than their actual content for most purposes. These same drives are now synergizing with the advent of LLMs, creating a vicious circle of uncontrolled quantity and speed that will fill the pockets of publishers and AI companies, and will certainly help advance some careers, but whose benefits for science as a whole are dubious at best, and plain scary at worst.
Or maybe we, the authors, are just like two bitter artisans expounding our fears amid an industrial revolution that will bring forth a bigger and better scientific enterprise. This may be true, but it is also true that whenever automation comes, something is lost. Some losses to automation may be good riddance, but as we replace vital parts of scientific practice with AI tools, at some point, we may find out that we are no longer doing science. If our main goal when practicing science is to hack our way to success, and if all that is left is brand-building and career goals, we run the risk of ceasing to be transformed by what we do. And it can happen so smoothly that we may not even notice the moment we stop pursuing science and start pursuing publications (Mattiazzi and Vila-Petroff, 2021). Is it not the essence of our craft to master the use of reason and evidence to inch closer to the truth? Yet, as we automate it away, the next generations of scientists may no longer be able to clearly organize and express their ideas on their own,8 to critically assess studies and evidence and reach their own conclusions, or to devise their own questions and plan their own investigations, even though we might not realize the loss while we admire how productive we are.
Framing the challenges for any aspiring solution
This is the perfunctory section of every text proclaiming the end of times, where we present our vision for solving all these seemingly unsolvable problems, for moving all those unmovable objects, and for resisting all those irresistible forces. In other words, this is where we try not to look like nihilists. Alas, we have no answers, just questions. Proposed solutions for reforming the publishing system and regulating the use of AI abound, but no silver bullets are lying around. The best we can do is help frame the challenge and highlight possible steps to mitigate some facets of the problem—even if no current proposal can address our systemic problems in their entirety, and some of the problems have no efficient solutions in sight.
What barriers need to be overcome by any proposal to improve this situation? In our view, there are four main challenges to overcome. The first challenge is inertia. Getting people to act and make changes is challenging in itself. In the case of our relationship with the scientific literature and publishing business, inertia has the added weight of a prisoner’s dilemma that makes not acting a rational choice career-wise. Most proposals for reforming academic publishing involve taking actions that will, in one way or another, impact one’s productivity in terms of the number of published papers.
This is the case, for example, with embracing preprints and alternative forms of peer review and curation. Because our careers depend on publishing papers (and idealism doesn’t pay the bills), such actions can only be safely taken by the whole community. Anyone who goes down that path alone is doomed to see the gates of academia closing on them. Not only that, but isolated actions are bound to be ineffective anyway—it is like trying to move a block of concrete through Brownian motion. The problem is that concerted action will only come with a cultural change in where we place value and in how we assess researchers for career progression, which, in turn, will only happen with concerted action.
Such a cultural change is likely the only way to maintain a sustainable scientific system—not only in terms of metrics but also in terms of providing the immaterial conditions necessary for scientists to do their best work. There is plenty of evidence of a high incidence of mental illnesses in academia, such as anxiety and depression, which has been linked to the extreme pressures that students and early-career academics experience in an ecosystem dominated by the publish-or-perish paradigm (Forrester, 2021). The mental health of advisors who are empathetic toward their students’ suffering can also be affected, generating an unhealthy work environment that is simply not conducive to high-quality work. We can wonder how long this scenario can be sustained without a significant systemic reckoning. The stress, low salaries, and dim future prospects, together with the effects of the pandemic, are making it increasingly less attractive to work in research; in the case of Brazil, this has translated into a 12% reduction in enrollment in graduate programs between 2019 and 2022 (de Oliveira Andrade, 2024). This should be cause for concern even for scientific publishers, as they are essentially contributing to undermining the sustainability of their own businesses; mankind has witnessed many examples of economic activities that have disappeared or decreased significantly due to the abusive exploitation of resources.9
Walter and Mullins (2019) note that the parasitic relationship of profit-oriented publishers with the scientific community is in large part a consequence of the metrics adopted for measuring researcher quality. In fact, the scientific career has fallen prey to a clear case of Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. This situation requires, before anything else, sincere self-criticism from the scientific community. Importantly, this is not just a matter of idealism—it can have real and practical consequences. For example, Park et al. (2023) (further discussed in Box 4) reported a significant decline in disruptive studies from 1945 to 2010 and commented that, despite the large increase in scientific productivity in terms of papers and patents, the number of disruptive studies remained nearly stable. This indicates that we have been very good at increasing the number of papers produced, but not at increasing the rate of significant advancements. There may be a fixed carrying capacity for disruptive research. Or maybe science just got harder as the low-hanging fruits were picked, leaving us to chase smaller effects and solve more complex problems. But maybe, just maybe, we have gone too deep into the rabbit hole of our own metrics, caught in our self-made Goodhart trap, and simply become collectively less capable of asking bold and relevant questions and doing what it takes to answer them. This possibility raises unsettling questions: why should funding agencies allocate (often public) money to such a scientific system? A system that, as discussed by Park et al. (2023), seems dedicated to generating ever-narrower slices of knowledge, a strategy that benefits individual careers, but not necessarily scientific progress as a whole.
Liquid universities
Other problems in academia can also be associated with the paradigms of liquid modernity. Batko (2014) coined the term “liquid universities” to draw attention to the pressure that universities face to become mere professional educational centers, offering students a handful of practical skills that will be valuable in the labor market. The catch is that the labor market is liquid and, as a consequence, it is hard to predict which skills universities should prioritize to guarantee their students a ticket to a good future job. In this process, the status of universities is being increasingly undermined, as their solid foundations melt away and their autonomy slowly disappears (Batko, 2014). As money becomes an ever more central issue, research is also affected by a preference shift toward projects that are tailored to the funding systems’ metrics and capable of producing a steady flow of papers—which in turn can bring a steady flow of grant overheads.
While making causal statements about this topic is tricky, it is remarkable to note the association between the entrenchment of this system and the steady decline in disruptive science (that is, scientific contributions that render existing knowledge obsolete) between 1945 and 2010, as analyzed by Park et al. (2023). Using a measure (the CD index) that quantifies the disruptive or consolidating nature of a paper, they showed declines of 90% and 100% in disruptiveness for the social and physical sciences, respectively. The authors concluded that to promote disruptive science, universities should shift their focus away from quantity, start to reward quality, and provide some kind of immunization against the publish-or-perish paradigm.
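To make the metric less abstract, here is a minimal Python sketch of the core CD index computation, as we understand it from the formulation used by Park et al. (2023); it is a simplified reading that ignores details such as the citation time window, and all paper identifiers are made up.

```python
# Core of the CD index: among later papers citing the focal paper or its
# references, those citing the focal paper alone score +1 (disruption),
# those citing it together with its references score -1 (consolidation),
# and those citing only the references score 0.

def cd_index(focal_id: str,
             focal_refs: set[str],
             later_papers: dict[str, set[str]]) -> float:
    """later_papers maps each subsequent paper to the set of works it cites."""
    terms = []
    for cited in later_papers.values():
        f = focal_id in cited             # cites the focal paper?
        b = bool(cited & focal_refs)      # cites the focal paper's references?
        if f or b:
            terms.append(-2 * f * b + f)  # +1, -1, or 0 per the cases above
    return sum(terms) / len(terms) if terms else 0.0

# Toy example: two papers cite the focal work alone, one cites it along with
# its references, giving CD = (1 + 1 - 1) / 3 = 0.33 (mildly disruptive).
later = {"p1": {"focal"}, "p2": {"focal"}, "p3": {"focal", "r1"}}
print(round(cd_index("focal", {"r1", "r2"}, later), 2))
```

In other words, a paper whose citers no longer bother citing its predecessors scores as disruptive, while one that is always cited alongside the work it builds on scores as consolidating.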
One may hope that advances in LLMs will help address this problem, speeding up the rate of breakthroughs through their productivity-enhancing effects. However, we are not sure how much speed can be gained with LLMs in terms of actual significant research (as opposed to the mass production of stamp-collection studies). Our guess would be that such improvements would be very limited not only because breakthroughs are generally unpredictable but also because they usually involve significant engagement with the subject matter—something that may actually become rarer as we delegate tasks such as literature reviewing, data analysis, and articulating our ideas in prose to LLMs.
Addressing the unethical publishing system also stumbles on a second challenge: the fact that journals play a fundamental role in scientific ecosystems. Journals are especially important for researchers who have no resources for effective networking or who do not have a social media platform and yet need to find a way to have their work stand out amid a sea of scientific outputs and break the attention bubble to be considered by the research community. While journals have all sorts of biases in their decisions about what to publish, it is not clear whether replacing journals with, for example, preprints and social media platforms would reduce bias. An alternative would be to keep journals but publish them under a fairer system. The problem is that this would require establishing and supporting ethical journals, and these would have to compete with established journals based on the current system, which is a significant challenge.
What can we do to break this circle of inertia and elicit change while preserving what is good about scientific journals? At an individual level, we believe that the most balanced strategy is the one outlined by Receveur et al. (2024), which involves favoring more ethical journals, such as those controlled by public institutions, non-profit organizations, and scientific societies. More impactful measures would require the participation of our scientific institutions. One thing that needs to change is the way we assess scientific output. Some initiatives have attempted to address this issue and create better standards, such as the Declaration on Research Assessment (DORA). DORA offers a framework for discussing research evaluation beyond single metrics, such as the impact factor (DORA, 2024). The declaration discusses the limitations of these metrics and highlights the importance of interpreting them with care (DORA, 2024). Some journals have even backed this initiative and mention it on their web pages. However, the practical impact of this initiative is still limited.
Scientific associations and societies also need to step up. They should work with local research communities to raise awareness of publishing issues and stop handing over their journals to for-profit publishers. Many have the resources and expertise to open and manage journals in a not-for-profit format, with any occasional profit redirected to the research communities they serve. This would help preserve the best features of journals, while creating a more ethical system. It is undeniable that such journals would have a hard time competing with established, high-profile, for-profit journals. However, if funding agencies and academic institutions actually embraced evaluation principles such as those of DORA, this would help level the competition between journals since the quality of the work itself would start mattering more than the brand of its journal.
While implementing these measures seems difficult in practice, the remaining two problems on our list are even harder to tackle. The third and most serious challenge we see for enacting change in our current system—often ignored in such discussions—is that publishers currently own the copyright of a significant part of the scientific record. Any proposal to reform or replace journals must face the fact that publishers’ grip on scientific communication no longer depends only on our publish-or-perish culture. We have been giving our records to them for free for a long time. Imagine we create a more sustainable, effective, and ethical alternative to the current system—say, something like a universal version of Latin America’s SciELO (https://scielo.org/en/about-scielo/program-publication-model-and-scielo-network/). If researchers embrace such an alternative (or any other) and start migrating away from journals, publishers will eventually go under. But if they do, what happens to all the records they hold? This will likely lead to case-by-case negotiations, where richer and more influential institutions will probably be in a better position to reach fair agreements (see Walter and Mullins [2019] for an example).
And then, on top of all these challenges to reform the current publishing system, there is the challenge of preventing the indiscriminate use of LLMs to amplify all the bad incentives of the current system. Can we really expect AI companies to regulate the use of their products? Can journals and universities regulate the use of these models instead? We doubt this, as there are no clear ways to enforce limits on the use of these tools. Will researchers restrain themselves and only use these tools with care and deliberation instead of trying to maximize production at the cost of rigor? Under the paradigm of publish or perish that governs scientific activity, there is no clear reason to expect such restraint.
These are daunting challenges, and overcoming them requires nothing short of a cultural revolution within science. Can we actually bring about such a cultural shift? Your guess is as good as ours. But now it is about time to conclude this long rant. After all, we have papers to submit.
Acknowledgments
Eduardo Ríos served as editor.
The authors would like to thank Duane Barros Fonseca, Horacio de la Iglesia, and Sabine Pompeia for reading and commenting on earlier versions of this text. Of course, they are not responsible (nor do they necessarily agree) for any of the opinions expressed in this paper.
This work was supported by the Coordination for the Improvement of Higher Education Personnel (CAPES; finance code 001). J.M. Monserrat is a productivity research fellow from the National Council for Scientific and Technological Development (CNPq) (process number PQ 307888/2020-7).
Author contributions: T.F.A. França: Conceptualization, Investigation, Writing - original draft, Writing - review & editing, J.M. Monserrat: Conceptualization, Investigation, Writing - original draft, Writing - review & editing.
Footnotes
Here we are assuming that the purpose of the scientific literature is to allow efficient and effective scientific communication to support the collective pursuit of knowledge—a point that should be commonplace, but that may need stating nowadays, as Mattiazzi and Vila-Petroff (2021) have emphasized.
We use the term “predatory publishers” in the way it is commonly understood, i.e., referring to Open Access journals that try to maximize profits while disregarding quality control. That said, as far as the focus on profits as a priority goes, and as Amaral (2018) noted, “all publishers are predatory - some are just bigger than others.”
Importantly, this is not the only sense in which papers have become immaterial. There is another deeper sense related to Klein’s ideas about brands, as discussed below.
That article, with its telltale GPT opening, was still online when we started writing this piece on March 27, 2024, but it has since been retracted by the publisher.
Importantly, the same pressure also applies to science education. As a grim example, the state of São Paulo, Brazil, recently announced a pilot plan to use AI in the production of didactic materials for elementary and high schools (Martins, 2024).
It is worth noting that we do not aim to single out EMBO Press; Pulverer’s statement simply seems to reflect a general view among publishers. In fairness, EMBO Press is part of the European Molecular Biology Organization, which is a not-for-profit organization and is unusually open about its finances (see https://www.embo.org/features/financial-transparency-at-embo-press).
The case of using AI to write drafts is a particularly telling one. Most scientists have had the experience of achieving greater clarity of thought after trying to put their results into writing. Such gains in understanding are usually a consequence of the effort required to articulate a result or an argument we thought we had already pinned down. Trying to automate this away is like going to a gym and asking a robot to exercise for you—it ignores where the value in the process really lies.
An example is the fishing industry, where overfishing has sharply limited the economic activity and spurred its replacement by other productive activities: according to an FAO report (https://www.fao.org/publications/home/fao-flagship-publications/the-state-of-world-fisheries-and-aquaculture/en), aquaculture surpassed capture fishing in 2022.
Author notes
Disclosures: The authors declare no competing interests exist.