Many scientists find themselves mystified as the electorates in their countries choose leaders and governments that appear at odds with the voters’ self-interest. This is especially true as we watch less affluent portions of the population vote for people whose policies do nothing to ensure better universal health care or education. At the same time, much of the scientific community has chosen to fall in line with the stranglehold that high-profile journals have on our supposedly merit-based publication and promotion processes. The paper by França and Monserrat in this issue (França and Monserrat, 2024) raises new concerns about the integrity of our publication and promotion processes as artificial intelligence (AI) makes increasing inroads into our worlds, both in science and in general. It is a must-read for all academic scientists, particularly those of us in the life sciences, and makes a number of important points about how scientists, following broader social trends, are often more influenced by the “brand,” i.e., the journal, than by the caliber of the work when judging a researcher or a career.

Most academic scientists working today understand that the merit of a paper is not determined by the rejection rate or the impact factor of the journal that publishes it (Marder, 2006; Marder et al., 2010). That said, I was horrified to read a recent blog post in eScienceinfo giving advice to young scientists on giving a good journal club presentation that starts with the assertion that the first step is to find a paper published in a high-impact journal such as Nature, Science, or Cell. This was published at a time when many senior and junior investigators complain that many high-profile papers are less informative and trustworthy than others published in lower-impact journals that still strive to publish the truth rather than novelty for its own sake. Along these lines, I recently had a conversation with a former editor of a high-profile journal who said that her goal was always to find a paper with novelty, and that she cared less about whether its findings would eventually be validated. Her justification was that there was plenty of capacity in the scientific workforce to fill in the missing pieces and eventually get it right, but that sparking new lines of investigation was so important that it outweighed the value of the slow and steady search to “get it right.”

This editorial philosophy obviously enhances the “brand” of the journal, but potentially at great cost to our trainees. I have personally met numerous graduate students and postdocs who have spent months or years attempting, and failing, to replicate data in a high-profile paper. This experience contributes to the disillusionment that causes so many of our best and brightest to leave the field, and it wastes our precious scientific resources. And of course, we all know that the barriers to publishing failures of replication are very high. It goes without saying that honest errors are bound to occur whenever scientists work at the frontiers of knowledge. They are likely to occur more often when a finding is truly new, but that is all the more reason we should redouble our efforts to “get it right.”

A brand that glorifies novelty is destructive in another way. True scientific breakthroughs do not come frequently, and they are often not recognized as breakthroughs when first reported, because they may not be truly understood or appreciated within the conceptual framework of that month or year. Sometimes it just takes a while for a field to incorporate a new idea or finding into its collective ethos. This means that journal editors’ search for novelty may surface many papers that overstate their claims without being truly novel. Of course, it is often argued that the literature is so vast that the prestige journals benefit scientists by pointing them to work worth reading, with the assumption that papers published in lesser journals may not be worthy of the field’s attention. But the prestige journals benefit financially from the present system, and it is a mistake to believe that they are motivated only by altruistic desires to help their readers navigate a large literature.

França and Monserrat (2024) make a number of points about the influence of AI on science and scientists. What they do not mention, however, are two issues that I encounter working with trainees in academic settings. Our students now view their phones, with all of their search capabilities, as extensions of their brains. Obviously, it is wonderful that they have access to the world’s knowledge in their back pockets! But their reliance on their electronic devices has two very deleterious consequences.

First, our younger students seem to have lost some of their ability to remember and reason with what they have studied, in comparison with students 40, 30, or 20 years ago. This is totally understandable, as they have the sense that they can always “look it up.” And that is true. But what they fail to appreciate is that having a new idea requires thinking with, and synthesizing, knowledge that they have stored in their biological brains. The human brain has yet to achieve a mind-meld with computer knowledge, and accessing what is known requires knowing what questions to ask of the phone or the laptop. At this moment in time, the vast power of our stored knowledge cannot replace a human brain thinking creatively with remembered principles, data, and experience. Maybe this will change in the future, but as of 2024, it is still important to know stuff to have a new thought!

Second, despite the fact that our trainees have the world’s knowledge at their fingertips, they are often very poor at finding what is relevant to their own work. Agreed, the amount of knowledge is immense, and our trainees are adrift in literatures too large for anyone to parse successfully. Several times in the last few years, as we were writing a paper, I asked trainees to search the literature to see whether anyone had done something directly relevant to an aspect of our work. On several occasions, they failed to find papers from the 1970s and 1980s that I was fairly sure had to exist, based on what I remembered of the work being done many years ago. I then did my own search and very rapidly found the papers I was confident were there, from the era when people were studying many of the fundamental properties of ion channels. It turned out that I was searching by a very different process than my trainees. I started with the names of people I thought might have studied the process of interest, and I used PubMed. My trainees started with Google Scholar, which is a wonderful tool but ranks results by its own sense of what is important, and it often fails to surface relevant papers published in 1986 with only 65 citations! Subsequently, my trainees started using me as their primary search engine, asking who might have done the early work, and this helped them find the papers that Google Scholar did not initially show them. I worry that when my generation is gone, so too will be the cadre of people who lived through the first explosion of research in biophysics and neuroscience and who preserve that older knowledge.
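For trainees who want to try the author-first, date-bounded style of search described above, the sketch below shows one way to script it against NCBI’s public E-utilities interface to PubMed. It is a minimal sketch under stated assumptions, not a prescription: the function name, the example author, and the topic term are illustrative choices, not the actual searches discussed in the text.

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_author_search(author, first_year, last_year, topic=None, retmax=20):
    """Search PubMed by author and publication-date range via NCBI E-utilities."""
    # Build a PubMed query: the [au] field restricts to an author,
    # and the [pdat] range restricts to a span of publication years.
    term = f"{author}[au] AND {first_year}:{last_year}[pdat]"
    if topic:
        term += f" AND {topic}"
    # esearch returns the PubMed IDs matching the query.
    r = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax},
    )
    ids = r.json()["esearchresult"]["idlist"]
    if not ids:
        return []
    # esummary returns publication dates and titles for those IDs.
    s = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(ids), "retmode": "json"},
    )
    result = s.json()["result"]
    return [(result[i]["pubdate"], result[i]["title"]) for i in ids]

# Hypothetical example: early ion-channel papers by a named investigator.
for pubdate, title in pubmed_author_search("hille b", 1970, 1989, topic="ion channel*"):
    print(pubdate, title)
```

The point of the explicit date range is exactly what a relevance-ranked engine obscures: it asks directly what a given person published in a given era, regardless of how many citations the papers have accumulated.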

I feel sad for our present trainees. When I was a beginning Ph.D. student, the literature was small enough that I could go to a physical library and find “all” that was known relevant to a specific problem. As a graduate student, I could more or less get my mind around entire fields. As I finished my Ph.D., I was a mini-expert on the literature relevant to my work, and that helped give me a sense of ownership of my work. As powerful as the phones are in my trainees’ pockets, with their unparalleled abilities to access the world’s knowledge, I am not sure that many of our trainees feel themselves to be experts, and this may contribute to some of them experiencing less satisfaction in their work.

The 2024 election in the United States has brought to the forefront of our consciousness the ubiquitous force of social media in all aspects of our lives. The erosion of our ability to know when someone is telling “the truth” about anything would have been inconceivable even 30 years ago. How dare we, as scientists, decry the general population’s distrust of science when we teach our junior scientists to follow fads and chase citations? As we increasingly forgo “scholarship,” it is easy to understand why the general populace cannot distinguish between the natural evolution of scientific progress, with its concomitant mistakes, and overt fraud. I worry that the greatest liability of our increasing use of AI will be that we, as scientists, lose our own ability to distinguish fact from fiction. For many years, I have looked at raw data to see what they show. In many instances, that is no longer possible. As more and more complex data analysis tools enter our fields, it becomes harder and harder to find ground truth and to know that our scientific houses have foundations strong enough to withstand the AI hurricanes of the future.

Eduardo Ríos served as editor.

Research in the author’s laboratory is supported by National Institute of Neurological Disorders and Stroke grant R35 NS097343.

França, T.F.A., and J.M. Monserrat. 2024. The artificial intelligence revolution...unethical publishing: Surfing the liquid world and its intangible economy. J. Gen. Physiol. 156:e202413654.

Marder, E. 2006. Rejecting arrogance. Curr. Biol. 16:R70.

Marder, E., H. Kettenmann, and S. Grillner. 2010. Impacting our young. Proc. Natl. Acad. Sci. USA. 107:21233.