The troubled institution of science
“Is the point of research to make other professional academics happy, or is it to learn more about the world?” —Noah Grand, sociology professor, UCLA
“Science, I had come to learn, is as political, competitive, and fierce a career as you can find, full of the temptation to find easy paths.” — Paul Kalanithi, neurosurgeon and writer (1977–2015)
Vox conducted a very interesting survey and wrote a long, insightful article: The 7 biggest problems facing science, according to 270 researchers. Excerpts:
In the past several years, many scientists have become afflicted with a serious case of doubt — doubt in the very institution of science.
As reporters covering medicine, psychology, climate change, and other areas of research, we wanted to understand this epidemic of doubt. So we sent scientists a survey asking this simple question: If you could change one thing about how science works today, what would it be and why?
We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science.
The scientific process, in its ideal form, is elegant: Ask a question, set up an objective test, and get an answer. Repeat.
But nowadays, our respondents told us, the process is riddled with conflict. Scientists say they’re forced to prioritize self-preservation over pursuing the best questions and uncovering meaningful truths.
Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public.
“As long as things like publication quantity, and publishing flashy results in fancy journals are incentivized, and people who can do that are rewarded … they’ll be successful, and pass on their successful methods to others.”
Many scientists have had enough. They want to break this cycle of perverse incentives and rewards. They are going through a period of introspection, hopeful that the end result will yield stronger scientific institutions. In our survey and interviews, they offered a wide variety of ideas for improving the scientific process and bringing it closer to its ideal form.
Academia has a huge money problem
Scientists’ gripe isn’t just with the quantity of funding, which, in many fields, is shrinking. It’s that the way money is handed out puts pressure on labs to publish a lot of papers, breeds conflicts of interest, and encourages scientists to overhype their work.
Grants also usually expire after three or so years, which pushes scientists away from long-term projects. Yet as John Pooley, a neurobiology postdoc at the University of Bristol, points out, the biggest discoveries usually take decades to uncover and are unlikely to occur under short-term funding schemes.
Some of our respondents said that this vicious competition for funds can influence their work. Funding “affects what we study, what we publish, the risks we (frequently don’t) take,” explains Gary Bennett, a neuroscientist at Duke University. It “nudges us to emphasize safe, predictable (read: fundable) science.”
Finally, all of this grant writing is a huge time suck, taking resources away from the actual scientific work.
Too many studies are poorly designed. Blame bad incentives.
Scientists are ultimately judged by the research they publish. And the pressure to publish pushes scientists to come up with splashy results, of the sort that get them into prestigious journals.
Some of this bias can creep into decisions that are made early on. Many of our survey respondents noted that perverse incentives can also push scientists to cut corners in how they analyze their data.
“I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,” writes Jess Kautz, a PhD student at the University of Arizona. “And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work.”
Increasingly, meta-researchers (who conduct research on research) are realizing that scientists often do find little ways to hype up their own results — and they’re not always doing it consciously.
“The current system has done too much to reward results,” says Joseph Hilgard, a postdoctoral research fellow at the Annenberg Public Policy Center. “This causes a conflict of interest: The scientist is in charge of evaluating the hypothesis, but the scientist also desperately wants the hypothesis to be true.”
“I would make rewards based on the rigor of the research methods, rather than the outcome of the research,” writes Simine Vazire, a journal editor and a social psychology professor at UC Davis. “Grants, publications, jobs, awards, and even media coverage should be based more on how good the study design and methods were, rather than whether the result was significant or surprising.”
“We’ve gotten used to working away in private and then producing a sort of polished document in the form of a journal article,” said Timothy Gowers, a mathematician and Fields Medalist. “This tends to hide a lot of the thought process that went into making the discoveries. I’d like attitudes to change so people focus less on the race to be first to prove a particular theorem, or in science to make a particular discovery, and more on other ways of contributing to the furthering of the subject.”
“I think the one thing that would have the biggest impact is removing publication bias: judging papers by the quality of questions, quality of method, and soundness of analyses, but not on the results themselves,” writes Michael Inzlicht, a University of Toronto psychology and neuroscience professor.
Judith Curry note: New Scientist just published a relevant article, Evolutionary forces are causing a boom in bad science.