by C. Glenn Begley, Alastair M. Buchan & Ulrich Dirnagl
Illustration by David Parkins
“[…] Institutions must support and reward researchers who do solid — not just flashy — science and hold to account those whose methods are questionable.”
“Amplifying these pressures is a human prejudice in favour of our own ideas. There is a very real temptation to ignore a result that does not conform to our preconceptions, or to recast it so that it does. Data-dredging is used to find statistically significant results that justify a publication. Sound practices such as blinding, multiple repeats, validated reagents and appropriate controls are dismissed as luxuries or nuisances.
Research institutions contribute to and benefit from these perverse incentives. They bathe in the reflected glory of their faculty; they trumpet breakthroughs published in top-tier journals, lauding achievements to the media and donors. Some even pay investigators for publications. Many require that investigators generate their salary from research grants.
An anonymous survey of around 140 trainees at the MD Anderson Cancer Center in Houston, Texas, found that nearly one-third had felt pressure to prove a mentor’s hypothesis even though their experimental results did not support it, and nearly one-fifth had themselves published results they considered less than robust [5]. Nearly half knew of mentors who required lab members to publish a high-impact paper to complete training in their labs (see ‘Pressured findings’).”
“The core instinct of scientists — scepticism — is punished by the current system. Institutions have a duty to reform it. They must shoulder their responsibility for training graduate students and postdoctoral fellows, for supporting the scientific behaviour of their faculty members and for the knowledge that emanates from their endeavours.”
“Many labs already comb through data and methods as a group before submitting a paper. Such discussions should be broadened and formalized across an institution. Regular department and cross-department meetings should be established to dissect manuscripts in preparation. Methods and processes (rather than conclusions) would be debated just as a competitor’s paper might be critiqued in a journal club. Primary research material would be available. This practice is roughly analogous to the ‘Morbidity and Mortality’ conferences that are routine in hospitals, settings where working hours are also intense.
Regular critique sessions help scientists to learn to defend their science without feeling defensive. Investigators publicly hold each other to account, and trainees learn what to demand of their own research. Anxieties can be raised informally, highlighting institutional weaknesses and systematic errors. The practice also puts a short-term focus on what has traditionally been a long-term reward: a reputation for careful science.”
“Institutions should find ways to deter non-compliance with guidelines, poor mentoring and scientific sloppiness. Faculty members with poor records should face loss of laboratory space and trainees, decreased funding and potential demotion. Conversely, faculty members who excel as mentors and careful experimentalists should be rewarded. Appropriate metrics should be developed so that promotions are based on robustness and high-quality mentoring, rather than simply on high-profile publications [6]. Surveys such as that conducted at MD Anderson exemplify one way in which administrators can gain the insight necessary to improve the research environment. Institution-level metrics could help to monitor overall performance and remind all researchers and administrators of their responsibility to the scientific community.”
“There will not be one ideal solution. Faculty members, trainees and administrators will need to come together for honest, difficult discussions to restructure institutions. Neither scientists nor institutions should engage in mere box checking; new practices must restrain sloppiness while interfering only minimally with the many scientists who are behaving well.”
“Nothing an institution can do will prevent misconduct altogether. This is not the goal. Rather, it is to support the work of well-meaning scientists, to reduce the waste from biased results, and to relieve some of the pressures that encourage sloppy science.”