As you snuggle by the fire this holiday season to watch Love Actually, you should know that you’re also viewing the work of a published academic neuroscientist. That’s right – actor Colin Firth is listed as an author on a 2011 brain imaging study in the journal Current Biology. And it doesn’t take an insecure graduate student like me to suspect that Firth wasn’t the one pulling all-nighters in the laboratory.
I’ve been in research for about seven years now. But if you search my name in PubMed, the life sciences research database maintained by the US National Library of Medicine, precisely one scientific paper will pop up – a manuscript from my current research group, published this spring. I’m also pretty far down on the author list, which reflects my contribution relative to my colleagues on this paper – while I helped with some of the writing, the data collection and most of the analyses were performed by others.
Authorship in science is tricky. In some laboratories, it’s a bit of a taboo topic. Ask your average scientist if they’ve witnessed abuses in authorship, and they’ll likely be brimming with stories – from people being “gifted” an authorship they don’t truly deserve, to hard-working (often junior) scientists being wrongly shafted by their colleagues. These stories are rarely discussed among lab mates, and almost never between junior and senior investigators.
And then there are the extremes, like the 2001 Nature paper on the sequencing of the human genome, which boasted 2,900 authors, and the 2012 paper reporting the discovery of the Higgs boson, which listed a whopping 3,171 co-authors.
So where exactly do we draw the line between who has made a meaningful contribution to a project and who is better suited for the “acknowledgments” section?
Authorship: a brief history
From the late 1600s to the early 1920s, sole authorship on scientific papers was the norm. It wasn’t until the 1950s that a few co-authors began trickling in. By the 1980s, authorship among multiple colleagues and collaborators was standard – and expected.
Today, getting one’s name on papers keeps scientists afloat. A scientist’s publication record is proof that they’re an expert in a particular field; the more papers they have out, the more productive they appear. After all, publications are what keep the grant money rolling in. Authorship is a form of scientific currency, and only the rich remain buoyant in today’s “publish or perish” culture.
Are there any standards?
Efforts by the International Committee of Medical Journal Editors have resulted in a set of recommendations, now adopted by a number of medical journals. According to these criteria, those listed as authors should have contributed “substantially” to the study’s design, data collection, or data analysis; drafted or revised the article; and approved the final version.
Guidelines set out by the American Psychological Association are similar. While the APA recognises that not all authors must have written the manuscript, everyone listed should have made substantial contributions to “formulating the problem or hypothesis, structuring the experimental design, organising and conducting the statistical analysis, [or] interpreting the results”.
More commonly, individual journals detail their own authorship guidelines – and certain journals are more stringent than others. A journal to which I recently submitted a paper required each co-author to submit a separate form. Among other information, we had to “attest to having provided substantive intellectual contribution” in at least one of the following areas: study design, data collection, data analysis, interpretation of results and preparation of the manuscript.
As another option, many journals allow – and often encourage – authors to include an “acknowledgments” section to properly cite individuals who do not otherwise meet authorship criteria.
Unfortunately, these are only guidelines. There is no law of the land in scientific authorship, and nothing prevents a person from checking off any or all boxes on such forms. Journal editors who receive a new manuscript for review cannot be certain whether or not a research group has followed these recommendations. And even if they could, what could they do about it?
Doing the math
Despite having some of the most direct, intimate involvement with data collection, analysis and writing, graduate students like me are significantly less likely than a postdoc or principal investigator to be listed as authors on a paper. After all, no one knows my name, and I’m nowhere near being established in my field – I’m not exactly an asset to a list of authors quite yet.
Meanwhile, a 2005 paper by Larry Claxton reported that in chemistry, the average number of publications per investigator rose from 4.9 to 10.8 articles per two-year period over the past few decades. Twenty chemists in particular managed to be authors on more than 32 papers per year – roughly one new paper every 11.3 days. Were they faithfully following the proposed authorship guidelines, such a record would be simply impossible.
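For the curious, here is the back-of-the-envelope arithmetic behind that last figure (assuming a rate just above 32 papers a year, since the numbers above are rounded):

$$\frac{365\ \text{days/year}}{32.3\ \text{papers/year}} \approx 11.3\ \text{days per paper}$$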
But scientists, be warned: abuse of authorship can result in a journal retraction. Retraction Watch is a blog run by Ivan Oransky and Adam Marcus that reports on scientific papers that have been pulled from journals for one reason or another. A quick skim of the “authorship issues” tag reveals a range of retractions, from researchers being denied credit to papers submitted without the knowledge of all their authors. It’s a wonder that authorship abuse continues despite this constant, looming, and humiliating threat.
So what’s the story with Firth?
Here’s the lowdown: on December 28, 2010, Colin Firth guest-edited an episode of BBC Radio 4’s Today programme. For his edition, he and science correspondent Tom Feilden commissioned University College London professor Geraint Rees to scan the brains of politicians.
Firth and Feilden’s hypothesis was that different political leanings would be associated with structural differences in the brain. Conservative MP Alan Duncan and Labour’s Stephen Pound participated in the MRI study for the programme, and the published study represents data from 90 young adults who reported where they fell on the political spectrum. The study authors reported that while conservatism was associated with a larger right amygdala (a structure linked to emotional processing), being on the left was associated with a larger anterior cingulate (a region involved in error detection, attention, and motivation).
Should Firth have been listed as an author? Most would say no. Although it makes a great story, an “acknowledgment” would have been most appropriate in this case. Let’s face it, if Mr Darcy were ever to co-author a scientific paper with me, I’d make darn sure we spent quality time working hard to perfect the research project together.
Jordan Gaines Lewis does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.