Investigating the story behind the discovery, Hartman realized just how many scientists are involved in modern experimental physics: more than 3,000 scientists from the ATLAS team are listed as authors on the publication reporting the detection of the Higgs boson, and every experiment involves the active participation of hundreds of scientists.
Considering that organizing scientists has been likened to herding cats, coordinating such large teams is an achievement that comes close to outshining the discovery itself. Important ATLAS publications list authors in alphabetical order rather than in order of importance - the latter being common practice in biomedical research, where author order is a constant source of aggravation.
*A subset of the authors of a recent ATLAS publication*
The researchers at CERN also take great care to avoid known psychological pitfalls of experimental research. "We don't work with real data until the very last step," explains Kerstin Tackmann, a member of the Higgs to Gamma Gamma analysis group. "Once we look at the real data," says Tackmann, "we're not allowed to change the analysis anymore." This precaution is similar to the blinded analysis of clinical trials in biomedical research, considered an essential tool in drug development.
Biomedical researchers appear to be less attuned to the risks of being misled by their own data; MacLeod et al. recently estimated that "85% of research resources are wasted".
In my own experience, most researchers are careful and strive to include the necessary controls and safeguards. Yet, with most experiments yielding negative results and the competition for funding and positions increasing, we often overestimate the significance of exciting (though perhaps unlikely) results and tend to disregard contradictory data. While we easily spot overconfidence in our colleagues, many scientists will readily state that "I know the effect is there, I just don't have the data to show it yet".
A few simple procedures can help to remove some of the nagging doubt about whether we read too much into our observations. Tried and tested methods include randomization (e.g. assigning animals to cages randomly rather than on a "first come, first served" basis) and blinding, i.e. ensuring that data collection is not biased by the experimenter's expectations (e.g. by scoring microscopy images without knowing whether the samples were treated or not).
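Randomized group assignment is easy to automate. The following minimal sketch (with hypothetical animal IDs and group names, not taken from any real study) shuffles the animals and deals them round-robin into treatment groups:

```python
import random

def randomize_groups(animal_ids, groups, seed=None):
    """Shuffle animals, then deal them round-robin into treatment groups."""
    rng = random.Random(seed)
    shuffled = list(animal_ids)
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, animal in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(animal)
    return assignment

# Hypothetical example: eight mice split into two groups
animals = [f"mouse_{i}" for i in range(1, 9)]
groups = randomize_groups(animals, ["treated", "control"], seed=42)
```

Fixing the seed makes the assignment reproducible for the lab notebook while still removing the experimenter's discretion from the allocation.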
> Statistics [are used] in the same way that a drunk uses lamp-posts—for support rather than illumination. (Andrew Lang)
Yet, I still vividly remember the blank look of disbelief I got from a colleague when I helpfully offered to replace the names of her digital microscopy images (treated1.tif, treated2.tif, control1.tif, control2.tif, etc.) with random labels before she counted the number of cells surviving a drug treatment. She clearly didn't think that her expectation of a significant treatment effect could bias the results, and took no comfort in reducing that risk through 'blinding' (although I did offer to provide the original labels afterward - for free).
During the experimental research process, statistical tools can provide useful feedback by quantifying how confident we should be in our results. Yet, in my experience, these tools are most often applied too late, e.g. after the data has been collected, when statistical tests have to be added as an afterthought to pacify a reviewer. Facing a last-minute judgment of their body of work by a t-test, many researchers are hard pressed to embrace statistics as a useful addition to their toolkit.
Yet, as new technologies - from single-cell sequencing to CyTOF - enable biomedical researchers to collect ever more data, relying on intuition alone will increasingly mislead us. With the blessing of more data comes the "curse of dimensionality", requiring biologists (like myself) to learn new tricks and rekindle their appreciation and use of statistics.
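One facet of the curse of dimensionality can be seen in a toy simulation (my own illustration, not from the original discussion): in high-dimensional spaces, distances between random points concentrate, so "nearest" and "farthest" neighbours become nearly indistinguishable and distance-based intuition breaks down.

```python
import math
import random

def distance_spread(dim, n_points=200, seed=0):
    """Relative spread, (max - min) / min, of the distances from the
    origin to random points in the unit cube [0, 1]^dim."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n_points):
        point = [rng.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(x * x for x in point)))
    return (max(dists) - min(dists)) / min(dists)

# Distances spread widely in 2 dimensions but concentrate in 1000
low = distance_spread(2)
high = distance_spread(1000)
```

In two dimensions the spread is large; in a thousand dimensions all points sit at nearly the same distance from the origin.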