Malgosia Pakulska is a research associate in the Shoichet lab at the University of Toronto and a science writer for Research2Reality, a blog designed to engage the public in Canadian research. Malgosia wants to educate, entertain, and show people what science is really like, one story at a time. When she is not in the lab, Malgosia can be found eating and drinking her way through Toronto and developing recipes at Smart Cookie Bakes. Follow her on Twitter (@SCBakes) and Instagram (malgosiapakulska)!
I’ve been working in research labs for longer than I like to admit, but it wasn’t until I got to grad school that I felt it: the pressure to publish. All of a sudden, science wasn’t just the fun of discovery; it was a requirement to graduate. And when your research depends on living things like cells or animals, you can pretty much say goodbye to any kind of predictability.
Research projects start with a problem, an interesting question that you want to answer (or at least, a question that you wanted to answer before the daily grind of experiments wore you down). This is followed by a hypothesis. Lately, I’ve heard a lot of people, including myself, talking about striving to get data that “fits their hypothesis” or “proves their hypothesis.” But a hypothesis is an idea, an educated guess, not an objective.
You can’t prove your hypothesis; you can only disprove it or get data that supports it. I still remember one of my lectures in undergrad where a Prof talked about this. He used the example of swans. You can hypothesize that all swans are white, but you could never prove that. In his words, you’d have to find “every damn swan” and check their colour. But all you’d need is one black swan to disprove your hypothesis.
I’m not sure why I remember this so well. Maybe it was the novelty of a Prof saying “damn” in a lecture. In any case, the lesson was clear: as scientists, we shouldn’t be trying to prove anything. We should be equally satisfied with data that supports or disproves our hypotheses, because it means we’ve learned something.
In reality, however, negative results are almost impossible to publish in good journals, which leads to a whole slew of other problems like publication bias, “p-hacking”, or even data falsification. How many times have you desperately tried analyzing your data in different ways, hoping for that p-value to fall below 0.05?
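That temptation is riskier than it feels. Here's a toy simulation (names and numbers are mine, not from any real study) built on the standard statistical fact that when the null hypothesis is true, a valid p-value is uniformly distributed between 0 and 1. If you get ten "looks" at the same data and report whichever analysis comes out significant, your chance of a false positive is far higher than 5%:

```python
import random

random.seed(0)

def significant_after(n_analyses):
    # Under the null, each analysis yields an independent p ~ Uniform(0, 1).
    # "P-hacking" means keeping the smallest p-value across all the analyses.
    return min(random.random() for _ in range(n_analyses)) < 0.05

def false_positive_rate(n_analyses, trials=100_000):
    # Fraction of null experiments that look "significant" at p < 0.05
    return sum(significant_after(n_analyses) for _ in range(trials)) / trials

print(f"1 analysis:   {false_positive_rate(1):.3f}")   # roughly 0.05
print(f"10 analyses:  {false_positive_rate(10):.3f}")  # roughly 0.40
```

The second number matches the simple arithmetic 1 − 0.95^10 ≈ 0.40: run enough analyses on noise and "significance" becomes almost a coin flip.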
But don’t lose hope. Many scientists are recognizing these problems, and journals such as the Journal of Negative Results in Biomedicine are popping up. Let’s also remember that what initially seems like a negative result could actually turn out to be something really cool and useful to share with others! So, despite the pressures and stresses, let’s have faith in the scientific method and follow our results wherever they may lead.
Our regular feature, Right Turn, appears every Friday and we invite you to submit your own blog to info(at)ccrm.ca. We encourage you to be creative and use the right (!) side of your brain. We dare you to make us laugh! Right Turn features cartoons, photos, videos and other content to amuse, educate and encourage discussion.
As always, we welcome your feedback in the comment section.