This post originally appeared on The Black Hole blog and is reprinted with permission from the author and University Affairs.
I hate to admit this, but I find an incredible number of scientific papers really boring. It seems that, more and more, research papers use the same sets of sexy and expensive tools without actually answering the question they set out to explore, overloading their readers with “big data” instead. This appears to be the primary formula for getting published in big journals – and the nasty part of that whole business is that publishing big is controlled by an ever-diminishing fraction of the world’s scientists.
Remember when you were in high school, and there were popular ways to dress and popular places to be? It was difficult for some kids to afford to keep up, while other non-conformists simply opted out of “being popular”. Years later, we look back fondly on the people who didn’t follow along – many of them had a much better sense of self and of their own preferences. No matter how hard the popular groups or trends pushed, some people just didn’t buckle, and they emerged many years later as cool people with novel ideas.
My fear is that the academy is subject to the same primitive bullying techniques resulting in social exclusion as a consequence of breaking rank. The system (unknowingly?) props up the careers of a cadre of researchers who are just really good at following along. The really sad corollary to this in the age of tight funding is that we lose the non-conformist kids who have the creative ideas of today and tomorrow. Surely universities are the place that should foster new and alternative ideas and approaches and be immune to such behaviour. Academic bullying is a problem and it’s squeezing the creativity and lifeblood out of science.
Let me explain how I see this operating. The three things that matter most to a scientist’s career progression are publications, grants, and personal reputation (e.g., the ability to attract the best PhDs and postdocs). All three are determined by a frighteningly small number of people who have the power to socially exclude for their own benefit (e.g., keep an idea out of the mainstream, promote the careers of the people they like, etc, etc). While they don’t necessarily do this, the power is theirs to wield.
How might this manifest itself? One example is that the experiments requested by reviewers are often expensive and technology-laden, only really performable at the top-flight institutions in the world (kinda like that new watch that everyone “must have”). Dan Tenen, a professor at the Harvard Stem Cell Institute, jokingly refers to these as “Figure 5 – the experiments that the reviewer requested and never mean anything, but had to be done to get published”. While Dan’s lab is in the position to do the experiments and poke fun at the process, this is sadly not the case for the vast majority of research labs. Not only does this process slow down science, but it also forces non-privileged scientists to collaborate with the top dogs, thus reinforcing the circle. If the experiment addresses a fundamental flaw in the paper, fine – but I worry that this is not often the case.
Moreover, granting and funding agencies have “go to” people for peer review, and one of the worst things they’ve done recently is making these panels public before applications are submitted. The “followers” will study these panels, look at what the members have published and how they think, and write their applications to meet those criteria. Some people call this good strategic planning; I call it an unfortunate side effect of the need to survive. Again, we risk squeezing out the good novel ideas.
The challenge going forward must therefore be to create a scientific research environment where the pressure to publish falls a distant second to new idea generation and the development of human capital. At this juncture, though, careers depend on papers, so scientists will do what it takes to get published… sadly, this all too often means toeing the party line and not really exploring new ideas.
There are some interesting models out there for how to tackle this and I’ll be exploring those in future articles – for now though, ask yourself how representative our current system is when we often rely on the judgment of two to three experts chosen by a single journal editor or funding agency…