The warts and all of open science

The case for giving power back to the individual scientist

Science has become the new self-help. In place of street-smart classics like How to Win Friends and Influence People, we reach for popular science titles. David Eagleman's The Brain: The Story of You uses neuroscience to shed light on aspects of the human condition such as creativity and depression, while Caroline Webb's How To Have a Good Day tells us how to get happy through behavioural science. Data is king and the scientific process awards the crown.

If you have ever used “The Wonder Woman Pose” (try it, it’s fun) to get psyched up before an important presentation, then you have probably read about Amy Cuddy’s scientific research on what are known as “power poses” and their impact on how confident and strong we feel; the mind-body connection translated into “fake it ‘til you make it”.

Cuddy measured this increase in feelings of confidence and power as hormonal changes. A rise in testosterone and a drop in cortisol levels are consistent with increased feelings of assertiveness and lower anxiety. The only problem with the study was its lack of reproducibility. In other words, other scientists tried to replicate the finding using the same experiment as outlined by Cuddy and they could not. These scientists could not conclude that power poses were in any way effective in making us feel more confident before that big meeting or before striding on stage to give a talk.

This is not to single out Cuddy but rather to give one of numerous examples of what is being called a reproducibility crisis in science. Astonishingly, a 2016 survey from the journal Nature found that "more than 70 per cent of researchers have tried and failed to reproduce another scientist's experiments".

What does this mean? Does it mean all science is bunkum – that they’re making it up as they go along? No. The crisis can be partially explained by the pressures scientists are under to produce positive results fast and publish them in order to make their way up the ladder of scholarly success while maintaining funding. Along the way, data can be manipulated, massaged and fall victim to “p-hacking”: trying out different analyses until a statistically significant result emerges.

"There is so much pressure to, as it's called, 'publish or perish'," says Dr Karen Matvienko-Sikar, a researcher in the area of perinatal health and well-being in the School of Public Health, University College Cork.

“There is pressure to have all of these research outputs in addition to the other responsibilities and tasks of academia. P-hacking is, I would imagine, much more common than we are aware of.”
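
To see how easily p-hacking can manufacture a finding, here is a minimal simulation sketch in Python (purely illustrative; the 20-analyses setup, sample sizes and variable names are assumptions, not drawn from any study mentioned here). It generates experiments in which there is no real effect at all, tries 20 analyses on each, and counts how often at least one clears the conventional p < 0.05 bar by chance.

    # Toy illustration of p-hacking: even with no true effect anywhere,
    # trying many analyses on each study yields "significant" results often.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_experiments = 1000  # simulated studies
    n_analyses = 20       # different outcome measures tried per study
    n_subjects = 30       # participants per group

    false_positives = 0
    for _ in range(n_experiments):
        # Both groups are drawn from the SAME distribution: no real effect.
        control = rng.normal(0.0, 1.0, (n_analyses, n_subjects))
        treatment = rng.normal(0.0, 1.0, (n_analyses, n_subjects))
        _, p_values = stats.ttest_ind(control, treatment, axis=1)
        if (p_values < 0.05).any():  # report whichever analysis "worked"
            false_positives += 1

    print(f"Studies with a 'significant' finding: {false_positives / n_experiments:.0%}")
    # Expect roughly 1 - 0.95**20, i.e. about 64 per cent.

Even though nothing real is being measured, almost two-thirds of these simulated studies would have a headline-ready result to report.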

As a way of countering this reproducibility crisis, Matvienko-Sikar is a fan of what is known as open science: transparency in publishing scientific results and accompanying data, transparency in the peer review process, and complete public access to scientific literature.

"I think it is very important for the end users to be able to see how the researchers have not only conducted their research but how they have justified and explained what they have done, especially when standing up to a potential critique of their work," says Matvienko-Sikar, who welcomes the recent announcement by the Health Research Board (HRB) to publish its research on an open online platform.

"HRB Open Research's inclusion of the source data underlying the results will facilitate secondary analysis and replication attempts, generating more high-quality evidence to inform policy, clinical care, and interventions in addition to addressing the issue of research reproducibility," says HRB programme manager Dr Patricia Clarke.

This means everyone, including members of the public, can see the submitted manuscript, the expert reviewers’ comments, and any changes made as a result. And this all happens at a pace that leaves traditional scholarly publishing in the dust.

“The standard approach to submitting and publishing research is very closed. You typically don’t get a sense of how the research goes from being a completed piece of research to the end product that you read. And it can take months for papers to get through, which is bad for science and bad for researchers,” says Matvienko-Sikar.

Another important aspect of open science is that it attempts to give power back to the individual scientist – power that traditionally lies with big publishers.

"The idea behind it is to put researchers back in control of their own research. Instead of going through the more traditional journals system to publish their findings they can submit their article after a set of checks that we carry out," explains Rebecca Lawrence, managing director of F1000.

“We can publish it in a matter of days and then it goes into formal open review from experts and it is all open and transparent.”

Science becomes up for debate – as long as it is civil and backed up with supporting evidence, she explains. “We find there’s a lot more collegial discussion and debate about the findings between the authors, referees and indeed any scientist that wants to engage in the debate.”

One downside is the potential cost, says Matvienko-Sikar: “As it currently stands quite a lot of open access publishing options can incur a substantial cost [for the researcher]. You are looking at anywhere between €800 and €1,500 sometimes for open access publishing.”

Unless you are a researcher with the HRB, who can publish on the platform for free, or you are well funded and can afford the cost of other platforms, it can be difficult to engage with open access publishing, she notes.

“But things are changing and many funding bodies are now actively encouraging open access; when you apply for funding you are encouraged to factor in these costs.”

PANEL: The curse of null findings

Just as important as tackling reproducibility is the encouragement by the HRB – and F1000, the provider of the open publishing platform – for scientists to publish what are known as null findings: results that do not support the scientist’s hypothesis. Usually, when scientists get null results, those results are not published and therefore never seen.

This is fine for a quirky experiment hypothesising that cows with names produce more milk (they do!) but not so much when testing a psychological intervention or drug treatment. These so-called negative or null results can paint a wider picture of the treatment’s success or failure. Even small and not so novel findings can help.
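
The cost of leaving null results in the drawer can be made concrete with another small Python sketch (again purely illustrative; the true effect size and sample sizes are assumptions). It simulates many small trials of a treatment with a modest genuine benefit, “publishes” only those reaching p < 0.05, and compares the published average with the truth.

    # Toy illustration of publication bias: if only "significant"
    # results are published, the literature overstates the benefit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    true_effect = 0.2              # genuine benefit, in standard-deviation units
    n_studies, n_subjects = 500, 30

    all_effects, published = [], []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_subjects)
        treatment = rng.normal(true_effect, 1.0, n_subjects)
        observed = treatment.mean() - control.mean()
        _, p = stats.ttest_ind(treatment, control)
        all_effects.append(observed)
        if p < 0.05:               # null results stay unpublished
            published.append(observed)

    print(f"True effect:                 {true_effect:.2f}")
    print(f"Average over ALL studies:    {np.mean(all_effects):.2f}")
    print(f"Average over PUBLISHED only: {np.mean(published):.2f}")
    # The published-only average comes out several times larger than the
    # truth, because the small studies that clear p < 0.05 are
    # disproportionately the lucky overestimates.

Publishing everything, null findings included, lets the full spread of results be seen – exactly the wider picture such results can paint.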

“We believe this is best achieved through the public disclosure of all results, regardless of outcome – and that a failure to do so can have adverse consequences: exposing patients to unnecessary research, engendering misinformation, and skewing priorities in health research,” says Clarke.

“Too much work is shut away in notebooks, in drawers and cupboards that would benefit other researchers to know it had been done, and save funders from spending money on duplicate efforts,” she adds.