Two recent cases of scientific fraud have raised questions about the validity of research studies published in various medical journals. Claire O'Connell reports
In recent weeks, back-to-back revelations of scientific fraud have stunned the world of biomedical research and have raised questions about how studies are vetted before being published in scientific journals.
At the end of 2005, South Korean stem cell scientist Woo Suk Hwang's career unravelled as it emerged that his lab had faked data underpinning their acclaimed advances in therapeutic stem cell technology. And, earlier this month, the story broke that Norwegian researcher Jon Sudbø had invented details about patients in a cancer study.
The rumblings started after Hwang's group published landmark papers in the prestigious journal Science. Their work centred on stem cells, which have the potential to develop into many different cell types and so carry enormous therapeutic possibilities.
In 2004, Hwang and colleagues claimed to have cloned stem cells from a human embryo, and in May 2005 the group reported they had created patient-specific stem cell lines, an advance that held promise for stem cell therapy because personalised stem cells could reduce the chance of rejection.
However, whistleblowers raised suspicions about the ethics behind the group's research and, in November last year, Hwang admitted that some eggs had been donated by junior researchers, a practice considered unethical.
Further investigations found that the stem cells described in the 2005 paper did not genetically match the patients they were reported to be derived from.
The breakthrough was a fake.
The journal retracted the paper and Hwang, a national hero before the scandal broke, could now face criminal charges in his home country.
Earlier this month another prestigious journal, The Lancet, ran an editorial noting that patients would suffer from the fallout of the South Korean scandal and called for the integrity of stem cell research to be better protected. Less than a week later, The Lancet found itself embroiled in a fresh case of scientific fraud when it emerged that Dr Jon Sudbø had made up details of more than 900 patients in a study published by the journal last October.
Sudbø's paper had suggested that using certain anti-inflammatory drugs in the long term may lessen the risk of mouth cancer. But when a fellow Norwegian read the journal and questioned the source of the patient data, Sudbø reportedly came clean about fabricating details in that study and in two other published papers.
Scientific fraud is always a concern, but it has particular ramifications in the area of biomedicine, where misleading information can potentially result in patients not getting appropriate treatments. So what safeguards are in place?
For scientific journals, the standard and widely accepted method of quality control is peer review. When authors submit a study for publication, the editor first decides whether it suits the journal. If it does, the editor sends it to selected experts in the field (peers) who assess the work for its originality, validity and significance.
Informed by these reviews, the editor decides whether to publish the study, request further work on it or reject it.
"Peer review is an honour system," says Prof David Bouchier-Hayes, editor-in-chief at the Irish Journal of Medical Science (IJMS), which publishes around 40 peer-reviewed original papers each year.
To help safeguard the validity of the data, the IJMS asks authors to confirm that the work is their own and has not already been submitted elsewhere. Peer reviewers then evaluate the content and make recommendations on whether the paper should be published.
Experienced editorial eyes also aid the review process. Results that are too neat and tidy can raise suspicion, according to Dr John Murphy, editor of the Irish Medical Journal, which publishes around 60 peer-reviewed papers each year.
He notes that random variation in biology means you would expect a range of results. So, if a study presents results lining up neatly, it could indicate a problem.
Murphy says it is important to be vigilant and to look for these and other clues, such as small institutions producing data on large numbers of people. "Everybody wants to stamp out fraud, it's a serious problem," he says.
Prof Brian Lawlor, editor of the Irish Journal of Psychological Medicine, says that while peer review has its limitations, it has stood the test of time and can identify problematic issues before publication.
"We use peer review to maintain a standard, so that poorly conducted research is not published," he says. "But you do depend on the goodwill of reviewers to work carefully and in a timely fashion."
The Lancet is one of three major journals taking part in an extensive study of the peer-review system that is tracking over 1,000 papers as they go through the vetting process. In addition, some journals are adopting technology to check whether images submitted by authors have been doctored.
But perhaps the public expects too much of peer review, which is based on trust and is not specifically designed as a fraud-detection tool. Bouchier-Hayes says several other checks and balances operate before a study even reaches peer review.
For instance, in Ireland clinical trials are regulated by the Irish Medicines Board. And emerging data from other studies are often presented at conferences where fellow scientists can raise issues before the information is submitted to a journal.
However, if someone is determined to commit fraud, such as plagiarising information or cooking the data, Bouchier-Hayes says it can be difficult for the peer review system to detect it. That's why data remain under tough scrutiny even after publication. Scientists verify important studies by repeating experiments, and through journals and conferences they can openly question published findings.
Indeed, in the cases of both Hwang and Sudbø, the false data were spotted by the wider scientific community after the studies had been published, demonstrating the self-regulating net that extends beyond a panel of peers and editors. Such measures do work; otherwise Hwang and Sudbø would not have been caught.
And it is in everyone's interest to detect science fraud, says Bouchier-Hayes. "We don't have absolute safeguards and essentially all people in the biomedical community must maintain vigilance."
Aftermath of stem cell scandal
The high-profile case of Woo Suk Hwang and colleagues publishing faked details of a major breakthrough in therapeutic cloning is casting a shadow over the field of stem cell research, according to Prof Frank Barry, scientific director at the Regenerative Medicine Institute (Remedi) in Galway. "It's having a damaging effect," he says of the scandal. "This is bad news for the field."
He says that some stem cell research is showing indications of positive results, but there is a long way to go. The suggestion that some workers are not practising rigorous science throws up yet another hurdle on an already difficult path. "It forces us to take a step backwards and justify even more what we are trying to do," he says.
He adds that there is an enormous pressure to publish and get funding, and this is what may drive some rare and unscrupulous researchers to falsify data.