Korean Journal of Anesthesiology




Can we trust the results of scientific papers?

Sangseok Lee
Most authors consider randomized controlled trials (RCTs) and their systematic reviews as the most reliable source of information for medical decision-making and the choice of the optimal intervention. However, there is good evidence that low-quality studies are biased in their estimates of treatment effects [1]. Thus, prior to their publication, the quality of most systematic reviews and RCTs needs to be assessed, which includes evaluation of the bias inherent in the study.
In everyday English, bias is defined as “an inclination or prejudice for or against one person or group, especially in a way considered to be unfair.” In a statistical context, the definition changes to “a systematic distortion of a statistical result due to a factor not allowed for in its derivation.” This systematic distortion from the true value can cause either under- or over-estimation of the effects of treatment. Because readers are usually more interested in which of several possible interventions is the most effective, rather than which one is ineffective, biases in clinical trials often lead to an exaggeration of the magnitude or importance of the effects of the reportedly best treatment. Clearly, bias in clinical research is, in most cases, not due to the malicious intent of researchers or sponsors. Rather, researchers are often biased in unintended ways that escape their own notice or that of the RCT's reviewers.
Why is bias a problem in RCTs? The true effect of any clinical treatment is initially unknown. The results from the trial population are extrapolated to the target population as a whole, and in a properly designed RCT, every effort is made to predict, detect, quantify, and control for possible sources of bias. We cannot know whether the outcome of a particular study is biased, because doing so would require determining whether that outcome systematically deviates from the “truth,” which is itself unknown.
How can we evaluate the bias of RCTs? Assessment of the risk of bias is a recently introduced concept referring to quality assessment of the literature. It measures the level of the research, mainly its methodological quality and degree of internal validity. A strict risk-of-bias assessment of the literature is critical, because bias is a systematic error, i.e., a deviation of results or estimates from the true values, and may lead to under- or over-estimation of the effect of an intervention.
Recently, several cases of research fraud have been publicized in the media, including those of Hwang [2] and Fujii [3], both of whom fabricated their raw data. In another case, Chinese authors used fake peer reviewers, which resulted in the retraction of 107 papers, including many published in major journals [4]. Although RCTs that inadequately control for bias are a very different kind of problem, the implication, that biased conclusions may distort clinical decisions or interventions, should not be overlooked.
What is the quality of published RCT articles? This can only be ascertained through verification by the journals in which they are published. While there have been few, if any, such efforts, the review article by Kim et al. [5] in this issue of the Korean Journal of Anesthesiology (KJA) is very encouraging. They evaluated the risk of bias (ROB) of the quasi-RCTs and RCTs published in the KJA between 2010 and 2016. In RCTs, there are generally six kinds of bias (selection, performance, detection, attrition, reporting, and other biases), and they are evaluated by assigning a low, unclear, or high ROB to several domains (random sequence generation, allocation concealment, blinding of participants/personnel/outcome assessment, incomplete outcome data, selective reporting, and other biases). The authors concluded that the RCTs published in the KJA have limitations in allocation concealment after random sequence generation, blinding of participants and personnel, and full reporting of the results. The work of Kim et al. [5] is the first objective assessment of the various biases that underpin the reliability of clinical trials published in the KJA. Nonetheless, the authors provided a hopeful message: although bias was identified in many of the KJA papers, the constant efforts of journal editors and a change in the awareness of researchers will lead to better-quality RCTs.
Xu et al. [6] emphasized the adoption of the Consolidated Standards of Reporting Trials (CONSORT) statement to improve the reporting quality of RCTs, based on their study evaluating the five leading Chinese medical journals. The CONSORT statement is aimed at helping authors improve reporting by using a checklist and flow diagram. Moher et al. [7] found that the quality of reports of RCTs published in the BMJ, JAMA, Lancet, and New England Journal of Medicine improved when the authors followed the CONSORT guidelines.
A prerequisite for being a good journal is publishing good papers. The KJA is now taking a cautious first step toward publishing high-quality RCTs. Including the CONSORT statement in every journal's “Instructions for Authors” should not be delayed, as it is a prerequisite for obtaining good RCT papers.




1. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011; 343: d5928. PMID: 22008217.
2. Kennedy D. Editorial retraction. Science 2006; 311: 335.
3. Kranke P, Apfel CC, Roewer N, Fujii Y. Reported data on granisetron and postoperative nausea and vomiting by Fujii et al. are incredibly nice! Anesth Analg 2000; 90: 1004-1007.
4. Stigbrand T. Retraction Note to multiple articles in Tumor Biology. Tumour Biol 2017; [Epub ahead of print].
5. Kim JH, Kim TK, In J, Lee DK, Lee S, Kang H. Assessment of risk of bias in quasi-randomized controlled trials and randomized controlled trials reported in the Korean Journal of Anesthesiology between 2010 and 2016. Korean J Anesthesiol 2017; 70: 511-519.
6. Xu L, Li J, Zhang M, Ai C, Wang L. Chinese authors do need CONSORT: reporting quality assessment for five leading Chinese medical journals. Contemp Clin Trials 2008; 29: 727-731. PMID: 18579449.
7. Moher D, Jones A, Lepage L. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA 2001; 285: 1992-1995. PMID: 11308436.
