Korean J Anesthesiol
Oh, Shin, Park, and Chung: Reporting and methodologic evaluation of meta-analyses published in the anesthesia literature according to AMSTAR and PRISMA checklists: a preliminary study

Abstract

Background

There have been few recent reports on the methodological quality of meta-analysis, despite the enormous number of studies using meta-analytic techniques in the field of anesthesia. The purpose of this study was to evaluate the quality of meta-analyses and systematic reviews according to the Assessment of Multiple Systematic Reviews (AMSTAR) and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines in the anesthesia literature.

Methods

A search was conducted to identify all meta-analyses published in the British Journal of Anaesthesia (BJA), Anaesthesia, and Korean Journal of Anesthesiology (KJA) between Jan. 01, 2004 and Nov. 30, 2016. We aimed to apply the AMSTAR and PRISMA checklists to all published meta-analyses.

Results

We identified 121 meta-analyses in the anesthesia literature from January 2004 through the end of November 2016 (BJA: 75, Anaesthesia: 43, KJA: 3). Both the number of meta-analyses published and the percentage of ‘Yes’ responses were greater for articles published in 2010 or later than for those published in 2009 or earlier (P = 0.014 for Anaesthesia). In the anesthesia literature as a whole, participation of statisticians as authors significantly improved the average scores of PRISMA items (P = 0.004), especially in the BJA (P = 0.003).

Conclusions

Although there is little variability in the reporting and methodology of meta-analyses in the anesthesia literature, a significant improvement in reporting quality, assessed with the PRISMA checklist, was observed in Anaesthesia. Participation of a statistician as an author improved the reporting quality of meta-analyses.

Introduction

Meta-analysis is defined as ‘an objective statistical analysis which integrates the diverse results of several independent clinical trials with similar concerns’ [1]. A meta-analysis proceeds through searching for and selecting studies, retrieving and coding data, accumulating comparable findings, analyzing the distribution of results, and reporting the results [2]. Meta-analysis can estimate not only the direction but also the magnitude of the effect size of a result. With meta-analysis, clinicians can obtain evidence-based conclusions from studies examined with various methods [3]. Borenstein et al. [3] reported that meta-analysis, as a form of post-hoc analysis, can reveal trends in the existing literature and suggest future research directions.
Numerous studies have evaluated the quality of meta-analyses and systematic reviews following the release of the Assessment of Multiple Systematic Reviews (AMSTAR) and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines [4,5,6,7]. Many researchers have studied and reported on the quality of meta-analyses in the fields of pharmacology and medicine. Mulrow [4] reported that none of the 50 meta-analyses published in medical journals between 1986 and 1987 completed the checklists. Sacks et al. [5] examined the validity of 83 meta-analyses and reported that the overall quality of the studies was low, and a follow-up study in 1996 revealed no improvement in the quality of meta-analyses [8]. However, there have been few studies reporting on the methodological quality of meta-analyses and systematic reviews, despite the enormous number of studies using meta-analytic techniques in the field of anesthesia. There are limitations in assessing the quality of meta-analyses and systematic reviews, and it is difficult to distinguish good- from poor-quality reviews because of the wide range of quality. There has been a continued proliferation of scales and checklists for assessing the quality of meta-analyses and systematic reviews [9,10,11]. Well-designed guidelines will assist authors in assessing the methodological quality and integrating the results of their meta-analyses and systematic reviews. The practice of evidence-based medicine, which allows clinicians to make the best decisions, will be further established by high-quality evidence from meta-analyses and systematic reviews, and such assessments should help researchers improve the quality of their studies.
The purpose of this study was to evaluate the reporting and methodologic quality of meta-analyses and systematic reviews in the anesthesia literature according to the AMSTAR and PRISMA guidelines, as well as the influence of the participation of statisticians as authors. We assessed the rate of ‘Yes’ replies to the various items covering the title, abstract, introduction, materials and methods, results, and discussion sections of the published meta-analyses. We aimed to provide baseline data for improving the quality of future studies.

Materials and Methods

A search was conducted to identify all meta-analyses and systematic reviews published in the British Journal of Anaesthesia (BJA), Anaesthesia, and Korean Journal of Anesthesiology (KJA) between Jan. 01, 2004 and Nov. 30, 2016. The words ‘systematic review,’ ‘meta-analysis,’ ‘meta analysis,’ and ‘meta analyses’ were searched in the title, abstract, or key words. The search was conducted on MEDLINE, Scopus, and KoreaMed, and the results were cross-referenced with searches performed on the websites of the respective journals.
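For illustration only, a journal-restricted keyword search of this kind could be scripted against PubMed's E-utilities roughly as sketched below. This is a hedged example of one possible approach, not the authors' actual search procedure; the query strings and date limits are assumptions taken from the text, and the Scopus, KoreaMed, and journal-website searches are not shown.

```python
# Illustrative sketch: counting candidate records per journal via PubMed E-utilities.
# Not the authors' actual search; query terms and dates are assumptions from the text.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

terms = '"systematic review" OR "meta-analysis" OR "meta analysis" OR "meta analyses"'
journals = [
    '"British journal of anaesthesia"[Journal]',
    '"Anaesthesia"[Journal]',
    '"Korean journal of anesthesiology"[Journal]',
]

for journal in journals:
    params = {
        "db": "pubmed",
        "term": f"({terms}) AND {journal}",
        "datetype": "pdat",        # restrict by publication date
        "mindate": "2004/01/01",
        "maxdate": "2016/11/30",
        "retmode": "json",
        "retmax": 0,               # only the hit count is needed here
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    count = resp.json()["esearchresult"]["count"]
    print(f"{journal}: {count} candidate records")
```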
We attempted to apply the AMSTAR and PRISMA checklists to all meta-analyses selected according to our criteria, and a score was assigned to each paper. AMSTAR is an eleven-item checklist used to assess the methodological quality of meta-analyses and systematic reviews (Appendix 1). The creators of AMSTAR proposed three answer options for each item: ‘yes,’ ‘no,’ and ‘not applicable.’ To obtain a ‘yes’ for an item, the meta-analysis must contain all the major components within that item. AMSTAR can be scored both on the basis of individual components and as a checklist by summing the scores (overall score). PRISMA is a reporting checklist with 27 entries divided into 7 sections (Appendix 2). We recorded one of three answers for each item: ‘yes,’ ‘no,’ or ‘not applicable.’ Some of the items in the checklist contain multiple components; if at least half of them were met, the item received a ‘yes.’ If less than half of the components were met or a key component was lacking, the item was given a ‘no.’
The AMSTAR and PRISMA checklists were completed by checking the descriptions of items and sub-items in the title, abstract, methods, results, and discussion. If descriptions of items and sub-items were incomplete, we coded partially described items as ‘1’ or ‘0’ depending on whether the meaning of the item (sub-item) was described correctly or not. The reviewers agreed on the meaning of each item (sub-item) before analyzing the entire set of studies. Inter-rater reliability was assessed after an independent researcher separately analyzed 10% of the studies, and this procedure was repeated until a reliability of 85% was obtained for each item (sub-item). We also investigated the influence of the participation of statisticians on the quality of the meta-analyses; the participation of statisticians was confirmed by checking the authors' affiliations.
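As a rough illustration of the scoring rules just described (three answer options, a ‘yes’ only when at least half of an item's components are met, and a percentage score across items), the sketch below shows how such scoring might be implemented. The function names and example answers are hypothetical and do not reproduce the authors' actual coding sheet.

```python
# Illustrative scoring sketch; names and example data are hypothetical.

def score_item(components_met: int, components_total: int, key_component_missing: bool = False) -> str:
    """'yes' if at least half of the item's components are met and no key component is missing."""
    if key_component_missing or components_met * 2 < components_total:
        return "no"
    return "yes"

def overall_percentage(item_answers: list) -> float:
    """Percentage of applicable items answered 'yes' (overall checklist score)."""
    # Assumption for this sketch: 'not applicable' items are excluded from the denominator.
    applicable = [a for a in item_answers if a != "not applicable"]
    if not applicable:
        return 0.0
    return 100.0 * sum(a == "yes" for a in applicable) / len(applicable)

# Example: one hypothetical paper assessed on an 11-item (AMSTAR-style) checklist.
answers = ["yes", "yes", "yes", "no", "yes", "yes",
           "yes", "yes", "yes", "no", "not applicable"]
print(f"Overall 'Yes' percentage: {overall_percentage(answers):.1f}%")
```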
The statistical analysis was carried out using Jandel SigmaStat® (version 4.0, Systat Software Inc., San Jose, CA, USA) and SPSS version 24.0 (IBM Corp., Armonk, NY, USA) for Windows. The normality of the data was tested, and the data were then analyzed using the Mann-Whitney rank sum test and the Wilcoxon signed rank test. The significance level was P < 0.05 in all cases.
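The group comparisons described above could equally be run in open-source software; the sketch below uses SciPy with made-up scores purely to show where the Mann-Whitney rank sum test (independent groups) and the Wilcoxon signed rank test (paired data) would apply.

```python
# Illustrative use of the two nonparametric tests named in the Methods;
# the score values below are hypothetical placeholders, not the study's data.
from scipy import stats

# Independent groups of unequal size (e.g., per-paper 'Yes' percentages
# before vs. after a guideline's release): Mann-Whitney rank sum test.
before = [60.0, 55.5, 72.7, 48.1, 81.5]
after = [85.2, 77.8, 92.6, 66.7, 88.9, 74.1]
u_stat, p_mw = stats.mannwhitneyu(before, after, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_mw:.3f}")

# Paired observations (e.g., the same papers scored on two occasions):
# Wilcoxon signed rank test.
first_pass = [60.0, 55.5, 72.7, 48.1, 81.5]
second_pass = [63.0, 59.3, 70.4, 51.9, 85.2]
w_stat, p_w = stats.wilcoxon(first_pass, second_pass)
print(f"Wilcoxon W = {w_stat:.1f}, P = {p_w:.3f}")

ALPHA = 0.05  # significance level used in the study
```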

Results

Using our inclusion criteria, we identified a total of 121 meta-analyses published in the BJA, Anaesthesia, and KJA from January 2004 through the end of November 2016: the BJA published 75 meta-analyses, and Anaesthesia and the KJA published 43 and 3 meta-analyses, respectively (Table 1). The most common article topic in all journals was drugs. More than half of the meta-analyses were generated from data retrieved from randomized controlled trials (RCTs).
AMSTAR and PRISMA checklist items and sub-items are described in Appendixes 1 and 2. The results of the AMSTAR and PRISMA checklists for articles published in the BJA, Anaesthesia, and KJA from January 2004 through the end of November 2016 are shown in Figs. 1 and 2, and the percentages of ‘Yes’ responses to the items in the AMSTAR and PRISMA checklists are shown in Tables 1 and 2, respectively. The percentage of ‘Yes’ replies to the 27 items (sub-items) of the PRISMA guidelines in meta-analyses published in 2010 or later was significantly higher than that of studies published in 2009 or earlier (P = 0.014 for Anaesthesia, Table 2).
For the AMSTAR checklist, over 50% of the papers received a positive response for the following items: Q2, Q3, and Q5–Q11 in the BJA; Q1–Q3 and Q5–Q11 in Anaesthesia; and Q1, Q3, and Q5–Q10 in the KJA (Fig. 1, Table 3). Most papers clearly stated their research question and search criteria (AMSTAR Q3, Q5–Q10) (Table 3). However, very few of the articles published in the BJA, Anaesthesia, and KJA reported the status of publication (i.e., grey literature) as an inclusion criterion (Q4), and conflict of interest (Q11) was not reported in the KJA (Table 3). The likelihood of publication bias was assessed in 62.4% of the papers in the BJA, 62.8% in Anaesthesia, and 100% in the KJA (Table 3). Conflict of interest was described in 54.5% of the papers in the BJA and 53.5% in Anaesthesia (Table 3).
Overall fulfillment rates for the PRISMA checklist in the BJA, Anaesthesia, and KJA were 85.1%, 81.3%, and 87.7%, respectively (Fig. 2, Table 4). For the PRISMA checklist, over 50% of the papers received a positive response for the following items: Q1–Q11, Q13, Q14, Q16–Q18, Q20, Q21, and Q23–Q27 in the BJA; Q1–Q11, Q13, Q14, Q16–Q18, Q20, Q21, and Q24–Q26 in Anaesthesia; and Q1–Q11, Q13–Q18, and Q20–Q26 in the KJA (Fig. 2, Table 4). Most papers clearly stated their research question and inclusion criteria (PRISMA Q1–Q4, Q6–Q11, Q13, Q14, Q16–Q18, Q20, Q21, Q24–Q26) (Table 4). However, the risk of bias in individual studies (Q12) was poorly reported in the articles published in the BJA (44.0%), Anaesthesia (39.5%), and KJA (33.3%) (Table 4). Complete responses were recorded in Anaesthesia and the KJA for the PRISMA title item, which requires the use of the term ‘meta-analysis.’ Analysis of the title, abstract, and introduction showed that almost all papers (over 90%) identified the report as a systematic review, a meta-analysis, or both and provided a structured summary in the abstract. A protocol was registered and accessible for 66.7%, 51.2%, and 66.7% of the papers in the BJA, Anaesthesia, and KJA, respectively (Table 4). All of the included articles provided an explicit statement of the questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS). The methods of additional analyses (Q16), such as subgroup analysis, sensitivity analysis, and meta-regression, were described in 76.0%, 58.1%, and 66.7% of the papers in the BJA, Anaesthesia, and KJA, respectively, and the results of additional analyses (Q23) were described in 77.3%, 48.8%, and 100%, respectively (Table 4). Regarding the Discussion sections, almost all papers provided good descriptions of the summary of evidence (Q24), limitations (Q25), and conclusions of the study (Q26) (Table 4). Meta-analyses in Anaesthesia did not describe their funding sources (Q27) well, and funding was not described in the KJA at all (Table 4).
The κ coefficient was used to evaluate inter-reviewer agreement. The κ value for inter-rater agreement was 0.86 (95% CI, 0.78 to 0.93), demonstrating high reliability between reviewers, and there were no unresolved inter-rater disagreements during the review process. Participation of statisticians significantly improved reporting quality across all the meta-analyses in the included anesthesia literature (P = 0.004, Table 5). In the BJA, participation of statisticians as authors significantly improved the average score of the PRISMA items (sub-items) assessed in the anesthesia meta-analysis literature published from Jan. 2004 to Nov. 2016 (P = 0.003, Table 5).
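For readers unfamiliar with the statistic, the sketch below shows how Cohen's κ can be computed for two raters' binary (‘yes’/‘no’) checklist answers. The ratings shown are hypothetical; this is an illustrative calculation, not the study's actual reliability analysis (which also reported a confidence interval).

```python
# Illustrative Cohen's kappa for two raters' categorical answers; data are hypothetical.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

r1 = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
r2 = ["yes", "yes", "no", "yes", "yes", "yes", "yes", "no", "yes", "yes"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")
```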

Discussion

Meta-analysis is an effective statistical method for reaching an overall conclusion from trials addressing similar questions. It is also useful when there is no time to collect raw data, and it saves costs and labor. Meta-analysis has therefore become popular in the medical literature. Remarkable advances in evidence-based medicine have driven the development of checklists for the reporting and methodology of meta-analyses and systematic reviews. As different research groups have refined the methodology, the need for modern statistical analysis to support growing academic output has contributed to broad agreement on how meta-analyses should be conducted and reported.
We investigated all the meta-analyses published in the anesthesia literature (BJA, Anaesthesia, and KJA) from Jan. 2004 to Nov. 2016. Initially, we intended to evaluate and compare the reporting and methodologic quality of the meta-analyses and systematic reviews published in the KJA with those in the major anesthesia journals. However, the original plan was modified to compare the quality of the meta-analyses and systematic reviews published in the BJA, Anaesthesia, and KJA as a preliminary study, because the major anesthesia journals contained an enormous amount of information compared with the KJA.
Regarding the title and abstract, almost all of the studies clearly stated ‘meta-analysis’ or ‘systematic review,’ with the exception of 6 papers (8.0%) in the BJA. The abstracts were well organized, but the type of analytic model used was often not stated clearly. The choice of analysis model depends on the nature of the data. Recently, the advantages of the random-effects model have been described and recommended by several investigators, but the model has not been widely adopted because of its computational complexity and limited recognition [12]. Although the studies were selected for this review according to our criteria, ‘meta-analysis’ or ‘systematic review’ was not clearly stated in the title of several studies. There was one meta-regression [13], and there were reviews whose results were not pooled statistically because of poor standardization and heterogeneity [14,15]. A systematic review is a review of a formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyze data from the included studies. Meta-analysis, in contrast, is a statistical method that may or may not be used to analyze and summarize the results of the included studies; it refers to the use of statistical techniques in a systematic review to integrate the results of the included studies [16]. Precise estimation of the direction and magnitude of the effect size of an intervention is the most characteristic advantage of meta-analysis. Therefore, it is advisable to consider other terminology, such as ‘qualitative meta-analysis,’ instead of ‘meta-analysis’ or ‘systematic review,’ for studies in which the results were not analyzed using statistical techniques. Readers obtain critical, up-to-date information about the conduct and results of a meta-analysis from the abstract. Therefore, structured information regarding the type of model used in the analysis, the origin of the data, and the quality of the included studies (Q15, Q19, and Q22 of the PRISMA items) should be described in the abstract, although such detailed requirements conflict with abstract word-count limits.
The main topic stated in the introduction may play a critical role in searching for and selecting relevant studies. The items covering eligibility criteria (Q6), information sources (Q7), search strategy (Q8), collection of grey literature (Q9), and the processes for obtaining and confirming data from the included studies (Q13, Q14) were generally described well in the Methods sections. However, the risk of bias (ROB) items (Q12, Q15, Q19) were not well complied with in the published meta-analyses; that is, less than 50% of the requirements for these items were met. The low response rate for the ROB items reflects poor description in the meta-analyses; however, future studies should investigate whether the low response rate was instead due to poor examination by the coders.
The methods may affect the study results, and the method items in the guidelines are important for establishing the validity and reliability of those results. It is crucial to describe whether the authors attempted a comprehensive search of the available electronic databases. The quality of a meta-analysis is determined by the quality of the included studies. In the present analysis, certain items (Q5, Q12, Q15, Q19, Q22, and Q27) were not described well in the BJA and Anaesthesia, and items Q12, Q19, and Q27 were poorly described in the KJA. The search strategy and the process for selecting studies were well described in the included studies. The coding sheet for each study is equivalent to one completed survey questionnaire: obtaining data from a questionnaire is a technical process, whereas coding an individual RCT in a meta-analysis means retrieving quantitative results from each study. More than two experts in meta-analysis are necessary for data extraction from reports. A detailed description of the qualifications of the experts extracting data, together with a report of the confidence in that extraction, will increase the validity and reliability of the results of a meta-analysis. A description of the third author who resolves discrepancies between the two coders is necessary to increase the reliability of the analysis process, and a detailed explanation of the management of missing data will increase the objectivity and validity of the meta-analysis.
Several studies neither reported nor calculated the mean effect size; these studies overlooked the basic purpose of meta-analysis. The main objective of a meta-analysis is to quantitatively summarize the effect sizes of the results, which distinguishes a meta-analysis from an individual study. To obtain the mean effect size, the authors should consider sample size: a large sample size can affect the determination of the effect size and increase the validity of the study. It is advisable to report an effect size with a confidence interval rather than as a point estimate. To estimate the mean effect size, it is also advisable to evaluate the homogeneity of the individual studies. It should be determined whether the results of the studies were derived from a homogeneous population, and it should be made clear that a fixed-effect model is appropriate for homogeneous data and a random-effects model for heterogeneous data [17,18]. In general, the effect size will be overestimated when a fixed-effect model is used to analyze heterogeneous data [19,20]. Almost all of the studies provided good information regarding selection of the model type, description of the analytic model, and evaluation of homogeneity. To explain the process of a meta-analysis objectively, the investigators should clearly report the inclusion and exclusion criteria; this information contributes to the validity of the study, and the reliability of the study increases when a subsequent researcher can repeat the same meta-analysis and obtain a similar result. If the authors provide a table showing the effect size and sample size of each included study, the whole process of estimating the effect size is clearly revealed. These practices increase the validity and reliability of the results of a meta-analysis.
In our results, description of ROB within studies [Q19; BJA (38.7%), Anaesthesia (41.9%), KJA (33.3%)], ROB across studies [Q22; BJA (41.3%), Anaesthesia (46.5%)], and additional analyses [Q23; BJA (77.3%), Anaesthesia (48.8%)], that is, whether the authors performed a sensitivity analysis, subgroup analysis, or meta-regression to evaluate the heterogeneity of the included studies according to the AMSTAR and PRISMA guidelines, was poor. However, the descriptions of items Q22 and Q23 in the KJA were complete. The items on objectives (Q4), information sources (Q7), the effect sizes and sample sizes of the individual studies, the table of study characteristics (Q18), and the summary of evidence (Q24) were fully compliant in the BJA, Anaesthesia, and KJA.
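To make the modeling choice discussed above concrete, the following sketch computes an inverse-variance fixed-effect pooled estimate and a DerSimonian-Laird random-effects estimate (one common random-effects estimator) from hypothetical study-level effect sizes, together with Cochran's Q as the homogeneity statistic. It is illustrative only and does not reproduce any of the reviewed analyses.

```python
# Illustrative fixed-effect vs. random-effects pooling; study data are hypothetical.
import math

effects = [0.30, 0.10, 0.55, 0.22, 0.48]      # e.g., log odds ratios per study
variances = [0.04, 0.09, 0.02, 0.06, 0.03]    # corresponding within-study variances

def pooled(effects, variances, tau2=0.0):
    """Inverse-variance weighted mean and 95% CI; tau2 > 0 gives a random-effects estimate."""
    weights = [1.0 / (v + tau2) for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Fixed-effect model
fe_mean, fe_ci = pooled(effects, variances)

# Cochran's Q and the DerSimonian-Laird between-study variance estimate
w = [1.0 / v for v in variances]
q = sum(wi * (e - fe_mean) ** 2 for wi, e in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects model
re_mean, re_ci = pooled(effects, variances, tau2)
print(f"Fixed effect  : {fe_mean:.3f} (95% CI {fe_ci[0]:.3f} to {fe_ci[1]:.3f})")
print(f"Random effects: {re_mean:.3f} (95% CI {re_ci[0]:.3f} to {re_ci[1]:.3f}), tau^2 = {tau2:.3f}")
```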
In the Discussion section, authors should explain the study characteristics, including whether the meta-analysis is the first of its kind, and should describe the limitations of previous studies and the rationale of the present study. All of the articles included in our study described the summary of evidence (Q24), limitations (Q25), and conclusions of the study (Q26) well.
We investigated the influence of the participation of statisticians on the reporting and methodologic quality of meta-analyses. It has been argued that the creation of a meta-analysis requires a team that includes both statisticians and physicians, and meta-analyses are of higher quality when a statistician is involved in designing and conducting the study [21]. In our study, the average AMSTAR scores of meta-analyses in which a statistician was part of the team did not significantly differ from those of teams of physicians alone (Table 6). However, the average PRISMA scores of meta-analyses in the anesthesia literature as a whole, and in the BJA, were significantly higher when a statistician participated as part of the team than when the team consisted of physicians alone (P = 0.004 and P = 0.003, respectively, Table 5). The relative discrepancy between the scores in this study stems from the different aims of the checklists (Appendixes 1 and 2).
AMSTAR was designed to assess methodological quality, whereas the PRISMA statement addresses reporting quality. The AMSTAR guidelines were developed in 2007 by combining elements of the Overview of Quality Assessment Questionnaire (OQAQ) and the Sacks instrument with other items reflecting methodological advances made since the OQAQ and the Sacks instrument were introduced [5]. The first checklist specific to meta-analyses was the Quality of Reporting of Meta-analyses (QUOROM) statement, published in 1999 and designed to address the suboptimal reporting of meta-analyses [22]. QUOROM was revised into PRISMA in 2009 to encompass both meta-analyses and systematic reviews [16]. PRISMA is a reporting checklist with 27 items divided into 7 sections: title, abstract, introduction, methods, results, discussion, and funding. Since their development, AMSTAR and PRISMA have become widely accepted as tools to ensure adequate reporting and methodology of meta-analyses and systematic reviews. Although scientific evidence supports only a subset of the items, we suggest that these guidelines be used as a standard in the preparation, reporting, and appraisal of meta-analyses of RCTs. To date, AMSTAR is widely used by investigators in the medical fields as well as by policy-making organizations, and it has gained wide acceptance for its reliability, respectability, and reproducibility. The purpose of the PRISMA statement is to help authors improve the reporting of meta-analyses and systematic reviews. It may be used as a basis for evaluating interventions and may also be useful for the critical appraisal of published systematic reviews. However, the PRISMA checklist is not a quality assessment tool for estimating the quality of a systematic review.
Our study has limitations. First, it was limited to journals related to anesthesia, and meta-analyses in anesthesia journals have been considered to be of higher quality than those published in other specialty journals [23,24]. Second, although we evaluated whether the participation of statisticians contributed to reporting and methodological quality, we did not investigate the corresponding authors' previous experience with meta-analysis. Whether the participation of an expert in meta-analysis increases the quality of the study should be investigated.
In conclusion, even though there is little variability in the methodology of meta-analyses in the anesthesia literature, our findings suggest that the reporting quality of meta-analyses in the anesthesia literature has improved over time following the release of the AMSTAR and PRISMA guidelines and with the participation of statisticians. We suggest that reporting and methodology checklists be applied to every submitted meta-analysis to increase the quality of the evidence-based documentation process. Applying AMSTAR and PRISMA to future meta-analyses may result in higher quality in both reporting and methodology. We strongly recommend that authors use the clear guidelines of both AMSTAR and PRISMA, and reviewers should be familiar with the checklists accordingly.

References

1. Glass GV. Primary, secondary, and meta-analysis of research. Educ Res 1976; 5: 3-8.
2. Borenstein M, Hedges LV, Higgins JP, Rothstein HR. How a meta-analysis works. In: Introduction to Meta-Analysis. Chichester, John Wiley & Sons, Ltd. 2009, pp 1-7.
3. Borenstein M, Hedges LV, Higgins JP, Rothstein HR. Why perform a meta-analysis. In: Introduction to Meta-Analysis. Chichester, John Wiley & Sons, Ltd. 2009, pp 9-14.
4. Mulrow CD. The medical review article: state of the science. Ann Intern Med 1987; 106: 485-488. PMID: 3813259.
5. Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analyses of randomized controlled trials. N Engl J Med 1987; 316: 450-455. PMID: 3807986.
6. Biondi-Zoccai GG, Lotrionte M, Abbate A, Testa L, Remigi E, Burzotta F, et al. Compliance with QUOROM and quality of reporting of overlapping meta-analyses on the role of acetylcysteine in the prevention of contrast associated nephropathy: case study. BMJ 2006; 332: 202-209. PMID: 16415336.
7. Valentine JC, Cooper H, Patall EA, Tyson D, Robinson JC. A method for evaluating research syntheses: the quality, conclusions, and consensus of 12 syntheses of the effects of after-school programs. Res Synth Methods 2010; 1: 20-38. PMID: 26056091.
8. Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mt Sinai J Med 1996; 63: 216-224. PMID: 8692168.
9. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol 2009; 62: e1-e34. PMID: 19631507.
10. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med 1997; 126: 376-380. PMID: 9054282.
11. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000; 283: 2008-2012. PMID: 10789670.
12. Schulze R. Current methods for meta-analysis. J Psychol 2007; 215: 90-103.
13. Davidson AJ, Smith KR, Blussé van Oud-Alblas HJ, Lopez U, Malviya S, Bannister CF, et al. Awareness in children: a secondary analysis of five cohort studies. Anaesthesia 2011; 66: 446-454. PMID: 21501128.
14. Giles JW, Sear JW, Foëx P. Effect of chronic beta-blockade on peri-operative outcome in patients undergoing non-cardiac surgery: an analysis of observational and case control studies. Anaesthesia 2004; 59: 574-583. PMID: 15144298.
15. Pikwer A, Åkeson J, Lindgren S. Complications associated with peripheral or central routes for central venous cannulation. Anaesthesia 2012; 67: 65-71. PMID: 21972789.
16. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6: e1000097. PMID: 19621072.
17. Borenstein M, Hedges LV, Higgins JP, Rothstein HR. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res Synth Methods 2010; 1: 97-111. PMID: 26061376.
18. Nikolakopoulou A, Mavridis D, Salanti G. How to interpret meta-analysis models: fixed effect and random effects meta-analyses. Evid Based Ment Health 2014; 17: 64. PMID: 24778439.
19. Borenstein M, Hedges LV, Higgins JP, Rothstein HR. Fixed-effect model. In: Introduction to Meta-Analysis. Chichester, John Wiley & Sons, Ltd. 2009, pp 63-67.
20. Schulz KF, Grimes DA. Sample size calculations in randomised trials: mandatory and mystical. Lancet 2005; 365: 1348-1353.
21. Klimo P Jr, Thompson CJ, Ragel BT, Boop FA. Methodology and reporting of meta-analyses in the neurosurgical literature. J Neurosurg 2014; 120: 796-810. PMID: 24460488.
22. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999; 354: 1896-1900. PMID: 10584742.
23. Tao KM, Li XQ, Zhou QH, Moher D, Ling CQ, Yu WF. From QUOROM to PRISMA: a survey of high-impact medical journals' instructions to authors and a review of systematic reviews in anesthesia literature. PLoS One 2011; 6: e27611. PMID: 22110690.
24. Choi PT, Halpern SH, Malik N, Jadad AR, Tramèr MR, Walder B. Examining the evidence in anesthesia literature: a critical appraisal of systematic reviews. Anesth Analg 2001; 92: 700-709. PMID: 11226105.

Appendix

Appendix 1

AMSTAR Checklist [from: Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, et al. External Validation of a Measurement Tool to Assess Systematic Reviews (AMSTAR). PLoS ONE 2007; 2: e1350]


Appendix 2

Checklist of Items to Include When Reporting a Systematic Review (with or without Meta-analysis) (from: Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred Reporting Items for Systematic Reviews and Meta-analyses: the PRISMA Statement. PLoS Med 2009; 6: e1000097)

Fig. 1

Percentage of papers that fulfilled each item in the AMSTAR checklist. BJA: British Journal of Anaesthesia, KJA: Korean Journal of Anesthesiology.

Fig. 2

Percentage of papers that fulfilled each item in the PRISMA checklist. BJA: British Journal of Anaesthesia, KJA: Korean Journal of Anesthesiology.

Table 1

The Average Percentage of ‘Yes’ Reply to 11 Items of the AMSTAR Guidelines (2007) Checked from Anesthetic Meta-analysis Literature Published from Jan. 2004 to Nov. 2016

                 2004–2007             2008–2016              Overall                P value
BJA              59.7 (46.4), n = 7    72.1 (32.6), n = 68    70.9 (33.2), n = 75    0.278
Anaesth          65.9 (40.7), n = 4    74.3 (26.6), n = 39    74.0 (27.9), n = 43    0.520
KJA              –                     69.7 (40.7), n = 3     69.7 (40.7), n = 3     –
BJA and Anaesth  66.1 (38.7), n = 11   70.5 (37.0), n = 107   72.0 (32.8), n = 118   0.243

Values are the average percentage (SD) of ‘Yes’ responses, i.e., of AMSTAR checklist items fulfilled adequately. 2004–2007: meta-analyses published from Jan. 2004 to Dec. 2007; 2008–2016: meta-analyses published from Jan. 2008 to Nov. 2016. P values compare the average percentage scores of 2008–2016 with those of 2004–2007 (Wilcoxon signed rank test). BJA: British Journal of Anaesthesia, Anaesth: Anaesthesia, KJA: Korean Journal of Anesthesiology.

Table 2

The Average Percentage of ‘Yes’ Reply to 27 Items (Sub-items) of the PRISMA Guidelines (2009) Checked from Anesthesia Meta-analysis Literature Published from Jan. 2004 to Nov. 2016

                 2004–2009             2010–2016              Overall                P value
BJA              81.3 (31.9), n = 18   86.4 (18.2), n = 57    85.1 (20.9), n = 75    0.352
Anaesth          75.4 (33.2), n = 14   84.2 (19.9), n = 29    81.3 (23.8), n = 43    0.014*
KJA              –                     87.7 (26.4), n = 3     87.7 (26.4), n = 3     –
BJA and Anaesth  78.3 (32.4), n = 32   85.3 (19.0), n = 86    83.2 (22.2), n = 118   0.538

Values are the average percentage (SD) of ‘Yes’ responses, i.e., of PRISMA checklist items fulfilled adequately. 2004–2009: meta-analyses published from Jan. 2004 to Dec. 2009; 2010–2016: meta-analyses published from Jan. 2010 to Nov. 2016. The asterisk (*) indicates that the average percentage score of 2010–2016 is significantly different from that of 2004–2009 (Mann-Whitney rank sum test). BJA: British Journal of Anaesthesia, Anaesth: Anaesthesia, KJA: Korean Journal of Anesthesiology.

Table 3

Percentage of 11 AMSTAR Items Fulfilled Adequately in Meta-analyses Published in the Anesthesia Literature from Jan. 2004 to Nov. 2016

Items  BJA (n = 75)  Anaesth (n = 43)  KJA (n = 3)
Q1 Was an ‘a priori’ design provided? 12.3 51.1 66.7
Q2 Was there duplicate study selection and data extraction? 62.4 51.1 33.3
Q3 Was a comprehensive literature search performed? 94.1 97.7 100.0
Q4 Was the status of publication (i.e., grey literature) used as an inclusion criterion? 3.7 18.6 0
Q5 Was a list of studies (included and excluded) provided? 98.5 97.7 100.0
Q6 Were the characteristics of the included studies provided? 97.1 97.7 100.0
Q7 Was the scientific quality of the included studies assessed and documented? 97.1 93.0 100.0
Q8 Was the scientific quality of the included studies used appropriately in formulating conclusions? 97.1 97.7 66.7
Q9 Were the methods used to combine the findings of studies appropriate? 97.8 93.0 100.0
Q10 Was the likelihood of publication bias assessed? 62.4 62.8 100.0
Q11 Was the conflict of interest stated? 54.5 53.5 0

BJA: British Journal of Anaesthesia, Anaesth: Anaesthesia, KJA: Korean Journal of Anesthesiology.

Table 4

Percentage of 27 PRISMA Items (Sub-items) Fulfilled Adequately from Each Meta-analysis Published from Jan. 2004 to Nov. 2016

Items Sub-items BJA (n = 75) Anaesth (n = 43) KJA (n = 3)
TITLE 92.0 100.0 100.0
ABSTRACT Structured summary 100.0 97.7 100.0
INTRODUCTION Rationale 100.0 95.3 100.0
Objectives 100.0 100.0 100.0
METHODS Protocol and registration 66.7 51.2 66.7
Eligibility criteria 100.0 97.7 100.0
Information sources 100.0 100.0 100.0
Search strategy 98.7 93.0 100.0
Study selection 97.3 95.3 100.0
Data collection process 89.3 86.0 100.0
Data items 97.3 93.0 100.0
Risk of bias in individual studies 44.0 39.5 33.3
Summary measures 94.7 93.0 100.0
Synthesis of results 93.3 95.3 100.0
Risk of bias across studies 42.7 48.8 66.7
Additional analyses 76.0 58.1 66.7
RESULTS Study selection 100.0 97.7 100.0
Study characteristics 100.0 100.0 100.0
Risk of bias within studies 38.7 41.9 33.3
Results of individual studies 94.7 100.0 100.0
Synthesis of results 97.3 93.0 100.0
Risk of bias across studies 41.3 46.5 100.0
Additional analysis 77.3 48.8 100.0
DISCUSSION Summary of evidence 100.0 100.0 100.0
Limitations 92.0 88.4 100.0
Conclusions 98.7 100.0 100.0
FUNDING Funding 66.7 34.9 0.0

BJA: British Journal of Anaesthesia, Anaesth: Anaesthesia, KJA: Korean Journal of Anesthesiology.

Table 5

The Average Percentage of ‘Yes’ Reply to 27 Items (Sub-items) of the PRISMA Guidelines (2009) Checked from Anesthesia Meta-analysis Literature Published from Jan. 2004 to Nov. 2016 according to the Participation of Meta-analysis Expert (Statistician)

         No                     Yes                    P value
BJA      82.3 (20.4), n = 66    87.7 (20.7), n = 9     0.003*
Anaesth  82.2 (23.9), n = 34    77.4 (25.5), n = 9     0.723
KJA      83.3 (34.0), n = 2     96.3 (19.2), n = 1     0.052
Total    82.6 (26.4), n = 102   87.1 (23.1), n = 19    0.004*

Values are the average percentage (SD) of PRISMA checklist items fulfilled adequately. No: a meta-analysis expert (statistician) did not participate in the author team; Yes: a meta-analysis expert (statistician) participated in the author team. The asterisk (*) indicates that the average percentage score of the ‘Yes’ group is significantly different from that of the ‘No’ group (Mann-Whitney rank sum test).

BJA: British Journal of Anaesthesia, Anaesth: Anaesthesia, KJA: Korean Journal of Anesthesiology.

Table 6

The Average Percentage of ‘Yes’ Reply to 11 Items of the AMSTAR Guidelines (2007) Checked from Anesthesia Meta-analysis Literature Published from Jan. 2004 to Nov. 2016 according to the Participation of Meta-analysis Expert (Statistician)

         No                     Yes                    P value
BJA      70.9 (33.3), n = 66    71.7 (34.9), n = 9     0.487
Anaesth  74.9 (29.2), n = 34    70.9 (23.4), n = 9     0.231
KJA      68.2 (46.2), n = 2     72.7 (46.7), n = 1     0.778
Total    71.4 (36.0), n = 102   71.8 (35.1), n = 19    0.974

Values are the average percentage (SD) of AMSTAR checklist items fulfilled adequately. No: a meta-analysis expert (statistician) did not participate in the author team; Yes: a meta-analysis expert (statistician) participated in the author team. P values compare the average percentage scores of the ‘Yes’ group with those of the ‘No’ group (Mann-Whitney rank sum test). BJA: British Journal of Anaesthesia, Anaesth: Anaesthesia, KJA: Korean Journal of Anesthesiology.
