Three pillars of artificial intelligence research in anesthesiology: welcoming address to the Korean Journal of Anesthesiology’s new guidelines for machine learning and deep learning research

Article information

Korean J Anesthesiol. 2025;78(3):181-182
Publication date (electronic) : 2025 May 14
doi : https://doi.org/10.4097/kja.25318
Department of Anesthesiology and Pain Medicine, Dongguk University Ilsan Hospital, Goyang, Korea
Corresponding author: Younsuk Lee, M.D., Ph.D. Department of Anesthesiology and Pain Medicine, Dongguk University Ilsan Hospital, 27 Donggukro, Ilsandong-gu, Goyang 10326, Korea Tel: +82-31-961-7872 Fax: +82-31-787-7864 Email: ylee@dgu.ac.kr
Received 2025 April 21; Accepted 2025 April 29.

Artificial intelligence (AI) technologies, particularly machine learning (ML) and deep learning (DL), are revolutionizing medical research and clinical medicine [1]. These technologies can be used to improve diagnosis, establish prognosis, and personalize treatment strategies. As public enthusiasm grows, so does the need for scientific rigor, transparency, and ethical responsibility. The public should also recognize that responsible oversight is necessary when AI technology is applied in medical practice.

The Korean Journal of Anesthesiology (KJA) has released new guidelines for authors using AI and for their reviewers [2]. This initiative is more than a routine editorial milestone; it is a declaration of the values steadfastly upheld by the KJA. These guidelines were developed through a persistent collaborative effort by the authors, Prof. Kwak and Prof. Kim, and the editor, Prof. Sangseok Lee, and are structured to serve both researchers who have recently begun using AI tools and those seeking a checklist before submitting their manuscripts to the journal. The editor deserves praise for opening these new horizons, and the authors should be commended for their clarity, precision, and scholarly purpose.

The KJA’s new guidelines rest on three pillars: transparency, reproducibility, and interpretability. Although these principles are not unique to AI, they are particularly difficult to enact in the AI space. Without a consistent reporting format, the design of AI models, especially DL models, is often obscured, and their outputs become incomprehensible, reading as academic excess: “a thicket” so impenetrable that reviewers shrink from it, readers and clinicians lose their way, and patients receive no benefit.

The new guidelines directly address model ambiguity by requiring detailed reporting of the model structure, training and validation procedures, tuning of the hyperparameters, evaluation metrics, and external validation. The guidelines also require the disclosure of dataset characteristics, reporting on the handling of missing values, and strategies for class imbalance. Urging authors to include separate documentation describing feature importance and detailed clustering also lays the groundwork for promoting fairness and bias detection in AI research.

Of particular note is the recommendation to share the source code, provide details on data preprocessing pipelines, and report the software and platform versions used. This practice enables readers to independently validate the results, which promotes reproducibility, an often-overlooked imperative in clinical AI studies.

By implementing these requirements, the KJA demonstrates its refusal to passively adopt AI. Instead, it actively aligns itself with leading journals that are instrumental in shaping the future of AI in medicine. The British Medical Journal (BMJ) has introduced a “checklist for medical AI” [3], Radiology has adopted AI-specific extensions of the CONSORT and STARD guidelines [4], and The Lancet Digital Health and New England Journal of Medicine (NEJM) AI have emphasized bias analysis, model interpretability, and open access to code [5–7]. Consistent with these journals, the KJA recognizes that the ethical and scientific standards of clinical research must not be disregarded in the rush to adopt AI models. The convergence of these journals around a common set of values underscores the ever-relevant paradigm that “innovation and responsibility are indivisible.”

More than a set of technical instructions, these guidelines affirm that the KJA does not simply participate in global scientific discourse; it helps to define its direction. In the 50 years since its founding, the KJA has transformed from a well-respected local journal in Korea into an international journal. It has embraced open science, introduced a novel review process, and continually improved the quality and relevance of its published papers. It is therefore unsurprising that the ongoing transformation of medicine by AI has prompted these foundational initiatives.

These guidelines are not a destination, but rather a new beginning. As advancements in AI continue to develop, standards for its responsible use in clinical research must continually evolve. The community should participate in active discussions and ongoing revision processes. With this publication, the KJA has demonstrated leadership within the field of anesthesiology and the broader medical community. We hope that these guidelines will serve as a practical resource and principal reference for authors, reviewers, and editors alike. We reaffirm our belief that technological innovation, no matter how alluring, must be guided by the higher values of scientific integrity and ethical propriety. The KJA not only provides guidance, but also inspires and reminds us of the importance of publications rooted in ambition and authenticity.

Notes

Funding

None.

Conflicts of Interest

No potential conflict of interest relevant to this article was reported.

References

1. Lee J, Ko T, Yang K, Lee Y. Review of the 2024 fall conference of the Korean Society of Medical Informatics—AI’s role in shaping modern healthcare. Healthc Inform Res 2025;31:1–3.
2. Kwak SG, Kim J. Comprehensive reporting guidelines and checklist for studies developing and utilizing artificial intelligence models. Korean J Anesthesiol 2025;78:199–214. 10.4097/kja.25075.
3. Collins GS, Moons KG, Dhiman P, Riley RD, Beam AL, Van Calster B, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ 2024;385:e078378. Erratum in: BMJ 2024;385:q902. 10.1136/bmj-2023-078378.
4. Radiology: Artificial Intelligence. Instructions for authors [Internet]. Oak Brook (IL): RSNA; 2024 Jul [cited 2025 Apr 10]. Available from https://pubs.rsna.org/page/ai/author-instructions.
5. The Lancet Digital Health. Information for authors [Internet]. 2025 Feb [cited 2025 Apr 10]. Available from https://www.thelancet.com/pb-assets/Lancet/authors/tldh-info-for-authors.pdf.
6. NEJM AI. Author center [Internet]. Waltham (MA): Massachusetts Medical Society; 2025 [cited 2025 Apr 10]. Available from https://ai.nejm.org/author-center.
7. Yesil Science. New guidelines aim to address bias in medical AI technologies [Internet]. Yesil Science Teknoloji Ltd. Sti.; 2024 Dec [cited 2025 Apr 10]. Available from https://yesilscience.com/new-guidelines-aim-to-address-bias-in-medical-ai-technologies/.
