Korean J Anesthesiol > Epub ahead of print
DOI: https://doi.org/10.4097/kja.25646    [Epub ahead of print]
Published online February 3, 2026.
Comparison of large language models and conventional machine learning in postoperative outcome prediction: a retrospective, multi-national development and validation study
Jipyeong Lee1, Hyeonsik Kim2, Luke Kim3, Leerang Lim4, Hyung-Chul Lee1,4, Hyeonhoon Lee1,5,6
1Healthcare AI Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
2Department of Interdisciplinary Program in Medical Informatics, Seoul National University, Seoul, Republic of Korea
3Monash School of Medicine, Monash University, Victoria, Australia
4Department of Anesthesiology and Pain Medicine, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
5Department of Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
6Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Republic of Korea
Corresponding author:  Hyeonhoon Lee, Tel: +82-2-2072-0723, 
Email: hhoon@snu.ac.kr
Received: 25 July 2025   • Revised: 16 November 2025   • Accepted: 19 November 2025
Abstract
Background
Conventional machine learning (ML) models for predicting surgical outcomes have limited generalizability. We explored large language models (LLMs) as scalable alternatives to conventional ML models for predicting postoperative outcomes, including in-hospital 30-day mortality, intensive care unit (ICU) admission, and acute kidney injury (AKI).
Methods
This study utilized the Informative Surgical Patient for Innovative Research Environment (INSPIRE) dataset (n = 80,985) from South Korea for model development and internal validation, and the Medical Informatics Operating Room Vitals and Events Repository (MOVER) dataset (n = 6,165) from the United States for external validation. Three LLMs—Generative Pre-trained Transformer (GPT)-4o, Llama-3-70B, and OpenBioLLM-70B—were compared against conventional ML models using various prompt engineering approaches. LLMs were evaluated with different model parameter quantizations (4-bit normalized floating point vs. 16-bit brain floating point).
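The paper does not publish its prompts, but the general recipe for using an LLM as a tabular outcome predictor can be sketched as follows: serialize each patient's features into a zero-shot prompt, send it to the model, and map the free-text reply back to a binary label. The feature names below are illustrative placeholders, not the INSPIRE schema, and `build_prompt`/`parse_answer` are hypothetical helpers, not functions from the study.

```python
def build_prompt(patient: dict) -> str:
    """Serialize tabular perioperative features into a zero-shot prompt.
    Feature names are illustrative only, not the INSPIRE variable set."""
    lines = [f"- {name}: {value}" for name, value in patient.items()]
    return (
        "You are a perioperative risk prediction assistant.\n"
        "Patient features:\n" + "\n".join(lines) + "\n"
        "Will this patient die in hospital within 30 days of surgery? "
        "Answer 'yes' or 'no'."
    )

def parse_answer(reply: str) -> int:
    """Map a free-text LLM reply to a binary prediction (1 = event).
    A real pipeline would need more robust parsing of model output."""
    return int(reply.strip().lower().startswith("yes"))

# Example usage with a toy patient record:
prompt = build_prompt({"age": 71, "ASA class": 3, "emergency surgery": "no"})
label = parse_answer("No, the predicted risk is low.")  # -> 0
```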
Results
OpenBioLLM-70B was comparable to eXtreme Gradient Boosting (XGBoost) across all tasks during external validation (in-hospital 30-day mortality: area under the receiver operating characteristic curve [AUROC] 0.782 [95% CI: 0.748–0.813] vs. 0.791 [95% CI: 0.753–0.825]; ICU admission: AUROC 0.595 [95% CI: 0.581–0.609] vs. 0.594 [95% CI: 0.580–0.608]; AKI: AUROC 0.830 [95% CI: 0.802–0.855] vs. 0.823 [95% CI: 0.792–0.851]). Open-source LLMs maintained performance with 4-bit quantization, reducing computational requirements by 75%.
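The AUROC point estimates and 95% CIs above are typically obtained by a patient-level percentile bootstrap. The paper does not state its exact procedure, so the following is a minimal sketch under that assumption: AUROC is computed as the Mann-Whitney probability that a random positive is scored above a random negative, and the CI comes from resampling patients with replacement.

```python
import random

def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auroc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUROC, resampling patients with
    replacement; resamples with only one class present are skipped."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if len(set(ys)) < 2:
            continue
        stats.append(auroc(ys, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

On large validation sets an O(pos x neg) pairwise AUROC is slow; rank-based implementations (e.g., scikit-learn's `roc_auc_score`) are the practical choice.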
Conclusions
The findings support the versatility and efficiency of LLMs for clinical decision support: open-source models can be deployed on-premises, addressing data privacy concerns. Further validation with diverse datasets is needed to ensure their reliability and applicability across different perioperative settings.
Key Words: Acute kidney injury; Clinical decision support systems; Large language models; Machine learning; Mortality; Patient readmission; Perioperative medicine; Treatment outcome

Copyright © 2026 by Korean Society of Anesthesiologists.
