Artificial Intelligence in Surgery

A special issue of Bioengineering (ISSN 2306-5354).

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 9869

Special Issue Editors


Dr. Andrea Moglia
Guest Editor
Department of Electronics, Information, and Bioengineering (DEIB), Politecnico di Milano, 20133 Milan, Italy
Interests: artificial intelligence in surgery; artificial intelligence in medical imaging; surgical skills assessment; surgical simulation; telemedicine; surgical robots

Dr. Amin Madani
Guest Editor
Department of Surgery, University Health Network, Toronto, ON, Canada
Interests: artificial intelligence; technology assessment; simulation; decision analysis; performance measurement; computer vision; virtual reality; augmented reality

Dr. Daniel Hashimoto
Guest Editor
Department of Surgery, Hospital of the University of Pennsylvania, 3400 Spruce St, 4 Silverstein, Philadelphia, PA 19104, USA
Interests: artificial intelligence; computer vision; surgical endoscopy; laparoscopic surgery; robotic surgery

Special Issue Information

Dear Colleagues,

Surgical data science is a fast-growing research field in both academia and industry, and it will considerably impact all aspects of surgery: training, simulation, intraoperative decision making, the prediction of events and outcomes, the preoperative planning of major operations and reinterventions, the monitoring of postoperative progress, and the management of complications.

In particular, minimal access surgery generates a considerable amount of data that can be processed by artificial intelligence (AI), from the preoperative phase (e.g., clinical, laboratory, and imaging tests), through the intraoperative phase (e.g., video recordings and, in robot-assisted surgery, kinematic data), to the postoperative phase (e.g., operative times).

The availability of increasingly complex AI models has improved performance on surgical data science tasks. At the same time, advances in hardware have significantly reduced the computation times of these models.

We therefore invite you to submit original research papers and comprehensive reviews on the theory and applications of AI in surgery, from the development of AI models on existing or new datasets to their clinical applications in laparoscopy, robot-assisted surgery, and endovascular surgery.

Topics of interest for this Special Issue include, but are not limited to, the following:

  • Automatic skills assessment;
  • Autonomous surgical robots;
  • Computer vision;
  • Natural Language Processing (NLP);
  • Federated learning;
  • Imitation learning;
  • Intraoperative decision making;
  • Predictive modeling of risks, diseases, and patients' outcomes;
  • Segmentation of radiological images for preoperative planning;
  • Self-supervised learning;
  • Surgical simulation/training;
  • Video-based assessment of surgical procedures.

Dr. Andrea Moglia
Dr. Amin Madani
Dr. Daniel Hashimoto
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)

Research

18 pages, 27457 KiB  
Article
Combined Edge Loss UNet for Optimized Segmentation in Total Knee Arthroplasty Preoperative Planning
by Luca Marsilio, Andrea Moglia, Matteo Rossi, Alfonso Manzotti, Luca Mainardi and Pietro Cerveri
Bioengineering 2023, 10(12), 1433; https://doi.org/10.3390/bioengineering10121433 - 16 Dec 2023
Viewed by 1293
Abstract
Bone segmentation and 3D reconstruction are crucial for total knee arthroplasty (TKA) surgical planning with Personalized Surgical Instruments (PSIs). Traditional semi-automatic approaches provide reliable outcomes but are time-consuming and operator-dependent. Moreover, the recent expansion of artificial intelligence (AI) tools into various medical domains is transforming modern healthcare. Accordingly, this study introduces an automated AI-based pipeline to replace the current operator-based tibia and femur 3D reconstruction procedure, enhancing TKA preoperative planning. Leveraging a dataset of 822 CT images, a novel patch-based method and an improved segmentation label generation algorithm were coupled with the Combined Edge Loss UNet (CEL-UNet), a novel CNN architecture featuring an additional decoding branch to boost bone boundary segmentation. Root mean squared errors and Hausdorff distances between the predicted surfaces and the reference bones showed median (interquartile) values of 0.26 (0.19–0.36) mm and 0.24 (0.18–0.32) mm, and of 1.06 (0.73–2.15) mm and 1.43 (0.82–2.86) mm, for the tibia and femur, respectively, outperforming previous results of our group, state-of-the-art methods, and UNet models. A feasibility analysis for a PSI-based surgical plan revealed sub-millimetric distance errors and sub-angular alignment uncertainties in the PSI contact areas and the two cutting planes. Finally, testing in an operational environment underscored the pipeline's efficiency: more than half of the processed cases complied with the PSI prototyping requirements, reducing the overall time from 35 min to 13.1 s, while the remaining cases required a manual refinement step to meet those requirements, still completing the procedure four to eleven times faster than the manufacturer standards. To conclude, this research advocates the real-world application and optimization of AI solutions in orthopedic surgical practice.
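
As a rough illustration of the combined-edge-loss idea, the PyTorch sketch below pairs a Dice region loss with a binary cross-entropy term on label-derived edge maps. It is not the authors' implementation: the dual-branch output signature, the morphological edge extraction, and the `lambda_edge` weighting are all assumptions.

```python
# Sketch of a combined region + boundary segmentation loss (PyTorch).
# The dual-decoder CEL-UNet itself is not reproduced; we assume the network
# returns (region_logits, edge_logits) and that lambda_edge = 0.5 is sensible.
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for binary segmentation logits of shape (N, 1, H, W)."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def edge_map(mask):
    """Approximate label boundaries via a morphological gradient (max-pool trick)."""
    dilated = F.max_pool2d(mask, kernel_size=3, stride=1, padding=1)
    eroded = -F.max_pool2d(-mask, kernel_size=3, stride=1, padding=1)
    return (dilated - eroded).clamp(0.0, 1.0)

def combined_edge_loss(region_logits, edge_logits, target, lambda_edge=0.5):
    """Dice loss on the region branch plus BCE on the boundary branch."""
    edges = edge_map(target)
    region_term = dice_loss(region_logits, target)
    edge_term = F.binary_cross_entropy_with_logits(edge_logits, edges)
    return region_term + lambda_edge * edge_term
```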

21 pages, 985 KiB  
Article
Postoperative Nausea and Vomiting Prediction: Machine Learning Insights from a Comprehensive Analysis of Perioperative Data
by Jong-Ho Kim, Bo-Reum Cheon, Min-Guan Kim, Sung-Mi Hwang, So-Young Lim, Jae-Jun Lee and Young-Suk Kwon
Bioengineering 2023, 10(10), 1152; https://doi.org/10.3390/bioengineering10101152 - 1 Oct 2023
Viewed by 988
Abstract
Postoperative nausea and vomiting (PONV) are common complications after surgery. This study aimed to demonstrate the use of machine learning for predicting PONV and to provide insights based on a large amount of data. This retrospective study included perioperative data, such as patient characteristics and perioperative factors, from two hospitals. Logistic regression, random forest, light-gradient boosting machines, and multilayer perceptrons were used as the machine learning algorithms to develop the models. The dataset included 106,860 adult patients, with an overall PONV incidence of 14.4%. The area under the receiver operating characteristic curve (AUROC) of the models was 0.60–0.67. In prediction models that included only the known risk and mitigating factors of PONV, the AUROC was 0.54–0.69. Several features were associated with patient-controlled analgesia, with opioids being the most important feature in almost all models. In conclusion, machine learning provides valuable insights into PONV prediction, the selection of significant features for prediction, and feature engineering.
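
For readers who want a concrete picture of this kind of model comparison, the sketch below trains the four model families named in the abstract and reports AUROC on a held-out split. It uses scikit-learn and LightGBM; the hyperparameters and splitting protocol are illustrative assumptions, not the study's.

```python
# Sketch of a multi-model PONV prediction benchmark (scikit-learn + LightGBM).
# X is a numeric feature matrix, y a binary PONV label; preprocessing omitted.
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

MODELS = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "lightgbm": LGBMClassifier(random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}

def evaluate(X, y):
    """Fit each model on a stratified split and print its held-out AUROC."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    for name, model in MODELS.items():
        model.fit(X_tr, y_tr)
        auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUROC = {auroc:.2f}")
```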

14 pages, 2048 KiB  
Article
Development of a Machine Learning Model of Postoperative Acute Kidney Injury Using Non-Invasive Time-Sensitive Intraoperative Predictors
by Siavash Zamirpour, Alan E. Hubbard, Jean Feng, Atul J. Butte, Romain Pirracchio and Andrew Bishara
Bioengineering 2023, 10(8), 932; https://doi.org/10.3390/bioengineering10080932 - 5 Aug 2023
Viewed by 1276
Abstract
Acute kidney injury (AKI) is a major postoperative complication that lacks established intraoperative predictors. Our objective was to develop a prediction model for postoperative AKI using preoperative and high-frequency intraoperative data. In this retrospective cohort study, we evaluated 77,428 operative cases at a single academic center between 2016 and 2022. A total of 11,212 cases with serum creatinine (sCr) data were included in the analysis; 8519 cases were randomly assigned to the training set and the remainder to the validation set. Fourteen preoperative and twenty intraoperative variables were evaluated using elastic net followed by hierarchical group least absolute shrinkage and selection operator (LASSO) regression. The training set was 56% male, with a median [IQR] age of 62 [51–72] and a 6% AKI rate. The retained model variables were the preoperative sCr value, the number of intraoperative minutes meeting cutoffs for urine output, heart rate, and perfusion index, and the total estimated blood loss. The area under the receiver operating characteristic curve was 0.81 (95% CI, 0.77–0.85). At a score threshold of 0.767, specificity was 77% and sensitivity was 74%. A web application that calculates the model score is available online. Our findings demonstrate the utility of intraoperative time series data for prediction problems, including a new potential use of the perfusion index. Further research is needed to evaluate the model in clinical settings.
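
A minimal sketch of the elastic-net screening step is given below, assuming scikit-learn; the subsequent hierarchical group LASSO stage is omitted (scikit-learn provides no group LASSO), and the feature handling is illustrative rather than the study's pipeline.

```python
# Sketch of elastic-net screening of candidate AKI predictors (scikit-learn).
# Variables surviving this step would then enter the group LASSO stage.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def elastic_net_screen(X, y, feature_names):
    """Fit a cross-validated elastic-net logistic model; keep nonzero features."""
    enet = LogisticRegressionCV(
        penalty="elasticnet", solver="saga",
        l1_ratios=[0.2, 0.5, 0.8], Cs=10, max_iter=5000,
    )
    model = make_pipeline(StandardScaler(), enet).fit(X, y)
    coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
    return [f for f, c in zip(feature_names, coefs) if not np.isclose(c, 0.0)]
```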

15 pages, 1971 KiB  
Article
Surgical Phase Recognition in Inguinal Hernia Repair—AI-Based Confirmatory Baseline and Exploration of Competitive Models
by Chengbo Zang, Mehmet Kerem Turkcan, Sanjeev Narasimhan, Yuqing Cao, Kaan Yarali, Zixuan Xiang, Skyler Szot, Feroz Ahmad, Sarah Choksi, Daniel P. Bitner, Filippo Filicori and Zoran Kostic
Bioengineering 2023, 10(6), 654; https://doi.org/10.3390/bioengineering10060654 - 27 May 2023
Cited by 3 | Viewed by 1796
Abstract
Video-recorded robotic-assisted surgeries allow the use of automated computer vision and artificial intelligence/deep learning methods for quality assessment and workflow analysis in surgical phase recognition. We considered a dataset of 209 videos of robotic-assisted laparoscopic inguinal hernia repair (RALIHR) collected from eight surgeons, defined rigorous ground-truth annotation rules, and then pre-processed and annotated the videos. We deployed seven deep learning (DL) models to establish a baseline accuracy for surgical phase recognition and explored four advanced architectures. For rapid execution of the studies, we initially engaged three dozen MS-level engineering students in a competitive classroom setting, followed by focused research. We unified the data processing pipeline in a confirmatory study and explored a number of scenarios that differed in how the DL networks were trained and evaluated. For the scenario with 21 validation videos from all surgeons, the Video Swin Transformer model achieved ~0.85 validation accuracy, and the Perceiver IO model achieved ~0.84. Our studies affirm the necessity of close collaborative research between medical experts and engineers for developing automated surgical phase recognition models deployable in clinical settings.
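
As an illustration of clip-level surgical phase classification (not the paper's training code), the sketch below fine-tunes torchvision's Video Swin Transformer as a stand-in; the number of phases, clip shape, and pretrained weights are assumptions.

```python
# Sketch of fine-tuning a Video Swin Transformer for phase recognition
# (PyTorch; requires a recent torchvision with video Swin models).
import torch
import torch.nn as nn
from torchvision.models.video import Swin3D_T_Weights, swin3d_t

NUM_PHASES = 8  # assumed size of the RALIHR phase label set

model = swin3d_t(weights=Swin3D_T_Weights.KINETICS400_V1)
model.head = nn.Linear(model.head.in_features, NUM_PHASES)  # new classifier

clips = torch.randn(2, 3, 16, 224, 224)   # (batch, channels, frames, H, W)
logits = model(clips)                     # (batch, NUM_PHASES)
predicted_phase = logits.argmax(dim=1)    # one phase label per clip
```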

13 pages, 2846 KiB  
Article
Machine Learning for Detecting Total Knee Arthroplasty Implant Loosening on Plain Radiographs
by Man-Soo Kim, Ryu-Kyoung Cho, Sung-Cheol Yang, Jae-Hyeong Hur and Yong In
Bioengineering 2023, 10(6), 632; https://doi.org/10.3390/bioengineering10060632 - 23 May 2023
Cited by 5 | Viewed by 1548
Abstract
(1) Background: The purpose of this study was to investigate whether the loosening of total knee arthroplasty (TKA) implants could be detected accurately on plain radiographs using a deep convolutional neural network (CNN). (2) Methods: We analyzed data for 100 patients who underwent revision TKA due to prosthetic loosening at a single institution from 2012 to 2020. We extracted 100 patients who underwent primary TKA without loosening through propensity score matching for age, gender, body mass index, operation side, and American Society of Anesthesiologists class. Transfer learning was used to prepare a detection model from a pre-trained Visual Geometry Group 19 (VGG19) network in two ways. First, the fully connected layer was removed and replaced with a new one, the convolutional layers were frozen without training, and only the fully connected layer was trained (transfer learning model 1). Second, a new model was constructed by adding a fully connected layer and varying the range of frozen convolutional layers (transfer learning model 2). (3) Results: Transfer learning model 1 gradually increased in accuracy, ultimately reaching 87.5%; the confusion matrix showed a sensitivity of 90% and a specificity of 100%. Transfer learning model 2, in which part of the convolutional stack was also trained, gradually increased in accuracy, ultimately reaching 97.5%, an improvement over model 1; its confusion matrix showed a sensitivity of 100% and a specificity of 97.5%. (4) Conclusions: Through transfer learning, the CNN algorithm shows high accuracy for detecting the loosening of TKA implants on plain radiographs.
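
The two transfer-learning setups described in the Methods can be sketched as follows in PyTorch; the unfreezing split point, head dimensions, and input assumptions are illustrative, not the paper's exact configuration.

```python
# Sketch of VGG19 transfer learning with a frozen vs. partially unfrozen
# convolutional stack (PyTorch); 224x224 radiograph crops are assumed.
import torch.nn as nn
from torchvision.models import VGG19_Weights, vgg19

def build_model(unfreeze_from=None):
    """VGG19 with a new binary head.

    unfreeze_from=None -> model 1: all convolutional layers frozen.
    unfreeze_from=28   -> model 2 (example): last conv block also trained.
    """
    model = vgg19(weights=VGG19_Weights.IMAGENET1K_V1)
    for i, layer in enumerate(model.features):
        trainable = unfreeze_from is not None and i >= unfreeze_from
        for p in layer.parameters():
            p.requires_grad = trainable
    model.classifier = nn.Sequential(      # new fully connected head
        nn.Linear(512 * 7 * 7, 256), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(256, 2),                 # loosening vs. no loosening
    )
    return model
```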

15 pages, 1755 KiB  
Article
Surgical Gesture Recognition in Laparoscopic Tasks Based on the Transformer Network and Self-Supervised Learning
by Athanasios Gazis, Pantelis Karaiskos and Constantinos Loukas
Bioengineering 2022, 9(12), 737; https://doi.org/10.3390/bioengineering9120737 - 29 Nov 2022
Cited by 5 | Viewed by 1836
Abstract
In this study, we propose a deep learning framework and a self-supervision scheme for video-based surgical gesture recognition. The proposed framework is modular. First, a 3D convolutional network extracts feature vectors from video clips, encoding spatial and short-term temporal features. Second, the feature vectors are fed into a transformer network that captures long-term temporal dependencies. Two main models are proposed on this backbone framework: C3DTrans (supervised) and SSC3DTrans (self-supervised). The dataset consisted of 80 videos from two basic laparoscopic tasks: peg transfer (PT) and knot tying (KT). To examine the potential of self-supervision, the models were trained on 60% and 100% of the annotated dataset. In addition, the best-performing model was evaluated on the JIGSAWS robotic surgery dataset. The best model (C3DTrans) achieved clip-level accuracies of 88.0% and 95.2% and gesture-level accuracies of 97.5% and 97.9% for PT and KT, respectively. SSC3DTrans performed similarly to C3DTrans when trained on 60% of the annotated dataset (clip-level accuracies of about 84% and 93% for PT and KT, respectively). The performance of C3DTrans on JIGSAWS was close to 76% accuracy, similar to or higher than prior techniques based on a single video stream, no additional video training, and online processing.
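
A minimal sketch of this modular design is shown below, assuming an off-the-shelf r3d_18 clip encoder in place of the authors' 3D CNN; the dimensions, layer counts, and absence of positional encoding are simplifications.

```python
# Sketch of a clip-encoder + transformer gesture recognizer (PyTorch).
# A 3D CNN embeds each short clip; a transformer models long-range order.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class GestureRecognizer(nn.Module):
    def __init__(self, num_gestures, d_model=512, num_layers=4):
        super().__init__()
        backbone = r3d_18(weights=None)
        backbone.fc = nn.Identity()        # keep the 512-d clip embeddings
        self.encoder = backbone
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_gestures)

    def forward(self, clips):              # (batch, seq, C, T, H, W)
        b, s = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, s, -1)
        return self.head(self.temporal(feats))  # per-clip gesture logits
```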
