Review

Machine-Learning-Based Prediction Modelling in Primary Care: State-of-the-Art Review

by Adham H. El-Sherbini 1, Hafeez Ul Hassan Virk 2, Zhen Wang 3,4, Benjamin S. Glicksberg 5 and Chayakrit Krittanawong 6,*

1 Faculty of Health Sciences, Queen’s University, Kingston, ON K7L 3N6, Canada
2 Harrington Heart & Vascular Institute, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH 44115, USA
3 Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN 55901, USA
4 Division of Health Care Policy and Research, Department of Health Sciences Research, Mayo Clinic, Rochester, MN 55901, USA
5 The Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
6 Cardiology Division, NYU School of Medicine and NYU Langone Health, New York, NY 10016, USA
* Author to whom correspondence should be addressed.
AI 2023, 4(2), 437-460; https://doi.org/10.3390/ai4020024
Submission received: 14 March 2023 / Revised: 8 May 2023 / Accepted: 16 May 2023 / Published: 23 May 2023

Abstract

Primary care has the potential to be transformed by artificial intelligence (AI) and, in particular, machine learning (ML). This review summarizes the potential of ML and its subsets to influence two domains of primary care: pre-operative care and screening. ML can be utilized in pre-operative care to forecast postoperative outcomes and to assist physicians in selecting surgical interventions. Clinicians can modify their strategies to reduce risk and enhance outcomes by using ML algorithms to examine patient data and discover factors that increase the risk of worsened health outcomes. ML can also enhance the precision and effectiveness of screening tests. Healthcare professionals can identify diseases at an early, treatable stage by using ML models to examine medical images and other diagnostic data and to spot patterns that may suggest disease or anomalies. Before the onset of symptoms, ML can be used to identify people at increased risk of developing specific disorders or diseases. ML algorithms can assess patient data such as medical history, genetics, and lifestyle factors to identify those at higher risk, enabling targeted interventions such as lifestyle adjustments or early screening. In general, using ML in primary care offers the potential to enhance patient outcomes, reduce healthcare costs, and boost productivity.

1. Introduction

Artificial intelligence (AI) is a field of study that attempts to replicate natural human intelligence in machines [1]. The machines can then independently perform activities that would otherwise require human intelligence. AI can be broken down into several subsets, such as machine learning (ML) and deep learning (DL) [2]. ML improves a software application’s accuracy in predicting outcomes by learning from data rather than relying on explicit programming. DL, a subset of ML, builds hierarchical representations of knowledge by learning from examples. These fundamental ideas are used to develop analytic models that turn this productive technology into practice. Since its introduction in the 1950s, AI has made significant strides in manufacturing; sports analytics; autonomous vehicles; and, more recently, primary care and preventive medicine [3].
Primary care and preventive medicine, otherwise expressed as day-to-day healthcare practices including outpatient settings, are a growing sector in the realms of AI and computer science. Although AI has endless applications in healthcare, particular sectors of primary care have been more progressive and accepting of AI and its potential. For instance, the Forward clinic is a primary care service that combines standard doctor-led programs with technology to provide more inclusive, long-term care [3]. The added technology allows for 24/7 monitoring, skin cancer screening, genetic testing, and biometric monitoring. As with all AI interventions, the Forward clinic faces multiple challenges, such as additional physician training and fees. Although the Forward clinic is just one example of how AI can be integrated into primary care, AI’s implementation can be further broken down into sections of healthcare, such as pre-operative care and screening. This review summarizes the short yet productive impact of AI, and specifically ML, on primary care and preventive medicine and aims to inform primary care physicians about the potential integration of ML (Figure 1 and Table 1).

2. Pre-Operative Care

Pre-operative risk prediction and management have been promising areas of AI research and application. PubMed and Google Scholar were searched using keywords for English-language literature published from inception to December 2022 (Figure 2). Studies were included if they reported outcomes regarding the effectiveness of ML models in pre-operative care or similar domains. Studies have utilized AI to predict mortality and postoperative complications. Such applications are valuable for clinical decision-making, planning healthcare resources such as ICU beds, estimating patient costs, and anticipating the possible need for transitions of care [4]. Typically, researchers use a designated portion of electronic health records (EHRs) to train the analytic model and the remainder to test it. For instance, Chiew et al. utilized EHRs to predict post-surgical mortality in a tertiary academic hospital in Singapore [5]. The study compared candidate models, including Random Forest (RF), Adaptive Boosting (ADA), Gradient Boosting (GB), and Support Vector Machine (SVM), and found that GB was the best-performing model (specificity (0.98), sensitivity (0.50), PPV (0.20), F1 score (0.28), and AUROC (0.96)). Five other studies, by Fernandes et al., Jalali et al., the COVIDSurg Collaborative, Sahara et al., and Pfitzner et al., have also evaluated how differing types of analytic models (Logistic Regression (LR), RF, Neural Network (NN), SVM, Extreme GB (XGB), Decision Tree (DT), GB, Deep Neural Network (DNN), GRU, and classification tree) can predict postoperative mortality [6,7,8,9,10]. The patient populations included those undergoing cardiac surgery, pancreatic surgery, or hepatopancreatic surgery or those infected with SARS-CoV-2. In the cardiac surgery studies, the selected ML models were good predictors of mortality and prolonged length of stay. In Fernandes et al., when utilizing pre-operative and intra-operative risk factors alongside intraoperative hypotension, XGB was the best-performing model (AUROC (0.87), PPV (0.10), specificity (0.85), and sensitivity (0.71)) [6]. In the other study, by Jalali et al., a deep neural network (DNN) was the best performing of the five models (accuracy (89%), F-score (0.89), and AUROC (0.95)) [7]. Neither study compared its models with established pre-operative risk scores, such as the Revised Cardiac Risk Index or the Gupta score. Similarly, Pfitzner et al. used pre-, intra-, and short-term post-operative data with a number of models to assess their ability to predict perioperative risk for those undergoing pancreatic surgery [8]. The study found maximum AUPRCs of 0.53 for postoperative complications and 0.51 for postoperative mortality, with LR as the best model. As for those undergoing hepatopancreatic surgery, Sahara et al. found that the classification tree model better predicted 30-day unpredicted deaths than the traditional American College of Surgeons National Surgical Quality Improvement Program surgical risk calculator [9]. Finally, a COVIDSurg Collaborative study that generated 78 AI models found that combining an LR model with four features (ASA grade, RCRI, age, and pre-operative respiratory support) achieved an AUC of 0.80 in the testing dataset. This model was the best performing in predicting postoperative mortality among those infected with SARS-CoV-2 [10]. Ultimately, ML models show great promise for integration into pre-operative care, particularly for simplifying pre-operative evaluations, as observed in Figure 3.
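As a minimal illustration of the model-comparison workflow these studies follow, and not a reproduction of any of their pipelines, the following Python sketch trains a few candidate classifiers on synthetic tabular data standing in for pre-operative EHR features and reports AUROC, sensitivity, specificity, PPV, and F1 on a held-out split; all feature and cohort details are invented for the example.

```python
# Minimal sketch: comparing candidate classifiers on synthetic tabular data
# that stands in for pre-operative EHR features (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, confusion_matrix, f1_score

# Synthetic cohort: 5000 "patients", 20 features, roughly 5% event rate.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "ADA": AdaBoostClassifier(random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    sens = tp / (tp + fn)                                 # sensitivity (recall)
    spec = tn / (tn + fp)                                 # specificity
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")   # positive predictive value
    print(f"{name}: AUROC={roc_auc_score(y_test, prob):.3f} "
          f"sens={sens:.2f} spec={spec:.2f} PPV={ppv:.2f} F1={f1_score(y_test, pred):.2f}")
```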

Post-Operative Complications

Other pre-operative risk prediction objectives include assessing models on postoperative complications [11,12,13]. These studies have evaluated how varying ML models (SVM, LR, RF, GBT, DNN, and XGB) can predict a number of post-operative complications. One study utilized electronic anesthesia records (pre-operative and intra-operative data) to predict deep vein thrombosis (DVT), delirium, pulmonary embolism, acute kidney injury (AKI), and pneumonia [11]. GBT was the most promising model, with AUROC scores of 0.905 (pneumonia), 0.848 (AKI), 0.881 (DVT), 0.831 (pulmonary embolism), and 0.762 (delirium). Similarly, Corey et al. utilized EHR data, including 194 clinical features, to train ML models on 14 postoperative complications [12]. Amongst the models, AUC scores ranged from 0.747 to 0.924, with the Lasso-penalized regression being the best performing (sensitivity (0.775), specificity (0.749), and PPV (0.362)). Comparably, Bonde et al. trained three multi-label DNNs to compete against traditional surgical risk prediction systems on post-operative complications [13]. The mean AUCs for the test dataset on models 1, 2, and 3 were 0.858, 0.863, and 0.874, all of which outperformed the ACS-SRC predictors. Ultimately, ML methods appear to be high-performing for predicting post-operative complications, but additional studies comparing models are required to validate these findings.
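The multi-outcome setup used in studies such as Corey et al. and Bonde et al. can be approximated with a simple one-model-per-complication wrapper; the sketch below uses synthetic data and an L1-penalized (Lasso-style) logistic regression purely to illustrate per-complication AUC reporting, not the architectures of the cited studies, and the complication labels are placeholders.

```python
# Illustrative sketch of multi-label post-operative complication prediction:
# one L1-penalised logistic model per complication, with per-label AUROC.
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

labels = ["pneumonia", "AKI", "DVT", "PE", "delirium"]  # hypothetical label set
X, Y = make_multilabel_classification(n_samples=4000, n_features=30,
                                      n_classes=len(labels), random_state=1)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=1)

# Fit an independent Lasso-style logistic model for each complication.
clf = MultiOutputClassifier(LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
clf.fit(X_tr, Y_tr)

for i, name in enumerate(labels):
    prob = clf.estimators_[i].predict_proba(X_te)[:, 1]
    print(f"{name}: AUROC = {roc_auc_score(Y_te[:, i], prob):.3f}")
```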

3. Screening

The applications of AI in screening are by far the most established. PubMed and Google Scholar were searched from inception to December 2022 for studies investigating the role of ML in screening for several diseases and disorders.

3.1. Hypertension

One of these leading domains is hypertension, where studies have assessed the risk of hypertension and predicted resistant hypertension while concurrently estimating blood pressure (BP). Zhao et al. compared four analytical models (RF, CatBoost, MLP neural network, and LR) in predicting the risk of hypertension based on data from physical examinations [14]. RF was the best-performing model, with an AUC of 0.92, an accuracy of 0.82, a sensitivity of 0.83, and a specificity of 0.81. Notably, no clinical or genetic data were used to train the models. Similarly, Alkaabi et al. utilized ML models (DT, RF, and LR) to assess the risk of developing hypertension more effectively [15]. RF was the best-performing model (accuracy (82.1%), PPV (81.4%), sensitivity (82.1%), and AUC (86.9%)). Clinical factors, such as education level, tobacco use, abdominal obesity, age, gender, history of diabetes, consumption of fruits and vegetables, employment, physical activity, mother’s history of high BP, and history of high cholesterol, were all significant predictors of hypertension. Ye et al. investigated an XGBoost model that had AUC scores of 0.917 (retrospective) and 0.870 (prospective) in predicting hypertension, and LaFreniere et al. investigated an NN model that had 82% accuracy in predicting hypertension given the chosen risk factors [16,17]. Regarding BP, Khalid et al. compared three ML models (regression tree, SVM, and MLR) in estimating BP from pulse waveforms derived from photoplethysmogram (PPG) signals [18]. The regression tree achieved the best systolic and diastolic BP accuracy, −0.1 ± 6.5 mmHg and −0.6 ± 5.2 mmHg, respectively. In summary, ML appears to be an effective tool for predicting hypertension and BP, though its clinical utility remains to be delineated, since hypertension can already be diagnosed through non-invasive procedures.
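For the waveform-based BP estimation described by Khalid et al., the general recipe is to derive features from each pulse and fit a regressor, reporting error as mean bias ± standard deviation. The sketch below follows that recipe on simulated data; the PPG-derived features named in the comments are hypothetical stand-ins, not the features of the cited study.

```python
# Minimal regression sketch: estimating systolic BP from simulated
# pulse-waveform features with a regression tree, reporting mean bias ± SD.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical PPG-derived features: pulse width, rise time, peak-to-peak interval.
features = rng.normal(size=(n, 3))
sbp = 120 + 12 * features[:, 0] - 6 * features[:, 1] + rng.normal(scale=8, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(features, sbp, test_size=0.3, random_state=0)
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)

errors = tree.predict(X_te) - y_te
print(f"Systolic BP error: {errors.mean():+.1f} ± {errors.std():.1f} mmHg")
```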

3.2. Hypercholesterolemia

AI applications in hypercholesterolemia have produced similar findings, as seen in Myers et al. [19]. Using data on diagnostic and procedure codes, prescriptions, and laboratory findings, the FIND FH model was trained on large healthcare databases to diagnose familial hypercholesterolemia (FH). The model achieved a PPV of 0.85, a sensitivity of 0.45, an AUPRC of 0.55, and an AUROC of 0.89. The model effectively identified individuals with FH, who are at high risk of early heart attack and stroke. Comparatively, Pina et al. evaluated the accuracy of three ML models (CT, GBM, and NN) trained on genetic tests to detect FH-causative genetic mutations [20]. All three models outperformed the clinical standard Dutch Lipid score in both cohorts. Similar findings have been produced for hyperlipidemia, where Liu et al. trained an LSTM network on 500 EHR samples [21]. The model achieved an accuracy of 0.94, an AUC of 0.974, a sensitivity of 0.96, and a specificity of 0.92. Regarding low-density lipoprotein (LDL) cholesterol, Tsigalou et al. and Cubukcu et al. concluded that ML models were productive alternatives to direct determination and equations [22,23]. In both studies, ML models (MLR, DNN, ANN, LR, and GB trees) outperformed the traditional Friedewald and Martin–Hopkins formulas. Although the researched algorithms show great potential, additional studies are warranted to validate these conclusions.
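To make the equation-versus-ML comparison concrete, the sketch below contrasts the standard Friedewald estimate (LDL = total cholesterol − HDL − triglycerides/5, in mg/dL, not valid when triglycerides exceed 400 mg/dL) with a gradient-boosted regressor on simulated lipid panels. The "direct" LDL values are synthetic, so the comparison only illustrates the workflow and says nothing about which method is better in practice.

```python
# Sketch contrasting equation-based LDL estimation with an ML regressor
# on simulated lipid panels (mg/dL).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 3000
tc = rng.normal(200, 35, n)     # total cholesterol
hdl = rng.normal(50, 12, n)     # HDL cholesterol
tg = rng.gamma(4, 35, n)        # triglycerides
# Simulated "direct" LDL measurement with a variable TG divisor.
direct_ldl = tc - hdl - tg / (5 + rng.normal(0, 0.8, n))

friedewald = tc - hdl - tg / 5  # Friedewald formula

X = np.column_stack([tc, hdl, tg])
X_tr, X_te, y_tr, y_te, fw_tr, fw_te = train_test_split(
    X, direct_ldl, friedewald, test_size=0.3, random_state=0)
gb = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print(f"Friedewald MAE: {mean_absolute_error(y_te, fw_te):.1f} mg/dL")
print(f"Gradient boosting MAE: {mean_absolute_error(y_te, gb.predict(X_te)):.1f} mg/dL")
```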

3.3. Cardiovascular Disease

Arguably, the largest field of primary care in which AI has made significant strides is predicting and assessing cardiovascular risk. As cardiovascular diseases are the leading cause of death globally, any advancement in risk prediction and early diagnosis is of great value. In 2017, Weng et al. compared four ML models (RF, LR, GB, and NN) in predicting cardiovascular risk from EHRs [24]. The AUC scores of RF, LR, GB, and NN were 0.745, 0.760, 0.761, and 0.764, respectively. The study concluded that applying ML to cardiovascular risk prediction significantly improved accuracy. Zhao et al. conducted a similar study with LR, RF, GBT, CNN, and LSTM models trained on longitudinal EHR and genetic data [25]. Event prediction was far better when longitudinal features were used for 10-year CVD prediction. Kusunose et al. applied a CNN to identify those at risk of heart failure (HF) from a cohort of pulmonary hypertension (PH) patients using chest x-rays [26]. The AUC scores of the AI model, chest x-rays, and human observers were 0.71, 0.60, and 0.63, respectively. Taking a unique perspective, Madani et al. employed generative adversarial networks (GANs) for data augmentation of chest x-rays and assessed the accuracy of a CNN trained on the augmented data in detecting cardiovascular abnormalities [27]. GAN-based augmentation outperformed traditional and no-augmentation scenarios on normal and abnormal chest x-ray images, with accuracies of 0.8419, 0.8312, and 0.8193, respectively. Studies have also compared ML models with traditional risk scores, such as a study by Ambale-Venkatesh et al. [28]. A random survival forest model was assessed on its prediction of six cardiovascular outcomes using the concordance index and Brier score. The model outperformed traditional risk scores (decreasing the Brier score by 10–25%), and age was the most significant predictor. Similarly, Alaa et al. compared an AutoPrognosis ML model with an established risk score (the Framingham score), a Cox PH model with conventional risk factors, and a Cox PH model with all 473 UK Biobank variables [29]. The AUROC scores were 0.774, 0.724, 0.734, and 0.758, respectively. Pfohl et al. developed a “fair” atherosclerotic cardiovascular disease (ASCVD) risk prediction tool using EHR data [30]. The study ran four experiments (standard, EQrace, EQgender, and EQage) and achieved AUROC scores of 0.773, 0.742, 0.743, and 0.694, respectively. The tool reduced discrepancies across races, genders, and ages in the prediction of ASCVD. Generally, AI can aid in mitigating gaps in ASCVD risk prevention guidelines, as observed in Figure 4. In the United States alone, one in every three patients undergoing elective cardiac catheterization is diagnosed with obstructive coronary artery disease (CAD), which raises the need for new methodologies to better identify this population. Al’Aref et al. assessed how an XGBoost model could predict obstructive CAD on coronary computed tomography angiography using clinical variables [31]. The ML model achieved an AUC of 0.773, but more notably, when combined with the coronary artery calcium score (CACS), the AUC rose to 0.881. Therefore, an ML model combined with CACS may accurately predict the presence of obstructive CAD. Based on the present literature, AI models screen effectively for and predict cardiovascular risk while predominantly outperforming established risk scores.
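The incremental value of adding CACS to a clinical-variable model, as in Al’Aref et al., can be illustrated with a simple before/after AUC comparison; the sketch below uses a generic gradient boosting classifier and synthetic data in which the effect is built in, so the numbers say nothing about real cohorts or the CONFIRM registry model.

```python
# Illustrative sketch: compare AUC of a model trained on clinical variables
# alone versus clinical variables plus a coronary artery calcium score (CACS).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 6000
clinical = rng.normal(size=(n, 8))               # stand-ins for age, sex, risk factors
cacs = rng.gamma(1.5, 80, n)                     # synthetic calcium scores
risk = 0.6 * clinical[:, 0] + 0.004 * cacs + rng.normal(scale=1.0, size=n)
y = (risk > np.quantile(risk, 0.7)).astype(int)  # synthetic "obstructive CAD" label

X_clin = clinical
X_full = np.column_stack([clinical, cacs])
idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

for name, X in [("clinical only", X_clin), ("clinical + CACS", X_full)]:
    model = GradientBoostingClassifier(random_state=0).fit(X[idx_tr], y[idx_tr])
    auc = roc_auc_score(y[idx_te], model.predict_proba(X[idx_te])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```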

3.4. Eye Disorders and Diseases

Another area of primary care that has used ML is vision-centric disease, including diabetic retinopathy, glaucoma, and age-related macular degeneration (AMD). Ting et al. assessed AI in this sector by training a DL system on retinal images (76,370 images for diabetic retinopathy, 125,189 images for possible glaucoma, and 72,610 images for AMD) [32]. For referable diabetic retinopathy, the model achieved an AUC of 0.936, a sensitivity of 0.905, and a specificity of 0.916. For vision-threatening retinopathy, the AUC was 0.958, sensitivity was 1.00, and specificity was 0.911. For possible glaucoma, the model achieved an AUC of 0.942, a sensitivity of 0.964, and a specificity of 0.872. Finally, on the AMD testing images, the model achieved an AUC of 0.931, a sensitivity of 0.923, and a specificity of 0.887. AI models can also use retinal fundus images to extract further information, such as predicting cardiovascular risk factors, as in the study by Poplin et al. [33]. After training the model on data from 284,445 patients and validating it on two datasets, the model could predict age (mean absolute error (MAE) within 3.26 years), gender (AUC 0.97), smoking status (AUC 0.71), systolic blood pressure (MAE within 11.23 mmHg), and major adverse cardiac events (AUC 0.70). In another study, Kim et al. utilized retinal fundus images to train a CNN model to predict age and sex [34]. The MAEs for age prediction in all patients, those with hypertension, those with diabetes mellitus (DM), and smokers were 3.06, 3.46, 3.55, and 2.65 years, respectively. Ultimately, well-trained ML models appear to be effective in detecting eye diseases.
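A scaled-down example of the image-based DL approach used in these studies is sketched below with a tiny Keras CNN trained on random arrays standing in for preprocessed fundus images; the architecture, image size, and binary label are illustrative assumptions and are far smaller than the published systems.

```python
# Tiny Keras CNN sketch for binary image classification (e.g., referable vs
# non-referable retinopathy). Data and architecture are illustrative stand-ins.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Random arrays stand in for preprocessed fundus images and labels.
images = np.random.rand(64, 128, 128, 3).astype("float32")
labels = np.random.randint(0, 2, size=(64,))
model.fit(images, labels, epochs=1, batch_size=16, verbose=0)
print(model.evaluate(images, labels, verbose=0))  # [loss, auc]
```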

3.5. Diabetes

More than 400 million individuals globally are diagnosed with DM. AI’s implementation into primary care has been shown to be effective in predicting the risk of this widespread disease. In one study, Alghamdi et al. used medical records of cardiorespiratory fitness to train and compare five models (DT, naïve Bayes, LR, logistic model tree, and RF) in predicting DM. When RF, the logistic model tree, and naïve Bayes were ensembled with the developed predictive model classifier, a maximum AUC of 0.92 was achieved. Similarly, using administrative data, Ravaut et al. trained a GB decision tree on 1,657,395 patients to predict type 2 diabetes mellitus (T2DM) up to 5 years before onset [35]. When validated on 243,442 patients and tested on 236,506 patients, the model achieved an AUC of 0.8026. In another study, Ravaut et al. also assessed whether a GB decision tree could predict adverse complications of diabetes, including retinopathy, tissue infection, hyper/hypoglycemia, amputation, and cardiovascular events [36]. After being trained (1,029,366 patients), validated (272,864 patients), and tested (265,406 patients) on administrative data, the model achieved an AUC of 0.777. Supporting these conclusions, Deberneh et al. found reasonably good accuracies in a Korean population, with DT achieving 77.87%, LR 76.13%, and ANN the lowest accuracy (73.23%) [37]. In Alhassan et al., when predicting T2DM, the LSTM and gated recurrent unit (GRU) models outperformed MLP models, with an accuracy of 97.3% [38]. In India, Boutilier et al. attempted to find the best ML algorithm for predicting DM and hypertension in limited-resource settings [39]. RF models had a higher prediction accuracy than established UK and US scores, improving the AUC from 0.671 to 0.910 for diabetes and from 0.698 to 0.792 for hypertension. With the current evidence, ML methods appear to be exceptionally effective in predicting diabetes; however, the literature lacks discussion of the benefits of using ML over a simple blood draw.
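The train/validate/test pattern reported by Ravaut et al. can be mimicked on synthetic administrative-style data as sketched below: the validation split is used to choose a model configuration and the untouched test split provides the reported AUC. Cohort sizes, features, and the small hyperparameter grid are placeholders, not the published setup.

```python
# Sketch of a train/validate/test workflow for gradient boosting on
# synthetic, imbalanced administrative-style data (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=20000, n_features=25, weights=[0.9], random_state=3)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, stratify=y, random_state=3)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=3)

best_auc, best_model = -1.0, None
for lr in (0.05, 0.1, 0.2):  # simple hyperparameter search scored on the validation split
    model = HistGradientBoostingClassifier(learning_rate=lr, random_state=3).fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    if auc > best_auc:
        best_auc, best_model = auc, model

test_auc = roc_auc_score(y_test, best_model.predict_proba(X_test)[:, 1])
print(f"validation AUC = {best_auc:.3f}, test AUC = {test_auc:.3f}")
```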

3.6. Cancer

In 2020, cancer was responsible for nearly 10 million deaths globally, making it a hotspot for ML implementations and strategies in primary care [40]. Fortunately, ML models have shown potential in the early diagnosis and screening of lung, cervical, colorectal, breast, and prostate cancer [41]. Regarding lung cancer, Ardila et al. trained a DL algorithm on CT images to predict the risk of lung cancer in 6716 national trial cases [42]. The model achieved an AUC of 0.944. Similarly, Gould et al. compared an ML model for predicting a future lung cancer diagnosis with the 2012 Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial risk model (mPLCOm2012) [43]. The novel algorithm outperformed the mPLCOm2012 in AUC (0.86 vs. 0.79) and sensitivity (0.401 vs. 0.279). Using NNs, Yeh et al. developed a model to screen patients at risk of lung cancer using EHR data [44]. The algorithm achieved an AUC of 0.90 for the overall population and 0.87 for patients over the age of 55 years. Guo et al. trained ML models to predict underutilization of low-dose CT screening and found an accuracy of 0.6778, an F1 score of 0.6575, a sensitivity of 0.6252, and a specificity of 0.7357 [45]. More notably, the influential predictors included BMI, DM, age at first smoking, average drinks per month, years of smoking, years since quitting smoking, sex, last dental visit, general health, insurance, education, last Pap test, and last sigmoidoscopy or colonoscopy. Concerning cervical cancer, CervDetect, a set of ML models that evaluates the risk of malignant cervical formation, has been a leader in this area. In 2021, Mehmood et al. used cervical images to evaluate CervDetect and reported a false negative rate of 100%, a false positive rate of 6.4%, an MSE of 0.07111, and an accuracy of 0.936 [46]. Similarly, DeepCervix is a DL model that attempts to combat the high false-positive rate of Pap smear tests due to human error. Rahaman et al. trained DeepCervix, a hybrid deep feature fusion technique, on Pap smear tests [47]. The DL-based model achieved accuracies of 0.9985, 0.9938, and 0.9914 for 2-class, 3-class, and 5-class classification, respectively. Considering that 90% of cervical cancer occurs in low- and middle-income settings, Bae et al. set out to implement an ML model on cervical images acquired by visual inspection after the application of acetic acid [48]. In this resource-limited setting, the KNN model was the best performing, with an accuracy of 0.783, an AUC of 0.807, a specificity of 0.803, and a sensitivity of 0.75. In parallel, Wentzensen et al. developed a DL classifier on a cloud-based whole-slide imaging platform and trained it on p16/Ki-67 dual-stained (DS) slides for cervical cancer screening [48]. The model achieved better specificity than, and equal sensitivity to, manual DS and Pap cytology, resulting in lower test positivity than manual DS and cytology. With respect to breast cancer screening, multiple studies have been conducted to achieve better diagnostic accuracy. Using screening mammograms, Shen et al. trained a DL algorithm on 1903 images and achieved an AUC of 0.88, and four-model averaging improved the AUC to 0.91 [49]. Using digital breast tomosynthesis images, Buda et al. achieved a sensitivity of 65% with a DL model for breast cancer screening [50]. Similarly, Haji Maghsoudi et al. developed Deep-LIBRA, an AI model trained on 15,661 digital mammograms to estimate breast density, and achieved an AUC of 0.612 [51]. The model had strong agreement with the current gold standard.
Another study, by Ming et al., compared three ML models (MCMC GLMM, ADA, and RF) with the established BOADICEA model by training them on biennial mammograms [52]. When screening for lifetime risk of breast cancer, all models (0.843 ≤ AUROC ≤ 0.889) outperformed BOADICEA (AUROC = 0.639). Similar findings have been reported for prostate cancer, where three studies (Perera et al., Chiu et al., and Beinecke et al.) compared numerous ML models (DNN, XGBoost, LightGBM, CatBoost, SVM, LR, RF, and multilayer perceptron) [53,54,55]. Although the studies trained their respective models on different data (PSA levels, prostate biopsy, or EHRs), all concluded that the ML algorithms were efficacious in predicting prostate cancer. Ultimately, there appears to be a substantial body of literature supporting the effectiveness of ML methods in predicting different types of cancer.
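The "four-model averaging" mentioned for Shen et al. is an instance of simple probability averaging across independently trained models; the sketch below shows the mechanics on synthetic tabular data with three generic classifiers, without implying the gains reported in the cited imaging work.

```python
# Minimal sketch of probability averaging across independently trained models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=30, flip_y=0.05, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

members = [
    RandomForestClassifier(n_estimators=150, random_state=5),
    GradientBoostingClassifier(random_state=5),
    LogisticRegression(max_iter=1000),
]

probs = []
for m in members:
    m.fit(X_tr, y_tr)
    p = m.predict_proba(X_te)[:, 1]
    probs.append(p)
    print(f"{type(m).__name__}: AUC = {roc_auc_score(y_te, p):.3f}")

ensemble = np.mean(probs, axis=0)  # unweighted average of predicted probabilities
print(f"Averaged ensemble: AUC = {roc_auc_score(y_te, ensemble):.3f}")
```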

3.7. Human Immunodeficiency Virus and Sexually Transmitted Diseases

Another sector of primary care requiring additional tools to assist in diagnosis and screening covers human immunodeficiency virus (HIV) and sexually transmitted diseases (STDs). In 2021, Turbe et al. trained a DL model on images of field-based rapid diagnostic tests to classify HIV test results in rural South Africa [56]. Relative to traditional reports of accuracy varying between 80 and 97%, this model achieved an accuracy of 98.9%. Similarly, Bao et al. compared five ML models for predicting HIV and STIs [57]. GBM was the best performing, with AUROC scores of 0.763, 0.858, 0.755, and 0.68 for HIV, syphilis, gonorrhea, and chlamydia, respectively. In another study, Marcus et al. developed and assessed an HIV prediction model to identify potential pre-exposure prophylaxis (PrEP) candidates [58]. Using EHR data to train the model, the study reported an AUC of 0.84. In terms of future prediction, Elder et al. compared six ML algorithms for identifying patients at risk of additional STIs within the next 24–48 months using previous EHR data [59]. Bayesian additive regression trees was the best-performing model, with an AUROC of 0.75 and a sensitivity of 0.915. A number of studies have also reported plausible applications of AI to urinary tract infections (UTIs). Gadalla et al. assessed how AI models can identify predictors of a UTI diagnosis through training on potential urine biomarkers and clinical data [60]. The study concluded that clinical information was the strongest predictor, with an AUC of 0.72, a PPV of 0.65, an NPV of 0.79, and an F1 score of 0.69. Comparatively, in Taylor et al., vitals, laboratory results, medications, chief complaints, physical exam findings, and demographics were all utilized to train, validate, and test a number of ML algorithms to predict UTIs in emergency department patients [61]. The AUC scores ranged from 0.826 to 0.904, with XGBoost being the best-performing algorithm. Therefore, the benefits of using ML models to predict and screen for HIV and STDs are evident.

3.8. Obstructive Sleep Apnea Syndrome

A number of studies have reported the use of ML for detecting obstructive sleep apnea syndrome (OSAS). Findings have generally been positive, as in the study by Tsai et al. [62], in which LR, k-nearest neighbor, CNN, naïve Bayes, RF, and SVM were compared for screening moderate-to-severe OSAS after being trained on demographic and questionnaire data. The study found that BMI was the most influential parameter and that RF achieved the highest accuracy in screening for both types. In another study, Alvarez et al. trained and tested a regression SVM for at-home sleep apnea screening and found that the dual-channel approach, combining oximetry and airflow, performed better than either signal alone [63]. Mencar et al. likewise used demographic and questionnaire data to predict OSAS severity [64]. SVM and RF were the best classifiers, with the highest average classification accuracy being 44.7%. This illustrates the variability among studies attempting to draw conclusions about AI for OSAS. Overall, there is a lack of literature from which to draw a comprehensive conclusion regarding the use of ML for OSAS.

3.9. Osteoporosis

Regarding osteoporosis and related diseases, four studies have compared a number of AI models (XGBoost, LR, multilayer perceptron, SVM, RF, ANN, extreme GB, stacking with five classifiers, and SVM with a radial basis function kernel) [65,66,67,68]. Models were trained on EHRs, CT and clinical data, or abdomen-pelvic CT. All studies concluded that ML methods were valid and presented great potential for osteoporosis screening. An additional study trained ML models (RF, GB, NN, and LR) on genomic data for fracture prediction [69]. The study found that GB was the best-performing model, with an AUC of 0.71 and an accuracy of 0.88. Ultimately, more studies are required to confirm the effectiveness of ML for predicting osteoporosis.

3.10. Chronic Conditions

Chronic obstructive pulmonary disease (COPD) is characterized by permanent lung damage and airway obstruction. To enhance quality of life and lower mortality rates, COPD must be diagnosed and treated early. The early identification, diagnosis, and prognosis of COPD can be aided by ML methods [70]. The likelihood of hospitalization, mortality, and COPD exacerbation has been predicted using ML algorithms. These algorithms create predictive models using a variety of data sources, including patient demographics, clinical symptoms, and imaging data. For instance, Zeng et al. developed an ML algorithm trained on 278 candidate features [71]. The model achieved an AUROC of 0.866. Another chronic condition, chronic kidney disease (CKD), is characterized by a progressive decline in kidney function over time. Kidney failure can be prevented, and patient outcomes enhanced, by the early detection and care of CKD. The early detection, diagnosis, and management of CKD can be helped by ML algorithms. For instance, Nishat et al. developed an ML system to predict the probability of CKD. Eight supervised algorithms were developed, and RF was the best-performing model, reporting an accuracy of 99.75% [72]. At the final stage of CKD, known as end-stage kidney disease (ESKD), patients require dialysis or a kidney transplant. The early detection, diagnosis, and management of ESKD can be facilitated by ML algorithms, which have been used to forecast mortality and the risk of ESKD in CKD patients. These algorithms create predictive models using a variety of data sources, including medical records, test results, and demographic information. For instance, Bai et al. trained five ML models on a longitudinal CKD cohort to predict ESKD [73]. LR, naïve Bayes, and RF achieved similar predictive performance and sensitivity and outperformed the Kidney Failure Risk Equation. Since chronic conditions are a critical aspect of primary care, more studies involving a variety of ML models are needed to confirm ML’s potential.

3.11. Detecting COVID-19 and Influenza

ML has shown great promise in detecting and differentiating between common conditions, enabling more effective recommendations and guidelines (Figure 5). In particular, detection research has surged with the rise of COVID-19 [74]. Zhou et al. developed an XGBoost algorithm to distinguish between influenza and COVID-19 when laboratory pathogen results are unavailable [75]. The model used EHR data and achieved AUC scores of 0.94, 0.93, and 0.85 in the training, testing, and external validation datasets, respectively. Similarly, in Zan et al., a DL model titled DeepFlu was used to predict individuals at risk of symptomatic flu based on gene expression data from influenza A virus (IAV) infections of the H1N1 or H3N2 subtypes [76]. DeepFlu achieved an accuracy of 0.70 and an AUROC of 0.787. In another study, Nadda et al. combined an LSTM with an NN model to interpret patients’ symptoms for disease detection [77]. For dengue and cold patients, the combined model achieved AUCs of 0.829 and 0.776 for flu, dengue, and cold, and 0.662 for flu and cold. For influenza, Hogan et al. and Choo et al. trained multiple ML models on nasopharyngeal swab samples and mHealth app data, respectively, for influenza diagnosis and screening [78,79]. Both studies concluded that ML methods are capable of being utilized for infectious disease testing. Similar findings were presented for chronic cough in Luo et al., where a DL model, BERT, could accurately detect chronic cough from diagnosis and medication data [79]. Additionally, in Yoo et al., severe pharyngitis could be detected by training smartphone-based DL algorithms on self-taken throat images (AUROC 0.988) [80]. In summary, ML appears to be effective in screening for and distinguishing between COVID-19, influenza, and related illnesses.

3.12. Detecting Atrial Fibrillation

Another major focus of AI-based detection is atrial fibrillation (AF). Several studies have evaluated unique ways to detect AF with ML models [80,81,82,83,84]. Using wearable devices, numerous algorithms (SVM, DNN, CNN, ENN, naïve Bayes, LR, RF, GB, and a W-PPG algorithm combined with a W-ECG algorithm) have been trained on primary care data, RR intervals, W-PPG and W-ECG signals, electrocardiogram and pulse oximetry data, or waveform data. All studies concluded that ML is capable of detecting AF through wearable devices and a number of different data sources. However, more studies are required to confirm these findings.
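A common feature-based route to AF detection from wearables is to quantify RR-interval irregularity and feed those features to a classifier; the sketch below simulates RR series, computes standard irregularity measures (RMSSD, pNN50, and the coefficient of variation), and fits a logistic regression. The simulation and feature choice are illustrative assumptions, not any of the cited algorithms.

```python
# Hedged sketch: RR-interval irregularity features from simulated wearable
# recordings, classified as AF vs sinus rhythm with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)

def simulate_rr(is_af, n_beats=120):
    """Return an RR-interval series in milliseconds (AF simulated as more irregular)."""
    base = rng.normal(800, 40)
    jitter = 120 if is_af else 25
    return base + rng.normal(0, jitter, n_beats)

def rr_features(rr):
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))   # root mean square of successive differences
    pnn50 = np.mean(np.abs(diffs) > 50)    # fraction of successive differences > 50 ms
    cov = np.std(rr) / np.mean(rr)         # coefficient of variation
    return [rmssd, pnn50, cov]

labels = rng.integers(0, 2, 600)
X = np.array([rr_features(simulate_rr(bool(l))) for l in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=11)

clf = LogisticRegression().fit(X_tr, y_tr)
print(f"AF detection AUC on simulated data: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```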

4. Limitations

While AI’s applications have been relatively positive, several limitations have set back its implementation. For one, the introduction of AI into healthcare practices raises a number of concerns, such as a lack of trust, ethical issues, and the absence of accountability [85]. Certain human traits, such as empathy, comfort, and trust, are essential to the doctor–patient relationship, and the use of AI puts these components at issue. In addition, physicians and healthcare workers are traditionally held accountable for their practice [86]. There is no legal framework governing ML models, and there is no defined ownership to take responsibility when an AI algorithm is at fault. This drawback raises several legal and ethical concerns yet to be answered. Because ML applications across primary care are still novel, additional clinical trials are required to support their potential advantages. Table 2 presents ongoing or completed clinical trials registered on ClinicalTrials.gov, identified using the keywords “Artificial Intelligence” and “Primary Care”, that investigate the role of AI in primary care. In addition, findings remain mixed regarding the potential benefits of ML-based prediction models. For instance, in one systematic review of 71 studies, there was no evidence that ML models performed better than LR [87]. An additional drawback is that the implementation of ML is costly and would require additional education for incoming medical practitioners [88]. Regarding AI research, many studies suffer from a number of drawbacks that limit the quality of their results. These include small samples, retrospective data, the inability to separate pre-operative and intra-operative data, missing data, the absence of external validation, data from a single institution, and several biases.

5. Implementing AI in Primary Care

Choosing the correct ML model for a primary care task depends on several factors, including the nature of the task, the available data, and the desired outcome (Table 3). First, the problem and the necessary data must be defined to select the appropriate model [89]. Subsequently, a suitable AI technique, such as supervised, unsupervised, or reinforcement learning, must be chosen. Following the selection of the model, its performance must be evaluated using validation data and fine-tuned as needed [89]. Several factors must be considered to evaluate the benefits and risks of implementing a specific AI model in a primary care routine. Accuracy and reliability must be assessed by testing the ML model’s performance on validation data [89]. Clinical relevance must be determined by evaluating whether the model is based on relevant risk factors and whether its predictions are helpful for clinical decision-making. Potential benefits, such as improving patient outcomes, reducing medical errors, increasing efficiency and productivity, and enhancing the quality of care, must also be assessed. The ethical implications of using the AI model in primary care, such as the responsibility of healthcare providers to explain how the model works and how decisions are made, and potential issues related to patient autonomy and informed consent, must be considered. Finally, the cost-effectiveness of implementing the AI model must be evaluated, considering the costs of development, implementation, maintenance, and training, as well as potential cost savings and benefits [90]. Looking ahead, we can anticipate a number of ML technologies, such as sophisticated chatbots and virtual assistants, decision support tools, predictive analytics, wearable technology, and population health management, becoming commonplace in primary care within the next two years. These tools could aid primary care providers in making better judgements, delivering more individualized care, and spotting high-risk patients or those needing more intensive interventions. However, regulatory approval, patient and healthcare provider acceptance, and integration into current clinical workflows will all be necessary before ML can be deployed. Despite these obstacles, there will likely be major advancements in integrating AI into primary care in the upcoming years, given the rate of technological advancement and the growing desire for more individualized and effective healthcare.
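As a concrete instance of the evaluation step described above, the sketch below estimates discrimination with cross-validation and inspects calibration on held-out data before any deployment decision; the data, model, and bin settings are placeholders rather than a recommended configuration.

```python
# Sketch of a pre-deployment evaluation: cross-validated AUC plus a
# calibration check on held-out synthetic data (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import calibration_curve

X, y = make_classification(n_samples=8000, n_features=15, weights=[0.85], random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=2)

model = LogisticRegression(max_iter=1000)
cv_auc = cross_val_score(model, X_tr, y_tr, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {cv_auc.mean():.3f} ± {cv_auc.std():.3f}")

model.fit(X_tr, y_tr)
frac_pos, mean_pred = calibration_curve(y_te, model.predict_proba(X_te)[:, 1], n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")  # well calibrated if these track closely
```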

6. Conclusions

AI in primary care and preventive medicine is a relatively new field of study that has opened up endless possibilities. Its applications are widespread, as seen through a number of studies spanning all facets of primary care. Although there is some variability in the findings of studies in specific fields, the general development and implementation of ML algorithms have been successful and constructive, and the models are usually more effective than previously established models or scores. Future research should focus on tackling the aforementioned limitations and furthering the research on promising sectors of primary care.

Author Contributions

Conceptualization, C.K.; methodology, C.K.; software, H.U.H.V.; validation, C.K., Z.W. and H.U.H.V.; investigation, A.H.E.-S. and C.K.; resources, C.K.; data curation, A.H.E.-S.; writing—original draft preparation, A.H.E.-S.; writing—review and editing, A.H.E.-S., H.U.H.V., Z.W., C.K. and B.S.G.; visualization, H.U.H.V.; supervision, C.K.; project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Collins, C.; Dennehy, D.; Conboy, K.; Mikalef, P. Artificial intelligence in information systems research: A systematic literature review and research agenda. Int. J. Inf. Manag. 2021, 60, 102383. [Google Scholar] [CrossRef]
  2. Kersting, K. Machine Learning and Artificial Intelligence: Two Fellow Travelers on the Quest for Intelligent Behavior in Machines. Front. Big Data 2018, 1, 6. [Google Scholar] [CrossRef] [PubMed]
  3. Ghuwalewala, S.; Kulkarni, V.; Pant, R.; Kharat, A. Levels of Autonomous Radiology. Interact. J. Med. Res. 2022, 11, e38655. [Google Scholar] [CrossRef]
  4. Bignami, E.G.; Cozzani, F.; Del Rio, P.; Bellini, V. Artificial intelligence and perioperative medicine. Minerva Anestesiol. 2020, 87, 755–756. [Google Scholar] [CrossRef] [PubMed]
  5. Chiew, C.J.; Liu, N.; Wong, T.H.; Sim, Y.E.; Abdullah, H.R. Utilizing Machine Learning Methods for Preoperative Prediction of Postsurgical Mortality and Intensive Care Unit Admission. Ann. Surg. 2020, 272, 1133–1139. [Google Scholar] [CrossRef]
  6. Fernandes, M.P.B.; de la Hoz, M.A.; Rangasamy, V.; Subramaniam, B. Machine Learning Models with Preoperative Risk Factors and Intraoperative Hypotension Parameters Predict Mortality After Cardiac Surgery. J. Cardiothorac. Vasc. Anesth. 2021, 35, 857–865. [Google Scholar] [CrossRef] [PubMed]
  7. Jalali, A.; Lonsdale, H.; Do, N.; Peck, J.; Gupta, M.; Kutty, S.; Ghazarian, S.R.; Jacobs, J.P.; Rehman, M.; Ahumada, L.M. Deep Learning for Improved Risk Prediction in Surgical Outcomes. Sci. Rep. 2020, 10, 9289. [Google Scholar] [CrossRef]
  8. Pfitzner, B.; Chromik, J.; Brabender, R.; Fischer, E.; Kromer, A.; Winter, A.; Moosburner, S.; Sauer, I.M.; Malinka, T.; Pratschke, J.; et al. Perioperative Risk Assessment in Pancreatic Surgery Using Machine Learning. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Scotland, UK, 1–5 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 2211–2214. [Google Scholar] [CrossRef]
  9. Sahara, K.; Paredes, A.Z.; Tsilimigras, D.I.; Sasaki, K.; Moro, A.; Hyer, J.M.; Mehta, R.; Farooq, S.A.; Wu, L.; Endo, I.; et al. Machine learning predicts unpredicted deaths with high accuracy following hepatopancreatic surgery. HepatoBiliary Surg. Nutr. 2021, 10, 20–30. [Google Scholar] [CrossRef]
  10. COVIDSurg Collaborative; Dajti, I.; Valenzuela, J.I.; Boccalatte, L.A.; Gemelli, N.A.; Smith, D.E.; Dudi-Venkata, N.N.; Kroon, H.M.; Sammour, T.; Roberts, M.; et al. Machine learning risk prediction of mortality for patients undergoing surgery with perioperative SARS-CoV-2: The COVIDSurg mortality score. Br. J. Surg. 2021, 108, 1274–1292. [Google Scholar] [CrossRef]
  11. Xue, B.; Li, D.; Lu, C.; King, C.R.; Wildes, T.; Avidan, M.S.; Kannampallil, T.; Abraham, J. Use of Machine Learning to Develop and Evaluate Models Using Preoperative and Intraoperative Data to Identify Risks of Postoperative Complications. JAMA Netw. Open 2021, 4, e212240. [Google Scholar] [CrossRef]
  12. Corey, K.M.; Kashyap, S.; Lorenzi, E.; Lagoo-Deenadayalan, S.A.; Heller, K.; Whalen, K.; Balu, S.; Heflin, M.T.; McDonald, S.R.; Swaminathan, M.; et al. Development and validation of machine learning models to identify high-risk surgical patients using automatically curated electronic health record data (Pythia): A retrospective, single-site study. PLOS Med. 2018, 15, e1002701. [Google Scholar] [CrossRef]
  13. Bonde, A.; Varadarajan, K.M.; Bonde, N.; Troelsen, A.; Muratoglu, O.K.; Malchau, H.; Yang, A.D.; Alam, H.; Sillesen, M. Assessing the utility of deep neural networks in predicting postoperative surgical complications: A retrospective study. Lancet Digit. Health 2021, 3, e471–e485. [Google Scholar] [CrossRef]
  14. Zhao, H.; Zhang, X.; Xu, Y.; Gao, L.; Ma, Z.; Sun, Y.; Wang, W. Predicting the Risk of Hypertension Based on Several Easy-to-Collect Risk Factors: A Machine Learning Method. Front. Public Health 2021, 9, 619429. [Google Scholar] [CrossRef]
  15. Alkaabi, L.A.; Ahmed, L.S.; Al Attiyah, M.F.; Abdel-Rahman, M.E. Predicting hypertension using machine learning: Findings from Qatar Biobank Study. PLoS ONE 2020, 15, e0240370. [Google Scholar] [CrossRef]
  16. Ye, C.; Fu, T.; Hao, S.; Zhang, Y.; Wang, O.; Jin, B.; Xia, M.; Liu, M.; Zhou, X.; Wu, Q.; et al. Prediction of Incident Hypertension Within the Next Year: Prospective Study Using Statewide Electronic Health Records and Machine Learning. J. Med. Internet Res. 2018, 20, e22. [Google Scholar] [CrossRef]
  17. LaFreniere, D.; Zulkernine, F.; Barber, D.; Martin, K. Using machine learning to predict hypertension from a clinical dataset. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–7. [Google Scholar] [CrossRef]
  18. Khalid, S.G.; Zhang, J.; Chen, F.; Zheng, D. Blood Pressure Estimation Using Photoplethysmography Only: Comparison between Different Machine Learning Approaches. J. Healthc. Eng. 2018, 2018, 1548647. [Google Scholar] [CrossRef]
  19. Myers, K.D.; Knowles, J.W.; Staszak, D.; Shapiro, M.D.; Howard, W.; Yadava, M.; Zuzick, D.; Williamson, L.; Shah, N.H.; Banda, J.; et al. Precision screening for familial hypercholesterolaemia: A machine learning study applied to electronic health encounter data. Lancet Digit. Health 2019, 1, e393–e402. [Google Scholar] [CrossRef]
  20. Pina, A.; Helgadottir, S.; Mancina, R.M.; Pavanello, C.; Pirazzi, C.; Montalcini, T.; Henriques, R.; Calabresi, L.; Wiklund, O.; Macedo, M.P.; et al. Virtual genetic diagnosis for familial hypercholesterolemia powered by machine learning. Eur. J. Prev. Cardiol. 2020, 27, 1639–1646. [Google Scholar] [CrossRef]
  21. Liu, Y.; Zhang, Q.; Zhao, G.; Liu, G.; Liu, Z. Deep Learning-Based Method of Diagnosing Hyperlipidemia and Providing Diagnostic Markers Automatically. Diabetes Metab. Syndr. Obes. Targets Ther. 2020, 13, 679–691. [Google Scholar] [CrossRef]
  22. Tsigalou, C.; Panopoulou, M.; Papadopoulos, C.; Karvelas, A.; Tsairidis, D.; Anagnostopoulos, K. Estimation of low-density lipoprotein cholesterol by machine learning methods. Clin. Chim. Acta 2021, 517, 108–116. [Google Scholar] [CrossRef]
  23. Çubukçu, H.C.; Topcu, D.I. Estimation of Low-Density Lipoprotein Cholesterol Concentration Using Machine Learning. Lab. Med. 2022, 53, 161–171. [Google Scholar] [CrossRef] [PubMed]
  24. Weng, S.F.; Reps, J.; Kai, J.; Garibaldi, J.M.; Qureshi, N. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS ONE 2017, 12, e0174944. [Google Scholar] [CrossRef] [PubMed]
  25. Zhao, J.; Feng, Q.; Wu, P.; Lupu, R.A.; Wilke, R.A.; Wells, Q.S.; Denny, J.C.; Wei, W.-Q. Learning from Longitudinal Data in Electronic Health Record and Genetic Data to Improve Cardiovascular Event Prediction. Sci. Rep. 2019, 9, 717. [Google Scholar] [CrossRef] [PubMed]
  26. Kusunose, K.; Hirata, Y.; Tsuji, T.; Kotoku, J.; Sata, M. Deep learning to predict elevated pulmonary artery pressure in patients with suspected pulmonary hypertension using standard chest X-ray. Sci. Rep. 2020, 10, 19311. [Google Scholar] [CrossRef]
  27. Madani, A.; Moradi, M.; Karargyris, A.; Syeda-Mahmood, T. Chest X-ray generation and data augmentation for cardiovascular abnormality classification. In Medical Imaging 2018: Image Processing; Angelini, E.D., Landman, B.A., Eds.; SPIE: Houston, TX, USA, 2018; p. 57. [Google Scholar] [CrossRef]
  28. Ambale-Venkatesh, B.; Yang, X.; Wu, C.O.; Liu, K.; Hundley, W.G.; McClelland, R.; Gomes, A.S.; Folsom, A.R.; Shea, S.; Guallar, E.; et al. Cardiovascular Event Prediction by Machine Learning: The Multi-Ethnic Study of Atherosclerosis. Circ. Res. 2017, 121, 1092–1101. [Google Scholar] [CrossRef]
  29. Alaa, A.M.; Bolton, T.; Di Angelantonio, E.; Rudd, J.H.F.; van der Schaar, M. Cardiovascular disease risk prediction using automated machine learning: A prospective study of 423,604 UK Biobank participants. PLoS ONE 2019, 14, e0213653. [Google Scholar] [CrossRef]
  30. Pfohl, S.; Marafino, B.; Coulet, A.; Rodriguez, F.; Palaniappan, L.; Shah, N.H. Creating Fair Models of Atherosclerotic Cardiovascular Disease Risk. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019. [Google Scholar] [CrossRef]
  31. Al’aref, S.J.; Maliakal, G.; Singh, G.; van Rosendael, A.R.; Ma, X.; Xu, Z.; Alawamlh, O.A.H.; Lee, B.; Pandey, M.; Achenbach, S.; et al. Machine learning of clinical variables and coronary artery calcium scoring for the prediction of obstructive coronary artery disease on coronary computed tomography angiography: Analysis from the CONFIRM registry. Eur. Heart J. 2020, 41, 359–367. [Google Scholar] [CrossRef]
  32. Ting, D.S.W.; Cheung, C.Y.-L.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  33. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158–164. [Google Scholar] [CrossRef]
  34. Kim, Y.D.; Noh, K.J.; Byun, S.J.; Lee, S.; Kim, T.; Sunwoo, L.; Lee, K.J.; Kang, S.-H.; Park, K.H.; Park, S.J. Effects of Hypertension, Diabetes, and Smoking on Age and Sex Prediction from Retinal Fundus Images. Sci. Rep. 2020, 10, 4623. [Google Scholar] [CrossRef]
  35. Ravaut, M.; Harish, V.; Sadeghi, H.; Leung, K.K.; Volkovs, M.; Kornas, K.; Watson, T.; Poutanen, T.; Rosella, L.C. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw. Open 2021, 4, e2111315. [Google Scholar] [CrossRef]
  36. Ravaut, M.; Sadeghi, H.; Leung, K.K.; Volkovs, M.; Kornas, K.; Harish, V.; Watson, T.; Lewis, G.F.; Weisman, A.; Poutanen, T.; et al. Predicting adverse outcomes due to diabetes complications with machine learning using administrative health data. Npj Digit. Med. 2021, 4, 24. [Google Scholar] [CrossRef]
  37. Deberneh, H.M.; Kim, I. Prediction of Type 2 Diabetes Based on Machine Learning Algorithm. Int. J. Environ. Res. Public Health 2021, 18, 3317. [Google Scholar] [CrossRef]
  38. Alhassan, Z.; McGough, A.S.; Alshammari, R.; Daghstani, T.; Budgen, D.; Al Moubayed, N. Type-2 Diabetes Mellitus Diagnosis from Time Series Clinical Data Using Deep Learning Models. In Artificial Neural Networks and Machine Learning—ICANN 2018; Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11141, pp. 468–478. [Google Scholar] [CrossRef]
  39. Boutilier, J.J.; Chan, T.C.Y.; Ranjan, M.; Deo, S. Risk Stratification for Early Detection of Diabetes and Hypertension in Resource-Limited Settings: Machine Learning Analysis. J. Med. Internet Res. 2021, 23, e20123. [Google Scholar] [CrossRef]
  40. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef]
  41. Alharbi, F.; Vakanski, A. Machine Learning Methods for Cancer Classification Using Gene Expression Data: A Review. Bioengineering 2023, 10, 173. [Google Scholar] [CrossRef]
  42. Ardila, D.; Kiraly, A.P.; Bharadwaj, S.; Choi, B.; Reicher, J.J.; Peng, L.; Tse, D.; Etemadi, M.; Ye, W.; Corrado, G.; et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat. Med. 2019, 25, 954–961. [Google Scholar] [CrossRef]
  43. Gould, M.K.; Huang, B.Z.; Tammemagi, M.C.; Kinar, Y.; Shiff, R. Machine Learning for Early Lung Cancer Identification Using Routine Clinical and Laboratory Data. Am. J. Respir. Crit. Care Med. 2021, 204, 445–453. [Google Scholar] [CrossRef]
  44. Yeh, M.C.-H.; Wang, Y.-H.; Yang, H.-C.; Bai, K.-J.; Wang, H.-H.; Li, Y.-C.J. Artificial Intelligence–Based Prediction of Lung Cancer Risk Using Nonimaging Electronic Medical Records: Deep Learning Approach. J. Med. Internet Res. 2021, 23, e26256. [Google Scholar] [CrossRef]
  45. Guo, Y.; Yin, S.; Chen, S.; Ge, Y. Predictors of underutilization of lung cancer screening: A machine learning approach. Eur. J. Cancer Prev. 2022, 31, 523–529. [Google Scholar] [CrossRef]
  46. Mehmood, M.; Rizwan, M.; Ml, M.G.; Abbas, S. Machine Learning Assisted Cervical Cancer Detection. Front. Public Health 2021, 9, 788376. [Google Scholar] [CrossRef] [PubMed]
  47. Rahaman, M.; Li, C.; Yao, Y.; Kulwa, F.; Wu, X.; Li, X.; Wang, Q. DeepCervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques. Comput. Biol. Med. 2021, 136, 104649. [Google Scholar] [CrossRef] [PubMed]
  48. Wentzensen, N.; Lahrmann, B.; Clarke, M.A.; Kinney, W.; Tokugawa, D.; Poitras, N.; Locke, A.; Bartels, L.; Krauthoff, A.; Walker, J.; et al. Accuracy and Efficiency of Deep-Learning–Based Automation of Dual Stain Cytology in Cervical Cancer Screening. JNCI J. Natl. Cancer Inst. 2020, 113, 72–79. [Google Scholar] [CrossRef] [PubMed]
  49. Shen, L.; Margolies, L.R.; Rothstein, J.H.; Fluder, E.; McBride, R.; Sieh, W. Deep Learning to Improve Breast Cancer Detection on Screening Mammography. Sci. Rep. 2019, 9, 12495. [Google Scholar] [CrossRef]
  50. Buda, M.; Saha, A.; Walsh, R.; Ghate, S.; Li, N.; Swiecicki, A.; Lo, J.Y.; Mazurowski, M.A. A Data Set and Deep Learning Algorithm for the Detection of Masses and Architectural Distortions in Digital Breast Tomosynthesis Images. JAMA Netw. Open 2021, 4, e2119100. [Google Scholar] [CrossRef]
  51. Maghsoudi, O.H.; Gastounioti, A.; Scott, C.; Pantalone, L.; Wu, F.-F.; Cohen, E.A.; Winham, S.; Conant, E.F.; Vachon, C.; Kontos, D. Deep-LIBRA: An artificial-intelligence method for robust quantification of breast density with independent validation in breast cancer risk assessment. Med. Image Anal. 2021, 73, 102138. [Google Scholar] [CrossRef]
  52. Ming, C.; Viassolo, V.; Probst-Hensch, N.; Dinov, I.D.; Chappuis, P.O.; Katapodi, M.C. Machine learning-based lifetime breast cancer risk reclassification compared with the BOADICEA model: Impact on screening recommendations. Br. J. Cancer 2020, 123, 860–867. [Google Scholar] [CrossRef]
  53. Perera, M.; Mirchandani, R.; Papa, N.; Breemer, G.; Effeindzourou, A.; Smith, L.; Swindle, P.; Smith, E. PSA-based machine learning model improves prostate cancer risk stratification in a screening population. World J. Urol. 2021, 39, 1897–1902. [Google Scholar] [CrossRef]
  54. Chiu, P.K.-F.; Shen, X.; Wang, G.; Ho, C.-L.; Leung, C.-H.; Ng, C.-F.; Choi, K.-S.; Teoh, J.Y.-C. Enhancement of prostate cancer diagnosis by machine learning techniques: An algorithm development and validation study. Prostate Cancer Prostatic Dis. 2021, 25, 672–676. [Google Scholar] [CrossRef]
  55. Beinecke, J.M.; Anders, P.; Schurrat, T.; Heider, D.; Luster, M.; Librizzi, D.; Hauschild, A.-C. Evaluation of machine learning strategies for imaging confirmed prostate cancer recurrence prediction on electronic health records. Comput. Biol. Med. 2022, 143, 105263. [Google Scholar] [CrossRef]
  56. Turbé, V.; Herbst, C.; Mngomezulu, T.; Meshkinfamfard, S.; Dlamini, N.; Mhlongo, T.; Smit, T.; Cherepanova, V.; Shimada, K.; Budd, J.; et al. Deep learning of HIV field-based rapid tests. Nat. Med. 2021, 27, 1165–1170. [Google Scholar] [CrossRef]
  57. Bao, Y.; Medland, N.A.; Fairley, C.K.; Wu, J.; Shang, X.; Chow, E.P.; Xu, X.; Ge, Z.; Zhuang, X.; Zhang, L. Predicting the diagnosis of HIV and sexually transmitted infections among men who have sex with men using machine learning approaches. J. Infect. 2020, 82, 48–59. [Google Scholar] [CrossRef]
  58. Marcus, J.L.; Hurley, L.B.; Krakower, D.S.; Alexeeff, S.; Silverberg, M.J.; Volk, J.E. Use of electronic health record data and machine learning to identify candidates for HIV pre-exposure prophylaxis: A modelling study. Lancet HIV 2019, 6, e688–e695. [Google Scholar] [CrossRef]
  59. Elder, H.R.; Gruber, S.; Willis, S.J.; Cocoros, N.; Callahan, M.; Flagg, E.W.; Klompas, M.; Hsu, K.K. Can Machine Learning Help Identify Patients at Risk for Recurrent Sexually Transmitted Infections? Sex. Transm. Dis. 2020, 48, 56–62. [Google Scholar] [CrossRef]
  60. Gadalla, A.A.H.; Friberg, I.M.; Kift-Morgan, A.; Zhang, J.; Eberl, M.; Topley, N.; Weeks, I.; Cuff, S.; Wootton, M.; Gal, M.; et al. Identification of clinical and urine biomarkers for uncomplicated urinary tract infection using machine learning algorithms. Sci. Rep. 2019, 9, 19694. [Google Scholar] [CrossRef]
  61. Taylor, R.A.; Moore, C.L.; Cheung, K.-H.; Brandt, C. Predicting urinary tract infections in the emergency department with machine learning. PLoS ONE 2018, 13, e0194085. [Google Scholar] [CrossRef]
  62. Tsai, C.-Y.; Liu, W.-T.; Lin, Y.-T.; Lin, S.-Y.; Houghton, R.; Hsu, W.-H.; Wu, D.; Lee, H.-C.; Wu, C.-J.; Li, L.Y.J.; et al. Machine learning approaches for screening the risk of obstructive sleep apnea in the Taiwan population based on body profile. Inform. Health Soc. Care 2021, 47, 373–388. [Google Scholar] [CrossRef]
  63. Álvarez, D.; Cerezo-Hernández, A.; Crespo, A.; Gutiérrez-Tobal, G.C.; Vaquerizo-Villar, F.; Barroso-García, V.; Moreno, F.; Arroyo, C.A.; Ruiz, T.; Hornero, R.; et al. A machine learning-based test for adult sleep apnoea screening at home using oximetry and airflow. Sci. Rep. 2020, 10, 5332. [Google Scholar] [CrossRef]
  64. Mencar, C.; Gallo, C.; Mantero, M.; Tarsia, P.; Carpagnano, G.E.; Barbaro, M.P.F.; Lacedonia, D. Application of machine learning to predict obstructive sleep apnea syndrome severity. Health Inform. J. 2020, 26, 298–317. [Google Scholar] [CrossRef]
  65. Park, H.W.; Jung, H.; Back, K.Y.; Choi, H.J.; Ryu, K.S.; Cha, H.S.; Lee, E.K.; Hong, A.R.; Hwangbo, Y. Application of Machine Learning to Identify Clinically Meaningful Risk Group for Osteoporosis in Individuals Under the Recommended Age for Dual-Energy X-Ray Absorptiometry. Calcif. Tissue Int. 2021, 109, 645–655. [Google Scholar] [CrossRef]
  66. Kim, S.K.; Yoo, T.K.; Oh, E.; Kim, D.W. Osteoporosis risk prediction using machine learning and conventional methods. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 188–191. [Google Scholar] [CrossRef]
  67. Liu, L.; Si, M.; Ma, H.; Cong, M.; Xu, Q.; Sun, Q.; Wu, W.; Wang, C.; Fagan, M.J.; Mur, L.A.J.; et al. A hierarchical opportunistic screening model for osteoporosis using machine learning applied to clinical data and CT images. BMC Bioinform. 2022, 23, 63. [Google Scholar] [CrossRef] [PubMed]
  68. Lim, H.K.; Ha, H.I.; Park, S.-Y.; Han, J. Prediction of femoral osteoporosis using machine-learning analysis with radiomics features and abdomen-pelvic CT: A retrospective single center preliminary study. PLoS ONE 2021, 16, e0247330. [Google Scholar] [CrossRef] [PubMed]
  69. Wu, Q.; Nasoz, F.; Jung, J.; Bhattarai, B.; Han, M.V. Machine Learning Approaches for Fracture Risk Assessment: A Comparative Analysis of Genomic and Phenotypic Data in 5130 Older Men. Calcif. Tissue Int. 2020, 107, 353–361. [Google Scholar] [CrossRef] [PubMed]
  70. Moslemi, A.; Kontogianni, K.; Brock, J.; Wood, S.; Herth, F.; Kirby, M. Differentiating COPD and asthma using quantitative CT imaging and machine learning. Eur. Respir. J. 2022, 60, 2103078. [Google Scholar] [CrossRef] [PubMed]
  71. Zeng, S.; Arjomandi, M.; Tong, Y.; Liao, Z.C.; Luo, G. Developing a Machine Learning Model to Predict Severe Chronic Obstructive Pulmonary Disease Exacerbations: Retrospective Cohort Study. J. Med. Internet Res. 2022, 24, e28953. [Google Scholar] [CrossRef]
  72. Nishat, M.; Faisal, F.; Dip, R.; Nasrullah, S.; Ahsan, R.; Shikder, F.; Asif, M.A.; Hoque, M.A. A Comprehensive Analysis on Detecting Chronic Kidney Disease by Employing Machine Learning Algorithms. EAI Endorsed Trans. Pervasive Health Technol. 2018, 7, 170671. [Google Scholar] [CrossRef]
  73. Bai, Q.; Su, C.; Tang, W.; Li, Y. Machine learning to predict end stage kidney disease in chronic kidney disease. Sci. Rep. 2022, 12, 8377. [Google Scholar] [CrossRef]
  74. Heidari, A.; Navimipour, N.J.; Unal, M.; Toumaj, S. Machine learning applications for COVID-19 outbreak management. Neural Comput. Appl. 2022, 34, 15313–15348. [Google Scholar] [CrossRef]
  75. Zhou, X.; Wang, Z.; Li, S.; Liu, T.; Wang, X.; Xia, J.; Zhao, Y. Machine Learning-Based Decision Model to Distinguish Between COVID-19 and Influenza: A Retrospective, Two-Centered, Diagnostic Study. Risk Manag. Healthc. Policy 2021, 14, 595–604. [Google Scholar] [CrossRef]
  76. Zan, A.; Xie, Z.-R.; Hsu, Y.-C.; Chen, Y.-H.; Lin, T.-H.; Chang, Y.-S.; Chang, K.Y. DeepFlu: A deep learning approach for forecasting symptomatic influenza A infection based on pre-exposure gene expression. Comput. Methods Programs Biomed. 2021, 213, 106495. [Google Scholar] [CrossRef]
  77. Nadda, W.; Boonchieng, W.; Boonchieng, E. Influenza, dengue and common cold detection using LSTM with fully connected neural network and keywords selection. BioData Min. 2022, 15, 5. [Google Scholar] [CrossRef]
  78. Hogan, C.A.; Rajpurkar, P.; Sowrirajan, H.; Phillips, N.A.; Le, A.T.; Wu, M.; Garamani, N.; Sahoo, M.K.; Wood, M.L.; Huang, C.; et al. Nasopharyngeal metabolomics and machine learning approach for the diagnosis of influenza. EbioMedicine 2021, 71, 103546. [Google Scholar] [CrossRef]
  79. Choo, H.; Kim, M.; Choi, J.; Shin, J.; Shin, S.-Y. Influenza Screening via Deep Learning Using a Combination of Epidemiological and Patient-Generated Health Data: Development and Validation Study. J. Med. Internet Res. 2020, 22, e21369. [Google Scholar] [CrossRef]
  80. Lown, M.; Brown, M.; Brown, C.; Yue, A.M.; Shah, B.N.; Corbett, S.J.; Lewith, G.; Stuart, B.; Moore, M.; Little, P. Machine learning detection of Atrial Fibrillation using wearable technology. PLoS ONE 2020, 15, e0227401. [Google Scholar] [CrossRef]
  81. Ali, F.; Hasan, B.; Ahmad, H.; Hoodbhoy, Z.; Bhuriwala, Z.; Hanif, M.; Ansari, S.U.; Chowdhury, D. Detection of subclinical rheumatic heart disease in children using a deep learning algorithm on digital stethoscope: A study protocol. BMJ Open 2021, 11, e044070. [Google Scholar] [CrossRef]
  82. Kwon, S.; Hong, J.; Choi, E.-K.; Lee, E.; Hostallero, D.E.; Kang, W.J.; Lee, B.; Jeong, E.-R.; Koo, B.-K.; Oh, S.; et al. Deep Learning Approaches to Detect Atrial Fibrillation Using Photoplethysmographic Signals: Algorithms Development Study. JMIR mHealth uHealth 2019, 7, e12770. [Google Scholar] [CrossRef]
  83. Tiwari, P.; Colborn, K.L.; Smith, D.E.; Xing, F.; Ghosh, D.; Rosenberg, M.A. Assessment of a Machine Learning Model Applied to Harmonized Electronic Health Record Data for the Prediction of Incident Atrial Fibrillation. JAMA Netw. Open 2020, 3, e1919396. [Google Scholar] [CrossRef]
  84. Sekelj, S.; Sandler, B.; Johnston, E.; Pollock, K.G.; Hill, N.R.; Gordon, J.; Tsang, C.; Khan, S.; Ng, F.S.; Farooqui, U. Detecting undiagnosed atrial fibrillation in UK primary care: Validation of a machine learning prediction algorithm in a retrospective cohort study. Eur. J. Prev. Cardiol. 2021, 28, 598–605. [Google Scholar] [CrossRef]
  85. Kelly, C.J.; Karthikesalingam, A.; Suleyman, M.; Corrado, G.; King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019, 17, 195. [Google Scholar] [CrossRef]
  86. Sunarti, S.; Rahman, F.F.; Naufal, M.; Risky, M.; Febriyanto, K.; Masnina, R. Artificial intelligence in healthcare: Opportunities and risk for future. Gac. Sanit. 2021, 35, S67–S70. [Google Scholar] [CrossRef]
  87. Christodoulou, E.; Ma, J.; Collins, G.S.; Steyerberg, E.W.; Verbakel, J.Y.; Van Calster, B. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. J. Clin. Epidemiol. 2019, 110, 12–22. [Google Scholar] [CrossRef]
  88. Ahuja, A.S. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ 2019, 7, e7702. [Google Scholar] [CrossRef] [PubMed]
  89. Raschka, S. Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning. arXiv 2018, arXiv:1811.12808. [Google Scholar] [CrossRef]
  90. de Vos, J.; Visser, L.A.; de Beer, A.A.; Fornasa, M.; Thoral, P.J.; Elbers, P.W.; Cinà, G. The Potential Cost-Effectiveness of a Machine Learning Tool That Can Prevent Untimely Intensive Care Unit Discharge. Value Health 2021, 25, 359–367. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Current methods vs. AI-assisted methods in primary care. Figure Description: AI has the potential to assist current primary care methods in three domains: pre-operative care, screening, and detection. In pre-operative care, this includes using AI to predict outcomes and mortality. In screening, AI plays a prominent role in screening tools for numerous diseases. In detection, AI can support real-time detection tools and AI-assisted histopathology tools.
Figure 2. Literature Search Method. Figure Description: PubMed and Google Scholar were searched using keywords for English-language literature published from inception to December 2022. Observational studies, case–control studies, cohort studies, clinical trials, meta-analyses, reviews, and guidelines were reviewed.
Figure 3. Example of AI in pre-operative evaluation. Figure Description: Integrating AI into pre-operative care allows more effective guidelines to be refined. For instance, current guidelines recommend a seven-step pre-operative evaluation before surgery for patients with CAD risk factors; within this process, AI could provide risk prediction and MET monitoring through wearable technology, ultimately creating a more streamlined pathway.
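To illustrate the risk-prediction step described in this workflow, the following minimal sketch trains a gradient-boosting classifier on pre-operative variables (including a wearable-derived MET estimate) and returns a predicted complication risk for a new patient. The dataset, feature names, and effect sizes are synthetic assumptions made purely for illustration; this is not one of the validated models discussed in this review.

```python
# Minimal sketch of ML-based pre-operative risk prediction.
# Assumptions: synthetic data and illustrative feature names; not a validated clinical model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "age": rng.integers(40, 90, n),
    "mets": rng.uniform(1, 10, n),          # estimated metabolic equivalents, e.g., from a wearable
    "creatinine": rng.normal(1.0, 0.3, n),
    "known_cad": rng.integers(0, 2, n),
    "emergency_surgery": rng.integers(0, 2, n),
})
# Synthetic outcome: complications are more likely with older age, low functional
# capacity, known CAD, and emergency surgery.
logit = 0.04 * (X["age"] - 65) - 0.5 * (X["mets"] - 4) + 1.0 * X["known_cad"] + 1.2 * X["emergency_surgery"] - 2.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Predicted risk for a hypothetical new patient (values are illustrative only).
new_patient = pd.DataFrame([{"age": 72, "mets": 3.5, "creatinine": 1.4, "known_cad": 1, "emergency_surgery": 0}])
print("Predicted complication risk:", model.predict_proba(new_patient)[0, 1])
```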
Figure 4. Example of AI in ASCVD assessment. Figure Description: ASCVD risk assessment is extensive and varies significantly across age groups. Although the guidelines are thorough, AI has the potential to address gaps in the evaluation. For instance, AI can provide risk prediction for individuals aged over 75 years and could fine-tune ASCVD scores based on race. AI could also detect risk enhancers of ASCVD from HbA1c monitoring, EHR data, and lipid profiles, which may allow appropriate adjustments to lipid-lowering therapy. AI also has the potential to use phenomapping instead of age categories, allowing for stronger classification.
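The phenomapping idea mentioned in the caption can be sketched with unsupervised clustering: instead of fixed age categories, patients are grouped by a multivariate phenotype and event rates are compared across the resulting phenogroups. The sketch below uses scikit-learn k-means on synthetic features; the feature set and the choice of four clusters are illustrative assumptions, not the method of any specific study cited here.

```python
# Minimal phenomapping sketch: cluster patients on phenotype rather than age bands.
# Assumptions: synthetic features and k = 4 clusters chosen for illustration only.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 1500
pheno = pd.DataFrame({
    "age": rng.integers(40, 85, n),
    "ldl": rng.normal(120, 30, n),
    "sbp": rng.normal(130, 15, n),
    "hba1c": rng.normal(5.8, 0.8, n),
    "smoker": rng.integers(0, 2, n),
})
# Synthetic ASCVD events, more frequent with older age and smoking.
events = rng.uniform(size=n) < 0.05 + 0.002 * (pheno["age"] - 40) + 0.03 * pheno["smoker"]

# Standardize the phenotype, then cluster into phenogroups.
Z = StandardScaler().fit_transform(pheno)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(Z)

# Observed event rate per phenogroup, which could replace fixed age categories
# when discussing risk with patients.
print(pd.Series(events).groupby(labels).mean())
```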
Figure 5. Example of AI in Pulmonary Embolism Evaluation. Figure Description: Current guidelines for a suspected pulmonary embolism (PE) in a patient without hemodynamic instability require a clinical probability assessment of the PE. Based on clinical judgment and, where indicated, a D-dimer test, a CT pulmonary angiogram is performed to determine whether treatment is warranted. AI could be integrated into this process by detecting deep vein thrombosis, distinguishing high-moderate from moderate PE phenotypes, and predicting the risk of thrombectomy.
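As a concrete reading of this workflow, the sketch below encodes the decision points as a small helper function. The probability threshold and the age-adjusted D-dimer handling are simplified illustrations rather than guideline values, and the pretest probability passed in could come from clinical gestalt, a structured score, or an ML model trained on EHR data.

```python
# Illustrative triage helper mirroring the workflow in Figure 5.
# Assumptions: the probability threshold and the age-adjusted D-dimer cutoff are
# simplified for illustration and must not be used as clinical guidance.
def pe_workup(pretest_probability: float, d_dimer_ng_ml: float, age_years: int) -> str:
    """Suggest the next step for a hemodynamically stable patient with suspected PE."""
    # One common convention: for patients over 50, the D-dimer cutoff is age x 10 ng/mL.
    cutoff = max(500.0, age_years * 10.0) if age_years > 50 else 500.0

    if pretest_probability >= 0.5:
        return "High clinical probability: proceed directly to CT pulmonary angiography"
    if d_dimer_ng_ml < cutoff:
        return "Low/intermediate probability with negative D-dimer: PE effectively excluded"
    return "Positive D-dimer: obtain CT pulmonary angiography to confirm or exclude PE"


if __name__ == "__main__":
    # The pretest probabilities here could be replaced by an ML model's output.
    print(pe_workup(pretest_probability=0.2, d_dimer_ng_ml=350, age_years=62))
    print(pe_workup(pretest_probability=0.7, d_dimer_ng_ml=900, age_years=45))
```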
Table 1. Abbreviations.
Name | Abbreviation
Acute kidney injury | AKI
Adaptive boosting | ADA
Age-related macular degeneration | AMD
Artificial intelligence | AI
Atherosclerotic cardiovascular disease | ASCVD
Atrial fibrillation | AF
Blood pressure | BP
Chronic kidney disease | CKD
Chronic obstructive pulmonary disease | COPD
Convolutional neural network | CNN
Coronary artery calcium score | CACS
Coronary artery disease | CAD
Decision tree | DT
Deep learning | DL
Deep neural network | DNN
Deep vein thrombosis | DVT
Diabetes mellitus | DM
Electronic health records | EHR
Extreme gradient boosting | XGB
Familial hypercholesterolemia | FH
Generative adversarial network | GAN
Gradient boosting | GB
Gradient boosting tree | GBT
Heart failure | HF
Human immunodeficiency virus | HIV
K-nearest neighbors | KNN
Logistic regression | LR
Low-density lipoprotein | LDL
Machine learning | ML
Neural network | NN
Obstructive sleep apnea syndrome | OSAS
Photoplethysmogram | PPG
Pre-exposure prophylaxis | PrEP
Pulmonary embolism | PE
Pulmonary hypertension | PH
Random forest | RF
Support vector machine | SVM
Urinary tract infection | UTI
Table 2. Clinical Trials on Artificial Intelligence in Primary Care.
Trial or Registry | N | Aim | Inclusion Criteria | Exclusion Criteria | Status
NCT05166122 | 1600 | Use AI to screen for diabetic retinopathy | >18 years, screened for diabetic retinopathy, with diabetes, can take retina pictures | Part of community hospital with ophthalmologist, previously diagnosed with some retinal conditions, laser retinal treatment, has other eye diseases | Recruiting
NCT05286034 | 4000 | AI chatbot to improve women's participation in cervical cancer screening program | 30–65 years, did not perform Pap smear in last 4 years, living in deprived clusters | Outside age group, had Pap smear in last 3 years, had hysterectomy including cervix, pregnant beyond 6 months, already scheduled screening appointment | Not yet recruiting
NCT04551287 | 16,164 | Cervical cancer AI screening for cytopathological diagnosis | 25–65 years old, availability of confirmed diagnosis results of cytological exam | Unsatisfactory samples of cytological exam, women diagnosed with other malignant tumors | Completed
NCT05435872 | 2000 | AI for gastrointestinal endoscopy screening | Patients who received gastroscopy and colonoscopy, AI-assisted endoscopic exam can be accepted | Patients refusing to participate, patients with intolerance or contraindications to endoscopic exams | Recruiting
NCT05697601 | 2905 | Finding predictors of ovarian and endometrial cancer for an AI screening tool | Women with gynecological symptoms, women who underwent routine gynecological exam | Unable to undergo serial gynecological exam | Recruiting
NCT04838756 | 100,000 | AI for mammography screening | Women eligible for population-based mammography screening | None | Active, not recruiting
NCT05452993 | 330 | AI screening for diabetic retinopathy | Adult patients with diabetes, ongoing diabetes treatment, regular pharmacy customer, informed consent | Unable to read, write, or give consent, refusing to share results with general practitioner | Not yet recruiting
NCT04778670 | 55,579 | AI for large-scale breast screening | Participants in regular population-based breast cancer screening | Incomplete exam, breast implant, complete mastectomy, participant in surveillance program | Active, not recruiting
NCT05139797 | 300 | AI-guided echo screening of rare diseases | Patients with high suspicion for cardiac amyloidosis by AI | Patients who decline to be seen at specialty clinic, patients who passed away | Recruiting
NCT05139940 | 2432 | AI-enabled TB screening in Zambia | 18 years or older with known HIV status | Individuals who do not meet inclusion criteria | Recruiting
NCT04743479 | 5000 | AI screening of pancreatic cancer | Subject can provide informed consent, detailed questionnaire filled, subject has one of several listed conditions | Subject has been diagnosed with pancreatic cancer or other malignant tumors in past 5 years, subject has contraindication to MRI or CT, subject is in another clinical trial | Recruiting
NCT04949776 | 27,000 | AI for breast cancer screening | 50–69 years old, women studied in the program in the set period and for the first time | Unable to give consent, breast prostheses, symptoms or signs of suspected breast cancer | Recruiting
NCT05587452 | 950 | AI screening for colorectal cancer | Informed consent, provide blood samples, diagnosed with colorectal cancer or colorectal adenoma | Pregnant or breastfeeding, diagnosed with another cancer, selective exclusions for colorectal cancer and healthy people | Recruiting
NCT05456126 | 125 | AI for infant motor screening | Mothers older than 20, no history of recreational drugs, married or living with fathers; specific criteria for term and preterm infants | None | Recruiting
NCT05024591 | 32,714 | AI for breast cancer screening | Eligible for national screening, provides consent | History of or current breast cancer, currently pregnant or plans to become pregnant, history of breast surgery, has mammography for diagnostic purposes | Recruiting
NCT04732208 | 410 | AI screening of diabetic retinopathy using smartphone camera | Over 18 years, informed consent, established cases of DM, subjects dilated for ophthalmic evaluation | Acute vision loss, contraindicated for fundus imaging, treated for retinopathy, other retinal pathologies, at risk of acute angle-closure glaucoma | Completed
NCT05311046 | 2400 | AI screening for pediatric sepsis | 3 months–17 years of age, diagnosed with sepsis, blood sample collection | Participating in outside interventions, parents or LARs who do not speak English or Spanish, pregnancy | Recruiting
NCT05391659 | 1200 | AI screening for diabetic retinopathy | Diagnosed with DM, >18 years old, informed consent, fluent in written and oral Dutch | History of diabetic retinopathy or diabetic macular edema treatment, contraindicated for fundus imaging | Recruiting
NCT04307030 | 5000 | AI screening for congenital heart disease by heart sounds | 0–18 years of age, children with or without congenital heart disease, informed consent | >18 years of age, unable to undergo echo, not able to provide informed consent | Not yet recruiting
NCT04000087 | 358 | ECG AI-guided screening for low ejection fraction | Primary care clinicians who are part of a participating care team | Primary care clinicians working in pediatrics, acute care, nursing homes, and resident care teams | Completed
NCT04156880 | 1000 | AI in mammography-based breast cancer screening | Women who had undergone standard mammography, histopathology-proven diagnosis | Concurring lesions on mammograms, no available pathologic diagnosis or long-term follow-up exams, undergone breast surgery, diagnosed with other kinds of malignancy | Recruiting
NCT05645341 | 400 | AI screening of malignant pigmented tumors on ocular surface | Dark-brown lesions on ocular surface | Non-pigmented ocular surface tumors, image quality does not meet clinical requirements | Recruiting
NCT05048095 | 15,500 | AI in breast cancer screening | Women participating in regular breast cancer screening program | Women with breast implants or other foreign implants in mammogram, women with symptoms or signs of suspected breast cancer | Completed
NCT04894708 | 1572 | AI for polyp detection in colonoscopy | >35 years, planned diagnostic colonoscopy; screening colonoscopy for men >50 or women >55 | Colon bleeding, colon carcinoma, known polyps for removal, IBD, colonic stenosis, other suspected colon disease, follow-up care after colon cancer surgery, anticoagulant drugs, poor general condition, incomplete colonoscopy planned | Recruiting
NCT04160988 | 703 | AI for screening diabetic retinopathy | >20 years, DM, color fundus image taken that includes macula and optic nerve | Color fundus image previously used, macula, optic nerve, or other part is unclear | Completed
NCT04213183 | 1789 | DL screening for hepatobiliary diseases | Quality of fundus and slit-lamp images is acceptable, more than 90% of fundus image area includes four main regions, more than 90% of slit-lamp image area includes three main regions | Images with light leakage (>10% of the area) | Completed
NCT04832594 | 2500 | AI screening for breast cancer for supplemental MRI | Four-view screening mammography exam | Women in surveillance program, breast implants, prior breast cancer, breastfeeding, MRI contraindication | Recruiting
NCT05704491 | 100 | AI screening for diabetic retinopathy | DM diagnosis, diabetes duration >5 years, >18 years old, informed consent, fluent in written and spoken German | History of laser treatment, contraindication to fundus imaging systems | Not yet recruiting
NCT04699864 | 630 | AI for screening diabetic retinopathy | 18 years and older, informed consent, diagnosis of diabetes, diabetic patient followed and referred by physician | Patients less than 18 years old, no informed consent, patient already had treatment for retinal condition | Not yet recruiting
NCT04859634 | 2000 | AI for detecting multiple ocular fundus lesions | Participants who agree to take ultra-widefield fundus images | Patients who cannot cooperate with photographer, no informed consent | Recruiting
NCT05734820 | 312 | AI screening colonoscopy | >45 years old, referred for screening colonoscopy, adequate bowel preparation, authorized for endoscopic approach | Pregnancy, clinical condition making endoscopy inviable, history of colorectal carcinoma, IBD, no informed consent | Recruiting
NCT04859530 | 5886 | AI smartphone for cervical cancer screening | Informed consent | No initiation of sexual intercourse, pregnancy, condition altering cervix visualization, previous hysterectomy, insufficient health | Recruiting
NCT03773458 | 500 | AI for large-scale screening of scoliosis | Pretreatment back photos and whole-spine standing X-ray or ultrasound images | Patients considered as non-idiopathic scoliosis | Completed
NCT05704920 | 2722 | AI for lung cancer screening | 50–80 years old, active or ex-smoker, smoking history of at least 20 pack-years, informed consent, affiliated with French social security | Clinical signs of cancer, recent chest scan, health problems affecting life expectancy or limiting ability to undergo lung surgery, vulnerable people | Not yet recruiting
NCT05236855 | 200 | AI and spectroscopy for cervical cancer screening | Women undergoing standard HPV screening | NA | Not yet recruiting
NCT05527535 | 34,500 | AI for diabetic retinopathy screening | T1DM or T2DM, no full-time ophthalmologist, >18 years old, eligible for fundus photo imaging | T1DM or T2DM with an ophthalmologist, previously diagnosed with macular edema, history of retinal laser, other ocular disease, not eligible for fundus imaging | Not yet recruiting
NCT05745480 | 2 | NLP for screening opioid misuse | Adults hospitalized at UW Health | NA | Recruiting
NCT05490823 | 1000 | AI smartphone for anemia screening | Informed consent | Ophthalmic or fingernail surgery in past 30 days | Recruiting
NCT04896827 | 244 | DL and AI for DNIC | 18–70 years old, chronic or no chronic pain, informed consent | CVD, Raynaud syndrome, severe psychiatric disease, injuries or loss of sensitivity, pregnant women | Recruiting
NCT05752045 | 1389 | AI for screening eye diseases | >18 years, T1DM or T2DM, presenting for diabetic retinopathy screening, beneficiary of social security scheme, informed consent | Patient with known DR, any condition affecting study, presenting social or psychological factors, participates in another clinical research study | Not yet recruiting
NCT05243121 | 5000 | AI for MRI in screening breast cancer | Patients with clinical symptoms, undergoing full-sequence BMRI exam, at least 6 months of follow-up results | Received therapy, contraindications to enhanced breast MRI exams, prosthesis implanted in affected breast, patients during lactation or pregnancy | Recruiting
NCT04996615 | 924 | AI for identifying diabetic retinopathy and diabetic macular edema | Routine exams, routine laser treatment, diagnosed with T1DM or T2DM, presents visual acuity | Currently using AI system integrated into clinical care, inability to provide informed consent | Recruiting
NCT03975504 | 6000 | AI for lung cancer screening | Eligible participants aged 45–75 years with one of several risk factors | Had CT scan of chest in past 12 months, history of any cancer within 5 years | Recruiting
NCT05626517 | 2000 | Developing risk stratification tools using AI | 21 years or older, sufficient English or Chinese language skills, informed consent | <21 years old, cardiac event, no informed consent | Not yet recruiting
NCT04994899 | 800 | AI screening for mental health | 13–79 years old, English-speaking | Previous participant, unable to verbally respond to standard questions, cannot participate in virtual visit, no informed consent | Recruiting
NCT05195385 | 2400 | Lung cancer screening with low-dose CT scans | 50–74 years, smoked at least 20 pack-years, quit less than 15 years ago, gives consent, affiliated with social security system | Presence of clinical symptoms suggesting malignancy, evolving cancer, history of lung cancer, 2-year follow-up not possible, chest CT scan performed | Recruiting
NCT04240652 | 500,000 | AI for diabetic retinopathy screening | T2DM or T1DM, subjects with diabetes from other medical institutes, non-diabetic patients and healthy participants | History of drug abuse, STDs, any condition not suitable for study | Recruiting
NCT04126239 | 1610 | AI for food addiction screening test | BMI >30, able to give informed consent | Non-French speaker, unable to use internet tools | Recruiting
NCT04603404 | 430 | Multimodality imaging in screening, diagnosis, and risk stratification of HFpEF | LVEF > 50%, NT-proBNP > 220 pg/mL or BNP > pg/mL, symptoms and syndromes of HF, at least one criterion of cardiac structure | Special types of cardiomyopathies, infarction, myocardial fibrosis, severe arrhythmia, severe primary cardiac valvular disease, restrictive pericardial disease, refuses to participate in study | Recruiting
NCT05159661 | 1000 | AI for screening brain connectivity and dementia risk estimation | Male and female 60–75 years, MCI diagnosis with MMSE > 25, MCI diagnosis with MoCA > 17 | Confirmed dementia, history of cerebrovascular disease, AUD identification test, severe medical disorders associated with cognitive impairment, severe head trauma, severe mental disorders | Recruiting
NCT05650086 | 700 | AI for breast screening | Understands the study, informed consent, complies with schedule, >21 years, fits cohort-specific criteria | Does not fit cohort-specific criteria, unable to complete study procedures | Recruiting
NCT05426135 | 3000 | AI for tumor risk assessment | Participants with suspected cancer, informed consent, detailed EHR data, healthy participants | Participants with missing primary clinical and pathological data, lost to follow-up, poor medical image quality | Recruiting
NCT05639348 | 650 | AI for risk assessment of postoperative delirium | Surgical patients, >60 years old, planned postoperative hospital stay >2 days, informed consent | Preoperative delirium, insufficient knowledge of German or French, intracranial surgery, cardiac surgery, surgery within two previous weeks, unable to provide informed consent | Recruiting
NCT05466864 | 120 | Screening of OSA using BSP | Hospitalized with acute ischemic stroke, 18–80 years, informed consent | History of AF, LVEF < 45%, aphasia, unstable cardiopulmonary status, recent surgery including tracheotomy in 30 days, narcotics, on O2, PAP device, ventilator, unable to understand instructions | Recruiting
NCT05655117 | 440 | AI for detecting eye complications in diabetics | Diabetic patients aged 18–90 | Severely ill patient or patient with cancer | Not yet recruiting
NCT03688906 | 3275 | AI colorectal cancer screening test | Differs across three cohorts | Differs across three cohorts | Completed
NCT05246163 | 1500 | AI smartphone for skin cancer detection | Patients with one or two lesions meeting one of several criteria, informed consent | Lack of informed consent | Recruiting
NCT05730192 | 950 | AI for detection of gastrointestinal lesions in endoscopy | Screening or surveillance colonoscopy, age 40 or older, informed consent | Emergency colonoscopies, IBD, CRC, previous colonic resection, returning for elective colonoscopy, polyposis syndromes, contraindications | Not yet recruiting
NCT05566002 | 2000 | AI evaluation of pulmonary hypertension | >18 years, previously received diagnostic imaging | Patients without RHC, quality of exams cannot meet requirement, severe loss of results | Recruiting
Table 3. Machine learning models.
ML Model | Advantages | Limitations | Clinical Applications in Primary Care
Logistic Regression | Easy to implement and interpret, handles binary and multi-class classification | Does not perform well with outliers, assumes a linear relationship | Diagnostic tests, selection of treatment, prognostic modeling, predicting disease risk
Convolutional Neural Network | Excels in video and image recognition, learns hierarchical features | Needs large amounts of data and resources, limited interpretability | Image classification, diagnosing from medical imaging
Support Vector Machine | Handles non-linear decision boundaries, good generalization | Requires careful kernel function and hyperparameter selection, struggles with noisy data | Diagnosing disease, risk stratification, classifying clinical data
K-Nearest Neighbors | Simple, easy to implement, handles non-linear decision boundaries | Needs substantial memory and time, sensitive to certain features | Forecasting disease progression
Random Forest | Performs well with high-dimensional data, handles non-linear effects | Hard to interpret, overfits noisy data | Identifying risk factors, predicting outcomes
Adaptive Boosting | Handles regression and classification problems, combines weak learners | Overfits with weak learners, sensitive to noisy data | Predicting risk of disease, detecting high-risk patients
Gradient Boosting | Performs well with large datasets, handles regression and classification | Overfits with weak learners, sensitive to noisy data | Forecasting outcomes, diagnosing disease
Neural Network | Handles large datasets, performs well on speech and image recognition | Needs considerable computational resources and data, overfits if too complex | Diagnosing disease, selecting treatment, predicting risk of disease
Extreme Gradient Boosting | Fast with large datasets, handles regression and classification | Needs hyperparameter tuning, overfits with complex weak learners | Predicting outcomes, detecting high-risk patients, diagnosing disease
Decision Tree | Simple, easy to interpret, handles categorical and numerical data | Overfits with noisy data, sensitive to variations in training data | Identifying risk factors, diagnosing disease, predicting risk of disease
Deep Neural Network | Performs well with large datasets, automatically learns hierarchical features | Requires large amounts of data, overfits if the network is too complex | Diagnosing disease, detecting high-risk patients, predicting risk of disease
Gated Recurrent Unit | Performs well with time-series data, handles variable-length sequences | Sensitive to certain conditions and parameters, can generalize poorly to new data | Predicting risk of disease, diagnosing diseases, determining outcomes
XGBoost | Fast, accurate, handles regression and classification problems | Needs hyperparameter tuning, overfits with noisy data | Predicting outcomes, identifying risk factors
CatBoost | Handles categorical data, handles regression and classification problems | Needs resources and data, needs hyperparameter tuning | Identifying risk factors, forecasting outcomes
Naïve Bayes | Simple, efficient, handles high-dimensional data | Assumes independence between features, performs poorly with correlated features | Diagnosing diseases, forecasting risk of disease
Logistic Model Tree | Combines DT and LR to capture non-linear effects | Overfits with noisy data, needs hyperparameter tuning | Determining risk factors, predicting risk of disease
Long Short-Term Memory | Performs well with time-series data, handles variable-length sequences | Computationally complex, difficult to interpret, prone to overfitting, struggles with long sequences | Forecasting outcomes, diagnosing diseases, forecasting risk of disease
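To illustrate how several of the models in Table 3 can be compared on the same tabular prediction task, and echoing the evaluation cautions raised in refs. [87,89], the sketch below benchmarks logistic regression, random forest, and gradient boosting with cross-validated AUC on a synthetic, class-imbalanced dataset. The data and default hyperparameters are assumptions made purely for illustration and do not reproduce any study cited in this review.

```python
# Minimal sketch comparing several models from Table 3 on one synthetic tabular task.
# Assumptions: synthetic data and mostly default hyperparameters; results are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced binary outcome, mimicking a relatively rare clinical event.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                           weights=[0.85, 0.15], random_state=0)

models = {
    "Logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
}

# 5-fold cross-validated discrimination; calibration and decision-curve analysis
# would also be needed before any clinical use.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f} (+/- {auc.std():.3f})")
```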
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
