Review

Artificial Intelligence Evidence-Based Current Status and Potential for Lower Limb Vascular Management

Xenia Butova, Sergey Shayakhmetov, Maxim Fedin, Igor Zolotukhin and Sergio Gianesini
1 Department of Fundamental and Applied Research in Surgery, Pirogov Russian National Research Medical University, 117997 Moscow, Russia
2 Department of Radiotechnics, Faculty of Technical Cybernetics, National Research University of Electronic Technology, 124498 Moscow, Russia
3 Department of Data Science, Faculty of Information Technology, Monash University, Melbourne 3800, Australia
4 Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
5 Department of Surgery, Uniformed Services University of Health Sciences, Bethesda, MD 20814, USA
* Author to whom correspondence should be addressed.
J. Pers. Med. 2021, 11(12), 1280; https://doi.org/10.3390/jpm11121280
Submission received: 20 October 2021 / Revised: 22 November 2021 / Accepted: 24 November 2021 / Published: 2 December 2021

Abstract

Consultation prioritization is fundamental to optimal healthcare management, and its performance can be supported by artificial intelligence (AI)-dedicated software and by digital medicine in general. The need for remote consultation has been demonstrated not only during pandemic-induced lockdowns but also in rural areas where access to health centers is constantly limited. The term “AI” indicates the use of a computer to simulate human intellectual behavior with minimal human intervention. AI is based on a machine learning process or on an artificial neural network. AI provides accurate diagnostic algorithms and personalized treatments in many fields, including oncology, ophthalmology, traumatology, and dermatology. AI can help vascular specialists in the diagnostics of peripheral artery disease, cerebrovascular disease, and deep vein thrombosis by analyzing contrast-enhanced magnetic resonance imaging or ultrasound data, and in the diagnostics of pulmonary embolism on multi-slice computed tomography angiograms. Automatic methods based on AI may be applied to detect the presence and determine the clinical class of chronic venous disease. Nevertheless, data on using AI in this field are still scarce. In this narrative review, the authors discuss the available data on AI implementation in arterial and venous disease diagnostics and care.

1. Introduction

The need to optimize healthcare management by guaranteeing both top-quality consultation and the rationalization of available infrastructure resources is of paramount importance, particularly in a post-COVID-19 pandemic world [1]. Not only might remote consultation be helpful in situations such as a pandemic-induced lockdown, but it may also be of use in rural areas with limited access to well-equipped healthcare centers [2]. Diagnostic tools based on artificial intelligence (AI)-dedicated software, and digital medicine in general, are considered an effective solution for remote consultations [1,3].
The term “AI” is used to indicate the simulation of human intellectual behavior by a computer with minimal human intervention [4]. AI is based on a machine learning process or on an artificial neural network (ANN). Machine learning indicates a set of technologies that automatically detect patterns in data and then use them to predict future data or to enable decision making under uncertain conditions. Deep learning is a branch of machine learning based on a special type of artificial neural network that resembles a system of synapses between human neurons. The ANN consists of many basic computing units, i.e., artificial neurons, each of which acts as a simple classifier. After weighing the evidence, every neuron of an ANN produces a decision signal. To train an ANN, learning algorithms such as back-propagation are used: paired input signals and desired output decisions resemble the way the brain analyzes external sensory stimuli to perform different activities depending on the situation. However, no one knows exactly how AI-based tools reach their conclusions, as they operate like a “black box” [5,6,7,8].
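As an illustration of the training principle just described, the sketch below implements a tiny feed-forward network trained by back-propagation on synthetic “evidence”; the toy data, network size, and learning rate are assumptions made purely for demonstration and are not taken from any of the cited studies.

```python
# Minimal sketch of the training loop described above: a tiny feed-forward
# network learns paired inputs and desired outputs via back-propagation.
# The toy data and all parameter choices here are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Toy "evidence": 4 binary findings per case; label = suspected disease (1) or not (0)
X = rng.integers(0, 2, size=(200, 4)).astype(float)
y = ((X[:, 0] + X[:, 2]) >= 2).astype(float).reshape(-1, 1)  # arbitrary rule

# One hidden layer of 8 artificial "neurons"
W1, b1 = rng.normal(0, 0.5, (4, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    # Forward pass: every neuron weighs the evidence and emits a signal
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Back-propagation: push the output error back through the layers
    grad_out = (p - y) / len(X)              # gradient of cross-entropy w.r.t. logit
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("training accuracy:", ((p > 0.5) == y).mean())
```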
Deep learning is particularly compelling because it can process and analyze big data in healthcare and thus help find solutions for effective patient management. Using AI in everyday practice may provide accurate diagnostic algorithms and personalize patient management [9,10].
AI may reduce medical error rates and prevent discrepancies in the interpretation of diagnostic data [7]. Moreover, AI can be used to support cost-effective clinical decision making, to develop healthcare recommender systems, to recognize emotions from physiological signals, and to monitor patients [3,11,12]. It can also significantly reduce the burden on healthcare workers, maximizing their time and expertise for optimal patient care [13].
AI has already been used in diagnostics for “object detection” (lesion localization), “object segmentation” (determination of the contours and boundaries of the lesion), and “object classification” (malignant or benign) [14,15]. This makes AI useful in radiology, where large datasets are processed [16]. AI interprets breast cancer images, including lymph node metastases [17,18,19]. Analyzing chest radiographs with AI helps to detect lung cancer, metastases, tuberculosis, pneumonia, and diffuse lung diseases [20,21]. The automatic detection and segmentation of brain metastases, as well as prostate cancer detection on magnetic resonance imaging (MRI), is another field for AI utilization [22,23]. In ophthalmology, AI helps to diagnose diabetic retinopathy, age-related macular degeneration, glaucoma, and other ophthalmic disorders [24,25,26,27]. In traumatology, neural networks surpassed orthopedic surgeons in the X-ray diagnostics of intertrochanteric fractures of the proximal femur. Deep convolutional neural networks have significant potential for fracture screening on plain radiographs when orthopedic surgeons are not available in emergency situations [28]. AI allows detection of early-stage melanoma on skin images [29]. Neural networks are as effective as certified dermatologists in differentiating benign from malignant neoplasms on photographic and dermatoscopic images [30]. AI can be used to evaluate electrocardiograms [31] and ultrasound images [32], and in pathomorphology [33] and genomics [34]. AI is a promising tool that can be used to trace contacts during a pandemic, to improve pneumonia diagnostics [35], or to monitor COVID-19 patients [36].
Two areas where AI-based diagnostics seems promising are the analysis of arterial atherosclerotic images and lower limb venous disease. We performed a literature search using the following keywords: “artificial intelligence”, “deep machine learning”, “artificial neural network”, “convolutional neural network”, “telehealth”, “peripheral artery disease”, “abdominal aortic aneurysm”, “deep venous thrombosis”, “pulmonary embolism”, “venous thromboembolism”, “chronic venous disease”, “varicose veins”, “venous ulcer”, “vascular surgery”, “vascular medicine”, “angiology”, and “phlebology”. We then assessed all the articles for their eligibility. The reference lists of all articles related to AI in vascular management were searched for additional sources that contributed to the field.
The present narrative review reports the current state of the art of AI in the management of arterial disease, venous thromboembolism (VTE), and chronic venous disease (CVD).

2. Artificial Intelligence in Arterial Disease

AI is still not widely used in vascular medicine, so relatively few publications can be found in the literature [37]. However, various AI algorithms are currently being developed in this field. Vascular segmentation is challenging because vessels are highly variable in morphology, size, and curvature. Nevertheless, AI can help in segmentation and pattern recognition, thereby improving diagnostic efficiency and reducing the time spent on analyzing data.
AI opens up many opportunities in vascular surgery, including the management and analysis of medical data, and the development of expert systems for prediction and decision making. It can also be used for patient care, in education and training of vascular surgeons, and as a health information and surveillance system for research [38].
Kurugol S. et al. presented a tool to assess aorta morphology and aortic calcium plaques on CT scans. The authors computed the agreement between the proposed algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01 [39].
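For readers unfamiliar with the metric, the short sketch below shows how a Dice coefficient of the kind reported above can be computed from two binary segmentation masks; the toy masks are illustrative assumptions, not data from the cited study.

```python
# Illustrative computation of the Dice coefficient reported above, comparing an
# algorithmic segmentation mask with an expert mask (toy arrays, not study data).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

expert = np.zeros((64, 64), dtype=bool); expert[20:45, 20:45] = True
algo   = np.zeros((64, 64), dtype=bool); algo[22:46, 21:44]  = True
print(f"Dice coefficient: {dice_coefficient(algo, expert):.3f}")
```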
Graffy P.M. et al. applied instance segmentation with a convolutional neural network (Mask R-CNN) to a dataset of 9914 non-contrast CT scans from 9032 consecutive asymptomatic adults who had undergone screening colonography. The resulting fully automated abdominal aortic calcification scoring tool allows any non-contrast abdominal CT to be assessed for cardiovascular risk [40].
AI can also help with segmentation analysis of ultrasound, CT, and magnetic resonance imaging (MRI) images in patients with carotid artery stenosis [41,42]. Caetano Dos Santos F.L. et al. developed a tool for the segmentation and analysis of atherosclerosis in the extracranial carotid arteries, using a dataset of 59 randomly chosen head-and-neck CTA scans. The algorithm, based mainly on detection of the carotid arteries, delineation of the vascular wall, and extraction of the atherosclerotic plaque, was successful in 83% of stenoses over 50%. Specificity and sensitivity were 25% and 83%, respectively, with an overall accuracy of 71% [43].
Raffort J. et al., while searching the literature on AI tools for abdominal aortic aneurysm management, found several prognostic programs. The potential of AI was confirmed in predicting aneurysm growth and rupture, in-hospital and 30-day mortality, endograft complications, aneurysm evolution, stent graft deployment, and the need for re-intervention after endovascular aneurysm repair. Nevertheless, mainly small datasets were used, while machine learning approaches require large databases for learning and training. The lack of external validation is another pitfall, underlining the need for multicenter registries [44].
Dehmeshki J. et al. presented a computer-aided detection tool that detected arteries based on a 3D region-growing method and a fast 3D morphology operation. They also developed a computer-aided measurement system to measure artery diameters from the detected vessel centerline. The system was tested on phantom data and on fifteen CTA datasets from patients with peripheral arterial disease (PAD). A stenosis detection accuracy of 88% was achieved, with a measurement error of 8% [45].
Ross E.G. et al. used machine learning algorithms to analyze electronic health records. Data from 7686 PAD patients at two tertiary hospitals were used to train predictive models. The models confirmed the ability of machine learning algorithms to identify patients with peripheral arterial atherosclerosis, favoring early detection of subjects at risk of serious cardiac and cerebrovascular events [46].
AI has also been confirmed as an extremely useful teaching tool, simulating different clinical cases. Virtual reality simulators are already available to train young professionals in basic endovascular skills [47,48].
AI is also used in apps designed for use by patients themselves. Such applications analyze photographic images of the legs and help to identify early signs of diabetic foot syndrome [49]. Ohura N. et al. compared four architectures for building wound segmentation convolutional neural networks (CNNs). The best results were shown by U-Net, which demonstrated an area under the curve of 0.997, specificity of 0.943, and sensitivity of 0.993. Such tools may be applied to the diagnostics of arterial and venous leg ulcers as well as pressure ulcers [50].

3. Artificial Intelligence in Venous Thromboembolism

Clinical decisions in patients with pulmonary embolism (PE) can be based on AI-supported analysis of multi-slice computed tomography angiography (MSCT angiography) images of the pulmonary arteries [51,52]. For these patients, timely diagnosis is crucial to save lives, but in routine practice, PE is still one of the most commonly missed diagnoses [53]. Deep machine learning to detect PE on MSCT angiograms has been demonstrated to be a valuable solution [54].
The first attempts to detect PE using neural networks were made in the early 1990s. Patil S. et al. hypothesized that computerized pattern recognition could accurately estimate the probability of PE based on readily available clinical characteristics. Medical history data, physical examination findings, ECG, chest X-ray scans, and arterial blood gases of patients with suspected acute PE were fed into a back-propagation neural network. Study data were obtained from 1213 patients in a prospective study on PE diagnostics. They were divided into training group A (n = 606) and test group B (n = 607); the roles were then swapped, with group B (n = 607) used for training and group A (n = 606) for testing. Receiver operating characteristic (ROC) curves were constructed for the clinical assessments made by specialists and for the neural network in groups A and B. The areas under the corresponding ROC curves were 0.7450, 0.7477, and 0.7324, with no significant differences between them. Thus, neural networks were able to predict the clinical probability of PE with an accuracy comparable to that of experienced clinicians [54].
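The areas under the ROC curve reported in such studies can be computed from predicted probabilities and confirmed diagnoses as sketched below; the labels and scores are synthetic stand-ins, and scikit-learn is assumed as the tooling, neither being part of the original 1993 study.

```python
# Sketch of how an area under the ROC curve could be computed from predicted PE
# probabilities and final diagnoses (synthetic values, not study data).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=607)                        # 1 = PE confirmed
# Simulated network outputs: mildly informative probabilities
y_score = np.clip(0.35 * y_true + rng.normal(0.35, 0.2, size=607), 0, 1)

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUROC: {auc:.4f}")   # comparable in spirit to the 0.73-0.75 values above
```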
Huang S.C. et al. used a deep learning model (a 77-layer 3D convolutional neural network) capable of detecting signs of PE on computed tomography pulmonary angiography (CTPA) images of the pulmonary arteries while simultaneously interpreting the data. The researchers retrospectively collected 1797 images from 1773 patients and created training (1461 images from 1414 patients), validation (167 images from 162 patients), and hold-out test sets (169 images from 163 patients). Stratified random sampling was used to create the validation and test sets to ensure an equal number of positive and negative cases, and there was no patient overlap between the sets. When tested, the deep learning model achieved an AUROC of 0.84 for the automatic detection of PE signs on the test set. Thus, the possibility of using deep learning to evaluate complex radiographic data from CTPA angiograms to detect PE was confirmed [55].
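A stratified, patient-level split of the kind described above can be sketched as follows; the patient identifiers, labels, and use of scikit-learn are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of a stratified, patient-level split: stratify on the PE label so
# positive and negative cases are balanced across sets, and split by patient so no
# patient appears in more than one set (synthetic IDs and labels, not study data).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
patient_ids = np.arange(1773)
patient_label = rng.integers(0, 2, size=1773)   # 1 = PE present on the patient's CTPA

# First carve out a hold-out test set, then a validation set, both stratified by label
train_ids, test_ids, train_y, test_y = train_test_split(
    patient_ids, patient_label, test_size=163, stratify=patient_label, random_state=0)
train_ids, val_ids, train_y, val_y = train_test_split(
    train_ids, train_y, test_size=162, stratify=train_y, random_state=0)

print(len(train_ids), len(val_ids), len(test_ids))
print("positive fraction per set:",
      train_y.mean().round(2), val_y.mean().round(2), test_y.mean().round(2))
```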
Tajbakhsh N. et al. presented a computer-aided detection method for PE on CTPA images. They noted that, despite acceptable sensitivity, existing computer detection systems generate a large number of false-positive results, which places an additional burden on the radiologists who have to analyze them. The ability of convolutional neural networks (a type of neural network specialized for image classification) to eliminate such false conclusions was investigated. Developing the “correct” representation of the image is important for the accuracy of AI when analyzing an object in 3D images. For this purpose, a multi-planar image representation of emboli aligned along the vessels was developed. This representation provides three advantages: (1) compactness, i.e., a concise summarization of the 3D contextual information around the embolus in two image channels; (2) consistency, i.e., automatic alignment of the embolus in the two-channel images according to the orientation of the affected vessel; and (3) extensibility, i.e., natural support for data augmentation when training neural networks. The method was tested on a set of 121 CTPA angiograms with a total of 326 emboli. The sensitivity reached 83% with two false-positive results [56].
The important role of AI-based tools in the recognition of PE signs on CTPA images was also confirmed by other researchers [57,58,59,60].
Deep learning is also used in cancer patients, who are at high risk of VTE [61]. Randomized studies have shown that prophylactic doses of anticoagulants reduce VTE rates in cancer patients by about half [62]. However, anticoagulant therapy is also associated with a potentially high risk of bleeding [63]. The decision to use anticoagulants for the prevention of cancer-related VTE should ideally be based on an effective risk stratification strategy [64].
Pabinger I. et al., from the University of Vienna, developed and tested an AI-based computer predictive model of VTE in outpatients with cancer. They used data from 1737 patients in the CATS study (a prospective single-center observational cohort with a baseline biobank) who had recently been diagnosed with active cancer or with disease progression after complete or partial remission. Patients with a malignancy, except primary brain tumors, or with lymphoma were selected. Only tumor site, which is the most heavily weighted component of the Khorana score, and D-dimer were retained for training the model [65].
To test the model’s performance, demographic and laboratory data from a multinational cohort study were used to identify cancer patients at high risk of VTE [66]. With the threshold for the predicted cumulative 6-month risk of VTE set at 10%, model sensitivity was 33% (95% confidence interval (CI) 23–47) and specificity was 84% (95% CI 83–87); the positive predictive value was 12% (95% CI 8–16), and the negative predictive value was 95% (95% CI 94–96). With the threshold set at 15%, sensitivity was 15% (95% CI 8–24) and specificity was 96% (95% CI 95–97); the positive predictive value was 18% (95% CI 9–29), and the negative predictive value was 95% (95% CI 94–96). This shows that the model may help to identify patients eligible for pharmacological thromboprophylaxis [65,67].
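The sensitivity, specificity, and predictive values quoted above follow directly from the confusion matrix obtained once predicted risks are dichotomized at a threshold; the sketch below illustrates this calculation on synthetic risks and outcomes, not on the CATS data.

```python
# Sketch of how sensitivity, specificity, PPV and NPV are derived once the model's
# predicted 6-month VTE risk is dichotomized at a threshold (e.g., 10% or 15%).
# Risk values and outcomes below are synthetic, not CATS data.
import numpy as np

def threshold_metrics(risk: np.ndarray, event: np.ndarray, threshold: float) -> dict:
    pred = risk >= threshold
    tp = np.sum(pred & (event == 1))
    fp = np.sum(pred & (event == 0))
    fn = np.sum(~pred & (event == 1))
    tn = np.sum(~pred & (event == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

rng = np.random.default_rng(3)
risk = rng.beta(2, 20, size=1737)                    # skewed toward low predicted risk
event = rng.binomial(1, np.clip(risk * 1.5, 0, 1))   # events loosely track predicted risk

for thr in (0.10, 0.15):
    print(thr, {k: round(v, 2) for k, v in threshold_metrics(risk, event, thr).items()})
```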
Huang C. et al. developed a fully automatic method for determining DVT extension, in which AI was used to detect the proximal level of deep vein thrombosis (DVT) on contrast-enhanced MRI images. Images from 58 patients with recently diagnosed lower limb DVT were analyzed. A total of 5388 image slices were obtained, of which 2683 showed thrombotic masses. The boundaries of the blood clots were manually delineated by radiologists on the MRI images, and a deep learning neural network was then trained. The basic principle of operation is the segmentation of the thrombosis boundaries: a deep learning network with an encoder–decoder architecture was designed for DVT segmentation. It took about 1.5 s for this model to identify the thrombus extension and the vein segments with thrombosis on an MRI image. The average Dice similarity coefficient (DSC) for the 58 patients was 0.74 ± 0.17, and the overall average DSC was 0.79 (range 0–0.91). The results showed that the proposed method is relatively effective and quick; if further improved, it may help clinicians to evaluate DVT quickly and objectively [68].
Willan J. et al. confirmed that AI may be applied for risk stratification in patients with suspected DVT [69]. A neural network was trained to stratify the probability of DVT using data from 11,490 consecutive cases, including 7080 cases for which all data, i.e., Wells score, D-dimer, and duplex ultrasound, were available. Compared with the existing algorithm, the network was able to exclude DVT without the need for ultrasound scanning in more patients, while maintaining low false-negative rates. After a preliminary fast and reliable AI evaluation, patients with symptoms suggestive of DVT can then be sent to the hospital for examination by a vascular specialist to confirm or exclude thrombosis [70]. Traditionally, the Wells score has been used to assess symptoms and identify patients with a high probability of DVT [71]. AI can become a powerful synergistic tool, together with D-dimer assessment, for better prioritization of ultrasound scanning and of consequent disease management [72,73,74,75].
Deso S. et al. developed a CNN to identify 23 different types of inferior vena cava (IVC) filters and to support the diagnosis of related complications. For each type of caval filter, a database of radiographs and CT scans was collected. A wireframe and storyboard were created, and the software was developed using HTML5/CSS-compliant code [76].
Ni J.C. et al. used deep learning for automated classification of IVC filter types on radiographs. They took 1375 cropped radiographic images of 14 types of IVC filters, with 139 images for a test set. The CNN classification model achieved an F1 score of 0.97 (0.92–0.99) for the test set overall and of 1.00 for 10 of 14 individual filter types. Of the 139 test set images, 4 (2.9%) were misidentified, all mistaken for other filter types that appeared highly similar [77].
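The overall and per-class F1 scores reported in such multi-class classification studies can be computed as sketched below; the labels are synthetic and scikit-learn is assumed, so this is an illustration of the metric rather than the authors' code.

```python
# Illustrative computation of an overall and per-class F1 score for a multi-class
# filter-type classifier, using scikit-learn (toy labels, not the study's test set).
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)
n_types = 14
y_true = rng.integers(0, n_types, size=139)
y_pred = y_true.copy()
flip = rng.choice(139, size=4, replace=False)       # misclassify 4 of 139 images
y_pred[flip] = (y_pred[flip] + 1) % n_types

print("overall (macro) F1:", round(f1_score(y_true, y_pred, average="macro"), 2))
print("per-class F1:", np.round(f1_score(y_true, y_pred, average=None), 2))
```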

4. Artificial Intelligence in Chronic Venous Disease

CVD is a highly prevalent disease [78] that severely impacts both quality of life and national healthcare budgets [79]. Being a chronic condition, CVD requires permanent follow-up, which is not easy for many patients, especially in rural regions where access to vascular care is limited. For rural residents, AI-supported diagnostics seems to be a good option [80,81]. Automatic methods based on AI may be applied to detect the presence and determine the clinical class of CVD. Nevertheless, data on using AI in this field are scarce.
Fukaya E. et al. used machine learning to find genetic risk factors for varicose veins in 493,519 people from the UK Biobank. In addition, a genome-wide association study of varicose veins was carried out in 337,536 people, followed by a quantitative analysis of loci and expression pathways. They used a gradient boosting machine model, which works by iteratively building new decision trees that correct the prediction errors of the existing ensemble. The relationship between genotype and the presence of varicose veins was tested using a logistic model. The researchers found that taller persons are at a higher risk of developing varicose veins [82].
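The gradient boosting principle mentioned above (an ensemble of shallow trees, each fitted to the residual errors of its predecessors) is illustrated in the sketch below on synthetic height, age, and BMI features; the data, outcome model, and scikit-learn implementation are assumptions for demonstration only and do not reproduce the study's genetic pipeline.

```python
# Minimal illustration of gradient boosting: successive shallow decision trees are
# added, each fitted to the errors of the current ensemble. Features and labels are
# synthetic stand-ins, not UK Biobank data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 5000
height_cm = rng.normal(170, 10, size=n)
age = rng.integers(30, 80, size=n)
bmi = rng.normal(26, 4, size=n)
X = np.column_stack([height_cm, age, bmi])

# Synthetic outcome loosely tied to height, echoing the reported association
logit = 0.04 * (height_cm - 170) + 0.01 * (age - 55) - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbm = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
gbm.fit(X_tr, y_tr)
print("held-out accuracy:", round(gbm.score(X_te, y_te), 3))
print("feature importances (height, age, bmi):", np.round(gbm.feature_importances_, 2))
```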
Another promising area for AI is varicose vein recurrence risk estimation after invasive procedures. Bouharati I. et al. analyzed risk factors for recurrence, such as age, sex, obesity, genetic predisposition, inadequate diagnosis, double trunk of the great saphenous vein, double trunk of the small saphenous vein, neovascularization, technical failures, and time from procedure. A CNN system was constructed with probable causes of varicose recurrence as input variables and recurrence rate as the output variable. To train neural networks, data on 62 patients who had undergone invasive treatment were used, but no results of how the system performed were published [83].
Artificial neural networks may also predict the healing time of venous ulcers [84], thereby helping health professionals to customize treatment and, at best, improve the patient’s quality of life [85,86,87]. Taylor R.J. et al. retrospectively assessed data on 325 patients with 345 venous ulcers. A simple ANN implemented in a computer program was trained with 45 risk factors as input and ulcer healing time as output. After training, the ANN accurately predicted the healing time in 68% of cases. The AI also identified the most important risk factors for ulcer healing, among them a previous history of venous ulcers, profuse ulcerative exudate, high body mass index, a large initial skin defect, age, and male sex. The neural network confirmed its ability to predict which ulcers may be resistant to standardized treatment [84].
Bhavani R. et al. used AI to determine venous ulcer stages on photographic images. They obtained data from 150 patients; from each patient, 5–15 photographs were taken, giving a total of 1770 images for training and 810 for testing. The neural network operated in four stages. Images were first pre-processed to remove flash-light reflections. Contour segmentation of the ulcer surface was then carried out, and the analysis was performed using a multidimensional convolutional neural network. During the feature extraction stage, features such as homogeneity, color, texture, and depth were analyzed. This tool had an average accuracy of 99.55%, with a specificity of 98.06% and a sensitivity of 95.66% [88,89].
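A rough idea of the reflection-removal and contour-segmentation steps described above is sketched below with OpenCV on a synthetic image; the thresholds, inpainting step, and hand-crafted features are illustrative assumptions and do not reproduce the authors' multidimensional CNN pipeline.

```python
# Rough sketch of pre-processing (specular highlight removal) and contour
# segmentation of a lesion, on a synthetic image; all thresholds and kernel
# sizes are illustrative assumptions only.
import cv2
import numpy as np

# Synthetic "photo": dark elliptical lesion on lighter skin, with a bright flash spot
img = np.full((256, 256, 3), 180, dtype=np.uint8)
cv2.ellipse(img, (128, 128), (60, 40), 0, 0, 360, (70, 60, 120), -1)
cv2.circle(img, (100, 100), 8, (255, 255, 255), -1)          # flash reflection

# 1) Suppress specular reflections by inpainting the brightest pixels
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, glare = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
cleaned = cv2.inpaint(img, glare, 3, cv2.INPAINT_TELEA)

# 2) Contour segmentation of the lesion surface (Otsu threshold on the cleaned image)
gray2 = cv2.cvtColor(cleaned, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray2, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 3) Simple hand-crafted features (area, mean colour) that a downstream classifier could use
lesion = max(contours, key=cv2.contourArea)
area = cv2.contourArea(lesion)
mean_bgr = cv2.mean(cleaned, mask=mask)[:3]
print(f"lesion area: {area:.0f} px, mean colour (B,G,R): {tuple(round(c) for c in mean_bgr)}")
```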
The most burdensome venous pathology is primary varicose veins, which affect 27–31% of the general population in rural settlements [90]. For diagnosing and managing varicose veins in residents of remote areas, AI-based applications seem to be good tools. To date, few studies have been published assessing AI-based automatic methods for varicose vein detection.
Shi Q. et al. used 221 photographic images of the lower limbs to train a neural network to identify CVD classes according to the CEAP classification. They mapped low-level image features onto middle-level semantic features using a concept classifier, and a multi-scale semantic model was then created to represent the images with rich semantics. Finally, a scene classifier was trained on an optimized feature subset and used to determine the CVD clinical class. The reported accuracy was 90.92% [91].
Hoobi M.M. et al. conducted a similar study based on the analysis of only 100 photographic images (60 with varicose veins, 40 with no CVD). The system combined more than one type of distance metric with a probabilistic neural network to produce a CVD diagnostic system with high accuracy; 60 new images were used for testing. The model used the shape, size, and texture of the skin with varicose veins and reported an accuracy of 94% [92].
Most vascular AI tools are based on neural networks trained with a limited number of images (Table 1). It is generally considered that 1000 cases are needed just to build a system, whereas, for practical use, the number of training images has to be closer to 100,000 [14]. This is especially true for CVD, which is classified into seven clinical classes, of which varicose veins represent only one. Moreover, even with large training samples, the recognition result strongly depends on the conditions under which the photographs are taken, including the image resolution, the size of the lesion relative to the total image area, the position of the leg, and the degree of hair coverage.

5. Conclusions

AI has been demonstrated to be an extremely helpful tool in healthcare, particularly at times when healthcare resources must be optimized, such as during a pandemic or in rural settings. The application of AI in healthcare has the potential to bring financial benefits and savings. However, according to this literature search, these services need further validation before they are used routinely in clinical practice, making this topic of great interest for the healthcare community. This is particularly true for CVD, a condition with a high impact on society and on healthcare costs.

Author Contributions

Conceptualization, X.B., I.Z., S.S. and S.G.; methodology, X.B., I.Z. and S.G.; resources, X.B., S.S. and M.F.; writing—original draft preparation, X.B., S.S. and M.F.; writing—review and editing, I.Z. and S.G.; project administration, S.S., I.Z. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Reisman, J.; Wexler, A. Covid-19: Exposing the Lack of Evidence-Based Practice in Medicine. Hastings Cent Rep. 2020, 50, 77–78. [Google Scholar] [CrossRef]
  2. Weisgrau, S. Issues in rural health: Access, hospitals, and reform. Health Care Financ. Rev. 1995, 17, 1. [Google Scholar]
  3. World Health Organization. Regional Office for Europe. Future of Digital Health Systems: Report on the WHO Symposium on the Future of Digital Health Systems in the European Region: Copenhagen, Denmark, 6–8 February 2019; pp. 5–27. Available online: https://apps.who.int/iris/bitstream/handle/10665/329032/9789289059992-eng.pdf (accessed on 17 July 2021).
  4. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69, S36–S40. [Google Scholar] [CrossRef]
  5. Lee, J.G.; Jun, S.; Cho, Y.W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. 2017, 18, 570–584. [Google Scholar] [CrossRef] [Green Version]
  6. Murphy, K.P. Machine Learning: A Probabilistic Perspective, 1st ed.; The MIT Press: Cambridge, UK, 2012; p. 25. [Google Scholar]
  7. Park, S.H.; Do, K.-H.; Kim, S.; Park, J.H.; Lim, Y.-S. What should medical students know about artificial intelligence in medicine? J. Educ. Eval. Health Prof. 2019, 16, 18. [Google Scholar] [CrossRef]
  8. London, A.J. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Cent. Rep. 2019, 49, 15–21. [Google Scholar] [CrossRef]
  9. Cutillo, C.M.; Sharma, K.R.; Foschini, L.; Kundu, S.; Mackintosh, M.; Mandl, K.D.; MI in Healthcare Workshop Working Group. Machine intelligence in healthcare-perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digit. Med. 2020, 3, 47. [Google Scholar] [CrossRef] [Green Version]
  10. Handelman, G.S.; Kok, H.K.; Chandra, R.V.; Razavi, A.H.; Lee, M.J.; Asadi, H. eDoctor: Machine learning and the future of medicine. J. Intern. Med. 2018, 284, 603–619. [Google Scholar] [CrossRef]
  11. Shu, L.; Xie, J.; Yang, M.; Li, Z.; Li, Z.; Liao, D.; Xu, X.; Yang, X. A Review of Emotion Recognition Using Physiological Signals. Sensors 2018, 18, 2074. [Google Scholar] [CrossRef] [Green Version]
  12. Tran, T.N.T.; Felfernig, A.; Trattner, C.; Holzinger, A. Recommender systems in the healthcare domain: State-of-the-art and research issues. J. Intell. Inf. Syst. 2021, 57, 171–201. [Google Scholar] [CrossRef]
  13. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  14. Fujita, H. AI-based computer-aided diagnosis (AI-CAD): The latest review to read first. Radiol. Phys. Technol. 2020, 13, 6–19. [Google Scholar] [CrossRef]
  15. Currie, G.M. Intelligent Imaging: Artificial Intelligence Augmented Nuclear Medicine. J. Nucl. Med. Technol. 2019, 47, 217–222. [Google Scholar] [CrossRef]
  16. SFR-IA Group; CERF; French Radiology Community. Artificial intelligence and medical imaging 2018: French Radiology Community white paper. Diagn. Interv. Imaging 2018, 99, 727–742. [Google Scholar] [CrossRef]
  17. Herent, P.; Schmauch, B.; Jehanno, P.; Dehaeneb, O.; Saillarda, C.; Balleyguierc, C.; Arfi-Rouchec, J.; Jégoua, S. Detection and characterization of MRI breast lesions using deep learning. Diagn. Interv. Imaging 2019, 100, 219–225. [Google Scholar] [CrossRef]
  18. Rodriguez-Ruiz, A.; Lång, K.; Gubern-Merida, A.; Broeders, M.; Gennaro, G.; Clauser, P.; Helbich, T.H.; Chevalier, M.; Tan, T.; Mertelmeier, T.; et al. Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison with 101 Radiologists. J. Natl. Cancer Inst. 2019, 111, 916–922. [Google Scholar] [CrossRef]
  19. Bejnordi, B.E.; Veta, M.; van Diest, P.J.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; van der Laak, J.A.W.M.; CAMELYON16 Consortium. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer. JAMA 2017, 318, 2199–2210. [Google Scholar] [CrossRef]
  20. Gampala, S.; Vankeshwaram, V.; Gadula, S.S.P. Is Artificial Intelligence the New Friend for Radiologists? A Review Article. Cureus 2020, 12, e11137. [Google Scholar] [CrossRef]
  21. Liu, X.; Zhou, H.; Hu, Z.; Jin, Q.; Wang, J.; Ye, B. Clinical Application of Artificial Intelligence Recognition Technology in the Diagnosis of Stage T1 Lung Cancer. Zhongguo Fei Ai Za Zhi 2019, 22, 319–323. [Google Scholar] [CrossRef]
  22. Charron, O.; Lallement, A.; Jarnet, D.; Noblet, V.; Clavier, J.-B.; Meyer, P. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput. Biol. Med. 2018, 95, 43–54. [Google Scholar] [CrossRef]
  23. Hamm, C.A.; Beetz, N.L.; Savic, L.J.; Penzkofer, T. Artificial intelligence and radiomics in MRI-based prostate diagnostics. Radiologe 2020, 60, 48–55. [Google Scholar] [CrossRef]
  24. Shibata, N.; Tanito, M.; Mitsuhashi, K.; Fujino, Y.; Matsuura, M.; Murata, H.; Asaoka, R. Development of a deep residual learning algorithm to screen for glaucoma from fundus photography. Sci. Rep. 2018, 8, 14665. [Google Scholar] [CrossRef] [Green Version]
  25. Oh, E.; Yoo, T.K.; Hong, S. Artificial Neural Network Approach for Differentiating Open-Angle Glaucoma From Glaucoma Suspect Without a Visual Field Test. Investig. Opthalmology Vis. Sci. 2015, 56, 3957–3966. [Google Scholar] [CrossRef] [Green Version]
  26. Ting, D.S.W.; Cheung, C.Y.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations with Diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  27. Balyen, L.; Peto, T. Promising Artificial Intelligence-Machine Learning-Deep Learning Algorithms in Ophthalmology. Asia Pac. J. Ophthalmol. 2019, 8, 264–272. [Google Scholar] [CrossRef]
  28. Urakawa, T.; Tanaka, Y.; Goto, S.; Matsuzawa, H.; Watanabe, K.; Endo, N. Detecting intertrochanteric hip fractures with orthopedist-level accuracy using a deep convolutional neural network. Skelet. Radiol. 2018, 48, 239–244. [Google Scholar] [CrossRef]
  29. Petrie, T.; Samatham, R.; Witkowski, A.M.; Esteva, A.; Leachman, S.A. Melanoma Early Detection: Big Data, Bigger Picture. J Investig. Dermatol. 2019, 139, 25–30. [Google Scholar] [CrossRef] [Green Version]
  30. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef]
  31. Ponomariov, V.; Chirila, L.; Apipie, F.M.; Abate, R.; Rusu, M.; Wu, Z.; Liehn, E.A.; Bucur, I. Artificial Intelligence versus Doctors’ Intelligence: A Glance on Machine Learning Benefaction in Electrocardiography. Discoveries (Craiova) 2017, 5, e76. [Google Scholar] [CrossRef]
  32. Al’Aref, S.J.; Anchouche, K.; Singh, G.; Slomka, P.J.; Kolli, K.K.; Kumar, A.; Pandey, M.; Maliakal, G.; van Rosendael, A.R.; Beecy, A.N.; et al. Clinical applications of machine learning in cardiovascular disease and its relevance to cardiac imaging. Eur. Heart J. 2019, 40, 1975–1986. [Google Scholar] [CrossRef]
  33. Wang, S.; Yang, D.M.; Rong, R.; Zhan, X.; Xiao, G. Pathology Image Analysis Using Segmentation Deep Learning Algorithms. Am. J. Pathol. 2019, 189, 1686–1698. [Google Scholar] [CrossRef] [Green Version]
  34. Zook, J.M.; Catoe, D.; McDaniel, J.; Vang, L.; Spies, N.; Sidow, A.; Weng, Z.; Liu, Y.; Mason, C.E.; Alexander, N.; et al. Extensive sequencing of seven human genomes to characterize benchmark reference materials. Sci. Data 2016, 3, 160025. [Google Scholar] [CrossRef] [Green Version]
  35. Li, M.; Lei, P.; Zeng, B.; Li, Z.; Yu, P.; Fan, B.; Wang, C.; Li, Z.; Zhou, J.; Hu, S.; et al. Coronavirus Disease (COVID-19): Spectrum of CT Findings and Temporal Progression of the Disease. Acad. Radiol. 2020, 27, 603–608. [Google Scholar] [CrossRef] [Green Version]
  36. Naseem, M.; Akhund, R.; Arshad, H.; Ibrahim, M.T. Exploring the Potential of Artificial Intelligence and Machine Learning to Combat COVID-19 and Existing Opportunities for LMIC: A Scoping Review. J. Prim. Care Community Health 2020, 11, 2150132720963634. [Google Scholar] [CrossRef]
  37. Rajasinghe, H.A.; Miller, L.E.; Chahwan, S.H.; Zamora, A.J. TOI 2. Underutilization of Artificial Intelligence by Vascular Specialists. J. Vasc. Surg. 2018, 68, e148–e149. [Google Scholar] [CrossRef] [Green Version]
  38. Raffort, J.; Adam, C.; Carrier, M.; Lareyre, F. Fundamentals in Artificial Intelligence for Vascular Surgeons. Ann. Vasc. Surg. 2020, 65, 254–260. [Google Scholar] [CrossRef]
  39. Kurugol, S.; Come, C.E.; Diaz, A.A.; Ross, J.C.; Kinney, G.L.; Black-Shinn, J.L.; Hokanson, J.E.; Budoff, M.J.; Washko, G.R.; Estepar, R.S.J.; et al. Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions. Med. Phys. 2015, 42, 5467–5478. [Google Scholar] [CrossRef]
  40. Graffy, P.M.; Liu, J.; O’Connor, S.; Summers, R.M.; Pickhardt, P.J. Automated segmentation and quantification of aortic calcification at abdominal CT: Application of a deep learning-based algorithm to a longitudinal screening cohort. Abdom. Radiol. (NY) 2019, 44, 2921–2928. [Google Scholar] [CrossRef]
  41. Gastounioti, A.; Kolias, V.; Golemati, S.; Tsiaparasa, N.N.; Matsakoua, A.; Stoitsisa, J.S.; Kadoglouc, N.P.E.; Gkekasc, C.; Kakisisc, J.D.; Liapis, C.D.; et al. CAROTID—A web-based platform for optimal personalized management of atherosclerotic patients. Comput. Methods Programs Biomed. 2014, 114, 183–193. [Google Scholar] [CrossRef]
  42. Kumar, P.K.; Araki, T.; Rajan, J.; Lairdd, J.R.; Nicolaidese, A.; Surifg, J.S.; Fellow AIMBE. State-of-the-art review on automated lumen and adventitial border delineation and its measurements in carotid ultrasound. Comput. Methods Programs Biomed. 2018, 163, 155–168. [Google Scholar] [CrossRef]
  43. Dos Santos, F.L.C.; Kolasa, M.; Terada, M.; Salenius, J.; Eskola, H.; Paci, M. VASIM: An automated tool for the quantification of carotid atherosclerosis by computed tomography angiography. Int. J. Cardiovasc. Imaging 2019, 35, 1149–1159. [Google Scholar] [CrossRef]
  44. Raffort, J.; Adam, C.; Carrier, M.; Ballaith, A.; Coscas, R.; Jean-Baptiste, E.; Hassen-Khodja, R.; Chakfé, N.; Lareyre, F. Artificial intelligence in abdominal aortic aneurysm. J. Vasc. Surg. 2020, 72, 321–333.e1. [Google Scholar] [CrossRef]
  45. Dehmeshki, J.; Ion, A.; Ellis, T.; Doenz, F.; Jouannic, A.-M.; Qanadli, S. Computer Aided Detection and measurement of peripheral artery disease. Stud. Health Technol. Inform. 2014, 205, 1153–1157. [Google Scholar]
  46. Ross, E.G.; Jung, K.; Dudley, J.T.; Li, L.; Leeper, N.J.; Shah, N.H. Predicting Future Cardiovascular Events in Patients With Peripheral Artery Disease Using Electronic Health Record Data. Circ. Cardiovasc. Qual. Outcomes 2019, 12, e004741. [Google Scholar] [CrossRef]
  47. Winkler-Schwartz, A.; Bissonnette, V.; Mirchi, N.; Ponnudurai, N.; Yilmaz, R.; Ledwos, N.; Siyar, S.; Azarnoush, H.; Karlik, B.; Del Maestro, R.F. Artificial Intelligence in Medical Education: Best Practices Using Machine Learning to Assess Surgical Expertise in Virtual Reality Simulation. J. Surg. Educ. 2019, 76, 1681–1690. [Google Scholar] [CrossRef]
  48. Aeckersberg, G.; Gkremoutis, A.; Schmitz-Rixen, T.; Kaiser, E. The relevance of low-fidelity virtual reality simulators compared with other learning methods in basic endovascular skills training. J. Vasc. Surg. 2019, 69, 227–235. [Google Scholar] [CrossRef]
  49. Hazenberg, C.E.V.B.; De Stegge, W.B.A.; Van Baal, S.G.; Moll, F.L.; Bus, S.A. Telehealth and telemedicine applications for the diabetic foot: A systematic review. Diabetes Metab. Res. Rev. 2020, 36, e3247. [Google Scholar] [CrossRef]
  50. Ohura, N.; Mitsuno, R.; Sakisaka, M.; Terabe, Y.; Morishige, Y.; Uchiyama, A.; Okoshi, T.; Shinji, I.; Takushima, A. Convolutional neural networks for wound detection: The role of artificial intelligence in wound care. J. Wound Care 2019, 28 (Suppl. 10), S13–S24. [Google Scholar] [CrossRef]
  51. Heit, J.A. The epidemiology of venous thromboembolism in the community. Arter. Thromb. Vasc. Biol. 2008, 28, 370–372. [Google Scholar] [CrossRef] [Green Version]
  52. Moore, A.J.E.; Wachsmann, J.; Chamarthy, M.R.; Panjikaran, L.; Tanabe, Y.; Rajiah, P. Imaging of acute pulmonary embolism: An update. Cardiovasc. Diagn. Ther. 2018, 8, 225–243. [Google Scholar] [CrossRef]
  53. Leung, A.N.; Bull, T.M.; Jaeschke, R.; Lockwood, C.J.; Boiselle, P.M.; Hurwitz, L.M.; James, A.H.; McCullough, L.B.; Menda, Y.; Paidas, M.J.; et al. An official American Thoracic Society/Society of Thoracic Radiology clinical practice guideline: Evaluation of suspected pulmonary embolism in pregnancy. Am. J. Respir. Crit. Care Med. 2011, 184, 1200–1208. [Google Scholar] [CrossRef]
  54. Patil, S.; Henry, J.W.; Rubenfire, M.; Stein, P.D. Neural network in the clinical diagnosis of acute pulmonary embolism. Chest 1993, 104, 1685–1689. [Google Scholar] [CrossRef]
  55. Huang, S.C.; Kothari, T.; Banerjee, I.; Chute, C.; Ball, R.L.; Borus, N.; Huang, A.; Patel, B.N.; Rajpurkar, P.; Irvin, J.; et al. PENet-a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging. NPJ Digit Med. 2020, 3, 61. [Google Scholar] [CrossRef] [Green Version]
  56. Tajbakhsh, N.; Gotway, M.B.; Liang, J. Computer-Aided Pulmonary Embolism Detection Using a Novel Vessel-Aligned Multi-planar Image Representation and Convolutional Neural Networks. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015. MICCAI 2015; Navab, N., Hornegger, J., Wells, W., Frangi, A., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9350. [Google Scholar] [CrossRef]
  57. Serpen, G.; Tekkedil, D.; Orra, M. A knowledge-based artificial neural network classifier for pulmonary embolism diagnosis. Comput. Biol. Med. 2008, 38, 204–220. [Google Scholar] [CrossRef]
  58. Özkan, H.; Osman, O.; Şahin, S.; Boz, A.F. A novel method for pulmonary embolism detection in CTA images. Comput. Methods Programs Biomed. 2014, 113, 757–766. [Google Scholar] [CrossRef]
  59. Park, S.C.; Chapman, B.E.; Zheng, B. A multistage approach to improve performance of computer-aided detection of pulmonary embolisms depicted on CT images: Preliminary investigation. IEEE Trans. Biomed. Eng. 2011, 58, 1519–1527. [Google Scholar] [CrossRef]
  60. Das, M.; Mühlenbruch, G.; Helm, A.; Bakai, A.; Salganicoff, M.; Stanzel, S.; Liang, J.; Wolf, M.; Günther, R.W.; Wildberger, J.E. Computer-aided detection of pulmonary embolism: Influence on radiologists’ detection performance with respect to vessel segments. Eur. Radiol. 2008, 18, 1350–1355. [Google Scholar] [CrossRef]
  61. Ay, C.; Pabinger, I.; Cohen, A.T. Cancer-associated venous thromboembolism: Burden, mechanisms, and management. Thromb. Haemost. 2017, 117, 219–230. [Google Scholar] [CrossRef] [Green Version]
  62. Ben Lustig, D.; Rodriguez, R.; Wells, P.S. Implementation and validation of a risk stratification method at The Ottawa Hospital to guide thromboprophylaxis in ambulatory cancer patients at intermediate-high risk for venous thrombosis. Thromb. Res. 2015, 136, 1099–1102. [Google Scholar] [CrossRef]
  63. Prandoni, P.; Lensing, A.W.A.; Piccioli, A.; Bernardi, E.; Simioni, P.; Girolami, B.; Marchiori, A.; Sabbion, P.; Prins, M.H.; Noventa, F.; et al. Recurrent venous thromboembolism and bleeding complications during anticoagulant treatment in patients with cancer and venous thrombosis. Blood 2002, 100, 3484–3488. [Google Scholar] [CrossRef] [Green Version]
  64. Ay, C.; Pabinger, I. VTE risk assessment in cancer. Who needs prophylaxis and who does not? Hamostaseologie 2015, 35, 319–324. [Google Scholar] [CrossRef]
  65. Pabinger, I.; van Es, N.; Heinze, G.; Posch, F.; Riedl, J.; Reitter, E.-M.; Nisio, M.D.; Cesarman-Maus, G.; Kraaijpoel, N.; Zielinski, C.C.; et al. A clinical prediction model for cancer-associated venous thromboembolism: A development and validation study in two independent prospective cohorts. Lancet Haematol. 2018, 5, e289–e298. [Google Scholar] [CrossRef]
  66. van Es, N.; Di Nisio, M.; Cesarman, G.; Kleinjan, A.; Otten, H.-M.; Mahé, I.; Wilts, I.T.; Twint, D.C.; Porreca, E.; Arrieta, O.; et al. Comparison of risk prediction scores for venous thromboembolism in cancer patients: A prospective cohort study. Haematologica 2017, 102, 1494–1501. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Ferroni, P.; Roselli, M.; Zanzotto, F.M.; Guadagni, F. Artificial intelligence for cancer-associated thrombosis risk assessment. Lancet Haematol. 2018, 5, e391. [Google Scholar] [CrossRef]
  68. Huang, C.; Tian, J.; Yuan, C.; Zeng, P.; He, X.; Chen, H.; Huang, Y.; Huang, B. Fully Automated Segmentation of Lower Extremity Deep Vein Thrombosis Using Convolutional Neural Network. Biomed Res. Int. 2019, 2019, 3401683. [Google Scholar] [CrossRef] [Green Version]
  69. Willan, J.; Katz, H.; Keeling, D. The use of artificial neural network analysis can improve the risk-stratification of patients presenting with suspected deep vein thrombosis. Br. J. Haematol. 2019, 185, 289–296. [Google Scholar] [CrossRef] [PubMed]
  70. Willan, J.; Keeling, D. Reducing the need for diagnostic imaging in suspected cases of deep vein thrombosis. Br. J. Haematol. 2019, 184, 682–684. [Google Scholar] [CrossRef] [Green Version]
  71. Geersing, G.J.; Zuithoff, N.P.A.; Kearon, C.; Anderson, D.R.; ten Cate-Hoek, A.J.; Eif, J.L.; Bates, S.M.; Hoes, A.W.; Kraaijenhagen, R.A.; Oudega, R.; et al. Exclusion of deep vein thrombosis using the Wells rule in clinically important subgroups: Individual patient data meta-analysis. BMJ 2014, 348, g1340. [Google Scholar] [CrossRef] [Green Version]
  72. Douma, R.A.; Le Gal, G.; Söhne, M.; Righini, M.; Kamphuisen, P.W.; Perrier, A.; Kruip, M.J.H.A.; Bounameaux, H.; Büller, H.R.; Roy, P.-M. Potential of an age adjusted D-dimer cut-off value to improve the exclusion of pulmonary embolism in older patients: A retrospective analysis of three large cohorts. BMJ 2010, 340, c1475. [Google Scholar] [CrossRef] [Green Version]
  73. Linkins, L.-A.; Ginsberg, J.S.; Bates, S.M.; Kearon, C. Use of different D-dimer levels to exclude venous thromboembolism depending on clinical pretest probability. J. Thromb. Haemost. 2004, 2, 1256–1260. [Google Scholar] [CrossRef] [PubMed]
  74. Wells, P.S.; Anderson, D.R.; Rodger, M.; Forgie, M.; Kearon, C.; Dreyer, J.; Kovacs, G.; Mitchell, M.; Lewandowski, B.; Kovacs, M.J. Evaluation of D-dimer in the diagnosis of suspected deep-vein thrombosis. N. Engl. J. Med. 2003, 349, 1227–1235. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. Wells, P.S. Integrated strategies for the diagnosis of venous thromboembolism. J. Thromb. Haemost. 2007, 5 (Suppl. 1), 41–50. [Google Scholar] [CrossRef] [PubMed]
  76. Deso, S.E.; Idakoji, I.A.; Muelly, M.C.; Kuo, W.T. Creation of an iOS and Android Mobile Application for Inferior Vena Cava (IVC) Filters: A Powerful Tool to Optimize Care of Patients with IVC Filters. Semin. Intervent. Radiol. 2016, 33, 137–143. [Google Scholar] [CrossRef] [Green Version]
  77. Ni, J.C.; Shpanskaya, K.; Han, M.; Lee, E.H.; Do, B.H.; Kuo, W.T.; Yeom, K.W.; Wang, D.S. Deep Learning for Automated Classification of Inferior Vena Cava Filter Types on Radiographs. J. Vasc. Interv. Radiol. 2020, 31, 66–73. [Google Scholar] [CrossRef]
  78. Ortega, M.A.; Fraile-Martínez, O.; García-Montero, C.; Álvarez-Mon, M.A.; Chaowen, C.; Ruiz-Grande, F.; Pekarek, L.; Monserrat, J.; Asúnsolo, A.; García-Honduvilla, N.; et al. Understanding Chronic Venous Disease: A Critical Overview of Its Pathophysiology and Medical Management. J. Clin. Med. 2021, 10, 3239. [Google Scholar] [CrossRef]
  79. Ma, H.; O’Donnell, T.F.; Rosen, N.A.; Iafrati, M.D. The real cost of treating venous ulcers in a contemporary vascular practice. J. Vasc. Surg. Venous Lymphat. Disord. 2014, 2, 355–361. [Google Scholar] [CrossRef]
  80. Drake, T.M.; Ritchie, J.E. The Surgeon Will Skype You Now: Advancements in E-clinic. Ann. Surg. 2016, 263, 636–637. [Google Scholar] [CrossRef]
  81. Korobkova, O.K. Problems of improving medical services in the rural areas of the Russian regions. Aktual’niye Problemy Ekonomiki i Prava 2015, 1, 179–186. [Google Scholar]
  82. Fukaya, E.; Flores, A.M.; Lindholm, D.; Gustafsson, S.; Zanetti, D.; Ingelsson, E.; Leeper, N.J. Clinical and Genetic Determinants of Varicose Veins. Circulation 2018, 138, 2869–2880. [Google Scholar] [CrossRef]
  83. Bouharati, I.; El-Hachmi, S.; Babouche, F.; Khenchouche, A.; Bouharati, K.; Bouharati, S. Radiology and management of recurrent varicose veins: Risk factors analysis using artificial neural networks. J. Med. Radiol. Pathol. Surg. 2018, 5, 1–5. [Google Scholar] [CrossRef]
  84. Taylor, R.; Taylor, A.; Smyth, J.V. Using an artificial neural network to predict healing times and risk factors for venous leg ulcers. J. Wound Care 2002, 11, 101–105. [Google Scholar] [CrossRef]
  85. Meulendijks, A.; De Vries, F.; Van Dooren, A.; Schuurmans, M.; Neumann, H. A systematic review on risk factors in developing a first-time Venous Leg Ulcer. J. Eur. Acad. Dermatol. Venereol. 2019, 33, 1241–1248. [Google Scholar] [CrossRef]
  86. Tan, K.H.M.; Luo, R.; Onida, S.; Maccatrozzo, S.; Davies, A.H. Venous Leg Ulcer Clinical Practice Guidelines: What is AGREEd? Eur. J. Vasc. Endovasc. Surg. 2019, 57, 121–129. [Google Scholar] [CrossRef] [Green Version]
  87. Wilson, E. Prevention and treatment of venous leg ulcers. Health Trends 1989, 21, 97. [Google Scholar]
  88. Bhavani, R.; Jiji, W. Varicose ulcer(C6) wound image tissue classification using multidimensional convolutional neural networks. Imaging Sci. J. 2019, 67, 1–11. [Google Scholar] [CrossRef]
  89. Bhavani, R.R.; Jiji, G.W. Image registration for varicose ulcer classification using KNN classifier. Int. J. Comput. Appl. 2018, 40, 88–97. [Google Scholar] [CrossRef]
  90. Zolotukhin, I.A.; Seliverstov, E.I.; Shevtsov, Y.N.; Avakiants, I.P.; Nikishkov, A.S.; Tatarintsev, A.M.; Kirienko, A.I. Prevalence and Risk Factors for Chronic Venous Disease in the General Russian Population. Eur. J. Vasc. Endovasc. Surg. 2017, 54, 752–758. [Google Scholar] [CrossRef] [Green Version]
  91. Shi, Q.; Chen, W.; Pan, Y.; Yin, S.; Fu, Y.; Mei, J.; Xue, Z. An Automatic Classification Method on Chronic Venous Insufficiency Images. Sci. Rep. 2018, 8, 17952. [Google Scholar] [CrossRef] [PubMed]
  92. Hoobi, M.M.; Qaswaa, A. Detection System of Varicose Disease using Probabilistic Neural Network. Int. J. Sci. Res. (IJSR) 2017, 6, 2591–2596. Available online: https://www.ijsr.net/get_abstract.php?paper_id=ART20173435 (accessed on 18 November 2021).
Table 1. AI tools based on learning with images.
| Authors, Year | Disease | AI Used for | Data Used for AI Learning | Principle of Operation | Number of Images | Performance Metrics | App Available Online |
|---|---|---|---|---|---|---|---|
| Kurugol S. et al., 2015 [39] | PAD | Aorta size calculation, morphology, mural calcification distributions | CT images | Convolutional neural networks (Mask R-CNN) | 2500 | Dice coefficient of 0.92 ± 0.01 | No |
| Caetano Dos Santos F.L. et al., 2015 [43] | Carotid artery stenosis | Segmentation and analysis of atherosclerotic lesions in extracranial carotid arteries | CTA images | Convolutional neural networks | 59 | 71% accuracy | Yes |
| Raffort J. et al., 2015 [44] | Abdominal aortic aneurysm (AAA) | Quantitative analysis and characterization of AAA morphology, geometry, and fluid dynamics | CT images | Convolutional neural networks | 40 | 93% accuracy | No |
| Dehmeshki J. et al., 2014 [45] | PAD | Arterial network, artery centerline detection, and distortion correction | CTA images | Computer-aided detection system | 15 | 88% accuracy | No |
| Huang S.C. et al., 2020 [55] | VTE | PE detection | CTPA images | Convolutional neural network | 1797 | AUROC score of 0.84 | No |
| Huang C. et al., 2019 [68] | DVT | Proximal level of DVT detection | Contrast-enhanced MRI images | Convolutional neural network | 5388 | Dice coefficient of 0.79 | No |
| Ni J.C. et al., 2020 [77] | DVT | Identification of different inferior vena cava filters | Radiographic images | Deep-learning convolutional neural network | 1375 | F1 score of 0.97 | Yes |
| Rajathi V., Bhavani R.R., Wiselin Jiji G., 2019 [88] | CVD | Venous ulcer detection | Venous ulcer photos | Region growing, K-means, kNN | 1770 | 94.85% accuracy | No |
| Shi Q. et al., 2018 [91] | CVD | Varicose vein detection | Lower limb photos | Multi-scale semantic model constructed to form the image representation with rich semantics | 221 | 90.92% accuracy | No |
| Hoobi M.M., Qaswaa A., 2017 [92] | CVD | Varicose vein detection | Lower limb photos | Probabilistic neural network | 100 | 94% accuracy | No |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
