Review

Federated Learning in Ocular Imaging: Current Progress and Future Direction

1 Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
2 Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
* Author to whom correspondence should be addressed.
Current address: CUHK Eye Centre, 4/F Hong Kong Eye Hospital, 147K Argyle Street, Kowloon, Hong Kong SAR, China.
Diagnostics 2022, 12(11), 2835; https://doi.org/10.3390/diagnostics12112835
Submission received: 31 October 2022 / Revised: 11 November 2022 / Accepted: 14 November 2022 / Published: 17 November 2022
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract:
Advances in artificial intelligence deep learning (DL) have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. In order to achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a “centralised location”. However, such a data transferring process could raise practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need for sharing confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and reduce the potential risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.

1. Introduction

Artificial intelligence (AI), particularly deep learning (DL), has been widely adopted in recent years to optimise work processes in medical fields. Research and development in DL have grown significantly in capability and popularity across disease screening programs, automated diagnosis, treatment or prognosis prediction, and smart health care, showing great potential to improve clinical workflows [1,2]. In ophthalmology, DL algorithms have been developed to detect and classify various ocular diseases such as diabetic retinal diseases [3,4], age-related macular degeneration [5,6], retinopathy of prematurity [7], and glaucomatous optic neuropathy [8,9,10], using image-based data such as retinal photographs, optical coherence tomography (OCT) images, and OCT angiography (OCTA) images. DL algorithms have also shown the ability to detect and predict systemic diseases such as diabetes [11], chronic kidney disease [12], cardiovascular events [13,14], and Alzheimer's disease [15] from retinal photographs. Furthermore, DL-based ocular image analysis can be incorporated into telemedicine to identify and monitor eye diseases for patients in community clinics and primary care [16].
DL is data-driven and requires extensive and diverse training datasets to improve robustness and generalisability. Multicentre studies are becoming increasingly important in developing DL algorithms that remain feasible across different real-world settings [17,18]. Currently, the most common paradigm for such collaborative multicentre projects is "centralised learning", in which data from different sites is transferred and pooled into a centralised location in accordance with inter-site agreements. However, big data collection and resource sharing raise practical concerns, and resolving the associated ethical and privacy-related issues often takes time. In medical imaging, even anonymised raw images contain patients' private information. For instance, retinal images are as unique as fingerprints [19] and highly sensitive, as age [20], sex [21], cardiovascular risk factors [13], or mortality risk [22] can be predicted from fundus photographs or OCT scans. Human faces can even be reconstructed from de-identified magnetic resonance imaging (MRI) scans [23].
Hence, to ensure data privacy and reduce the potential risk of raw data leakage in the conventional paradigm (i.e., centralised learning), the “distributed learning” paradigm [24] has been developed to distribute data across different institutions rather than combine it into a single pool. A recent advancement in distributed learning is federated learning (FL) [25,26,27], which allows multiple medical institutions to collaboratively train AI models without data sharing. It significantly facilitates AI research and development in the healthcare domain, in which data is highly valuable, and it typically needs to involve multiple centres and access to large-scale data.
This review article aims to introduce the basic concept of FL and discuss its advantages and applications in healthcare, especially in ophthalmology, as well as its future development.

2. What Is Federated Learning?

Traditionally, the DL approach requires pooling all available data from multiple institutions into a central source for model training and testing (Figure 1). FL, on the contrary, is a distributed learning paradigm in which multiple collaborators train a model on their own data locally and then send their model updates to a central server to be aggregated into a consensus model [28]. It avoids the need to put all the collected data in one place or to directly access sensitive data across collaborators: each institution keeps its data locally and neither transfers it nor accesses data held by other institutions (Figure 2). The FL paradigm for model training is based on three main steps [29]: (i) the global model is initialised by the central server and distributed to each contributing institution; (ii) each institution trains the model on its local data and sends the resulting local model back to the central server; (iii) the central server aggregates all local models into a new global model and redistributes it to all collaborators (Figure 2). These steps are repeated until the global model reaches a stable performance. The underlying model training procedure is the same in traditional DL and FL; the difference is that centralised DL requires a chief institution to train the model on all pooled data, whereas FL allows each institution to train locally. As only model characteristics (e.g., model parameters or gradients) leave each institution, this distributed training approach has great potential to ensure data privacy across institutions and to reduce the risk of data leakage from data centralisation. Meanwhile, it enables the model to be trained and validated across multiple datasets, improving robustness and generalisability.
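The three-step procedure above can be sketched in a few lines of code. This is an illustrative toy simulation, not an implementation from any of the cited works: the model is reduced to a NumPy weight vector, the "institutions" are simulated in one process, and the aggregation rule shown is sample-count-weighted averaging in the style of FedAvg.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """Step (ii): one institution refines the global model on its own data.

    The "model" is a linear regressor; one gradient step on the local
    least-squares loss stands in for a full local training round.
    """
    X, y = local_data
    grad = 2 * X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_round(global_weights, institutions):
    """Steps (i)-(iii): distribute, train locally, aggregate.

    Only model weights travel; raw data never leaves an institution.
    Aggregation is weighted by local sample count, as in FedAvg.
    """
    local_models, sizes = [], []
    for data in institutions:
        local_models.append(local_update(global_weights, data))
        sizes.append(len(data[1]))
    sizes = np.array(sizes, dtype=float)
    return np.average(local_models, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
institutions = []
for n in (50, 80, 30):  # three hospitals with different sample sizes
    X = rng.normal(size=(n, 2))
    institutions.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)            # step (i): server initialises the global model
for _ in range(100):       # rounds repeat until the model stabilises
    w = federated_round(w, institutions)
```

After enough rounds, the consensus model recovers the shared underlying relationship even though no institution ever shared its raw data.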
As a result, FL offers tremendous advantages in data privacy over conventional centralised learning approaches, especially in AI research and the field of healthcare.

3. Types of Federated Learning

Generally, there are two major categories of FL proposed by previous studies, focusing on the type of participants and data (Figure 3).
Based on the properties of the participants (also called clients), FL can be grouped into two main types: (1) cross-silo FL and (2) cross-device FL [29]. In cross-device FL, learning takes place remotely on each device and a central model is updated via the federated system; it typically involves a very large number of devices (e.g., smartphones, wearables, and edge devices), each holding a small amount of data. Cross-silo FL, on the other hand, involves a smaller number of collaborating participants with large sample sizes, typically reliable organisations such as hospitals or banks. Moreover, in cross-device FL the participants are not always available (e.g., due to poor network connection or battery status), making the set of participants inconsistent between rounds. Cross-silo FL generally achieves much better performance consistency, since participants use dedicated hardware and efficient networks [30].
Based on how data is distributed across feature and sample spaces, FL can be categorised into horizontal FL (HFL), vertical FL (VFL), and federated transfer learning (FTL) [25,29]. HFL, or sample-based FL, applies when datasets share similar features but come from different samples [29]. For instance, two eye hospitals may both treat patients with primary open-angle glaucoma: the patients share similar disease features, i.e., glaucomatous optic neuropathy, while their demographic characteristics differ because the hospitals are located in different places. VFL, in contrast, applies when datasets share overlapping samples but differ in features [29]. For instance, the pharmacy and radiology departments of the same hospital record distinct features, yet both may hold information on the same group of patients. FTL is used in scenarios where datasets differ in both samples and features [25]; for instance, institutions located in different regions may have only a small intersection of users. The purpose of FTL is to develop effective application-specific models when data is scarce, bridging the gap between heterogeneous datasets by offering solutions across the whole sample and feature space.
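The HFL/VFL distinction can be made concrete with a toy data layout. The feature names and the department split below are purely illustrative, not taken from the cited studies:

```python
import numpy as np

# A toy "complete" dataset: rows are patients, columns are features.
# Columns: [age, intraocular_pressure, HbA1c]  (illustrative only)
patients = np.array([
    [62, 21.0, 6.1],
    [55, 18.5, 7.4],
    [70, 24.2, 5.9],
    [48, 16.0, 8.2],
])

# Horizontal FL (HFL): two hospitals hold the SAME features
# for DIFFERENT patients (a split by rows / samples).
hospital_a = patients[:2]   # patients 0-1
hospital_b = patients[2:]   # patients 2-3

# Vertical FL (VFL): two departments hold DIFFERENT features
# for the SAME patients (a split by columns / features).
ophthalmology = patients[:, :2]  # age, intraocular pressure
endocrinology = patients[:, 2:]  # HbA1c

assert hospital_a.shape[1] == hospital_b.shape[1]        # shared feature space
assert ophthalmology.shape[0] == endocrinology.shape[0]  # shared samples
```

FTL would correspond to the remaining case, where neither the rows nor the columns of two datasets line up.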

4. Federated Learning Applications in Healthcare

4.1. Electronic Health Records

Electronic health records (EHRs) are recorded as part of routine care in most healthcare institutions and contain patients' medical information, including demographics, laboratory results, medical imaging, diagnoses, treatments, and prescriptions. The primary benefits of EHR data are easier access to patients' health information and improved patient monitoring. With the advancement of AI, analysing electronic health data with DL techniques can significantly improve decision-making, risk assessment, and the tracking of disease progression, thus increasing healthcare quality. Using FL technology to guarantee patient privacy, Deist et al. [31] achieved an improvement in predicting post-treatment two-year survival for more than 20,000 non-small cell lung cancer patients across eight healthcare institutes in five countries. Another FL framework on EHR data, proposed by Sharma et al. [32] to predict in-hospital mortality for intensive care unit patients, achieved performance comparable to models trained in a centralised manner. Furthermore, within the scope of coronavirus disease 2019 (COVID-19), FL has been shown to predict acute kidney injury in patients with COVID-19 within 3 to 7 days after admission using EHR data, including demographics, medical history, laboratory results, and even vital signs [33]. Using a similar form of FL, Vaid et al. predicted mortality within seven days of hospitalisation for COVID-19 [34] by gathering EHR data from different hospitals. Their results showed notable improvement of the federated model over locally trained models, and non-inferiority to centralised learning. FL is therefore becoming a promising approach for institutions that wish to collaborate on data-driven research utilising EHRs.

4.2. Internet of Things in Healthcare

The advancement of the internet of things (IoT) in healthcare offers people the opportunity to monitor their health status and receive early warnings of health issues or existing conditions [35]. FedHealth, the first FTL framework for wearable healthcare, performs data aggregation through FL and builds relatively personalised models via transfer learning [36]. The framework achieved accurate auxiliary diagnosis of Parkinson's disease and shows promise for deployment in other healthcare applications, such as elderly care, fall detection, and cognitive disease detection. Furthermore, a recent study by Brophy et al. [37] utilised an FL framework to develop a model that measures continuous arterial blood pressure (ABP) using a single optical photoplethysmogram sensor, without compromising ABP accuracy or patient privacy. This non-invasive method of monitoring ABP could benefit people suffering from cardiovascular diseases. The distributed learning framework enables multiple remote devices to train collaboratively without data sharing. The results showed equal performance between the federated and non-federated frameworks, which opens up new opportunities for applying FL in wearable devices to monitor patients' cardiovascular status remotely and accurately.

4.3. Medical Image Analysis

FL is now being applied in a wide range of medical image analysis tasks. This distributed learning approach has the potential to produce robust models that leverage large and diverse medical image datasets from different institutions while ensuring patient privacy and data ownership. Li et al. implemented and evaluated an FL system for brain tumour segmentation on MRI scans from the Brain Tumour Segmentation (BraTS) dataset [38]; the proposed FL model achieved segmentation performance comparable to a data-centralised training model. Another study, using functional MRI data from the Autism Brain Imaging Data Exchange (ABIDE) project, demonstrated that FL can utilise multi-site data to boost neuroimage analysis performance for identifying neurological biomarkers [39]. Lee et al. (15) evaluated the feasibility and performance of FL for thyroid tumour identification from 8457 ultrasound images collected at 6 institutions. The results demonstrated that the performance of FL at each institution was comparable to that of conventional DL using pooled data, with the area under the receiver operating characteristic curve (AUROC) ranging from 78.88% to 87.56%. Shiri et al. [40] built a federated DL-based model for positron emission tomography (PET) image segmentation using 405 PET images of head and neck cancer patients from 9 different centres. The developed FL model achieved quantitative performance comparable to the centralised DL model while addressing the privacy concerns and the legal and ethical problems of medical data sharing between clinical institutions.
FL has been used in medical image analysis to detect COVID-19 lung abnormalities from chest X-ray and CT scan images [41,42,43]. In one study, FL was used to train a DL model on vital signs, laboratory data, and chest X-rays from 20 institutions in different countries [43]. FL allowed the model to be trained quickly amid the ongoing pandemic and to generalise across heterogeneous, unharmonised datasets for predicting clinical outcomes in COVID-19 patients. Dou et al. [41] demonstrated an FL method for building a deep convolutional neural network-based AI model for automated detection of lesions in COVID-19 CT images, which performed well on external data. The results indicated the potential of FL to develop generalisable, low-cost, and scalable AI tools for image-based disease diagnosis and management, both for research and for clinical care.
Furthermore, FL has shown its feasibility and effectiveness for weakly supervised classification of carcinoma in histopathology and for survival prediction, using thousands of gigapixel whole slide images from multiple institutions [44]. The results demonstrated that FL could effectively assist clinicians in classifying subtypes of renal cell carcinoma and breast invasive carcinoma and could address the lack of detailed annotations in most real-world datasets. The FL framework therefore has clear potential for rare diseases where datasets are limited, and for countries that lack access to pathology and laboratory services.

5. Current FL Applications in Ophthalmology

FL has already shown its potential in ophthalmology for detecting various retinal diseases from ocular images such as OCT, OCTA, and retinal photographs.

5.1. Diabetic Retinopathy

Yu et al. [45] utilised the FL framework for referable diabetic retinopathy (RDR) classification using OCT and OCTA images from two different institutions. The FL model was compared with models trained on data acquired from the same institution and from the other institution. Its performance was comparable to models trained on local data and outperformed models trained on the other institution's data. This study demonstrated the potential of FL for DR classification and for facilitating collaboration between institutions in the real world.
Furthermore, the study also investigated the FL approach to apply microvasculature segmentation to multiple datasets in a simulated environment. The study designed a robust FL framework for microvasculature segmentation in OCTA images. The image datasets were acquired from four different OCT devices. The FL framework in this experiment achieved performance comparable to the internal model and the model trained with combined datasets, showing that FL can be used to improve generalizability by including diverse data from different OCT devices.
However, notwithstanding the promising performance of the FL approach, it is essential to consider the intended application scenario. Using OCTA images for RDR classification may not be feasible in real-world DR screening programmes: retinal fundus photography is the widely accepted imaging modality for identifying RDR, whereas OCTA images are more helpful for detecting diabetic macular ischaemia. In addition, although the images used for microvasculature segmentation were obtained from different OCT devices, the sample size of the datasets was small. Sample size justification would therefore be needed to make the results of the FL approach more meaningful.

5.2. Retinopathy of Prematurity

Retinopathy of prematurity (ROP), a leading cause of childhood blindness worldwide, is a condition characterised by the growth of abnormal fibrovascular retinal structures in preterm infants. Hanif et al. [46] and Lu et al. [47] explored the FL approach for developing DL models for ROP. Lu et al. [47] trained and validated a model on 5245 ROP retinal photographs from 1686 eyes of 867 premature infants in the neonatal intensive care units of seven hospital centres in the United States. The images were labelled with a clinical diagnosis of plus disease (plus, pre-plus, or no plus) and a reference standard diagnosis (RSD) derived from three image-based ROP graders together with the clinical diagnosis. In most model comparisons, models trained via the FL approach achieved performance comparable with those trained via the centralised learning approach, with AUROCs ranging from 0.93 to 0.96. In addition, the FL model outperformed the locally trained single-institution model in terms of AUROC at 4 of the 7 sites. Moreover, the FL model maintained its consistency and accuracy across heterogeneous clinical datasets from different institutions, which varied in sample size, disease prevalence, and patient demographics.
In the second study, Hanif et al. [46] demonstrated the potential of FL to harmonise differences in clinical diagnoses of ROP severity between institutions. Instead of using the consensus RSD, an FL model was developed based on the ROP vascular severity score (VSS). In this study, there was a significant difference between institutions in the VSS assigned to eyes with no plus disease. VSS can be subjective, with considerable variation between experts in clinical settings that may affect clinical or epidemiological research [48]. According to this study, however, the FL model could standardise differences in clinical diagnoses across institutions without centralised data collection or expert consensus. Based on these results, the FL model provides a generalisable approach for assessing clinical diagnostic paradigms and disease severity for epidemiological evaluation without sharing patient information.
These two studies demonstrated the utility of the FL framework in ROP, allowing collaboration between institutions while protecting data privacy. However, both were conducted in simulated environments. Practical issues arising during clinical implementation, such as communication efficiency or data bias among participating centres, could not be identified in these studies. Such challenges are discussed further in the section below.

6. Challenges and Vulnerabilities

Although FL-based models show promising performance and tremendous potential in ophthalmology, most of them were developed in simulated environments without proper testing on unseen datasets. Several issues remain unsolved before FL can be applied in real-time, real-world clinics.

6.1. Data Heterogeneity

Medical data from different institutions is highly heterogeneous: institutions participating in FL have varying amounts of data with different properties, such as vendors, imaging protocols, and patient populations or demographics. Such data heterogeneity in FL is usually described as non-independent and identically distributed (non-IID) [49,50]. For example, in the diagnosis of diabetic macular oedema, OCT images collected from different institutions may have uniformly distributed labels, yet image appearance can vary greatly due to the different imaging protocols and OCT machines used in each hospital (e.g., different intensity and contrast). Heterogeneity across participating institutions leads to weight divergence of the local models, i.e., the difference between weights updated on non-IID multi-modal data and those updated on centralised data [51], which can significantly deteriorate the performance of the FL model [50]. The most popular FL algorithm, Federated Averaging (FedAvg), has been demonstrated to handle heterogeneous data to some extent; however, it does not perform well on highly skewed non-IID data and may require many more communication rounds to converge [49]. Zhao et al. reported that the accuracy of FL can drop by up to ~55% for neural networks trained on highly skewed non-IID data [50]. Although many approaches have been proposed for handling non-IID data in FL, including data sharing [50], knowledge distillation [52], and personalised FL [53,54], challenges related to data privacy and communication cost remain, and most work focuses on HFL scenarios [55]. Further technical studies are therefore needed to identify effective methods for tackling non-IID data in FL.
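Non-IID label skew of the kind discussed here is commonly simulated with a Dirichlet partitioner in FL research. The sketch below is illustrative (the function name and parameters are our own, not from the cited studies): a small concentration parameter produces clients dominated by a single class, while a large one approaches an IID split.

```python
import numpy as np

def partition_non_iid(labels, n_clients, alpha, rng):
    """Split sample indices across clients with Dirichlet label skew.

    For each class, a Dirichlet(alpha) draw decides what fraction of
    that class each client receives. Small alpha -> highly skewed
    (strongly non-IID); large alpha -> nearly uniform (close to IID).
    """
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return [np.array(c, dtype=int) for c in clients]

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)  # binary labels, e.g. DR vs no DR

skewed = partition_non_iid(labels, n_clients=4, alpha=0.1, rng=rng)
uniform = partition_non_iid(labels, n_clients=4, alpha=100.0, rng=rng)

# Class balance per client: skewed clients are dominated by one class,
# uniform clients sit near the global 50/50 mix.
skewed_balance = [np.mean(labels[p]) for p in skewed if len(p) > 0]
uniform_balance = [np.mean(labels[p]) for p in uniform]
```

Training FedAvg on the `skewed` partition rather than the `uniform` one is precisely the setting in which the weight-divergence and accuracy-drop effects cited above appear.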

6.2. Bias

Bias arises when a model tends to predict certain outcomes more than others due to imbalance in the training datasets (e.g., insufficient or no data for specific diseases or subpopulations) [56]. Consider, for example, a binary classification task that determines whether a fundus photograph shows referable or non-referable DR. If the classes in the training data are imbalanced (e.g., far more non-referable than referable DR), the final model will be biased toward the over-represented class [57]. The problem of bias can be aggravated in FL systems because each participant contributes its own bias to the global server and may even generate new ones. Beyond bias in the training data, FL systems can also induce bias through the variety of devices and differences in network bandwidth, latency, or compute performance [58]. Several recent studies have proposed methods to mitigate bias in FL by estimating how strongly each participant's bias affects the global server [59,60,61,62,63]; this estimate is then passed to the global server, which modifies the algorithm when aggregating participants' updates. However, these approaches might not be feasible, as they require additional information about client data distributions, which may itself leak sensitive information [64]. Future work is therefore needed to identify practical approaches to mitigating bias in FL.
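As a minimal single-site illustration of the class-imbalance problem (not one of the FL-specific mitigation methods cited above), inverse-frequency loss weighting counteracts an over-represented class by making minority-class samples contribute proportionally more to the training loss:

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-class loss weights inversely proportional to class frequency.

    Normalised so that the weights average to 1 over the dataset; a
    common local mitigation when, e.g., non-referable DR photographs
    vastly outnumber referable ones.
    """
    classes, counts = np.unique(labels, return_counts=True)
    raw = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), raw.tolist()))

# Simulated screening dataset: 90% non-referable (0), 10% referable (1).
labels = np.array([0] * 900 + [1] * 100)
weights = inverse_frequency_weights(labels)
# class 0 -> 1000/(2*900) ≈ 0.556, class 1 -> 1000/(2*100) = 5.0
```

In an FL setting the complication noted in the text is that each client's class distribution differs, so a weighting computed on one client's data does not transfer to the federation as a whole.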

6.3. Privacy and Security

Although FL has been proven effective in improving patient privacy by keeping data local, some privacy-related challenges associated with FL still require attention. During the training process, or in the interaction between participants and the central server, adversaries may reveal sensitive information and reconstruct patients' data from the shared model updates. It has been shown that even a tiny portion of intermediate results, such as gradient information, can be attacked, allowing reconstruction of and inference about the original data [65]. Furthermore, because FL builds on a large number of participants (especially cross-device FL), malicious users may generate false outputs to manipulate the DL model. To overcome these issues, privacy-preserving technologies such as secure multi-party computation [66], homomorphic encryption [67], and differential privacy [68] can be used to enhance FL's privacy. Although these methods improve the privacy of model updates or prevent poisoning attacks from malicious users, they may reduce model performance or system efficiency. Researchers therefore need to balance the trade-off between privacy protection and model performance, and to provide personalised privacy protection.
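A minimal sketch of the differential-privacy ingredient mentioned above, assuming a DP-SGD-style recipe of norm clipping plus calibrated Gaussian noise (the function name and parameter values are illustrative, not from any cited implementation):

```python
import numpy as np

def privatise_update(update, clip_norm, noise_multiplier, rng):
    """Differentially-private treatment of a model update before upload.

    Two standard ingredients: (1) clip the update's L2 norm to bound any
    single contribution's influence, then (2) add calibrated Gaussian
    noise. The server only ever sees the noisy update, which limits what
    an adversary can reconstruct from intercepted gradients.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(42)
raw_update = np.array([3.0, 4.0])          # L2 norm = 5.0, gets clipped
private_update = privatise_update(raw_update, clip_norm=1.0,
                                  noise_multiplier=0.5, rng=rng)
```

The performance cost noted in the text is visible here: the larger the noise multiplier, the stronger the privacy guarantee but the less faithful the uploaded update.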

6.4. Communication Cost

Communication is considered a critical bottleneck in FL. Federated networks may include very many devices (e.g., millions of clients), and transmission over the network can be far slower than local computation, especially when local models are uploaded to the server [69]. In addition, the constant communication between participants and the global server requires reliable network bandwidth to sustain a large volume of downloads and uploads. In recent years, several federated optimisation algorithms have been proposed to alleviate the communication cost of FL. Potential methods to improve communication efficiency include reducing the total number of communication rounds [49] and reducing the size of the uploaded parameters [26,70,71]. However, compressing the model updates introduces a communication-precision trade-off, as it requires a large compression ratio [72], and these methods may also impair the model's ability to handle the heterogeneity of decentralised data [72]. In addition to improving communication between local clients and the central server, each participant in a federated network needs strong computing resources (e.g., graphics processing units) and robust network connections between clinics for data pre-processing.
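One of the simplest update-compression schemes behind the communication-precision trade-off discussed here is top-k sparsification. The sketch below is illustrative only, not the specific method of any cited work:

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a model update.

    The client then transmits k (index, value) pairs instead of the full
    dense vector, shrinking the upload. The discarded mass is the
    precision lost to compression; some schemes accumulate this error
    locally and add it back in the next round.
    """
    idx = np.argsort(np.abs(update))[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

update = np.array([0.05, -2.0, 0.3, 1.1, -0.01, 0.7])
compressed = top_k_sparsify(update, k=2)
# Only the two largest-magnitude entries (-2.0 and 1.1) survive.
```

For a model with millions of parameters, transmitting only a small k per round cuts bandwidth substantially, at the cost of the aggregation quality issues noted above.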

7. Future Directions

Ophthalmology is an imaging-driven medical speciality with unique opportunities for implementing DL systems. Ocular imaging is not only fast and cheap compared to other imaging modalities such as CT or MRI scans, but also contains essential information on ocular and systemic diseases. The use of FL in the DR and ROP studies above illustrates its potential to overcome privacy challenges and inspires further deployment of FL in other ophthalmic diseases. FL applications and development in real-world clinics are warranted in the future.

7.1. Multi-Modal Federated Learning

In ophthalmology, there is diverse data from different modalities, such as fundus photography, OCT, OCTA, and visual fields (VF), acquired under different protocols. With such a wide range of modalities, one modality alone is often insufficient to detect alterations and diagnose disease. Glaucoma, for instance, is diagnosed based on a combination of intraocular pressure measurement, colour fundus photography, VF examination, and peripapillary retinal nerve fibre layer (RNFL) thickness evaluation. A DL algorithm developed from RNFL thickness alone, without reference to VF or relevant clinical diagnostic data, may not be sufficient to diagnose glaucoma in real-world settings. Recently, Xiong et al. trained and validated a bimodal DL algorithm to detect glaucomatous optic neuropathy (GON) from both OCT images and VFs [73]. The proposed algorithm reached an AUROC of 0.950, outperforming the two single-modality models trained on VF or OCT data alone (AUROC 0.868 and 0.809, respectively). In addition, the model achieved performance comparable to experienced glaucoma specialists, suggesting that this multi-modal DL system could be valuable in detecting GON.
Apart from glaucoma, OCT and OCTA have become essential non-invasive imaging modalities for quantitative and qualitative assessment of retinal features (e.g., retinal thickness and retinal fluid) in many retinal diseases such as AMD and DR. A recent study by Jin et al. demonstrated the efficacy of a multimodal DL model using OCT and OCTA images for the assessment of choroidal neovascularisation in neovascular AMD, achieving performance comparable to retinal specialists with an accuracy of 95.5% and an AUROC of 0.979 [74]. Beyond ocular imaging data, EHRs contain varied information, including past medical history and systemic features. Incorporating EHR data offers an outstanding opportunity to better understand the complex relationships between systemic and ocular diseases: data from medical history or laboratory tests, such as blood pressure and glycated haemoglobin, can improve the predictive power of AI systems. It is therefore necessary to build and implement FL systems that support multi-modal data from different modalities to enhance DL performance in early detection and disease management. Several existing studies have proposed multi-modal FL frameworks with promising results. Recently, Zhao et al. proposed a multi-modal framework that enables FL systems to work better with collaborators holding local data from different modalities, and with clients having varying device setups, than with a single modality [75]. Another study, by Qayyum et al., proposed a clustered FL-based method for automatic diagnosis of COVID-19 that would allow remote hospitals to utilise multi-modal data, including chest X-rays and ultrasound images [76]. Additionally, clustered FL handled divergence in data distributions better than conventional FL.

7.2. Federated Learning and Rare Ocular Diseases

In addition, FL is expected to help in diagnosing, predicting, and treating rare or geographically uncommon diseases such as ocular tumours or inherited retinal diseases, where progress is currently limited by low incidence rates and small datasets [77,78]. Connecting multiple institutions on a global scale could improve clinical decisions regardless of a patient's location and demographic environment. Fujinami-Yokokawa et al. [79] trained and validated a DL system for automated classification of ABCA4-, EYS-, and RP1L1-associated retinal dystrophies using a Japanese Eye Genetics Consortium dataset of 417 images (fundus photographs and FAF images). Although the DL system could provide an accurate diagnosis of the three inherited retinal diseases, there was limited phenotypic heterogeneity within each group, and the dataset came from a specific ethnic population. Recently, FL has shown its feasibility and effectiveness for weakly supervised classification of carcinoma in histopathology and for survival prediction, using thousands of gigapixel whole slide images from multiple institutions [44]. That study demonstrated the potential of the FL framework for rare diseases where datasets are limited, and for countries lacking access to pathology and laboratory services. FL is therefore a promising approach for greater international collaboration to develop valuable and robust DL algorithms for rare ocular diseases.

7.3. Blockchain-Based Federated Learning

The development of FL could be further combined with next-generation technologies, notably blockchain, to strengthen its privacy mechanisms. Blockchain is a decentralised ledger technology predicated on privacy, openness, and immutability, and it has been used in healthcare systems to manage genetic information and EHRs [80,81]. Blockchain networks have also been applied in ophthalmology to detect myopic macular degeneration and high myopia using retinal photographs from diverse multi-ethnic cohorts in different countries [82]. That study suggested that adopting blockchain technology could increase the validity and transparency of DL algorithms in medicine. With its immutability and traceability, blockchain can be an effective tool to prevent malicious attacks in FL. The intermediate model updates, whether local weights or gradients, can be cryptographically chained using blockchain technology to maintain their integrity and confidentiality. Thus, integrating FL with blockchain could allow the processing of the vast amounts of data generated in healthcare settings while improving data security and privacy by providing secure and efficient points for model deployment [83].
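The chaining of model updates described above can be illustrated with a minimal hash-chain sketch (standard-library only; client names and weight values are invented for illustration, and a real blockchain additionally needs consensus and distribution across peers). Each block stores a digest of a client's update plus the hash of the previous block, so altering any earlier record invalidates every later link.

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over the block's canonical JSON serialisation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_update(chain, round_id, client_id, weights):
    """Record a model update as a block linked to the previous block's hash."""
    chain.append({
        "round": round_id,
        "client": client_id,
        "weights_digest": hashlib.sha256(repr(weights).encode()).hexdigest(),
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    })

def verify(chain):
    """Tampering with any earlier block breaks a later prev_hash link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_update(chain, 1, "hospital_a", [0.12, -0.30])
append_update(chain, 1, "hospital_b", [0.08, 0.21])
valid_before = verify(chain)           # links intact -> True
chain[0]["weights_digest"] = "forged"  # an attacker alters an earlier update...
valid_after = verify(chain)            # ...and verification now fails -> False
```

This is the integrity property the text refers to: the updates themselves can stay off-chain, while the chained digests make silent modification detectable.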

7.4. Decentralised Federated Learning

In an FL system, a central server usually orchestrates the learning process and updates the global model from the clients’ training results. However, such a star-shaped server–client architecture reduces fault tolerance, does not solve the problem of information governance, and requires a powerful central server, which may not always be available in real-life scenarios with very large numbers of clients [84,85]. Fully decentralised FL, which replaces communication between a central server and each client with peer-to-peer communication among interconnected clients, has therefore been proposed to address these problems. Recently, swarm learning, a decentralised learning system without a central server, was introduced to build models independently on private data at each individual site and to support data sovereignty, security, and confidentiality through edge computing and a blockchain-based peer-to-peer network and coordinator [84]. Swarm learning has been used not only to detect COVID-19, tuberculosis, leukaemia, and lung pathologies [84] but also, as Saldanha et al. demonstrated, to predict clinical biomarkers in solid tumours, yielding high-performing models for pathology-based prediction of BRAF mutation and microsatellite instability (MSI) status [86].
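The serverless averaging idea can be shown with a toy gossip protocol: a deliberately simplified stand-in for swarm learning's blockchain-coordinated exchange, not its actual implementation. Four hypothetical sites hold scalar "models"; each round, every node averages only with its two ring neighbours, yet all nodes converge to the global mean that a central server would have computed.

```python
import numpy as np

def gossip_round(models):
    """One synchronous round: each node averages with its two ring neighbours."""
    n = len(models)
    return [(models[(i - 1) % n] + models[i] + models[(i + 1) % n]) / 3.0
            for i in range(n)]

# Four hypothetical sites; no server ever sees the raw local values.
models = [np.array([v]) for v in (0.0, 4.0, 8.0, 12.0)]
for _ in range(50):
    models = gossip_round(models)
# All nodes converge to the global mean (6.0), matching equal-weight
# centralised averaging, with only peer-to-peer communication.
```

Convergence here follows from the neighbour-averaging matrix being doubly stochastic; real decentralised FL exchanges full model parameters and must also handle asynchrony and unreliable peers.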

7.5. Federated Learning and Fifth Generation (5G) and Beyond Technology

With the advances in wireless communication over the past few decades, 5G and beyond technology has now been launched, offering lower latency, higher transmission rates, and higher reliability than existing networks [87]. An efficient 5G network could address the communication-latency and network-bandwidth issues in the FL framework. Moreover, 5G networks have been implemented in managing COVID-19 patients through real-time video telemedicine [88]. In ophthalmology, 5G technology has been applied to conduct real-time tele-retinal laser photocoagulation for the treatment of DR [89]. This evidence suggests the potential of integrating FL with 5G technology to allow data pre-processing, training, and processing in real time.

8. Conclusions

FL enables reliable and collaborative DL model development across multiple institutions without compromising data privacy, which will be critical in ophthalmic healthcare, especially in ocular image analysis. More research is warranted in ophthalmology to investigate how to apply FL efficiently and effectively in real-time and real-world clinical settings.

Funding

Innovation and Technology Fund (ITF), Hong Kong (ref no: MRP/056/20X).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaul, V.; Enslin, S.; Gross, S.A. History of artificial intelligence in medicine. Gastrointest. Endosc. 2020, 92, 807–812. [Google Scholar] [CrossRef] [PubMed]
  2. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metab.-Clin. Exp. 2017, 69, S36–S40. [Google Scholar] [CrossRef] [PubMed]
  3. Ting, D.S.W.; Cheung, C.Y.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; San Yeo, I.Y.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations with Diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef] [PubMed]
  4. Tang, F.; Wang, X.; Ran, A.R.; Chan, C.K.M.; Ho, M.; Yip, W.; Young, A.L.; Lok, J.; Szeto, S.; Chan, J.; et al. A Multitask Deep-Learning System to Classify Diabetic Macular Edema for Different Optical Coherence Tomography Devices: A Multicenter Analysis. Diabetes Care 2021, 44, 2078–2088. [Google Scholar] [CrossRef]
  5. Grassmann, F.; Mengelkamp, J.; Brandl, C.; Harsch, S.; Zimmermann, M.E.; Linkohr, B.; Peters, A.; Heid, I.M.; Palm, C.; Weber, B.H.F. A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography. Ophthalmology 2018, 125, 1410–1420. [Google Scholar] [CrossRef] [Green Version]
  6. Burlina, P.M.; Joshi, N.; Pekala, M.; Pacheco, K.D.; Freund, D.E.; Bressler, N.M. Automated Grading of Age-Related Macular Degeneration From Color Fundus Images Using Deep Convolutional Neural Networks. JAMA Ophthalmol. 2017, 135, 1170–1176. [Google Scholar] [CrossRef] [PubMed]
  7. Brown, J.M.; Campbell, J.P.; Beers, A.; Chang, K.; Ostmo, S.; Chan, R.V.P.; Dy, J.; Erdogmus, D.; Ioannidis, S.; Kalpathy-Cramer, J.; et al. Automated Diagnosis of Plus Disease in Retinopathy of Prematurity Using Deep Convolutional Neural Networks. JAMA Ophthalmol. 2018, 136, 803–810. [Google Scholar] [CrossRef] [PubMed]
  8. Ting, D.S.W.; Pasquale, L.R.; Peng, L.; Campbell, J.P.; Lee, A.Y.; Raman, R.; Tan, G.S.W.; Schmetterer, L.; Keane, P.A.; Wong, T.Y. Artificial intelligence and deep learning in ophthalmology. Br. J. Ophthalmol. 2019, 103, 167–175. [Google Scholar] [CrossRef] [Green Version]
  9. Li, Z.; He, Y.; Keel, S.; Meng, W.; Chang, R.T.; He, M. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology 2018, 125, 1199–1206. [Google Scholar] [CrossRef] [Green Version]
  10. Ran, A.R.; Cheung, C.Y.; Wang, X.; Chen, H.; Luo, L.Y.; Chan, P.P.; Wong, M.O.M.; Chang, R.T.; Mannil, S.S.; Young, A.L.; et al. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: A retrospective training and validation deep-learning analysis. Lancet Digit. Health 2019, 1, e172–e182. [Google Scholar] [CrossRef]
  11. Zhang, K.; Liu, X.; Xu, J.; Yuan, J.; Cai, W.; Chen, T.; Wang, K.; Gao, Y.; Nie, S.; Xu, X.; et al. Deep-learning models for the detection and incidence prediction of chronic kidney disease and type 2 diabetes from retinal fundus images. Nat. Biomed. Eng. 2021, 5, 533–545. [Google Scholar] [CrossRef]
  12. Sabanayagam, C.; Xu, D.; Ting, D.S.W.; Nusinovici, S.; Banu, R.; Hamzah, H.; Lim, C.; Tham, Y.C.; Cheung, C.Y.; Tai, E.S.; et al. A deep learning algorithm to detect chronic kidney disease from retinal photographs in community-based populations. Lancet Digit. Health 2020, 2, e295–e302. [Google Scholar] [CrossRef]
  13. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158–164. [Google Scholar] [CrossRef] [Green Version]
  14. Cheung, C.Y.; Xu, D.; Cheng, C.-Y.; Sabanayagam, C.; Tham, Y.-C.; Yu, M.; Rim, T.H.; Chai, C.Y.; Gopinath, B.; Mitchell, P.; et al. A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat. Biomed. Eng. 2021, 5, 498–508. [Google Scholar] [CrossRef]
  15. Cheung, C.Y.; Ran, A.R.; Wang, S.; Chan, V.T.T.; Sham, K.; Hilal, S.; Venketasubramanian, N.; Cheng, C.Y.; Sabanayagam, C.; Tham, Y.C.; et al. A deep learning model for detection of Alzheimer’s disease based on retinal photographs: A retrospective, multicentre case-control study. Lancet Digit. Health 2022, 4, e806–e815. [Google Scholar] [CrossRef]
  16. Li, J.O.; Liu, H.; Ting, D.S.J.; Jeon, S.; Chan, R.V.P.; Kim, J.E.; Sim, D.A.; Thomas, P.B.M.; Lin, H.; Chen, Y.; et al. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Prog. Retin. Eye Res. 2021, 82, 100900. [Google Scholar] [CrossRef] [PubMed]
  17. Campbell, J.P.; Lee, A.Y.; Abràmoff, M.; Keane, P.A.; Ting, D.S.W.; Lum, F.; Chiang, M.F. Reporting Guidelines for Artificial Intelligence in Medical Research. Ophthalmology 2020, 127, 1596–1599. [Google Scholar] [CrossRef]
  18. Ting, D.S.W.; Wong, T.Y.; Park, K.H.; Cheung, C.Y.; Tham, C.C.; Lam, D.S.C. Ocular Imaging Standardization for Artificial Intelligence Applications in Ophthalmology: The Joint Position Statement and Recommendations From the Asia-Pacific Academy of Ophthalmology and the Asia-Pacific Ocular Imaging Society. Asia Pac. J. Ophthalmol. 2021, 10, 348–349. [Google Scholar] [CrossRef] [PubMed]
  19. Yeh, F.-C.; Vettel, J.M.; Singh, A.; Poczos, B.; Grafton, S.T.; Erickson, K.I.; Tseng, W.-Y.I.; Verstynen, T.D. Quantifying Differences and Similarities in Whole-Brain White Matter Architecture Using Local Connectome Fingerprints. PLoS Comput. Biol. 2016, 12, e1005203. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Shigueoka, L.S.; Mariottoni, E.B.; Thompson, A.C.; Jammal, A.A.; Costa, V.P.; Medeiros, F.A. Predicting Age From Optical Coherence Tomography Scans With Deep Learning. Transl. Vis. Sci. Technol. 2021, 10, 12. [Google Scholar] [CrossRef]
  21. Korot, E.; Pontikos, N.; Liu, X.; Wagner, S.K.; Faes, L.; Huemer, J.; Balaskas, K.; Denniston, A.K.; Khawaja, A.; Keane, P.A. Predicting sex from retinal fundus photographs using automated deep learning. Sci. Rep. 2021, 11, 10286. [Google Scholar] [CrossRef] [PubMed]
  22. Zhu, Z.; Shi, D.; Guankai, P.; Tan, Z.; Shang, X.; Hu, W.; Liao, H.; Zhang, X.; Huang, Y.; Yu, H.; et al. Retinal age gap as a predictive biomarker for mortality risk. Br. J. Ophthalmol. 2022. [Google Scholar] [CrossRef]
  23. VanRullen, R.; Reddy, L. Reconstructing faces from fMRI patterns using deep generative neural networks. Commun. Biol. 2019, 2, 193. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Chang, K.; Balachandar, N.; Lam, C.; Yi, D.; Brown, J.; Beers, A.; Rosen, B.; Rubin, D.; Kalpathy-Cramer, J. Distributed deep learning networks among institutions for medical imaging. J. Am. Med. Inform. Assoc. 2018, 25, 945–954. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Yang, Q.; Liu, Y.; Cheng, Y.; Kang, Y.; Chen, T.; Yu, H. Federated Learning. Synth. Lect. Artif. Intell. Mach. Learn. 2019, 13, 1–207. [Google Scholar]
  26. Konečný, J.; McMahan, H.B.; Yu, F.X.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated Learning: Strategies for Improving Communication Efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
  27. Rieke, N.; Hancox, J.; Li, W.; Milletarì, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The future of digital health with federated learning. NPJ Digit. Med. 2020, 3, 119. [Google Scholar] [CrossRef]
  28. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  29. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. 2019, 10, 12. [Google Scholar] [CrossRef]
  30. Jin, Y.; Wei, X.; Liu, Y.; Yang, Q. Towards utilizing unlabeled data in federated learning: A survey and prospective. arXiv 2020, arXiv:2002.11545. [Google Scholar]
  31. Deist, T.M.; Dankers, F.; Ojha, P.; Scott Marshall, M.; Janssen, T.; Faivre-Finn, C.; Masciocchi, C.; Valentini, V.; Wang, J.; Chen, J.; et al. Distributed learning on 20 000+ lung cancer patients—The Personal Health Train. Radiother. Oncol. 2020, 144, 189–200. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Sharma, P.; Shamout, F.E.; Clifton, D.A. Preserving patient privacy while training a predictive model of in-hospital mortality. arXiv 2019, arXiv:1912.00354. [Google Scholar]
  33. Jaladanki, S.K.; Vaid, A.; Sawant, A.S.; Xu, J.; Shah, K.; Dellepiane, S.; Paranjpe, I.; Chan, L.; Kovatch, P.; Charney, A.W.; et al. Development of a federated learning approach to predict acute kidney injury in adult hospitalized patients with COVID-19 in New York City. medRxiv 2021. [Google Scholar] [CrossRef]
  34. Vaid, A.; Jaladanki, S.K.; Xu, J.; Teng, S.; Kumar, A.; Lee, S.; Somani, S.; Paranjpe, I.; De Freitas, J.K.; Wanyan, T.; et al. Federated Learning of Electronic Health Records to Improve Mortality Prediction in Hospitalized Patients With COVID-19: Machine Learning Approach. JMIR Med. Inform. 2021, 9, e24207. [Google Scholar] [CrossRef]
  35. Meinert, E.; Van Velthoven, M.; Brindley, D.; Alturkistani, A.; Foley, K.; Rees, S.; Wells, G.; de Pennington, N. The Internet of Things in Health Care in Oxford: Protocol for Proof-of-Concept Projects. JMIR Res. Protoc. 2018, 7, e12077. [Google Scholar] [CrossRef]
  36. Chen, Y.; Wang, J.; Yu, C.; Gao, W.; Qin, X. FedHealth: A Federated Transfer Learning Framework for Wearable Healthcare. IEEE Intell. Syst. 2020, 35, 83–93. [Google Scholar] [CrossRef] [Green Version]
  37. Brophy, E.; De Vos, M.; Boylan, G.; Ward, T. Estimation of Continuous Blood Pressure from PPG via a Federated Learning Approach. Sensors 2021, 21, 6311. [Google Scholar] [CrossRef]
  38. Li, W.; Milletarì, F.; Xu, D.; Rieke, N.; Hancox, J.; Zhu, W.; Baust, M.; Cheng, Y.; Ourselin, S.; Cardoso, M.J.; et al. Privacy-Preserving Federated Brain Tumour Segmentation. In Machine Learning in Medical Imaging, Proceedings of the 10th International Workshop, MLMI 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, 13 October 2019; Springer: Shenzhen, China, 2019; pp. 133–141. [Google Scholar]
  39. Li, X.; Gu, Y.; Dvornek, N.; Staib, L.H.; Ventola, P.; Duncan, J.S. Multi-site fMRI analysis using privacy-preserving federated learning and domain adaptation: ABIDE results. Med. Image Anal. 2020, 65, 101765. [Google Scholar] [CrossRef]
  40. Shiri, I.; Vafaei Sadr, A.; Amini, M.; Salimi, Y.; Sanaat, A.; Akhavanallaf, A.; Razeghi, B.; Ferdowsi, S.; Saberi, A.; Arabi, H.; et al. Decentralized Distributed Multi-institutional PET Image Segmentation Using a Federated Deep Learning Framework. Clin. Nucl. Med. 2022, 47, 606–617. [Google Scholar] [CrossRef]
  41. Dou, Q.; So, T.Y.; Jiang, M.; Liu, Q.; Vardhanabhuti, V.; Kaissis, G.; Li, Z.; Si, W.; Lee, H.H.C.; Yu, K.; et al. Federated deep learning for detecting COVID-19 lung abnormalities in CT: A privacy-preserving multinational validation study. NPJ Digit. Med. 2021, 4, 60. [Google Scholar] [CrossRef]
  42. Feki, I.; Ammar, S.; Kessentini, Y.; Muhammad, K. Federated learning for COVID-19 screening from Chest X-ray images. Appl. Soft Comput. 2021, 106, 107330. [Google Scholar] [CrossRef]
  43. Dayan, I.; Roth, H.R.; Zhong, A.; Harouni, A.; Gentili, A.; Abidin, A.Z.; Liu, A.; Costa, A.B.; Wood, B.J.; Tsai, C.S.; et al. Federated learning for predicting clinical outcomes in patients with COVID-19. Nat. Med. 2021, 27, 1735–1743. [Google Scholar] [CrossRef]
  44. Lu, M.Y.; Chen, R.J.; Kong, D.; Lipkova, J.; Singh, R.; Williamson, D.F.K.; Chen, T.Y.; Mahmood, F. Federated learning for computational pathology on gigapixel whole slide images. Med. Image Anal. 2022, 76, 102298. [Google Scholar] [CrossRef]
  45. Yu, T.T.L.; Lo, J.; Ma, D.; Zang, P.; Owen, J.; Wang, R.K.; Lee, A.Y.; Jia, Y.; Sarunic, M.V. Collaborative Diabetic Retinopathy Severity Classification of Optical Coherence Tomography Data through Federated Learning. Investig. Ophthalmol. Vis. Sci. 2021, 62, 1029. [Google Scholar]
  46. Hanif, A.; Lu, C.; Chang, K.; Singh, P.; Coyner, A.S.; Brown, J.M.; Ostmo, S.; Chan, R.V.P.; Rubin, D.; Chiang, M.F.; et al. Federated Learning for Multicenter Collaboration in Ophthalmology: Implications for Clinical Diagnosis and Disease Epidemiology. Ophthalmol. Retina 2022, 6, 650–656. [Google Scholar] [CrossRef]
  47. Lu, C.; Hanif, A.; Singh, P.; Chang, K.; Coyner, A.S.; Brown, J.M.; Ostmo, S.; Chan, R.V.P.; Rubin, D.; Chiang, M.F.; et al. Federated Learning for Multicenter Collaboration in Ophthalmology: Improving Classification Performance in Retinopathy of Prematurity. Ophthalmol. Retina 2022, 6, 657–663. [Google Scholar] [CrossRef]
  48. Fleck, B.W.; Williams, C.; Juszczak, E.; Cocker, K.; Stenson, B.J.; Darlow, B.A.; Dai, S.; Gole, G.A.; Quinn, G.E.; Wallace, D.K.; et al. An international comparison of retinopathy of prematurity grading performance within the Benefits of Oxygen Saturation Targeting II trials. Eye 2018, 32, 74–80. [Google Scholar] [CrossRef] [Green Version]
  49. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process. Mag. 2020, 37, 50–60. [Google Scholar] [CrossRef]
  50. Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-iid data. arXiv 2018, arXiv:1806.00582. [Google Scholar] [CrossRef]
  51. Chen, S.; Li, B. Towards Optimal Multi-Modal Federated Learning on Non-IID Data with Hierarchical Gradient Blending. In Proceedings of the IEEE INFOCOM 2022-IEEE Conference on Computer Communications, London, UK, 2–5 May 2022; pp. 1469–1478. [Google Scholar]
  52. Zhu, Z.; Hong, J.; Zhou, J. Data-free knowledge distillation for heterogeneous federated learning. Proc. Mach. Learn. Res. 2021, 139, 12878–12889. [Google Scholar]
  53. Yoshida, N.; Nishio, T.; Morikura, M.; Yamamoto, K.; Yonetani, R. Hybrid-FL: Cooperative Learning Mechanism Using Non-IID Data in Wireless Networks. arXiv 2019, arXiv:1905.07210. [Google Scholar]
  54. Wang, K.; Mathews, R.; Kiddon, C.; Eichner, H.; Beaufays, F.; Ramage, D. Federated evaluation of on-device personalization. arXiv 2019, arXiv:1910.10252. [Google Scholar]
  55. Zhu, H.; Xu, J.; Liu, S.; Jin, Y. Federated learning on non-IID data: A survey. Neurocomputing 2021, 465, 371–390. [Google Scholar] [CrossRef]
  56. Parikh, R.B.; Teeple, S.; Navathe, A.S. Addressing Bias in Artificial Intelligence in Health Care. JAMA 2019, 322, 2377–2378. [Google Scholar] [CrossRef]
  57. Burlina, P.; Joshi, N.; Paul, W.; Pacheco, K.D.; Bressler, N.M. Addressing Artificial Intelligence Bias in Retinal Diagnostics. Transl. Vis. Sci. Technol. 2021, 10, 13. [Google Scholar] [CrossRef]
  58. Zhou, P.; Hw, X.; Lee, L.-H.; Fang, P.; Hui, P. Are You Left Out? An Efficient and Fair Federated Learning for Personalized Profiles on Wearable Devices of Inferior Networking Conditions. ACM Interact. Mob. Wearable Ubiquitous Technol. 2022, 6, 1–25. [Google Scholar] [CrossRef]
  59. Chu, L.; Wang, L.; Dong, Y.; Pei, J.; Zhou, Z.; Zhang, Y. Fedfair: Training fair models in cross-silo federated learning. arXiv 2021, arXiv:2109.05662. [Google Scholar]
  60. Mohri, M.; Sivek, G.; Suresh, A.T. Agnostic Federated Learning. arXiv 2019, arXiv:1902.00146. [Google Scholar]
  61. Ezzeldin, Y.H.; Yan, S.; He, C.; Ferrara, E.; Avestimehr, S. Fairfed: Enabling group fairness in federated learning. arXiv 2021, arXiv:2110.00857. [Google Scholar]
  62. Zeng, Y.; Chen, H.; Lee, K. Improving Fairness via Federated Learning. arXiv 2021, arXiv:2110.15545. [Google Scholar]
  63. Zhang, D.Y.; Kou, Z.; Wang, D. FairFL: A Fair Federated Learning Approach to Reducing Demographic Bias in Privacy-Sensitive Classification Models. In Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; pp. 1051–1060. [Google Scholar]
  64. Ferraguig, L.; Djebrouni, Y.; Bouchenak, S.; Marangozova, V. Survey of Bias Mitigation in Federated Learning. In Proceedings of the Conference Francophone d’Informatique en Parallélisme, Architecture et Système, Virtuel, Lyon, France, 5–9 July 2021. [Google Scholar]
  65. Tang, F.; Wu, W.; Liu, J.; Wang, H.; Xian, M. Privacy-Preserving Distributed Deep Learning via Homomorphic Re-Encryption. Electronics 2019, 8, 411. [Google Scholar] [CrossRef] [Green Version]
  66. Mugunthan, V.; Polychroniadou, A.; Byrd, D.; Balch, T.H. SMPAI: Secure Multi-Party Computation for Federated Learning. In Proceedings of the NeurIPS 2019 Workshop on Robust AI in Financial Services, Vancouver, BC, Canada; 2019. [Google Scholar]
  67. Phong, L.T.; Aono, Y.; Hayashi, T.; Wang, L.; Moriai, S. Privacy-Preserving Deep Learning via Additively Homomorphic Encryption. IEEE Trans. Inf. Forensics Secur. 2018, 13, 1333–1345. [Google Scholar] [CrossRef]
  68. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. arXiv 2016, arXiv:1607.00133. [Google Scholar]
  69. Bouacida, N.; Mohapatra, P. Vulnerabilities in Federated Learning. IEEE Access 2021, 9, 63229–63249. [Google Scholar] [CrossRef]
  70. Caldas, S.; Konečny, J.; McMahan, H.B.; Talwalkar, A. Expanding the reach of federated learning by reducing client resource requirements. arXiv 2018, arXiv:1812.07210. [Google Scholar]
  71. Rothchild, D.; Panda, A.; Ullah, E.; Ivkin, N.; Stoica, I.; Braverman, V.; Gonzalez, J.; Arora, R. FetchSGD: Communication-Efficient Federated Learning with Sketching. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020; Proceedings of Machine Learning Research. pp. 8253–8265. [Google Scholar]
  72. Wu, C.; Wu, F.; Lyu, L.; Huang, Y.; Xie, X. Communication-efficient federated learning via knowledge distillation. Nat. Commun. 2022, 13, 2032. [Google Scholar] [CrossRef]
  73. Xiong, J.; Li, F.; Song, D.; Tang, G.; He, J.; Gao, K.; Zhang, H.; Cheng, W.; Song, Y.; Lin, F.; et al. Multimodal Machine Learning Using Visual Fields and Peripapillary Circular OCT Scans in Detection of Glaucomatous Optic Neuropathy. Ophthalmology 2022, 129, 171–180. [Google Scholar] [CrossRef] [PubMed]
  74. Jin, K.; Yan, Y.; Chen, M.; Wang, J.; Pan, X.; Liu, X.; Liu, M.; Lou, L.; Wang, Y.; Ye, J. Multimodal deep learning with feature level fusion for identification of choroidal neovascularization activity in age-related macular degeneration. Acta Ophthalmol. 2022, 100, e512–e520. [Google Scholar] [CrossRef]
  75. Zhao, Y.; Barnaghi, P.; Haddadi, H. Multimodal federated learning. arXiv 2021, arXiv:2109.04833. [Google Scholar]
  76. Qayyum, A.; Ahmad, K.; Ahsan, M.A.; Al-Fuqaha, A.; Qadir, J. Collaborative federated learning for healthcare: Multi-modal covid-19 diagnosis at the edge. arXiv 2021, arXiv:2101.07511. [Google Scholar] [CrossRef]
  77. Sadilek, A.; Liu, L.; Nguyen, D.; Kamruzzaman, M.; Serghiou, S.; Rader, B.; Ingerman, A.; Mellem, S.; Kairouz, P.; Nsoesie, E.O.; et al. Privacy-first health research with federated learning. NPJ Digit. Med. 2021, 4, 132. [Google Scholar] [CrossRef]
  78. Sheller, M.J.; Edwards, B.; Reina, G.A.; Martin, J.; Pati, S.; Kotrotsou, A.; Milchenko, M.; Xu, W.; Marcus, D.; Colen, R.R.; et al. Federated learning in medicine: Facilitating multi-institutional collaborations without sharing patient data. Sci. Rep. 2020, 10, 12598. [Google Scholar] [CrossRef] [PubMed]
  79. Fujinami-Yokokawa, Y.; Ninomiya, H.; Liu, X.; Yang, L.; Pontikos, N.; Yoshitake, K.; Iwata, T.; Sato, Y.; Hashimoto, T.; Tsunoda, K.; et al. Prediction of causative genes in inherited retinal disorder from fundus photography and autofluorescence imaging using deep learning techniques. Br. J. Ophthalmol. 2021, 105, 1272–1279. [Google Scholar] [CrossRef] [PubMed]
  80. Haleem, A.; Javaid, M.; Singh, R.P.; Suman, R.; Rab, S. Blockchain technology applications in healthcare: An overview. Int. J. Intell. Netw. 2021, 2, 130–139. [Google Scholar] [CrossRef]
  81. Ng, W.Y.; Tan, T.-E.; Xiao, Z.; Movva, P.V.H.; Foo, F.S.S.; Yun, D.; Chen, W.; Wong, T.Y.; Lin, H.T.; Ting, D.S.W. Blockchain Technology for Ophthalmology: Coming of Age? Asia-Pac. J. Ophthalmol. 2021, 10, 343–347. [Google Scholar] [CrossRef]
  82. Tan, T.-E.; Anees, A.; Chen, C.; Li, S.; Xu, X.; Li, Z.; Xiao, Z.; Yang, Y.; Lei, X.; Ang, M.; et al. Retinal photograph-based deep learning algorithms for myopia and a blockchain platform to facilitate artificial intelligence medical research: A retrospective multicohort study. Lancet Digit. Health 2021, 3, e317–e329. [Google Scholar] [CrossRef]
  83. Wang, Z.; Hu, Q. Blockchain-based Federated Learning: A Comprehensive Survey. arXiv 2021, arXiv:2110.02182. [Google Scholar]
  84. Warnat-Herresthal, S.; Schultze, H.; Shastry, K.L.; Manamohan, S.; Mukherjee, S.; Garg, V.; Sarveswara, R.; Händler, K.; Pickkers, P.; Aziz, N.A.; et al. Swarm Learning for decentralized and confidential clinical machine learning. Nature 2021, 594, 265–270. [Google Scholar] [CrossRef]
  85. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R. Advances and open problems in federated learning. Found. Trends® Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
  86. Saldanha, O.L.; Quirke, P.; West, N.P.; James, J.A.; Loughrey, M.B.; Grabsch, H.I.; Salto-Tellez, M.; Alwers, E.; Cifci, D.; Ghaffari Laleh, N. Swarm learning for decentralized artificial intelligence in cancer histopathology. Nat. Med. 2022, 28, 1232–1239. [Google Scholar] [CrossRef]
  87. Simkó, M.; Mattsson, M.O. 5G Wireless Communication and Health Effects-A Pragmatic Review Based on Available Studies Regarding 6 to 100 GHz. Int. J. Environ. Res. Public Health 2019, 16, 3406. [Google Scholar] [CrossRef] [Green Version]
  88. Hong, Z.; Li, N.; Li, D.; Li, J.; Li, B.; Xiong, W.; Lu, L.; Li, W.; Zhou, D. Telemedicine During the COVID-19 Pandemic: Experiences From Western China. J. Med. Internet Res. 2020, 22, e19577. [Google Scholar] [CrossRef] [PubMed]
  89. Chen, H.; Pan, X.; Yang, J.; Fan, J.; Qin, M.; Sun, H.; Liu, J.; Li, N.; Ting, D.; Chen, Y. Application of 5G Technology to Conduct Real-Time Teleretinal Laser Photocoagulation for the Treatment of Diabetic Retinopathy. JAMA Ophthalmol. 2021, 139, 975–982. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Conventional centralised learning. All participating institutions transfer their dataset to a centralised location, where the deep learning model is developed.
Figure 2. The architecture of federated learning. Each institution trains a local model on its own training dataset. All local models’ parameters are then transferred to the central server after one training epoch. The central server accumulates and aggregates all local parameters and updates the global model securely. Afterwards, the model is updated, and the aggregated parameters are redistributed to each centre for a new round of training. This process is iterated until the global model converges. (*) Optical coherence tomography (OCT) images are illustrated as private data from participating institutions. Other different ophthalmic imaging modalities (e.g., slit-lamp images, fundus photographs, OCT-angiography images) can also be used when exploring the federated learning approach in the field of ocular imaging.
Figure 3. Types of federated learning.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Nguyen, T.X.; Ran, A.R.; Hu, X.; Yang, D.; Jiang, M.; Dou, Q.; Cheung, C.Y. Federated Learning in Ocular Imaging: Current Progress and Future Direction. Diagnostics 2022, 12, 2835. https://doi.org/10.3390/diagnostics12112835

