**Citation:** Janbi, N.; Mehmood, R.; Katib, I.; Albeshri, A.; Corchado, J.M.; Yigitcanlar, T. Imtidad: A Reference Architecture and a Case Study on Developing Distributed AI Services for Skin Disease Diagnosis over Cloud, Fog and Edge. *Sensors* **2022**, *22*, 1854. https://doi.org/10.3390/s22051854

Academic Editor: James (Jong Hyuk) Park

Received: 5 January 2022; Accepted: 21 February 2022; Published: 26 February 2022

**1. Introduction**

Smart cities and societies are at the vanguard of driving digital transformation [1–5]. The digital transformation process involves developing digital services and systems that allow us to sense, analyze, and act on our environment in pursuit of our objectives [6,7]. Various industrial sectors are undergoing this transformation, and healthcare is among the most critical sectors in need of it [8]. Several drivers are motivating the need to transform healthcare and to develop preventive, personalized, connected, virtual, and everywhere healthcare services and systems [9–12]. These drivers include, among others, declining public health (due to processed food, lifestyles, etc.), an increase in chronic diseases (e.g., hypertension, diabetes, heart disease), an ageing population, declining quality of healthcare, and rising healthcare costs for the public and governments [6]. The restrictions placed because of COVID-19 have aggravated the difficulty of accessing public healthcare and amplified the need for virtual and everywhere healthcare [4].

The technology-related drivers for providing distributed services include the need to bring intelligence near the user (at the fog and edge layers) for reasons such as privacy, security, performance, and costs [13–18]. These drivers are not specific to healthcare alone; they apply to all sectors in which data is generated at the edge and/or decisions need to be made instantaneously and intelligently by users at the edge [19–23].

Motivated by these drivers, this paper proposes, implements, and evaluates a reference (software) architecture called Imtidad that provides distributed Artificial Intelligence (AI) as a Service (DAIaaS) over the cloud, fog, and edge layers using a case study of a service catalog with 22 Deep Learning-based skin disease diagnosis services. These services belong to four service classes that are distinguished by software platform (containerized gRPC, gRPC, Android, and Android Nearby) and are executed on a range of hardware platforms (Google Cloud, HP Pavilion Laptop, NVIDIA Jetson Nano, Raspberry Pi Model B, Samsung Galaxy S9, and Samsung Galaxy Note 4) and four network types (Fiber, Cellular, Wi-Fi, and Bluetooth). A selection of four AI models is provided for the diagnosis: two of these are standard Deep Neural Networks, and the other two are Tiny AI versions that enable execution on smaller devices at the edge. The models have been trained and tested on the HAM10000 dataset containing 10,015 dermatoscopic images.
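The service catalog structure described above, in which each service is characterized by a service class, AI model, hardware platform, and network type, can be sketched as a simple data model. All names and entries below are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch of an Imtidad-style service catalog entry;
# field names and sample entries are illustrative only.
@dataclass(frozen=True)
class DiagnosisService:
    service_id: str
    service_class: str   # e.g., "Containerized gRPC", "gRPC", "Android", "Android Nearby"
    ai_model: str        # e.g., a standard DNN or a Tiny AI variant
    hardware: str        # e.g., "Google Cloud", "Raspberry Pi", "Jetson Nano"
    network: str         # e.g., "Fiber", "Cellular", "Wi-Fi", "Bluetooth"

# A few hypothetical entries standing in for the 22 catalog services.
catalog = [
    DiagnosisService("S01", "Containerized gRPC", "DNN-large", "Google Cloud", "Fiber"),
    DiagnosisService("S02", "gRPC", "Tiny-DNN", "Raspberry Pi", "Wi-Fi"),
    DiagnosisService("S03", "Android Nearby", "Tiny-DNN", "Samsung Galaxy S9", "Bluetooth"),
]

# A user-side query: which services run over Wi-Fi?
edge_wifi = [s.service_id for s in catalog if s.network == "Wi-Fi"]
```

A frozen dataclass keeps catalog entries immutable, so a published catalog cannot be altered accidentally by client code.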

The services have been evaluated against several benchmark criteria, including service value, processing time, response time, data transfer rate, energy consumption, and network transfer time. The service values have been computed and compared in terms of speed and energy consumption. In terms of energy, a Deep Learning (DL) service on a local smartphone provides the best service, followed by a Raspberry Pi edge device; in terms of speed, a DL service on a local smartphone again provides the best service, followed by a laptop device in the fog layer.
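One plausible way to compute a speed- or energy-based service value is as the reciprocal of the measured cost, normalized so that the best (lowest-cost) service scores 1.0. This is an illustrative formula with hypothetical numbers, not necessarily the paper's exact definition:

```python
def service_value(measurements):
    """Given {service: cost} (e.g., response time in seconds or
    energy in joules), return {service: value} where the best
    (lowest-cost) service scores 1.0. Illustrative formula only."""
    best = min(measurements.values())
    return {name: best / cost for name, cost in measurements.items()}

# Hypothetical response times (seconds) for three deployment options.
values = service_value({"smartphone": 0.5, "laptop-fog": 1.0, "cloud": 2.0})
```

Normalizing against the best observed service makes the values comparable across metrics with different units (seconds vs. joules).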

Imtidad is an Arabic word indicating the "extending" or "extension" (to the cloud, fog, and edge) nature of our reference architecture. The services are being extended in both directions, from cloud to fog and edge, and from edge to fog and cloud.

To help the reader conceptualize the proposed work, Figure 1 provides a high-level view of the Imtidad reference architecture, which is described at length in this paper. The reference architecture comprises three perspectives: the service development and deployment perspective, the user view perspective, and the validation perspective. The service development and deployment perspective provides guidelines on developing and operationalizing the services: an application such as skin lesion diagnosis is selected for the provision of related distributed services; the necessary data, AI model designs, service use cases, and service designs are then acquired; these are composed into a service catalog, ported to the execution platforms and networks, and operationalized; and the services are evaluated and validated against benchmark criteria, medical professionals, and other sources of knowledge. The user view perspective includes selecting and requesting a service from the service catalog, receiving the diagnosis, and validating it. The validation perspective is shared between the user view and the service designers' and providers' view because it is meant to allow all of them, as well as third parties such as auditors, to validate the services. A more detailed view and discussion of the reference architecture is provided in Sections 3 and 4.

**Figure 1.** The Imtidad reference architecture (a high-level view).

The contributions of this paper can be outlined as follows:


The rest of the paper is organized as follows. Section 2 reviews the related works. Section 3 describes the reference architecture, methodology, and service catalog. Section 4 details the system architecture and design for the skin disease diagnosis case study. Section 5 provides results and their analysis. Section 6 concludes the paper and provides future lines of research.

### **2. Related Works**

This section reviews the literature related to the topics of this paper: distributed AI for skin disease diagnosis over cloud, fog, and edge. Section 2.1 discusses works related to distributed artificial intelligence over cloud, fog, and edge. Section 2.2 reviews works related to skin disease diagnosis using AI, and Section 2.3 discusses the research gap.

### *2.1. Distributed Artificial Intelligence (DAI) over Cloud, Fog, and Edge*

Distributed Artificial Intelligence (DAI) allows AI to be distributed across multiple agents, processes, cores, or physical or virtual computational nodes with the aim of sharing data, improving data processing capabilities, and providing faster, privacy-preserving, node-local, global, or system-wide solutions [13]. Distributed AI on clouds has been the focus of many proposals [31,32], whether for intensive computation or for global knowledge sharing. Edge Intelligence (EdgeAI) and fog intelligence are among the main DAI approaches, in which AI models are distributed across fog nodes (intermediate nodes between the edge and cloud layers) or network edges [33]. Models can be pre-trained on powerful machines (the cloud), then modified and optimized to run on resource-constrained edge devices. Edge, fog, and cloud can also collaborate, with some of the pre-processing and less-intensive computation performed at the edge and global processing performed in the cloud [13].
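The edge–cloud collaboration described above (lightweight pre-processing at the edge, heavier global processing in the cloud) can be sketched with placeholder functions; the specific steps below (filtering and downsampling at the edge, aggregation in the cloud) are illustrative assumptions:

```python
# Hypothetical sketch of edge/cloud task splitting: the edge node
# performs cheap pre-processing, and only the reduced result is
# shipped upstream for the heavier, global computation.

def edge_preprocess(raw_samples):
    # Cheap, local step: drop invalid readings and downsample to
    # cut the volume of data transferred over the network.
    valid = [s for s in raw_samples if s is not None]
    return valid[::2]  # keep every other valid sample

def cloud_process(preprocessed):
    # Heavier, global step: aggregate the contributions from edges.
    return sum(preprocessed) / len(preprocessed)

raw = [1.0, None, 3.0, 5.0, None, 7.0]
summary = cloud_process(edge_preprocess(raw))  # only 2 of 6 raw samples sent upstream
```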

Several research studies have discussed the convergence of edge, fog, and AI, as well as their various architectures [30,34–40]. Pattnaik et al. [41] have proposed and evaluated different approaches to distributing ML across the cloud and edge layers, including a variety of distributed edge- and cloud-based training and inference with either local or global knowledge. Muhammed et al. [33] proposed UbiPriSEQ, a framework to optimize privacy, security, energy efficiency, and quality of service (QoS). UbiPriSEQ uses Deep Reinforcement Learning to optimize local processing and offloading on edge, fog, and cloud; sparse matrix–vector multiplication (SpMV) is used as an application to implement and evaluate the framework. In our earlier work, Janbi et al. [13], we proposed a DAIaaS framework to standardize distributed AI provisioning across all layers (edge, fog, and cloud), aiming to facilitate generic software development across different application domains and to allow developers to focus on domain-specific details rather than on how to develop and deploy distributed AI. To this end, multiple case studies and several scenarios, applications, distributed AI delivery models, sensing modules, and software modules were developed to explore various architectures and understand performance barriers.

Another recently emerging direction is Federated Learning (FL), in which edge devices collaborate to train ML models. Model aggregation can be performed centrally in the cloud or distributed between nodes. Gao et al. [42] have proposed a cloud–edge collaborative learning framework with an elastic local update method; in addition, they proposed an n-soft synchronization approach that combines both synchronous and asynchronous approaches. Chen et al. [43] have proposed a federated transfer learning approach for healthcare wearables to securely train global models across different organizations. Fully decentralized FL approaches, in which there is no central server and models are aggregated directly by edge devices, have also been proposed in the literature. Hegedűs et al. [44] have provided a comparison of centralized and decentralized FL and introduced two optimization techniques for decentralized FL: token-based flow control and partitioned model subsampling. Kim et al. [45] have proposed an FL architecture based on blockchain technology to enable secure local model exchange; both verification and reward systems are designed to support the exchange process between edges. The existing works on federated learning have focused on federated training over distributed devices, while our work differs from and complements it, both in the broad aims of our research and in the specific contributions of this paper (as highlighted in Section 2.3 (Research Gap) and elsewhere in the paper).
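Centralized FL aggregation is commonly implemented as a dataset-size-weighted average of client model weights, in the style of FedAvg. The sketch below illustrates that scheme over flat weight vectors; it is a minimal illustration, not the exact algorithm of any work cited above:

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate client model weights by a dataset-size-weighted
    average (FedAvg-style). Each element of client_weights is a
    flat list of floats; client_sizes gives local dataset sizes."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Two clients: the one with twice as much data pulls the average toward it.
agg = fed_avg([[1.0, 2.0], [4.0, 5.0]], client_sizes=[2, 1])
```

Weighting by local dataset size gives clients with more data proportionally more influence over the global model, which is the usual FedAvg convention.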

### 2.1.1. Tiny AI and Edge: Research and Frameworks

Table 1 gives a summary of research papers that utilized Tiny AI models, i.e., lighter versions of AI models on edge devices. Tiny AI models are customized AI models that are optimized or compressed to minimize the model's memory and computation requirements. All the listed works used TensorFlow Lite [46] to optimize and deploy the AI models locally. For each work in the table, the application domain, the specific application within that domain, and the adopted AI model are specified. Zebin et al. [47] have designed and implemented a tiny CNN model for human activity recognition on mobile devices. In the domain of autonomous vehicles, a traffic sign recognition Tiny DL model based on the Single Shot MultiBox Detector (SSD) has been developed by Benhamida et al. [48]. Alsing [49] has evaluated different Tiny AI models for note detection in a smart home environment. In the security domain, Zeroual et al. [50] have developed a face recognition authentication model on mobile devices to authenticate users before they access cloud services, and Ahmadi et al. [51] have proposed an intelligent local malware detection approach for Android devices based on a random forest classifier. Soltani et al. [52] have developed a Tiny Deep CNN model for signal modulation classification that identifies a signal's SNR region for wireless networks. A Tiny AI model on Unmanned Aerial Vehicles (UAVs) has been proposed by Domozi et al. [53] to detect objects in search and rescue missions.
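A common compression step behind Tiny AI models is post-training quantization, which maps 32-bit float weights to 8-bit integers. The sketch below illustrates the affine int8 scheme in pure Python as a teaching aid; production toolchains such as TensorFlow Lite implement this (per-tensor or per-channel) far more carefully:

```python
def quantize_int8(weights):
    """Affine quantization of float weights to int8 plus a scale and
    zero point, so that w ~= scale * (q - zero_point). Illustrative only."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0   # avoid zero scale for constant weights
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [scale * (v - zero_point) for v in q]

q, scale, zp = quantize_int8([-1.0, 0.0, 0.5, 1.0])
recovered = dequantize(q, scale, zp)  # close to the originals, at 1/4 the storage
```

The memory saving (8 bits vs. 32 bits per weight) is what lets such models fit on microcontroller-class and mobile edge devices, at the cost of a small reconstruction error.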

Regarding the deployment of AI at the edge, a few frameworks have been proposed and developed to run AI models on edge devices. These include Caffe2 [54], TensorFlow Lite [46], and PyTorch Mobile [55]. These frameworks support various edge platforms, such as Android, iOS, and Linux, and customize AI models to fit within resource-constrained edge devices.

### 2.1.2. Distributed AI in Healthcare

EdgeAI is still in its infancy and is attracting more researchers and companies seeking to bring AI closer to users [34]. It aims to provide distributed, low-latency, reliable, scalable, and private AI services [35]. Many applications that require real-time responses can utilize EdgeAI, such as autonomous vehicles, smart homes, smart cities, and security [47–53]. Some works have considered distributed AI for healthcare, which is the focus of this work too. Zebin et al. [47] have proposed a human activity recognition framework that runs on mobile devices; they used batch normalization for CNN recognition tasks on data from wearable sensors. Isakov et al. [31] have developed a monitoring and detection system that aims to accurately detect falls through the use of mobile devices; the mobile devices are used for preprocessing, and non-linear analysis is performed in the cloud. Hassan et al. [32] proposed a remote pain monitoring system based on a fog architecture that processes patient biopotential signals locally and detects pain in real time; some of the processing is offloaded to the cloud in case of local resource shortage, and remote access is provided through a web application. Muhammed et al. [56] have addressed the challenges of meeting network quality of service (QoS) requirements, including network latency, bandwidth, and reliability, for delivering real-time mobile healthcare services.
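A local-first offloading policy of the kind used in such fog-based health monitoring systems (process locally, offload to the cloud only under resource shortage) can be sketched as a simple dispatch rule. The threshold and names below are hypothetical, not taken from any cited system:

```python
# Hypothetical sketch of a local-first offloading policy: run the
# task on the fog node unless its current load exceeds a capacity
# threshold, in which case the task is sent to the cloud.

CAPACITY_THRESHOLD = 0.8  # illustrative fog utilization limit

def dispatch(task, fog_load):
    if fog_load < CAPACITY_THRESHOLD:
        return ("fog", task)   # low latency; data stays local
    return ("cloud", task)     # offload when fog resources run short

placements = [dispatch("analyze-signal", load) for load in (0.3, 0.95)]
```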


**Table 1.** Related works: Tiny AI at the edge.



### *2.2. Skin Lesion Diagnosis*

Health information technology systems such as clinical decision support (CDS) systems are designed to support physicians and other health professionals in their decision-making tasks. AI-based Computer-Aided Diagnosis (CAD) systems have attracted rapidly growing interest for the diagnosis of skin disease [57]. They are used as a "second opinion" tool that assists radiologists and physicians in image interpretation and disease diagnosis. Skin cancer case rates have risen continuously around the world, and given that it is the most common cancer in the United States and worldwide [58], more research must be done in this area, especially since an accurate and early diagnosis of skin cancer would improve treatment and survival rates [59]. Computer vision algorithms are used to analyze images and identify abnormal structures, which helps professionals detect the earliest signs of abnormality and supports their evaluation. Clinical imaging and dermatoscopy are now considered an essential part of dermatology clinics for diagnosis, treatment, follow-up, and documentation [60,61]. Skin diagnosis (including identifying benign and malignant skin lesions) is an important factor in the early detection and prevention of skin cancer. Automated skin diagnosis using dermoscopy and AI might also let patients avoid skin biopsy [62]. DL is one of the AI approaches that is becoming very popular for the dermoscopic image classification problem. This has been boosted by the introduction of many publicly available dermoscopic datasets [57]. These datasets consist of labeled images belonging to various types of benign and cancerous skin lesions. Training a DL model on such datasets can create an appropriate and accurate model for CAD systems.

Several research studies have been proposed in the literature aiming to improve the accuracy of skin diagnosis [63–72]. Convolutional neural networks (CNN) are adopted in most proposals [63–71], except in [72] where the authors proposed fuzzy classification for skin lesion segmentation. Some proposals have considered other information or data in the diagnosis process such as demographic and medical history [66] and sonification (audio) [73]. Pretrained CNN models have been retrained and evaluated in [63,65–69,71] and multiple CNN models have been ensembled in [64,66,69,70,73]. A review of DL segmentation, classification, and pre-processing techniques for skin lesion detection is provided in [74]. Table 2 summarizes the literature that has been reviewed in this subsection, related to skin disease diagnosis.
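Model ensembling of the kind used in several of the cited works typically combines the per-class probabilities from multiple classifiers, for example by averaging them and picking the class with the highest mean score. The sketch below shows that common scheme with hypothetical numbers; the cited works' exact ensembling strategies vary:

```python
# Minimal sketch of probability-averaging ensembling across classifiers.

def ensemble_predict(per_model_probs):
    """per_model_probs: one per-class probability list per model,
    for a single image. Returns the index of the class with the
    highest mean probability across models."""
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    mean = [sum(p[c] for p in per_model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__)

# Three hypothetical models scoring one lesion image over 3 classes
# (e.g., nevus, melanoma, keratosis); the third model disagrees but is outvoted.
pred = ensemble_predict([[0.2, 0.7, 0.1],
                         [0.1, 0.6, 0.3],
                         [0.5, 0.3, 0.2]])
```

Averaging probabilities tends to reduce the variance of any single model's errors, which is why ensembles often outperform their individual members on dermoscopic classification benchmarks.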

**Table 2.** Related works: skin disease diagnosis.



