**1. Introduction**

We are living in unprecedented times. Artificial intelligence (AI) has taken us by storm, helping us to make decisions in everything we do, even in finding the "true love" of our life and selecting a "significant other" [1]. Siri, Cortana, Google Assistant, Bixby, Alexa, Uber, Databot, Socratic, and Fyle are among the many apps that we use on an hourly basis, if not non-stop. The number of domains benefitting from AI keeps growing, including recommender systems, autonomous vehicles, renewable energy, agriculture, healthcare, transportation, security, finance, smart cities and societies, and the list goes on [2–4]. The global market for AI is estimated to reach USD 126 billion in 2025, up from USD 10.1 billion in 2018 [5].

AI allows us to embed "smartness" in our environments by intelligently monitoring and acting on them [6]. The Internet of Everything (IoE) extends the Internet of Things (IoT) paradigm and integrates the various entities in this ecosystem, including sensors, things, services, people, and data [7]. The grand challenge for such IoE-enabled smart environments relates to the 4Vs of big data analytics [8] (volume, velocity, variety, and veracity): devising optimal strategies for the migration and placement of data and analytics in these ubiquitous environments. The networking infrastructure will have to be smart to support these services and address these challenges.

Various deployments of the Fifth Generation (5G) of wireless systems have begun to appear across the globe, promising mobile internet at unprecedented speeds. However, a radical change is needed to support extreme-scale ubiquitous AI services [9–12]. The Sixth Generation (6G) networks pledge this through next-generation softwarization, heterogeneity, and configurability of networks [13,14]. 6G will provide much higher speeds, reliability, capacity, and efficiency at lower latencies [15] through various enabling technologies, such as higher spectrum and satellite communications [9,10,16,17], the use of AI to optimize network operations, and the use of fog and edge computing [18,19]. Figure 1 (see Section 2 for elaboration) depicts an envisioned view of smart societies enhanced with 6G and IoE technologies, showing also the distinguishing characteristics of 6G.

**Figure 1.** Sixth generation (6G)-internet of everything (IoE) enhanced smart societies.

The work on 6G is in its infancy and requires the community to conceptualize and develop its design, implementation, deployment, and use cases. Towards this end, this paper proposes a framework for Distributed AI as a Service (DAIaaS) provisioning for IoE and 6G environments. The AI service is "distributed" because the actual training and inference computations are divided into smaller, concurrent, computations suited to the large, medium, and smaller resources available with cloud, fog, and edge layers (see Figure 1). The AI service could be delivered by the Internet Service Providers (ISP) or other players. Multiple DAIaaS provisioning configurations for distributed training and inference are proposed to investigate the design choices and performance bottlenecks of DAIaaS. Specifically, we have developed three case studies (smart airport, smart district, and distributed AI delivery models) with eight scenarios (various usage configurations of cloud, fog, and edge layers) comprising nine applications and AI delivery models (smart surveillance, passport and passenger control, federated learning, etc.) and 50 distinct sensor and software modules (camera, ultrasonic sensor, electric power sensor, smart bins, data pre-processing, AI model building, data fusion, motion detector, object tracker, etc.).

The smart airport case study models the recently inaugurated King Abdulaziz International Airport (KAIA), Jeddah, and the smart district case study models the King Abdullah Economic City (KAEC), a smart city, both in Saudi Arabia. These two case studies model real-life physical environments, provide details of actual sensors and computing operations, and are used to understand and develop the DAIaaS framework and various AI service provisioning strategies. Using the knowledge gained from the first two case studies, the third case study investigates various distributed AI delivery models as a service (DAIaaS) without regard to the specific high-level applications in the underlying environments.

The evaluation of the DAIaaS framework using the three case studies is reported in terms of end-to-end delay, network usage, energy consumption, and financial savings, with recommendations to achieve higher performance. The results show a range of scenarios and configurations and how these affect the performance metrics and the related costs. Moreover, we demonstrate through these investigations and results that the challenging task in designing and deploying DAIaaS is service placement because, for example, edge devices might fail due to their limited computational capabilities when the computing resource demands are high (as is the case for AI applications). Similarly, moving data too often to the cloud may make it impossible to provision the latencies required by the edge devices.

The benefit of the DAIaaS framework is to standardize distributed AI provisioning at all layers of the digital infrastructure. It will allow application, sensor, and IoE developers to focus on domain-specific details, relieve them of the how-to of distributed training and inference, and help systemize and mass-produce technologies for smarter environments. Moreover, to address the challenges noted by Viswanathan and Mogensen [19] and others, DAIaaS will provide unified interfaces that facilitate joint software development across different application domains. Therefore, we believe this work will have a far-reaching impact on developing next-generation digital infrastructure for smarter societies. To the best of our knowledge, this is the first work in which distributed AI as a service has been proposed, modeled, and investigated.

The rest of this paper is organized as follows. Section 2 provides the background and reviews the related works. Section 3 explains our methodology and design, detailing how the various scenarios, applications, modules, and networks are modeled. Sections 4–6 give details that are specific to each of the three case studies and provide the performance analysis. Finally, conclusions are presented with future directions in Section 7.

### **2. Background and Related Works**

We revisit Figure 1, which depicts a potential view of smart societies enhanced with 6G and IoE technologies. We consider the digital infrastructure of smart societies to be organized in three layers: the IoE, Fog, and Cloud Layers. The IoE Layer at the bottom comprises devices, sensors, and actuators from various application domains (transportation, energy, etc.). The devices and sensors generate big data [20–22] that must be continuously processed and analyzed to make smart decisions and communicated to the IoE devices for actuation and other purposes. A sensor's data may also be aggregated with that of other sensors for context-awareness, enhanced decision making, exploratory analyses, cross-sectoral and global optimizations, and other reasons (e.g., see [23]). The Fog Layer consists of fog nodes placed at various 6G connection providers (e.g., base stations), closer to the edge devices in the IoE Layer. Fog nodes will provide storage and computation power in proximity to the edge and, with 6G capabilities, will achieve ultra-low latency. The Cloud Layer at the top consists of various types of private, public, and hybrid data centers (clouds) that will provide high computation and storage resources but with higher latency to the edge. Various distinguishing characteristics of 6G are mentioned in the boxes around the figure.

In the rest of the section, we explain and review the works related to the five core technologies used in our work. These are AI, IoE, edge-fog-cloud computing, smart societies, and 6G, discussed in Sections 2.1–2.5, respectively.

### *2.1. Artificial Intelligence (AI)*

Artificial intelligence is a field of study that focuses on the creation of intelligent machines that can learn, work, and react intelligently like humans. Deep learning (DL), machine learning (ML), neural networks (NN), pattern recognition, clustering, etc. are tools that can be used to train computers to accomplish specific tasks such as computer vision and natural language processing (NLP) [23–25]. AI models usually rely on data to build their knowledge; therefore, big data collected from a huge number of devices and sensors, as in IoE, has provided the fuel for AI models [4,20–22]. The increasing volume of data generated by many connected, heterogeneous, and distributed objects (IoT/IoE) and the continuous development and evolution of networks and communication technologies have motivated the emergence of Distributed Artificial Intelligence (DAI) [26]. In DAI, an AI model is distributed across multiple agents (or processes) that cooperatively share knowledge, either to act separately or to build global knowledge for the whole system. Agents or sub-models can reside on a single machine or across multiple machines to perform AI training or inference in a distributed or parallel way. One approach is to partition the AI model into sub-models or sub-tasks that can be run concurrently using parallel processing techniques such as pipelining. Wang et al. [27] have developed a framework that pipelines the processing of partitioned NN layers across heterogeneous cores for faster inference. An alternative is data partitioning, where the dataset is split across concurrently running models and the results are aggregated later. These techniques are useful for massive AI models and data. A discussion of model and data parallelism is available in [28,29].

Another approach is Edge Intelligence (EdgeAI), where the AI model is distributed across network edges. Several works have discussed the convergence of edge and AI [30–35]. An AI model can be pre-trained and then modified and optimized to run on resource-constrained edges. A discussion of DL optimizations at both the software and hardware levels for edge AI is given in [36]. Collaboration between the edge and the cloud is also a possible model, where pre-processing and less-intensive computations are placed at the edge and global analysis is located in the cloud. Parra et al. [37] have developed a distributed attack detection system for IoT in which different AI models are used at the edge and in the cloud to provide local and global detection. Federated Learning (FL) is a DAI model in which multiple agents collaboratively share their local knowledge for faster convergence and better decisions. The FL concept, applications, challenges, and methods have been reviewed in [38,39]. Smith and Hollinger [40] developed a distributed robotic system whose robots collaboratively share their knowledge for a single goal (environment exploration). On the other hand, in [23], autonomous vehicles share knowledge to improve their own decisions.
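To make the federated learning idea concrete, the following is a minimal sketch of federated averaging over a toy linear model. The function names, toy data, learning rate, and round counts are our own illustrative assumptions, not an implementation from the cited works; the point is only that raw data stays on the clients while model weights travel to the server.

```python
import random

def local_update(weights, data, lr=0.1):
    """One round of local SGD on a client's private data (toy linear model y = w0*x + w1)."""
    w = weights[:]
    for x, y in data:
        err = (w[0] * x + w[1]) - y   # prediction error on one sample
        w[0] -= lr * err * x          # gradient step for the slope
        w[1] -= lr * err              # gradient step for the intercept
    return w

def federated_average(client_weights):
    """Server step: aggregate client models by simple (unweighted) averaging."""
    n = len(client_weights)
    return [sum(w[i] for w in client_weights) / n for i in range(len(client_weights[0]))]

# Three clients hold private samples of the same underlying relation y = 2x + 1.
random.seed(0)
clients = [[(x, 2 * x + 1) for x in [random.random() for _ in range(20)]]
           for _ in range(3)]

global_w = [0.0, 0.0]
for _ in range(50):                                   # 50 communication rounds
    updates = [local_update(global_w, d) for d in clients]
    global_w = federated_average(updates)             # only weights reach the server
```

After the rounds complete, `global_w` approaches the true parameters (2, 1) even though no client's raw data ever left its device.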

Artificial Intelligence as a Service (AIaaS) has also been considered in [41–43] as a natural extension of the usual "as-a-service" ("aaS") delivery models available with cloud providers. It allows developers to focus on their domain-specific details and conveniently add AI capabilities to their software. Several works have shown the benefits of AIaaS in developing and supporting applications that require AI capabilities. Casati et al. [42] proposed a framework and architecture that facilitate the deployment of cloud-based AIaaS for smarter enterprise management solutions. Milton et al. [43] conducted real experiments utilizing Google's Dialogflow API (a simple AIaaS provided by Google) [44] to develop a chatbot. The AI services provided by the top cloud providers, such as Google, Amazon, Microsoft, and IBM, are discussed in [41].

### *2.2. Internet of Everything (IoE)*

The Internet of Everything (IoE) has emerged as a concept that extends the Internet of Things (IoT) [45] to include processes, people, data, and things [7]. At the core of IoE, sensors are embedded in "everything" to monitor and identify its status and to act intelligently, generating new opportunities for society. There is a variety of sensors designed for different purposes (temperature, pressure, biosensing, light, position, velocity, etc.), which are discussed in [46]. A massive number of sensors is expected to be deployed everywhere to support such applications and many others in different areas, including industrial systems, traffic, smart cities, and healthcare [47,48].

Some IoE works have looked at the challenges of sensor connectivity and of data collection and processing. AlSuwaidan [49] adopted Cloud as a Service (CaaS) and the Fog-to-Cloud concept to overcome the challenge of integrating, storing, and migrating distributed data. Lv and Kumar [50] proposed software-defined sensors for the 6G/IoE network, along with Software Defined Network (SDN) technology, to provide better control. Aiello et al. [51] have developed a self-contextualizing service for IoE that separates the logical part from the physical contexts. Others, such as Badr et al. [52], focused on energy harvesting, and Ryoo et al. [53] covered security and privacy concerns in IoE.

### *2.3. Edge, Fog, and Cloud Computing*

The continuous increase in the number of IoE sensors and edge devices joining the network has required a paradigm shift that pushes data processing closer to the data sources. Edge and fog computing are two architectures that aim to bring processing closer to users at the network edges. While some in the literature do not differentiate between fog and edge [54], we and many others differentiate between them [55,56], depending on where the computation is performed. In edge computing, processes are localized on the edge devices to produce instant results. Fog computing, on the other hand, is an intermediate layer extending the cloud layer that brings the functions of cloud computing closer to the users [57]. Fog nodes are devices that can provide resources for services; they might be resource-limited devices, such as access points, routers, switches, and base stations, or resource-rich machines, such as Cloudlet and IOx [58].

Discussions of fog computing and other edge paradigms are provided in [50,59], and their role in IoT is covered in [60]. Nath et al. [61] proposed an optimization algorithm to manage communication between the IoE cluster and the cloud. Wang et al. [62] adopted imitation learning for online task scheduling in vehicular edge computing. Badii et al. [63] have developed a platform for managing smart mobility and transport at the network edge. Tammemäe et al. [64] proposed a service architecture to support self-awareness in fog and IoE.

### *2.4. Smart Cities, Societies, and Ecosystems*

Smart cities and smart ecosystems employ different information and communication technologies (ICT) to intelligently monitor, collect, analyze, and respond to environmental changes [2,65–75]. Population growth in urban areas and advances in technology have increased the demand for more sustainable cities that adopt smarter, more effective, and more efficient ways to manage the urban area and integrate various aspects of the ecosystem [76]. This includes introducing smartness into infrastructure, operations, services, industries, education, security, and much more. In this context, IoE will be the base that enables and integrates city services, people, things, and data. Deploying sensors all over the city, including those attached to people, such as smartwatches or their mobile devices, will provide great services and unlimited innovation opportunities.

Several works have looked at the design of applications for smart societies. Ahad et al. [77] have developed a smart educational environment based on IoE to produce a learning analytics system that evaluates the learning process and achievements. Al-dhubhani et al. [78] have proposed a smart border security system in which sensors and different sources of data are used to make decisions and take actions. Queralta et al. [79] proposed an IoE-based architecture that employs a heterogeneous group of vehicles to improve traveling quality. Alam et al. [80] have developed an object recognition method for autonomous driving to improve the accuracy of vehicle recognition. Many other proposals on smart societies [81] exist, such as in transportation [25,65,69,71,82], healthcare [6], disaster management [83], logistics [66,84], and more.

### *2.5. Sixth Generation Networks (6G)*

6G is the next generation of cellular networks, expected to overcome the limitations of current fifth-generation (5G) deployments and fulfill the requirements of the future fully connected digital society [10]. Several publications [10,13–19] have discussed the future vision of 6G cellular networks, their requirements, enabling technologies, and challenges. The main challenges come from the expected continuous increase in the number of sensors joining the network and the popularity of IoE-based smart services [9]. Extensive improvements in communication speed and capacity can be achieved by adopting a higher spectrum and utilizing various communication technologies [15]. 6G networks are expected to be ultra-dense heterogeneous networks [85]. Both terrestrial (cellular network) and non-terrestrial (e.g., satellites, drones, and planes) infrastructures will be employed to provide continuous and reliable network services [10]. However, the heterogeneity of network architectures, communication links, devices, and applications will increase the complexity of network control and operation [17]. Therefore, 6G is expected to take 5G softwarization and virtualization to the next level by empowering the network with AI approaches to optimize network operation [13,14]. Moreover, service-oriented operations should offer higher flexibility and looser integration with various network components, which will facilitate the deployment, configuration, and management of new applications and services [16]. Distributed artificial intelligence with user-centric network architectures will be a fundamental component of 6G networks, reducing communication overhead and providing autonomous and real-time decisions [10]. Energy efficiency is also one of the critical requirements for 6G networks and must be taken into account from antenna design to zero-energy nodes for low-rate sensing applications [9]. 6G is expected to have 10–100 times higher energy efficiency than 5G to accommodate joining devices and applications with the lowest-cost and most eco-friendly deployment [11].

To summarize, the digital infrastructures that will support smart societies require rich and flexible AI capabilities. 6G pledges to support ubiquitous AI services; however, the work on 6G is in its infancy and requires the community to contribute to its realization, such as in designing AI models, data management, service placement, job scheduling, and communication management and optimization for both application developers and service providers. Solutions are required to reduce the complexity of these systems and to allow application developers to focus on domain-specific details rather than worrying about the how-to of distributed training and inference. Table 1 summarizes some of the reviewed literature and compares it with our work. Note in the table that none of the published proposals have incorporated all the key technologies for next-generation digital infrastructure. The particular differentiating factor of our work is the DAIaaS framework and its detailed evaluation.


**Table 1.** Summary of relevant research.

### **3. Methodology and Design**

In Section 1, we gave an overview of our methodology in terms of the motivation for the three case studies and their comprising scenarios, applications and AI delivery models, and sensor and software modules. This section discusses our methodology and design and the main components of our simulations in detail. Section 3.1 describes the devices used in the edge, fog, and cloud layers. Section 3.2 introduces the applications and delivery models used in our work and how these are modeled in the simulations. Section 3.3 explains how the network infrastructure is modeled. Finally, Section 3.4 defines the metrics used for performance evaluation.

Software and Hardware: We have used the iFogSim [86] simulation software to model and evaluate DAIaaS. We selected it because it allows simulating a range of applications, modules, placements, data streams, sensors, edges, fogs, cloud datacenters, and communication links. All experiments were executed on the Aziz supercomputer (Jeddah, Saudi Arabia), which comprises 492 nodes with 24 cores each. The supercomputer allowed us to run many large simulations with different configurations concurrently on different nodes.

### *3.1. Cloud, Fog, and IoE Layers*

Figure 1 shows a high-level view of smarter environments, supported by 6G and IoE, comprising three main layers: IoE, Fog, and Cloud; these have been explained in some detail in Section 2. Each layer contains devices with distinct resource capabilities that are represented in the simulations using various parameters. We have determined the values of these parameters considering the specifications of devices available today. Table 2 lists the three types of devices (edge, fog, and cloud devices) and the associated parameters. The computational capabilities of these devices are represented in the simulations by MIPS (Million Instructions Per Second) and RAM (Random Access Memory) values. The communication capabilities are simulated using uplink and downlink bandwidths. Each device is also characterized by specific power consumption in the idle and busy states. For example, one cloud virtual machine (VM) has the highest MIPS and RAM values (220,000 MIPS and 40,000 MB) compared to the fog and edge devices.


**Table 2.** Device configurations.
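The layered device parameters of Table 2 can be sketched as a simple configuration structure. Only the cloud VM's MIPS and RAM values are taken from the text above; the fog and edge figures and the power fields are illustrative assumptions standing in for the table's actual entries.

```python
# Device configurations per layer, in the parameter style of Table 2.
# Cloud VM MIPS/RAM follow the text; other values are illustrative assumptions.
DEVICES = {
    "cloud_vm": {"mips": 220_000, "ram_mb": 40_000,
                 "uplink_bw": 100, "downlink_bw": 10_000,
                 "busy_power_w": 1648.0, "idle_power_w": 1332.0},
    "fog":      {"mips": 6_000, "ram_mb": 8_000,
                 "uplink_bw": 10_000, "downlink_bw": 10_000,
                 "busy_power_w": 107.3, "idle_power_w": 83.4},
    "edge":     {"mips": 1_500, "ram_mb": 1_000,
                 "uplink_bw": 10_000, "downlink_bw": 10_000,
                 "busy_power_w": 87.5, "idle_power_w": 82.4},
}

def most_capable(devices):
    """Return the device name with the highest MIPS rating."""
    return max(devices, key=lambda d: devices[d]["mips"])
```

The ordering cloud > fog > edge in compute capability is what drives the placement trade-offs explored in the scenarios.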

### *3.2. Distributed Applications and AI Delivery Models*

A smart city has various applications running on it simultaneously. Each application (see Sections 4 and 5) and AI delivery model (see Section 6) has a set of modules (*m*) that perform computations on the data they receive. The modules are organized in a directed graph (DG) with edges between them that represent data or workloads (*w*) passing between the modules. This is depicted in Figure 2 using the Smart Surveillance application, which we use in this section as an example. Each workload received by a module is processed, and a new workload is generated according to the configured mapping between workloads and the selectivity rate, in case more than one mapping exists. A workload (*w*) is characterized by its CPU requirement (*wc*), in terms of the million instructions (MI) required by the module to process the workload, and by its network requirement (*wn*), in terms of the bytes to be transferred between two modules over the network.
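The directed-graph view of an application can be sketched as an edge list, with each workload carrying its CPU (*wc*, in MI) and network (*wn*, in bytes) requirements. The *vid\_strm* row matches the figures stated for Table 3 later in this section; the other three workloads' values are illustrative assumptions.

```python
# Smart Surveillance modules as a directed graph: each tuple is
# (source module, destination module, workload name, w_c in MI, w_n in bytes).
WORKLOADS = [
    ("Camera",          "Motion Detector", "vid_strm",        1000, 20000),
    ("Motion Detector", "Obj Detector",    "motion_vid_strm", 2000,  2000),  # assumed values
    ("Obj Detector",    "Obj Tracker",     "obj_location",     500,   500),  # assumed values
    ("Obj Tracker",     "Camera Ctrl",     "cam_ctrl",         200,   100),  # assumed values
]

def successors(module):
    """Modules that receive workloads emitted by `module` in the DG."""
    return [dst for src, dst, *_ in WORKLOADS if src == module]
```

A placement strategy then amounts to assigning each node of this graph to an edge, fog, or cloud device, which is exactly what varies between the IoE-6G and IoE-Fog-6G scenarios.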

**Figure 2.** Smart surveillance application module.

Table 3 lists the various workloads used in the Smart Surveillance application; we will return to it after explaining the application in Figure 2. The figure shows that the modules can be placed in different layers (i.e., edge, fog, or cloud devices) depending on the scenario (IoE-6G versus IoE-Fog-6G). The Smart Surveillance application uses CCTV (Closed-Circuit Television) cameras to detect and track objects in a specific area, such as in [86]. The CCTV cameras generate live video streams. Therefore, this application has high computation requirements, especially in a crowded environment such as an airport or a pedestrian area, where many people and objects must be tracked, identified, and analyzed carefully for security reasons. The application consists of six modules: Camera, Motion Detector, Object Detector (Obj Detector), Object Tracker (Obj Tracker), Camera Control (Camera Ctrl), and User Interface.



The Camera contains the sensor, and Camera Ctrl contains the pan–tilt–zoom (PTZ) actuator, which adjusts the camera zoom depending on the PTZ parameters. The Motion Detector is always located in the smart camera; it receives live video streams (*vid\_strm*) from the Camera and, when motion is detected, transfers the motion video stream (*motion\_vid\_strm*) to the Obj Detector module. The Obj Detector module is located in the cloud in the IoE-6G scenario and in the fog node in the IoE-Fog-6G scenario. It receives video streams (*motion\_vid\_strm*) from the Motion Detector, intelligently detects objects, and activates the Obj Tracker if it has not already been activated for the same object. The Obj Detector module sends two workloads: the detected object (*detected\_obj*) to the User Interface and the object location (*obj\_location*) to the Obj Tracker. The Obj Tracker module is likewise located in the cloud in the IoE-6G scenario and in the fog node in the IoE-Fog-6G scenario. It receives the coordinates of the tracked objects (*obj\_location*) and calculates the PTZ configuration, which is sent to the Camera Ctrl using the camera control workload (*cam\_ctrl*). The User Interface is always located in the cloud and receives a video stream of the tracked objects (*detected\_obj*) from the Obj Detector. Each application contains one or more *application loops*, where a loop is defined as a series of modules (a tuple) over which the end-to-end delay between the start and the end of the loop is measured. The Smart Surveillance application contains one control loop, represented by the tuple of modules (Camera, Motion Detector, Obj Detector, Obj Tracker, Camera Ctrl). The end-to-end delay of each *application loop* is defined in Section 3.4 and is computed as part of the application and network performance.

Table 3 lists the configuration of each workload, specified by its source and destination modules and its resource requirements. For example, Row 1 of the table shows that the workload *vid\_strm* requires 1000 million instructions (MI) and 20,000 bytes to be transferred from the Camera module to the Motion Detector module.

### *3.3. Network Infrastructure*

We have defined five categories of devices: Cloud, Gateway, Fog, Edge, and Sensor/Actuator. These devices operate in different layers, and accordingly, the expected link latency between them varies. Table 4 lists the various types of links (*L*) and their defined latencies (*tl*) in ms. These latencies are set based on the expected 6G link latencies between the layers. For instance, the configured link latency between Cloud and Gateway is set to 100 ms, while the links between Gateway, Fog, and Edge are set to 2 ms because they are closer to each other. The link latency between Edges and their Sensors/Actuators is set to 1 ms because these are expected to be part of the edge devices. We have deliberately used modest latency values compared to the expected 6G latencies to maintain a performance margin.
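The link latencies above can be captured in a small lookup table, from which the latency of any device-to-device path follows by summing its hops. The values are those stated in the text; the helper function and path notation are our own sketch.

```python
# Link latencies (ms) between device categories, per Section 3.3 / Table 4.
LINK_LATENCY_MS = {
    ("Cloud", "Gateway"):          100,
    ("Gateway", "Fog"):              2,
    ("Gateway", "Edge"):             2,
    ("Fog", "Edge"):                 2,
    ("Edge", "Sensor/Actuator"):     1,
}

def path_latency(path):
    """Sum the per-hop latencies of a path given as a list of device categories."""
    return sum(LINK_LATENCY_MS[(a, b)] for a, b in zip(path, path[1:]))
```

For example, a request traversing Cloud, Gateway, Fog, Edge, and Sensor/Actuator accumulates 100 + 2 + 2 + 1 = 105 ms of link latency, which makes the cost of cloud round trips for edge-facing loops immediately visible.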



### *3.4. Performance Metrics*

For evaluation purposes, three performance metrics are monitored: network usage, application loops end-to-end delay, and network energy consumption.

The network usage (*U*) is the average load on the network in bytes per second. *U* is computed by Equation (1), where *tl* represents the latency of a link *l* and *tL* is the set of all link latencies. *wn* is the network requirement of workload *w* and *Wn* is the set of all workloads' network requirements. *T* is the total simulation time.

$$\text{Network usage } U = \frac{\sum_{t_l \in t_L,\, w_n \in W_n} w_n \cdot t_l}{T} \tag{1}$$
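Equation (1) can be sketched directly in code: each network transfer contributes its byte count weighted by the latency of the link it crosses, and the sum is averaged over the simulation time. The function name and the pairing of each transfer as a (bytes, latency) tuple are our own conventions.

```python
def network_usage(transfers, total_time_s):
    """Average network load U per Equation (1).

    transfers: iterable of (w_n, t_l) pairs, where w_n is the workload's
               network requirement in bytes and t_l the link latency it crossed.
    total_time_s: total simulation time T in seconds.
    """
    return sum(w_n * t_l for w_n, t_l in transfers) / total_time_s

# Example: two transfers of 1000 B over a 2 ms link and 500 B over a 1 ms link,
# observed over a 10 s simulation.
u = network_usage([(1000, 2), (500, 1)], total_time_s=10)
```
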

The application loop end-to-end delay allows us to evaluate the response time of the applications in different scenarios. For every application loop type (*a*), we calculate the average end-to-end delay (*Da*) from the first module to the last module in a specific loop using Equation (2). *ts(i)* is the start time and *te(i)* is the end time of loop number (*i*) of type *a*, and *I* is the total number of loops of type *a*.

$$\text{Loop delay } D_{a \in A} = \frac{\sum_{i=1}^{I} \left( t_{e(i)} - t_{s(i)} \right)}{I} \tag{2}$$
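In code, Equation (2) is just the mean of the per-instance delays of one loop type; the function below is a minimal sketch assuming start and end timestamps are recorded per loop instance.

```python
def loop_delay(start_times, end_times):
    """Average end-to-end delay D_a per Equation (2), given the start
    timestamps t_s(i) and end timestamps t_e(i) of I loop instances."""
    assert len(start_times) == len(end_times) and start_times, "need matched, non-empty samples"
    return sum(te - ts for ts, te in zip(start_times, end_times)) / len(start_times)

# Example: two instances of the surveillance control loop, taking 5 and 7 time units.
d = loop_delay([0, 10], [5, 17])
```
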

The network energy consumption is calculated per hour (*Eh*) using Equation (3), where ε is the estimated energy per gigabyte and *U* is the network usage. To estimate the network energy consumption (*Eh*), we used the energy estimation of a gigabyte transfer on the network from [87]. Table 5 shows their forecasted energy consumption rates for the wireless access network (WAN) for 2010, 2020, and 2030. The average energy consumption for 2020, 0.54 kWh/GB, is used as the value of ε.

$$\text{Energy consumption } E_h = 3600 \cdot \varepsilon \cdot U \tag{3}$$


**Table 5.** Estimated network energy consumption [87].

In addition to the network energy consumption, the estimated daily cost of energy (*Cd*) is calculated based on the electricity price in dollars per kWh for Saudi Arabia from [88] using Equation (4), where β is the electricity price in dollars per kWh and *Eh* is the energy consumption per hour.

$$\text{Cost } C_d = 24 \cdot \beta \cdot E_h \tag{4}$$
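Equations (3) and (4) chain naturally: hourly energy follows from the network usage, and daily cost from the hourly energy. The sketch below uses the 2020 value of ε stated above; the electricity price β shown is an illustrative placeholder, not the actual figure from [88], and U is assumed to be converted from bytes/s to GB/s before applying ε.

```python
EPSILON_KWH_PER_GB = 0.54   # 2020 average from Table 5 [87]
BETA_USD_PER_KWH = 0.048    # illustrative electricity price; the paper takes this from [88]

def hourly_energy_kwh(u_bytes_per_s):
    """Equation (3): E_h = 3600 * epsilon * U, with U converted to GB/s."""
    return 3600 * EPSILON_KWH_PER_GB * (u_bytes_per_s / 1e9)

def daily_cost_usd(e_h_kwh):
    """Equation (4): C_d = 24 * beta * E_h."""
    return 24 * BETA_USD_PER_KWH * e_h_kwh

# Example: a sustained network usage of 1 GB/s.
e_h = hourly_energy_kwh(1e9)
c_d = daily_cost_usd(e_h)
```
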

### **4. Case Study 1: Smart Airport**

In this section, we present and discuss our first case study (Smart Airport) including the use of IoE in smart airports and their applications, the experiment design, configuration, and results.

### *4.1. IoE in Smart Airports*

The International Air Transport Association (IATA) predicts that the number of passengers will double to 8.2 billion by 2037 [89]. This expected increase will put huge pressure on the aviation industry, especially on the current infrastructure [89]. IoE will play an important role in enhancing the passenger experience and offering great opportunities for both airlines and airports [90]. Many devices and sensors can be deployed to support smartness in the airport, such as surveillance cameras, radio-frequency identification (RFID) tags, various sensors (e.g., air quality sensors), wearable devices (e.g., watches), avionics devices (e.g., flight recorders), biometric devices, and digital regulators (e.g., for electricity). Using the data collected from these devices, many smart airport applications can be adopted, such as baggage tracking, security applications, indoor navigation systems, and airport operations and administration.

### *4.2. Smart Airport: Architectural Overview*

For the first case study, we selected the new King Abdulaziz International Airport (KAIA), Jeddah, Saudi Arabia, for evaluation. Figure 3 shows the layout of the simulated airport, including the main components of the system. The whole airport landscape is divided into small areas, where each area is covered by a gateway router that works as a fog device. This router provides connectivity for all edge devices in its area. Three types of edge devices are simulated: smart cameras, barcode readers (at gates), and counter devices, each connected to a specific type of sensor or actuator. Figure 4 shows the detailed architectural design of the IoE-6G (a) and IoE-Fog-6G (b) scenarios. Three applications are shown: Smart Surveillance, Smart Gate Control, and Smart Counter. Although both scenarios have the same physical infrastructure, the placement of application modules differs between them. In the following sections, we discuss Smart Gate Control and Smart Counter; the third application, Smart Surveillance, was explained in the previous section.

**Figure 3.** King AbdulAziz International Airport (Smart Airport Layout).


**Figure 4.** Smart airport: (**a**) IoE-6G scenario and (**b**) IoE-Fog-6G scenario.

### *4.3. Application: Smart Counter*

The Smart Counter application is responsible for counter operations, where passengers complete their check-in procedures. The application consists of five modules: Barcode Reader, Check Information (Check Info), Passenger Processing, Authentication Information Provider (Auth. Info Provider), and Boarding Issue. The Barcode Reader uses light sensors to read passports or ID cards. The Check Info module receives passenger information (*info*) from the counter and passes it to the Passenger Processing module. Passenger Processing is located in the cloud in the IoE-6G scenario and in the fog in the IoE-Fog-6G scenario. It receives passenger (*passenger*) orders from the counter and requests passenger information (*passenger\_info\_req*) from the Auth. Info Provider to perform the check-in process. The Auth. Info Provider sends the result back (*passenger\_info\_res*) to the Passenger Processing module. After authentication, the boarding pass information (*boarding\_pass*) is sent to the Boarding Issue actuator. The Auth. Info Provider is always located in the cloud. In the IoE-Fog-6G scenario, it has an extra role, designed to increase data locality and provide faster service at the smart gates: when a passenger checks in at the counter, the passenger's authentication information (*authe\_info*) is sent to the fog node where the passenger's boarding gate is located. In this way, when the passenger arrives at the gate, their information is already available at the fog, which improves the response time of the gate. The Auth. Info Provider periodically sends the passengers' data to the proper fog node, specifically to the Authenticator (Auth.) module of the Gate Control application, which is discussed next.

### *4.4. Application: Smart Gate Control*

The Smart Gate Control application is responsible for processing passengers' boarding passes at the boarding gates. The application consists of five modules: Barcode Reader, Boarding Processor, Authenticator (Auth.), Authentication Information Provider (Auth. Info Provider), and Gate Control (Gate Ctrl). The Barcode Reader uses light sensors to read the boarding pass code. The Boarding Processor module is always located in the smart gate. It receives barcode information (*barcode*) from the Barcode Reader and sends the code to the Auth. for authentication. The Auth. is located in the cloud in the IoE-6G scenario and in the fog node in the IoE-Fog-6G scenario. It receives passenger info (*passenger\_info*) from the Boarding Processor and authenticates the passenger. After that, a decision is sent as a control signal (*gate\_ctrl*) to the Gate Ctrl, which acts accordingly. The Auth. Info Provider is placed in the cloud and is responsible for communication with the Auth. Info Provider in the Smart Counter application.
