#### *3.4. Communications Subsystem*

G-IoT devices can make use of different technologies for their communications interfaces. The communications with the cloud are usually through the Internet or a wired intranet, so this section focuses on the energy efficiency of the wireless communications technologies used by G-IoT nodes and edge devices. Table 1 compares the characteristics of some of the most relevant communications technologies according to their power consumption, operating band, maximum range, expected data rate, their relevant features, and main applications.


**Table 1.** Main characteristics of the most relevant communications technologies for G-IoT nodes.

G-IoT node communications need to provide a trade-off between features and energy consumption. For example, Near-field Communication (NFC) [50] delivers a reading distance of up to 30 cm, but NFC tags usually do not need batteries, since they are powered by the readers through inductive coupling. NFC is derived from Radio Frequency Identification (RFID), a technology which, despite certain security constraints [51], has experienced significant growth in home and industrial scenarios in recent years [52,53] thanks to its very low power consumption. It must be noted that RFID and NFC are essentially aimed at identifying items, but they can also be used for regular wireless communications among G-IoT nodes (e.g., for reading embedded sensors). Nonetheless, other technologies have been devised to provide more complex interactions. For instance, Bluetooth implementations such as Bluetooth Low Energy (BLE) can provide wireless communication distances between 10 and 100 m [54] and very low energy consumption thanks to the use of beacons [55], lightweight IoT devices that transmit packets at periodic time intervals.

The widely popular Wi-Fi (i.e., the IEEE 802.11 family of standards) can also provide indoor and outdoor coverage easily and inexpensively for IoT nodes; however, its energy consumption is usually relatively high and grows with the data rate. Nonetheless, new IEEE 802.11 standards have been proposed in recent years in order to reduce energy consumption. For instance, Wi-Fi HaLow (IEEE 802.11ah) offers low power consumption (comparable with Bluetooth) while maintaining relatively high data rates and a wider coverage range.

In terms of green communications, the following are currently the most popular and promising technologies:

• ZigBee [56]. It was conceived for deploying Wireless Sensor Networks (WSNs) that achieve low overall energy consumption by remaining asleep most of the time and waking up only periodically. In addition, ZigBee networks are easy to scale, since they can form mesh networks to extend the IoT node communications range.
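The lifetime benefit of such a sleep/wake duty cycle can be approximated with a simple energy-budget calculation. The sketch below uses illustrative current and timing figures, not values taken from any particular ZigBee device:

```python
def avg_current_ma(active_ma, sleep_ua, active_ms, period_ms):
    """Average current (mA) of a node that wakes for active_ms every period_ms."""
    duty = active_ms / period_ms
    return active_ma * duty + (sleep_ua / 1000.0) * (1.0 - duty)

def battery_life_days(capacity_mah, avg_ma):
    """Runtime of an ideal battery under the computed average current."""
    return capacity_mah / avg_ma / 24.0

# Illustrative figures: 30 mA awake, 1 uA asleep, 20 ms active every 10 s,
# powered by a 1000 mAh battery.
i_avg = avg_current_ma(30.0, 1.0, 20.0, 10_000.0)
print(f"average current: {i_avg:.4f} mA")
print(f"battery life: {battery_life_days(1000.0, i_avg):.0f} days")
```

Even with a relatively high active current, the 0.2% duty cycle keeps the average draw around 61 µA, which is why duty cycling dominates WSN energy budgets.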


#### *3.5. Green Control Software*

A significant number of recent publications propose different techniques and protocols for network control and power saving. For instance, there are G-IoT protocols for interference reduction, optimized scheduling (e.g., selectively switching off inactive sensor nodes and putting them into deep sleep mode), resource allocation and access control, temporal and spatial redundancy, cooperative techniques in the network, dynamic transmission power adjustment, and energy harvesting [6].

Power-efficient network routing is also a hot topic. For instance, Xie et al. [60] reviewed recent works on energy-efficient routing and proposed a novel method for relay node placement. Other authors focused on solutions for service-aware clustering [61]. Another interesting work can be found in [62], where the authors present an energy-efficient IoT architecture able to predict the adequate sleep interval of sensors. The experimental results show significant energy savings for sensor nodes and improved utilization of cloud resources. Nonetheless, this solution is not valid for applications with real-time requirements or that require constant availability. Finally, recent approaches such as [63] propose solutions that combine distributed energy-harvesting-enabled mobile edge computing offloading systems with on-demand computing resource allocation and battery energy level management.
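The general idea of adapting a sensor's sleep interval to the observed signal dynamics can be sketched as follows. This is a minimal illustration, not the prediction algorithm of [62]; all function names, thresholds, and intervals are hypothetical:

```python
def next_sleep_interval(readings, min_s=1.0, max_s=300.0, threshold=0.5):
    """Pick a sleep interval from the recent rate of change of a sensor.

    Fast-changing signals get short intervals; stable signals sleep longer.
    All parameters are illustrative.
    """
    if len(readings) < 2:
        return min_s
    # Mean absolute change between consecutive samples.
    deltas = [abs(b - a) for a, b in zip(readings, readings[1:])]
    activity = sum(deltas) / len(deltas)
    if activity >= threshold:
        return min_s
    # Linearly stretch the interval as activity drops toward zero.
    return min_s + (max_s - min_s) * (1.0 - activity / threshold)

print(next_sleep_interval([20.0, 20.1, 20.0, 20.1]))  # stable: long sleep
print(next_sleep_interval([20.0, 25.0, 18.0]))        # volatile: short sleep
```

Note that any such heuristic shares the limitation pointed out above: a node asleep for minutes cannot serve applications with real-time requirements.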

#### *3.6. Energy Efficient Security Mechanisms*

A number of attacks can be performed to break the confidentiality, integrity, and availability of IoT/IIoT networks (e.g., jamming, malicious code injection, Denial of Service (DoS) attacks, Man-in-the-Middle (MitM) attacks, and side-channel attacks) [64]. To protect against such attacks, a secure deployment of G-IoT networks should involve three main elements: the architecture, the hardware, and the security mechanisms across the different devices.

The resource-constrained nature of IoT devices, especially IoT nodes, imposes limitations on the inclusion of complex protocols to encrypt and secure communications [65]. This is particularly challenging when implementing cryptosystems that require substantial computational resources. Hash functions, symmetric cryptography, and public-key cryptosystems (i.e., asymmetric cryptographic systems such as Rivest–Shamir–Adleman (RSA) [66], Elliptic Curve Cryptography (ECC) [67,68], or Diffie–Hellman (DH) [69]) are among the most used cryptosystems.
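The relative cost of cryptographic primitives can be compared with a simple micro-benchmark. Execution time is only a crude proxy for energy consumption, and Python's standard library limits the comparison to hash functions, but the same approach applies to symmetric and asymmetric primitives on real devices:

```python
import hashlib
import time

def throughput_mb_s(name, payload, rounds=200):
    """Rough throughput of a hash primitive; time serves as an energy proxy."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(name, payload).digest()
    elapsed = time.perf_counter() - start
    return (len(payload) * rounds / 1e6) / elapsed

payload = bytes(64 * 1024)  # 64 KiB test buffer
for name in ("md5", "sha1", "sha256", "sha512"):
    print(f"{name:>8}: {throughput_mb_s(name, payload):8.1f} MB/s")
```

On a constrained MCU without hardware acceleration, the gaps between primitives are typically much wider than on a desktop CPU, which is precisely why scheme selection matters for G-IoT nodes.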

Public-key cryptosystems are essential for authenticating transactions and are part of Internet standards such as Transport Layer Security (TLS) (TLS v1.3 [70]), currently the most widely deployed solution for securing TCP/IP communications. Regarding the cipher suites recommended for TLS, RSA and Elliptic Curve Diffie–Hellman Ephemeral (ECDHE) are the most popular ones.
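The cipher suites a TLS client is prepared to negotiate can be inspected with Python's standard `ssl` module; the exact list depends on the underlying OpenSSL build, so the output below will vary by system:

```python
import ssl

# Default client context: TLS 1.2+ with a modern cipher suite selection.
ctx = ssl.create_default_context()
names = [c["name"] for c in ctx.get_ciphers()]

# TLS 1.3 suites carry no key-exchange prefix; TLS 1.2 ECDHE suites do.
tls13 = [n for n in names if n.startswith("TLS_")]
ecdhe = [n for n in names if n.startswith("ECDHE-")]

print("TLS 1.3 suites:", tls13)
print("ECDHE (TLS 1.2) suites:", ecdhe[:4], "...")
```

On an IoT device, trimming this list to a single ECDHE suite is a common way to shrink the TLS handshake and its energy cost.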

The execution of cryptographic algorithms must be fast and energy efficient while still providing adequate security levels. Such a trade-off has attracted scientific attention and is currently an active area of research [71], especially since recent advances in computation have made it easy to break certain schemes (e.g., 1024-bit RSA is broken [72]); however, few articles in the literature address the impact of security mechanisms on energy consumption in G-IoT systems. For instance, in [42], the authors compare the energy consumption of different cryptographic schemes, showing that, at the same security level, some schemes are clearly more efficient in terms of energy and data throughput than others when executed on certain IoT devices.

Moreover, hardware acceleration can be used for keeping energy consumption and throughput values at a reasonable level when executing public-key cryptography algorithms [73]. Furthermore, the use of specific hardware can also speed up the execution of cryptographic algorithms such as hash algorithms [74] or block ciphers [75].

#### *3.7. G-IoT Carbon Footprint*

The carbon footprint (or carbon dioxide emissions coefficient) measures the amount of greenhouse gases (including CO2) caused by human or non-human activities. In the case of developing and using a technology, the carbon footprint spans its whole life cycle: from the design stage to the recycling of products. This is especially critical for IoT, since a large number of connected devices is expected in the coming years (up to 30.9 billion in 2025 [76]), which will consume a significant amount of electricity and, as a consequence, emit a high volume of carbon dioxide into the environment. G-IoT has emerged as an attractive research area whose objective is to study how to minimize the environmental impact of deploying IoT networks in smart homes, factories, or smart cities [77].
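A first-order estimate of the operational (use-phase) footprint of such a fleet can be obtained by multiplying device count, per-device power, and a grid carbon intensity. The per-device power and the 0.4 kgCO2/kWh intensity below are illustrative assumptions; real grid intensities vary widely by country and year:

```python
def fleet_co2_tonnes(n_devices, avg_power_w, hours_per_year=8760.0,
                     grid_kg_co2_per_kwh=0.4):
    """Annual operational CO2 (tonnes) for a fleet of always-on devices."""
    kwh = n_devices * avg_power_w * hours_per_year / 1000.0
    return kwh * grid_kg_co2_per_kwh / 1000.0

# 30.9 billion devices (the 2025 projection cited above) at 1 W each:
print(f"{fleet_co2_tonnes(30.9e9, 1.0):.3e} tonnes CO2 per year")
```

Even this rough calculation yields emissions on the order of a hundred million tonnes per year, and it excludes the manufacturing and recycling stages of the life cycle.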

The following are some of the challenges that must be faced in order to reduce IoT network carbon footprint and environmental impact [78,79]:


such a warning, carbon footprint estimations were performed in order to determine the emissions related to the construction and operation of a datacenter [87].


#### **4. AI and Edge Computing Convergence**

As previously mentioned in Section 2.3.3, AI can be broadly defined as a science capable of simulating human cognition in order to incorporate human intelligence into machines. Machine Learning (ML) can be seen as a specific subset of AI: a technique for training algorithms that focuses on empowering computer systems with the ability to learn from data, make accurate predictions and, therefore, make decisions. The training stage in ML involves collecting large amounts of data (the training set) to train an algorithm that allows the machine to learn from the processed information. After training, the algorithm is used for inference on new data [90]. Deep Learning (DL) is a subset of ML that can be seen as its natural evolution. DL algorithms are inspired by the cognitive processing patterns of the human brain (i.e., by its ability to identify and classify patterns) and are trained to perform the same tasks in computer systems. By analogy, the human brain typically attempts to interpret a new pattern by labeling it and performing a subsequent categorization [91]. Once new information is received, the brain attempts to compare it to a known reference before reasoning, which is conceptually what DL algorithms do (e.g., Artificial Neural Network (ANN) algorithms aim to emulate the way the human brain works). In [91], Samek et al. identified two major differences between ML and DL:


The use of such AI techniques depends not only on the hardware specifications and the available computational power, but also on the adopted inference approach [92].

#### *4.1. AI-Enabled IoT Hardware*

AI-enabled IoT devices are paving the way for new and increasingly complex cyber–physical systems (CPSs) in distinct application domains [93–95]. The increasing complexity of such devices is typically specified based on SWaP requirements (i.e., reduced Size, Weight, and Power) [96]. When considering IoT/IIoT ecosystems, changes in SWaP requirements, as well as in unit cost, may impact the overall performance and functionality of the end devices; since the number of devices tends to increase at a steady pace, the cost per unit becomes more and more relevant. Note that the number of deployed devices is expected to increase massively in the coming years, with many of them operating as sensors and/or actuators, which will demand increasing processing power to enable effective edge AI deployment. On the other hand, portability is also relevant, so power will often come from an external battery or an energy harvesting subsystem, which imposes several challenges on the design of AI-enabled IoT devices. For example, the study on low-power ML architectures presented in [97] shows that sub-mW designs can potentially be deployed in "always-ON" AI-enabled IoT nodes.
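What sub-mW operation buys in practice can be illustrated with an ideal-battery runtime estimate. The coin-cell capacity and the two load figures below are illustrative sub-mW examples, not measurements from any cited design:

```python
def runtime_days(battery_mah, voltage_v, load_uw):
    """Runtime of an ideal battery under a constant micro-watt load."""
    energy_wh = battery_mah / 1000.0 * voltage_v
    return energy_wh / (load_uw / 1e6) / 24.0

# CR2032-class coin cell (~225 mAh, 3 V) powering always-on sub-mW nodes:
print(f"{runtime_days(225.0, 3.0, 674.0):.0f} days")  # ~700 uW load
print(f"{runtime_days(225.0, 3.0, 100.0):.0f} days")  # ~100 uW load
```

The jump from weeks to many months between the two loads shows why every sub-mW saving matters for "always-ON" operation, and why energy harvesting becomes viable only at the lower end of this range.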

#### 4.1.1. Common Edge-AI Device Architectures

The G-IoT hardware previously described in Section 3.2 has evolved in recent years, as illustrated in Figure 4, in order to provide AI-enabled functionality. Thus, basic IoT hardware (represented at the top of Figure 4) typically uses a traditional computing approach that combines an embedded processor (CPU) or a microcontroller (MCU) with on-board memory, digital (e.g., SPI, I2C, 1-Wire) and analog (ADCs, DACs) sensor/actuator interfaces, and basic connectivity (e.g., Wi-Fi, Bluetooth).

AI-enabled IoT device architectures (depicted in the middle of Figure 4) use a near-memory computing approach based on a multicore CPU or an FPGA, and typically include external sensors and actuators, as well as extended connectivity options such as NB-IoT, LoRaWAN, or 5G/6G support.

Lastly, an AI-specific IoT device also includes cognitive capabilities and typically uses an in-memory computing approach, which may be supported by a dedicated AI SoC, specifically included to execute learning algorithms (this architecture is depicted at the bottom of Figure 4). IoT devices are getting increasingly powerful and computationally efficient as new SoCs with integrated AI chips become available. For example, the usage of FPGAs in AI-enabled IoT devices allows high-speed inference, parallel execution, and the implementation of application-specific computational architectures without the need for expensive ASICs; however, the total power consumption may be a problem when using FPGAs in power-sensitive applications [96].

**Figure 4.** Basic, AI-enabled and AI-specific IoT device architectures.

4.1.2. Embedded AI SoC Architectures

Embedded AI SoCs are used in specific IoT architectures [98], allowing for the execution of ML algorithms directly on the end device, thereby detecting patterns and trends in data and enabling the transmission of low-bandwidth data streams with contextual information to enhance decision-making and empower prognosis through the use of in-device prediction models and ML, as represented at the bottom of Figure 4. In [96], Mauro et al. achieved significant power savings in both logic and SRAM design by using Binary Neural Networks (BNNs). BNNs enable the deployment of deep models on resource-constrained devices [99], because they can be trained to produce outcomes comparable to their full-precision alternatives while maintaining a smaller footprint, a more scalable structure, and better error resilience. Such characteristics enable the implementation of completely programmable SoC IoT end-devices capable of executing hardware-accelerated and software-defined algorithms at ultra-low power, reaching 22.8 Inference/s/mW while using 674 μW [98].
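The efficiency of BNNs comes from constraining weights and activations to {−1, +1}, so that a multiply–accumulate reduces to a bitwise XNOR followed by a popcount. The following pure-Python sketch illustrates that arithmetic trick only; it is not the hardware implementation of [96,98]:

```python
def binarize(xs):
    """Map real values to {-1, +1}, packed as bits (bit set means +1)."""
    bits = 0
    for i, x in enumerate(xs):
        if x >= 0:
            bits |= 1 << i
    return bits

def bnn_dot(a_bits, w_bits, n):
    """Dot product of two {-1,+1} vectors of length n via XNOR + popcount.

    matches = popcount(~(a ^ w)); dot = matches - mismatches = 2*matches - n.
    """
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)
    matches = bin(xnor).count("1")
    return 2 * matches - n

a = [0.3, -1.2, 0.7, -0.1]   # sign pattern: +, -, +, -
w = [1.0, -0.5, -0.9, 0.2]   # sign pattern: +, -, -, +
print(bnn_dot(binarize(a), binarize(w), 4))  # → 0 (two matches, two mismatches)
```

In silicon, one wide XNOR gate plus a popcount tree replaces an array of multipliers, which is what makes sub-mW inference rates such as the one quoted above attainable.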

#### 4.1.3. AI-Enabled IoT Hardware Selection Criteria

Running an AI model on an AI-enabled IoT device presents four main advantages when compared with the classical cloud-based approach:


Table 2 compiles several AI-enabled IoT hardware boards that are able to run ML libraries such as TensorFlow Lite [100]. TensorFlow Lite is an open-source ML library specifically designed for resource-constrained IoT devices, which typically use MCU-based architectures.


**Table 2.** AI-enabled IoT hardware compatible with TensorFlow Lite.

#### *4.2. Edge Intelligence or Edge-AI*

Typically, in cloud-centric architectures, IoT devices transfer data to the cloud through an Internet gateway. In this architecture, the raw data produced by IoT devices are pushed to a centralized server without processing; however, since IoT devices are becoming more efficient and powerful, new possibilities arise at the network edge, enabling real-time intelligent processing with minimal latency. Edge Intelligence (EI) or Edge-AI are the common names given to this approach, whose performance is often expressed in terms of model accuracy and overall latency [107].

A common IoT device (also known as a "dumb" device) tends to generate large quantities of raw, low-quality data, which may have no operational relevance. In most cases, data are noisy, intermittent, or change slowly, being useless during specific periods. Moreover, the management and transmission of these useless data streams consume vital power and tend to be bandwidth-intensive. On the other hand, the inclusion of in-device/edge intelligence reduces data dimensionality by turning data into relevant information, lowering power consumption, latency, and the overall bandwidth needs. Intelligence at the edge of the network enables the distribution of the computational cost among edge devices. In this computational approach, data can be classified and aggregated before being transmitted up to the cloud. As a result, only information with historical value is archived, which can later be used for tuning prediction models and optimizing cloud-based processing.
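The classify-and-aggregate step described above can be sketched as a window summary that suppresses flat, low-information windows before uplink. Field names and the noise threshold are illustrative, not drawn from any cited system:

```python
def summarize_window(samples, noise_floor=0.05):
    """Reduce a window of raw readings to a compact summary for uplink.

    Returns None when the window is flat (nothing worth transmitting);
    the noise_floor threshold is an illustrative assumption.
    """
    lo, hi = min(samples), max(samples)
    if hi - lo < noise_floor:
        return None  # slowly changing signal: skip this uplink entirely
    return {
        "n": len(samples),
        "mean": sum(samples) / len(samples),
        "min": lo,
        "max": hi,
    }

print(summarize_window([21.00, 21.01, 21.02]))     # None: nothing to send
print(summarize_window([21.0, 23.5, 22.0, 24.1]))  # compact 4-field summary
```

Replacing hundreds of raw samples with a four-field summary (or no transmission at all) is where the power and bandwidth savings of edge intelligence come from.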

4.2.1. Model Inference Architectures

The three major Edge-AI computing paradigms are [108]:


Given the limited resources typically available in most IoT devices, bringing AI to the edge can be challenging. Reductions in model inference time have been achieved successfully, but at the cost of decreasing the overall model inference accuracy. According to Merenda et al. [109], to effectively run an AI model (after the compression stage) on an embedded IoT device, the hardware selection must be performed carefully.
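A common ingredient of the compression stage is post-training quantization, which trades a small accuracy loss for a roughly 4× smaller footprint when going from 32-bit floats to 8-bit integers. The following is a minimal symmetric-quantization sketch under the assumption of a nonzero weight vector; it illustrates the idea, not the specific method of [109]:

```python
def quantize(weights, bits=8):
    """Symmetric post-training quantization of weights to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    # Scale chosen so the largest-magnitude weight maps to +/- qmax.
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for error inspection."""
    return [v * scale for v in q]

w = [0.82, -0.41, 0.05, -0.77]
q, scale = quantize(w, bits=8)
err = max(abs(a - b) for a, b in zip(w, dequantize(q, scale)))
print(q, f"max abs error = {err:.4f}")
```

The reconstruction error is bounded by half the quantization step, which is why 8-bit inference often loses only a fraction of a percent of accuracy while quartering memory and bandwidth needs.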
