### *5.1. Baseline Performance of Fog and Cloud Nodes*

Before studying the performance of the proposed architecture in detail, a preliminary test was carried out to establish a performance baseline for the fog and cloud nodes, since their constrained hardware (in the case of the fog nodes) and remote connectivity (in the case of the cloud nodes) may limit the overall performance of the proposed system. Thus, an Orange Pi Zero Plus and a cloud node were evaluated while running only a Node.js server. Specifically, each test performed 5000 connections on the fog node and 1000 on the remote node (to avoid network congestion issues) at different connection rates, in order to determine each node's maximum throughput up to the point where connection errors arise.

The obtained results are shown in Figure 9, which represents the desired request rate (the one imposed by the tests) versus the rate actually achieved during the tests. As can be observed in the Figure, the desired and real rates are roughly the same up to the point where the node is no longer able to handle the requests and its performance decreases. Specifically, the fog node reaches its peak performance at 300 requests per second, while the cloud node peaks at 200 requests per second. In the case of the fog node, this limit is due to its hardware constraints, while, in the case of the cloud node, it is related to the restrictions of its network (i.e., the network load and the characteristics of the devices involved in processing and routing the requests through the Internet). In any case, these results provide a useful reference for the performance analyses described in the next subsections.
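The saturation point described above can be located programmatically from the desired/achieved rate pairs. The following sketch (in Python, with illustrative data; the rates below are shaped like the fog-node curve but are not the measured values) returns the highest desired rate at which the achieved rate still tracks the demand within a tolerance:

```python
def saturation_point(desired, achieved, tolerance=0.05):
    """Return the highest desired rate (req/s) that the node still
    sustains, i.e., the last point where the achieved rate stays
    within `tolerance` of the desired rate."""
    peak = None
    for d, a in zip(desired, achieved):
        if a >= d * (1 - tolerance):
            peak = d
    return peak

# Illustrative data: the node tracks demand up to ~300 req/s,
# then its achieved rate flattens out.
desired = [100, 200, 300, 400, 500]
achieved = [100, 199, 296, 310, 305]
print(saturation_point(desired, achieved))  # → 300
```

A tolerance is needed because, as in Figure 9, the achieved rate matches the desired one only approximately even below saturation.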

**Figure 9.** Desired/real request rate for 5000 connections on the fog node and 1000 on the cloud node.

### *5.2. Performance of the Decentralized Database*

In order to estimate the throughput of the selected decentralized database, OrbitDB nodes were deployed locally (on fog nodes) and remotely (in the cloud). In these scenarios, the average time required by an OrbitDB node to process each REST API request was measured. For the sake of fairness, the tests were performed with four different payload sizes to evaluate their effect on network delay (for each payload size, the time required to process 1000 requests was averaged).

The obtained results are shown in Figures 10 and 11. As can be expected, the larger the payload, the longer the response time and the lower the request rate. Moreover, it can be observed that the cloud-based OrbitDB node is clearly slower despite being more powerful than the fog node. Even in the worst evaluated case (i.e., the cloud node with a 4 KB payload), in which the decentralized node can only handle two requests per second, it is still able to process and respond to each request in less than half a second, which seems quick enough for most glucose monitoring applications.
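Note that the average response time and the request rate in Figures 10 and 11 are two views of the same measurement: for sequential requests, the sustainable rate is simply the inverse of the mean response time. A minimal sketch (the per-request timings below are hypothetical, chosen to resemble the worst case mentioned above):

```python
def summarize(times_s):
    """Average the per-request response times (in seconds) and
    derive the implied sequential request rate (req/s)."""
    avg = sum(times_s) / len(times_s)
    return avg, 1.0 / avg

# Hypothetical timings for a 4 KB payload on the cloud node:
# each request answered in just under half a second.
times = [0.44, 0.46, 0.45, 0.43, 0.47]
avg, rate = summarize(times)
print(f"avg = {avg:.3f} s, rate = {rate:.2f} req/s")
# prints: avg = 0.450 s, rate = 2.22 req/s
```

This is how a sub-half-second response time translates into the roughly two requests per second observed for that configuration.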

**Figure 10.** Average request rate for OrbitDB when running on fog and cloud nodes.

**Figure 11.** Average response time of OrbitDB when running it on fog and cloud nodes.

It is also worth noting that the difference in response time between the fog node and the cloud increases as the payload size grows, mainly due to the transmission time through the network (i.e., although the processing time required by the OrbitDB node remains constant, the time required to exchange the request payload increases). In fact, during the experiments, the average round-trip time (calculated using the Linux `ping` command, which makes use of 56-byte packets) was 0.859 ms for the fog node and 45.920 ms for the cloud, which makes a significant difference.
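The average round-trip time can be read directly from the summary line that Linux `ping` prints (`rtt min/avg/max/mdev = .../.../.../... ms`). A small parsing sketch, assuming that output format (the min/max/mdev values in the sample are illustrative; only the average matches the fog-node figure reported above):

```python
import re

def avg_rtt_ms(ping_output):
    """Extract the average RTT (ms) from the summary line printed
    by Linux ping: 'rtt min/avg/max/mdev = a/b/c/d ms'."""
    m = re.search(r"= [\d.]+/([\d.]+)/", ping_output)
    return float(m.group(1)) if m else None

# Illustrative summary line for the fog node:
sample = "rtt min/avg/max/mdev = 0.512/0.859/1.204/0.113 ms"
print(avg_rtt_ms(sample))  # → 0.859
```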

Figures 12 and 13 show the results of a second test, which measured the performance of the decentralized database when carrying out 2000 consecutive write operations on an OrbitDB node running on a fog node and on a cloud node. For the sake of fairness, the data in the Figures were obtained using the official OrbitDB benchmarking scripts [85], which compute the average throughput of the OrbitDB node every 10 s.

While Figure 12 shows that the throughput of the fog OrbitDB node oscillates between 3.7 and 4.5 write requests per second, the cloud node throughput shown in Figure 13 ranges between 3.5 and 6 requests per second. This means that, although the fog node responds faster than the cloud node, its hardware is less powerful, so it is not able to process as many requests per second. However, it must be noted that, both in the fog and in the cloud scenarios, the average throughput oscillates continuously due to different factors, such as the computational and network load (i.e., the network is actually shared with other users).
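The windowed-throughput metric used in Figures 12 and 13 can be computed from the completion timestamps of the write operations: count the operations that finish in each 10 s window and divide by the window length. A sketch with synthetic timestamps (not the measured data):

```python
def windowed_throughput(timestamps_s, window_s=10):
    """Average ops/s in consecutive `window_s`-second windows,
    given the completion time of each operation (seconds since
    the start of the test)."""
    if not timestamps_s:
        return []
    n_windows = int(max(timestamps_s) // window_s) + 1
    counts = [0] * n_windows
    for t in timestamps_s:
        counts[int(t // window_s)] += 1
    return [c / window_s for c in counts]

# Synthetic run: 4 writes/s for the first 10 s, then 5 writes/s.
ts = [i / 4 for i in range(40)] + [10 + i / 5 for i in range(50)]
print(windowed_throughput(ts))  # → [4.0, 5.0]
```

Averaging per window, rather than over the whole run, is what exposes the throughput oscillations visible in both Figures.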

**Figure 12.** Performance of the fog OrbitDB node executing 2000 queries.

**Figure 13.** Performance of the cloud OrbitDB node executing 2000 queries.

A third test was carried out in order to determine the performance of an OrbitDB node when carrying out two main operations: fetching content from connected peers in the swarm and replicating content to other connected peers (both operations are orchestrated automatically by OrbitDB). Figure 14 shows the average throughput every 10 s when performing 2000 fetching and replication operations (from a fog node to a cloud node). Despite the observed oscillations, it can be concluded that the average fetching throughput is lower than the replication throughput, due to the complexity of the latter operation.

**Figure 14.** Performance in OrbitDB fetching and replication operations.
