### *Proceedings* **Shiny Dashboard for Monitoring the COVID-19 Pandemic in Spain †**

#### **Carlos Fernandez-Lozano 1,2,\* and Francisco Cedron 1,2**


Published: 20 August 2020

**Abstract:** Real-time monitoring of events such as the recent COVID-19 pandemic, as well as the visualization of the effects produced by its expansion, has highlighted the need to join forces in fields already used to working hand in hand, such as medicine, biology and information technology. Our dashboard is developed in R and relies on the Shiny package to generate an attractive visualization tool: COVID-19 Spain automatically produces daily updates from official sources (Carlos III Research Institute and the Ministry of Health, Consumer Affairs and Welfare) on cases, deaths, recoveries, ICU admissions and accumulated daily incidence. In addition, it shows on a georeferenced map the evolution of active, new and accumulated cases by autonomous community, allowing users to travel in time from the origin of the series to the last available day; this makes it possible to visualize the expansion of infections and serves as a visual aid for epidemiological studies.

**Keywords:** COVID-19; R; Shiny; monitoring

#### **1. Introduction**

The pandemic caused by COVID-19 has highlighted concepts such as reproducibility and the interactive publication of results for real-time monitoring of an event, two of today's research battlefields, since most of the models proposed in scientific publications are not easily accessible, analyzable or reproducible. Since the end of 2019, once it became known that uncontrolled outbreaks of infection by a coronavirus severely affecting the respiratory system were occurring, global interest in monitoring this infection led multiple research groups to search for methods and systems that, in addition to predicting the evolution of the pandemic, would allow it to be visualized as simply as possible. As of 20 July 2020, the World Health Organization (WHO) had recorded more than 14.3 million confirmed cases and more than 600,000 deaths worldwide, of which more than 260,000 confirmed cases and more than 28,000 deaths correspond to Spain. These overwhelming figures show the seriousness of the pandemic experienced worldwide. In Spain, for example, the saturation of the health system led in mid-March to the declaration of a State of Alarm throughout the country, preventing the free movement of citizens and closing the borders. It is precisely this situation that led to the development of a dashboard using R [1] and Shiny [2].

#### **2. Results**

The website is available at https://covid19.citic.udc.es and was created with the initial objective of serving as a visual aid for following the spread of the pandemic in Spanish territory. To this end, the dashboard offers a dynamic and interactive map of the country's evolution (Figure 1), which allows the data to be displayed from the beginning of the series.

**Figure 1.** Overview of the shiny dashboard.

The dashboard allows the day-to-day evolution of the pandemic in Spain to be analyzed by Autonomous Community and variable of interest (Figure 2a), or the degree of infection to be explored by population pyramid (Figure 2b), using the ggplot2 [3] and plotly [4] libraries.

**Figure 2.** Interactive pandemic progress charts. (**a**) Accumulated cases (log10); (**b**) Confirmed counts by age range.

The deployment was carried out in a Docker container, with a Swarm orchestrator in charge of balancing the load to avoid saturation of the dashboard under a high number of simultaneous accesses.
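A deployment of this kind can be sketched with a Compose file handed to Docker Swarm. The image name, replica count and port below are illustrative assumptions, not the authors' actual configuration:

```yaml
version: "3.8"
services:
  covid19-dashboard:
    image: covid19-shiny:latest   # hypothetical image name for the Shiny app
    ports:
      - "3838:3838"               # default Shiny Server port
    deploy:
      replicas: 3                 # Swarm balances requests across the replicas
      restart_policy:
        condition: on-failure
```

Deployed with `docker stack deploy`, Swarm's built-in routing mesh distributes incoming connections across the replicas, which is what prevents a single container from saturating under simultaneous accesses.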

#### **3. Conclusions**

A dynamic and interactive web dashboard has been developed to visualize the evolution of the COVID-19 infection at the national level, also disaggregated by autonomous community.

**Funding:** This project is supported by the General Directorate of Culture, Education and University Management of Xunta de Galicia (Ref. ED431G/01, ED431D 2017/16), Competitive Reference Groups (Ref. ED431C 2018/49).

**Acknowledgments:** Technical support from CITIC and UDC, especially from Carlos J. Escudero and Alejandro Mosteiro.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Proceedings* **Decentralized P2P Broker for M2M and IoT Applications †**

#### **Iván Froiz-Míguez 1,2 , Paula Fraga-Lamas 1,2,\* and Tiago M. Fernández-Caramés 1,2,\***


Published: 20 August 2020

**Abstract:** The recent increase in the number of connected IoT devices, as well as the heterogeneity of the environments where they are deployed, has increased the complexity of Machine-to-Machine (M2M) communication protocols and technologies. In addition, the hardware used by IoT devices has become more powerful and efficient. Such enhancements have made it possible to implement novel decentralized computing architectures like the ones based on edge computing, which offload part of the central server processing by using multiple distributed low-power nodes. In order to ease the deployment and synchronization of decentralized edge computing nodes, this paper describes an M2M distributed protocol based on Peer-to-Peer (P2P) communications that can be executed on low-power ARM devices. In addition, this paper proposes to make use of brokerless communications through a distributed publication/subscription protocol. Since information is stored in a distributed way among the nodes of the swarm and each node can implement a specific access control system, the proposed system is able to make use of write-access mechanisms and encryption for the stored data so that the rest of the nodes cannot access sensitive information. In order to test the feasibility of the proposed approach, a comparison with a Message Queuing Telemetry Transport (MQTT)-based architecture is performed in terms of latency, network consumption and performance.

**Keywords:** IoT; edge computing; M2M; distributed computing; IPFS; MQTT; P2P

#### **1. Introduction**

The growing number of Internet of Things (IoT) devices generates a massive amount of data, which has driven the adoption of communication paradigms that go beyond traditional client-server schemes. One such paradigm is edge computing [1], which is based on distributing the computing load among different IoT nodes, thus moving part of the processing from the cloud to the edge of the network and providing lower latency, improved response times and better bandwidth availability. In addition, recent advances in hardware enable creating more powerful and less power-hungry devices. Thus, the latest IoT devices can handle more complex tasks than simple data storage and device-to-device communications.

This evolution fosters the development of new distributed computing strategies like the one described in this paper. The proposed strategy is completely distributed, in contrast to traditional edge computing approaches, which provide a hybrid environment with distributed computing and a central server (i.e., a cloud). Not delegating information to a central server has become increasingly important, as such a server is a single point of failure and a potential source of data leaks. Therefore, the proposed solution makes use of a fully distributed Machine-to-Machine (M2M) communications protocol whose performance is compared with that of Message Queuing Telemetry Transport (MQTT) [2], currently one of the most popular M2M protocols.

#### **2. Design and Implementation**

Figure 1 shows the proposed communications architecture. In such an architecture, a private swarm is a set of peers that belong to the IoT system. Each peer is a device that manages the communications distributed among the different sensor nodes and provides persistent storage for the data gathered from its edge computing-based network. The communication between a user and the edge computing-based network is carried out within the same Local Area Network (LAN) through a REST API.
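The LAN-facing REST API of a peer can be sketched as a small HTTP service. The `/readings` route and the JSON fields below are illustrative assumptions (the paper does not specify the actual endpoints), and an in-memory map stands in for the peer's persistent storage:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Reading is a hypothetical sensor sample stored by a peer.
type Reading struct {
	Sensor string  `json:"sensor"`
	Value  float64 `json:"value"`
}

// latest is an in-memory stand-in for the peer's persistent,
// OrbitDB-backed storage.
var latest = map[string]Reading{
	"temp-01": {Sensor: "temp-01", Value: 21.5},
}

// encodeReadings renders the stored samples as JSON.
func encodeReadings() ([]byte, error) {
	return json.Marshal(latest)
}

// handleReadings serves the stored samples over HTTP.
func handleReadings(w http.ResponseWriter, r *http.Request) {
	b, err := encodeReadings()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write(b)
}

func main() {
	// Spin up a throwaway server and query it, as a user on the
	// same LAN would query a peer.
	srv := httptest.NewServer(http.HandlerFunc(handleReadings))
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/readings")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}
```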

**Figure 1.** Communications architecture of the proposed system.

The devices make use of the Inter-Planetary File System (IPFS) [3] to implement a decentralized file system that provides better performance than HTTP when managing large amounts of data. Moreover, the devices use an experimental publication/subscription protocol called PubSub for M2M communications.
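A node can publish to a PubSub topic through the HTTP API of its local go-ipfs daemon. The sketch below assumes the default local API address and the endpoint path used by older go-ipfs releases (newer Kubo versions expect the topic multibase-encoded, so the exact request shape depends on the daemon version):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// apiBase is the default local go-ipfs HTTP API address (an assumption).
const apiBase = "http://127.0.0.1:5001/api/v0"

// pubsubPubURL builds the publish endpoint URL for a topic.
func pubsubPubURL(topic string) string {
	return fmt.Sprintf("%s/pubsub/pub?arg=%s", apiBase, url.QueryEscape(topic))
}

// publish sends one message to a topic via the daemon's HTTP API.
func publish(topic, msg string) error {
	resp, err := http.Post(pubsubPubURL(topic), "text/plain", strings.NewReader(msg))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("publish failed: %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(pubsubPubURL("sensors/temp"))
	// publish("sensors/temp", "21.5") // requires a running daemon
}
```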

To implement the proposed architecture, a Raspberry Pi was used as a node. It runs a go-ipfs instance with the PubSub function enabled. For persistent storage, OrbitDB (a distributed database that runs on top of IPFS) is used through a Go port that offers better performance than the original JavaScript version [4]. In addition, it is possible to communicate with the system via an HTTP REST API to perform actions and retrieve the obtained results.

As an edge device, a Raspberry Pi Zero (RPi Zero) was used to receive measurements from different sensors via BLE or WiFi. Thus, the RPi Zero is in charge of the distributed storage and of the M2M communications with other edge devices.

#### **3. Experiments**

To determine the performance of the system in terms of latency and throughput, different tests were carried out, and the obtained results were compared with the ones provided by an MQTT broker running in the cloud: an Eclipse Mosquitto broker was deployed in the cloud while a client node (an RPi Zero) published messages. In a similar way, an IPFS node hosted in the same cloud acted as a topic subscriber while the client node sent messages. The tests simulated the publication of 10 messages from 10 different clients. The obtained latencies are shown in Table 1.
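The latency test above can be mirrored by a generic measurement harness: several concurrent clients each publish a number of messages, and the per-message round-trip time is averaged. The `publish` callback is a stand-in; in the actual experiment it would be an MQTT or IPFS PubSub publish:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// measure runs `clients` goroutines publishing `msgs` messages each
// and returns the mean observed per-message latency.
func measure(clients, msgs int, publish func(string) error) (time.Duration, error) {
	var mu sync.Mutex
	var total time.Duration
	var firstErr error
	var wg sync.WaitGroup
	for c := 0; c < clients; c++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for m := 0; m < msgs; m++ {
				start := time.Now()
				err := publish(fmt.Sprintf("client-%d-msg-%d", id, m))
				elapsed := time.Since(start)
				mu.Lock()
				if err != nil && firstErr == nil {
					firstErr = err
				}
				total += elapsed
				mu.Unlock()
			}
		}(c)
	}
	wg.Wait()
	return total / time.Duration(clients*msgs), firstErr
}

func main() {
	// Stand-in publisher with a fixed simulated network delay.
	fake := func(string) error { time.Sleep(time.Millisecond); return nil }
	mean, err := measure(10, 10, fake)
	fmt.Println("mean latency:", mean, "err:", err)
}
```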

**Table 1.** Latency comparison between MQTT (left) and IPFS PubSub (right).


In addition, the throughput of OrbitDB was measured by making insertions in an EventLog and measuring the response times. Table 2 shows the obtained results.
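The shape of this measurement is a series of timed appends to an append-only log. The sketch below uses an in-memory stand-in for the log rather than the real OrbitDB EventLog API, so the structure of the benchmark is what matters here, not the absolute numbers:

```go
package main

import (
	"fmt"
	"time"
)

// eventLog is an in-memory stand-in for an OrbitDB EventLog.
type eventLog struct {
	entries [][]byte
}

// add appends one entry to the log.
func (l *eventLog) add(e []byte) {
	l.entries = append(l.entries, e)
}

// benchAppends performs n timed insertions and returns the total
// elapsed time, mirroring the response-time measurement above.
func benchAppends(l *eventLog, n int) time.Duration {
	start := time.Now()
	for i := 0; i < n; i++ {
		l.add([]byte(fmt.Sprintf("entry-%d", i)))
	}
	return time.Since(start)
}

func main() {
	l := &eventLog{}
	fmt.Println("1000 appends took:", benchAppends(l, 1000))
}
```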


**Table 2.** Performance of an EventLog of OrbitDB.

#### **4. Conclusions**

The proposed decentralized brokerless system offers a good trade-off between performance, security and reliability. Although MQTT provides lower latency (mainly because IPFS PubSub was not designed with M2M communications in mind), its centralized architecture is prone to security issues that can be easily tackled by the proposed system.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

