*Proceedings*  **Numerical Model of a 28-GHz Frequency Diverse Array Antenna †**

#### **Marc Bernice Angoue Avele \*, Julio Brégains, Roberto Maneiro, José A. García-Naya and Luis Castedo**

CITIC Research Center, Department of Computer Engineering, Universidade da Coruña, 15071 A Coruña, Spain; julio.bregains@udc.es (J.B.); roberto.maneiro@udc.es (R.M.); jagarcia@udc.es (J.A.G.-N.); luis.castedo@udc.es (L.C.)

**\*** Correspondence: avele.marc@udc.es; Tel: +34-644435147

† Presented at the 3rd XoveTIC Conference, A Coruña, Spain, 8–9 October 2020.

Published: 22 August 2020

**Abstract:** In this work we make use of the frequency diverse array (FDA) concept, whose design is based upon a frequency increment across the antenna elements to generate a beam steering that is a function of angle, time and range. For a possible use of this technique in 5G detection systems, a 28-GHz FDA numerical model, designed with the help of a software tool, is analyzed. Some practical conclusions are drawn from the presented results.

**Keywords:** frequency diverse array; 5G; numerical simulation

#### **1. Introduction**

The next generation of cellular networks (5G) is evolving beyond the mobile Internet toward the Internet of Things (IoT), with the expectation of communications being available everywhere. 5G is transforming the wireless telecommunications arena, paving the way for networks that will include more devices and enable faster communications through higher speeds. Presumably, device technologies will have to adapt to this new approach through the adopted frequency bands. Among the many candidate frequency spectra, the millimeter-wave band has been considered a good choice for 5G cellular communications, since its increased channel bandwidth provides higher data rates than lower frequency bands [1]. A promising antenna technology for exploiting millimeter-wave communications, namely the frequency diverse array (FDA), was introduced in [2,3].

FDA is a technology that employs a frequency increment across the array elements. With such a technology, beam steering is achieved by applying a linear phase progression across the aperture, which creates both a deformation of the antenna pattern during steering and a limited scanning range.

#### **2. System Description of the Numerical 28-GHz FDA Model**

Let us consider an *N*-element FDA [1,2] whose main axis is aligned along the *z* axis and whose inter-element distance between contiguous elements is a constant value *d* (see Figure 1). As a case study, we will build the array from circular patches whose normals point towards the *y* axis and that, for the sake of simplicity, will be considered to radiate a cos²*ψ*-type power pattern [4]. To further simplify the analysis, the static excitations of the elements will be uniform (i.e., *I<sub>n</sub>* = 1, for *n* = 0, 1, …, *N* − 1). In addition, the *yz* plane (*φ* = 90°, see Figure 1) will be taken for representing the main radiation pattern, with *ψ* varying from −90° to 90°. Under those assumptions, the beam power pattern at a point *P*(*r*, *ψ*, 90°) will be proportional to [1,2]:

$$P(r, \psi, d, \lambda\_c, \Delta f) = \left| \cos(\psi) \sum\_{n=0}^{N-1} e^{j2\pi n \left( r \frac{\Delta f}{c} - \frac{d}{\lambda\_c} \sin \psi \right)} \right|^2 \tag{1}$$

where *c* is the speed of light in vacuum, *λ<sub>c</sub>* = *c*/*f<sub>c</sub>* is the wavelength at the carrier (working) frequency *f<sub>c</sub>*, *r* is the range (distance) measured from the center of the array coordinate system (see Figure 1) and Δ*f* is the frequency shift that produces the range-dependent behavior of the FDA pattern [1,2].

**Figure 1.** Linear FDA composed of circular patches.
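The pattern of Equation (1) is straightforward to evaluate numerically. Below is a minimal NumPy sketch using the parameter values of Section 3 (*N* = 10, *f<sub>c</sub>* = 28 GHz, *d* = *λ<sub>c</sub>*/2); the function names are ours, and the paper's own computations were carried out in Mathematica:

```python
import numpy as np

# Parameters from Section 3: N = 10 elements, f_c = 28 GHz, d = lambda_c / 2.
C = 3e8            # speed of light in vacuum (m/s)
FC = 28e9          # carrier frequency (Hz)
LAMBDA_C = C / FC  # carrier wavelength (m)

def fda_power(r, psi, n_elems=10, d=LAMBDA_C / 2, delta_f=15e3):
    """Beam power pattern of Eq. (1) at range r (m) and angle psi (rad)."""
    n = np.arange(n_elems)
    phase = 2 * np.pi * n * (r * delta_f / C - (d / LAMBDA_C) * np.sin(psi))
    af = np.sum(np.exp(1j * phase))       # array factor
    return np.abs(np.cos(psi) * af) ** 2  # cos^2-type element power pattern

def npd_db(powers):
    """Normalized power density in dB w.r.t. the overall maximum."""
    p = np.asarray(powers, dtype=float)
    return 10 * np.log10(p / p.max())
```

As a sanity check, at broadside (*ψ* = 0) with Δ*f* = 0 the pattern reaches its maximum value *N*² = 100, independently of the range.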

#### **3. Results and Future work**

Let the nominal scan angle (*ψ*) be zero, indicating that the beam is not intentionally scanned by means of a linear phase progression across the array. As a specific case, we consider a 10-element cos²-FDA whose inter-element distance is *d* = *λ<sub>c</sub>*/2, radiating at *f<sub>c</sub>* = 28 GHz, and with Δ*f* = 15 kHz.

Figure 2 compares the normalized power density (NPD, in dB) obtained with an FDA composed of 10 isotropic elements, with all the parameters set to the above-mentioned values (Figure 2a), against the corresponding NPD of the cos²-FDA presented in this work (Figure 2b), both with −90° ≤ *ψ* ≤ 90° and *r* ranging from 20 to 60 km. The NPD is obtained by taking the base-10 logarithm of the power pattern normalized with respect to the overall maximum radiation value within the considered angle *ψ* and range *r* coverages. Those plots were obtained with the help of the Mathematica® software tool [5].

**Figure 2.** (**a**) Normalized Power Density (NPD) of a 10-element 28-GHz FDA with isotropic elements. (**b**) NPD of a 10-element 28-GHz cos²-FDA.

The beamforming effect at a given range, for both the isotropic FDA (Figure 3a) and the cos²-FDA (Figure 3b), is obtained by changing the frequency shift. For those plots, a constant range *r* = 20 km was taken for three values of Δ*f* (10, 15 and 20 kHz).

It can be seen from the above figures that the element power pattern (cos²*ψ* in our case) not only affects, as is well known, the pattern sidelobes (lowering them at angles deviating from broadside), but also the steering capabilities of the FDA, reducing the main lobe level as both the range *r* (see Figure 2) and the frequency shift Δ*f* (see Figure 3) change. These effects are by no means negligible; therefore, for practical cases, further studies must be performed to correct them.
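The main-lobe reduction described above can be verified numerically: since the cos²-FDA pattern is the isotropic pattern weighted by cos²*ψ* ≤ 1, its peak over the angular coverage never exceeds the isotropic one. A short check under the same assumed parameters (*N* = 10, *f<sub>c</sub>* = 28 GHz, *d* = *λ<sub>c</sub>*/2, Δ*f* = 15 kHz; this is our sketch, not the paper's Mathematica code):

```python
import numpy as np

C, FC = 3e8, 28e9
LAMBDA_C = C / FC

def fda_pattern(r, psi, element, n_elems=10, d=LAMBDA_C / 2, delta_f=15e3):
    """Eq. (1) with a selectable element pattern: 'iso' or 'cos2'."""
    n = np.arange(n_elems)
    phase = 2 * np.pi * n * (r * delta_f / C - (d / LAMBDA_C) * np.sin(psi))
    gain = 1.0 if element == "iso" else np.cos(psi)  # element amplitude
    return np.abs(gain * np.sum(np.exp(1j * phase))) ** 2

# Peak power over -90 deg <= psi <= 90 deg, for three ranges.
angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
peaks = {elem: {r: max(fda_pattern(r, a, elem) for a in angles)
                for r in (20e3, 40e3, 60e3)}
         for elem in ("iso", "cos2")}
```

At every range considered, `peaks["cos2"][r]` stays at or below `peaks["iso"][r]`, which is the main-lobe-level reduction visible in Figure 2.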


**Figure 3.** Impact of the frequency offset on the main beams. (**a**) NPD of the FDA with isotropic elements. (**b**) NPD of the cos²-FDA. In both cases, *r* = 20 km, and Δ*f* = 10 kHz: dotted line; Δ*f* = 15 kHz: continuous line; Δ*f* = 20 kHz: dashed line.

The presented numerical simulations show that the use of FDAs at a 5G working frequency (28 GHz) is feasible. However, the above considerations indicate that some corrections are needed, even in the general case (i.e., at frequencies not necessarily equal to the one chosen in this work); this could be addressed as future work. Another research line could focus on a full-wave simulation of the presented model with a proper electromagnetic simulation software tool.

**Author Contributions:** Conceptualization and methodology, M.B.A.A. and J.B.; software, J.B.; investigation, R.M. and J.A.G.-N.; resources, L.C.; writing, M.B.A.A.; funding acquisition, L.C.

**Funding:** This work has been funded by the Xunta de Galicia (ED431G2019/01), the Agencia Estatal de Investigación of Spain (TEC2016-75067-C4-1-R, RED2018-102668-T, PID2019-104958RB-C42) and ERDF funds of the EU (FEDER Galicia & AEI/FEDER, UE).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Proceedings* **Feature Selection in Big Image Datasets †**

#### **J. Guzmán Figueira-Domínguez 1,\*, Verónica Bolón-Canedo <sup>2</sup> and Beatriz Remeseiro <sup>3</sup>**


Published: 24 August 2020

**Abstract:** In computer vision, current feature extraction techniques generate high-dimensional data. Both convolutional neural networks and traditional approaches, such as keypoint detectors, are used as extractors of high-level features. However, the resulting datasets have grown in the number of features, leading to long training times due to the curse of dimensionality. In this research, several feature selection methods were applied to these image features through big data technologies. Additionally, we analyzed how image resolutions may affect the extracted features and the impact of selecting only the most relevant features. Experimental results show that a substantial reduction of the extracted features provides classification results similar to those obtained with the full set of features and, in some cases, outperforms the results achieved using broad feature vectors.

**Keywords:** feature selection; image feature extraction; big data; computer vision

#### **1. Introduction**

Image datasets have grown not only in the number of samples, but also in the number of features that describe them. At this point, it could be reasonable to expect that having more features would provide more information and better results. However, this does not happen, due to the so-called *curse of dimensionality* [1]. In this context, feature selection [2] contributes to the scalability of machine learning algorithms by finding the most relevant properties of the images, thus decreasing training and prediction times. However, the efficiency of feature selection methods drastically diminishes when the dataset dimension grows; hence, applying big data technologies may make it feasible to handle larger datasets. This article addresses the impact of feature selection on image classification using different feature extraction methods. In particular, this research focuses on the use of filter methods for feature selection with big data technologies.
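Filter methods rank each feature independently of any classifier. As an illustration, the following NumPy sketch implements the χ² relevance score for non-negative features (observed per-class feature counts against the counts expected under class priors, the same statistic used by common machine learning libraries); the function names and toy data are ours, not part of the authors' Spark implementation:

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-squared relevance of each non-negative feature w.r.t. labels y.

    Observed counts: per-class sums of each feature column.
    Expected counts: class priors times the feature's total sum.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    expected = np.outer(priors, X.sum(axis=0))
    return ((observed - expected) ** 2 / expected).sum(axis=0)

def select_top_k(X, y, k):
    """Filter selector: keep the k highest-scoring feature columns."""
    idx = np.argsort(chi2_scores(X, y))[::-1][:k]
    return np.sort(idx)

# Toy example: feature 0 is perfectly class-dependent, feature 1 is constant.
X = [[5, 1], [5, 1], [0, 1], [0, 1]]
y = [0, 0, 1, 1]
```

Here `chi2_scores(X, y)` assigns a high score to feature 0 and zero to the uninformative feature 1, so `select_top_k(X, y, 1)` keeps only column 0.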

#### **2. Materials and Methods**

This work proposes a pipeline for image classification composed of three main steps: image feature extraction, feature selection and classification. The first step has been implemented as a *Python* package using the Keras, OpenCV and scikit-image libraries, whereas the following steps were developed as an *Apache Spark* application that contains independent jobs for each of them. Additionally, the extracted features have been stored as *Kaggle datasets*.

1. Feature extraction: In this work, image feature extraction was performed in order to transform image datasets into columnar feature datasets. The techniques applied here are: *bag of features* methods based on feature detection algorithms such as SIFT [3], SURF [4] and KAZE [5]; *local binary pattern* (LBP) methods [6]; and *convolutional neural networks* (ConvNets) used as feature extractors through architectures such as VGG, ResNet and DenseNet.
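Among the classical extractors listed above, LBP is simple enough to sketch: each pixel is encoded by comparing its eight neighbors with the central value, and the image is summarized by the histogram of the resulting 8-bit codes, which becomes the feature vector. A minimal single-scale NumPy version (illustrative only; the authors' pipeline relies on the OpenCV/scikit-image implementations):

```python
import numpy as np

# Clockwise 8-neighborhood offsets, starting at the top-left neighbor.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_codes(img):
    """8-bit LBP code for every interior pixel of a 2-D grayscale image."""
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(OFFSETS):
        neighbor = img[1 + dy : img.shape[0] - 1 + dy,
                       1 + dx : img.shape[1] - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalized histogram: the LBP feature vector of the image."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256)
    return hist / hist.sum()
```

On a constant image every neighbor equals the center, so all codes are 255 and the histogram concentrates in that single bin; textured regions spread mass across many bins.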


In order to carry out the experiments of this research, two datasets were employed: the *ImageNet* dataset, currently hosted on the *Kaggle* platform, which contains 1,281,167 hand-labeled images belonging to 1000 object categories; and the *Tiny ImageNet* dataset, released as a subset of the original ImageNet, containing very low-resolution images from a 200-class subset.

#### **3. Results**

Regarding the results from *Tiny ImageNet*, we noticed that the accuracy values provided by features extracted using *bag of features* and *LBP* were quite poor. However, the results supplied by features extracted using ConvNets and applying up to 50% of dimensionality reduction with *ReliefF* (0.6451 top-5 accuracy), *χ*<sup>2</sup> (0.6422) or *mRMR* (0.6382) outperformed the results without feature selection (0.6241).

With respect to the experiments carried out with the *ImageNet* dataset, features extracted through traditional approaches showed better results with these higher-resolution images. Experiments with features extracted using *bag of features* over the KAZE *keypoint detector*, applying up to 66% of dimensionality reduction with methods such as *mRMR* (0.7674 top-5 accuracy), *χ*<sup>2</sup> (0.7528) or *ReliefF* (0.7442), showed better results than those obtained without the selection step (0.7425).

Finally, the accuracy results using features extracted with a ConvNet such as *VGG-19* combined with feature selection methods were quite close to those achieved by VGG-19 itself (0.7158 top-1 accuracy and 0.8996 top-5 accuracy). Applying a 50% reduction with *χ*<sup>2</sup> (0.6715 top-1 accuracy and 0.8450 top-5 accuracy) or a 90% reduction through *mRMR* (0.6554 top-1 accuracy and 0.8143 top-5 accuracy), we notice that the results, using a multi-layer perceptron as the classifier, remain below the baseline. However, if we compare the results achieved with a naive Bayes classifier, the baseline (0.6143 top-5 accuracy) is eventually outperformed: 0.6482 top-5 accuracy when applying a reduction of up to 66% with the *χ*<sup>2</sup> method.

#### **4. Discussion and Conclusions**

Contrasting the experiments carried out with all the feature extractors, we can observe some clear tendencies. When feature selection is applied to features extracted with classical techniques, the results outperform the baseline obtained without dimensionality reduction. In these techniques, salient information about the images is shaped into vectors of a chosen size and, as the results show, this representation may be improved through feature selection. However, when feature selection is applied to *deep features* (i.e., features extracted by pre-trained ConvNets), the results are slightly below the baseline without feature selection. This may be explained by the successive *dropout* layers included in ConvNets, which help to remove meaningless information across the layers and to retain the best high-order features.

Overall, the results show clear evidence that feature selection has a positive impact on the features extracted from both datasets. The accuracy values obtained with most feature subsets are very close to those observed without applying dimensionality reduction and, in some cases, dimensionality reduction techniques even outperform the classification results obtained using all the features provided by the *ConvNet* or *bag of features* extractors. We also remark that different feature selection methods stand out depending on the required percentage of feature reduction, so *the best feature selection method* simply does not exist.

**Funding:** This research has been financially supported in part by European Union FEDER funds, by the Spanish Ministerio de Economía y Competitividad (research project PID2019-109238GB), by the Consellería de Industria of the Xunta de Galicia (research project GRC2014/035), and by the Principado de Asturias Regional Government (research project IDI-2018-000176). CITIC as a Research Centre of the Galician University System is financed by the Consellería de Educación, Universidades e Formación Profesional (Xunta de Galicia) through the ERDF (80%), Operational Programme ERDF Galicia 2014–2020, and the remaining 20% by the Secretaria Xeral de Universidades (ref. ED431G 2019/01).

**Acknowledgments:** The authors would like to express their gratitude to the CESGA for the provided resources that allowed this research, and the support of NVIDIA Corporation with the donation of Titan Xp GPUs.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Proceedings*  **Developing an Open-Source, Low-Cost, Radon Monitoring System †**

#### **Alberto Alvarellos \* and Juan Ramón Rabuñal**

RNASA-IMEDIR, Computer Science Faculty, University of A Coruna, 15071 A Coruña, Spain; juan.rabunal@udc.es

**\*** Correspondence: alberto.alvarellos@udc.es

† Presented at the 3rd XoveTIC Congress, A Coruña, 8–9 October 2020.

Published: 24 August 2020

**Abstract:** The United States Environmental Protection Agency (USEPA) and the International Agency for Research on Cancer (IARC) have declared radon gas a human carcinogen. Spain has several regions with high radon concentrations, Galicia (northwestern Spain) being one of those with the highest. In this work, we present the development of an open-source, low-cost radon monitoring and alert system. The system has two parts: the devices and the backend. The devices integrate a radon sensor, capable of measuring radon levels every 10 min, and several environmental sensors that measure temperature, humidity, atmospheric pressure, and air pollution. The devices send all the information to the backend, which stores it, exposes it in a web interface, and uses the historical data to predict the radon levels for the following hours. If the radon levels are predicted to exceed the threshold in the next hour, the system issues an alert via several channels (email and MQTT) to the recipients configured for the corresponding device, allowing them to take measures to lower the radon concentration. The results of this work indicate that the system allows the radon levels to be greatly reduced and that developing a low-cost, open-source radon monitoring system is feasible. The system's scalability allows a network of sensors to be created that can help mitigate the health hazard posed by high radon concentrations.

**Keywords:** radon monitoring; IoT; radon alert system; open source; Arduino; Node-RED

#### **1. Introduction**

Radon gas levels are generally high in Spain, and even more so in Galicia, owing to its granite geology, granite being the principal source of radon emissions [1,2]. Several studies carried out in the 1980s highlighted these high radon concentrations [3].

The European Union (EU) has indicated in its guidelines (2013/59/EURATOM [4]) that annual average radon levels in homes and workplaces should not exceed 300 Bq/m³. These guidelines also indicate the need for member states to include in their technical building codes information regarding radon detection and mitigation. A system that can measure radon levels and help mitigate high values or, even better, avoid future high radon levels via predictions, could meet these requirements. In this work, we present the development of such a system. It comprises several monitoring devices and a backend that stores the data, predicts radon levels, and issues alerts.

#### **2. Monitoring Devices**

In this work, we designed and developed the monitoring devices. They use an RD200M sensor for measuring radon levels, a BME280 sensor for measuring relative humidity, barometric pressure and ambient temperature, and a CCS811 gas sensor for monitoring indoor air quality. All these data (radon concentration, temperature, humidity, barometric pressure, and air quality) are collected and processed by a processing unit based on the Arduino MKR family (with an ARM Cortex-M0+ CPU). Figure 1a shows the schema of the devices. To integrate all the sensors and the devices' processing unit, we designed an electronic integration board and a 3D-printed case (see Figure 1b). The processing unit samples each sensor, processes the data, shows them on a display, and sends them to the backend every 10 min (the sampling period recommended by the radon sensor manufacturer to obtain accurate measurements). The devices send the data using two different communication technologies: WiFi and Sigfox. We use the WiFi devices in locations where a WiFi network is available, and the Sigfox devices when WiFi is not available.
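Sigfox uplink messages carry at most 12 bytes of payload, so the readings must be packed compactly before transmission. The following Python sketch shows one possible encoding; the field layout and scale factors are our illustrative assumptions, not the format of the actual Arduino firmware:

```python
import struct

# Illustrative 10-byte layout (fits Sigfox's 12-byte uplink limit):
# radon (Bq/m^3, uint16), temperature (0.01 degC steps, int16),
# relative humidity (0.01 % steps, uint16), pressure (Pa, uint32).
FORMAT = ">HhHI"  # big-endian, no padding

def pack_readings(radon_bq_m3, temp_c, rel_hum_pct, pressure_pa):
    """Encode one set of sensor readings into a fixed-size payload."""
    return struct.pack(FORMAT, round(radon_bq_m3),
                       round(temp_c * 100), round(rel_hum_pct * 100),
                       round(pressure_pa))

def unpack_readings(payload):
    """Decode a payload back into (radon, temp, humidity, pressure)."""
    radon, temp, hum, press = struct.unpack(FORMAT, payload)
    return radon, temp / 100, hum / 100, press
```

Fixed-point scaling (hundredths for temperature and humidity) keeps the payload small while preserving the precision the BME280-class sensors actually deliver.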

**Figure 1.** (**a**) Monitoring devices sketch. (**b**) Final device assembled in a 3D-printed enclosure.

#### **3. System Architecture**

The backend stores the data sent by the devices, exposes it in a web interface and, for each device, predicts the radon level for the following hours using its historical data (see Figure 2). If the backend predicts a high radon level for a given device, it sends an MQTT message to the topic associated with that device and an email to the addresses registered for it.

**Figure 2.** (**a**) Architecture of the radon detection and alert system. (**b**) System's web interface.


Once the radon concentration level is back below the threshold, the backend sends a message indicating that levels have returned to normal (also via MQTT and email). Currently, the system uses the threshold that 2013/59/EURATOM establishes for issuing the alerts (300 Bq/m³), although it can be configured per device.
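The alert logic described above amounts to a small state machine per device: an alert fires when the predicted level crosses above the threshold, and a back-to-normal notification is sent once the level drops below it again, with no repeated messages in between. A minimal Python sketch (illustrative only; delivery via MQTT and email is abstracted into a callback, and the default threshold follows 2013/59/EURATOM):

```python
class RadonAlertMonitor:
    """Per-device alert state machine with a configurable threshold."""

    def __init__(self, notify, threshold_bq_m3=300):
        self.notify = notify          # e.g., wraps MQTT publish + email send
        self.threshold = threshold_bq_m3
        self.alert_active = False

    def update(self, level):
        """Feed one (predicted) radon level; notify only on state changes."""
        if level > self.threshold and not self.alert_active:
            self.alert_active = True
            self.notify("alert", level)
        elif level <= self.threshold and self.alert_active:
            self.alert_active = False
            self.notify("back_to_normal", level)

# Hypothetical usage: record which notifications a level sequence triggers.
events = []
monitor = RadonAlertMonitor(lambda kind, level: events.append(kind))
for level in (120, 280, 350, 420, 290, 150):
    monitor.update(level)
# events == ["alert", "back_to_normal"]
```

Tracking `alert_active` gives the one-alert-per-episode behavior described in the text: sustained high levels do not re-trigger the alert, and the recovery message is sent exactly once.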

#### **4. Results**

We tested the system in a laboratory with naturally high radon levels in two scenarios. In the first scenario (Figure 3a), the alert system was not activated, showing the natural radon level of the testing site. In the second scenario (Figure 3b), the alert system was activated, and a human operator turned an airflow control system on or off with each alert/"back to normal" message.

Comparing these two scenarios, we can see that the system can be satisfactorily used to significantly lower the radon levels.

**Figure 3.** Radon levels for the same location: (**a**) alert system deactivated; (**b**) alert system activated, with the airflow control system operated by a human.

**Funding:** This research received no external funding.

**Acknowledgments:** We would like to thank the staff of the CITEEC's (https://www.udc.es/citeec/) Laboratory of Instrumentation and Intelligent Systems in Civil Engineering for their technical support.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

