*Article* **Design and Validation of a U-Net-Based Algorithm for Star Sensor Image Segmentation**

**Marco Mastrofini, Ivan Agostinelli and Fabio Curti \***

School of Aerospace Engineering, Sapienza University of Rome, 00138 Roma, Italy

**\*** Correspondence: fabio.curti@uniroma1.it

**Abstract:** The present work investigates an artificial intelligence (AI) algorithm for the segmentation of the brightest objects in the field of view (FOV) of night sky images. This task is mandatory for many applications that must focus on the brightest objects of a particular shape, point-like or streak, in an optical sensor image. The algorithm is developed as a dedicated application for star sensors, covering both attitude determination (AD) and onboard space surveillance and tracking (SST) tasks. Indeed, in the former, the brightest objects of most concern are stars, while in the latter they are resident space objects (RSOs). Focusing on these shapes, an AI-based segmentation approach can be investigated. This is carried out by designing, developing and testing a convolutional neural network (CNN)-based algorithm; in particular, a U-Net is used to tackle the problem. A dataset for the design process of the algorithm, network training and tests is created using both real and simulated images. Finally, a comparison with traditional segmentation algorithms is performed, and results are presented and discussed together with the proposal of an electro-optical payload for a small satellite for an in-orbit validation (IOV) mission.

**Keywords:** artificial intelligence; star trackers; resident space objects; convolutional neural network; segmentation; space situational awareness; space surveillance and tracking; optical images; in orbit validation

### **1. Introduction**

In the framework of onboard autonomous star sensor algorithms, star segmentation and accurate centroid estimation are critical problems. The number of actually detected stars and the accuracy of the relative centroid estimates have a huge impact on the success of star-identification-based attitude determination routines [1,2]. Taking into account that star sensor images generally contain several noise sources (stray light noise, single point noise, single-event upsets (SEUs) and so on), good initial processing of the image data is mandatory. In this way, it is possible to improve the success rate of the star identification process [3,4] and the chance of retrieving accurate attitude information from images. A good image segmentation process is also needed for RSO detection, where image noise and the weak nature of an RSO streak can often lead to an incorrect detection of the object and the loss of precious information for the orbit determination (OD) modules.

The process of image segmentation is needed to highlight the useful image information, which is geometrically encoded in the image pixels. Often, not all the pixels in an image are needed, and a process of selection and exclusion is mandatory to focus attention on just the relevant image part. Image segmentation solves this problem by producing a binarized image, often called a mask. Through the element-wise product of this mask with the original image, it is possible to extract photometric and geometrical information about the target pixels. In this case, we will not be interested in the photometric information, but this will be discussed in the next sections.
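As a minimal numpy sketch of this extraction step, the binary mask selects the target pixels' energy values from the original image and zeroes everything else (pixel values here are illustrative):

```python
import numpy as np

# Applying a binary mask to the original image: the element-wise product keeps
# the energy values of the foreground pixels and zeroes everything else.
image = np.array([[10, 200, 15],
                  [12,  11, 180],
                  [ 9,  10,  14]], dtype=float)
mask = np.array([[0, 1, 0],
                 [0, 0, 1],
                 [0, 0, 0]], dtype=float)   # 1 = foreground, 0 = background
foreground = image * mask
```

The two bright pixels survive with their original energies; every background pixel becomes zero.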

**Citation:** Mastrofini, M.; Agostinelli, I.; Curti, F. Design and Validation of a U-Net-Based Algorithm for Star Sensor Image Segmentation. *Appl. Sci.* **2023**, *13*, 1947. https://doi.org/10.3390/app13031947

Academic Editor: Theodore E. Matikas

Received: 29 November 2022; Revised: 15 January 2023; Accepted: 27 January 2023; Published: 2 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Regarding image segmentation, in the last decades several traditional approaches have been proposed, classified according to the element they take into account to discern foreground and background pixels: image histograms, detection of image gradients, detection of object edges and complexity analysis techniques [1].

Among them, commonly used methods are the following:

- global iterative threshold methods;
- local threshold methods;
- Otsu's method.
Each reported traditional method presents limitations. Iterative threshold methods rely on all the energy levels in the global image, and this reliance is their main limitation: a simple strong stray-light scenario can badly affect their predictions. Moreover, when the histogram of a star image is unimodal, the traditional iterative threshold method is time-consuming and cannot reach a suitable segmentation. A local threshold method fails in the same way if the difference between the star signal and the local noise intensity is lower than the margin. The Otsu method, instead, can only provide satisfactory thresholding results if the histogram is neither unimodal nor close to unimodal [10]. Artificial intelligence could be an appealing and useful tool to face these challenging problems, given the recent development and application of AI in image processing.

Star segmentation is a task that has many aspects in common with the segmentation of dim small targets, whose primary goal is to enhance the contrast between the target itself and the background, regardless of its nature; segmentation must be performed in every possible situation (noise, blurring, angular motion of the camera and/or motion of the target).

All these issues also occur in star tracker images. Dim small target segmentation has been studied for both synthetic aperture radar (SAR) and optical images. Jin et al. [11] proposed the application of a lightweight patch-to-pixel (P2P) CNN for ship detection in PolSAR (polarimetric SAR) images. Their approach prefers the use of dilated convolutional layers rather than conventional convolutional layers in order to expand the receptive field without adding any model parameter. Zhao and Jia [12] employed a more general-purpose target segmentation using a CNN on infrared images, testing it on both real and synthetic images; however, their study suffers from the necessity for a large amount of training data to be fed to the network. Fan et al. [13] overcame the issue of large training data by using the MNIST [14] database in such a way that it simulates images having similar properties to the long-range infrared images. Nasrabadi [15] proposed an autonomous target recognition in forward looking infrared (FLIR) images based on different deep CNN architectures and thresholding. However, this approach suffers from a lack of reliability in a real-world scenario. Shi and Wang [16] based their studies on the use of a CNN and a denoising autoencoder network by treating the small targets as background noise and therefore transforming the segmentation task into a denoising task.

The use of the famous biomedical U-Net for small target segmentation tasks has not been broadly investigated as of this writing, but Tong et al. [17] proposed an enhanced asymmetric attention (EAA) U-Net, which employs information extracted in the same layer to focus on the target and cross-layer information to learn higher-level features.

Xue et al. [18,19] investigated a more specialized version of the small target segmentation problem, going deeper into the segmentation of star images by proposing StarNet. This network is particularly complex because its training is carried out in a multistage approach, an aspect that negatively affects the training time. In addition, StarNet is pre-trained on weights taken from the first three stages of VGG16 [20], whose input must be a three-channel image. This forced choice translates into a computationally heavy process that affects the prediction time and could require high-end GPUs. All these aspects seem to make real-time image processing unfeasible. The U-Net proposed in this work tries to overcome this problem by working with mono-channel images and by exploiting a simpler network architecture and training strategy.

The present work is focused on the AI-based image segmentation module of a proposed star-tracker-based payload for attitude determination and extended RSO detection functions. This payload is intended to investigate and validate AI behaviour for onboard optical sensor applications in small satellite missions. This choice can guarantee a high segmentation quality of star tracker images against a variety of noise scenarios without the need for intensive calibration activity. Indeed, it represents a robust processing solution for commercial and cheap star sensors which can be used in small platform missions.

The Discussion section will be devoted to a description of the integrated hardware and software payload (design, optical head specifications, component choice and validation mission). The Materials and Methods section, together with the Results, will cover the design of an AI-based star and RSO segmentation algorithm capable of filtering the faintest objects from the background in several signal-to-noise (SN) level scenarios (stray light noise, ghost noise, SEUs). In particular, this algorithm aims to provide information about the brightest objects within the FOV in order to facilitate star identification and object detection and to reduce the memory storage burden of a star tracker for successive operations. By brightest objects, we mean the ones that stand out most inside an image. Here, the formulation of the segmentation problem is similar to the creation of a saliency map: in the predicted mask, just the most salient objects will appear. This philosophy seems suitable to detect stars and objects in low-SN environments, such as the strong stray light noise that can affect a star sensor image. The algorithm design aims at few modules, to reduce the algorithm size, complexity and hardware implementation difficulties and to provide a reliable star detection product.

The machine learning (ML) world was investigated to take advantage of the neural networks (NNs) capability of learning specific tasks, performing well against unpredictable situations and reducing algorithm complexity for segmentation purposes; this could avoid the problem of algorithm re-calibration to successfully process images in different noise conditions. Moreover, the CNNs are considered for this kind of task. They have the advantage of processing the image directly without iterative steps and without a local approach that would be power- and time-consuming. All of these features are designed along with the capability of resolving objects in strong stray-light environments if a suitable dataset for training is available.

Contributions of this work are as follows:


The work is organized as follows: a brief overview of the CNN and U-Net model is described in the Materials and Methods section, together with the proposed dataset and our proposed segmentation-clustering algorithm scheme. Then, a description of the traditional algorithms used for comparison will follow, along with a description of the considered performance indices. Results of U-Net's training, validation and testing are then reported and compared with traditional algorithms. They are followed by a Discussion section about the obtained results with a payload concept proposal and a possible validation mission. In the end, a Conclusion section summarizes the achievements and future goals.

### **2. Materials and Methods**

To face the segmentation of star tracker images, an ML approach has been considered. This AI area is being extensively explored and applied for image processing due to the great improvements achieved in several tasks. In particular, CNNs have been developed for computer vision algorithms because of their capability of image data processing and information extraction. Many well-known architectures are available, such as VGG-16 and VGG-19 [20], Google's Inception-v3 [21], the U-Net [22], AlexNet [23] and many others. CNNs have shown great performance in many tasks such as object detection and classification, semantic segmentation and pose problems involving both cooperative and non-cooperative targets.

The problem faced in this work is semantic segmentation. Indeed, the goal is to divide the original image pixels into two groups: foreground and background pixels, belonging, respectively, to the white area and the black area of the segmented output. This result is achieved through the application of the U-Net to obtain a final output which contains as white areas just the most salient objects in the sky, avoiding stray light noise and the faintest stars and objects in the background and reducing the overall noise signal in the image. The reason why this is performed through an ML-based algorithm is the CNN capability to handle objectives and targets which are easily understandable for a human being but sometimes very difficult to formulate in a rigorous fashion; segmenting just the stars and RSOs which stand out the most in a night sky image is one of them. This is the goal pursued in this work.

### *2.1. Convolutional Neural Networks*

CNNs are heavily applied whenever complex information extraction from images is required. In a rough approximation, they are stacks of convolution and maxpooling layers. They are capable of learning high- and low-level features of an image regardless of their position and orientation. A great advantage of a CNN is that the weights to be learned are shared between the layers, minimizing the needed memory storage.

The convolutional layer works in the following way: a convolution operation is performed by sliding a small window (typically 3 × 3 pixels) over the height and width of the image and over its color channels. Each window patch is multiplied element-wise by a matrix of learned weights, called the convolution kernel, and summed to produce a feature map. This operation can reduce the image size, which is handled by applying padding, i.e., adding border pixels so that the output size equals the input size. The neurons contained in the network layers are activated through an activation function; ReLU (rectified linear unit), softmax and sigmoid are typically used.

The pooling layer aims to downsample the image in order to optimize the learning of features. In particular, the max pooling layer takes a portion of the image (typically a 2 × 2 window) and substitutes it with its maximum pixel value.
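These two operations can be sketched in plain numpy (the kernel here is an illustrative 3 × 3 averaging filter; in a real CNN the kernel weights are learned during training):

```python
import numpy as np

def conv2d_same(image, kernel):
    """3x3 convolution with zero padding so the output size equals the input."""
    k = kernel.shape[0]          # square kernel with odd side assumed
    pad = k // 2
    padded = np.pad(image, pad, mode="constant")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # element-wise product of the patch with the kernel, then summed
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def max_pool2d(image, size=2):
    """2x2 max pooling: replace each window with its maximum pixel value."""
    h, w = image.shape[0] // size, image.shape[1] // size
    return image[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.random.rand(8, 8)
feat = np.maximum(conv2d_same(img, np.ones((3, 3)) / 9.0), 0.0)  # ReLU activation
pooled = max_pool2d(feat)                                        # 8x8 -> 4x4
```

Padding keeps the feature map at 8 × 8, while the 2 × 2 max pooling halves both dimensions.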

The performance of a CNN is represented by values of loss and accuracy, which are computed during training, validation and test processes. It is desirable to obtain similar accuracies in all these phases to avoid overfitting and generalize network performances against never seen data. There are different ways to overcome this issue. These include L2 weight regularization [24], batch normalization [25], data augmentation [26] and dropout [27].

#### *2.2. U-Net*

The U-Net is a fully convolutional neural network (FCN) which was first used for segmentation of biomedical images and was later adapted to space-based applications such as crater detection [28]. The network architecture in this paper is chosen to be a slightly modified version of the original one as shown in Figure 1, and it has a symmetric encoder–decoder structure: the encoding, downsampling path is a stacked sequence of two ReLU 3 × 3 convolutional layers followed by a 2 × 2 max pooling layer. At each level, the number of filters doubles, reaching its maximum value at the bottom.

The decoding upsampling path contains 2 × 2 up-convolutional layers concatenated with layers coming from the corresponding downsampling level in order to preserve already learned features. The concatenated layers are followed by dropout and a sequence of two 3 × 3 convolutional layers up to the output layer in which a final sigmoid-activated 1 × 1 convolution produces data ready to be binarized. The present U-Net will be fed with 512 × 512 images and their corresponding masks.
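The resulting encoder progression (spatial size halving, filter count doubling at each level) can be tabulated with a short sketch; the initial filter count and depth below are illustrative configuration values, not necessarily the exact ones used in training:

```python
# Tabulate the encoder path of a U-Net of the kind described above: at each
# level the spatial size halves (2x2 max pooling) and the filter count doubles,
# reaching its maximum at the bottom of the "U".
def encoder_levels(input_size=512, initial_filters=16, depth=5):
    levels = []
    size, filters = input_size, initial_filters
    for level in range(depth):
        levels.append((level, size, filters))
        size //= 2        # 2x2 max pooling halves height and width
        filters *= 2      # number of filters doubles at each level
    return levels

levels = encoder_levels()  # [(0, 512, 16), (1, 256, 32), ...]
```

The decoder mirrors this table in reverse, concatenating each upsampled level with its encoder counterpart.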

**Figure 1.** U-Net model with features dimensions and number of features [29].

Configuration parameters used for this network to prevent overfitting are as follows:


In this work, Adam was chosen as the optimizer and Binary Crossentropy was chosen as the loss function.

### *2.3. Dataset Creation and Input Data Preprocessing*

The dataset created and used for the U-Net training phase is provided at [31]. It is composed of 5600 squared monochromatic jpeg images and 5600 associated jpeg masks. All of them have a size of 512 × 512 px and a bit depth of 8. Masks' pixels can have one of two possible values: 0 for background and 255 for foreground pixels.

The samples for the training and validation phases are the first 5300. They are generated from a batch of 600 images to which 180° clockwise rotation, 90° clockwise rotation, added noise, increased stray light, blur effects, increased luminosity, increased contrast and further modifications have been applied. This original batch is made of 300 samples coming from several night sky acquisition campaigns and 300 samples randomly acquired using the Stellarium [32] software in night sky conditions considering different FOVs and attitudes (Figure 2). Acquisition campaigns were performed using several kinds of cameras and objectives: a ZWO ASI 120MM-S camera with default optics, a Nikon D3100 reflex equipped with a Nikkor 18-105 mm and a ProLine PL16803 camera coupled with an Officina Stellare RiFast 400 telescope.

**Figure 2.** Dataset samples: real night sky image (on the **left**) and Stellarium-generated night sky (on the **right**).

The last 300 dataset images were obtained with a high fidelity star tracker image simulator (HFSTIS). It is used to generate realistic night sky images, simulating all the physical, functional and geometrical characteristics of the star sensor, along with all the instrumental and environmental noises [33–37]. Simulator images were used only in the test phase to provide independent samples the network had never been trained on, in order to monitor the generalization capability of the trained network.

The dataset contains images of the night sky where stars are the main actors but where planets, stray light noise caused by light pollution, SEUs, clouds, airplane streaks, satellites, comets, nebulae and galaxies appear as well. Masks were obtained using specific thresholds for the different groups of images from different campaigns, sensors and software. In particular, mask creation was carried out using Adobe Photoshop CC 2019 with the aim of obtaining just the most salient objects in the FOV and filtering all the noise and undesired objects cited above. An example of a dataset image and mask is provided in Figure 3, where it is clearly visible that just the brightest points appear inside the mask. Input image and mask preprocessing is performed in order to organize these data into two float 4D tensors for NN training and testing. Every image is converted into a floating point array and normalized using its maximum value to carry out an image-adaptive normalization process; every most significant image pixel will have an associated value close to 1. This helps the network detect the most significant pixels and makes the learning process easier for the net by constraining the interval of signal values between 0 and 1. Moreover, this normalization process improves the algorithm's capability of working with images that have different ranges of energy levels. The same process applies to mask data vectorization and normalization, with the only difference being that the normalizing factor is constant and equal to 255 for every mask.
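The preprocessing described above can be sketched as follows (the function name and array shapes are illustrative; each image is normalized by its own maximum, each mask by the constant 255):

```python
import numpy as np

def preprocess(images, masks):
    """Organize images and masks into two float 4D tensors (N, H, W, 1):
    each image is divided by its own maximum (adaptive normalization),
    each mask by the constant factor 255."""
    x = np.stack([im.astype(np.float32) / im.max() for im in images])
    y = np.stack([m.astype(np.float32) / 255.0 for m in masks])
    return x[..., np.newaxis], y[..., np.newaxis]

# Toy 8-bit inputs standing in for 512 x 512 px dataset samples.
imgs = [np.random.randint(0, 256, (512, 512), dtype=np.uint8) for _ in range(2)]
msks = [np.random.choice([0, 255], (512, 512)).astype(np.uint8) for _ in range(2)]
x, y = preprocess(imgs, msks)
```

After this step every image has its brightest pixel at exactly 1.0 and every mask is strictly binary (0 or 1).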

**Figure 3.** Dataset samples: real night sky image (on the **left**) and relative mask (on the **right**).

It must be pointed out that the dataset does not only contain point-like stars but also images where stars appear as streaks due to the angular speed of the sensor. The considered angular speeds are within the [0°/s, 1°/s] range. This allows the proposed AI-based algorithm to work properly even in situations where the star sensor is rotating with respect to the fixed stars or where a sidereal pointing of the camera occurs together with RSO passages.

Concerning the noise, the 5300 images for training and validation are generated from a batch of 300 images. This batch is composed of several acquisitions performed with different sensors and thus different noise conditions. To the less-noise-corrupted images, a source of uniform noise was added using Adobe Photoshop 2019 (Filter → Noise → Add Noise → Uniform 6.3%). For different stray light conditions, +0.5, +1 and +1.5 stops of image exposure were added randomly. It is quite impossible to retrieve the sensors' noise level characterization due to the different kinds of sensors used and the unavailability of their noise features. For the last 300 test images, it is possible to characterize the noise:


### *2.4. Algorithm Design and Configuration*

In this section, our proposed AI-based algorithm's scheme is described and shown (Figure 4). It will be referred to as the brightest objects sky segmentation (BOSS) algorithm.

**Figure 4.** BOSS algorithm scheme: data are in orange, and modules are in light blue.

The monochromatic raw image from the camera arrives at the preprocessing module, where it is resized, normalized and vectorized into a floating point array. This module will be tailored according to the specific sensor's bit depth, while the size must be the one the NN was trained on. Then, the array is processed by the trained U-Net, which gives its output prediction. This is a 2D array where every pixel has an associated probability *p* of being foreground. If *p* ≥ *pmin*, then this value is rounded up to 1. After the thresholding filter, a binary image is obtained, and its element-wise product with the original image provides an array with just the active pixels' energy values. This output is used by the clustering algorithm to detect clusters and compute their centroids and total energies. The clustering module developed within this work takes all the active pixels in the segmented image and organizes them in a list. Then, it associates the first pixel in the list to the first cluster and starts to verify whether the next pixels are part of the same cluster or not through a distance-based criterion. Whenever the next active pixel does not belong to any already identified cluster, a new cluster is created.

The distance criterion uses the Euclidean norm, and the condition to assess the membership of two active pixels (**p**<sub>*i*</sub> and **p**<sub>*j*</sub>) to the same cluster is expressed by Equation (1).

$$\|\mathbf{p}\_i - \mathbf{p}\_j\| \le \sqrt{2} \tag{1}$$
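Since the √2 bound holds exactly for pixels sharing an edge or a corner, Equation (1) amounts to 8-connectivity. A minimal sketch of the sequential clustering described above, with a merge step for pixels that bridge two previously separate clusters:

```python
import numpy as np

def cluster_pixels(active_pixels):
    """Group active pixel coordinates into clusters: a pixel joins a cluster
    if its Euclidean distance to any member is <= sqrt(2) (Equation (1))."""
    clusters = []
    for p in active_pixels:
        merged = None
        for cluster in list(clusters):           # iterate over a copy
            if any(np.hypot(p[0] - q[0], p[1] - q[1]) <= np.sqrt(2) + 1e-9
                   for q in cluster):
                if merged is None:
                    cluster.append(p)            # join the first matching cluster
                    merged = cluster
                else:                            # p bridges two clusters: merge
                    merged.extend(cluster)
                    clusters.remove(cluster)
        if merged is None:
            clusters.append([p])                 # start a new cluster
    return clusters

pixels = [(0, 0), (0, 1), (1, 1), (10, 10), (11, 11)]
clusters = cluster_pixels(pixels)  # two clusters: one of 3 pixels, one of 2
```

The two groups of adjacent pixels end up in separate clusters because the gap between them far exceeds √2.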

The clustering algorithm then first performs a filtering action with a minimum and maximum dimension and then a sorting of all the clusters. The minimum dimension filter is needed to avoid possible noise signals in the clustering outputs (e.g., hot pixels), while the maximum dimension filter is needed to avoid large detected clusters due to non-star objects.

The sorting operation is then performed in descending order and considers the first N clusters in output from the previous filtering actions. It is based on a combination (Equation (2)) of the cluster's dimension (*dimcluster*), energy (*Ecluster*) and maximum energy value (*max*(*E*(**p**<sub>*i*</sub>))) over the cluster, because this sorting index proved suitable for selecting the best clusters for star pattern recognition purposes in the conducted tests.

$$\text{Sorting Index} = \frac{E\_{\text{cluster}}}{\text{dim}\_{\text{cluster}}} \times \max(E(\mathbf{p}\_i)) \text{ with i over cluster} \tag{2}$$

N is computed by applying a 1.25 factor to the average number of stars which depends on the FOV and cut-off magnitude of the selected sensor [2]. The maximum number of clusters in output is fixed a priori to process just the most significant ones, and the 25% margin factor is selected to take into account possible uncertainties of the average number of stars formula.
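A sketch of the filtering and sorting steps, where each cluster is reduced to its list of pixel energies; the size limits *d_min*/*d_max* are illustrative placeholders for the configured minimum and maximum dimensions:

```python
import numpy as np

def sort_clusters(clusters, avg_num_stars, d_min=2, d_max=100):
    """Filter clusters by size, sort them by the index of Equation (2)
    (E_cluster / dim_cluster * max pixel energy, descending) and keep the
    first N = 1.25 * average number of stars. Each cluster is given as a
    list of its pixel energies; d_min/d_max are illustrative limits."""
    kept = [c for c in clusters if d_min <= len(c) <= d_max]

    def sorting_index(c):
        energies = np.asarray(c, dtype=float)
        return energies.sum() / len(energies) * energies.max()  # Equation (2)

    kept.sort(key=sorting_index, reverse=True)
    n_max = int(np.ceil(1.25 * avg_num_stars))   # 25% margin factor
    return kept[:n_max]

clusters = [[200, 180, 150], [40, 30], [255, 250, 240, 230], [10]]
best = sort_clusters(clusters, avg_num_stars=2)
```

The single-pixel cluster is removed by the minimum dimension filter, and the brightest, densest cluster is ranked first.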

The U-Net is capable of resolving background and foreground pixels after training, validation and test phases, but it still gives continuous values that have to be discretized. This is the reason why the thresholding filter and the selection of a reasonable value for *pmin* will be described in the next sections in order to obtain the best correspondence between algorithm prediction and targeted masks. Once the U-Net is trained and *pmin* selected, the BOSS algorithm will be compared with traditional image segmentation ones, both described in the next section in terms of segmentation performance.

### *2.5. Traditional Image Processing Algorithms Description*

In this framework, several traditional algorithms for image segmentation were selected for comparison with the AI-based proposed algorithm. In the following, five algorithms are considered. They are based on Niblack's method, Otsu's method, the weighted iterative threshold approach (WIT) [1] and two local threshold (LT) approaches.

### 2.5.1. Niblack's Algorithm

Niblack's method is based on the computation of a threshold value *T*, which is a function of the energy mean value and standard deviation computed over a rectangular window. By sliding the window over the whole image, it is then possible to identify the foreground and background pixels by comparing the window's pixel energy values with the local threshold.

$$T(\mathbf{x}, \mathbf{y}) = m(\mathbf{x}, \mathbf{y}) + k \times \sigma(\mathbf{x}, \mathbf{y}) \tag{3}$$

where *k* is a parameter which varies between −0.2 and −0.1, while the other parameter is the dimension of the window (assumed square in this paper, with side *dNb*).
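A direct (unoptimized) numpy sketch of Niblack's thresholding; the window size and *k* value below are illustrative configuration choices within the stated range:

```python
import numpy as np

def niblack_threshold(image, window=15, k=-0.2):
    """Niblack local threshold (Equation (3)): T(x, y) = m(x, y) + k * sigma(x, y)
    over a sliding square window; pixels above T are marked as foreground."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    fg = np.zeros(image.shape, dtype=bool)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + window, j:j + window]
            t = patch.mean() + k * patch.std()   # local threshold
            fg[i, j] = image[i, j] > t
    return fg

img = np.full((32, 32), 10.0)
img[15:17, 15:17] = 200.0          # a bright point-like "star"
mask = niblack_threshold(img)
```

The star pixels exceed their local threshold, while pixels in the uniform background (where σ = 0, so T equals the local mean) do not.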

#### 2.5.2. Otsu's Method

Otsu's method is based on the gray-level image histogram. It aims to find the threshold value which minimizes the intraclass variance [38] of the thresholded black and white pixels. The main advantage of this method is its nonparametric nature, which avoids the need to configure it according to the image noise level.

#### 2.5.3. Weighted Iterative Threshold Approach

This method is an iterative way to compute the optimal threshold for a given image. At each step, the threshold is computed using the following formula:

$$T = \frac{(1+\delta) \times \mu\_1 + (1-\delta) \times \mu\_0}{2} \tag{4}$$

where *μ*<sub>1</sub> and *μ*<sub>0</sub> are, respectively, the average gray-level values of the foreground and background pixels. With this new threshold, new sets of foreground and background pixels can be computed together with their mean values, and the process repeats with the computation of a new threshold. When the difference between two successive thresholds is lower than a certain tolerance Δ, the process stops. This algorithm depends on a scalar parameter *δ* which can vary in the range [−1.0, +1.0].
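The iteration can be sketched as follows; the tolerance standing in for Δ and the *δ* value are illustrative inputs. On a bimodal example the threshold settles midway between the two class means:

```python
import numpy as np

def weighted_iterative_threshold(image, delta=0.0, tol=0.5):
    """Weighted iterative threshold (Equation (4)): repeatedly split pixels
    into foreground/background at T, recompute T from the class means mu_1
    (foreground) and mu_0 (background), and stop when two successive
    thresholds differ by less than the tolerance."""
    img = image.astype(float)
    t = img.mean()                       # initial guess
    while True:
        fg, bg = img[img > t], img[img <= t]
        mu1 = fg.mean() if fg.size else t
        mu0 = bg.mean() if bg.size else t
        t_new = ((1 + delta) * mu1 + (1 - delta) * mu0) / 2.0   # Equation (4)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Bimodal toy image: 990 background pixels at 10, 10 "star" pixels at 200.
img = np.concatenate([np.full(990, 10.0), np.full(10, 200.0)])
t = weighted_iterative_threshold(img)   # converges to (200 + 10) / 2 = 105
```

With δ = 0 the update is the plain midpoint of the two class means, which is why the toy example converges to 105.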

### 2.5.4. Local Threshold Approach Based on Rectangular Areas (LTA)

The algorithm used is from MATLAB. It scans the whole image and computes a local threshold. The size of the window is given by the following formula:

$$Size = 2 \times floor \left(\frac{Image\ size}{16}\right) + 1\tag{5}$$

Over this window, a Bradley's mean [39] is computed. By comparison between a pixel's energy value and this local mean, the classification of the pixel as background or foreground occurs. This process is carried out using a scalar parameter called sensitivity (*S*). It varies in the range [0, 1] and indicates sensitivity towards thresholding more pixels as foreground.
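A sketch of the idea, using an integral image for the local mean (Bradley's trick for constant-time window sums); the mapping from the sensitivity *S* to a threshold scale factor is an assumption made here for illustration, not MATLAB's exact formula:

```python
import numpy as np

def local_mean_threshold(image, sensitivity=0.5):
    """Local adaptive threshold sketch: window side from Equation (5),
    local mean computed via an integral image, and a sensitivity that
    scales the mean (illustrative mapping, not MATLAB's exact one)."""
    size = 2 * (min(image.shape) // 16) + 1          # Equation (5)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Integral image with a leading zero row/column: any window sum is then
    # ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0].
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = image.shape
    win = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win[i, j] = (ii[i + size, j + size] - ii[i, j + size]
                         - ii[i + size, j] + ii[i, j])
    local_mean = win / (size * size)
    # Higher sensitivity lowers the threshold -> more pixels become foreground.
    return image > local_mean * (1.5 - sensitivity)

img = np.full((64, 64), 10.0)
img[32, 32] = 200.0                 # one bright pixel
mask = local_mean_threshold(img)    # only the bright pixel survives
```

On the uniform background the pixel value equals its local mean and fails the strict comparison, so only the bright pixel is classified as foreground.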

### 2.5.5. Local Threshold Approach Based on a Moving Streak Average (LTS)

The last segmentation algorithm [40] can be configured using a static or a dynamic approach [41] for the background noise estimation [38]. In this work, a dynamic approach based on zigzag local thresholding with a moving average is used [42]. In particular, a pixel is saved if its energy value is greater than the background noise, evaluated through a line-by-line moving average, plus a threshold *τpre*.

The information relative to the segmented image is used for the clustering process, i.e., the combination of segments belonging to the same star streak [34–37]. The most important step required by the clustering algorithm is the evaluation of the primitive clusters containing pixels which share at least one corner. A suitable technique has been developed in order to relate streaks in the image to the same star when the corresponding streak is broken into multiple streaks due, for example, to a high angular rate of the spacecraft. Three conditions must be satisfied, related to the minimum distance, the direction and the density of the considered primitive clusters. The centroid coordinates of each cluster are evaluated taking into account the energies of the segments merged to build the cluster itself, according to Equation (6):

$$\mathbf{p}\_{cluster,i} = \sum\_{j} \left( \frac{E\_{seg,j} \cdot \mathbf{p}\_{seg,j}}{E\_{cluster,i}} \right) \tag{6}$$

where **p**<sub>*seg*,*j*</sub> are the barycenter coordinates of the j-th segment in the image plane, while *Ecluster*,*i* is the sum of the energies *Eseg*,*j* which form the i-th cluster:

$$E\_{\text{cluster},i} = \sum\_{j} E\_{\text{seg},j} \tag{7}$$
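Equations (6) and (7) can be sketched directly; the segment barycenters and energies below are illustrative:

```python
import numpy as np

def cluster_centroid(seg_positions, seg_energies):
    """Energy-weighted centroid of one cluster from its segments:
    E_cluster = sum_j E_seg_j (Equation (7)) and
    p_cluster = sum_j E_seg_j * p_seg_j / E_cluster (Equation (6))."""
    p = np.asarray(seg_positions, dtype=float)   # (n_segments, 2) barycenters
    e = np.asarray(seg_energies, dtype=float)    # (n_segments,) energies
    e_cluster = e.sum()                          # Equation (7)
    return (e[:, None] * p).sum(axis=0) / e_cluster, e_cluster

# Two segments of the same streak: the brighter one pulls the centroid.
centroid, energy = cluster_centroid([(10.0, 20.0), (12.0, 20.0)], [100.0, 300.0])
```

The centroid lands at x = (100·10 + 300·12)/400 = 11.5, closer to the higher-energy segment.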

#### *2.6. Comparison Indices*

The performance comparison between the BOSS algorithm and the other algorithms will be detailed in the Results section. The compared algorithms will be tested with a set of gray scale images and relative masks; in order to assess the accuracy of the algorithms' predictions against the test masks, suitable indices are necessary. These indices are heavily used in the segmentation field to compare two image masks, which is exactly what the U-Net output and the ground truth from the dataset are. They can also be used to compare the predictions of any pair of segmentation algorithms. Before introducing them, the notions of true positive (TP), false positive (FP) and false negative (FN) must be given:

- True positive (TP): a pixel predicted as foreground which is foreground in the ground truth mask;
- False positive (FP): a pixel predicted as foreground which is background in the ground truth mask;
- False negative (FN): a pixel predicted as background which is foreground in the ground truth mask.
A good prediction has background pixels and foreground ones almost in the same position inside an array as the associated ground truth mask. The goodness of the prediction can be assessed through the evaluation of Precision, Recall and F1 index.

$$Precision = \frac{TP}{TP + FP} \times 100\tag{8}$$

It represents the percentage of the TP with respect to the sum of TP and FP. The lower the FP is, the higher the Precision will be (with maximum value equal to 100% under ideal conditions).

$$Recall = \frac{TP}{TP + FN} \times 100\tag{9}$$

It represents the percentage of the TP with respect to the sum of TP and FN. The lower FN is, the higher the Recall is (with maximum value equal to 100% under ideal conditions). Recall and Precision are similar but with a slight difference. This is due to the distinction of FP and FN. Every time an FP or FN occurs, there is a discrepancy between the prediction and the reference mask, but the difference between FP and FN helps us better understand the behaviour of the trained network.

Another index to be defined is F1:

$$F1 = \frac{2 \times Precision \times Recall}{Precision + Recall} \tag{10}$$

This index combines the Precision and Recall and lets us understand when the best compromise between FP and FN occurs. It reaches its maximum value when the discrepancy between the prediction and the mask is minimized. Ideally, the maximum value of F1 is 100%, but the best prediction will be the one with the highest value of the F1 index.
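The three indices can be computed from a predicted mask and its ground truth as follows (a minimal numpy sketch; the toy masks are illustrative):

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Precision, Recall and F1 (Equations (8)-(10)) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # foreground predicted where truth is foreground
    fp = np.sum(pred & ~truth)    # foreground predicted on background
    fn = np.sum(~pred & truth)    # foreground missed by the prediction
    precision = 100.0 * tp / (tp + fp)                      # Equation (8)
    recall = 100.0 * tp / (tp + fn)                         # Equation (9)
    f1 = 2.0 * precision * recall / (precision + recall)    # Equation (10)
    return precision, recall, f1

truth = np.zeros((4, 4), dtype=bool); truth[0, :2] = True   # 2 foreground px
pred = np.zeros((4, 4), dtype=bool); pred[0, :3] = True     # 1 extra px (FP)
p, r, f = segmentation_scores(pred, truth)
```

Here the prediction covers all of the ground truth (Recall = 100%) but adds one false positive, so Precision drops to 66.7% and F1 lands at 80%.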

### **3. Results**

In this section, results of U-Net's training, validation and testing will be shown, together with a reasonable choice for *pmin* and number of initial filters. Then, a comparison test of the BOSS algorithm with traditional ones is conducted against the same set of images.

### *3.1. U-Net Training and Test*

The U-Net tested here is configured with the values shown in Table 1.

**Table 1.** U-Net configuration parameters.


Training and validation were performed in TensorFlow Keras [43] with three epochs and a training batch size of three. This was conducted for four different numbers of initial filters: 16, 32, 64 and 128. This choice was made to see how performance, in terms of accuracy and loss, varies with increasing network complexity, in order to select the minimum number of filters that still achieves satisfactory performance. Every trained network reached accuracies higher than 99.80% after the first epoch, which then remained essentially constant across training, validation and testing. The same behavior applies to the final losses, which are no higher than 0.016. Moreover, no sign of overfitting was observed.

The training, validation and tests were performed on the following workstation:


### *3.2. Tuning of Thresholding Filter*

From the previous phase, it appears that every model has learned its task, but their output is not yet a binary mask. To obtain one, a thresholding operation has to be performed, which requires selecting a suitable value of *pmin*. Its tuning is conducted over values ranging from 0.01 to 0.9, for the different models, on the 300-image test set.

For every *pmin* and model, the values (over the whole test set) of Precision, Recall and F1 are computed and reported in Figure 5. It can be seen (Figure 6) that the network with 128 filters performs best in terms of output mask. The value of around 69% for the F1 index corresponds to a minimized number of discrepancies between the BOSS algorithm's prediction and the mask, and it is the highest among the best F1 values achieved by all the networks. For this reason, the model with 128 filters and *pmin* = 0.04 was chosen for the BOSS algorithm design.

The Precision behaviour at low *pmin* values is due to an increasing number of FPs: as *pmin* decreases, the number of FPs grows relative to the TPs. At higher *pmin*, the number of FPs tends to 0 while the TPs tend to a finite value, and 100% Precision is achieved. This maximum Precision value still does not mean that the prediction is good.

A similar argument holds for the Recall: as *pmin* decreases, the risk of producing FNs decreases relative to the TPs, which is why the Recall grows. At higher *pmin*, the increasing number of FNs causes the Recall to fall towards 0%.

A test with a 256-initial-filter model was also conducted, but its performance showed no relevant improvement over the 128-initial-filter model, while the increase in required storage memory and computational time is not negligible (and also undesired). This is why its results are not reported in Figure 5.
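The tuning of the thresholding filter described above amounts to a sweep over candidate thresholds, picking the one that maximizes the mean F1 over the test set. A minimal sketch under these assumptions (names such as `tune_pmin` are hypothetical, not from the original work):

```python
import numpy as np

def f1_percent(pred, truth):
    """F1 index (in percent) between two boolean masks."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    prec = 100.0 * tp / (tp + fp) if tp + fp else 0.0
    rec = 100.0 * tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def tune_pmin(prob_maps, truth_masks, thresholds):
    """Return the threshold maximizing the mean F1 over the test set."""
    best_t, best_f1 = None, -1.0
    for t in thresholds:
        mean_f1 = np.mean([f1_percent(p >= t, m)
                           for p, m in zip(prob_maps, truth_masks)])
        if mean_f1 > best_f1:
            best_t, best_f1 = t, mean_f1
    return best_t, best_f1

# Toy example: one 2x2 U-Net probability map whose ground truth keeps
# only the brightest pixel
prob = [np.array([[0.9, 0.2], [0.05, 0.5]])]
truth = [np.array([[True, False], [False, False]])]
t, f1 = tune_pmin(prob, truth, [0.01, 0.04, 0.3, 0.6, 0.9])
# t = 0.6, f1 = 100.0 for this toy map
```

The sweep reproduces the trade-off discussed in the text: very low thresholds admit many FPs (low Precision), very high thresholds produce FNs (low Recall), and the best F1 sits in between.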

**Figure 5.** Performance curves vs. *pmin* and models.

**Figure 6.** Details of performance curves vs. *pmin* and models.

### *3.3. BOSS Algorithm Tests*

In this section, examples of BOSS algorithm applications for stray light removal and ghost noise removal in night sky images are shown. The aim of a good segmentation in this case is to retrieve the point-like stars and streaks (RSOs) while removing background pixels that are often badly affected by several noise sources. By noise, we mean the part of the image signal that does not represent the target scene and must be filtered out. It corrupts the image content, and if the ratio between the useful signal and the noise is higher than but close to one, the useful signal may be filtered out as noise and lost to further processing. The image noise is due to several causes, briefly divided into the following:


### 3.3.1. Sidereal Pointing Camera and Different Exposure Times

This test aims to investigate the BOSS algorithm's behaviour against an increasing stray light noise source. Stray light noise is due to external light sources which cause a flare over the whole image. It can be experienced when a space-based optical sensor has a boresight direction close to the Sun, the Moon, or another high-luminosity source. Stray light can have a uniform intensity or a uniform gradient. If it is too strong, it can hide the useful signal associated with stars and RSOs and make them difficult or even impossible to segment and detect. A change in the stray light intensity during the mission may require continuous sensor calibration, which could be avoided with a calibration-less algorithm such as the U-Net-based one proposed in this work. This noise can grow with the sensor exposure time, as the number of collected photons increases proportionally with this time interval. In this test, images acquired with a night sky simulator facility (NSSF) (Figure 7) were used. This facility consists of a darkroom where a ZWO ASI 120MM-S camera points towards a screen. Here, the Stellarium software is used to simulate the sky as seen by a specific camera with a selected FOV and optics (2.8 mm focal length, f/1.4). All the simulated stars in Figures 8 and 9 have a magnitude lower than 6.5. With this software and the real camera, it is possible to simulate the operation of a camera pointing at the night sky, both for star and RSO segmentation and detection, and obtain preliminary results. The images' noise sources are the real camera's noise plus a further stray light contribution due to the screen technology (backlit screen).

**Figure 7.** NSSF at ARCALab, School of Aerospace Engineering, Sapienza University of Rome.

A sidereal pointing was considered in this test in order to see the effect of the image stray light noise increasing with the camera exposure time. Exposures of 100, 200, 500 and 1000 ms were considered, and the original and segmented images are reported below (Figure 8). From this figure, it is possible to assess that the segmentation algorithm removes the stray light and correctly segments the stars. As the exposure time (and thus the noise) increases, the number of foreground pixels also increases, because fainter targets become bright enough to be detected.

**Figure 8.** Left column: images from NSSF; right column: segmented images by the BOSS algorithm; from top to bottom: 100, 200, 500 and 1000 ms of exposure time.

**Figure 9.** Left column: images from NSSF with ghost noise; right column: segmented images by the BOSS algorithm; from top to bottom three different ghost noise scenarios (**a**–**c**).

### 3.3.2. Ghost Noise Removal

This test aims to investigate the BOSS algorithm's behaviour against a nonuniform gradient noise: ghost noise. This undesired noise source is due to camera lens reflections, spacecraft structure reflections and dust over the camera lenses. Ghost noise is so named because of its appearance: it is a vague, partially transparent shape superimposed on the background and image objects, a nonuniform gradient signal which may cover the whole image or part of it. If it is too strong, it can hide the useful signal associated with stars and RSOs and make them difficult or even impossible to segment and detect, just as stray light does. A change in the ghost noise intensity during the mission may cause a loss of information if no calibration is performed. This could be avoided with a calibration-less algorithm such as the U-Net-based one proposed in this work, as shown with simulated images in Figure 9 and with real images in the next sections. This noise does not increase with the sensor exposure time, although the gradient level tends to become more uniform. Here, the same facility has been used, but with an added external flashlight source. The stray light and ghost noise were produced with three different positions of an external flashlight, which was moved to different heights on the left side of the camera and target screen inside the darkroom. Here too, it is possible to see the good segmentation quality provided by BOSS (Figure 9). One particular feature that can be noted is the white edges at the images' corners. They are due to an undesired box displayed in the night sky images generated by Stellarium.

### 3.3.3. Real Image Test on BOSS Algorithm Output

In this section, a real night sky image is used to test the BOSS algorithm. The aim of the test was to demonstrate the BOSS capability of providing a suitable segmentation and star localization with a real image and real noise sources. The real image is shown in Figure 10 together with the segmented image. The camera used is a Nikon D3100, whose image was cropped and resized to the U-Net input size of 512 × 512 pixels. Besides the real sensor noise due to the electronics, the image contains a strong stray light source coming from the bottom, due to nearby city light pollution (acquisition site: Tusculum/Frascati, Rome, Italy). The camera parameters are listed in Table 2. Clustering algorithms extracted the centroids of the eight brightest stars in the FOV and correctly localized them. They are shown in Figure 11.
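The paper does not detail the clustering step, but a common way to extract star centroids from a binary segmentation mask is connected-component labeling followed by intensity-weighted averaging. The sketch below is an illustration of that idea only (hypothetical function name, 4-connectivity assumed), not the authors' implementation:

```python
import numpy as np
from collections import deque

def star_centroids(image, mask):
    """Label 4-connected foreground clusters in a binary mask and return
    their intensity-weighted centroids (row, col), brightest cluster first."""
    mask = np.asarray(mask, dtype=bool)
    image = np.asarray(image, dtype=float)
    visited = np.zeros_like(mask)
    rows, cols = mask.shape
    stars = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # Breadth-first flood fill collects one cluster
                queue, pix = deque([(r, c)]), []
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                w = np.array([image[p] for p in pix])
                ys = np.array([p[0] for p in pix])
                xs = np.array([p[1] for p in pix])
                total = w.sum()
                stars.append((total, (ys @ w / total, xs @ w / total)))
    stars.sort(key=lambda s: -s[0])          # brightest first
    return [centroid for _, centroid in stars]

# Toy 4x4 frame with two point-like "stars"
img = np.zeros((4, 4))
img[0, 0] = 10
img[2, 2] = 3
img[2, 3] = 1
cents = star_centroids(img, img > 0)
# cents[0] = (0.0, 0.0); cents[1] = (2.0, 2.25), the weighted centroid
```

Sorting by total intensity makes it straightforward to keep only the N brightest clusters, as done for the eight stars of Figure 11.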

**Table 2.** Nikon D3100 camera features. The image has been resized to 512 px and a 1:1 aspect ratio.


**Figure 10.** Ursa Major constellation segmentation provided by the BOSS algorithm. **Left**: real Ursa Major image; **right**: BOSS output of the Ursa Major real image.

**Figure 11.** Ursa Major constellation stars localization performed by the BOSS algorithm. Names and visual magnitudes of the segmented objects have been reported in Table 3.


**Table 3.** IDs, names and visual magnitudes of the segmented objects in Figure 11.

### *3.4. BOSS Algorithm Comparison with Traditional Image Segmentation Algorithms*

Here, a comparison between BOSS and traditional algorithms is conducted. The F1 index (Equation (10)) was chosen to compare the performance of the algorithms. Every algorithm was tested against the same test set of 300 simulated images [31]. Otsu's algorithm did not need to be configured, while the LTA, WIT and LTS algorithms did. For these configurable algorithms, a tuning of their parameters was performed, and the F1 index was computed for every combination of their internal parameters.

### 3.4.1. Algorithms Comparison Procedure

The rationale behind the algorithm comparison is described in this section. The comparison dataset was composed of 300 images and 300 reference masks, each reference mask representing the desired result of the segmentation process. Every reference mask was manually obtained using Adobe Photoshop 2019 (Image->Adjustments->Threshold), selecting a suitable threshold level for each image because of the varying noise conditions over the 300-image test set.

The goal was then to compare the output of each segmentation algorithm with the corresponding reference mask. The procedure was as follows:


This process was repeated for all 300 images, and a final averaged value of the F1 index was obtained for the considered algorithm.

This procedure was applied directly to the BOSS and Otsu algorithms because they do not need to be configured: Otsu does not require any configuration parameter, and the BOSS algorithm has its fixed value of *pmin*, which was frozen during its design in Section 3.2.

Niblack, LTA, WIT and LTS require an additional step before computing the final F1 value: the optimization of their configuration parameters. Indeed, all of them have at least one configuration parameter to be selected with a suitable criterion:


Considering a generic configurable algorithm, the averaged F1 index was computed for every combination of the configuration parameters, each varying within its specific range. Indeed, every selected value of a configuration parameter makes the algorithm more or less severe in terms of segmentation and changes the final F1 value. The results of this process, using commonly adopted values for the parameters, are collected for each configurable algorithm in Tables 4–7. In the end, the six averaged F1 values for Otsu, BOSS and the optimized algorithms were obtained and are reported in Table 8.
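This parameter optimization is an exhaustive grid search maximizing the averaged F1 index over the test set. A minimal sketch of the procedure (illustrative only; the simple one-parameter global-threshold "algorithm" and all names are hypothetical stand-ins for Niblack/LTA/WIT/LTS):

```python
import itertools
import numpy as np

def mean_f1(segment, params, images, masks):
    """Average F1 (in %) of a configurable algorithm over a test set."""
    def f1(pred, truth):
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        return 200.0 * prec * rec / (prec + rec) if prec + rec else 0.0
    return np.mean([f1(segment(img, *params), m)
                    for img, m in zip(images, masks)])

def optimize(segment, grids, images, masks):
    """Exhaustive search over every combination of configuration parameters."""
    combos = itertools.product(*grids)
    return max(combos, key=lambda p: mean_f1(segment, p, images, masks))

# Stand-in configurable algorithm with a single parameter tau
def global_threshold(img, tau):
    return img >= tau

imgs = [np.array([[5.0, 1.0], [0.5, 9.0]])]
msks = [np.array([[True, False], [False, True]])]
best = optimize(global_threshold, [[0.1, 2.0, 6.0]], imgs, msks)
# best = (2.0,): only tau = 2.0 keeps exactly the two foreground pixels
```

For two-parameter algorithms such as Niblack (*k*, *dNb*) or LTS (*τpre*, *BKG*0), `grids` simply holds two lists, and `itertools.product` enumerates the table of combinations reported in Tables 4 and 7.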

**Table 4.** Niblack's algorithm tuning. F1 index vs. *k* and *dNb*. The first parameter varies along the rows from −0.2 to −0.1, while the second one varies along the columns from 1 to 10.


**Table 5.** LTA algorithm tuning. F1 index vs. *S*. The Sensitivity varies along the columns from 0.1 to 1.0.


**Table 6.** WIT algorithm tuning. F1 index vs. *δ*. Here, *δ* varies along the columns from −1.0 to +1.0.


In these tables, the configuration parameters which maximize the F1 index were identified, and the corresponding maximum F1 values are reported in Table 8 for the final comparison. In this way, every configurable algorithm was optimized against the 300-image test set, in order to make the comparison more challenging for the BOSS algorithm.


**Table 7.** LTS algorithm tuning. F1 index vs. *τpre* and *BKG*0. The first threshold varies along the rows from 5 to 55, while the second one varies along the columns from 1000 to 2000.

**Table 8.** Summary of algorithms' best F1 scores.


### 3.4.2. Comparison Results

Otsu's algorithm is the only traditional one that does not require any parameter configuration. Its F1 score is 0.07%.
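For reference, Otsu's method is parameter-free because it selects the gray-level threshold that maximizes the between-class variance of the image histogram. A compact NumPy sketch of the classic method (an illustration, not the implementation tested in the paper):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Parameter-free Otsu threshold: pick the histogram split that
    maximizes the between-class variance of the gray levels."""
    img = np.asarray(image, dtype=float).ravel()
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    omega = np.cumsum(w)           # probability of class 0 (background)
    mu = np.cumsum(w * centers)    # cumulative mean of class 0
    mu_t = mu[-1]                  # global mean
    with np.errstate(invalid="ignore", divide="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    # Threshold taken as the upper edge of the last class-0 bin
    return edges[np.argmax(sigma_b) + 1]

# Bimodal toy frame: dark background around 10, a few bright "stars" around 200
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(10, 2, 500), rng.normal(200, 5, 20)])
t = otsu_threshold(img)
# (img > t) keeps only the 20 bright-pixel samples
```

Otsu works well when the histogram is clearly bimodal; on faint star fields dominated by background and gradients the global split can fail, which is consistent with the very low F1 score reported here.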

The best achieved F1 values for every algorithm are summarized in Table 8 for a fast comparison.

From the previous tables, it can be seen that the best F1 score was achieved by the LTA algorithm, followed by LTS and then the BOSS algorithm. Niblack and Otsu behaved worse than the others, while WIT achieved a high but not fully satisfactory performance.

Both LTA and LTS behave better than the BOSS algorithm on the test set. At first glance, it would seem that the BOSS algorithm brings no advantage. However, the performances achieved by LTA and LTS were obtained by tuning their parameters, while the BOSS algorithm was not tuned after its design. LTA and LTS have to be recalibrated every time the noise level changes within the selected scenario to achieve the best segmentation quality, whereas the BOSS algorithm does not need this calibration because it has been trained to segment well under several signal-to-noise (SN) levels. This means that the 69% F1 value is a generalized performance figure, while the 79% and 72% values of the traditional algorithms are optimized, not generalized: LTA and LTS were tuned against the test set, while BOSS required no tuning.

This would mean that a star tracker based on the BOSS algorithm would require no calibration and suffer no segmentation performance degradation during its lifetime in orbit. Even if the sensor's noise increases over its lifetime, the U-Net would be able to adapt to several SN ratio levels because it has been trained to do so. Stray light or a higher-radiation region would not greatly affect the quality of the segmentation product. With this in mind, the strength and meaning of the BOSS performance can be better appreciated and understood.

In Figures 12–14, it is possible to visually compare the output quality of these LT approaches and the BOSS algorithm, and to understand the limits of a traditional parameterized segmentation algorithm. Three real images of the night sky, from both Earth and space, were considered with different kinds and levels of noise. A discussion of each follows.

**Figure 12.** Italian Space Agency Matera Telescope's image segmentation: (**a**) real image; (**b**) LTA's prediction; (**c**) LTS' prediction; (**d**) the BOSS algorithm's prediction.

**Figure 13.** E and B EXperiment mission [44] star camera's image detail segmentation: (**a**) real image; (**b**) LTA's prediction; (**c**) LTS' prediction; (**d**) the BOSS algorithm's prediction.

**Figure 14.** Juno Star Reference Unit's [45] image segmentation (Credits to NASA): (**a**) real image, (**b**) LTA's prediction; (**c**) LTS' prediction; (**d**) the BOSS algorithm's prediction.

Considering the processing of these three figures:


stars in the background. Recognizing stars in this condition is important because star sensors are not used only in space, where the absence of atmosphere avoids many noise sources, but also on ships or cruise missiles for navigation purposes, where being able to remove ghost noise due to clouds and other atmospheric effects is mandatory. The LTA prediction (b) shows that a few of the brightest stars have been correctly segmented, with incomplete filtering of the noise in the top right part of the image. The LTS behaves worse because the strong-noise scenario does not allow it to correctly segment the image: there is a lot of noise, with horizontal foreground streaks in addition to the correctly segmented stars. This bad behavior is due to the algorithm's inability to adapt to different SN conditions; here, a recalibration of the algorithm's threshold would provide a better segmentation quality for the mask. In the last image (d), the BOSS algorithm's prediction shows the segmentation of all the brightest stars (red boxes), with just a small portion of noise in the most corrupted region. Both the LTA and the BOSS algorithm show some problems under the strong ghost noise conditions, but in different regions of the image and in different ways: the BOSS algorithm seems to detect as stars the strongly nonuniform, brightest corrupted region due to the mesospheric winds in the image, while the LTA seems to have the opposite problem with the darker region in the top right part. The explanation for the BOSS algorithm is that the whitest spots in the ghost noise are recognized as possible embedded foreground objects, while in the LTA case, the gray-level gradients in the darker localized region mislead the algorithm in properly segmenting the image.
Of the two false-positive cases, the LTA behaviour could affect the output mask quality more severely, because the gradient in the darker region of the image (generally 99% of these samples) increases the percentage of noise in the output mask (with possible negative effects both for attitude determination routines and for RSO detection).

• Figure 14: This is a frame from a video [45] from Juno's Star Reference Unit (SRU) camera. The image contains stars and a huge number of SEUs crossing the sensor at different impact angles as the Juno spacecraft traverses Jupiter's high-radiation polar regions. SEUs in images are caused by ionizing radiation hitting the sensor pixels: the more perpendicular to the sensor plane the ionizing radiation is, the more point-like the footprint of the high-energy particle on the sensor will be. This noise source can be reduced by properly shielding the electronics, but it cannot be removed. In the analysed image, many SEUs cross the sensor because of the highly energetic region in which the Juno spacecraft is located. It is quite difficult to distinguish the nature of the white spots, but the purpose of this image processing is to show how the weakest elements on the detector (certainly SEUs under spacecraft inertial pointing conditions) are removed. A rapid inspection of the figures shows that LTA and LTS produce similar segmentation outputs: the SEUs which cross the detector almost tangentially are segmented as streaks by the LTA and LTS algorithms, while they are removed by BOSS. This is due to the dataset used to train the U-Net: the network learned to detect only the brightest star-like objects, filtering out all the less salient ones. Here, the weak SEU streaks are removed, but this does not mean that all the star-like objects which appear in the BOSS prediction of Figure 14 are stars; they may be SEUs with a high impact angle.

These examples show that the BOSS algorithm's performance is slightly lower than, but acceptable and more generalized than, that of the calibrated traditional algorithms. Moreover, it has an intrinsic robustness to SEUs which a traditional algorithm does not normally have.

### **4. Discussion**

The previous results have demonstrated the high quality of the night sky image segmentation provided by the algorithm. Its non-parametric nature makes calibration unnecessary while maintaining high robustness to strongly varying stray light scenarios. These features are welcome both for attitude determination and for RSO detection through electro-optical sensors such as star sensors. Moreover, the absence of a calibration activity would make an optical camera based on this technology ready to work as soon as the platform is released into its orbit. This algorithm could be used as the main image processing module for lost-in-space [36] or star tracking routines in common star sensors, or for RSO detection modules in onboard space surveillance cameras or star sensors [40,46]. In both applications, good noise filtering capability is crucial for the success of star and object information extraction. This suggests that a star sensor based on the U-Net image processing algorithm can be used to validate this technology in a real space mission and raise its technology readiness level (TRL) up to eight. This module will serve both the AD routines and the RSO detection modules. As each image is acquired by the optical head (OH), it is fed into the U-Net-based image segmentation module, and the resulting mask is then used both by the lost-in-space (LIS)/star tracking and the RSO detection routines (Figure 15).

The selected components of the payload are the following:


**Figure 15.** Payload architecture and the role of the BOSS algorithms (for the OH SAGITTA, Credits to ARCSEC SPACE).

### *4.1. ARCSEC SPACE SAGITTA Star Tracker*

The SAGITTA Star Tracker from ARCSEC SPACE (Figure 16) was chosen due to its good features both for AD and RSO detection purposes. Its most important features for these purposes are the following:


It is compact and suitable for microsatellite and CubeSat missions, with dimensions of 45 × 50 × 95 mm³, a mass of 275 g and a low power consumption of 1.4 W. This device will be customized for the intended missions and used solely as the OH.

**Figure 16.** Sagitta Star Tracker, (Credits to ARCSEC SPACE).

### *4.2. PHOEBE Board*

The PHOEBE board [47] (Figure 17) was used by the School of Aerospace Engineering (Sapienza University of Rome) for the STECCO (Space Travelling Egg-Controlled Catadioptric Object) mission. The PHOEBE board's features include the following:


**Figure 17.** PHOEBE board, (Credits to School of Aerospace Engineering).

The algorithm will be implemented on the PHOEBE System-on-Chip/Field-Programmable Gate Array (SoC/FPGA) hardware to take advantage of the parallel computation capabilities of the FPGA, which would boost the image processing time and make the proposed algorithm suitable for real-time applications. Indeed, as already demonstrated in another work [48], SoC/FPGA hardware is able to reduce the processing time by a factor of 70 when compared with a personal-computer-based implementation of the same algorithm. At the moment, processing one image with this algorithm on a personal computer requires no more than 5 s. An implementation on the cited hardware would therefore mean a realistic processing time no higher than 100 ms, and thus the possibility of working with real 10 Hz star trackers and onboard cameras.

### *4.3. Small Satellite Mission Proposal*

The payload will be able to process camera images at up to 10 Hz, which is adequate both for AD and for RSO detection functions. It will have a total mass no higher than 330 g, a total power consumption no higher than 2 W, and a total volume lower than 0.3 U. It can be installed on both CubeSat and microsatellite platforms, and once in orbit, it can work without any pointing limitations except for the luminosity conditions: the Sun, Moon and Earth must be avoided within the FOV.


The proposed mission has as its goal the detection of space debris in LEO. The payload will be mounted on a 3U CubeSat, where an entire unit will be dedicated to the payload.

**Figure 18.** Visibility conditions.

### **5. Conclusions**

In conclusion, this work has shown several aspects:


As future goals, the development of a real payload for a future IOV mission will be pursued:


**Author Contributions:** Conceptualization, M.M. and F.C.; methodology, M.M. and I.A.; validation, M.M. and I.A.; data curation, M.M. and I.A.; writing—original draft preparation, M.M. and I.A.; writing—review and editing, M.M., I.A. and F.C.; supervision, I.A. and F.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding and was conducted within the main author's Ph.D. research activity in astronomy, astrophysics and space sciences (XXXVI cycle).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** U-Net Dataset link [31].

**Acknowledgments:** The authors want to thank Cosimo Marzo of the *Italian Space Agency "Centro di Geodesia Spaziale Giuseppe Colombo"* Observatory for having provided night sky image data both for building the dataset and for testing the algorithms.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

*Article* **Elevation Angle Characterization for LEO Satellites: First and Second Order Statistics**

**Juan Misael Gongora-Torres \*,†, Cesar Vargas-Rosales †, Alejandro Aragón-Zavala † and Rafaela Villalpando-Hernandez †**

Tecnologico de Monterrey, School of Engineering and Science, Monterrey 64849, Mexico

**\*** Correspondence: misael.gongora@tec.mx

† These authors contributed equally to this work.

**Abstract:** The elevation angle *θ* is relevant for low Earth orbit (LEO) satellite communications, since a LEO satellite is always changing its relative position with respect to fixed Earth stations (ESs), which affects the link length and the received power, *PR*. This article provides a new methodology to compute the probability density function (PDF) and cumulative distribution function (CDF) of the elevation angle, *θ*, for diverse ES locations. This methodology requires as input parameters an ES latitude, *φ*, an orbit inclination, *i*, and an orbit altitude, *h*. The elevation angle is characterized through a well-known random variable, which facilitates the computation of the first- and second-order statistics and helps determine the expected value and measures of dispersion of the angle *θ* for a particular ES location. The proposed methodology allows an easy and quick calculation of the elevation angle's CDF, facilitating comparisons against the CDFs of other ESs located at different latitudes and longitudes, *λ*, as well as comparisons of the CDFs of the elevation angle produced by different orbits. Extensive simulation results are summarized in a small table, which allows computation of the elevation angle's CDF and PDF for multiple ES locations without requiring simulations and statistical fitting. Finally, the proposed methodology is validated through an extensive error analysis that shows the suitability of the obtained results for characterizing the elevation angle.

**Keywords:** Low Earth orbit (LEO); elevation angle; satellite communications

### **1. Introduction**

The characterization of the elevation angle, *θ*, is relevant for low Earth orbit (LEO) satellite communications, since this parameter is directly related to the varying distance between the satellite and the Earth station (ES) and affects the total link attenuation. The description of the elevation angle through an analytical expression is a difficult problem, addressed in [1,2], which has received little attention in the literature.

Nonetheless, this parameter is directly related to the link performance and channel characterization of LEO satellites. The LEO channel characterization has also received less attention than the geostationary (GEO) satellite channel, and just a few models, such as [3], have been specifically developed and published to account for the elevation angle variations introduced at LEO. As a consequence, only a few channel models are available in the literature for LEO satellites, and especially for small satellites, as mentioned in [4].

LEO satellites are growing in number and in their role as an enabling technology for the Internet of Things (IoT), 5G, 6G and next-generation wireless networks aimed at providing global coverage with very low latency [5]. It is therefore relevant to develop methodologies to analyze and compare the link performance and channel characteristics considering the variations introduced by the always-changing elevation angle.

**Citation:** Gongora-Torres, J.M.; Vargas-Rosales, C.; Aragón-Zavala, A.; Villalpando-Hernandez, R. Elevation Angle Characterization for LEO Satellites: First and Second Order Statistics. *Appl. Sci.* **2023**, *13*, 4405. https://doi.org/10.3390/app13074405

Academic Editors: Simone Battistini, Filippo Graziani and Mauro Pontani

Received: 26 February 2023; Revised: 18 March 2023; Accepted: 21 March 2023; Published: 30 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The always-changing elevation angle condition has been a limitation in analyzing the link budget and channel of LEO satellites, and common approaches to characterize it have followed a segmentation into best and worst cases of the elevation angle [6], instead of analyzing the short- and long-term behavior of the elevation angle as an analytical function. However, some emerging problems related to LEO satellites, such as efficient power management [7], have made evident the necessity of characterizing the elevation angle as an analytical function.

The observed elevation angle, *θ*, is always varying for LEO satellites and can be described as a function of time, *θ*(*t*), for the intervals in which a satellite is visible. The value of *θ* for a given position can be deterministically obtained with mathematical procedures developed in orbital mechanics books [8] and widely implemented in space dynamics simulation software. However, it is important to note that even with accurate predictions of *θ* (accounting for contacts over several days or months), the relations between one contact and another, as well as the long-term behavior of the elevation angle, can hardly be described without randomness. Thus, appropriate use of probability theory becomes a great tool to analyze the long-term behavior of the elevation angle, *θ*.

The elevation angle, *θ*, is usually defined for an ES as the angle (above the local horizon) at which the satellite is visible, regardless of its azimuth, within the interval *θ* ∈ (0°, 90°]. The minimum value of *θ* at which communication is possible is often called *θmin* (subject to *θmin* ≥ 0°); similarly, the maximum value of *θ* that can be observed from the ES is called *θmax* (subject to *θmin* ≤ *θmax* ≤ 90°). We can then define a random variable (r.v.) Θ that takes the possible values of *θ*, such that *θ* ∈ (*θmin*, *θmax*].

The probability density function (PDF) and cumulative distribution function (CDF) of the elevation angle, *f*Θ(*θ*) and *F*Θ(*θ*), respectively, are useful functions for characterizing the LEO channel, since the elevation angle affects the received power level, *PR*. The reasoning is that *θ* determines the distance from the satellite to the ES, *rS*,*E*; at greater link distances, a lower received power, *PR*, of the signal transmitted by the satellite can be expected than at shorter distances.

The distance between a LEO satellite and an ES, *rS*,*E*, can vary by several thousand kilometers from a low value of *θ* to a high value of *θ*. Figure 1a shows the extreme cases for the elevation angle in a LEO satellite link. Those extreme cases are not necessarily met at every contact, but represent the best- and worst-length scenarios over a long period. Path lengths between the ES and satellite, *rS*,*E*, are described in this figure using the variable *ri*, *i* ∈ {1, 2, 3}. The figure also shows that at low values of *θ* the path length is larger and the received power, *PR*, is at its minimum value. This implication between the elevation angle, *θ*, the received power, *PR*, and the link length, *rS*,*E*, assumes some constraints explained in detail in [9].

Figure 1b shows the typical case for the elevation angle, where the value of *θ* is not necessarily at its minimum nor at its maximum, but can be at any value within the range *θ* ∈ [*θmin*, *θmax*]. Since the elevation angle of a LEO satellite appears as always-varying from an ES, it is convenient to determine statistical indicators of its behavior, such as its expected value, *E*[·], median, *MED*[·], and standard deviation, *SD*[·], and how often the elevation angle will be above or below a threshold, or within a region of interest, for example, using its quantiles, *nQ*. In addition, since the elevation angle variations are directly related to the link length, the elevation angle for LEO satellites is also directly related to the variations of the received power, *PR*, at an ES, as described in [9].

**Figure 1.** Elevation angle cases for LEO satellite systems: (**a**) The values of *θmin* and *θmax* are usually related to the best and worst case of the received power, *PR*, at an ES; nonetheless, those cases are rare and represent the extremes of *θ*. (**b**) The elevation angle of a LEO satellite system as observed from an ES is always-varying; then, it is more convenient to analyze its behavior through statistical tools.

#### *1.1. Contributions*

The elevation angle characterization is relevant for LEO satellites because it is an always-changing variable for those communication systems. The effects of the elevation angle variations are observable in the received power, link quality, and channel behavior; then, an accurate characterization of this variable is a topic of interest for planning and implementation of LEO satellite systems.

In this article, we have addressed the characterization of the elevation angle for LEO satellites, by obtaining its PDF and CDF, as well as the derivation of its first and second order statistics. First and second order statistics are relevant to evaluate the suitability of different LEO orbits and to determine which orbits are more convenient to provide coverage for a particular application or Earth station location.

In addition, we have developed an extensive analysis to validate our results, and we include supplementary materials containing elevation angle time series. The supplementary materials will facilitate reproducibility of our work and will allow future research based on our proposed methodology and results.

#### 1.1.1. Contribution 1

This document shows the feasibility of using a random variable to characterize the elevation angle behavior for LEO satellites with different orbit configurations. The suitability of using the proposed random variable is verified through an extensive error analysis with diverse orbit configurations.

#### 1.1.2. Contribution 2

The PDF and CDF of the elevation angle can be obtained as proposed in [1,2]. Nonetheless, this document describes a methodology to obtain the PDF and CDF parameters of the elevation angle distribution for different orbit configurations and ES locations using a well-known random variable. The proposed random variable facilitates analytical manipulation as well as computation of the probabilities of occurrence of the elevation angle at specific values. A small table containing the resultant parameters to characterize multiple LEO orbit configurations as observed from multiple ES's is included.

#### 1.1.3. Contribution 3

The expected value of the elevation angle and its measures of dispersion are relevant for planning and implementation of LEO satellite systems, since the received power varies according to the elevation angle. This document describes an analytical methodology to obtain the first and second order statistics of the elevation angle for different orbit configurations and ES locations, which, to the best of our knowledge, are not available in the literature and were only utilized for a specific case in [9].

#### *1.2. Outline*

The remainder of the article is organized as follows: Section 2 introduces the theoretical fundamentals required to understand the subsequent sections; Section 3 describes the developed methodology to characterize the elevation angle behavior through *f*Θ(*θ*) and *F*Θ(*θ*); Section 4 contains the main results; and Section 5 concludes by analyzing the obtained results and opportunities for future work. Figure 2 shows the main structure of the document.

**Figure 2.** Basic structure and contents of this article.

#### **2. Materials and Methods**

#### *2.1. Elevation Angle Definition*

The elevation angle, *θ*, for an ES can be defined as the angle between the local horizon and the satellite. Several sources such as [10–12] contain expressions to calculate *θ* from geometrical relations between the satellite and the ES instantaneous positions. One of those definitions is as follows

$$\theta = \arctan\left(\frac{\cos\Delta\cos\phi\_{\rm ES} - (r\_{\rm E,O}/r\_{\rm S,O})}{\sqrt{1 - \cos^2\Delta\cos^2\phi\_{\rm ES}}}\right),\tag{1}$$

where *r*E,O and *r*S,O are the distances from the center of the Earth, *O*, to the ES and the satellite, respectively; *φ*ES is the latitude at which the ES is located (in degrees); *M* is the subsatellite point, which corresponds to the latitude and longitude of the satellite instantaneous position; and Δ is the difference in longitude between the ES and *M* (in degrees). Figure 3a shows a derivation of *θ* based on (1).
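As an illustrative sketch (not part of the original methodology), Eq. (1) can be evaluated directly. The orbit altitude, ES latitude, and longitude separation below are arbitrary example values:

```python
import math

def elevation_angle(delta_deg, phi_es_deg, r_eo_km, r_so_km):
    """Elevation angle in degrees from Eq. (1).

    delta_deg: longitude difference between the ES and the subsatellite point M.
    phi_es_deg: ES latitude in degrees.
    r_eo_km, r_so_km: geocentric distances of the ES and the satellite.
    """
    c = math.cos(math.radians(delta_deg)) * math.cos(math.radians(phi_es_deg))
    num = c - r_eo_km / r_so_km
    den = math.sqrt(1.0 - c * c)  # undefined when the satellite is exactly at the zenith (c = 1)
    return math.degrees(math.atan(num / den))

r_eo, r_so = 6378.0, 6378.0 + 1500.0          # illustrative 1500 km circular orbit
theta_far = elevation_angle(10.0, 30.0, r_eo, r_so)   # subsatellite point 10 deg away in longitude
theta_near = elevation_angle(0.0, 30.0, r_eo, r_so)   # subsatellite point on the ES meridian
```

As expected from the geometry, moving the subsatellite point closer to the ES meridian increases the observed elevation angle.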

**Figure 3.** LEO characteristics: (**a**) Graphical representation of the always-varying elevation angle, *θ*, and (**b**) an On-Off model showing the link availability for a LEO satellite vs a GEO satellite during a random day.

#### *2.2. Relevance of the Elevation Angle for the Satellite Channel*

LEO satellites differ from those at GEO in their relative position as seen from a fixed ES. Whereas GEO satellites appear fixed, those at LEO appear as always-moving points in the sky, or are absent, causing link unavailability. Link availability can be seen as an ON-OFF process, where the link is ON if the satellite is visible in the sky, and it is OFF if the satellite is absent. Figure 3b shows the ON-OFF process for both a LEO and a GEO satellite.

In addition to the short visibility that LEO satellites have, conditions in this lapse are always varying as a consequence of the changing position of the satellite. When the satellite first becomes visible from an ES, it is at the largest distance supported by the link; then, the satellite approaches the ES until reaching the shortest distance in that particular contact. Finally, the satellite moves away after being at its minimum contact distance. It is important to note that the minimum distance from an ES to a LEO satellite, min(*rS*,*E*), varies from one contact to another, as does max(*rS*,*E*).

Differences between the Earth's rotation rate and LEO orbiting velocities cause an always-varying link length (as seen from a fixed ES), and then, an always-varying elevation angle.

An important relation between the elevation angle, *θ*, and the link length, *rS*,*E*, is that when the angle is at its minimum, the distance *rS*,*E* is at its maximum, i.e.,

$$
\theta \to \min(\theta) \iff r\_{S,E} \to \max(r\_{S,E}), \tag{2}
$$

similarly

$$
\theta \to \max(\theta) \iff r\_{S,E} \to \min(r\_{S,E}); \tag{3}
$$

where the minimum link length, min(*rS*,*E*), occurs at some value of *θ* such that min(*θ*) < *θ* ≤ 90◦; and max(*rS*,*E*) occurs at some value of *θ* ≥ 0◦ lower than *θmax*. The implications in (2) indicate that when values of *θ* diminish, *rS*,*E* increases; similarly, in (3), when values of *θ* increase, *rS*,*E* decreases.

As a satellite moves away from an ES, *rS*,*E* and the free-space path loss, *LFS*, increase. Longer paths occur at lower values of *θ*, and the atmospheric attenuation, *AAtm*, increases with path length (assuming similar atmospheric conditions along different link paths). On the other hand, when *θ* takes values close to 90◦, *rS*,*E* is at its minimum, and so is *AAtm*. Then, low elevation angles cause a greater *LFS* and more atmospheric attenuation.
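The dependence of the link length and free-space path loss on *θ* can be sketched numerically. The slant-range expression below follows from the law of cosines in the triangle formed by the Earth's center, the ES, and the satellite (it is not stated in this section), and the S-band carrier frequency is an arbitrary assumption:

```python
import math

R_E = 6378.0  # Earth radius, km

def slant_range_km(theta_deg, h_km):
    """Link length r_S,E for elevation theta and circular-orbit altitude h (law of cosines)."""
    r_s = R_E + h_km
    s = math.sin(math.radians(theta_deg))
    return math.sqrt(R_E**2 * s**2 + r_s**2 - R_E**2) - R_E * s

def fspl_db(d_km, f_hz=2.0e9):  # assumed S-band carrier, illustrative only
    """Free-space path loss L_FS = 20 log10(4*pi*d*f/c) in dB."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * d_km * 1e3 * f_hz / c)

d_low = slant_range_km(5.0, 1500.0)    # low elevation: long path, larger L_FS
d_high = slant_range_km(90.0, 1500.0)  # zenith: path length equals the orbit altitude
```

Evaluating both cases reproduces the relation in (2) and (3): the low-elevation path is roughly three times longer than the zenith path for this altitude, with a correspondingly larger *LFS*.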

For LEO it is important to consider link interference, *I*, usually addressed as noise in link-budget calculations, and also dependent on the elevation angle as mentioned in [13]. Furthermore, low values of *θ* are also associated in practice with non-line-of-sight (NLOS) conditions, increasing ground interference [14], and increasing multipath fading in the land mobile satellite (LMS) channel [15–17].

There are several models available in the literature to characterize the received signal from satellite systems. Most of these works have been elaborated for the LMS channel, which describes the received signal at a land-moving ES; nonetheless, most of those works focus on GEO systems. Extensive reviews of the LMS channel can be found in [16,18,19].

Among the available channel models for LMS systems, a few focus on characterizations for LEO and non-GEO (NGEO) satellites; but, as noted by [4,20], those are few and there are still many challenges. Most channel models for NGEO systems have been developed from previous models for GEO satellites, especially well-known models such as those by Loo [21] and Lutz [22].

Even though the Loo and Lutz models were not originally intended for LEO satellites, those were later used as a base for much of the non-geosynchronous (NGSO) channel models. The Loo channel assumes that the complex envelope of the received signal, *rT*, contains a LOS, *rD*, and a multipath component, *rM*, which in its phasor notation can be represented as

$$r\_T \exp(j\phi\_T) = r\_D \exp(j\phi\_D) + r\_M \exp(j\phi\_M) \tag{4}$$

where *φT*, *φD*, and *φM* indicate the phase of the total received signal, of the direct or LOS component, and of the multipath component, respectively.
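A minimal sketch of how (4) generates a Loo-type envelope: the LOS amplitude is drawn from a lognormal distribution and the multipath term from a zero-mean complex Gaussian (Rayleigh envelope). All numeric parameters are illustrative and not taken from this article:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Slowly varying (shadowed) LOS amplitude r_D: lognormal, illustrative parameters
r_d = rng.lognormal(mean=0.0, sigma=0.2, size=n)
phi_d = rng.uniform(0.0, 2.0 * np.pi, size=n)

# Multipath component r_M exp(j*phi_M): zero-mean complex Gaussian
sigma_mp = 0.3
mp = rng.normal(0.0, sigma_mp, n) + 1j * rng.normal(0.0, sigma_mp, n)

# Total envelope r_T from the phasor sum in Eq. (4)
r_t = np.abs(r_d * np.exp(1j * phi_d) + mp)
```

A histogram of `r_t` approximates a Loo-distributed envelope for the chosen (hypothetical) shadowing and multipath levels.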

The channel developed by Lutz [22] models a varying received signal affected by different levels of shadowing. This channel is characterized by a Markov chain with a state corresponding to light shadowing, *G*, and another corresponding to deep shadowing conditions *B*. Figure 4 illustrates the Lutz channel.

**Figure 4.** First order Markov chain illustrating the channel model developed by Lutz [22].
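The two-state chain of Figure 4 can be sketched as follows; the per-step transition probabilities are hypothetical, and the long-run fraction of time in deep shadowing matches the stationary probability *pGB*/(*pGB* + *pBG*):

```python
import numpy as np

rng = np.random.default_rng(0)
p_gb, p_bg = 0.05, 0.20   # hypothetical per-step transition probabilities G->B and B->G
steps = 100_000

state = 0                 # 0 = G (light shadowing), 1 = B (deep shadowing)
time_in_b = 0
for _ in range(steps):
    u = rng.random()
    if state == 0 and u < p_gb:
        state = 1
    elif state == 1 and u < p_bg:
        state = 0
    time_in_b += state

frac_b = time_in_b / steps            # empirical fraction of time in deep shadowing
stationary_b = p_gb / (p_gb + p_bg)   # analytical stationary probability (0.2 here)
```

In a full Lutz-style simulation, each state would additionally draw the received signal from its own fading distribution; only the state dynamics are sketched here.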

The Loo channel [21] was adapted by Corazza and Vatalaro [3] to formulate one of the best-known channel models for LEO satellites; similarly, it was modified by Abdi et al. [23] to achieve straightforward analytical expressions for the first and second order statistics of the received signal based on the Nakagami distribution. However, current models for GEO and NGEO systems have recognized the limitations of a single distribution and a single state to describe the received signal, and multistate models with different distributions at each state have become popular.

Channel models for LEO with multiple states as in [22] have been developed containing the same kind of distribution at each state. For example, a combination of the Lutz and Loo models was implemented for NGSO by Perez-Fontan et al. [24], focusing on time series generation. This model used a three-state Markov chain, as shown in Figure 5, to indicate different levels of shadowing; the first state, *S*1, indicates deep shadow; the second, *S*2, indicates moderate shadowing; and the third state, *S*3, indicates line-of-sight (LOS) conditions. Each state describes the received signal according to a Loo distribution with a different Loo triplet, *α*, *ψ*, *MP*; and a different Markov chain for distinct elevation angle values. The so-called Loo triplet, which includes the mean of the direct signal amplitude, *α*, the standard deviation of the direct signal amplitude, *ψ*, and the multipath power, *MP*, is the set of parameters required by the Loo distribution.

Figure 6 shows two Loo distributions to characterize the satellite channel at two elevation angles; the CDFs were computed assuming the presence of a LOS component and a multipath power component very close to the LOS level. The Loo distributions in Figure 6 can be understood as two of the states of Figure 5 at different elevation angles, e.g., a state transition from *S*3,*θ*1 to *S*3,*θ*2.

In addition to the shadowing level at a receiver environment, which is mainly determined by natural and human-made objects in the surroundings of the ES, the received signal level will change according to the elevation angle value. For LEO satellites, the elevation angle changes rapidly, and at a different rate at every contact.

As observed in Figure 6, the Loo CDF curves of the received signal rely on the elevation angle value, since they depend on the LOS component of the received signal, *α*. Then, in order to predict different curves, we need to know the expected value and the measures of dispersion of the elevation angle.

Even though we illustrate the received signal through the well-known Loo model in Figure 6, there are more LEO channel models that depend on a line-of-sight component, and thus directly on the elevation angle. The elevation angle characterization proposed in this article can be a valuable tool to determine CDF curves of the received signal based on the first and second order statistics of the elevation angle: for example, to determine the expected CDF of the Loo model based on the elevation angle expected value, and to determine how far apart other curves will be based on the measures of dispersion of the elevation angle.

**Figure 5.** Generalization of a three-state channel model as proposed by [24] considering state transitions based on elevation angle changes.

**Figure 6.** Loo CDF of the received signal for two elevation angle values. The Loo triplet for *θ*<sup>1</sup> = 10◦ is *α* = −105 dB, *ψ* = 5 dB, and *MP* = −108 dB; and *α* = −95 dB, *ψ* = 5 dB, and *MP* = −98 dB for *θ*<sup>2</sup> = 90◦.

#### *2.3. Analytical Characterization of the Elevation Angle*

The elevation angle was analytically characterized in [2] through its probability density function (PDF), *f*Θ, obtained as a marginal distribution of the joint PDF of Θ and the maximum value of the elevation angle, Θ*max*, as follows

$$f\_{\Theta}(\theta) = \int\_{\theta}^{\theta\_M} f\_{\Theta, \Theta\_{\text{max}}}(\theta, \theta\_{\text{max}}) d\theta\_{\text{max}} \tag{5}$$

where *f*Θ,Θ*max* is given by

$$f\_{\Theta, \Theta\_{\max}}(\theta, \theta\_{\max}) = f\_{\Theta | \Theta\_{\max}}(\theta | \theta\_{\max}) f\_{\Theta\_{\max}}(\theta\_{\max}) \tag{6}$$

or equivalently

$$f\_{\Theta, \Theta\_{\text{max}}}(\theta, \theta\_{\text{max}}) = \frac{G(\theta) \sin \gamma(\theta)}{\sqrt{\cos^2 \gamma(\theta\_{\text{max}}) - \cos^2 \gamma(\theta)}} \cdot \frac{f\_{\Theta\_{\text{max}}}(\theta\_{\text{max}})}{\int\_{\theta\_{\text{min}}}^{\theta\_M} f\_{\Theta\_{\text{max}}}(x) \cos^{-1} \left(\frac{\cos \gamma(\theta\_{\text{min}})}{\cos \gamma(x)}\right) dx} \tag{7}$$

The integral dividing the right-hand term is a constant, say *C*1; then, (7) can be rewritten as

$$f\_{\Theta, \Theta\_{\max}}(\theta, \theta\_{\max}) = \frac{f\_{\Theta\_{\max}}(\theta\_{\max})G(\theta)\sin\gamma(\theta)}{C\_1\sqrt{\cos^2\gamma(\theta\_{\max}) - \cos^2\gamma(\theta)}}\tag{8}$$

and the auxiliary functions *γ*(·) and *G*(·) require an extra parameter, *a*, defined as *a* = *rE*/*rS*, where *rE* = 6378 km is the radius of the Earth, and *rS* = *rE* + *h* depends on the altitude *h* (above the Earth's surface) of the circular LEO orbit in kilometers. The functions *γ*(·) and *G*(·) are defined as follows

$$\gamma(\theta) = \cos^{-1}(a\cos\theta) - \theta \tag{9}$$

$$G(\theta) = \frac{1 + a^2 - 2a \cos \gamma(\theta)}{1 - a \cos \gamma(\theta)} \tag{10}$$

The term *f*Θ*max* (*θmax*) is defined as follows

$$f\_{\Theta\_{\max}}(\theta\_{\max}) = \frac{G(\theta\_{\max})}{K\_2} f\_{\Phi}(\phi\_0 - \gamma(\theta\_{\max})), \quad \text{for } \theta\_{\min} \le \theta\_{\max} \le \theta\_c \tag{11}$$

$$f\_{\Theta\_{\max}}(\theta\_{\max}) = \frac{G(\theta\_{\max})}{K\_2} \left[ f\_{\Phi}(\phi\_0 - \gamma(\theta\_{\max})) + f\_{\Phi}(\phi\_0 + \gamma(\theta\_{\max})) \right], \quad \text{for } \theta\_c < \theta\_{\max} < \theta\_M \tag{12}$$

where *φ*0 is the ES latitude, and *f*Φ(*φ*) is defined from the orbital inclination, *i*, and the ES latitude, *φ*, as follows

$$f\_{\Phi}(\phi) = \frac{\cos \phi}{\pi \sqrt{\sin^2 i - \sin^2 \phi}}, \quad \text{for } |\phi| < i \tag{13}$$

where *θM* = *π*/2, and *K*2 is given by

$$K\_2 = \frac{1}{2} - \frac{1}{\pi} \sin^{-1} \left( \frac{\sin(\phi + \gamma(\theta\_{\min}))}{\sin(i)} \right) \tag{14}$$

where *θmin* is the minimum elevation angle that will be considered, in the range 0◦ < *θmin* < 90◦.
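The auxiliary functions (9) and (10) can be sketched directly; here we assume the parameter multiplying cos *γ*(*θ*) in (10) is the same *a* = *rE*/*rS* defined above (the typeset symbol is ambiguous in this version), and the orbit altitude is an arbitrary example:

```python
import math

def gamma_fn(theta_deg, a):
    """Earth-central angle gamma(theta) of Eq. (9), in degrees."""
    th = math.radians(theta_deg)
    return math.degrees(math.acos(a * math.cos(th))) - theta_deg

def G_fn(theta_deg, a):
    """G(theta) of Eq. (10), assuming the parameter multiplying cos(gamma) is a."""
    g = math.radians(gamma_fn(theta_deg, a))
    return (1.0 + a * a - 2.0 * a * math.cos(g)) / (1.0 - a * math.cos(g))

a = 6378.0 / (6378.0 + 1500.0)   # a = rE/rS for an illustrative 1500 km orbit
```

Two sanity checks follow from the geometry: at the zenith (*θ* = 90◦) the Earth-central angle *γ* vanishes, and *G* reduces to 1 − *a*; *γ*(*θ*) decreases monotonically as *θ* grows.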

Although the results shown in [2] are highly accurate with respect to the actual elevation angle PDF, the analytical expression for *f*Θ is cumbersome to obtain and evaluate. Furthermore, analytical expressions for second or higher order statistics are, to the best of our knowledge, missing from the literature.

### **3. Methodology for the Elevation Angle Characterization**

In this section, we describe the procedure to characterize the elevation angle behavior through a random variable with a manageable analytical expression. We also include a procedure to obtain useful statistics of the elevation angle, such as measures of central tendency and dispersion.

The elevation angle characterization for different LEO orbit configurations will help predict which orbits are more convenient for a particular ES location. In addition, the measures of central tendency and dispersion will help to choose suitable orbits based on the expected elevation angle, and to calculate the link attenuation based on the distribution of the elevation angle and on its expected value and variance.

### *3.1. Simulations Configuration*

Simulations were performed for a LEO satellite with the characteristics mentioned in Table 1, and for the circular orbit configurations listed in Table 2. The LEO region goes from a few hundred kilometers above the Earth's surface up to 2000 km; but the performed simulations and methodology cover just the upper part of this region, since orbit perturbation effects are much lower at those altitudes than at orbital heights closer to the surface, which facilitates operation for longer periods of time without a complex propulsion system.

**Table 1.** Satellite characteristics for the ephemeris generation.


Three initial orbital parameters not included in Table 2, the right ascension of the ascending node, Ω, the argument of perigee, *ω*, and the true anomaly, *ν*, were all set to zero. From those simulations, ephemeris files with the satellite position and velocity in five-second steps were generated. Then, ES's were placed (simulated) at 18 different, arbitrarily chosen locations listed in Table 3. Finally, time series of the observed elevation angle from the ES to the satellite were calculated for each ground location using the previously generated ephemeris files and geometrical relations as in (1).

The orbital mechanics equations required to calculate the position and velocity of the satellite as a function of time were solved using GMAT [25], an open-source program developed and maintained by NASA. The time series for the elevation angle discussed in this paper can be generated in GMAT using the data of Tables 1–3; they can also be generated using other software, for example STK, as in [2]. The time series mentioned in this paper were generated by the authors specifically for this work and are available at [26].

**Table 2.** Orbit characteristics for the ephemeris generation.



**Table 3.** ES locations for the performed simulations specified through their latitude, *φ*, and longitude, *λ*.

### *3.2. Elevation Angle Time Series Analysis*

Each elevation angle data set (obtained from the simulations) was individually analyzed to observe its statistical values of interest (mean, median, standard deviation, quantiles). Those time series were fitted to several distributions using the maximum likelihood estimation method. The goodness of fit of the proposed distributions was evaluated for each elevation angle time series using the Kolmogorov-Smirnov test, and the gamma distribution, *Gamma*(*a*, *b*), showed a better performance for most of the data sets. Other distributions that showed a good fit based on the Kolmogorov-Smirnov test were the beta, *B*(*a*, *b*), and Weibull, *Weib*(*a*, *b*). Figure 7 shows the values of *θ* as well as the PDF and CDF for one elevation angle time series, *θ*(*t*), using the proposed distributions. Figure 8 shows the Kolmogorov-Smirnov statistic, often indicated as *D*, for the time series obtained for different altitudes. The Kolmogorov-Smirnov statistic is an indicator of the maximum distance between the actual time series distribution and the proposed gamma distribution; a low value of *D* indicates a better goodness-of-fit.
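The fitting step can be sketched with SciPy; the surrogate data below are synthetic gamma samples standing in for one *θ*(*t*) time series (the actual series are in the supplementary materials [26]):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Surrogate elevation-angle samples standing in for one theta(t) time series
theta = rng.gamma(shape=2.0, scale=15.0, size=5000)

# Maximum likelihood fit of Gamma(a, b); floc=0 pins the location parameter at zero
a_hat, loc, b_hat = stats.gamma.fit(theta, floc=0)

# Kolmogorov-Smirnov statistic D for the fitted distribution
D, p_value = stats.kstest(theta, "gamma", args=(a_hat, loc, b_hat))
```

A small `D` (maximum distance between empirical and fitted CDFs) indicates a good fit, mirroring the criterion used for Figure 8.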

The gamma PDF, *f*Θ(*θ*), and CDF, *F*Θ(*θ*), are defined as follows

$$f\_{\Theta}(\theta) = \frac{1}{b^a \Gamma(a)} \theta^{a-1} \exp(-\theta/b), \tag{15}$$

and

$$F\_{\Theta}(\theta) = \frac{1}{\Gamma(a)} \gamma \left(a, \frac{\theta}{b}\right),\tag{16}$$

respectively, where Γ(·) is the gamma function, *γ*(·, ·) is the lower incomplete gamma function (not to be confused with the geometric auxiliary function *γ*(·) of (9)), *a* is the shape parameter, and *b* is the scale parameter. By definition, both *a* and *b* are greater than zero.
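Equations (15) and (16) transcribe directly into code; note that SciPy's `gammainc` is the regularized lower incomplete gamma, so it already includes the 1/Γ(*a*) factor of (16):

```python
import math
from scipy.special import gammainc
from scipy import stats

def f_theta(theta, a, b):
    """Gamma PDF of Eq. (15)."""
    return theta ** (a - 1) * math.exp(-theta / b) / (b ** a * math.gamma(a))

def F_theta(theta, a, b):
    """Gamma CDF of Eq. (16); gammainc is already regularized by 1/Gamma(a)."""
    return gammainc(a, theta / b)
```

Both functions agree with `scipy.stats.gamma` evaluated with the same shape and scale parameters.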

**Figure 7.** (**a**) Histogram and (**b**) theoretical densities for one of the elevation angle data sets obtained from the performed simulations.

**Figure 8.** Kolmogorov-Smirnov statistic for the performed simulations at different altitudes.

The shape, *a*, and scale, *b*, parameters for the gamma distribution were obtained for each of the elevation angle time series of each ES. Those parameters were organized into several matrix arrays as shown in Table 4, which contains the shape parameter for the simulated orbits as seen from the ES1, and as in Table 5, which contains the scale parameter also for ES1.

**Table 4.** Shape parameter, *a*, of the gamma distribution to characterize the elevation angle distribution at ES1, for different altitudes, *h*, and orbit inclinations, *i*.


**Table 5.** Scale parameter of the gamma distribution, *b* × 10<sup>−2</sup>, to characterize the elevation angle distribution at ES1, for different altitudes, *h*, and orbit inclinations, *i*.


### *3.3. Orbit Coverage*

Figure 9a illustrates (without scale) passes of a satellite in each one of the six altitudes and for all the inclinations mentioned in Table 2. This figure also shows dashed lines indicating the ES's latitudes mentioned in Table 3.

In Figure 9a, as in Table 2, the orbit inclination parameter, *i*, for each altitude goes from 20◦ to 85◦; then, this figure can be redrawn as the grid shown in Figure 9b, where the curved lines of Figure 9a are replaced by vertical lines indicating the orbit inclination. Additionally, the ES latitude lines were inverted to start with the lowest value at the top.

From Figure 9a it can be observed that a satellite with a certain orbit inclination, *i*, reaches (in its orbit trajectory) at most the latitude that coincides with the inclination value. For example, a satellite with *i* = 20◦ will not be able to pass over an ES located at *φ* = 80◦, since the maximum latitude reached in that orbit will be *φ* ≈ *i* ≈ 20◦; then, that satellite will be best suited to provide coverage to ES's located at latitudes *φ* ≤ *i*, or at most *φ* ≈ *i*. This coincides with basic knowledge of satellite coverage: whereas polar orbits can cover almost all the globe, lower inclination orbits cover smaller portions of the Earth. Then, we can redraw Figure 9b to obtain a coverage grid by discarding points corresponding to cases of *φES* > *iSAT*; this resultant grid is shown in Figure 9c.


From the simulation results obtained for the grid points of Figure 9c, matrices *Ah* and *Bh* containing the shape and scale parameters of *f*Θ(*θ*) and *F*Θ(*θ*) can be defined as follows

where *NaN*'s are placed for the case *φES* > *iSAT*, and *h* ∈ *hs* corresponds to the set of simulated altitudes defined in Table 2 as

$$h\_s = \{1000\text{ km}, 1200\text{ km}, \dots, 2000\text{ km}\}.\tag{18}$$

Tables A1–A6 in Appendix B show the shape and scale parameters obtained from simulations with the configurations shown in Tables 1–3. From those values, matrices *Ah* and *Bh* can be constructed, and linear interpolation can be applied to them in order to obtain shape and scale parameters for orbit inclinations, ES latitudes, and altitudes that were not simulated but are within the ranges of *hs*, *is*, and *φs*.

### *3.4. Reducing the Shape and Scale Matrices*

From the grid of Figure 9c, an array as shown in Figure 9d can be created for each simulated orbit altitude. Figure 9d shows the diagonal numbers of a rectangular matrix *Dh* defined as

$$\mathbf{D}\_h = \begin{bmatrix} d\_0 & d\_1 & d\_2 & \dots & d\_{13} \\ d\_{-1} & d\_0 & d\_1 & \dots & d\_{12} \\ d\_{-2} & d\_{-1} & d\_0 & \dots & d\_{11} \\ d\_{-3} & d\_{-2} & d\_{-1} & \dots & d\_{10} \\ d\_{-4} & d\_{-3} & d\_{-2} & \dots & d\_9 \\ \text{NaN} & d\_{-4} & d\_{-3} & \dots & d\_8 \\ \text{NaN} & \text{NaN} & d\_{-4} & \dots & d\_7 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \text{NaN} & \text{NaN} & \text{NaN} & \dots & d\_{-4} \end{bmatrix} \tag{19}$$

for each altitude *h* ∈ *hs*, where *NaN*'s are placed for cases in which *φES* > *iSAT*.

### *3.5. PDF and CDF of the Elevation Angle*

In addition to *hs*, we can define two additional sets containing the orbit inclinations (as in Table 2) and ES's latitudes (as in Table 3) at which simulations were performed, as

$$i\_s = \{20^\circ, 25^\circ, \dots, 85^\circ\} \tag{20}$$

and

$$\phi\_s = \{0^\circ, 5^\circ, \dots, 85^\circ\}, \tag{21}$$

respectively.

Depending on the input values of *φk*, *hk*, and *ik*, if those are part of the previously defined sets (*is*, *hs*, *φs*) eight cases can arise as follows:

• Case 1. *ik* ∈ *is* and *φ<sup>k</sup>* ∈ *φ<sup>s</sup>* and *hk* ∈ *hs*


Case 1 occurs when all the input values *φk*, *ik* and *hk* are contained in *φs*, *is*, and *hs*, respectively; and Case 8 occurs when none of the *φk*, *ik* and *hk* parameters coincides with previously simulated values. Cases 2 to 7 occur when some, but not all, of the input values (any of *φk*, *ik*, and *hk*) coincide with values within their corresponding sets *φs*, *is* and *hs*. Those cases illustrate all possible scenarios, ranging from no required interpolation to interpolation inside a four-point mesh.

We can represent the ES latitude, *φk*, the orbit inclination, *ik*, or both (for which we want to determine the PDF) as a query point between two known points, inside a three-point mesh, or inside a four-point mesh. Then, we can use an interpolation method, such as those proposed in Appendix C, to find an appropriate value for the gamma distribution parameters at those query coordinates. In addition to those proposed in Appendix C, more interpolation methods are widely available in the literature.
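The four-point-mesh situation can be sketched with a bilinear interpolation (one common choice; the methods of Appendix C are not reproduced here). The 2 × 2 patch of shape-parameter values below is hypothetical:

```python
import numpy as np

# Hypothetical 2x2 patch of a shape-parameter matrix A_h:
# rows indexed by ES latitude, columns by orbit inclination
inclinations = np.array([60.0, 65.0])      # neighbouring values in i_s, degrees
latitudes = np.array([20.0, 25.0])         # neighbouring values in phi_s, degrees
A_patch = np.array([[1.80, 1.90],
                    [1.60, 1.75]])         # illustrative shape-parameter values

def bilinear(i_q, phi_q):
    """Interpolate the shape parameter at a query point inside the four-point mesh."""
    ti = (i_q - inclinations[0]) / (inclinations[1] - inclinations[0])
    tp = (phi_q - latitudes[0]) / (latitudes[1] - latitudes[0])
    top = (1 - ti) * A_patch[0, 0] + ti * A_patch[0, 1]
    bot = (1 - ti) * A_patch[1, 0] + ti * A_patch[1, 1]
    return (1 - tp) * top + tp * bot
```

At the mesh corners the interpolation reproduces the tabulated values exactly, and at the center it returns the mean of the four corners.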

### *3.6. First and Second Order Statistics of f*Θ(*θ*)

The elevation angle expected value can be calculated from (15) as

$$E[\Theta] = \int\_{\theta} \theta f\_{\Theta}(\theta) \mathrm{d}\theta,\tag{22}$$

Nonetheless, a satellite link sometimes requires operating above a minimum value of *θ*, *θmin*; then, elevation angle values below *θmin* are not of interest. Recent satellite systems operating above the Ku band consider *θmin* values of some tens of degrees; e.g., the Starlink constellation considered *θmin* = 40◦, as mentioned in [27]. We can obtain the conditional expected value for values above *θmin* using conditional probability as follows

$$E[\Theta|\Theta \ge \theta\_{\min}] = \int\_{\theta} \theta f\_{\Theta}(\theta|\theta \ge \theta\_{\min}) \,\mathrm{d}\theta,\tag{23}$$

which can be rewritten as in [28] as

$$E[\Theta|\Theta \ge \theta\_{\min}] = \frac{\int\_{\theta\_{\min}}^{\theta\_{\max}} \theta f\_{\Theta}(\theta) \,\mathrm{d}\theta}{F\_{\Theta}(\theta\_{\max}) - F\_{\Theta}(\theta\_{\min})},\tag{24}$$

The variance of Θ is obtained as follows

$$\text{Var}[\Theta|\Theta \ge \theta\_{\text{min}}] = E[\Theta^2|\Theta \ge \theta\_{\text{min}}] - E[\Theta|\Theta \ge \theta\_{\text{min}}]^2,\tag{25}$$

and the standard deviation of Θ is then defined as

$$\text{SD}[\Theta|\Theta \ge \theta\_{\text{min}}] = \text{Var}[\Theta|\Theta \ge \theta\_{\text{min}}]^{1/2} \tag{26}$$
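Equations (24)–(26) can be evaluated numerically for an illustrative Gamma(*a*, *b*) fit; the parameters below are arbitrary, and *θmin* = 40◦ echoes the Starlink example:

```python
import numpy as np
from scipy import stats

a, b = 2.0, 15.0                 # illustrative gamma parameters for f_Theta
t_min, t_max = 40.0, 90.0        # operating range of interest

grid = np.linspace(t_min, t_max, 20001)
pdf = stats.gamma.pdf(grid, a, scale=b)
norm = stats.gamma.cdf(t_max, a, scale=b) - stats.gamma.cdf(t_min, a, scale=b)

def trap(y):
    """Trapezoidal rule over the fixed integration grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(grid) / 2.0))

mean_c = trap(grid * pdf) / norm                 # conditional mean, Eq. (24)
var_c = trap(grid**2 * pdf) / norm - mean_c**2   # conditional variance, Eq. (25)
sd_c = var_c ** 0.5                              # conditional standard deviation, Eq. (26)
```

As expected, the conditional mean falls inside [*θmin*, *θmax*] and exceeds the unconditional mean *ab* = 30◦ of this illustrative distribution.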

### *3.7. Choosing Orbits to Maximize Mean Value of θ*

It is possible to choose an orbit configuration to maximize the elevation angle expected value, *E*[Θ|Θ ≥ *θmin*], for given ranges of *φ*, *h*, and *i*. The problem can be stated as

$$\begin{aligned} \max\_{i, \phi, h} \quad & E\left[\Theta | \Theta \ge \theta\_{\min}\right] \\ \text{s.t.} \quad & i\_1 \le i \le i\_2 \\ & \phi\_1 \le \phi \le \phi\_2 \\ & h\_1 \le h \le h\_2 \end{aligned} \tag{27}$$

where *i*1 ≤ *i*2, *φ*1 ≤ *φ*2, and *h*1 ≤ *h*2. This problem can be solved by optimization methods, or by using computational tools to perform an iterative evaluation within a for or while loop.
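A brute-force sketch of (27): in practice the (a, b) pairs would come from the interpolated *Pa* and *Pb* tables, but here the mapping from (inclination, altitude) to gamma parameters is purely hypothetical:

```python
from scipy import stats

def cond_mean(a, b, t_min=40.0, t_max=90.0):
    """E[Theta | t_min <= Theta <= t_max] for Gamma(a, b), as in Eq. (24)."""
    return stats.gamma.expect(lambda x: x, args=(a,), scale=b,
                              lb=t_min, ub=t_max, conditional=True)

# Hypothetical (shape, scale) pairs per candidate (inclination, altitude) for one ES
candidates = {
    (45.0, 1000.0): (1.6, 14.0),
    (60.0, 1400.0): (2.0, 15.0),
    (85.0, 2000.0): (2.3, 16.0),
}

# Iterative evaluation over the feasible candidates, as suggested in the text
best = max(candidates, key=lambda k: cond_mean(*candidates[k]))
```

For these illustrative parameters the candidate with the largest shape and scale yields the highest conditional mean, so the search selects it.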

#### Reduction of the Elements in the Diagonals

It was found that the functions *f*Θ(*θ*) and *F*Θ(*θ*) were very similar for some of the diagonals, *dl*, shown in Figure 9d. Then, from those diagonals, we propose a reduction: first, we obtain the expected value, *E*[·], of each diagonal, *dl* (note the use of the subscript *l* ∈ {−4, −3, ..., 13} as an indicator of the diagonal number), as *E*[*dl*]. Then, (19) can be rewritten as the following row vector

$$E[D\_{\hbar}] = [E[d\_{-4}], E[d\_{-3}], \dots, E[d\_{13}]]\_{1 \times 18} \tag{28}$$

where the diagonals whose elements are all *NaN*'s are omitted.

From observation, we find that some adjacent elements in *E*[*Dh*] are approximately equal; then, we reduced *E*[*Dh*] by taking the mean value of adjacent similar values according to

$$\frac{1}{L - l + 1}\sum\_{m=l}^{L} E\left[d\_m\right], \quad L > l \tag{29}$$

where *l* and *L* are, respectively, the lower and upper diagonal numbers for the interval in which the values *E*[*dl*], *E*[*dl*+1], ..., *E*[*dL*] were observed to be approximately the same (with a maximum-distance criterion of 5% between the fitted curves). After this reduction, *E*[*Dh*] becomes a row vector with only four elements, which we call *ph* for *h* ∈ *hs*. This procedure can be repeated for each altitude in *hs*, and after grouping the row vectors *ph* for all simulated altitudes *h*, we arrive at the matrix *P* given by

$$P = \begin{bmatrix} p\_{1000\text{km}} \\ p\_{1200\text{km}} \\ p\_{1400\text{km}} \\ p\_{1600\text{km}} \\ p\_{1800\text{km}} \\ p\_{2000\text{km}} \end{bmatrix} = \begin{bmatrix} p\_{1,1} & p\_{1,2} & p\_{1,3} & p\_{1,4} \\ p\_{2,1} & p\_{2,2} & p\_{2,3} & p\_{2,4} \\ p\_{3,1} & p\_{3,2} & p\_{3,3} & p\_{3,4} \\ p\_{4,1} & p\_{4,2} & p\_{4,3} & p\_{4,4} \\ p\_{5,1} & p\_{5,2} & p\_{5,3} & p\_{5,4} \\ p\_{6,1} & p\_{6,2} & p\_{6,3} & p\_{6,4} \end{bmatrix} \tag{30}$$

Matrix *P* will be called *Pa* if it contains the shape parameters, *a*, for *f*Θ(*θ*), and *Pb* if it contains the scale parameter, *b*, for *f*Θ(*θ*).

From matrices *Pa* and *Pb*, it is possible to recover any of the points of *Ah* and *Bh* and then estimate the CDF for a query orbit altitude, *hk*, orbit inclination, *ik*, and ES latitude, *φk*. However, we observed that some *F*Θ(*θ*) curves for high latitudes did not fit very well after the reduction from matrices *Ah* and *Bh* to *Pa* and *Pb*; therefore, two weight matrices, *wa* for the shape parameter and *wb* for the rate parameter, were obtained through manual tuning to improve the correspondence between *Pa* and *Ah*, and between *Pb* and *Bh*.

Both *wa* and *wb* are the same for all altitudes, and their numerical values are shown in Appendix B.

#### **4. Results**

The shape, *a*, and rate, *b*, parameters for matrices *Pa* and *Pb*, respectively, are shown in Tables 6 and 7 for the orbit altitudes *h* ∈ *hs* and for the diagonals as in *Dh*, (19).

**Table 6.** Shape parameter, *a*, for matrix *Pa*, using the diagonal notation as in (19), and as shown in Figure 9d.


**Table 7.** Rate parameter, *b* × 10<sup>−2</sup>, for matrix *Pb*, using the diagonal notation as in (19), and as shown in Figure 9d.


Figure 10a–d compare the PDFs obtained from the empirical data with those obtained from the parameters listed in Tables 6 and 7. Table 8 shows the shape, *a*, and scale, *b*, parameters as well as the first- and second-order statistics for the *f*Θ(*θ*) functions of Figure 10a–d.

**Figure 10.** Comparison and error of the empirical and Gamma elevation angle CDF's for different ES latitudes, *φES*, orbit inclinations of the satellites, *iSAT*, and orbit altitudes, *hk*. (**a**) CDF's for *φES* = 30◦ and *iSAT* = 30◦ at *hk* = 1000 km; (**b**) CDF's for *φES* = 45◦ and *iSAT* = 65◦ at *hk* = 1000 km; (**c**) CDF's for *φES* = 20◦ and *iSAT* = 75◦ at *hk* = 1000 km; (**d**) CDF's for *φES* = 85◦ and *iSAT* = 85◦ at *hk* = 1000 km.

**Table 8.** First- and second-order statistics of the elevation angle for the same orbit configurations and ES locations as in Figure 10a–d.
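Since Θ is modeled as a gamma random variable, the first- and second-order statistics of the kind reported in Table 8 follow directly from the shape-scale parameterization: *E*[Θ] = *ab* and Var[Θ] = *ab*². A minimal sketch, with hypothetical parameter values not taken from the tables:

```python
def gamma_stats(a, b):
    """First- and second-order statistics of a gamma-distributed elevation
    angle with shape a and scale b (standard shape-scale parameterization)."""
    mean = a * b                          # E[Theta]
    variance = a * b ** 2                 # Var[Theta]
    std = variance ** 0.5                 # standard deviation
    second_moment = variance + mean ** 2  # E[Theta^2]
    return mean, std, second_moment

# Hypothetical parameters for illustration only:
m, s, m2 = gamma_stats(3.0, 8.0)  # mean = 24.0, std = sqrt(192)
```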


A case study was developed for a satellite with an altitude of 1500 km, an orbit inclination of 43◦, and an ES latitude of 22◦. Using the numerical results of Tables 6 and 7, a mesh as shown in Figure 11a can be created for altitudes of 1400 km and 1600 km (the shape and scale values for those altitudes are listed in Tables 6 and 7). Then, using the interpolation method described in Appendix C, parameters *a* and *b* can be obtained to characterize the elevation angle curve.

The shape and scale parameters for the case study are shown in Table 9. First, the parameters at the four points (as in Figure 11b) are listed for each altitude (1400 km and 1600 km). Then, the parameters resulting from the interpolation were computed. Finally, the shape and scale parameters for the query altitude can be obtained using a weighted arithmetic mean, where the weights are derived from the distance of the query altitude to the closest characterized orbit altitudes. In this case, the distances from the query altitude to the two closest characterized altitudes are equal; therefore, we obtained the shape and scale for the query altitude as the arithmetic mean of the shape and scale values at 1400 km and 1600 km.
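The distance-weighted mean over the two bracketing altitudes can be sketched as below; the function name and interface are ours. At the midpoint (e.g., 1500 km between 1400 km and 1600 km) the weights are equal and the result reduces to the plain arithmetic mean, as in the case study.

```python
def interp_altitude(h_query, h_lo, h_hi, p_lo, p_hi):
    """Weighted arithmetic mean of a gamma parameter (shape or scale) at two
    characterized altitudes; each weight is inversely proportional to the
    distance from h_query to the corresponding altitude."""
    assert h_lo <= h_query <= h_hi
    if h_hi == h_lo:
        return p_lo
    w_hi = (h_query - h_lo) / (h_hi - h_lo)  # closer to h_hi -> larger weight
    w_lo = 1.0 - w_hi
    return w_lo * p_lo + w_hi * p_hi
```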

Both the elevation angle CDF obtained with the proposed methodology and the empirical CDF are shown in Figure 11b. The distance magnitude between those two curves is also shown in Figure 11b, labeled as absolute error (|Error|). The distance (or error) between the gamma CDF and the empirical CDF is very small, with a maximum value of around 0.025 (2.5%).


**Table 9.** Shape and scale parameters for the elevation angle PDF and CDF for *φES* = 22◦ and *iSAT* = 43◦ at *hk* = 1500 km, corresponding to the four-point mesh shown in Figure 11a.

**Figure 11.** Example case results for *φES* = 22◦, *iSAT* = 43◦ and *hk* = 1500 km: (**a**) four-point mesh to calculate the shape, *a*, and scale, *b* parameters of *f<sup>θ</sup>* (*θ*) and (**b**) empirical CDF vs. CDF obtained with the proposed methodology.

#### *4.1. Results Validity*

#### 4.1.1. Effects of the Orbit Configuration on the Elevation Angle Distribution

The Kepler orbital elements were chosen to describe the orbits; these parameters are the semi-major axis, *a*, eccentricity, *e*, orbit inclination, *i*, right ascension of the ascending node, Ω, argument of perigee, *ω*, and true anomaly, *ν*. The values of Ω, *ω*, and *ν* do not affect the statistical properties of the elevation angle, so they are not discussed further. Figure 12a,b show the elevation angle PDF and CDF curves observed from the same location for two simulated satellites with different orbit configurations that share the same initial semi-major axis, eccentricity, and orbit inclination, but differ in the initial values of Ω, *ω*, and *ν*.

**Figure 12.** PDF and CDF for two orbit configurations sharing the same altitude, eccentricity, and orbit inclination, but with the remaining orbital parameters being different.

#### 4.1.2. Elevation Angle CDF for Different Longitudes

ES's located at different longitudes share approximately the same elevation angle distribution for the same LEO satellite configuration. The reasoning behind this fact is discussed in [2], and it is illustrated in Figure 13a, which shows the elevation angle PDF for three ES's located at the same latitude, *φ*, but at different longitudes, *λ*. The elevation angle PDF is the same for ES's at different longitudes when they share the same latitude and are served by satellites with the same orbit configuration (semi-major axis, *a*, eccentricity, *e*, and inclination, *i*). The PDFs in Figure 13a were generated for a satellite in a circular orbit with an inclination of *i* = 60◦ and an altitude of *h* = 1800 km; all ES's were located at the same latitude, *φ* = 25◦ N.

**Figure 13.** CDF *F*Θ(*θ*) for the same orbit configuration and ES latitude but for (**a**) different ES longitude and (**b**) different simulation period.

#### 4.1.3. Long-Term Validity of *f*Θ(*θ*) and *F*Θ(*θ*)

The PDF *f*Θ(*θ*) obtained from only a few days of simulation is very similar to that obtained over a much longer period, e.g., one year. Figure 13b shows the functions *F*Θ(*θ*) for three different simulation periods ranging from one week to one year; the CDF curve is almost identical for all of them. The orbit configuration and ES location were chosen to be the same as for Figure 13a.

#### 4.1.4. Elevation Angle CDF for Different Satellite Characteristics

Simulations for this paper were performed for a satellite with the mass and drag areas listed in Table 1; nonetheless, satellites with different values of these characteristics share very similar functions *F*Θ(*θ*). Figure 14 shows the empirical CDF's for two satellites with the same orbital parameters but different masses and drag areas. For both satellites, the orbit altitude, *h*, was set to 1800 km and the orbit inclination, *i*, to 60◦, and both ES's were located at 25◦ N. As shown in Figure 14, variations in the satellite mass and drag area have very little impact on the elevation angle CDF's. The effect is greater at lower altitudes, but not significantly so for the altitudes covered in this paper. Table 10 shows the mass and drag areas for the satellites of Figure 14.

**Table 10.** Satellites with different mass and drag area characteristics for additional performed simulations.

**Figure 14.** CDF *F*Θ(*θ*) for different satellite mass and drag area characteristics.

### *4.2. Error Analysis*

The error between the empirical CDF curves of the elevation angle and those obtained with the parameters in Tables 6 and 7 was quantified by means of the absolute error and its CDF. Figure A1a–f in Appendix A show the mean absolute error with respect to the empirical CDF's for the simulated points of Figure 9c and for the orbit altitudes *hk* ∈ *hs*.

The maximum observed mean error for the individual characterizations shown in Appendix A was about 5% (≈0.05) in the worst case. However, the greatest errors were observed at low elevation angles (below 20◦, where the link is often unreliable since a line-of-sight is harder to find), which are often not considered by satellite communication systems due to the greater attenuation.

The absolute errors were also analyzed through their individual CDF's, showing that most of the errors were below 5%. Figure 15a–d show the CDF's of the absolute error corresponding to Figure 10a–d. In addition, the absolute error was analyzed for all the simulated satellite orbits and according to the elevation angle range.

**Figure 15.** CDF's for the absolute errors at different orbit configurations and ES's as in Figure 10a–d. Absolute error CDF's for: (**a**) *φES* = 30◦ and *iSAT* = 30◦ at *hk* = 1000 km; (**b**) *φES* = 45◦ and *iSAT* = 65◦ at *hk* = 1000 km; (**c**) *φES* = 20◦ and *iSAT* = 75◦ at *hk* = 1000 km; (**d**) *φES* = 85◦ and *iSAT* = 85◦ at *hk* = 1000 km.

The PDF, CDF, and complementary CDF (CCDF) of the resultant absolute errors for all the simulations are shown in Figure 16, where it can be observed that around 95% of the error values are below 0.05. Figure 17 shows the same absolute errors as in Figure 16b, but expanded and classified by elevation angle range; most of the absolute errors are below 5% for all the elevation angle ranges, and the errors' CDF distributions are approximately the same across ranges. Figure 18 expands the 90th percentile of the CDF's in Figure 17, showing that, in fact, most of the errors are below 2%; the 90th-percentile mean value is shown as a red cross for each boxplot.
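Error statistics of this kind (empirical CDF values and percentiles of the absolute error) can be reproduced from a list of absolute errors with stdlib-only code such as the following sketch; the nearest-rank percentile used here is one of several common definitions.

```python
import math

def empirical_cdf_at(errors, x):
    """Fraction of absolute errors <= x (one point of the empirical CDF)."""
    return sum(1 for e in errors if e <= x) / len(errors)

def percentile(errors, q):
    """Nearest-rank q-th percentile (0 < q <= 100) of the absolute errors."""
    s = sorted(errors)
    rank = max(1, math.ceil(q / 100.0 * len(s)))
    return s[rank - 1]
```

For instance, `empirical_cdf_at(errors, 0.05)` gives the fraction of errors below 5%, the quantity quoted above for Figure 16.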

**Figure 16.** Distribution of the absolute errors behavior of all the performed simulations: (**a**) PDF, (**b**) CDF, and (**c**) CCDF.

**Figure 17.** CDF's of the absolute error by elevation angle range for all the simulated orbits and ES's.

**Figure 18.** Boxplots of the absolute error by elevation angle range. Simulation results vs proposed methodology.

#### **5. Conclusions**

This article demonstrated the feasibility of using a gamma random variable to characterize the elevation angle PDF and CDF. The proposed methodology allowed PDF and CDF calculations of the elevation angle for LEO satellites for altitudes between 1000 km and 2000 km. The characterization of elevation angle through a gamma random variable allowed an easy computation of first and second order statistics, which, to the best of our knowledge, have not been previously addressed in the literature.

The proposed methodology was validated with an error analysis against the empirical CDF's. The results showed that the errors are low, with a mean absolute error well below 5% in most cases, and with around 95% of the total errors below 5%.

Furthermore, the proposed methodology allows an easy comparison between multiple orbits in order to determine the most suitable orbital parameters to provide coverage above a minimum elevation angle at a particular latitude.

**Author Contributions:** Conceptualization, J.M.G.-T. and C.V.-R.; methodology and software, J.M.G.- T.; validation, A.A.-Z. and R.V.-H.; formal analysis, A.A.-Z. and R.V.-H.; investigation, A.A.-Z. and R.V.-H.; writing—original draft preparation, J.M.G.-T.; writing—review and editing, C.V.-R., A.A.-Z. and R.V.-H.; visualization, J.M.G.-T.; supervision, J.M.G.-T., C.V.-R. and A.A.-Z.; project administration, C.V.-R. and R.V.-H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We would like to acknowledge the support from the Smart Digital Technologies and Infrastructure Research Group, the project "Digital Technologies to Create Adaptive Smart Cities" as part of the Challenge-Based Research Funding Program, and the School of Engineering and Sciences at Tecnologico de Monterrey for providing the means to develop this collaborative work.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **Appendix A**

Figure A1a–f contain the mean error for each elevation angle CDF. These errors were obtained by comparing the proposed methodology results against the results obtained through simulations. The same format as in Figure 9c,d is applied here, showing in each column only the ES's located below the satellite orbit inclination.


**Figure A1.** Mean error obtained with the proposed methodology, as compared with the simulation results for all the simulated orbit inclinations, *iSAT*, ES's latitudes, *φES*, and altitudes, *hk*. (**a**) Proposed methodology vs simulations results at 1000 km; (**b**) Proposed methodology vs simulations results at 1200 km; (**c**) Proposed methodology vs simulations results at 1400 km; (**d**) Proposed methodology vs simulations results at 1600 km; (**e**) Proposed methodology vs simulations results at 1800 km; (**f**) Proposed methodology vs simulations results at 2000 km.

#### **Appendix B**

Tables A1–A6 contain the shape, *a*, and rate, *b*, parameters for *f*Θ(*θ*) at the simulated values of *i* and *φ*. With those results, matrices *Ah* and *Bh* are formed as defined in (17) and depicted in Figure 9d. Also, Table A7 contains the numerical values for *wa* and *wb*.

**Table A1.** Shape, *a*, and rate, *b*, parameters for *f*Θ(*θ*) and *F*Θ(*θ*) for *hk* = 1000 km.



**Table A2.** Shape, *a*, and rate, *b*, parameters for *f*Θ(*θ*) and *F*Θ(*θ*) for *hk* = 1200 km.

**Table A3.** Shape, *a*, and rate, *b*, parameters for *f*Θ(*θ*) and *F*Θ(*θ*) for *hk* = 1400 km.




**Table A4.** Shape, *a*, and rate, *b*, parameters for *f*Θ(*θ*) and *F*Θ(*θ*) for *hk* = 1600 km.



**Table A5.** Shape, *a*, and rate, *b*, parameters for *f*Θ(*θ*) and *F*Θ(*θ*) for *hk* = 1800 km.

**Table A6.** Shape, *a*, and rate, *b*, parameters for *f*Θ(*θ*) and *F*Θ(*θ*) for *hk* = 2000 km.




**Table A7.** Data for the weight matrices *wa* and *wb*.



#### **Appendix C**

This appendix describes the well-known four-point-mesh interpolation method, which was applied in this paper to determine the shape, *a*, and rate, *b*, parameters from matrices *Ah* and *Bh* for a given set of coordinates (*i*, *φ*). Other interpolation methods are widely available in the literature.

#### *Interpolation for a Four-Point Mesh*

The bilinear method is proposed to find the value at the query point inside the mesh, since it is a well-known and widely applied interpolation method.

We can obtain the value at a query point, as in Figure A2, for the shape parameter *a* using bilinear interpolation as follows

$$a(\phi, i) = w\_1 a(\phi\_2, i\_1) + w\_2 a(\phi\_1, i\_1) + w\_3 a(\phi\_2, i\_2) + w\_4 a(\phi\_1, i\_2),\tag{A1}$$

similarly, we can obtain the scale parameter, *b*, using bilinear interpolation as follows

$$b(\phi, i) = w\_1 b(\phi\_2, i\_1) + w\_2 b(\phi\_1, i\_1) + w\_3 b(\phi\_2, i\_2) + w\_4 b(\phi\_1, i\_2),\tag{A2}$$

where the weights *w*1, *w*2, *w*3 and *w*4 are given by

$$w\_1 = (i\_2 - i)(\phi\_1 - \phi) / [(i\_2 - i\_1)(\phi\_1 - \phi\_2)] \tag{A3}$$

$$w\_2 = (i\_2 - i)(\phi - \phi\_2) / [(i\_2 - i\_1)(\phi\_1 - \phi\_2)] \tag{A4}$$

$$w\_3 = (i - i\_1)(\phi\_1 - \phi) / [(i\_2 - i\_1)(\phi\_1 - \phi\_2)] \tag{A5}$$

$$w\_4 = (i - i\_1)(\phi - \phi\_2) / \left[ (i\_2 - i\_1)(\phi\_1 - \phi\_2) \right] \tag{A6}$$

where the known values *a*(*φ*1, *i*1), ... , *a*(*φ*2, *i*2) and *b*(*φ*1, *i*1), ... , *b*(*φ*2, *i*2) can be obtained from matrices *Ah* and *Bh*.
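A sketch of the four-point bilinear interpolation over the mesh of Figure A2, with `f` standing in for a lookup of the known parameter values in *Ah* or *Bh* (the function interface is our assumption); the weights are the standard bilinear ones, normalized by (*i*2 − *i*1)(*φ*1 − *φ*2), and are exact for any function linear in *φ* and *i*.

```python
def bilinear(phi, i, phi1, phi2, i1, i2, f):
    """Bilinear interpolation of a gamma parameter (shape a or scale b) at a
    query point (phi, i); f(phi_n, i_m) returns the known corner values."""
    den = (i2 - i1) * (phi1 - phi2)
    w1 = (i2 - i) * (phi1 - phi) / den   # weight for f(phi2, i1)
    w2 = (i2 - i) * (phi - phi2) / den   # weight for f(phi1, i1)
    w3 = (i - i1) * (phi1 - phi) / den   # weight for f(phi2, i2)
    w4 = (i - i1) * (phi - phi2) / den   # weight for f(phi1, i2)
    return (w1 * f(phi2, i1) + w2 * f(phi1, i1)
            + w3 * f(phi2, i2) + w4 * f(phi1, i2))
```

At each mesh corner the corresponding weight is 1 and the others vanish, so the interpolant reproduces the known values exactly.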

**Figure A2.** Four-point mesh used for the bilinear interpolation.

### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
