Article

Path Loss Models for Cellular Mobile Networks Using Artificial Intelligence Technologies in Different Environments

Moamen Alnatoor, Mohammed Omari and Mohammed Kaddi
1 LDDI Laboratory, Ahmed Draia University of Adrar, Adrar 01000, Algeria
2 Computer Science and Engineering Department, School of Engineering, American University of Ras Al Khaimah, Ras Al Khaimah 72603, United Arab Emirates
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(24), 12757; https://doi.org/10.3390/app122412757
Submission received: 2 November 2022 / Revised: 2 December 2022 / Accepted: 8 December 2022 / Published: 12 December 2022
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

One of the most critical problems in a communication system is the loss of information between the transmitter and the receiver. WiMAX (Worldwide Interoperability for Microwave Access) technology is gaining popularity and recognition as a Broadband Wireless Access (BWA) solution. At frequencies below 11 GHz, WiMAX can operate in line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios, and WiMAX networks are being deployed rapidly worldwide. Estimating path loss is crucial in the early stages of wireless network deployment and cell design. Several path loss models are available to predict propagation loss (e.g., the Okumura and Hata models), but each is bound by particular parameters. In this paper, we propose an MLP neural-network-based path loss model with a well-structured network design and grid-search-based hyperparameter tuning. The proposed model optimally approximates the path loss between mobile and base stations. The number of neurons, the learning rate, and the number of hidden layers are investigated to obtain the model with the best prediction accuracy. Path loss data were collected from 14 networks in different microcellular settings. Simulations in the Matlab environment showed that the prediction errors were lower than those of standard log-distance-based path loss models.

1. Introduction

During the evolution of cellular networks, several systems have been developed without being standardized, which poses many problems, particularly in terms of electromagnetic compatibility (frequency distribution). As a solution, GSM (Global System for Mobile communication) is the first 2G cellular telephony standard to have been standardized, and is today the global benchmark for mobile radio systems. Given the apparent importance of the mobile channel, it is essential to master the design and densification methods of a mobile channel network. It is also necessary to study the physical behavior of the channel signals and evaluate the resulting losses.
Mobile systems rely on radio links that operate inside the troposphere, the seat of many meteorological and climatic phenomena (rain, snow, fog, etc.), above ground among many obstacles (buildings, vegetation, etc.), or inside buildings. The link budget study of such links requires considering the different attenuations (free-space loss and attenuation due to environmental effects: hydrometeors, buildings, vegetation, etc.) and the different gains of the signal between the transmitter and the receiver (antenna gains), where different propagation mechanisms such as reflection, refraction, transmission, and scattering come into play.
With the rapid growth of cellular networks and the need for high-quality throughput and capacity, increasingly accurate modeling of the propagation channel under various environmental conditions, frequency ranges, and bandwidths has become extremely important. This modeling improves radio interfaces in terms of performance, helps optimize networks during their deployment (determination of coverage, frequency allocation, power definition, antenna gains, etc.), and identifies unavoidable disturbances. The models are of different types: deterministic, empirical, and semi-empirical. Deterministic models are based on reference models of known physical processes; their computation time is relatively high [1]. Empirical models are supported only by experimental data and depend on frequency, distance, and antenna height (see Table 1). They are powerful, fast, and do not require geographic databases [1].
In the last decade, statistical learning algorithms have attracted much interest in academia and in companies across various sectors. They have been successfully implemented to perform predictive tasks related to statistical processes. Another modeling approach is particularly compelling, namely artificial neural networks. They derive their modeling power from their ability to detect high-level interconnections involving several variables simultaneously [2]. Today, neural networks are well established in several fields: in finance for the prediction of market fluctuations, in the pharmaceutical field (analysis of organic molecules), in banking for the detection of credit card fraud and the calculation of credit ratings, in aeronautics for the programming of autopilots, etc.
The applications are numerous, and all share a common point essential to the usefulness of neural networks: the processes for which one wishes to make predictions have many explanatory variables and, above all, there are possibly nonlinear dependencies between these variables which, if discovered and exploited, can be used to increase the predictability of the process. From the point of view of process prediction, the main advantage of neural networks over traditional statistical models is that they can automate the discovery of essential links. The main interest of these networks is their ability to react automatically to a complex environment such as telecommunication systems. This study evaluates signal losses in a physical channel as an example.
The rest of the paper is organized as follows: Section 2 discusses propagation models commonly used in path loss prediction. Related work is presented in Section 3, while Section 4 describes neural network architectures such as Multi-Layer Perceptrons (MLPs), the selected neural network topologies, and details of dataset processing. Section 5 presents our contribution. Section 6 presents the results obtained for loss prediction using neural network training and compares the different neural network architectures. Section 7 compares our contribution with some empirical models. Section 8 concludes the paper.

2. Classification of Propagation Models

Propagation models are used to design radio interfaces that optimize performance and to deploy systems in the field by determining radio coverage. The models are embedded in engineering tools to predict various quantities that benefit the deployment of radio telecommunication systems and radio coverage studies (site selection, frequency allocation, power definition) as well as interference description. The models rely heavily on geographic datasets that include topography and land use types, because the way ultra-high frequency (UHF) radio waves propagate in a given space is intimately related to the obstacles (buildings, tree trunks, mountainsides, etc.) encountered along the propagation channel.
Therefore, the modeling of geographical objects is essential in any UHF wave propagation model [3]. Propagation models are then used to provide a mathematical prediction of wave propagation between the origin and the destination service area. This allows a system designer to assess whether a planned radio system will sufficiently serve the desired service area. The following subsections present the fundamental models studied and their classification, data requirements, and coverage notes.
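As a concrete illustration of the empirical class, and as textbook background rather than a result of this paper, the free-space path loss (FSPL) model listed in Table 1 is commonly written, for a distance d in km and a carrier frequency f in MHz, as:

```latex
\mathrm{FSPL}\,(\mathrm{dB}) = 20\log_{10}\!\left(d_{\mathrm{km}}\right) + 20\log_{10}\!\left(f_{\mathrm{MHz}}\right) + 32.45
```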
Table 1. The empirical and semi-empirical model parameters.
Models | Frequencies | Mobile Station Antenna Height | Base Station Antenna Height | Distance | Scope of Application
Okumura-Hata model [4] | 200 to 1900 MHz | 30 to 200 m | 1 to 10 m | 1 km to 10 km | Weak urban, dense urban, rural
Free Space Path Loss (FSPL) model [5] | undefined | undefined | undefined | undefined | Urban, suburban, rural
Cost 231-Hata [6] | 1500 to 2000 MHz | 30 to 200 m | 1 to 10 m | 1 km to 20 km | Urban, suburban, rural
Walfisch-Ikegami (WI) model [7,8] | 800 to 2000 MHz | 1 to 3 m | 4 to 50 m | 0.02 to 5 km | Urban, suburban, rural
ECC-33 or extended Hata-Okumura model [9,10] | 200 to 3500 MHz | 30 to 200 m | 1 to 10 m | 1 km to 100 km | Urban, suburban
Stanford University Interim (SUI) model [11] | Up to 11 GHz | 2 to 10 m | 10 to 80 m | 0.1 to 8 km | Urban, suburban, rural
Ericsson model [12,13] | 200 MHz to 3.5 GHz | 30 to 200 m | 1 to 10 m | 1 km to 100 km | Urban, suburban, rural

3. Related Work

Artificial Intelligence (AI) is excellent for problems where existing solutions require a lot of hand-tuning or long lists of rules, and it efficiently handles complex problems for which traditional methods offer no good solution at all. AI is used for adapting to changing environments, for gaining insight into complex problems that involve large amounts of data, and in general for noticing patterns that a human might miss. Where hard-coded software relies on long lists of complicated rules that are hard to maintain, a learning system automatically learns from past data, finds outliers, predicts what will happen in the future, and so on. AI's ability to learn, together with large amounts of transmitted data or wireless configuration datasets, can be used to solve these problems. In Table 2, we summarize and compare some previous AI models.

4. Artificial Neural Networks

A neural network can be considered in its simplest form as a black box, equivalent to a function with multiple inputs and outputs. This black box has a vital characteristic called "learning". Learning is performed with the help of an algorithm and a dataset [29]: the function's outputs are matched to its inputs, and the function is adjusted over the training samples that make up the dataset until it converges to an optimal solution. This adjustment is performed for every sample, which implies that the dataset must be as rich as possible. At the end of the process, the black box produces outputs for any input, even outside the dataset, with optimal accuracy [30].

4.1. Architecture of Neural Networks

An ANN (Artificial Neural Network) combines several formal neurons to create a structure capable of solving mathematical problems of greater complexity than a single neuron can handle. Neural networks can be subdivided into two leading families: non-looped (static) and looped (dynamic) networks.
A neural network is designed with an input layer, one or more hidden layers, and an output layer. Each layer is a cluster of several neurons, each applying an activation function (see Table 3); the activation may be linear or nonlinear depending on the layer [31].

4.2. MLP Architecture

In the general case, an MLP can have any number of layers. Still, to improve the performance of the MLP on the one hand and to minimize the computation time on the other, an optimal architecture should be sought. The number of layers and the corresponding number of neurons per layer affect the neural network's performance [2]. A multilayer MLP network with at least one input layer, one hidden layer, and one output layer is called a backpropagation network (Figure 1). The number of layers and neurons cannot be determined precisely because it depends on the complexity of the problem to be treated.
Figure 1 shows an MLP network with an input layer of N neurons (X1, ..., XN), L-1 hidden layers each with K neurons, and an output layer with M neurons (Y1, ..., YM). Each neuron of a layer is connected to the neurons of the previous layer through certain weights (matrices W1, ..., WL).
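To make this notation concrete, the following minimal NumPy sketch (our own illustration with arbitrary dimensions, not the implementation used in this paper) propagates an input vector through the successive weight matrices W1, ..., WL of such a network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    """Propagate input x through an MLP defined by per-layer weight
    matrices and bias vectors."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)  # weighted sum followed by the activation
    return a

# Hypothetical sizes: N = 6 inputs, two hidden layers of K = 10 neurons,
# M = 1 output (the predicted path loss).
rng = np.random.default_rng(0)
sizes = [6, 10, 10, 1]
weights = [rng.normal(size=(sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
biases = [np.zeros(sizes[i + 1]) for i in range(len(sizes) - 1)]
print(mlp_forward(rng.normal(size=6), weights, biases))
```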

5. Our Contribution

This work introduces machine learning approaches to efficient path loss prediction to address the issues related to empirical and deterministic models. Considering a multi-transmitter situation, we developed and validated models for estimating path losses using MLP. The inputs to the machine learning algorithms change from one dataset to another. The parameters we used in the outdoor environment were: longitude and latitude extracted from payload, received signal strength indicator, signal-to-noise ratio, transmission frequency, spreading factor, packet sequence number, transmission bandwidth, distance between the gateway and the end-device, and power received at the gateway. Every parameter has an impact on signal propagation, e.g., the terrain causes attenuation in the signal. We used longitude and latitude to define the nature of the environment where the model can be applied. The essential indoor parameters were the number of floors and walls between the gateway and the end device. These parameters enable us to test the model in a building similar to the studied building. The output layer is the path loss in both environments.
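A hedged sketch of how the outdoor input vector described above could be assembled is given below; the column names are placeholders of our own and do not necessarily match the headers of the published datasets:

```python
import pandas as pd

# Hypothetical column names standing in for the outdoor parameters listed above.
OUTDOOR_FEATURES = ["Long", "Lat", "RSSI", "SNR", "Freq",
                    "SF", "Seq", "BW", "Dist", "Prx"]
TARGET = "PL"  # path loss (dB), the output of the model

def make_xy(df: pd.DataFrame):
    """Split a measurement table into the MLP input matrix X and target vector y."""
    X = df[OUTDOOR_FEATURES].to_numpy(dtype=float)
    y = df[TARGET].to_numpy(dtype=float)
    return X, y

# Tiny illustrative table (the values are made up).
sample = pd.DataFrame([{
    "Long": 35.5, "Lat": 33.9, "RSSI": -97.0, "SNR": 7.5, "Freq": 868.1,
    "SF": 7, "Seq": 42, "BW": 125.0, "Dist": 1.2, "Prx": -95.0, "PL": 109.0,
}])
X, y = make_xy(sample)
```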

5.1. General Description of the Designed System

In the MLP, neurons are arranged in layers that go from the input to the output. The output of each node is the weighted sum of its inputs passed through a particular activation function. We created three MLPs using path loss (PL) as the output layer for the experimental data gathered across fourteen base stations. Figure 2 depicts the network topology of the MLP model with two hidden layers. We used this kind of network instead of a deep learning network because the aim was to achieve a higher computational speed.
We chose the logistic sigmoid as the activation function because it handles nonlinear functions and produces a smooth thresholding curve for the MLP. The standard output of the sigmoid function ranges from 0 to 1. When learning is performed in an MLP by repeatedly adjusting the weights, a feedforward pass comes first, followed by the backpropagation algorithm for optimization. By updating the MLP weights, the training aims to minimize the loss function. Once the feedforward pass is finished, the error is propagated back along the connections. The choice of backpropagation optimization approach is one of the essential hyperparameters of the MLP model. It starts by backpropagating to the weights of the first layer, moves on to the next iteration, and stops when the weight updates fall below a certain tolerance threshold.
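For readers who want a quick baseline, the following scikit-learn sketch builds the kind of network described here (two hidden layers, logistic sigmoid activation, trained by backpropagation); the layer sizes, learning rate, and synthetic data are placeholders, not the values or measurements used in this study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: 500 samples, 10 input features, a path-loss-like target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=500)

# Two hidden layers with logistic (sigmoid) activation; training alternates
# feedforward passes with backpropagation weight updates.
model = MLPRegressor(hidden_layer_sizes=(20, 20),
                     activation="logistic",
                     solver="adam",
                     learning_rate_init=0.01,
                     max_iter=2000,
                     random_state=0)
model.fit(X, y)
print(model.predict(X[:5]))
```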

5.2. Our Training Algorithm

The MLP training was carried out in a supervised manner using a backpropagation algorithm, whose objective is to adjust the weights and minimize the quadratic error between the network output and the target result. The quadratic error is:
E(n) = \left\| d(n) - PL \right\|^{2}
where d(n) is the target value and PL is the network output. The backpropagation algorithm follows the error gradient in order to reach a minimum.
The error gradient E(x) is calculated for each weight as follows:
\frac{\partial E(n)}{\partial W_{ijk}(n)} = \frac{\partial E(n)}{\partial PL_{ik}} \cdot \frac{\partial PL_{ik}}{\partial W_{ijk}} = \frac{\partial E(n)}{\partial PL_{ik}} \, x_{i-1,j}
where
  • i varies between 1 and 3.
  • k varies from 1 to 50.
  • x denotes the inputs; their number varies between 6 and 11 according to the dataset.
For the output layer (i = L), the output error \delta_{Lk} is calculated as follows:
\delta_{Lk} = \frac{\partial E(n)}{\partial PL_{Lk}} = 2 f'(PL_{Lk}) \left( d_{k} - x_{Lk} \right)
where f(x) is the activation function.
For hidden layers, the error \delta_{ik} is given by:
\delta_{ik} = f'(PL_{ik}) \sum_{j=1}^{N(i+1)} \delta_{i+1,j} \, W_{i+1,kj}
The modification of the weights W(n) and the biases b(n) is obtained by the following two equations:
W_{ijk}(n+1) = W_{ijk}(n) + 0.7\, \delta_{ik}\, x_{i-1,j} + 0.6 \left( W_{ijk}(n) - W_{ijk}(n-1) \right)
b_{ik}(n+1) = b_{ik}(n) + 0.7\, \delta_{ik}
Here, 0.7 is the learning step, which determines the convergence speed, and 0.6 is the momentum (inertia) term that prevents the algorithm from getting stuck in a local minimum.
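The two update equations above amount to gradient descent with momentum. A minimal NumPy sketch of a single layer update, using the learning step 0.7 and momentum 0.6 quoted above (array shapes are illustrative only):

```python
import numpy as np

ETA = 0.7    # learning step: controls the convergence speed
ALPHA = 0.6  # momentum (inertia) term: helps escape local minima

def update_layer(W, W_prev, b, delta, x_prev):
    """Apply one backpropagation update to a layer's weights and biases.

    W, W_prev : current and previous weight matrices (shape K x J)
    b         : bias vector (shape K)
    delta     : local error of this layer (shape K)
    x_prev    : outputs of the previous layer (shape J)
    """
    W_new = W + ETA * np.outer(delta, x_prev) + ALPHA * (W - W_prev)
    b_new = b + ETA * delta
    return W_new, b_new

# Illustrative call with random numbers.
rng = np.random.default_rng(0)
K, J = 4, 3
W = rng.normal(size=(K, J)); W_prev = W.copy(); b = np.zeros(K)
W, b = update_layer(W, W_prev, b, rng.normal(size=K), rng.normal(size=J))
```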
Ojo et al. [32] designed their model with two hidden layers, while we tested three different architectures for the MLP models; their radial basis function (RBF) results were nevertheless excellent. Isabona et al. [33] focused on developing networks for urban areas, whereas we evaluated our models in three different types of areas. Additionally, they used several training algorithms, while we used the same training algorithm in all cases (Figure 3).

5.3. Formatting of the Dataset Used in Training and Validation Sets

Network learning is performed through a parallel learning model, and a learning base must be built to carry it out. Since the learning is supervised, the dataset must contain both the network inputs and the desired output. To match the neural inputs and make the neural network training more efficient, the databases must go through a preprocessing step (Figure 4). Preprocessing is a typical technique to remove spurious discontinuities in the input function space and reduce the raw inputs to manageable data. This is followed by appropriate normalization, taking into account the range of values acceptable to the network.
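As a sketch of this preprocessing step, the snippet below applies min-max normalization to [0, 1], which is one common choice and not necessarily the exact scaling used by the authors:

```python
import numpy as np

def min_max_normalize(X):
    """Scale each input column to [0, 1] so that all features have
    magnitudes acceptable to the sigmoid network."""
    X = np.asarray(X, dtype=float)
    col_min = X.min(axis=0)
    col_range = X.max(axis=0) - col_min
    col_range[col_range == 0] = 1.0  # guard against constant columns
    return (X - col_min) / col_range

# Example with random stand-in data (100 samples, 6 features).
X_scaled = min_max_normalize(np.random.default_rng(0).normal(size=(100, 6)))
```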

5.4. Data Collection Procedure

Table 4, Table 5 and Table 6 summarize the description and parameters of each dataset.

5.5. MLP Network Learning

Fourteen datasets, composed of different numbers of samples, were used to optimize the neural structures. Each dataset is subdivided into a training set (60%), a validation set (20%), and a test set (20%) reserved for the final performance measurement; a sketch of this split and of the stopping conditions is given after the list below. The test, validation, and training samples must be distinct and randomly selected from the original dataset.
The optimization process (learning, validation, and testing) was run for as many iterations as needed for the error to stabilize. The numbers of neurons in the first, second, and third layers were changed, and the associated optimization error was recorded. In this study, the backpropagation algorithm was used. It is important to note that the program ends when one of the following events occurs:
  • The Mean Squared Error (MSE, the average squared difference between the estimated and actual values [36]) drops below the minimum error.
  • The maximum number of iterations is reached.
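A hedged sketch of the 60/20/20 split and of the two stopping conditions listed above; the error threshold and iteration budget are placeholders, since the paper does not report their exact values:

```python
import numpy as np

def split_dataset(X, y, seed=0):
    """Randomly split numpy arrays into 60% training, 20% validation, 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(0.6 * len(X))
    n_val = int(0.2 * len(X))
    train, val, test = idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])

MIN_MSE = 1e-3         # placeholder minimum error
MAX_ITERATIONS = 5000  # placeholder iteration budget

def should_stop(mse, iteration):
    """Training ends when the MSE drops below the minimum error
    or when the maximum number of iterations is reached."""
    return mse < MIN_MSE or iteration >= MAX_ITERATIONS
```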

5.6. Architecture Optimization

The optimization method chosen to solve a particular problem depends not only on the nature of the parameters to be optimized but also on the problem itself. No single general optimization method can solve all problems; instead, many methods are adapted to each case. Therefore, we designed the models with three different architectures (one, two, and three hidden layers).
As the network design grows, i.e., as the number of layers and neurons increases, the network includes more connections, which implies slower learning and processing. These topologies were chosen using the optimization procedure illustrated in Figure 5. Once the neural network has been trained, it must be validated on a dataset different from the one used for training. This validation allows us to evaluate the neural system's performance and identify the types of data that cause problems. The network architecture is modified if the required performance is not achieved.
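A minimal sketch of this architecture-optimization loop, cast as a grid search over the number of hidden layers and neurons per layer; the candidate values are placeholders rather than the grids actually explored in this study:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

def grid_search_architecture(X_train, y_train, X_val, y_val,
                             layer_counts=(1, 2, 3),
                             neuron_counts=(5, 10, 20, 50)):
    """Return the hidden-layer configuration with the lowest validation MSE."""
    best_cfg, best_mse = None, np.inf
    for n_layers in layer_counts:
        for n_neurons in neuron_counts:
            cfg = (n_neurons,) * n_layers
            model = MLPRegressor(hidden_layer_sizes=cfg, activation="logistic",
                                 max_iter=2000, random_state=0)
            model.fit(X_train, y_train)
            mse = mean_squared_error(y_val, model.predict(X_val))
            if mse < best_mse:
                best_cfg, best_mse = cfg, mse
    return best_cfg, best_mse
```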
After testing several learning possibilities and measuring performance, we obtained the following architectures.

6. Designed Architectures

The R-Squared (R2) is a statistical metric that measures the proportion of the variance of a dependent variable in a regression model that is explained by one or more independent variables.
R^{2} = 1 - \frac{\sum_{i} (y_{i} - \hat{y}_{i})^{2}}{\sum_{i} (y_{i} - \bar{y})^{2}}
where
  • y_i: dataset (measured) values.
  • \hat{y}_i: predicted values.
  • \bar{y}: mean of the dataset values y_i.
In each example, there was a high agreement between the experimental and predicted outcomes. Therefore, the optimized structure can predict other combinations of input variables. The R-squared of the test set compares the results predicted by the neural model at different distances versus those measured.
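Both reported metrics can be computed directly; the short sketch below uses scikit-learn, with synthetic vectors standing in for the measured and predicted path loss:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([120.3, 125.1, 131.7, 140.2])  # measured path loss (dB), illustrative
y_pred = np.array([121.0, 124.5, 132.4, 139.0])  # MLP predictions, illustrative

mse = mean_squared_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)  # 1 - sum((y - y_hat)^2) / sum((y - y_mean)^2)
print(f"MSE = {mse:.3f}, R^2 = {r2:.3f}")
```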

6.1. Architecture 1 (One Hidden Layer)

There is an agreement between all sets, as illustrated in Figure 6, which demonstrates the usefulness of artificial neural networks in mobile network loss analysis.
After modeling the MLP network as in Figure 2, and performing the necessary optimizations, we could choose the best structure for each dataset with one hidden layer, as represented in Table 7.
Figure 6 shows that the Dataset 10 model is the best, with an R-squared almost equal to 1, while the Dataset 1 model is the worst, with an R-squared of 0.70. Table 7 lists the model parameters that contributed to each model's efficiency.

6.2. Architecture 2 (Two Hidden Layers)

There is a correlation between the desired and the predicted outputs. The performance of the optimized MLP network (with two hidden layers) was evaluated by comparing our results with actual measurements. Figure 7 shows the different results of the simulations compared to the actual measurements using trend curves.
We identified the optimal structure for each dataset with two hidden layers after modeling the MLP network and making the necessary adjustments. Table 8 summarizes the obtained results.
From the previous figure, we notice that the worst model was obtained with Dataset 1 and two hidden layers, with an R-squared of 0.6204. The best performance was obtained with Dataset 9, with an R-squared of 0.999.

6.3. Architecture 3 (Three Hidden Layers)

The training, validation, and testing sets were compared with the neural model response to confirm the predictive property of the optimized network structure with three hidden layers. The R-squared values are shown in Figure 8. There was a high agreement between the experimental and predicted results in each scenario. Therefore, the optimized structure can predict other combinations of input variables. Figure 8 compares the data predicted by the three-hidden-layer neural model with those measured at different distances.
Figure 8 shows an improvement for Dataset 1 compared with the one- and two-hidden-layer cases, with an R-squared of 0.72. The best model with three hidden layers was that of Dataset 14, with an R-squared of 0.9998.

6.4. Comparison of the Three Architectures

The results in Table 7, Table 8 and Table 9 indicate that the number of neurons varies with the number of layers. The number of inputs plays a significant role in the accuracy and speed of the models. Figure 9 shows the MSE for the three chosen architectures.
Based on Figure 9, we can note that the best architecture has three hidden layers. The error rate also depends on the number of samples: the smaller the number of samples, the lower the error rate. The number of hidden layers and the number of neurons are determined experimentally; for example, for Dataset 3, a structure with two hidden layers performs better than one with three hidden layers. An increase in the number of neurons does not always imply an improvement in the model's performance; for Dataset 7, only seven neurons were needed to obtain the best performance.

7. Comparison of Our Models with Empirical Models

This section presents the results obtained for 14 different datasets with different path loss prediction techniques. The results obtained by our MLP were compared with those obtained by six empirical models (the Free Space Path Loss model (FSPL), the Walfisch-Ikegami model (WI), the ECC-33 model, the COST 231-Hata model (PLCH), the Ericsson model (PL999), and the Stanford University Interim model (SUI)) for each type of model architecture. The results are shown in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19.
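As a hedged illustration of the comparison procedure, the sketch below evaluates one empirical reference, the free-space path loss, over a range of distances so that it can be plotted against the MLP predictions; the frequency and distance values are placeholders:

```python
import numpy as np

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * np.log10(distance_km) + 20 * np.log10(freq_mhz) + 32.45

distances = np.linspace(0.1, 5.0, 50)  # km, illustrative range
pl_fspl = fspl_db(distances, 1800.0)   # e.g., an 1800 MHz carrier
# pl_mlp = model.predict(features)     # MLP predictions for the same points
# Each model's error is then summarized with MSE against the measurements.
```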

Analysis of the Simulation

Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19 show that the MLP model has the lowest error when all measurements are included. In contrast, the other six empirical models overestimated the path loss and are not recommended for use in the studied context. For all observed datasets and alternative designs, the MLP model agrees with the measurements and performs better than the empirical models.
Correction parameters were used in the simulation to allow the MLP models to be deployed outside the initial environment in which they were built. Since the empirical models were built outside the environment under consideration, these corrective variables were employed when evaluating them. The parameters hb and hr represent the antenna heights of the base station and the mobile station, respectively, and were also used in the experiment. Their numerical values were obtained from the measurements taken, and we used different base station heights in our simulations and evaluations. The real data were obtained by controlling these correction variables, and the specific properties of each model were used to design and test our models. We divided our datasets into four main environments (urban, suburban, rural, and indoor). Based on Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19, the model proposed in this study gave the best performance compared to the following empirical models:
  • Free space model;
  • Ericsson model;
  • ECC-33 model;
  • Cost 231 (WI) model;
  • Cost 231 Hata model;
  • Stanford University Interim model.
The obtained result can be explained by the fact that these models were developed in specific domains different from those of our datasets. To reach high accuracy, the path loss prediction should be close to the actual path loss value. Furthermore, these variations depend on the parameters of the dataset (distance, frequency, base station antenna height, mobile station antenna height). For example, in Datasets 11 to 14, the parameters were collected in an urban area with the same frequency, distance, and mobile station, but the path loss value changed from one set to another because the height of the base station antenna changed; the higher the transmitter antenna, the lower the path loss. The same applies to Datasets 8 to 10, whose parameters were collected in the same situation except for the base station antenna height.
On the other hand, the parameters of Dataset 4 were collected inside buildings. The path loss varies depending on the obstacles inside the building, such as wall thickness, room size, and number of windows. Our models proved to be the best in all cases because we used all available parameters to make predictions almost identical to the actual path loss values.

8. Conclusions

This paper examined neural network methods for efficient path loss prediction to address the challenges associated with empirical and deterministic path loss prediction models. An MLP model for path loss prediction was constructed and validated, considering multi-transmitter scenarios. The inputs to the neural network algorithm vary from one dataset to another, and the path loss is the output layer. MLP models were trained on 14 datasets and compared with the predictions given by six existing empirical models. Two well-known error measures were used to assess the created models: MSE and R-squared. Interestingly, the presented MLP models outperformed the six empirical models and showed the lowest error in various environments, which makes them a good fit for the measured data. It was found that increasing the number of MLP input variables increased the accuracy of the produced model, which can predict path loss measurements in wireless propagation situations. Therefore, the suggested MLP is quite effective in predicting path fading and can be considered a generic data-driven model. Future research will focus on other neural network algorithms, such as the radial basis function (RBF) algorithm, and compare them with our MLP models to improve the accuracy of signal loss prediction.

Author Contributions

Conceptualization, M.A. and M.O.; methodology, M.A.; software, M.A.; validation, M.A., M.O. and M.K.; formal analysis, M.A. and M.O.; investigation, M.A.; resources, M.A.; data curation, M.A.; writing—original draft preparation, M.A.; writing—review and editing, M.A., M.O. and M.K.; visualization, M.A.; supervision, M.O. and M.K.; project administration, M.O.; funding acquisition, M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Algerian National Agency of Research and Development (DGRSDT-PRFU project number C00L07UN010120200001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Elsevier: https://doi.org/10.1016/j.dib.2018.02.026, accessed on 25 April 2022; ResearchGate: https://www.researchgate.net/publication/288366098_A_proposal_for_path_loss_prediction_in_urban_environments_using_support_vector_regression, accessed on 25 April 2022; Zenodo: https://doi.org/10.5281/zenodo.1560654, accessed on 25 April 2022.

Acknowledgments

This work was supported by the Sustainable Energy and Computer Science Laboratory (LDDI), Adrar, Algeria.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kvicera, V.; Martin, G.; Ondrej, F. Long-term propagation statistics and availability performance assessment for simulated terrestrial hybrid FSO/RF system. EURASIP J. Wirel. Commun. Netw. 2011, 2011, 435262. [Google Scholar] [CrossRef] [Green Version]
  2. Dreyfus, G. Neural Networks: Methodology and Applications, 2nd ed.; Springer Science & Business Media: Gertwiller, France, 2005; 498p. [Google Scholar]
  3. Faruk, N.; Popoola, S.I.; Surajudeen-Bakinde, N.T.; Oloyede, A.A.; Abdulkarim, A.; Olawoyin, L.A.; Ali, M.; Calafate, C.T.; Atayero, A.A. Path loss predictions in the VHF and UHF bands within urban environments: Experimental investigation of empirical, heuristics and geospatial models. IEEE Access 2019, 7, 77293–77307. [Google Scholar] [CrossRef]
  4. Okumura, Y. Field strength and its variability in VHF and UHF land-mobile radio service. Electr. Commun. 1968, 16, 825–873. [Google Scholar]
  5. Hari, K.; Baum, D.; Rustako, A.; Roman, R. Channel Models for Fixed Wireless Applications; IEEE 802.16 Broadband Wireless Access Working Group: Piscataway, NJ, USA, 2003; 38p. [Google Scholar]
  6. Phillips, C.; Douglas, S.; Dirk, G. A survey of wireless path loss prediction and coverage mapping methods. IEEE Commun. Surv. Tutor. 2012, 15, 255–270. [Google Scholar] [CrossRef]
  7. Walfisch, J.; Henry, L.B. A theoretical model of UHF propagation in urban environments. IEEE Trans. Antennas Propag. 1988, 36, 1788–1796. [Google Scholar] [CrossRef] [Green Version]
  8. Ikegami, F.; Susumu, Y.; Tsutomu, T.; Masahiro, U. Propagation factors controlling mean field strength on urban streets. IEEE Trans. Antennas Propag. 1984, 32, 822–829. [Google Scholar] [CrossRef]
  9. Theodore, S. Wireless Communications: Principles and Practice, 2nd ed.; Pearson: Paris, France, 2002; 640p. [Google Scholar]
  10. Electronic Communication Committee (ECC) within the European Conference of Postal and Telecommunication Administration (CEPT). The Analysis of the Coexistence of FWA Cells in the 3.4–3.8 GHz Band, 1st ed.; Fixed Service in Europe: Switzerland, 2012; 73p. [Google Scholar]
  11. Electronic Communication Committee (ECC) within the European Conference of Postal and Telecommunication Administration (CEPT). ECC Report 33, 1st ed.; ECO Documentation Database: Copenhagen, Denmark, 2006; 96p. [Google Scholar]
  12. Abhayawardhana, V.S.; Wassell, I.J.; Crosby, D.; Sellars, M.P.; Brown, M.G. Comparison of empirical propagation path loss models for fixed wireless access systems. In Proceedings of the 2005 IEEE 61st Vehicular Technology Conference, Stockholm, Sweden, 30 May–1 June 2005. [Google Scholar]
  13. Simi, I.; Stani, I.; ZIRNI, B. Minimax LS algorithm for automatic propagation model tuning. In Proceedings of the 9th Telecommunications Forum (TELFOR 2001), Belgrade, Serbia, 20–22 November 2001. [Google Scholar]
  14. Sotiroudis, S.P.; Goudos, S.K.; Gotsis, K.A.; Siakavara, J.N. Application of a Composite Differential Evolution Algorithm in Optimal Neural Network Design for Propagation Path-Loss Prediction in Mobile Communication Systems. IEEE Antennas Wirel. Propag. Lett. 2013, 12, 364–367. [Google Scholar] [CrossRef]
  15. Timoteo, R.D.A.; Cunha, D.C.; Cavalcanti, G.D.C. A Proposal for Path Loss Prediction in Urban Environments using Support Vector Regression. In Proceedings of the Advanced International Conference on Telecommunications, Paris, France, 20–24 July 2014; Volume 10, pp. 119–124. [Google Scholar]
  16. Liu, J.; Deng, R.; Zhou, S. Seeing the unobservable: Channel learning for wireless communication networks. In Proceedings of the 2015 IEEE Global Communications Conference, GLOBECOM, San Diego, CA, USA, 6–10 December 2015. [Google Scholar]
  17. Bojovic, B.; Meshkova, E.; Baldo, N.; Riihijärvi, J.; Petrova, M. Machine learning-based dynamic frequency and bandwidth allocation in self-organized LTE dense small cell deployments. EURASIP J. Wirel. Commun. Netw. 2016, 1, 422–438. [Google Scholar] [CrossRef] [Green Version]
  18. Sarigiannidis, P.; Sarigiannidis, A.; Moscholios, I. DIANA: A Machine Learning Mechanism for Adjusting the TDD Up-link-Downlink Configuration in XG-PONLTE Systems. Mob. Inf. Syst. 2017, 2017, 8198017. [Google Scholar]
  19. Song, W.; Zeng, F.; Hu, J.; Wang, Z.; Mao, X. An Unsupervised Learning-Based Method for Multi-Hop Wireless Broadcast Relay Selection in Urban Vehicular Networks. In Proceedings of the IEEE Vehicular Technology Conference, Sydney, NSW, Australia, 4–7 June 2017. [Google Scholar]
  20. Parwez, M.S.; Rawat, D.B.; Garuba, M. Big data analytics for user-activity analysis and user-anomaly detection in mobile wireless network. IEEE Trans. Ind. Inform. 2017, 13, 2058–2065. [Google Scholar] [CrossRef]
  21. Wickramasuriya, D.S.; Perumalla, C.A.; Davaslioglu, K.; Gitlin, R.D. Base station prediction and proactive mobility management in virtual cells using recurrent neural networks. In Proceedings of the 2017 IEEE 18th Wireless and Microwave Technology Conference (WAMICON), Cocoa Beach, FL, USA, 24–25 April 2017; pp. 1–6. [Google Scholar]
  22. Qian Zhang, S.; Xue, F.; Ageen Himayat, N.; Talwar, S.; Kung, H. A Machine Learning Assisted Cell Selection Method for Drones in Cellular Networks. In Proceedings of the 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Kalamata, Greece, 25–28 June 2018; pp. 1–5. [Google Scholar]
  23. Perez, J.S.; Jayaweera, S.K.; Lane, S. Machine learning aided cognitive RAT selection for 5G heterogeneous networks. In Proceedings of the 2017 IEEE International Black Sea Conference on Communications and Networking (Black SEACOM), Istanbul, Turkey, 5–8 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5. [Google Scholar]
  24. Zappone, A.; Sanguinetti, L.; Debbah, M. User Association and Load Balancing for Massive MIMO through Deep Learning. In Proceedings of the 2018 52nd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 28–31 October 2018; pp. 1262–1266. [Google Scholar]
  25. Balevi, E.; Gitlin, R.D. Unsupervised machine learning in 5G networks for low latency communications. In Proceedings of the 2017 IEEE 36th International Performance Computing and Communications Conference, San Diego, CA, USA, 10–12 December 2017. [Google Scholar]
  26. Wang, L.-C.; Cheng, S.H. Data-Driven Resource Management for Ultra-Dense Small Cells: An Affinity Propagation Clustering Approach. IEEE Trans. Netw. Sci. Eng. 2018, 4697, 1. [Google Scholar] [CrossRef]
  27. Balapuwaduge, I.A.M.; Li, F.Y. Hidden Markov Model Based Machine Learning for mMTC Device Cell Association in 5G Networks. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 22–24 May 2019. [Google Scholar]
  28. Zhang, Y.; Xiong, L.; Yu, J. Deep Learning Based User Association in Heterogeneous Wireless Networks. IEEE Access 2020, 8, 197439–197447. [Google Scholar] [CrossRef]
  29. Bouhous, A. Use of the Stationary Phase Method and Artificial Neural Networks for the Modeling of an Open Structure Microstrip Resonator. Master’s Thesis, University of Batna, Batna, Algeria, 2012. [Google Scholar]
  30. Chemachema, K. Study of Microstrip Structures by the Technique of Neural Networks Application to Different Excitations. Ph.D. Thesis, University of Constantine, Constantine, Algeria, 2013. [Google Scholar]
  31. Addaci, R. Evaluation of the Complex Resonance Frequency and Band-Width of a Rectangular Microstrip Antenna by the Neuro Spectral Method. Master’s Thesis, University of Constantine, Constantine, Algeria, 2006. [Google Scholar]
  32. Ojo, S.; Agbotiname, I.; Alienyi, D. Radial basis function neural network path loss prediction model for LTE networks in multi-transmitter signal propagation environments. Int. J. Commun. Syst. 2021, 34, e4680. [Google Scholar] [CrossRef]
  33. Isabona, J.; Imoize, A.L.; Ojo, S. Development of a Multilayer Perception Neural Network for Optimal Predictive Modeling in Urban Micro-cellular Radio Environments. Appl. Sci. 2022, 12, 5713. [Google Scholar] [CrossRef]
  34. Popoola, S.I.; Atayero, A.A.; Arausi, O.D.; Matthews, V.O. Path loss dataset for modeling radio wave propagation in a smart campus environment. Data Brief 2018, 17, 1062–1073. [Google Scholar] [CrossRef] [PubMed]
  35. El Chall, R.; Lahoud, S.; El Helou, M. LoRaWAN Network: Radio Propagation Models and Performance Evaluation in Various Environments in Lebanon. IEEE Internet Things J. 2019, 6, 2366–2378. [Google Scholar] [CrossRef]
  36. Allen, D. Mean square error of prediction as a criterion for selecting variables. Technometrics 1971, 13, 469–475. [Google Scholar] [CrossRef]
Figure 1. The architecture of an MLP network.
Figure 2. MLP network architecture (with two hidden layers) for Dataset 2.
Figure 3. Flowchart of proposed predictive modelling with MLP neural network.
Figure 4. Dataset preprocessing phase.
Figure 5. Architecture optimization.
Figure 6. R-squared for the neural network model with one hidden layer.
Figure 7. R-squared for the neural network model with two hidden layers.
Figure 8. R-squared for the neural network model with three hidden layers.
Figure 9. MSE for the three neural network models.
Figure 10. Comparison between empirical models and MLP (with one hidden layer) for Dataset 1.
Figure 11. Comparison between empirical models and MLP (with one hidden layer) for Dataset 4.
Figure 12. Comparison between empirical models and MLP (with one hidden layer) for Dataset 6.
Figure 13. Comparison between empirical models and MLP (with one hidden layer) for Dataset 7.
Figure 14. Comparison between empirical models and MLP (with two hidden layers) for Dataset 3.
Figure 15. Comparison between empirical models and MLP (with two hidden layers) for Dataset 5.
Figure 16. Comparison between empirical models and MLP (with two hidden layers) for Dataset 8.
Figure 17. Comparison between empirical models and MLP (with three hidden layers) for Dataset 11.
Figure 18. Comparison between empirical models and MLP (with three hidden layers) for Dataset 12.
Figure 19. Comparison between empirical models and MLP (with three hidden layers) for Dataset 14.
Table 2. A comparison between AI models.
Authors | AI Technique | Training Model | Model Properties | Year
Sotiroudis et al. [14] | Supervised learning | Artificial Neural Networks (ANN) and Multi-Layer Perceptrons (MLP) | Used for modeling and estimation of link budget and propagation loss objective functions for wireless networks | 2013
Timoteo et al. [15] | Supervised learning | Support Vector Machines (SVM) | Model for forecasting path loss in urban areas | 2014
Liu et al. [16] | Supervised learning | Neural-network-based approximation | Channel learning to deduce unobservable channel state information (CSI) from an observable channel | 2015
Bojovic et al. [17] | Supervised learning | Statistical logistic regression techniques and machine learning | Used for self-organized LTE high-density small cell deployment with dynamic frequency and bandwidth allocation | 2016
Sarigiannidis et al. [18] | Supervised learning | Supervised machine learning frameworks | Adjusts the TDD uplink-downlink configuration in XG-PON-LTE systems to enhance network performance in light of current traffic conditions in the hybrid optical-wireless network | 2017
Song et al. [19] | Unsupervised learning | K-means clustering, Gaussian Mixture Model (GMM), and Expectation Maximization (EM) | Used for the selection of relay nodes in vehicular networks | 2017
Parwez et al. [20] | Unsupervised learning | Hierarchical clustering | Wireless network anomaly, defect, and penetration testing | 2017
Wickramasuriya et al. [21] | Supervised learning | Recurrent Neural Network (RNN) | Obtains a remarkable 98% precision | 2017
Zhang et al. [22] | Supervised learning | Conditional random fields (CRFs) | Obtains a remarkable 90% precision | 2017
Perez et al. [23] | Reinforcement learning | Q-Learning | Aids in maintaining a steady workload | 2017
Zappone et al. [24] | Supervised learning | Artificial neural network | Simplicity gains in computation | 2018
Balevi et al. [25] | Unsupervised learning | Unsupervised soft-clustering machine learning framework | In heterogeneous cellular networks, latency may be reduced by using fog node clustering to automate the selection of low-power nodes (LPNs) for upgrade to high-power nodes (HPNs) | 2018
Wang et al. [26] | Unsupervised learning | Affinity propagation clustering | Data-driven resource allocation in highly dense small cell networks | 2018
Balapuwaduge et al. [27] | Supervised learning | Hidden Markov Model (HMM) | Helps increase channel availability and dependability | 2019
Zhang et al. [28] | Supervised learning | Convolutional neural network (U-Net) | Boosts processing speed and network reliability | 2020
Table 3. Some activation functions.
Function | Formula
Threshold or Heaviside function | f(x) = 0 if x < 0; f(x) = 1 if x ≥ 0
Symmetrical threshold function | f(x) = −1 if x < 0; f(x) = +1 if x ≥ 0
Linear law | f(x) = αx + β
'Symmetric' saturated linear | f(x) = −1 if x < −1; f(x) = αx + β if −1 ≤ x ≤ 1; f(x) = 1 if x > 1
Sigmoid | f(x) = 1 / (1 + e^(−x))
Table 4. Description of each dataset.
Dataset | Description
1 | Smart campus environment within Covenant University, Ota, Ogun State, Nigeria [34]
2 |
3 | The urban environment in the city of Fortaleza-CE, Brazil [15]
4 | Indoor building in USJ-ESIB Campus in Beirut [35]
5 | Outdoor around the USJ campus using three antenna heights [35]
6 | Drive tests in Bekaa valley [35]
7 | Drive tests in Beirut city [35]
8 | ms = 1.5 m in Bekaa valley [35]
9 | ms = 3 m in Bekaa valley [35]
10 | ms = 20 cm in Bekaa valley [35]
11 | ms = 1 m in Beirut city [35]
12 | ms = 1.5 m in Beirut city [35]
13 | ms = 3 m in Beirut city [35]
14 | ms = 20 cm in Beirut city [35]
Table 5. Parameters of each dataset.
Dataset | Parameters
1 | Elv, Alt, Dist, Clut
2 | Long, Lat, Elv, Alt, Dist, Clut
3 | Long, Lat, Ter Elv, Dist, Hor Ang, Ver Ang, Hor Atn, Ver Atn
4 | Long, Lat, RSSI, SNR, Freq, Floor, Wall, Dist, Prx
5-14 | Long, Lat, RSSI, SNR, Freq, SF, Seq, BW, Dist, Prx
Table 6. Definition of parameters.
Parameter | Definition
BW | Bandwidth of transmission
Dist | Distance between the gateway and the end-device
Floor | Number of floors between gateway and end-device
Freq | The frequency used for transmission
Lat | Latitude extracted from the payload
Long | Longitude extracted from the payload
Prx | Power received at the gateway
RSSI | Received signal strength indicator
Seq | The sequence number of the packet
SF | Spreading factor
SNR | Signal-to-noise ratio
Wall | Number of walls between gateway and end-device
Table 7. Optimized parameters for the final single-hidden-layer MLP model.
Dataset | Input Layer Neurons | 1st Hidden Layer Neurons | Output Layer Neurons | Training Set | Validation Set | Test Set
1 | 4 | 96 | 1 | 2170 | 723 | 723
2 | 6 | 73 | 1 | 749 | 245 | 245
3 | 6 | 87 | 1 | 5586 | 1861 | 1861
4 | 9 | 46 | 1 | 791 | 263 | 263
5 | 11 | 11 | 1 | 473 | 157 | 157
6 | 10 | 83 | 1 | 1313 | 437 | 437
7 | 10 | 8 | 1 | 399 | 133 | 133
8 | 10 | 46 | 1 | 429 | 143 | 143
9 | 10 | 91 | 1 | 509 | 169 | 169
10 | 10 | 28 | 1 | 429 | 142 | 142
11 | 9 | 7 | 1 | 387 | 129 | 129
12 | 11 | 98 | 1 | 381 | 126 | 126
13 | 11 | 96 | 1 | 395 | 131 | 131
14 | 11 | 43 | 1 | 379 | 125 | 125
Table 8. Optimized parameters for the final MLP model with two hidden layers.
Dataset | Numbers of Neurons (Input Layer, 1st Hidden Layer, 2nd Hidden Layer, Output Layer) | Number of Samples for Each Set (Training, Validation, Test)
14161812170723723
2620201749245245
3650501558618611861
4911161791263263
51116291473157157
610272511313437437
71010171399133133
81016201429143143
91016201509169169
101010121429142142
111130101387129129
12117171381126126
131120131395131131
141137261379125125
Table 9. Optimized parameters for the final model with three hidden layers.
Dataset | Numbers of Neurons (Input Layer, Hidden Layers, Output Layer) | Number of Samples for Each Set (Training, Validation, Test)
1409100812170723
261015221749245
36101640155861861
490516351791263
5110518061473157
61012203611313437
7100417271399133
8101914381429143
9100417191509169
10100816321429142
11111818311387129
12112131461381126
13110721491395131
14112825051379125
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
