1. Introduction
As the Internet of Things (IoT) continues to expand and sensor networks become more prevalent in various applications, the task of optimizing sensor utilization within specific environments with regard to certain key performance indicators is becoming increasingly important. One of the most important performance metrics is network coverage [1]. The overall coverage performance of the sensor network depends on the placement of each sensor. Depending on the sensor type, each sensor can have a different individual coverage model.
The placement of isotropic beacons plays a pivotal role in the efficiency of indoor localization systems [2], which enables many applications, e.g., the indoor navigation of drones [3]. Isotropic beacons emit signals uniformly in all directions, providing a consistent and reliable source of spatial information. This uniformity ensures that drones can determine their position within an indoor environment with high precision, by triangulating their location relative to multiple beacons. Optimal placement of these beacons is essential to minimize signal obstructions and interference, which can significantly degrade localization accuracy. Consequently, a well-considered configuration of isotropic beacons enables drones to navigate complex indoor spaces safely and efficiently, by ensuring continuous and accurate positional data. This is particularly important in environments where GPS signals are unavailable or unreliable, highlighting the critical role of isotropic beacon placement in the advancement of indoor drone technology.
3]. Isotropic beacons emit signals uniformly in all directions, providing a consistent and reliable source of spatial information. This uniformity ensures that drones can determine their position within an indoor environment with high precision, by triangulating their location relative to multiple beacons. Optimal placement of these beacons is essential to minimize signal obstructions and interference, which can significantly degrade localization accuracy. Consequently, a well-considered configuration of isotropic beacons enables drones to navigate complex indoor spaces safely and efficiently, by ensuring continuous and accurate positional data. This is particularly important in environments where GPS signals are unavailable or unreliable, highlighting the critical role of isotropic beacon placement in the advancement of indoor drone technology.
An optimal placement of visual sensors can improve the performance of video surveillance systems while minimizing the number of sensors required to cover a given area, reducing the overall cost of installation and operation. It may be important to cover certain areas of the environment, such as pathways, areas around exhibited artwork and similar locations. When analyzing human behavior, extending the coverage of visual or isotropic sensors to specific areas—such as in front of product shelves, advertising displays or showcases—allows for a closer examination of people’s interest in specific products or advertising types.
The challenge of determining the optimal positioning of sensors in an environment arises from the Art Gallery Problem (AGP) [4]. This problem has its roots in the concept of visibility, a measure that quantifies the effectiveness of sensors in detecting and observing objects in an environment. Visibility is an important aspect of various fields such as robotics, computer vision, optimization and computer graphics. Originally, the AGP aimed to find optimal locations for guards that could monitor a full area from fixed positions, typically within a polygonal art gallery. In its original form, the AGP assumed that visibility between two points is given if the line segment connecting them does not intersect any obstacles, a principle known as the line-of-sight constraint.
Subsequent research extended the AGP by incorporating distance and incidence constraints [5], but retained the simplified two-dimensional polygonal approach. In these studies, polygonal representations of the floor plan of the environment were used, which often inadequately capture the complicated spatial arrangements of buildings. In many practical scenarios, understanding the three-dimensional spatial structure of a building is crucial for developing effective monitoring strategies [6]. In three-dimensional environments, research usually relies on elevation maps, which are limited to planar terrains [7], or on orthogonal polyhedral environments [8].
Both the two-dimensional and the three-dimensional versions of the Art Gallery Problem have been shown to be NP-hard. However, while solutions can be found for certain instances of the two-dimensional problem and for some three-dimensional scenarios with simple polygons [5,9,10], practical implementations face increasing complexity due to the varied shapes of real-world polygons. A universally applicable solution that covers all polygon types remains elusive [4].
The optimization of sensor placement involves several key aspects that shape the approach and solution strategies. These aspects encompass the type of coverage used, the dimensionality of the target environment, sensor detection capabilities and sensor models.
The choice of coverage metric has a direct influence on the optimization process. Commonly used metrics include area coverage, point coverage and barrier coverage, each tailored to specific application requirements [11,12]. In this work, we use area coverage, the most widely used metric, which quantifies the ratio of the area covered by the sensors to the total target area [3,13,14,15,16].
Based on the AGP mentioned above, it becomes clear that the second aspect of the problem relates to the dimensionality of the target environment. Environment models, such as voxel-based representations, help to capture the spatial subtleties required for accurate coverage computation. The dimensionality of the target environment model is crucial for accurate sensor placement. While two-dimensional environment models are computationally simple, they can lead to errors due to oversimplified representations. Three-dimensional environment models provide a more accurate representation, but are associated with higher computational costs [17]. In previous research, spaces are often simplified to the level of a floor plan or two-dimensional boundaries [18,19,20,21,22,23,24,25,26]. As an improvement over two-dimensional environment models, three-dimensional models were mostly described with elevation maps [27,28,29]. In our previous research, we proposed three-dimensional environment models and a framework for optimizing sensor placement [15,16]. Throughout this manuscript, both two-dimensional and three-dimensional environments are examined.
The detection capabilities of sensors play an important role in optimizing placement. Sensors can have binary or probabilistic sensing ability. The sensing ability of a sensor refers to whether it can perceive an object in the case of binary coverage [19,20,21,22,23,24,25,26,29,30,31], or how effectively it can perceive it in the case of probabilistic coverage, which provides a more refined model [15,16,28,32,33]. In addition to the detection capability, the shape of the area covered by the sensor is also important. The coverage area depends on the sensor type, which is mainly isotropic or directional [34].
In previous research on sensor placement optimization, various optimization algorithms have been used to improve the efficiency of sensor deployment tailored to specific needs. Early attempts that focused on the placement of binary directional [19] and isotropic sensors [21] in two-dimensional spaces employed techniques such as Integer Linear Programming, Binary Integer Programming, Greedy Search and random arrangement. Advances in the optimal placement of binary sensors in two dimensions include metaheuristic methods such as Individual Particle Optimization [13], Particle Swarm Optimization, the Binary Genetic Algorithm, Simulated Annealing [27], the Improved Cuckoo Search Algorithm and the Chaotic Flower Pollination Algorithm [14].
Extending to three-dimensional spaces with height maps, studies compared deterministic methods with the Covariance Matrix Adaptation Evolution Strategy algorithm, the Limited-memory Broyden–Fletcher–Goldfarb–Shanno method and Gradient Descent [17,28,32]. The Covariance Matrix Adaptation Evolution Strategy showed superior performance for smaller height maps, while Gradient Descent was effective for larger maps.
Some approaches utilized exhaustive search for the optimal placement of sensors to monitor deformation and detect damage in structural systems [35,36,37,38]. In addition, exhaustive search has been used for the ordered placement of nodes in sensor networks, for example, the placement of sensors to monitor leaks in drinking water networks [39] or to achieve the target coverage of a wireless sensor network based on the selected coverage metric [40]. Most of the current research uses one-dimensional approaches [35,36,38] that usually simulate structural beams. Other researchers use two-dimensional simulated environments to determine the optimal placement of sensors [37,39,40], usually with additional constraints.
Arising from the aforementioned AGP, there is no general analytical solution to the sensor placement problem. The optimal solution for discrete search spaces can be determined using exhaustive search, but this entails a high computational cost. Other, mostly meta-heuristic, approaches can alleviate the high computational cost, but the resulting sensor placement cannot be guaranteed to be optimal.
The aim of the present study is two-fold. First, a variant of the exhaustive search approach is proposed to address the problem of two-dimensional and three-dimensional sensor placement. This modified exhaustive search method is used to establish a ground truth for the single sensor placement problem. As shown in Section 3, exhaustive search is not feasible for three-dimensional environment models, and the proposed approach mitigates some of the computational costs.
Second, following the same idea of finding approaches that lead to (near) optimal solutions, we explore nature-inspired genetic algorithms and the impact of the number of evaluations of the optimization function for these algorithms on both accuracy and computational cost. In addition, the effects of the rasterization of the environment are analyzed. This enables users to select the right parameter values and balance coverage and computational costs.
To summarize, the following contributions are presented in this manuscript:
- An exhaustive search-based approach that serves as ground truth for sensor placement in two-dimensional and three-dimensional environments.
- An analysis of the influence of the number of evaluations of the optimization function and the rasterization of the environment on the performance of stochastic optimization sensor placement algorithms.
- An evaluation of three stochastic optimization algorithms for sensor placement in two-dimensional and three-dimensional environments with selected parameters, and a comparison with the ground truth values.
This manuscript is structured as follows. Section 2 describes the environment and sensor models, the optimization function and algorithms, as well as the newly proposed Recursive Exhaustive Search. In Section 3, the results are presented, analyzed and discussed. Finally, concluding remarks are given in Section 4.
2. Materials and Methods
In our previous research [15], we introduced environment and sensor models as well as an optimization function that is crucial for evaluating sensor placement in a given environment. To ensure an accurate assessment of sensor positioning, we utilize environment models that provide a rough two-dimensional and three-dimensional representation of the real-world environment. Our proposed sensor models establish visibility calculations by employing probabilistic coverage models tailored for both isotropic and directional sensors. In addition, the optimization function utilizes a continuous, single-objective approach that aims to minimize the global loss value, which is the complement of the area coverage.
In this section, we provide a brief overview of these models and functions and explain the role they play in improving the efficiency of sensor placement in specific environments. Furthermore, a variant of the exhaustive search and its adaptation to the two-dimensional and three-dimensional sensor placement problem is presented to create a ground truth solution for single sensor placement.
2.1. Environment Model
To accurately assess sensor placement, we rely on environment models proposed in our previous research that closely represent the real environment based on descriptions of free and occupied spaces. These models, which typically describe the layout of the ground plane, are created through three-dimensional polygonal modeling. Although these models are primarily three-dimensional, they can also be simplified into two-dimensional representations if necessary by focusing on their ground planes.
Three distinct test environments were defined by altering their shapes and the arrangement of obstacles within them. The first environment, shown in Figure 1a, features a U-shape without obstacles, representing a scenario where multiple sensors are necessary because the curved layout obstructs visibility. The second environment, Figure 1b, is a simple quadratic shape with multiple obstacles that limit the coverage of individual sensors. The third environment, Figure 1c, simulates scenarios where obstacles isolate parts of the environment, testing the optimization algorithm’s response to discontinuities and the placement of sensors in both isolated and non-isolated areas.
To effectively calculate the spatial coverage, the environment is divided into a finite number of voxels based on a rasterization parameter that determines the granularity of coverage. The number of voxels depends on the layout, the dimensions of the free and occupied spaces, and the selected rasterization parameter, and is given in Table 1. In general, it is assumed that a higher number of voxels provides a more accurate description of the world.
By employing both two-dimensional and three-dimensional spatial representations and adjusting the rasterization parameter, we can evaluate the performance of optimization algorithms on test cases of varying complexity.
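As an illustration of the rasterization step described above, the following Python sketch divides a rectangular two-dimensional floor plan into voxel centers spaced by a rasterization parameter and removes centers that fall inside axis-aligned obstacle boxes. The function name, the rectangular room and the obstacle representation are illustrative assumptions, not the paper's actual environment models.

```python
import numpy as np

def rasterize_2d(width, height, r, obstacles=()):
    """Divide a width x height floor plan into voxel centers spaced by the
    rasterization parameter r, dropping centers inside occupied regions.

    obstacles: iterable of (xmin, ymin, xmax, ymax) occupied boxes.
    Returns an (n, 2) array of free-voxel centers.
    """
    xs = np.arange(r / 2, width, r)
    ys = np.arange(r / 2, height, r)
    centers = np.array([(x, y) for x in xs for y in ys])
    free = np.ones(len(centers), dtype=bool)
    for (x0, y0, x1, y1) in obstacles:
        inside = ((centers[:, 0] >= x0) & (centers[:, 0] <= x1) &
                  (centers[:, 1] >= y0) & (centers[:, 1] <= y1))
        free &= ~inside  # voxels inside an obstacle are occupied space
    return centers[free]

# A 10 m x 10 m room at r = 1 m yields a 10 x 10 grid of voxel centers;
# a 2 m x 2 m obstacle in the middle removes the 4 centers it contains.
voxels = rasterize_2d(10.0, 10.0, r=1.0, obstacles=[(4.0, 4.0, 6.0, 6.0)])
```

A finer rasterization parameter increases the voxel count quadratically in 2D (and cubically in 3D), which is the accuracy/cost trade-off discussed above.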
2.2. Sensor Models
Sensor models are defined by specifying rules for calculating the visibility for each voxel surrounding the sensor within the environment. Proposed sensor models employ a probabilistic coverage model. There are two main types of sensors that differ in their sensing capabilities: sensors with isotropic sensing and sensors with directional sensing.
The visibility V(v, s) between each voxel v and the sensor s is determined as the product of three fundamental functions:

V(v, s) = V_d(v, s) · V_az(v, s) · V_inc(v, s),

where V_d represents the visibility based on the distance between the sensor and the observed voxel, while V_az and V_inc correspond to the visibilities determined by the azimuth and inclination angles between the sensor and the voxel. These calculations vary depending on the sensor type.
Visibility is only calculated if there is a line of sight between the voxel and the sensor. The distance d between the voxel at (x_v, y_v, z_v) and the sensor at (x_s, y_s, z_s), which is used in the distance visibility function V_d, is calculated using the Euclidean distance formula:

d = √((x_v − x_s)² + (y_v − y_s)² + (z_v − z_s)²).

Both the azimuth φ and the inclination θ are calculated as angles within the spherical coordinate system:

φ = atan2(y_v − y_s, x_v − x_s),  θ = arccos((z_v − z_s) / d),

and are used in the calculation of the azimuth visibility V_az and the inclination visibility V_inc, respectively.
In this study, a model of an isotropic radio beacon is used, in which the distance visibility is based on the Received Signal Strength Indicator (RSSI), normalized so that it lies in the interval [0, 1]. The azimuth visibility and the inclination visibility are equal to 1, as an isotropic sensor is assumed to detect (or emit) the signal equally in all directions.
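The visibility product for the isotropic beacon can be sketched as follows. The log-distance normalization of the RSSI-based distance term is an illustrative stand-in (the paper's exact normalization is not reproduced here), and `d_max` is an assumed maximum detection range.

```python
import math

def distance_visibility(d, d_max=20.0):
    """Illustrative distance-visibility term for an isotropic RSSI beacon:
    decays log-linearly with distance and is clipped to [0, 1]."""
    if d <= 0:
        return 1.0
    return max(0.0, min(1.0, 1.0 - math.log10(d) / math.log10(d_max)))

def visibility(voxel, sensor, line_of_sight=True):
    """V(v, s) as the product of distance, azimuth and inclination terms.
    For an isotropic sensor the angular terms are both 1, so only the
    distance term remains; without line of sight, visibility is 0."""
    if not line_of_sight:
        return 0.0
    d = math.dist(voxel, sensor)  # Euclidean distance between voxel and sensor
    v_d = distance_visibility(d)
    v_az = 1.0                    # isotropic: uniform over azimuth angles
    v_inc = 1.0                   # isotropic: uniform over inclination angles
    return v_d * v_az * v_inc
```

A directional sensor would replace the two angular terms with functions of the azimuth and inclination angles between the sensor's orientation and the voxel.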
2.3. Optimization Function
To determine optimal sensor positions, an optimization function is required that combines the environment and sensor models based on the selected metric. This function represents a minimization problem suitable for various optimization algorithms that are derivative-free, nonlinear and constrained.
The metric chosen for this optimization is the area coverage ratio, which indicates the proportion of the target area that is covered by the sensors. The goal of the optimization is to minimize the global loss value, which is essentially the complement of the coverage.
The loss l(v, s) associated with each sensor–voxel pair is calculated as the complement of their visibility:

l(v, s) = 1 − V(v, s).

Since a voxel can be visible from multiple sensors, the voxel loss l(v) is calculated as the product of the losses from each of the m sensors within the set S:

l(v) = ∏_{s ∈ S} l(v, s).

The global loss value L is then obtained by averaging the individual voxel losses l(v) over all n voxels within the environment V:

L = (1/n) ∑_{v ∈ V} l(v).

Sensor positions, yaw and pitch angles are characterized by continuous values, while the voxel positions take discrete values derived from the environment’s rasterization parameter. Although the proposed solution utilizes a loss minimization function that aims to reduce the loss value L, a coverage value C is used in the results:

C = 1 − L.

Moreover, this coverage value can be used for optimization problems that utilize a maximization function.
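The loss and coverage definitions above can be sketched directly from a matrix of precomputed visibilities; the matrix layout is an assumption of this sketch.

```python
import numpy as np

def global_loss(vis):
    """vis: (n_voxels, m_sensors) matrix of visibilities V(v, s) in [0, 1].

    Per-pair loss:  l(v, s) = 1 - V(v, s)
    Per-voxel loss: l(v) = prod over sensors of l(v, s)
                    (a voxel well seen by any sensor gets a low loss)
    Global loss:    L = mean over voxels of l(v)
    """
    per_pair = 1.0 - vis
    per_voxel = np.prod(per_pair, axis=1)
    return float(np.mean(per_voxel))

def coverage(vis):
    """Coverage is the complement of the global loss: C = 1 - L."""
    return 1.0 - global_loss(vis)

# Two voxels, two sensors: voxel 0 is fully seen by sensor 0, voxel 1 by none.
vis = np.array([[1.0, 0.0],
                [0.0, 0.0]])
C = coverage(vis)  # -> 0.5
```

Multiplying per-sensor losses means overlapping sensors only reduce a voxel's loss further, matching the probabilistic coverage model.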
2.4. Recursive Exhaustive Search for Sensor Placement
Analytically determining the optimal sensor placement in space is not feasible [5,6,9,10,41], necessitating methods such as exhaustive search (ES) to explore all possible configurations within a discrete search space defined by a rasterization value. However, due to the continuous nature of the sensor positions, the coverage achieved with ES may not be optimal. In addition to the problems with the discrete search space that affect the accuracy of the optimization algorithms, the computational complexity is usually high, often forcing a balance between accuracy and computational cost.
To tackle both the accuracy and the computational complexity of ES, a recursive exhaustive search (RES) approach was developed that combines the concepts of ES and greedy search (GS) to recursively traverse the environment in search of the optimal solution. In RES, the potential sensor positions are initially separated by the same step size as in ES. As shown in Algorithm 1, in each iteration the search space is recursively refined around the positions with the highest coverage until the termination conditions are met.
The step size for ES was chosen to be one order of magnitude lower than the rasterization value of the environment. The coverage value C is calculated for each of the possible positions, and the position or positions with the highest coverage value are selected. With RES, this process is repeated at each iteration, shrinking the search space around the selected position and halving its size along each axis of the global coordinate system. In each iteration, the step size is also reduced by one order of magnitude. The execution is terminated when the size of the search space falls below the floating-point precision (given in meters).
Algorithm 1 Recursive Exhaustive Search (RES)
1: lower ← lower bounds of the environment model
2: upper ← upper bounds of the environment model
3: function RES(lower, upper, step)
4:     sample the area between the lower and upper values for each axis with a step size of step
5:     best ← solution(s) with the highest coverage value C from the samples
6:     if the size of the search space (upper − lower) falls below the floating-point precision then
7:         return best
8:     end if
9:     for position in best do
10:        result ← RES(position − (upper − lower)/4, position + (upper − lower)/4, step/10)
11:        if the coverage of result is higher than that of best then
12:            replace best with result
13:        else if the coverage of result equals that of best then
14:            append result to the list of best
15:        end if
16:    end for
17:    return best
18: end function
While ES provides comprehensive coverage exploration, RES offers a refined approach with reduced computational demands. Further optimization of RES may enhance its practical applicability for sensor placement problems in real-world scenarios.
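A minimal Python sketch of the refinement idea behind RES follows, assuming a coverage function defined over continuous positions. The shrink factor of the search box and the ten-fold step reduction are illustrative choices patterned on the description above, not the exact published procedure, and ties between equally good positions are not tracked.

```python
import itertools
import numpy as np

def res(coverage_fn, lower, upper, step, precision=1e-3):
    """Grid-search between `lower` and `upper` with spacing `step`, then
    recurse on a small box around the best grid point with a ten-times-finer
    step, stopping once the step falls below `precision`."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    axes = [np.arange(lo, hi + 1e-9, step) for lo, hi in zip(lower, upper)]
    best = max((np.array(p) for p in itertools.product(*axes)), key=coverage_fn)
    if step / 10.0 < precision:  # termination: refinement below precision
        return best
    # Refine: shrink the search box to one coarse grid cell around the best point.
    return res(coverage_fn, best - step, best + step, step / 10.0, precision)

# Toy coverage surface peaking at (3, 7) inside a 10 m x 10 m area.
peak = lambda p: -((p[0] - 3.0) ** 2 + (p[1] - 7.0) ** 2)
pos = res(peak, lower=[0.0, 0.0], upper=[10.0, 10.0], step=1.0)
```

On this toy surface, each recursion level evaluates only a small constant-size grid, so the total number of evaluations grows with the number of refinement levels rather than with the full fine-grid size, which is the cost saving of RES over plain ES.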
2.5. Optimization Algorithms
Three algorithms were selected for this study, all of which fall within the domain of population-based stochastic genetic optimization techniques. These methods, namely Particle Swarm Optimization (PSO) [42], Artificial Bee Colony (ABC) [43] and the Fireworks Algorithm (FWA) [44,45], are tailored to the optimization of nonlinear functions within multidimensional spaces, as described earlier. The variables to be optimized are the positions of the sensors, which form a multidimensional search space whose dimensions vary depending on environmental factors and the properties of the sensors used.
The choice of the PSO algorithm stems from its status as one of the most widely used swarm optimization algorithms, which has proven itself in various optimization problems. Alongside PSO, the ABC algorithm is utilized as a representative of more modern optimization techniques, characterized by its effectiveness in certain optimization scenarios [43]. Additionally, the inclusion of the FWA algorithm represents a recent advancement in swarm optimization techniques, presenting notable improvements over the PSO algorithm according to [44]. In particular, the combination of these algorithms, as used in [46] for the path planning of mobile robots, highlights their suitability for tackling optimization problems in continuous space, as well as a number of other applications [47,48,49].
Since there are no known solutions to the sensor placement problem and it cannot be solved analytically, the optimization cannot be directed toward a fixed target. Therefore, we use a “fixed-cost” approach, where the number of evaluations of the optimization function for each algorithm within a given test case is limited [50]. Specifically, the number of evaluations is set to E = e · D, where e is the number of evaluations of the optimization function per independent variable and D is the number of independent variables. In this study, the influence of the number of evaluations of the optimization function per independent variable e is analyzed.
For the placement of a single isotropic sensor in a two-dimensional environment, D = 2, corresponding to the positions x and y of the sensor. In the case of a three-dimensional environment, D = 3, because an additional axis z is added.
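The fixed-cost budget can be written as a one-line helper. The symbols `e` and `D` follow the description above, and treating each sensor coordinate as one independent variable is an assumption of this sketch.

```python
def evaluation_budget(e_per_variable, n_sensors, dims):
    """Fixed-cost evaluation budget E = e * D, where D is the number of
    independent variables: one coordinate per axis for each sensor."""
    D = n_sensors * dims
    return e_per_variable * D

# One isotropic sensor with e = 50 evaluations per independent variable:
budget_2d = evaluation_budget(50, n_sensors=1, dims=2)  # D = 2 -> E = 100
budget_3d = evaluation_budget(50, n_sensors=1, dims=3)  # D = 3 -> E = 150
```

Scaling the budget with D keeps the comparison fair across environments of different dimensionality, since a 3D placement has more variables to explore than a 2D one.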
2.6. Experimental Setup
The three environments described above were used for all experiments. Due to the computational requirements of these extensive experiments, the supercomputer “Bura” of the University of Rijeka was used [51]. The tests with ES and RES were parallelized on 20 nodes with two Xeon E5 processors each. The tests with the stochastic optimization algorithms were performed on a varying number of nodes, with each test running exclusively on a single core.
3. Results and Discussion
In this section, we present the results of our study. We start with the results of the proposed exhaustive search-based approach for establishing the ground truth. We then compare the coverage results for the placement of a single isotropic sensor in the two-dimensional and three-dimensional representations of the three environments, based on the number of evaluations of the optimization function and the rasterization parameter. Finally, the coverage values obtained with the three optimization algorithms are compared with the ground truth value calculated with the proposed RES.
3.1. Ground Truth Estimation
Since it is not possible to analytically determine the optimal arrangement of sensors and multiple sensor configurations could be optimal, we decided to use a single metric with which to compare the optimization algorithms. If there are multiple optimal sensor configurations, they result in the same coverage value C. Therefore, we chose the coverage value C as the criterion for determining the accuracy of sensor placement and used it as the ground truth representation instead of the sensor configuration.
The time per evaluation mainly depends on the number of voxels in the environment and on the number and type of sensors placed. For a fixed test case, the time per evaluation and, extrapolated from it, the average computation time of the optimization function are therefore constant. Both the ES and RES approaches are parallelized and are compared based on the number of evaluations of the optimization function rather than on the computation time.
Both ES and RES were evaluated and compared on test cases involving the placement of a single omnidirectional sensor in two-dimensional and three-dimensional models of all three environments with a rasterization value of m. ES was conducted with a step size of m, while for RES, the initial value was m.
Table 2 shows the measured time per evaluation in seconds, the achieved coverage values C and the number of evaluations E of the optimization function for both approaches. For two-dimensional environments, the computation time for ES was at least 4 min, while for RES it was an order of magnitude lower, between 1 and 4 s. For three-dimensional environments, the difference is even more pronounced: RES required between 22 and 45 min, while measuring the computation time for ES, although possible in principle, is not practical, as it is estimated at between 80 and 204 days for the same test cases. The ratio of the computation times of ES and RES thus ranges from a factor of about 100 for two-dimensional environments to a factor of about 5000 for three-dimensional environments.
From the data obtained, it is evident that there is no difference in the achieved coverage between the two approaches for two-dimensional environments. For three-dimensional environments, the space coverage results for the ES approach are not shown, since the computation time extends beyond practical limits.
3.2. Analysis of the Impact of the Number of Evaluations
The importance of choosing the right number of evaluations of the optimization function lies in the need for faster processing without a significant loss of total coverage C. In this subsection, we study the impact of the number of evaluations on the problem of placing a single isotropic sensor in the three selected environments. To account for the stochastic variability inherent in the optimization algorithms, each test case was repeated 30 times. For brevity, the results are presented for one environment only; the analysis of the coverage in the other environments leads to the same conclusions.
For all tests, a rasterization value is set to m. Table 3 shows the descriptive statistics for different numbers of evaluations of the optimization function for sensor placement in the two-dimensional representation of Environment 1. This table shows the average and standard deviation of the coverage C of all runs for each algorithm.
In order to statistically analyze the effects of the optimization algorithm used and the number of evaluations of the optimization function on the average coverage, we performed a two-way repeated measures (RM) ANOVA. Namely, a 3 × 14 RM ANOVA was utilized, with Algorithm (three instances) and Number of evaluations (14 instances) as the within-subjects factors. The test yielded the following results:
Table 5 shows the descriptive statistics for different numbers of evaluations of the optimization function for sensor placement in the three-dimensional representation of Environment 1. This table shows the average and standard deviation of the coverage C of all runs for each algorithm.
We again performed a two-way RM ANOVA, where Algorithms (three instances) and Number of evaluations (14 instances) were the within-subjects factors. The test yielded the following results:
The mean coverage differed significantly between the optimization Algorithms:
The mean coverage also differed significantly between the Numbers of evaluations:
In both cases, the post hoc analysis with a Bonferroni adjustment led to the same conclusions as for the two-dimensional placement. The full results of the pairwise post hoc comparisons for the factor Algorithm can be found in Table 6.
The interaction between the factors Algorithm × Number of evaluations is statistically significant:
Although the statistical analysis showed that 50 evaluations per independent variable achieved a coverage value that was not significantly different from that achieved with more evaluations, the statistically significant interaction between the factors prompted us to perform a post hoc analysis across all factors. This post hoc analysis showed that PSO performs better than ABC only for low numbers of evaluations in both two-dimensional and three-dimensional environments, while there is no statistically significant difference for higher numbers of evaluations. Both algorithms outperform FWA up to a certain number of evaluations in two-dimensional and three-dimensional environments, after which there is no statistically significant difference between the pairs of algorithms. This difference can be seen in Figure 2.
As can be seen in Figure 3, the computation time increases linearly with the number of evaluations. The computation times for ABC and PSO overlap, while the computation time for FWA increases faster.
3.3. Analysis of the Impact of the Rasterization Value
While the number of evaluations has a linear influence on the computation time, the rasterization value is expected to have an exponential influence on the calculation. In this subsection, we examine the effects of the rasterization value on the computed coverage value C. For all tests, the number of evaluations per independent variable is kept fixed.
Table 7 shows the descriptive statistics for different rasterization values for sensor placement in the two-dimensional Environment 1. This table shows the average and standard deviation of the coverage C over 30 runs.
Again, to statistically analyze the effects of the optimization algorithm used and the rasterization value on the average coverage, we performed a two-way RM ANOVA. Namely, a 3 × 10 RM ANOVA was utilized, with Algorithm (three instances) and Rasterization (10 instances, with values given in meters) as the within-subjects factors. The test yielded the following results:
Table 9 shows the descriptive statistics for different rasterization values for sensor placement in the three-dimensional representation of Environment 1. This table shows the average and standard deviation of the coverage C of all runs for each algorithm.
Again, we performed a two-way RM ANOVA with the same within-subjects factors. The test yielded the following results:
The mean coverage differed significantly between the optimization Algorithms:
The mean coverage also differed significantly between the Rasterization values:
For both factors, a post hoc analysis with a Bonferroni adjustment yielded the same results as for the two-dimensional environment. The full results of the pairwise post hoc comparisons for the factor Algorithm can be found in Table 10.
The interaction between the factors Algorithm × Rasterization is statistically significant:
The statistically significant interaction between the factors prompted us to conduct a post hoc analysis of all factors. This post hoc analysis showed that, for the two-dimensional environment, there is no statistically significant difference between the Algorithms for two of the rasterization values, while for the remaining values both ABC and PSO perform better than FWA, with no significant difference between them. For the three-dimensional environment, the post hoc analysis showed that ABC and PSO outperform FWA for all rasterization values. This can be seen in Figure 4.
As can be seen in Figure 5, the computation time increases exponentially as the rasterization value decreases. The computation times for ABC and PSO again overlap, while the computation time for FWA increases somewhat faster.
3.4. Comparison of RES and Stochastic Algorithms
Based on the conclusions of the analysis in the two previous subsections, we chose the rasterization value that led to the best overall coverage C. For the stochastic optimization algorithms, the number of evaluations per independent variable was chosen as the first value at which a significant difference between the algorithms appeared for two-dimensional environments. For three-dimensional environments, the corresponding value was chosen for the same reason.
Table 11 shows the results for the coverage value C and the total number of evaluations E. The stochastic algorithms, in particular ABC and PSO, performed as well as RES and in most cases reached or came close to the ground truth coverage value. This coverage was achieved at a fraction of the computation time, with the stochastic algorithms being more than 5 times faster for two-dimensional environments and between 14 and 18 times faster for three-dimensional environments. Compared with ES rather than the proposed RES, the speedup would be about 500 times for two-dimensional environments and several orders of magnitude for three-dimensional environments.