Article

Constrained Planar Array Thinning Based on Discrete Particle Swarm Optimization with Hybrid Search Strategies

School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(19), 7656; https://doi.org/10.3390/s22197656
Submission received: 8 September 2022 / Revised: 29 September 2022 / Accepted: 6 October 2022 / Published: 9 October 2022
(This article belongs to the Section Physical Sensors)

Abstract

This article presents a novel optimization algorithm for large array thinning. The algorithm is based on Discrete Particle Swarm Optimization (DPSO) integrated with several complementary search strategies. It utilizes a global learning strategy to improve the diversity of the population at the early stage of optimization. A dispersive solution set and the gravitational search algorithm are used during particle velocity updating. A local search strategy is then enabled in the later stage of optimization. The particle position is adaptively adjusted by the mutation probability, and its motion state is monitored by two observation parameters. The peak side-lobe level (PSLL) performance, effectiveness and robustness of the improved PSO algorithm are verified by several representative examples.

1. Introduction

Thinned arrays have been the focus of research in recent years for their lower cost, lower energy consumption and lighter weight compared with the conventional uniform arrays. The main purpose of array thinning is to obtain a lower peak side-lobe level (PSLL) on the condition that the antenna array satisfies gain demand. Planar array thinning can be achieved by adjusting the “ON” or “OFF” states of each element in a uniform array.
To suppress the PSLL, several optimization methods have been proposed. As suggested by Liu in [1], the number of elements in multiple-pattern linear arrays can be reduced efficiently by the extended matrix pencil method. RF switching technology in the T/R module makes the application of adaptive sparse technology possible [2]. Rahardjo et al. [3] developed a new method for designing linear thinned arrays using spacing coefficients based on the Taylor line source distribution, which is verified to correctly match different beam pattern configurations without repeated global optimization. Recently, analytical thinning methods based on convex optimization [4] and Bayesian compressive sensing [5] have been introduced, but these methods need a suitable reference pattern as a precondition. Keizer [6] proposed the iterative Fourier technique (IFT) for array thinning, which computes the array factor and element excitations of the uniform array by means of Fourier transforms. Moreover, the IFT method has also been used to optimize large planar arrays with higher convergence speed [7].
Benefitting from excellent global search performance, some intelligent optimization algorithms, such as the real genetic algorithm (RGA) [8] and asymmetric mapping method with differential evolution (DE) [9], also play a good role in array thinning. For this problem, analyses are also carried out with the newly introduced Honey Badger Algorithm (HBA) and Chameleon Swarm Algorithm (CSA) algorithms in [10]. Vankayalapati introduced a new algorithm called multi-objective modified binary cat swarm optimization to deal with the optimization of multiple contradicting parameters [11].
First proposed by Eberhart and Kennedy in 1995 [12], particle swarm optimization (PSO) is an optimization algorithm for solving continuous and discontinuous problems in multiple dimensions. With a variety of analytical and numerical tools available, PSO has been applied to different application domains such as antenna pattern synthesis, array thinning, and sensor networks. The PSO and its improved variants are frequently employed to satisfy multiple objectives simultaneously. Random drift particle swarm optimization (RDPSO), originally used to solve the electric power economic scheduling problem, has been applied to the optimal design of thinned arrays in [13], achieving good results. To compare the performance of single-ring planar antenna arrays, Bera et al. [14] proposed a wavelet mutation-based novel particle swarm optimization (NPSOWM). By applying the binary particle swarm optimization (BPSO) algorithm to the total focusing method (TFM) [15] for thinned array design, the simulation results indicate that the proposed TFM can greatly increase computational efficiency and provide significantly higher image quality. The PSO study in [16] aims to generate multiple-focus patterns and a large scanning range for random array thinning, which is applied to the ultrasound treatment of brain tumors and neuromodulation. A search mode with multi-objective particle swarm optimization was proposed in [17] to solve the problem of optimal array distribution, which met the requirements of reducing the number of antenna elements and maintaining the PSLL simultaneously. In 2021, Guo et al. [18] studied the SLL reduction of linear antenna arrays (LAA) with specific radiation characteristics and circular antenna arrays (CAA) with isotropic radiation characteristics, using an optimization method (GWO-PSO) that combines grey wolf optimization and PSO.
In general, in order to avoid the rapid loss of particle distribution in solution space, the existing PSO methods give a variety of evaluation strategies for the diversity of particle population distribution. However, these methods mainly focus on improving the algorithm efficiency and ignore the balance between global search and local search, resulting in the lack of ability to adjust the search focus dynamically in different search stages.
In this paper, a novel optimization algorithm for large array thinning is proposed. The innovative part of this algorithm is the combined use of different particle learning strategies with discrete particle swarm optimization, which enhances the global search ability and the effectiveness of the algorithm.
The rest of this paper is organized as follows: The planar array structure and optimization problem model are briefly outlined in Section 2. Section 3 introduces the DPSO algorithm and our improvement to it. Several simulation results and discussions are presented in Section 4. Finally, a brief conclusion is given in Section 5.

2. Optimization Model

Assume a large planar array with elements arranged in square grids with a spacing of d along M columns and N rows, as shown in Figure 1.
Matrices A and B are set as:
$$\mathbf{A}=\begin{bmatrix} a_{00} & a_{01} & \cdots & a_{0,N-1}\\ a_{10} & a_{11} & \cdots & a_{1,N-1}\\ \vdots & \vdots & \ddots & \vdots\\ a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1} \end{bmatrix}$$
$$\mathbf{B}=\begin{bmatrix} b_{00} & b_{01} & \cdots & b_{0,N-1}\\ b_{10} & b_{11} & \cdots & b_{1,N-1}\\ \vdots & \vdots & \ddots & \vdots\\ b_{M-1,0} & b_{M-1,1} & \cdots & b_{M-1,N-1} \end{bmatrix}$$
where amn and bmn are the excitation of element (m, n) and the “ON” or “OFF” state of element (m, n), respectively, so bmn is 1 or 0. In Figure 1, θ and φ are the elevation and azimuth angles in the spherical coordinate system, respectively. u and v are direction cosines defined by u = sin θ cos φ and v = sin θ sin φ. If (u0, v0) is the desired main beam direction, the radiation beam pattern F(u, v) can be expressed as:
$$F(u,v)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} a_{mn}\, b_{mn}\, e^{jk\left[ nd(u-u_0)+md(v-v_0)\right]}$$
The fitness function considered in the present study is the PSLL of the radiation beam pattern, which is desired to be as low as possible. The PSLL of the planar array can be computed as:
$$\mathrm{PSLL}=\max_{(u,v)\in S} 20\lg\frac{\left|F(u,v)\right|}{\left|F(u_0,v_0)\right|}$$
where S denotes the angular region excluding the main beam. Considering the constraint of the array filling rate, the sum of all elements in matrix B should be a definite constant. If the aperture remains the same, the four corner elements of the planar array must be “ON”. The model of optimization can be represented as:
$$\begin{aligned}
\min_{\mathbf{B}}\ &\mathrm{PSLL}\\
\text{s.t.}\ & b_{mn}\in\{0,1\},\quad 0\le m\le M-1,\ 0\le n\le N-1\\
& b_{00}=b_{M-1,0}=b_{0,N-1}=b_{M-1,N-1}=1\\
& \sum_{n=0}^{N-1}\sum_{m=0}^{M-1} b_{mn}=K
\end{aligned}$$
where K denotes the number of elements that turned “ON”.
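The fitness evaluation above can be sketched numerically. The following Python snippet (function and parameter names such as `psll` and `mainlobe_radius` are ours, not from the paper) evaluates the array factor of a binary state matrix B with uniform excitation amn = 1 and estimates the PSLL by masking a disc around the main beam; the mask radius must be chosen large enough to cover the main lobe of the array at hand.

```python
import numpy as np

def array_factor(B, d=0.5, u0=0.0, v0=0.0, n_grid=101):
    """|F(u,v)| of an M x N planar array with 0/1 element states B,
    uniform excitation and element spacing d (in wavelengths)."""
    M, N = B.shape
    u = np.linspace(-1.0, 1.0, n_grid)
    v = np.linspace(-1.0, 1.0, n_grid)
    k = 2.0 * np.pi                      # wavenumber for d in wavelengths
    m = np.arange(M)[:, None]            # row index, paired with v
    n = np.arange(N)[None, :]            # column index, paired with u
    F = np.empty((n_grid, n_grid), dtype=complex)
    for iu, uu in enumerate(u):
        for iv, vv in enumerate(v):
            phase = k * (n * d * (uu - u0) + m * d * (vv - v0))
            F[iu, iv] = np.sum(B * np.exp(1j * phase))
    return u, v, np.abs(F)

def psll(B, d=0.5, u0=0.0, v0=0.0, mainlobe_radius=0.3):
    """Peak side-lobe level (dB) relative to the main-beam peak."""
    u, v, Fa = array_factor(B, d, u0, v0)
    U, V = np.meshgrid(u, v, indexing="ij")
    side = np.sqrt((U - u0) ** 2 + (V - v0) ** 2) > mainlobe_radius
    return 20.0 * np.log10(Fa[side].max() / Fa.max())
```

For the 20 × 20, 50% filled arrays of Section 4, the same routine applies with B summing to 200, although a vectorized implementation is advisable for speed.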

3. Improved Algorithm


3.1. Fundamental PSO Algorithm

The fundamental PSO algorithm assumes a swarm of particles in the solution space, and the positions of these particles indicate possible solutions to the variables defined for a specific optimization problem. The particles move in directions based on update equations impacted by their own local best positions and the global best position of the entire swarm.
Assume a swarm composed of NP particles is uniformly dispersed in a D-dimensional solution space, where the position xi and velocity vi of ith particle can be expressed as:
$$\mathbf{x}_i=\left(x_{i1},x_{i2},\ldots,x_{iD}\right)$$
$$\mathbf{v}_i=\left(v_{i1},v_{i2},\ldots,v_{iD}\right)$$
The velocity update equation is given below:
$$v_{ij}(t+1)=w\,v_{ij}(t)+c_1 r_1(t)\left[pBest_{ij}(t)-x_{ij}(t)\right]+c_2 r_2(t)\left[gBest_j(t)-x_{ij}(t)\right]$$
where i = 1, 2, …, NP and j = 1, 2, …, D. NP represents the number of particles, and D represents the number of dimensions. c1 and c2 are the acceleration constants, and r1 and r2 are two random numbers within the range [0,1]. vij(t) and xij(t) are the velocity and position along the jth dimension for the ith particle at the tth iteration, respectively. pBestij(t) is the best position along the jth dimension for the ith particle at the tth iteration, also called the “personal best”. Finally, the “global best” gBestj(t) is the best position found by the swarm along the jth dimension at the tth iteration.
The value of a position along each dimension for each particle is limited to “0” and “1” in discrete algorithms. The position update equation along the jth dimension for the ith particle is given by
$$s(v_{ij})=\frac{1}{1+e^{-v_{ij}}}$$
$$x_{ij}(t+1)=\begin{cases}1, & r<s(v_{ij})\\ 0, & \text{otherwise}\end{cases}$$
where s(vij) denotes a function that maps the particle velocity to the probability of the particle position, and r represents a random number within the range [0,1].
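The sigmoid position update above can be written compactly; this is a minimal sketch (the helper name `bpso_position_update` is ours):

```python
import numpy as np

def bpso_position_update(v, rng):
    """Discrete position update: map each velocity component through the
    sigmoid s(v) = 1 / (1 + exp(-v)) and sample the new bit as 1 with
    probability s(v)."""
    s = 1.0 / (1.0 + np.exp(-v))        # velocity -> probability of a "1"
    r = rng.random(v.shape)             # one uniform draw per dimension
    return (r < s).astype(int)
```

Large positive velocities drive a bit toward 1 and large negative velocities toward 0, which is why discrete PSO implementations usually clamp |v| to a few units so the bits keep a nonzero chance of flipping.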

3.2. DPSO with Hybrid Search Strategies

The improved PSO algorithm proposed in this paper consists of three main strategies: a global learning strategy based on the niche technique and dispersed solution sets, a local mutation search strategy, and a motion-state monitoring strategy. The execution of the corresponding strategies is adaptively adjusted in different search stages of the optimization.

3.2.1. Global Learning Strategy

It is necessary to maintain as high a swarm diversity as possible in the early stage of optimization to avoid premature convergence. So, two strategies are utilized in the early stage, which are the niche technique and the gravitational search algorithm.
The niche technique proposed in [19] can form multiple stable subpopulations within a population. Each particle interacts only with its immediate neighbors through a ring topology. The ring topology encourages individuals to search thoroughly in their local neighborhoods before global optimization, so it is well suited to discovering multiple optima. On the basis of the ring topology proposed in [19], a high-quality solution set PG is proposed in this paper, as shown in Figure 2. The solution set PG consists of two kinds of solutions: the personal best solutions of all particles, and eliminated personal best solutions with excellent fitness function values. While preserving the interaction between each particle and its neighbors, particles can also learn from PG directly. This structure substitutes for the role of the global best position gBest in (8).
Because the positions of elements in PG may be uniformly distributed in the whole solution space, particles under the guidance of elements in PG have a considerable number of potential motion directions, which improves the diversity of the population, and avoids premature convergence.
The gravitational search algorithm (GSA) mentioned in [20] is a novel heuristic search algorithm, where a particle is impacted by the combined gravity of all the other particles in the solution space. We use a GSA-based acceleration to replace the personal best position pBesti in (8) to enhance the correlation between particles.
At the tth iteration, the gravitational attraction of particle q on particle i in the kth dimension can be defined as:
$$F_{iq}^{k}(t)=G(t)\,\frac{M_i(t)\times M_q(t)}{R_{iq}+\varepsilon}\left[x_q^{k}(t)-x_i^{k}(t)\right]$$
where Mq(t) represents the inertial mass of the particle applying the force, Mi(t) represents the inertial mass of the particle subjected to the force, Riq = ||xi(t) − xq(t)|| denotes the Euclidean distance between particle i and particle q, and ε is a small constant to ensure the denominator is not zero. G(t) is the gravitational constant, whose value changes dynamically and can be expressed as:
$$G(t)=G_0\, e^{-\alpha\frac{t}{T}}$$
where α represents the attenuation rate, T is the max iteration times, and G0 is the initial value.
At the tth iteration, the resultant force $F_i^{k}(t)$ exerted by all the other particles acts on the ith particle in the kth dimension:
$$F_i^{k}(t)=\sum_{q=1,\,q\neq i}^{NP} rand_q\, F_{iq}^{k}(t)$$
where randq is a random number within the range [0,1].
According to Newton’s second law, the acceleration produced by this resultant force can be expressed as:
$$a_i^{k}(t)=\frac{F_i^{k}(t)}{M_i(t)}$$
The inertial mass of each particle is calculated from its fitness function value and the updating equation of inertial mass is given below:
$$m_i(t)=\frac{\mathrm{PSLL}_i(t)-gWorst(t)}{gBest(t)-gWorst(t)},\qquad M_i(t)=\frac{m_i(t)}{\sum_{q=1}^{NP} m_q(t)}$$
where PSLLi(t) represents the fitness function value of particle i at the tth iteration, and gBest(t) and gWorst(t) are the best and worst fitness values found by the swarm at the tth iteration, respectively.
Therefore, after applying the global learning strategy, the updating equation of particle velocity can be rewritten as:
$$v_{id}(t+1)=\begin{cases} w\,v_{id}(t)+c_1 r_1(t)\,a_{id}(t), & d\in C\\ w\,v_{id}(t)+c_1 r_1(t)\left[pGood_{id}(t)-x_{id}(t)\right], & d\notin C \end{cases}$$
where d = 1, 2, …, D, with D the number of dimensions; C denotes a set of q dimensions randomly selected from all D dimensions; pGoodid(t) is the best position among l candidate solutions randomly selected from PG and the neighborhood particles, along the dth dimension for the ith particle at the tth iteration; and aid(t) is the acceleration along the dth dimension for the ith particle at the tth iteration.
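The mass and acceleration computations of the gravitational search step can be sketched as follows (a simplified sketch with our own names; a scalar random weight is drawn per particle pair, and small epsilons guard the divisions):

```python
import numpy as np

def gsa_accelerations(X, psll_vals, t, T, G0=100.0, alpha=20.0, eps=1e-9, rng=None):
    """GSA accelerations.  X is the (NP, D) position matrix and
    psll_vals the fitness values (lower is better)."""
    rng = np.random.default_rng() if rng is None else rng
    NP, _ = X.shape
    G = G0 * np.exp(-alpha * t / T)                  # decaying gravitational constant
    best, worst = psll_vals.min(), psll_vals.max()
    m = (psll_vals - worst) / (best - worst + eps)   # raw masses in [0, 1]
    M = m / (m.sum() + eps)                          # normalized inertial masses
    A = np.zeros_like(X, dtype=float)
    for i in range(NP):
        for q in range(NP):
            if q == i:
                continue
            R = np.linalg.norm(X[i] - X[q])          # Euclidean distance R_iq
            F = G * M[i] * M[q] / (R + eps) * (X[q] - X[i])
            A[i] += rng.random() * F / (M[i] + eps)  # randomly weighted, F = M a
    return A
```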

3.2.2. Local Search Strategy

An algorithm should have strong local search ability to improve the convergence efficiency in the later stage of optimization, especially when it is applied to the optimal design of large-scale array. The flowchart of the local search strategy is shown in Figure 3.
A particle i can be regarded as moving from its current position xi to a new position x′i in the solution space when some dimensions of xi are mutated. Therefore, we propose a local search method in which the neighborhood of the optimal position of particle i is searched by changing only part of the elements of its position xi under the guidance of a Gaussian random variable. The position update along the jth dimension for the ith particle is given by:
$$x'_{ij}=\begin{cases}\overline{x_{ij}}, & r_j< X_{ij}(0,\sigma^2)\\ x_{ij}, & r_j\ge X_{ij}(0,\sigma^2)\end{cases}$$
where $\overline{x_{ij}}=1-x_{ij}$ denotes the complement of the binary element, Xij(0, σ2) represents a Gaussian random variable with a mean value of 0 and standard deviation σ, and rj is a random number within the range [0,1].
The mutated position x′i should replace xi if it has a better fitness function value, and the standard deviation σ should then be amplified to increase the moving distance of the particle. Otherwise, xi is kept unchanged and σ is reduced. The updating equation of the standard deviation σ can be expressed as:
$$\sigma_{k+1}=\begin{cases}\sigma_k\,\rho, & \mathrm{PSLL}(x'_i)<\mathrm{PSLL}(x_i)\\ \sigma_k\,\mu, & \mathrm{PSLL}(x'_i)\ge \mathrm{PSLL}(x_i)\end{cases}$$
where k represents the number of variations, ρ is the expansion parameter (ρ > 1), and μ is the contraction parameter (0 < μ < 1). If the standard deviation σ falls below a preset threshold σe, there is no better solution for particle i in the adjacent positions, so the update is stopped.
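The local search loop can be sketched as follows. This is an interpretation under stated assumptions: bits are flipped with a probability drawn from the magnitude of a zero-mean Gaussian with standard deviation σ, so that enlarging σ enlarges the expected moving distance, and σ is expanded by ρ on improvement and contracted by μ otherwise; the helper names are ours.

```python
import numpy as np

def local_search(x, fitness, sigma0=0.1, rho=1.5, mu=0.7, sigma_min=1e-3,
                 max_steps=50, rng=None):
    """Mutation-based local search around a binary position x.
    Flip probabilities grow with sigma; sigma adapts on success/failure
    and the search stops once sigma drops below sigma_min."""
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()
    f, sigma = fitness(x), sigma0
    for _ in range(max_steps):
        if sigma < sigma_min:            # no better neighbor found
            break
        flip = rng.random(x.size) < np.abs(rng.normal(0.0, sigma, x.size))
        x_new = np.where(flip, 1 - x, x)           # flip a few bits
        f_new = fitness(x_new)
        if f_new < f:                    # improvement: accept, widen sigma
            x, f, sigma = x_new, f_new, sigma * rho
        else:                            # no gain: discard, shrink sigma
            sigma *= mu
    return x, f
```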

3.2.3. Particle Movement Condition Monitoring

In order to avoid some particles falling into local optimum prematurely and to improve the performance of the optimization algorithm, a condition monitoring strategy is proposed to adjust the position of the particle according to the moving state of the particle.
There are two preconditions for a particle i to trigger a mutation operation. The first one is that the optimal solution of particle i has not been updated for u iterations, which can be expressed as:
Tpb i > u
where Tpbi represents the number of iterations of the optimal solution that have not been updated. The second precondition is that the moving distance of the position xi is less than the set value ε for successive h iterations, which can be expressed as:
Txb i > h
where Txbi represents the number of iterations in which the moving distance of the position xi is less than the set value ε. The moving distance is defined as the number of different elements in all dimensions before and after the particle position changes.
It can be assumed that particle i has fallen into a local optimum when both preconditions are met; an individual variation probability γa is then used to vary all dimensions of position xi. The variation equation can be represented as:
$$x'_{id}=\begin{cases}\overline{x_{id}}, & r_d<\gamma_a\\ x_{id}, & r_d\ge\gamma_a\end{cases}$$
where d = 1, 2, …, D, with D the number of dimensions, $\overline{x_{id}}=1-x_{id}$ denotes the complement of the binary element, and rd is a random number within the range [0,1].
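The two observation parameters and the triggered mutation can be sketched together (hypothetical helpers; the moving distance is the Hamming distance between successive positions, as defined above):

```python
import numpy as np

def hamming_move(x_old, x_new):
    """Moving distance: number of differing elements across all dimensions."""
    return int(np.sum(x_old != x_new))

def monitored_mutation(x, Tpb, Txb, u=10, h=5, gamma_a=0.3, rng=None):
    """If the personal best has stalled for more than u iterations (Tpb > u)
    AND the position has barely moved for more than h iterations (Txb > h),
    flip every bit independently with probability gamma_a."""
    rng = np.random.default_rng() if rng is None else rng
    if Tpb > u and Txb > h:
        flip = rng.random(x.size) < gamma_a
        return np.where(flip, 1 - x, x)
    return x
```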

3.3. Steps of Algorithm

The new Algorithm 1 exploits the hybrid search strategies (HSS) described above to improve the performance of a fundamental DPSO. Therefore, it is called DPSO-HSS. The detailed steps of the improved algorithm are summarized as follows.
Algorithm 1: DPSO with hybrid search strategies
1. Generate the initial particle swarm that satisfies the constraints. Initialize pBesti, gWorst, and gBest. Initialize the observation parameters Tpbi and Txbi. Initialize the solution set PG.
 2. Calculate the fitness function value of each particle and update pBesti, gWorst, gBest, and Tpbi. Replenish the set PG with the good solutions that have been eliminated.
 3. Update the velocity and position of the particle according to (16), (9), and (10). Determine whether the number of iterations t is larger than tL. If so, go to Step 4. Otherwise, go to Step 5.
 4. Initiate the local search strategy.
 5. Determine whether Tpbi> u and Txbi> h are both valid. If so, update the position of the particle according to (21). Otherwise, go to Step 6.
 6. Constrain the particle position according to the constraint condition and update the parameter Txbi.
 7. Do boundary treatment for particle velocity.
 8. Output gBest. Determine whether the termination conditions are met. If so, end the optimization. Otherwise, t = t + 1, and return to Step 2.
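The steps above can be condensed into a runnable skeleton. This is a structural sketch only: it implements a plain sigmoid DPSO loop plus a random repair step standing in for the filling-rate constraint of Step 6, while the hybrid strategies of Sections 3.2.1–3.2.3 are left out for brevity; all names are ours.

```python
import numpy as np

def repair(x, K, rng):
    """Randomly toggle bits until exactly K elements are 'ON' (Step 6)."""
    x = x.copy()
    on, off = np.flatnonzero(x == 1), np.flatnonzero(x == 0)
    if on.size > K:
        x[rng.choice(on, on.size - K, replace=False)] = 0
    elif on.size < K:
        x[rng.choice(off, K - on.size, replace=False)] = 1
    return x

def dpso_loop(fitness, D, K, NP=20, T=100, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Skeleton of Algorithm 1 with a basic velocity update in place of (16)."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.array([repair((rng.random(D) < 0.5).astype(int), K, rng)
                  for _ in range(NP)])                       # Step 1
    V = rng.normal(0.0, 1.0, (NP, D))
    P, pf = X.copy(), np.array([fitness(x) for x in X])      # personal bests
    g, gf = P[pf.argmin()].copy(), pf.min()                  # global best
    for _ in range(T):
        r1, r2 = rng.random((NP, D)), rng.random((NP, D))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)    # velocity update
        V = np.clip(V, -6.0, 6.0)                            # Step 7
        X = (rng.random((NP, D)) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        X = np.array([repair(x, K, rng) for x in X])         # Step 6
        f = np.array([fitness(x) for x in X])                # Step 2
        better = f < pf
        P[better], pf[better] = X[better], f[better]
        if pf.min() < gf:
            gf, g = pf.min(), P[pf.argmin()].copy()          # Step 8
    return g, gf
```

As a toy usage, minimizing the number of "ON" elements in the first half of a 16-bit layout with K = 8 should push all active elements into the second half.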

4. Numerical Results and Analysis

In this section, several examples are presented to compare the performance, effectiveness and robustness of the DPSO-HSS algorithm and some contrast algorithms.

4.1. Simulation 1: Function Optimization Tests

The algorithms tested include the proposed DPSO-HSS algorithm, RDPSO algorithm in [13], and NPSOWM algorithm in [14].
The five typical functions used to compare the performance of each algorithm are shown in Table 1. The test is to minimize the function value within a specified range of dimensions and variables. In order to compare the effects of each algorithm under the same conditions, the test of each function is run several times and its statistical results are calculated for comparison.
The three statistical characteristics as evaluation criteria are:
  • The mean value of multiple simulation results;
  • The variance of multiple simulation results;
  • The minimum value of multiple simulations.
The population size NP, the iteration times T, the dimension D, the learning factors c1 and c2, and the inertia weight w of the three algorithms are the same. The simulation results are shown in Table 2.
Among the five test functions, the proposed algorithm outperforms the other two algorithms in the mean and variance of Ackley, Rastrigin, and Sphere. In Rosenbrock's test, DPSO-HSS has a better variance. In Griewank's test, the results of DPSO-HSS and RDPSO are nearly identical. Considering the results of the above five functions comprehensively, the proposed DPSO-HSS algorithm performs well in both mean and variance compared with the RDPSO and NPSOWM algorithms. The low mean indicates the excellent global search ability and convergence of the algorithm, whereas the low variance indicates the stability of the algorithm's results over multiple runs.
The time complexity of each test function is O (D), where D is the number of dimensions. By substituting this result into the optimization of the DPSO-HSS algorithm, the time complexity of DPSO-HSS can be calculated as follows
$$T_{\mathrm{DPSO\text{-}HSS}}=O\left(t p^2 q + t p D\right)$$
where D represents the number of dimensions, p is the swarm population, q denotes the number of dimensions randomly selected from all D dimensions in (16), and t is the number of iterations. It can be seen that, compared with the general PSO algorithm, the improved algorithm has higher time complexity.

4.2. Simulation 2: Application in Planar Array Thinning with Constraints

The algorithms involved in the simulation include the proposed DPSO-HSS algorithm, RDPSO algorithm in [13], NPSOWM algorithm in [14], and MOPSO-CO algorithm in [4].
Consider a planar array consisting of 20 × 20 elements with equal spacing d = 0.5λ as an initial layout. u and v are both within the range [−1, 1] with a scanning step of 0.01. The main beam direction is (θ, φ) = (0°, 90°), that is, (u, v) = (0, 0). The filling rate is 50%, so there are 200 elements in the “ON” state. The population size NP = 100, iteration times T = 500, and dimensions D = 400 are the same for all algorithms used.
The convergence curves of PSLL using the four algorithms for array thinning are shown in Figure 4. At the early stage of optimization, the DPSO-HSS algorithm does not show higher search efficiency than the other algorithms, but it maintains good global search ability under the guidance of the comprehensive learning strategy, which aims to maintain the high diversity of the population and the correlation between particles. At the later stage of optimization, the curve of the proposed algorithm shows a clear improvement driven by the local search strategy before reaching the optimal PSLL. The trend of DPSO-HSS corresponds to the configured search strategies and meets the expected behavior.
The element distribution and beam pattern diagram of the array optimized by the DPSO-HSS algorithm are shown in Figure 5 and Figure 6, respectively. In Figure 5, the black triangles represent the array elements in the “ON” state; there are 200 such elements, which meets the filling rate of 50%. The four corner elements of the array are all in the “ON” state because the array aperture must remain unchanged. On the other hand, the direction of the main beam is (u, v) = (0, 0) with a normalized amplitude of 0 dB, and the PSLL is −18.32 dB. These results confirm the correctness of the thinned array design.
The PSLL performance can be seen in detail in Figure 7, which plots the u-cut of the beam pattern of the thinned array optimized by each of the four algorithms and illustrates the advantage of the proposed DPSO-HSS algorithm. The main lobe width of the beam pattern of each algorithm is identical. The PSLL of the DPSO-HSS algorithm is −18.32 dB, whereas those of RDPSO, NPSOWM, and MOPSO-CO are −17.15 dB, −17.42 dB, and −17.66 dB, respectively. Compared with the other three algorithms, the PSLL of DPSO-HSS is lower by 6.82%, 5.17%, and 3.74%, respectively.

4.3. Simulation 3: Beam Pattern of The Optimized Array with Different Main Beam Positions

In the above example, we set the main beam to the normal direction. Reference [21] proved that the main beam direction has no impact on the PSLL in unequally spaced linear antenna array thinning. In fact, this conclusion also holds for planar array thinning. The v-cut of the radiation beam pattern is shown in Figure 8 for different main beam directions, i.e., (θ, φ) = (0°, 90°), (θ, φ) = (30°, 45°), and (θ, φ) = (60°, −30°). The PSLLs are −18.32 dB, −18.30 dB, and −18.31 dB, respectively. The difference is less than 0.1 dB, which is consistent with the conclusion in [21].

4.4. Simulation 4: Synthesis Results of Four Algorithms for Large Planar Thinned Arrays

The array thinning effect may be affected by both the array aperture and the array filling rate. Table 3 records the PSLL and 3 dB bandwidth of the radiation beam pattern of thinned arrays optimized by each algorithm under different array apertures and filling rates. Each result is the mean value of five runs under the same simulation conditions.
The PSLL results under different filling rates show that the filling rate does have a certain influence on the optimization effect of the thinned array, and the optimization effect is reduced if too many or too few elements are turned on. The PSLL improves when the filling rate is held at 60% and the aperture is changed from 5λ to 10λ, which indicates that the aperture also influences the performance of array thinning. As shown in Table 3, the 3 dB bandwidth of the main lobe is mainly affected by the aperture: the larger the array aperture, the smaller the 3 dB bandwidth.

5. Conclusions

A novel optimization algorithm based on DPSO with hybrid search strategies is proposed to improve the performance of large planar array thinning. The proposed algorithm, named DPSO-HSS, utilizes a global learning strategy to improve the diversity of the population at the early stage of optimization. A dispersive solution set and the gravitational search algorithm are used during particle velocity updating. A local mutation strategy is then employed in the later stage of optimization so that local convergence is enhanced by continuous search around the best position of each particle. Several representative examples of large planar array thinning are provided to demonstrate the effectiveness and robustness of the DPSO-HSS algorithm. The brief comparison results are shown in Table 4.

Author Contributions

W.C., L.J., C.G., K.M. and H.Z. conceived and designed the algorithm and experiments together; W.C., L.J. and H.Z. performed the simulations and experiments and analyzed the results. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Equipment Pre-Research Foundation of China during the “14th Five-Year Plan”, grant number 629010204.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, Y.; Liu, Q.H.; Nie, Z. Reducing the Number of Elements in Multiple-Pattern Linear Arrays by the Extended Matrix Pencil Methods. IEEE Trans. Antennas Propag. 2014, 62, 652–660. [Google Scholar] [CrossRef]
  2. Haupt, R.L. Adaptively Thinned Arrays. IEEE Trans. Antennas Propag. 2015, 63, 1626–1632. [Google Scholar] [CrossRef]
  3. Rahardjo, E.T.; Sandi, E.; Zulkifli, F.Y. Design of Linear Sparse Array Based on the Taylor Line Source Distribution Element Spacing. In Proceedings of the 2017 IEEE Asia Pacific Microwave Conference (APMC), Kuala Lumpur, Malaysia, 13–16 November 2017. [Google Scholar]
  4. Cao, A.; Li, H.; Ma, S.; Jing, T.; Zhou, J. Sparse circular array pattern optimization based on MOPSO and convex optimization. In Proceedings of the 2015 Asia-Pacific Microwave Conference (APMC), Nanjing, China, 6–9 December 2015. [Google Scholar]
  5. Liu, X.; Hu, F.; Wei, X.; Yu, D.; Gang, G. Cuboid sparse array synthesis for sensor selection by convex optimization with constrained beam pattern based on WBAN. In Proceedings of the International Conference on Wireless Communications & Signal Processing, Nanjing, China, 15–17 December 2015. [Google Scholar]
  6. Keizer, W. Linear Array Thinning Using Iterative FFT Techniques. IEEE Trans. Antennas Propag. 2008, 56, 2757–2760. [Google Scholar] [CrossRef]
  7. Gu, L.; Zhao, Y.W.; Zhang, Z.P.; Wu, L.F.; Hu, J. Adaptive Learning of Probability Density Taper for Large Planar Array Thinning. IEEE Trans. Antennas Propag. 2020, 69, 155–163. [Google Scholar] [CrossRef]
  8. Cheng, Y.F.; Shao, W.; Zhang, S.J.; Li, Y.P. An Improved Multi-Objective Genetic Algorithm for Large Planar Array Thinning. IEEE Trans. Magn. 2016, 52, 1–4. [Google Scholar] [CrossRef]
  9. Dai, D.; Yao, M.; Ma, H.; Wei, J.; Zhang, F. An Asymmetric Mapping Method for the Synthesis of Sparse Planar Arrays. IEEE Antennas Wirel. Propag. Lett. 2018, 17, 70–73. [Google Scholar] [CrossRef]
  10. Durmus, A. Novel Metaheuristic Optimization Algorithms for Sidelobe Suppression of Linear Antenna Array. In Proceedings of the 2021 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey, 21–23 October 2021; pp. 291–294. [Google Scholar]
  11. Guo, J.; Xue, P.; Zhang, C. Optimal Design of Linear and Circular Antenna Arrays Using Hybrid GWO-PSO Algorithm. In Proceedings of the 2021 3rd International Academic Exchange Conference on Science and Technology Innovation (IAECST), Guangzhou, China, 10–12 December 2021; pp. 138–141. [Google Scholar]
  12. Vankayalapati, S.; Pappula, L.; Kumar, K.; Panda, P.K. Thinned Planar Antenna Array Synthesis: A Multiobjective Improved Binary Cat Swarm Optimization Approach. In Proceedings of the 2021 7th International Conference on Signal Processing and Communication (ICSC), Noida, India, 25–27 November 2021; pp. 129–132.
  13. Brahma, D.; Deb, A. Optimal Design of Antenna Array using Tuned Random Drift Particle Swarm Optimization Algorithm. In Proceedings of the 2020 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Vancouver, BC, Canada, 2020.
  14. Bera, R.; Mandal, D.; Ghoshal, S.P.; Kar, R. Wavelet Mutation based Novel Particle Swarm Optimization technique for comparison of the performance of single ring planar antenna arrays. In Proceedings of the 2016 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 6–8 April 2016.
  15. Zhang, H.; Zheng, J.; Bai, B.; Zhou, Y. Optimal Design of Sparse Array for Ultrasonic Total Focusing Method by Binary Particle Swarm Optimization. IEEE Access 2020, 8, 111945–111953.
  16. Zhang, Q.; Mao, J.; Zhang, Y.; Lu, M.; Li, R.; Liu, X.; Liu, Y.; Yang, R.; Wang, X.; Geng, Y.; et al. Multiple-Focus Patterns of Sparse Random Array Using Particle Swarm Optimization for Ultrasound Surgery. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2022, 69, 565–579.
  17. Li, H.; He, F.; Chen, Y.; Luo, J. Multi-objective Self-organizing Optimization for Constrained Sparse Array Synthesis. Swarm Evol. Comput. 2020, 58, 100743.
  18. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  19. Li, X. Niching Without Niching Parameters: Particle Swarm Optimization Using a Ring Topology. IEEE Trans. Evol. Comput. 2010, 14, 150–169.
  20. Doraghinejad, M.; Nezamabadi-pour, H.; Sadeghian, A.H.; Maghfoori, M. A hybrid algorithm based on gravitational search algorithm for unimodal optimization. In Proceedings of the 2012 2nd International eConference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 18–19 October 2012; pp. 129–132.
  21. Miranda, A.V.; Ashwin, P.; Sharan, P.; Gangwar, V.S.; Singh, A.K.; Singh, S.P. An efficient synthesis of unequally spaced antenna array with electronic scan capability utilizing particle swarm optimization. In Proceedings of the 2017 IEEE MTT-S International Microwave and RF Conference (IMaRC), Ahmedabad, India, 11–13 December 2017; pp. 255–258.
Figure 1. A planar array with some elements “ON” and the others “OFF”.
Figure 2. Schematic representation of the ring topology and the solution set PG.
Figure 3. The flowchart of the local search strategy.
Figure 4. The convergence curve of PSLL using four kinds of algorithms in thinning the 10λ diameter array.
Figure 5. The array elements’ distribution of an optimized array using the DPSO-HSS algorithm.
Figure 6. The radiation beam pattern diagram of an optimized array using the DPSO-HSS algorithm.
Figure 7. The u-cut of the beam pattern diagram of a thinned array optimized by each algorithm.
Figure 8. The v-cut of the radiation beam pattern diagram of the optimized array at different main beam positions.
Table 1. Description of test functions and settings of simulation conditions.

| Name | Range | Dimension | Minimum | Population | Running Times | Number of Iterations |
|---|---|---|---|---|---|---|
| Ackley | −30 ≤ xi ≤ 30 | 10 | 0 | 150 | 50 | 500 |
| Rastrigin | −10 ≤ xi ≤ 10 | 10 | 0 | 200 | 50 | 1000 |
| Sphere | −10 ≤ xi ≤ 10 | 30 | 0 | 100 | 100 | 500 |
| Rosenbrock | −30 ≤ xi ≤ 30 | 10 | 0 | 200 | 50 | 1000 |
| Griewank | −30 ≤ xi ≤ 30 | 10 | 0 | 100 | 100 | 500 |
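The five test functions in Table 1 are standard unconstrained benchmarks, each with a global minimum of 0. The paper does not reproduce their formulas, so the following sketch uses the conventional definitions for reference:

```python
import numpy as np

# Conventional definitions of the five benchmarks in Table 1.
# All have a global minimum of 0 (at the origin, except Rosenbrock at x = 1).
def ackley(x):
    x = np.asarray(x, dtype=float)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x**2)))
            - np.exp(np.mean(np.cos(2 * np.pi * x))) + 20.0 + np.e)

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def sphere(x):
    x = np.asarray(x, dtype=float)
    return np.sum(x**2)

def rosenbrock(x):
    x = np.asarray(x, dtype=float)
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

def griewank(x):
    x = np.asarray(x, dtype=float)
    i = np.arange(1, len(x) + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```

Sphere is unimodal, Rosenbrock has a narrow curved valley, and the other three are highly multimodal, which is why Table 2 reports statistics over repeated runs rather than single values.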
Table 2. Statistical properties of function tests using different algorithms.

| Function | Statistical Properties | DPSO-HSS | RDPSO [13] | NPSOWM [14] |
|---|---|---|---|---|
| Ackley | Mean | 0.0096 | 2.8385 | 2.7257 |
| | Variance | 0.0005 | 1.2229 | 0.2512 |
| | Minimum | 0.0024 | 6.9407 × 10⁻⁵ | 1.4062 |
| Rastrigin | Mean | 7.5486 | 11.8201 | 32.1085 |
| | Variance | 9.4548 | 25.2794 | 98.8685 |
| | Minimum | 2.0026 | 3.9798 | 13.7634 |
| Sphere | Mean | 1.2493 | 2.1050 | 5.0407 |
| | Variance | 0.2234 | 1.8791 | 1.7301 |
| | Minimum | 0.5369 | 0.3208 | 2.2356 |
| Rosenbrock | Mean | 3.8523 | 2.3869 | 39.3544 |
| | Variance | 0.6871 | 7.4421 | 2.0763 × 10³ |
| | Minimum | 1.3338 | 1.3120 × 10⁻¹³ | 6.9166 |
| Griewank | Mean | 0.0801 | 0.0764 | 0.3513 |
| | Variance | 0.0027 | 0.0031 | 0.0104 |
| | Minimum | 0.0353 | 1.9080 × 10⁻⁶ | 0.1228 |
Table 3. Synthesis results obtained by four algorithms for large planar thinned arrays.

| Aperture Diameter (×λ) | Fill Factor (%) | DPSO-HSS PSLL (dB) | RDPSO [13] PSLL (dB) | NPSOWM [14] PSLL (dB) | MOPSO-CO [4] PSLL (dB) | 3 dB Beamwidth (u) |
|---|---|---|---|---|---|---|
| 5 | 100 | −12.97 | −12.97 | −12.97 | −12.97 | 0.179 |
| 5 | 90 | −15.46 | −15.25 | −15.31 | −15.38 | 0.182 |
| 5 | 80 | −16.23 | −15.83 | −16.07 | −16.12 | 0.185 |
| 5 | 70 | −16.84 | −16.48 | −16.53 | −16.70 | 0.181 |
| 5 | 60 | −15.90 | −15.59 | −15.76 | −15.64 | 0.180 |
| 6 | 60 | −16.72 | −16.49 | −16.62 | −16.55 | 0.159 |
| 7.5 | 60 | −16.81 | −16.45 | −16.78 | −16.57 | 0.125 |
| 9 | 60 | −17.19 | −16.77 | −17.02 | −16.99 | 0.105 |
| 10 | 60 | −17.43 | −16.98 | −17.14 | −17.23 | 0.094 |
| 10 | 50 | −18.01 | −17.45 | −17.66 | −17.89 | 0.096 |
| 10 | 40 | −16.84 | −16.41 | −16.53 | −16.59 | 0.095 |
| 10 | 30 | −15.45 | −15.06 | −15.18 | −15.13 | 0.095 |
| 10 | 20 | −13.01 | −12.83 | −12.99 | −12.92 | 0.092 |
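The PSLL values in Table 3 are read off the normalized array-factor pattern of each thinned layout. As an illustrative sketch only (the paper's arrays are circular planar apertures; this example reduces to a one-dimensional u-cut and assumes isotropic elements and λ/2 spacing, all assumptions of this sketch, not the paper's method):

```python
import numpy as np

def array_factor_u(states, d=0.5, u=None):
    """Normalized array-factor magnitude (dB) along the u-cut for a linear
    slice of a thinned array. `states` is a 0/1 vector of element ON/OFF
    states; `d` is the element spacing in wavelengths."""
    if u is None:
        u = np.linspace(-1.0, 1.0, 2001)
    n = np.arange(len(states))
    # AF(u) = sum_n s_n * exp(j * 2*pi * d * n * u), evaluated on the u grid
    af = np.abs(np.exp(1j * 2 * np.pi * d * np.outer(u, n)) @ states)
    return u, 20 * np.log10(af / af.max() + 1e-12)

def psll(u, af_db, main_lobe_halfwidth=0.2):
    """Peak side-lobe level: the highest pattern level outside the
    main-lobe region |u| <= main_lobe_halfwidth."""
    side = np.abs(u) > main_lobe_halfwidth
    return af_db[side].max()
```

For a fully populated 10-element cut (all states ON), this yields a PSLL of roughly −13 dB, consistent with the familiar uniform-array side-lobe level seen in the 100% fill-factor row of Table 3; switching elements OFF redistributes the aperture taper, which is how thinning trades fill factor against PSLL.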
Table 4. Comparison of the proposed algorithm and other algorithms in key performance metrics.

| Metrics | DPSO-HSS | MOPSO-CO [4] | RDPSO [13] | NPSOWM [14] |
|---|---|---|---|---|
| Efficiency | Normal | Normal | Good | Good |
| Stability | Good | Normal | Good | Normal |
| PSLL | Good | Normal | Normal | Normal |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cai, W.; Ji, L.; Guo, C.; Mei, K.; Zeng, H. Constrained Planar Array Thinning Based on Discrete Particle Swarm Optimization with Hybrid Search Strategies. Sensors 2022, 22, 7656. https://doi.org/10.3390/s22197656