Article

Temperature Characteristics Modeling for GaN PA Based on PSO-ELM

1 School of Intelligent Science and Engineering, Qinghai Minzu University, Xining 810007, China
2 School of Electronic Science and Engineering, University of Electronic Science and Technology, Chengdu 610050, China
3 Tong Fang Electronic Technology Company, Jiujiang 332000, China
* Author to whom correspondence should be addressed.
Micromachines 2024, 15(8), 1008; https://doi.org/10.3390/mi15081008
Submission received: 14 July 2024 / Revised: 31 July 2024 / Accepted: 2 August 2024 / Published: 5 August 2024

Abstract

To address performance prediction and design optimization of power amplifiers (PAs), this paper models the performance parameters of a Gallium Nitride high-electron-mobility transistor (GaN HEMT) PA at different temperatures using both the particle swarm optimization–extreme learning machine (PSO-ELM) and the extreme learning machine (ELM). The prediction accuracy of the PSO-ELM model is superior to that of ELM, with a minimum mean square error (MSE) of 0.0006, which indicates that the PSO-ELM model has a stronger generalization ability when dealing with the nonlinear relationship between temperature and PA performance. This investigation can therefore provide vital theoretical support for performance optimization in PA design.

1. Introduction

Today, with the rapid development of wireless communication technology, power amplifiers (PAs) play an important role in modern wireless communication systems, and their performance has a decisive influence on the transmission quality and efficiency of the whole system [1,2]. Meanwhile, the requirements for the stability and reliability of PAs in various applications are increasingly stringent [3,4]. Therefore, it is of great significance to investigate the nonlinear relationship between PA performance and temperature.
Previous studies have found that temperature causes some degree of performance degradation [5,6,7,8,9,10], and several scholars have investigated the nonlinear relationship between PA performance and temperature. For example, in 2018, Zhou proposed a neural network modeling method for PAs that takes temperature effects into account [11]. In 2021, Jha studied the behavioral modeling of a Gallium Nitride (GaN) class-AB dual-frequency PA at different temperatures and humidities and emphasized the importance of the environment for PAs [12]. In 2022, a Support Vector Machine (SVM) was utilized to model a Complementary Metal Oxide Semiconductor (CMOS) PA [13]. In the same year, the S-parameters of PAs at different temperatures were modeled with BP neural networks by Zhao [14], while Lin discussed the application of X-parameters to the modeling of microwave power devices [15]. In 2023, Yang modeled the key indicators of RF amplifiers using an extreme learning machine (ELM) and verified the accuracy of the model [16]. In 2024, Lin conducted behavioral modeling of a Gallium Arsenide (GaAs) pseudomorphic high-electron-mobility transistor (pHEMT) monolithic microwave integrated circuit (MMIC) high-gain PA at different temperatures [17]. However, how to build models with stronger generalization ability that predict PA performance accurately under complex conditions remains a challenge. In recent years, ELM has emerged as a novel machine learning algorithm under continuous advancement, and its rapid and efficient learning capability has shown significant advantages in many fields. It can effectively handle complex nonlinear relationships and accurately capture the impact of factors such as temperature.
However, the performance of ELM is limited by the number of hidden-layer nodes and by its randomly initialized parameters. To further optimize the performance of ELM, particle swarm optimization (PSO) and ELM are combined here to achieve global optimization of the hidden-layer parameters and improve modeling accuracy. A GaN PA is used as the modeling object, and its performance parameters are predicted with PSO-ELM and ELM at −40 °C, 0 °C, 40 °C, 80 °C, and 120 °C, respectively. The prediction results show that PSO-ELM achieves higher accuracy than ELM, with a minimum mean square error (MSE) of 0.0006.
The paper is organized as follows: the modeling object and methods are explained in Section 2, as well as the detailed modeling procedure. In Section 3, the modeling results are discussed. Finally, concluding remarks are presented in Section 4.

2. Modeling Object and Methods

2.1. Modeling Object

In this paper, a broadband high-efficiency GaN PA is chosen as the object. The step-matching structure and harmonic control network are used to control the second and third harmonics in this PA, which can greatly improve its efficiency and bandwidth. The physical photograph of the PA is shown in Figure 1.
After the design and simulation in ADS, a series of temperature simulations was carried out to obtain the S-parameters and large-signal characteristics of the PA at different temperatures.

2.2. Modeling Methods

2.2.1. ELM

ELM, proposed by Huang in 2004 [18], is a feed-forward neural network (FNN) with a single hidden layer. It has the advantages of simple and fast training and strong generalization ability, and it is therefore widely used in pattern recognition [19], regression analysis [20], classification [21], and other fields. The basic structure of ELM is shown in Figure 2: it is composed of an input layer, a hidden layer, an output layer, and the neurons of each layer. The numbers of neurons in the input and output layers are determined by the input vector X = [X1, X2, …, Xn] and the output vector Y = [Y1, Y2, …, Yn], while the number of hidden-layer neurons can be adjusted according to the complexity of the nonlinear problem. During training, the input weight matrix and the hidden-layer threshold matrix are randomly selected, so the solution is obtained by adjusting only the number of neurons in the hidden layer [22].
There are three steps for ELM training, detailed as follows:
Firstly, it is necessary to determine the numbers of neurons in the input layer and in the hidden layer (L hidden neurons, indexed by j). Meanwhile, the input vector X, the weight w, and the bias b should also be given according to the nonlinear problem.
Then, after selecting the activation function, the output matrix of H in the hidden layer can be calculated by Formula (1):
$$H = \sum_{j=1}^{L} \beta_j \, g(\omega_j X + b_j)$$
where g is a nonlinear activation function that is needed for nonlinear classification, and it is used to approximate the nonlinear target function.
Finally, for the output sample T, the output weight matrix can be obtained from the least-squares solution. Thus, the estimated value $\hat{\beta}$ is given by Equation (2):

$$\hat{\beta} = H^{+} T$$
where $H^{+}$ is the Moore–Penrose generalized inverse of the hidden-layer output matrix H.
Through the above process, ELM is trained and tested for several cycles to generate an ELM prediction model, until the required accuracy is reached.
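For illustration, the following is a minimal Python/NumPy sketch of this training procedure; the function names and the random-initialization ranges are assumptions for the example, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation g used for the hidden layer."""
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(X, T, L, rng=None):
    """Train a single-hidden-layer ELM.

    X: (n_samples, n_inputs) training inputs.
    T: (n_samples, n_outputs) training targets.
    L: number of hidden-layer neurons.
    """
    rng = rng or np.random.default_rng(0)
    n_inputs = X.shape[1]
    w = rng.uniform(-1.0, 1.0, size=(n_inputs, L))  # random input weights (kept fixed)
    b = rng.uniform(-1.0, 1.0, size=(1, L))         # random hidden biases (kept fixed)
    H = sigmoid(X @ w + b)                          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                    # beta_hat = H^+ T, Equation (2)
    return w, b, beta

def predict_elm(X, w, b, beta):
    """Predict outputs for new inputs with the trained ELM."""
    return sigmoid(X @ w + b) @ beta
```

Only the number of hidden neurons L is tuned; the random (w, b) stay fixed, which is what makes ELM training fast.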

2.2.2. PSO-ELM

PSO was first proposed by Kennedy and Eberhart in 1995 [23]; it is an intelligent optimization algorithm that simulates the foraging behavior of bird flocks. Its basic idea is to find the optimal solution through cooperation and information sharing among individuals. To optimize the modeling, PSO and ELM are combined into PSO-ELM, which calculates the fitness of each particle through the particle swarm operation and continually adjusts the velocity and position of the particles to obtain the optimal weights and thresholds. The global optimal solution is then obtained through group cooperation once the preset termination conditions of the network are met, yielding predictions with high efficiency and accuracy.
The velocity update of each particle is shown in Equation (3):

$$V_i^{k+1} = \omega V_i^k + c_1 r_1 \left(P_i^k - X_i^k\right) + c_2 r_2 \left(P_g^k - X_i^k\right)$$
The position update of each particle is shown in Equation (4):
$$X_i^{k+1} = X_i^k + V_i^{k+1}$$
where k is the iteration number; $V_i^k$ is the particle velocity; $P_i^k$ is the optimal position of the current particle, i.e., the best position that particle i has found from generation 0 to generation k; $X_i^k$ is the particle position; $P_g^k$ is the best position found by the whole population; $c_1$ and $c_2$ are the learning factors, which affect the convergence of the particles; $r_1$ and $r_2$ are random numbers; and $\omega$ is the inertia weight.
These updates are repeated until the swarm converges.
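A minimal sketch of one PSO iteration implementing Equations (3) and (4) is given below; the function name and the default parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(X, V, P_best, g_best, w=0.9, c1=1.5, c2=1.5):
    """One PSO iteration implementing Equations (3) and (4).

    X, V: (n_particles, dim) positions and velocities.
    P_best: per-particle best positions; g_best: (dim,) global best position.
    """
    r1 = rng.random(X.shape)  # random numbers r1, r2 drawn in [0, 1)
    r2 = rng.random(X.shape)
    V_new = w * V + c1 * r1 * (P_best - X) + c2 * r2 * (g_best - X)  # Equation (3)
    X_new = X + V_new                                                # Equation (4)
    return X_new, V_new
```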

2.3. Modeling Process

2.3.1. Modeling of ELM

According to the above analysis, ELM is well suited to modeling the nonlinear relationship between temperature and PA performance. The modeling flowchart of ELM for this GaN PA is shown in Figure 3.
It can be seen that the major process for the ELM model is as follows:
Step 1: The simulation data of the PA at different temperatures are divided into training data and testing data.
Step 2: Data normalization. Here, the inputs and outputs of the training set are normalized to suit the activation function.
Step 3: ELM model construction.
Step 4: Sigmoid is selected as the activation function.
Step 5: Calculate the hidden layer output matrix. Using randomly generated weights and biases, the output matrix of the training set and testing set can be calculated.
Step 6: Calculate the weight of the output layer. The least square method is used to calculate the optimal weight from the hidden layer to the output layer.
Step 7: Model evaluation. Use the testing sets to evaluate the performance of the model.
Step 8: The testing set is compared with the predicted results of ELM; if the error is larger than 0.001, the number of neurons and the weights are adjusted, and when the error is less than 0.001 the prediction model is generated. The mean square error is chosen as the index of model accuracy here; its magnitude reflects the deviation of the predicted values from the true values, and it is obtained from Equation (5) [24]:
$$MSE = \frac{1}{n} \sum_{i=1}^{n} \left(P_i - T_i\right)^2$$
where MSE is the mean square error, n is the number of prediction samples, $P_i$ is the ith predicted value, $T_i$ is the corresponding ith expected value, and $\bar{P}$ is the mean of the predicted values (used in Equation (6) below).
In addition, to further evaluate the accuracy of the model, the Nash–Sutcliffe Efficiency (NSE) is also calculated, from Equation (6) [25]:
$$NSE = 1 - \frac{\sum_{i=1}^{n} \left(P_i - T_i\right)^2}{\sum_{i=1}^{n} \left(P_i - \bar{P}\right)^2}$$
Step 9: Generate the prediction model. If the error meets the accuracy requirement, the prediction model is generated and the modeling process ends.
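For concreteness, the two evaluation metrics of Equations (5) and (6) can be computed as in the following sketch; the helper names are illustrative, and the NSE follows the form given in the paper's Equation (6).

```python
import numpy as np

def mse(P, T):
    """Mean square error, Equation (5)."""
    P, T = np.asarray(P), np.asarray(T)
    return float(np.mean((P - T) ** 2))

def nse(P, T):
    """Nash-Sutcliffe Efficiency in the form of Equation (6);
    values close to 1 indicate a good fit."""
    P, T = np.asarray(P), np.asarray(T)
    return 1.0 - float(np.sum((P - T) ** 2) / np.sum((P - P.mean()) ** 2))
```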

2.3.2. Modeling of PSO-ELM

The random generation of w and b in the initialization of ELM is unstable and will affect the prediction accuracy [26]. Therefore, PSO is introduced to optimize the values of w and b of ELM. The modeling flowchart of PSO-ELM is shown in Figure 4.
This process consists of the following steps:
Step 1: Data acquisition. T, F, and Pin are the independent variables, and S21, S11, S22, Pout, and Gain are the dependent variables.
Step 2: Training and testing sets are divided.
Step 3: Data normalization.
Step 4: PSO-ELM model construction.
Step 5: Initialization of the PSO algorithm. Here, the number of hidden-layer nodes is 8, the population size is 25, the maximum number of particle iterations is 150, the learning factors are 1.5, the maximum inertia weight is 0.9, and the minimum inertia weight is 0.4. Meanwhile, the particle positions lie in [0, 1], the activation function of the hidden layer is sigmoid, the maximum number of ELM training iterations is 1000, the learning rate is 0.0001, and the error accuracy is 0.0004.
Step 6: Calculate the initial fitness value and find out the individual and global optimal position.
Step 7: Iteration of the PSO algorithm. The updated fitness values are calculated, and the individual and global optimal positions are updated.
Step 8: Update particle position and velocity. The position and velocity of the particle are updated according to the current position and velocity and the information of the individual and global optimal position.
Step 9: ELM training and prediction. The ELM is trained with the optimized parameters and used to predict the testing set; meanwhile, the training and testing data are fitted and the predicted results are de-normalized.
Step 10: Result analysis.
Here, the mean square error is again selected as the major index of model accuracy, and NSE is also used for evaluation; a sketch of the particle-fitness evaluation used in Steps 6 and 7 is given below.
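To illustrate how PSO wraps around ELM in this procedure, the sketch below shows one plausible fitness function: each particle encodes a candidate (w, b) pair, the ELM output weights are solved for that pair, and the fitness is the resulting validation MSE. The encoding and function name are assumptions for the example, not the authors' code.

```python
import numpy as np

def particle_fitness(particle, X_tr, T_tr, X_val, T_val, n_inputs, L):
    """Fitness of one PSO particle: decode its (w, b), solve the ELM output
    weights on the training set, and return the validation MSE (Equation (5))."""
    w = particle[: n_inputs * L].reshape(n_inputs, L)  # candidate input weights
    b = particle[n_inputs * L :].reshape(1, L)         # candidate hidden biases
    H = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))          # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ T_tr                    # Equation (2)
    H_val = 1.0 / (1.0 + np.exp(-(X_val @ w + b)))
    return float(np.mean((H_val @ beta - T_val) ** 2))  # smaller is fitter
```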
In addition, the performance and generalization ability of models can be affected by the hyperparameters of machine learning [23]. Therefore, in order to optimize the complexity, the learning ability, and the generalization ability of the model and improve the prediction accuracy, it is necessary to adjust its hyperparameters.
The first hyperparameter is the number of hidden layer nodes, which can affect the complexity and learning ability of the model directly. The number of hidden layer neurons of PSO-ELM and ELM is given as Formula (7):
$$S = \sqrt{m + n} + \delta$$
where S is the number of neurons in the hidden layer, n is the number of neurons in the input layer, m is the number of neurons in the output layer, and δ is a constant between 1 and 10.
To determine the best number of hidden-layer neurons, experiments are conducted around this empirical formula, and the MSE averaged over ten runs is used to score each candidate count (a sketch of this sweep is given below). When the number of hidden-layer neurons in PSO-ELM and ELM is 12, the MSE is smallest; therefore, 12 is taken as the optimal number of hidden-layer neurons for both models.
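A sketch of such a sweep follows, reusing the train_elm, predict_elm, and mse helpers from the earlier sketches; the candidate range and seeding scheme are illustrative assumptions.

```python
import numpy as np

def select_hidden_neurons(X_tr, T_tr, X_te, T_te, candidates=range(2, 16, 2), runs=10):
    """Pick the hidden-neuron count with the lowest MSE averaged over `runs`
    random initializations (the paper reports 12 as the minimum for both models)."""
    best_S, best_err = None, np.inf
    for S in candidates:
        errs = []
        for seed in range(runs):
            rng = np.random.default_rng(seed)           # fresh random (w, b) each run
            w, b, beta = train_elm(X_tr, T_tr, S, rng)  # from the ELM sketch above
            errs.append(mse(predict_elm(X_te, w, b, beta), T_te))
        avg = float(np.mean(errs))
        if avg < best_err:
            best_S, best_err = S, avg
    return best_S, best_err
```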
The second hyperparameter is the activation function. Sigmoid and ReLU are the most widely used in artificial neural networks. Here, the modeling effects of these two activation functions are compared, and sigmoid is selected.
The third hyperparameter is the learning rate, which determines the step size of parameter updates. An appropriate learning rate makes the objective function converge to the global minimum in a reasonable time. In this paper, the learning rates of PSO-ELM and ELM are set to 0.0001 and 0.001, respectively.
The fourth hyperparameter is the number of iterations, which determines the number of training rounds. In this paper, the numbers of iterations of PSO-ELM and ELM are set to 150 and 1000, respectively.
In addition, the particle number, inertia weight, and learning factors of PSO are also important hyperparameters. In this paper, the population size of PSO-ELM is 25, the learning factors are 1.5, the maximum inertia weight is 0.9, the minimum inertia weight is 0.4, and the particle positions lie in [0, 1]. These settings are collected below.
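For reference, the reported settings can be gathered into a single illustrative configuration; the dictionary keys and the inertia-decay note are assumptions, not the authors' code.

```python
# PSO-ELM hyperparameters reported in this section (illustrative collection).
pso_elm_config = {
    "hidden_neurons": 12,            # selected by the averaged-MSE sweep above
    "population_size": 25,
    "max_pso_iterations": 150,
    "c1": 1.5, "c2": 1.5,            # learning factors
    "w_max": 0.9, "w_min": 0.4,      # inertia weight bounds (commonly decayed from w_max to w_min)
    "position_range": (0.0, 1.0),    # particle positions lie in [0, 1]
    "activation": "sigmoid",
    "learning_rate": 1e-4,
    "error_accuracy": 4e-4,
}
```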

3. Results and Discussion of Modeling

3.1. ELM Prediction Model

The prediction model based on ELM is built with a 3-12-5 structure, as shown in Figure 5. As listed in Table 1, the input vector is composed of the temperature T, the frequency F, and the input power Pin, and the output vector consists of the small-signal gain S21, the input return loss S11, the output return loss S22, the output power Pout, and the Gain. Here, Pin is the power of the input signal, Pout is the available power of the output two-port network, and Gain is the ratio of the output signal power (in watts) to the input signal power (in watts), as shown in Equation (8) [27]:
$$Gain = \frac{P_{out}}{P_{in}}$$
In the ideal case, the circuit is expected to have a low S11, which indicates that the input port is well matched. This leads to smaller losses and facilitates efficient power transfer into the circuit; therefore, the value of S11 directly affects the transmission efficiency of Pin.
Here, T is swept from −40 °C to 120 °C with a step of 40 °C, F from 0.2 GHz to 5 GHz with a step of 20 MHz, and Pin is fixed at 29 dBm.
In detail, the number of input neurons is 3, expressed as the vector [F, Pin, T], and the number of output neurons is 5, expressed as the vector [S11, S21, S22, Pout, Gain]. Finally, the number of hidden-layer neurons is set to 12. An illustrative construction of the input sweep is sketched below.
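The following snippet shows one hypothetical way to assemble this sweep from the values just stated; the array layout and sampling grid are assumptions for the example.

```python
import numpy as np

# Hypothetical assembly of the input sweep: temperatures and frequencies per
# Table 1, with Pin fixed at 29 dBm for the large-signal case described above.
T_vals = np.arange(-40.0, 121.0, 40.0)        # temperature: -40, 0, 40, 80, 120 degC
F_vals = np.arange(0.2e9, 5.0e9 + 1.0, 20e6)  # frequency: 0.2-5 GHz, 20 MHz step
PIN_DBM = 29.0                                # fixed input power, dBm

# Each row is one input vector [F, Pin, T] for the 3-12-5 model.
grid = np.array([[f, PIN_DBM, t] for t in T_vals for f in F_vals])
print(grid.shape)  # (1205, 3): 5 temperatures x 241 frequency points
```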
The predicted results of the S-parameters, Pout, and Gain based on ELM are shown in Figure 6, Figure 7 and Figure 8, respectively. The fitting results of S11, S21, Pout, and Gain are good, and the minimum MSE reaches 0.0109.

3.2. PSO-ELM Prediction Model

The structure of the prediction model based on PSO-ELM is the same as that of ELM, as shown in Figure 5. To compare the accuracy of the two models, the same input is used. The predicted results of the S-parameters, Pout, and Gain based on PSO-ELM are shown in Figure 9, Figure 10 and Figure 11, respectively. The fitting results of S11, S21, Pout, and Gain are good, and the minimum mean square error reaches 0.0006.

3.3. Comparison of Modeling Results between the Two Methods

Moreover, the MSEs of ELM and PSO-ELM with the same numbers of hidden-layer neurons are given in Table 2. With 12 neurons in the hidden layer, the MSE of PSO-ELM is 0.0006, compared with 0.0109 for ELM. For every tested neuron count, PSO-ELM is more accurate than ELM, and for both models the accuracy generally improves as the number of hidden-layer neurons grows toward 12, although it degrades again at 14.
Then, the two models in this paper are compared with other published methods in terms of MSE, as shown in Table 3. From the perspective of training error, the accuracy of the other models is lower than that of PSO-ELM. It can be concluded that PSO-ELM has superior accuracy and ease of use, and it is more suitable for modeling the large-signal behavior of GaN HEMTs.

4. Conclusions

To accurately capture the nonlinear relationship between temperature and PA performance, PSO-ELM and ELM have been used to construct nonlinear models. The prediction results show that the mean square error of PSO-ELM is 0.0006, and its predictions and errors are better than those of ELM. Compared with other models, it also presents superior performance. In conclusion, this study is of great significance for optimizing circuit design and improving the reliability of PAs in complex environments.

Author Contributions

Methodology, Q.L.; testing, M.W.; validation, M.W.; investigation, Q.L. and M.W.; resources, Q.L.; data curation, M.W.; writing—original draft preparation, M.W.; writing—review and editing, Q.L.; visualization, M.W.; supervision, Q.L.; funding acquisition, Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation (62161046), the West Light Youth Talent Program of the Chinese Academy of Sciences (1_14), and the Postdoctoral Project with the University of Electronic Science and Technology and Tongfang Electronic Technology Company.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Ghavidel, A.; Tamjid, F.; Fathy, A. GaN widening possibilities for PAs: Wide-band GaN power amplifiers utilize the technology’s special properties. IEEE Microw. Mag. 2017, 18, 46–55. [Google Scholar] [CrossRef]
  2. Liu, C.; Shi, C.J.R. Design of the Class-E Power Amplifier Considering the Temperature Effect of the Transistor On-Resistance for Sensor Applications. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 1705–1709. [Google Scholar] [CrossRef]
  3. Alim, M.A.; Rezazadeh, A.A. Temperature-Dependent DC and Small-Signal Analysis of AlGaAs/InGaAs pHEMT for High-Frequency Applications. IEEE Trans. Electron Devices 2016, 63, 1005–1012. [Google Scholar] [CrossRef]
  4. Qi, Q.; Chen, Z.A. A K-Band CMOS Amplifier With Temperature Compensation for Gain Variation Reduction. IEEE Microw. Wirel. Compon. Lett. 2018, 28, 150–152. [Google Scholar] [CrossRef]
  5. Shin, S.C.; Leung, M.C.H.; Hsiao, S.W. A temperature variation compensated 60-GHz low-noise amplifier in 90-nm CMOS technology. In Proceedings of the Asia-Pacific Microwave Conference, Melbourne, VIC, Australia, 5–8 December 2011; pp. 211–214. [Google Scholar]
  6. Zhao, H.; Zhou, S.H.; Nie, M.N. Experimentally investigating the performance degradation of the CMOS PA at different temperatures. AIP Adv. 2021, 11, 115205–115212. [Google Scholar]
  7. Altet, J.; Aragones, X.; Barajas, E.; Gisbert, X.; Martínez, S.; Mateo, D. Aging compensation in a class-A high-frequency amplifier with DC temperature measurements. Sensors 2023, 23, 7069. [Google Scholar] [CrossRef] [PubMed]
  8. Pour, F.L.; Ha, D.S. A temperature compensated 5 GHz GaN on SiC power amplifier. In Proceedings of the IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 9–12 August 2020; pp. 549–553. [Google Scholar]
  9. Zhou, S.H. Experimentally investigating the degradation of small-signal gain for a GaN class-AB dual-band power amplifier under high temperature and humidity conditions. AIP Adv. 2020, 10, 125219–125223. [Google Scholar] [CrossRef]
  10. Duraij, M.; Xiao, Y.; Zsurzsan, G.; Zhang, Z. A Comparative Study on Class AB and Class D Amplifier Topologies for High Temperature Power Line Communication Circuits. In Proceedings of the IEEE International Symposium on Power Line Communications and its Applications (ISPLC), Aachen, Germany, 26–27 October 2021; pp. 102–107. [Google Scholar]
  11. Zhou, S.H.; Fu, H.P.; Ma, J.G.; Zhang, Q.J. A Neural Network Modeling Approach to Power amplifiers Taking into Account Temperature Effects. In Proceedings of the IEEE/MTT-S International Microwave Symposium-IMS, Philadelphia, PA, USA, 10–15 June 2018; pp. 1028–1031. [Google Scholar]
  12. Zhou, S.; Jha, A.K. Characteristics Modeling of GaN Class-AB Dual-Band PA Under Different Temperature and Humidity Conditions. IEEE Access 2021, 9, 121632–121644. [Google Scholar] [CrossRef]
  13. Zhou, S.H.; Yang, C.; Wang, J. Support Vector Machine–Based Model for 2.5–5.2 GHz CMOS Power Amplifier. Micromachines 2022, 13, 1012–1013. [Google Scholar] [CrossRef] [PubMed]
  14. Zhao, H.; Zhou, S.H. BPNN-Based Behavioral Modeling of the S-Parameter Variation Characteristics of PAs with Frequency at Different Temperatures. Micromachines 2022, 13, 1831. [Google Scholar] [CrossRef] [PubMed]
  15. Lin, Q.; Wang, X.Z.; Wu, H.F. X-parameter modeling investigation for microwave power devices. Microelectron. J. 2022, 127, 105523–105526. [Google Scholar] [CrossRef]
  16. Zhou, S.H.; Yang, C.; Wang, J. Modeling of key specifications for RF amplifiers using the extreme learning machine. Micromachines 2022, 13, 693. [Google Scholar] [CrossRef]
  17. Lin, Q.; Jia, L.; Wu, H.; Wang, X.Z. Temperature behavior modeling based on resilient BPNN for a GaAs pHEMT high gain MMIC PA. Int. J. Numer. Model. Electron. Netw. Devices Fields 2024, 37, 3161–3164. [Google Scholar] [CrossRef]
  18. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: A new learning scheme of feed forward neural networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541), Budapest, Hungary, 25–29 July 2004; pp. 985–990. [Google Scholar]
  19. Saurav, S.; Saini, R.; Singh, S. Facial Expression Recognition Using Dynamic Local Ternary Patterns with Kernel Extreme Learning Machine Classifier. IEEE Access 2021, 9, 120844–120868. [Google Scholar] [CrossRef]
  20. Peng, Y.; Kang, H.; Chen, Y. A Research on Stock Index Prediction Based on Multiple Linear Regression and ELM Neural Network. In Proceedings of the 6th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi’an, China, 9–11 April 2021; pp. 254–259. [Google Scholar]
  21. Mao, X.; Xiao, X.; Lu, Y. PolSAR Data-Based Land Cover Classification Using Dual-Channel Watershed Region-Merging Segmentation and Bagging-ELM. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  22. Iwahana, K.; Takemura, T.; Cheng, J.C.; Ashizawa, N.; Umeda, N.; Sato, K.; Kawakami, R.; Shimizu, R.; Chinen, Y.; Yanai, N. MADMAX: Browser-Based Malicious Domain Detection Through Extreme Learning Machine. IEEE Access 2021, 9, 78293–78314. [Google Scholar] [CrossRef]
  23. Almayan, H.; Mayyan, W.A. Improving accuracy of students’ final grade prediction model using PSO. In Proceedings of the 2016 6th International Conference on Information Communication and Management (ICICM), Hatfield, UK, 29–31 October 2016; pp. 35–39. [Google Scholar]
  24. Wu, T.; Wang, X.Z.; Lin, Q.; Bai, H. Large Signal Modeling for Microwave Power Devices Based on LSTM. In Proceedings of the 2022 IEEE 10th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 17–19 June 2022; pp. 1553–1556. [Google Scholar]
  25. Knoben, W.J.M.; Freer, J.E.; Woods, R.A. Inherent benchmark or not? Comparing Nash–Sutcliffe and Kling–Gupta efficiency scores. Hydrol. Earth Syst. Sci. 2019, 23, 4323–4331. [Google Scholar] [CrossRef]
  26. Calik, N.; Güneş, F.; Koziel, S.; Pietrenko-Dabrowska, A.; Belen, M.A.; Mahouti, P. Deep-learning-based precise characterization of microwave transistors using fully-automated regression surrogates. Sci. Rep. 2023, 13, 1445. [Google Scholar] [CrossRef] [PubMed]
  27. Hao, Y.; Zhang, J.F.; Zhang, J.C. Nitride Wide Band Gap Semiconductor Material and Electronic Devices, 1st ed.; CRC Press: Boca Raton, FL, USA, 2016; pp. 1–392. [Google Scholar]
  28. Lin, Q.; Wang, X.Z.; Wu, H.F. Approach for Extreme Learning Machine-Based Microwave Power Device Modeling. IEEE Access 2022, 10, 127806–127816. [Google Scholar] [CrossRef]
Figure 1. Physical photograph of the GaN PA.
Figure 2. Basic structure of ELM for GaN PA.
Figure 3. Modeling flowchart of ELM model for GaN PA.
Figure 4. Modeling flowchart of PSO-ELM for GaN PA performance.
Figure 5. Prediction model based on ELM for GaN PA performance.
Figure 6. Prediction results of S-parameters based on ELM for GaN PA. (a) S11. (b) S21. (c) S22.
Figure 7. Prediction results of Pout based on ELM for GaN PA.
Figure 8. Prediction results of Gain based on ELM for GaN PA.
Figure 9. Prediction results of S-parameters based on PSO-ELM for GaN PA. (a) S11. (b) S21. (c) S22.
Figure 10. Prediction results of Pout based on PSO-ELM for GaN PA.
Figure 11. Prediction results of Gain based on PSO-ELM for GaN PA.
Table 1. Input and output vectors of the models.

Vectors          Parameter                   Range            Step Size
Input Vector     Frequency (F)               0.2 GHz–5 GHz    20 MHz
                 Input power (Pin)           0 dBm–30 dBm     0.1 dBm
                 Temperature (T)             −40 °C–120 °C    40 °C
Output Vector    Small signal gain (S21)     -                -
                 Input return loss (S11)     -                -
                 Output return loss (S22)    -                -
                 Output power (Pout)         -                -
                 Gain                        -                -
Table 2. Comparison of modeling results.

Model      Number of Neurons in the Hidden Layer    MSE
ELM        2                                        0.1908
           4                                        0.1248
           6                                        0.0800
           8                                        0.0948
           10                                       0.0739
           12                                       0.0109
           14                                       0.9903
PSO-ELM    2                                        0.0720
           4                                        0.0060
           6                                        0.0083
           8                                        0.0090
           10                                       0.0055
           12                                       0.0006
           14                                       0.0063
Table 3. Performance comparison of models.

References    Research Object    Model      MSE
[13]          CMOS               SVM        0.0219
[15]          CMOS               BPNN       0.0008
[28]          GaN HEMT           ELM        0.0027
This work     GaN HEMT           ELM        0.0109
This work     GaN HEMT           PSO-ELM    0.0006
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
