Article

Classification of Solar Radio Spectrum Based on Swin Transformer

1 School of Information Science & Engineering, Yunnan University, Kunming 650500, China
2 CAS Key Laboratory of Solar Activity, National Astronomical Observatories, Beijing 100012, China
3 State Key Laboratory of Space Weather, National Space Science Center, Beijing 100864, China
4 School of Astronomy and Space Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Universe 2023, 9(1), 9; https://doi.org/10.3390/universe9010009
Submission received: 30 October 2022 / Revised: 16 December 2022 / Accepted: 19 December 2022 / Published: 23 December 2022
(This article belongs to the Special Issue Solar Radio Emissions)

Abstract

Solar radio observation is an important method for studying the Sun. Automatically classifying solar radio spectra in real time and determining whether a solar radio burst has occurred are of great importance for space weather early warning and solar physics research. Because the number of solar radio burst spectra is small and the samples are unevenly distributed, this paper proposes a classification method for solar radio spectra based on the Swin transformer. First, the method transfers the parameters of a pretrained model to the Swin transformer model. Then, the hidden layer weights of the Swin transformer are frozen, and its fully connected layer is trained on the target dataset. Finally, parameter tuning is performed. The experimental results show that the method achieves a true positive rate of 100%, which is more accurate than previous methods. Moreover, our model has only about 20 million parameters, which is 80% lower than the traditional VGG16 convolutional neural network with more than 130 million parameters.

1. Introduction

The Sun is the closest star to Earth, and the light and heat it provides are the source of human survival and activity. The Sun’s violent activity and the resulting space weather changes affect human life. Solar radio bursts are a sporadic component of solar radio emission connected with flare energy release. A solar radio spectrometer observes the intensity of solar radio emission in the radio band and is the main instrument for studying solar bursts. Automatically processing the solar radio spectrum and classifying solar burst events in real time would be of great value to solar physics research and space weather early warning.
In the late 1960s and early 1970s, John Paul Wild’s team built and operated the world’s first solar radio spectrometer and, subsequently, the Culgoora radioheliograph. Since then, an increasing number of solar radio spectrometers have been built, and in recent years many researchers have studied the automatic processing of the observed solar radio spectrum. Ruizhen Zhao analyzed the features of the NeighShrink thresholding function and proposed a new wavelet NeighShrink square-root thresholding method for denoising solar radio spectrum images [1]. Yihua Yan proposed a nonlinear calibration method to address the instrument saturation effect; the method adopted a channel normalization algorithm for image enhancement and used a wavelet approach to eliminate noise [2]. For contour detection in the solar radio spectrum, Yan Zhang improved the level set method to extract contours from the original image, overcoming the missed detections associated with that method [3]. Chengming Tan developed a data analysis system compatible with SSW (SolarSoftWare); this software can further expand the connections between data while mining their deeper information [4]. S.M. White used data from the Green Bank Solar Radio Burst Spectrometer to describe and illustrate the main types of radio bursts [5]. In the study of sunspots, Preminger proposed a new method for the fast, automatic identification of solar features using image contrast ratios [6]. K. Iwai developed a new metric-band spectrum observation system for solar radio bursts that allows observation in the frequency range of 150 to 500 MHz [7].
Deep learning has developed rapidly and achieved good results in image processing and computer vision [8,9], and researchers have begun to apply it to the solar radio spectrum. Vasili V. Lobzin proposed a method for the automatic identification and classification of type II and type III solar radio bursts, improving detection accuracy via preprocessing, spectral intensity transformation, and spectral frequency transformation [10,11]. Junchen Guo proposed a hybrid network based on convolutional and GRU memory units to classify the solar radio spectrum [12]. Weidan Zhang proposed a model that combines a conditional generative adversarial network (CGAN) and a deep convolutional generative adversarial network (DCGAN) to classify the solar radio spectrum [13].
In previous studies, most researchers adopted convolutional operations as backbone networks to extract features from the solar radio spectrum. These methods have a large number of model parameters and consume substantial system resources. In this paper, we propose a method combining a Swin transformer [14] and transfer learning to classify solar radio spectra. Our method incorporates the advantages of convolutional neural networks, such as locality, translation invariance, and hierarchy. Compared with previous research on classifying the solar radio spectrum, our method achieves better classification accuracy while greatly reducing the number of model parameters.

2. Solar Radio Spectrum and Preprocessing

2.1. Dataset Introduction

The dataset for this experiment was obtained from the National Astronomical Observatories of the Chinese Academy of Sciences. The data were collected by the Solar Broadband Radio Spectrometer (SBRS). The sample counts of the dataset are shown in Table 1; the same dataset was used in references [12,13].
The vertical axis of the solar radio spectrum represents frequency, and the horizontal axis represents time. Each pixel value represents the solar radio flux at a given time and frequency. When the spectrum is displayed in grayscale, white represents a high flux of solar radio emission and black a low flux. The whole image thus represents the solar radio flux over a certain frequency range during a certain time period. Burst, nonburst, and calibration solar radio spectra are shown in Figure 1a–c. The spectrum can also be displayed in pseudocolor to produce a better visual effect.
After the solar radio spectrometer receives solar radio emission, the signal is amplified and filtered and then saved in a certain file format. Horizontal stripes and other noise can appear in solar radio spectrum images due to internal interference, external interference, and quantization problems of the instrument. Therefore, it is necessary to preprocess the solar radio spectrum data.

2.2. Normalization of Channels

In general, the nonlinear effect of the instrument is caused by variation in the gain of each channel, and there should be a deterministic, continuous trend in the gain between channels. Therefore, we adopt the channel normalization method from [12] to adjust for channel differences: the channel mean along each horizontal stripe is subtracted from the original image f(x, y), and the global average is added back. The channel-normalized image g(x, y) is

$$g(x,y) = f(x,y) - \frac{1}{n}\sum_{x=0}^{n} f(x,y) + \frac{1}{nm}\sum_{x=0}^{m}\sum_{y=0}^{n} f(x,y)$$

where f(x, y) is the pixel value of the original image at (x, y), m is the number of pixels along the x-axis, and n is the number of pixels along the y-axis. The results are shown in Figure 2a,b.
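The channel normalization above is straightforward to implement. The following is a minimal NumPy sketch, assuming the spectrum is stored as a 2-D array with frequency channels along the rows and time along the columns; it is an illustration, not the authors’ exact implementation.

```python
import numpy as np

def normalize_channels(f: np.ndarray) -> np.ndarray:
    """Channel normalization of a spectrum image f (frequency x time).

    Subtracts each frequency channel's mean (taken along the time axis)
    and adds back the global mean, suppressing horizontal stripe noise
    while preserving the overall flux level.
    """
    channel_mean = f.mean(axis=1, keepdims=True)  # per-row (per-channel) mean
    global_mean = f.mean()                        # mean over the whole image
    return f - channel_mean + global_mean
```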

2.3. Pseudocolor Conversion and Dimensional Transformation of the Solar Radio Spectrum

The Swin transformer network model requires the input image to be a three-channel color image of size 224 × 224 × 3. After channel normalization of the solar radio spectrum, pseudocolor conversion of the grayscale image and transformation of the image size are therefore needed.
We define a grayscale-to-pseudocolor mapping table that maps the normalized grayscale values in the interval [0, 1] to the corresponding pseudocolor. Figure 3a shows the mapping table: the leftmost part of the color band corresponds to a grayscale value of 0.0, the rightmost part to 1.0, and the middle to 0.5. With this mapping table, the channel-normalized image in Figure 2b can be converted to the pseudocolor image in Figure 3b.
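Such a mapping table is simply a lookup table from gray values to colors. As a sketch, a Matplotlib colormap can play this role; the "jet" colormap here is an assumption for illustration, since the paper does not name the exact color band used.

```python
import numpy as np
import matplotlib.pyplot as plt

def to_pseudocolor(gray: np.ndarray, cmap_name: str = "jet") -> np.ndarray:
    """Map a grayscale image with values in [0, 1] to an 8-bit RGB image."""
    cmap = plt.get_cmap(cmap_name)            # lookup table: gray -> RGBA
    rgba = cmap(np.clip(gray, 0.0, 1.0))      # apply the mapping elementwise
    return (rgba[..., :3] * 255).astype(np.uint8)  # drop alpha, scale to 0-255
```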
After obtaining the pseudocolor image of the solar radio spectrum, we used a bilinear interpolation algorithm [15] to resize it to 224 × 224 × 3. In the size transformation, for any target pixel g(x, y), the floating-point coordinates (i + u, j + v) in the original image are obtained via the inverse coordinate transformation. The pixel value g(x, y) is then determined jointly by the values f(i, j), f(i, j + 1), f(i + 1, j), and f(i + 1, j + 1) of the four points surrounding (i + u, j + v) in the original image:

$$g(x,y) = a + b + c + d$$

where

$$a = (1-u)(1-v)\,f(i,j)$$
$$b = (1-u)\,v\,f(i,j+1)$$
$$c = u\,(1-v)\,f(i+1,j)$$
$$d = u\,v\,f(i+1,j+1)$$

Here, i and j are nonnegative integers, and u and v are floating-point numbers in the interval [0, 1). The RGB components of each pixel in the solar radio spectrum are processed separately with the bilinear interpolation algorithm to obtain the 224 × 224 × 3 solar radio pseudocolor spectrum.
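The formula above translates directly into code. This NumPy sketch is deliberately unoptimized for readability; in practice, a library routine such as OpenCV’s cv2.resize or PIL’s Image.resize would be used instead.

```python
import numpy as np

def bilinear_resize(img: np.ndarray, out_h: int = 224, out_w: int = 224) -> np.ndarray:
    """Resize an H x W x C image using bilinear interpolation."""
    h, w, c = img.shape
    out = np.empty((out_h, out_w, c), dtype=np.float64)
    for y in range(out_h):
        for x in range(out_w):
            # Inverse coordinate transform: target (x, y) -> source (i + u, j + v)
            src_i = y * (h - 1) / (out_h - 1)
            src_j = x * (w - 1) / (out_w - 1)
            i, j = int(src_i), int(src_j)
            u, v = src_i - i, src_j - j
            i1, j1 = min(i + 1, h - 1), min(j + 1, w - 1)  # clamp at the border
            out[y, x] = ((1 - u) * (1 - v) * img[i, j]     # a
                         + (1 - u) * v * img[i, j1]        # b
                         + u * (1 - v) * img[i1, j]        # c
                         + u * v * img[i1, j1])            # d
    return out
```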

3. Method

3.1. Transfer Learning

Solar activity depends strongly on the phase of the solar cycle, and solar radio bursts are low-probability events: according to our statistics, the cumulative duration of solar radio bursts is less than 0.5% of the total observation time. Therefore, the number of solar radio burst spectrum samples collected in the last decade is limited. Only 579 solar radio bursts are available for our experiments, which is insufficient for training the network model.
To reduce the training cost and improve the training efficiency of the model, we use a transfer learning strategy. Transfer learning refers to reusing a model pretrained on one task for another, related task. Such pretrained models typically consume considerable time and computational resources during training, and transfer learning carries their acquired capabilities over to a relevant problem. It focuses on finding similarities between existing knowledge and new knowledge and achieves the transfer through these similarities. Transfer learning is defined as follows:
Given a source domain $D_s$ with a learning task $T_s$ and a target domain $D_t$ with a learning task $T_t$, the purpose of transfer learning is to use the knowledge acquired from $D_s$ and $T_s$ to help improve the learning of the prediction function $f_t(x)$ in the target domain, where $D_s \neq D_t$ or $T_s \neq T_t$. The model is shown in Figure 4.
At the beginning of training, the weights of the hidden layers of the pretrained Swin transformer model are frozen. Then, the fully connected layer of the model is trained on the target dataset. After these operations, the transfer learning process is complete.
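The freeze-then-fine-tune procedure can be sketched in a few lines. The authors used TensorFlow 2.7 (see Section 4.2); the PyTorch/timm version below is only an assumed illustration of the same idea, and the model name swin_tiny_patch4_window7_224 is one of several publicly available pretrained variants, not necessarily the exact checkpoint used in the paper.

```python
import timm
import torch

# Load a Swin transformer pretrained on ImageNet and replace its head with
# a 3-class classifier (burst / nonburst / calibration).
model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True)
model.reset_classifier(num_classes=3)

# Freeze the hidden layers; train only the new fully connected head.
for p in model.parameters():
    p.requires_grad = False
for p in model.get_classifier().parameters():
    p.requires_grad = True

# Learning rate and loss follow the settings reported in Section 4.2.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()
```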

3.2. Solar Radio Spectrum Classification Based on Swin Transformer

The Swin transformer, a deep learning model proposed by Microsoft in 2021, is a network with self-attention as its backbone. It constructs a hierarchical transformer using the hierarchical construction method commonly used in convolutional neural networks (CNNs) and applies self-attention within shifted windows to address the excessive computational complexity that arises when transformers are transferred to computer vision tasks. In addition, its design philosophy incorporates the local-to-global essence of the residual network (ResNet), employing the transformer as a tool for progressively expanding the receptive field.
In contrast to the vision transformer (ViT) [16], the Swin transformer extracts image features by progressively enlarging the attention window, much like the convolution process of a CNN. Its self-attention computation introduces local aggregation and is computed within windows; the window sliding step equals the window size, so the windows do not overlap. When a traditional CNN performs a convolution on a window, the window takes a value that represents its features. Analogously, the Swin transformer computes self-attention within each window to obtain an updated window; these windows are then merged into a larger window by a patch merging layer, and self-attention is computed again within the larger window, yielding a value that represents the features of the entire window. Computing window-based self-attention instead of global self-attention greatly reduces the computational complexity from O(n²) to O(n):

$$\Omega(\mathrm{MSA}) = 4hwC^2 + 2(hw)^2C$$
$$\Omega(\mathrm{W\text{-}MSA}) = 4hwC^2 + 2M^2hwC$$

where C is the number of channels, h and w are the height and width of the feature map, and M is the window size (set to 7 by default). With n = h × w, MSA (global self-attention) has O(n²) complexity, while W-MSA (window-based self-attention) has O(n) complexity when M is fixed. The larger h × w is, the greater the cost of global self-attention and the more system resources it consumes.
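To make the difference concrete, the two formulas can be evaluated for the 56 × 56 token grid of Stage 1 (C = 96): global attention costs roughly an order of magnitude more than windowed attention. A small arithmetic sketch:

```python
def msa_flops(h: int, w: int, C: int) -> int:
    """Global multi-head self-attention: 4hwC^2 + 2(hw)^2 C."""
    return 4 * h * w * C**2 + 2 * (h * w)**2 * C

def wmsa_flops(h: int, w: int, C: int, M: int = 7) -> int:
    """Window-based self-attention: 4hwC^2 + 2 M^2 hw C."""
    return 4 * h * w * C**2 + 2 * M**2 * h * w * C

print(msa_flops(56, 56, 96) / 1e9)   # ~2.0 GFLOPs (quadratic term dominates)
print(wmsa_flops(56, 56, 96) / 1e9)  # ~0.15 GFLOPs
```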
The overall architecture of the Swin transformer is very similar to that of a CNN. A Swin transformer contains four stages, each a similar repetitive unit. First, the patch partition layer and the linear embedding layer divide the h × w × 3 input into a collection of nonoverlapping patches of size 4 × 4, so the number of patches is (h/4) × (w/4). The classification process of the solar radio spectrum is shown in Figure 5.
In Figure 5, the solar radio spectrum has size 224 × 224 after image preprocessing. The input image is split by the patch partition layer into patches composed of adjacent pixel blocks and fed into Stage 1, where the linear embedding layer changes the feature dimension of each patch to 96 before sending the patches to the Swin transformer block. The output of Stage 1 enters the patch merging layer, where adjacent patches are merged in 2 × 2 groups; the number of patches becomes 28 × 28, and the feature dimension becomes 192, which is equivalent to a downsampling process. Each stage halves the spatial dimension of the feature map and doubles the number of channels. Stages 2 to 4 follow the same principle, and the Swin transformer block in Stage 3 is repeated six times. The training process is analogous to the convolution and pooling processes in a convolutional neural network.
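The resulting shape progression can be reproduced with simple arithmetic. The sketch below assumes the standard Swin-T configuration (patch size 4, base dimension 96, 2 × 2 patch merging between stages):

```python
# Token grid and embedding dimension at each Swin-T stage for a 224 x 224 input.
h = w = 224 // 4   # patch partition: 56 x 56 patches
dim = 96           # linear embedding dimension in Stage 1
for stage in range(1, 5):
    print(f"Stage {stage}: {h} x {w} tokens, dim {dim}")
    if stage < 4:
        h, w = h // 2, w // 2  # patch merging halves the token grid
        dim *= 2               # and doubles the channel dimension
# Stage 1: 56 x 56 tokens, dim 96  ...  Stage 4: 7 x 7 tokens, dim 768
```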

4. Experimentation and Discussion

We randomly selected samples from the dataset in equal proportions for model training, validation, and testing. The dataset division is shown in Table 2:

4.1. Experimental Evaluation Metrics

We evaluated the model using the true positive rate (TPR) and the false positive rate (FPR), which reflect the effectiveness of the classification. TPR is defined as the proportion of positive samples correctly detected among all positive samples; FPR is defined as the proportion of negative samples incorrectly detected as positive among all negative samples. TPR and FPR are calculated as follows:
$$\mathrm{TPR} = \frac{TP}{TP + FN}$$
$$\mathrm{FPR} = \frac{FP}{FP + TN}$$
The predicted and labeled values of the images are used as inputs to the confusion matrix algorithm to obtain TP (true positives), TN (true negatives), FP (false positives), and FN (false negatives). In this experiment, a higher TPR and a lower FPR indicate better performance.
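For a three-class task such as this one, TPR and FPR are computed per class in a one-vs-rest manner from the confusion matrix. The following is a minimal sketch using scikit-learn; the authors’ exact evaluation code is not published.

```python
from sklearn.metrics import confusion_matrix

def per_class_tpr_fpr(y_true, y_pred, n_classes: int = 3):
    """One-vs-rest TPR and FPR for each class from the confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    rates = []
    for k in range(n_classes):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp   # class-k samples predicted as another class
        fp = cm[:, k].sum() - tp   # other classes predicted as class k
        tn = cm.sum() - tp - fn - fp
        rates.append((tp / (tp + fn), fp / (fp + tn)))
    return rates  # [(TPR, FPR), ...] for burst, nonburst, calibration
```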

4.2. Experimental Results and Analysis

The experiment was completed using the TensorFlow 2.7 deep learning framework on Python 3.8, with 32 GB of system memory and an RTX 2080 Ti graphics card (14,000 MHz memory frequency, 11 GB memory capacity, 352-bit memory bus). We used a learning rate of 0.001, the cross-entropy loss function, and the softmax activation function. We froze the weights of the hidden layers of the pretrained Swin transformer and then trained its fully connected layer on the target dataset, saving the best model parameters for the subsequent testing. To ensure that the model trains effectively on the entire solar radio spectrum dataset and is not affected by the ordering of the target dataset, we randomly shuffled all the training data at the beginning of each training session. The experimental results are shown in Figure 6a,b and Table 3:
As seen from Figure 6a, the proposed method combining the Swin transformer and transfer learning reaches a loss value of 0 at the beginning of the third training epoch and remains at 0 in every subsequent epoch, demonstrating the fast and stable convergence of the proposed algorithm.
As shown in Table 3, transfer learning improves the classification performance of the Swin transformer under the same dataset split. Compared with the experiment without transfer learning, the TPR of the burst class improves by 1.1%; the TPR of the calibration class improves by 0.5%; the FPR of the nonburst class decreases by 0.5%; and the FPR of the calibration class decreases by 0.1%. The transfer learning strategy thus clearly improves the classification performance of the model.
In terms of model parameters, we compared our design with the larger VGG16 model. The number of training parameters of the Swin transformer decreased by approximately 80% compared to the VGG16 convolutional neural network without degrading the TPR or FPR. The results are shown in Table 4.
To further demonstrate the advantages of our proposed method, we compared it against the vision transformer, which extracts image features in a similar way to the Swin transformer. Without degrading the TPR or FPR, the Swin transformer uses approximately 60% fewer parameters than the vision transformer. The results are shown in Table 5.
To demonstrate the training-time advantage of our proposed method, we also compared the training times of the VGG16, vision transformer, and Swin transformer models. The results are shown in Figure 7.
The experimental comparison demonstrates that the proposed method combining the Swin transformer and transfer learning obtains excellent results in classifying the solar radio spectrum: model training converges more stably, and the training time is shorter. Finally, we compared our method with the experimental results of references [12,13,17,18], as shown in Table 6.
According to Table 6, all of our experimental results show a significant improvement over those of previous researchers. This gain can be attributed to the advantages of the Swin transformer in image processing: it draws on the feature extraction ideas of ResNet and employs the transformer as a tool to gradually extend the receptive field from local to global.

5. Conclusions

In this paper, we propose a solar radio spectrum classification method combining a Swin transformer and transfer learning. Experiments show that the self-attention mechanism extracts the global features of images well, which gives the model strong generalization ability and greatly improves its classification performance. This approach can serve as a reference for the classification of other astronomical images.

Author Contributions

J.C. proposed the network architecture design and the framework of a Swin transformer. L.Y. and S.L. collected and preprocessed the datasets. J.C. performed the experiments. J.C., L.Y. and S.L. analyzed and discussed the experimental data. J.C. and G.Y. wrote the article. H.Z. and C.T. revised the article and provided valuable advice for the experiments. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (Grant No. 12263008, 11941003), the MOST (Grant No. 2021YFA1600500), the Application and Foundation Project of Yunnan Province (Grant No. 202001BB050032), the Yunnan Provincial Department of Science and Technology–Yunnan University Joint Special Project for Double-Class Construction (Grant No. 202201BF070001-005), the Key R&D Projects of Yunnan Province (Grant No. 202202AD080004), the Open Project of CAS Key Laboratory of Solar Activity, National Astronomical Observatories (Grant No. KLSA202115), and the 13th Postgraduate Innovation Project of Yunnan University (Grant No. 2021Y269).

Data Availability Statement

The data are available at GitHub: https://github.com/filterbank/spectrumcls (accessed on 1 March 2022).

Acknowledgments

We would like to thank the anonymous reviewers and the editor-in-chief for their comments to improve the article. Thanks also to the data sharer. We thank all the people involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhao, R.; Hu, Z. Wavelet NeighShrink Method for Grid Texture Removal in Image of Solar Radio Bursts. Spectrosc. Spectr. Anal. 2007, 27, 4. (In Chinese)
  2. Yan, Y.; Tan, C.; Xu, L.; Ji, H.; Fu, Q.; Song, G. Nonlinear Relative Calibration Methods and Data Processing for Solar Radio Bursts. Sci. China Math. 2001, 45, 89–96. (In Chinese)
  3. Zhang, Y.; Yan, Y.; Tan, C.; Zhao, C. Automatic Contour Detection and Information Extraction of Solar Radio Spectrogram. Mod. Electron. Technol. 2011, 34, 4. (In Chinese)
  4. Tan, C.; Yan, Y.; Tan, B.; Liu, Y. Design of A Data Processing System for Solar Radio Spectral Observations. Astron. Res. Technol. 2011. (In Chinese)
  5. White, S.M. Solar Radio Bursts and Space Weather. Asian J. Phys. 2007, 16, 189–207.
  6. Preminger, D.G.; Walton, S.R.; Chapman, G.A. Solar Feature Identification using Contrasts and Contiguity. Sol. Phys. 2001, 202, 53–62.
  7. Iwai, K.; Tsuchiya, F.; Morioka, A.; Misawa, H. IPRT/AMATERAS: A new metric spectrum observation system for solar radio bursts. Sol. Phys. 2012, 277, 447–457.
  8. Liu, H.; Yuan, G.; Yang, L.; Liu, K.; Zhou, H. An Appearance Defect Detection Method for Cigarettes Based on C-CenterNet. Electronics 2022, 11, 2182.
  9. Yang, L.; Yuan, G.; Zhou, H.; Liu, H.; Chen, J.; Wu, H. RS-YOLOX: A High-Precision Detector for Object Detection in Satellite Remote Sensing Images. Appl. Sci. 2022, 12, 8707.
  10. Lobzin, V.V.; Cairns, I.H.; Robinson, P.A.; Steward, G.; Patterson, G. Automatic recognition of type III solar radio bursts: Automated Radio Burst Identification System method and first observations. Space Weather 2009, 7, S04002.
  11. Lobzin, V.V.; Cairns, I.H.; Robinson, P.A.; Steward, G.; Patterson, G. Automatic recognition of coronal type II radio bursts: The automated radio burst identification system method and first observations. Astrophys. J. Lett. 2010, 710, 58–62.
  12. Guo, J.-C.; Yan, F.-B.; Wan, G.; Hu, X.-J.; Wang, S. A deep learning method for the recognition of solar radio burst spectrum. PeerJ Comput. Sci. 2022, 8, e855.
  13. Zhang, W.; Yan, F.; Han, F.; He, R.; Li, E.; Wu, Z.; Chen, Y. Auto recognition of solar radio bursts using the C-DCGAN method. Front. Phys. 2021, 9, 646556.
  14. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022.
  15. Wang, S.; Yang, K. Research and implementation of image scaling algorithm based on bilinear interpolation. Autom. Technol. Appl. 2008, 27, 44–45. (In Chinese)
  16. Chen, S. Research on Classification Algorithm of Solar Radio Spectrum Based on Convolutional Neural Network; Shenzhen University: Shenzhen, China, 2018. (In Chinese)
  17. Trockman, A.; Kolter, J.Z. Patches Are All You Need? arXiv 2022, arXiv:2201.09792.
  18. Chen, M.; Yuan, G.; Zhou, H.; Cheng, R.; Xu, L.; Tan, C. Classification of Solar Radio Spectrum Based on VGG16 Transfer Learning. In Chinese Conference on Image and Graphics Technologies; Springer: Singapore, 2021; pp. 35–48.
Figure 1. Solar radio spectrum: (a) “burst” type, (b) “calibrating” type, (c) “nonburst” type.
Figure 2. Normalized noise reduction results of channels: (a) before normalization of channels, (b) after normalization of channels.
Figure 3. Pseudocolor conversion results: (a) grayscale pseudocolor mapping table, (b) after pseudocolor conversion.
Figure 4. Transfer learning model.
Figure 5. Solar radio spectrum classification process.
Figure 6. Model training results: (a) visualization result of the loss value, (b) ROC curve.
Figure 7. Comparison of training time for each model.
Table 1. Solar radio spectrum dataset.

Spectrum Type      Burst   Nonburst   Calibration   Total
Spectrum number    579     3335       494           4408
Table 2. Experimental dataset division.

Dataset          Burst   Nonburst   Calibration   Total
Training set     200     1200       200           1600
Validation set   179     935        94            1208
Test set         200     1200       200           1600
Table 3. Experimental results of the Swin transformer.

              Swin Transformer       Swin Transformer + Transfer Learning
              TPR (%)   FPR (%)      TPR (%)   FPR (%)
Burst         98.9      0            100       0
Nonburst      100       0.5          100       0
Calibration   99.5      0.1          100       0
Table 4. Swin transformer vs. VGG16.

              Swin Transformer + Transfer Learning   VGG16 + Transfer Learning
              TPR (%)   FPR (%)                      TPR (%)   FPR (%)
Burst         100       0                            96.8      1.4
Nonburst      100       0                            97.1      1.3
Calibration   100       0                            99.6      1.8
Parameters    27,550,473                             139,357,544
Table 5. Swin transformer vs. vision transformer.

              Swin Transformer + Transfer Learning   Vision Transformer
              TPR (%)   FPR (%)                      TPR (%)   FPR (%)
Burst         100       0                            99.5      0
Nonburst      100       0                            100       0
Calibration   100       0                            100       0.1
Parameters    27,550,473                             85,800,963
Table 6. Comparison results of all experiments.

Model                           Burst   Nonburst   Calibration
Swin transformer     TPR (%)    100     100        100
                     FPR (%)    0       0          0
Vision transformer   TPR (%)    99.5    100        100
                     FPR (%)    0       0          0.1
CGRU                 TPR (%)    96.8    99.5       99.9
                     FPR (%)    0       1.5        0.3
VGG16                TPR (%)    96.8    97.1       99.6
                     FPR (%)    1.4     1.3        1.8
CNN                  TPR (%)    84.6    90         99
                     FPR (%)    9.7     8.7        0.7
Multimodel           TPR (%)    70.9    80.9       96.8
                     FPR (%)    15.6    13.9       3.2
DBN                  TPR (%)    67.4    86.4       95.7
                     FPR (%)    3.2     14.1       0.4
PCA+SVM              TPR (%)    52.7    0.1        68.3
                     FPR (%)    2.6     16.6       72.2