Article

Progress of Machine Learning Studies on the Nuclear Charge Radii

1 Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai 200433, China
2 Shanghai Research Center for Theoretical Nuclear Physics, NSFC and Fudan University, Shanghai 200438, China
* Authors to whom correspondence should be addressed.
Symmetry 2023, 15(5), 1040; https://doi.org/10.3390/sym15051040
Submission received: 12 April 2023 / Revised: 30 April 2023 / Accepted: 4 May 2023 / Published: 8 May 2023

Abstract

The charge radius is a fundamental physical quantity that describes the size of a nucleus and contains rich information about the nuclear structure. There are already many machine learning (ML) studies on charge radii. After reviewing the relevant works in detail, convolutional neural networks (CNNs) are established to reproduce the latest experimental values of charge radii. The extrapolating and interpolating abilities of two CNN structures combined with two input matrix forms are discussed, and a testing root-mean-square (RMS) error of 0.015 fm is achieved. The shell effect on the charge radii of both isotones and isotopes is predicted successfully, and the CNN method works well when predicting the charge radii of a whole isotopic chain.

1. Introduction

The nuclear charge radius is one of the most fundamental physical quantities describing nuclear properties. By studying nuclear charge radii, information such as the nuclear charge density, the Coulomb potential of nuclei [1], the properties of the nuclear force, shell structure [2,3], halo structure and neutron radii or skins [4,5,6] can be obtained. The RMS charge radii of stable nuclei can be measured by electron scattering [7] and muonic atom X-ray [8] experiments, while the experimental information on the charge radii of radioactive nuclei is derived from the changes of the mean square (MS) charge radii, δ⟨r²⟩, obtained by comparing the Kα isotopic shift (KαIS) [9] and the isotopic shift of optical spectral lines of two isotopes of the same element. With the development of radioactive ion beams (RIB), nuclei far away from the β-stability line have attracted a lot of interest from nuclear physicists. The charge radii of exotic nuclei can be extracted from charge-changing cross-sections [10,11]. In recent years, the number of unstable nuclei whose RMS charge radii are measured by laser spectroscopy experiments has increased significantly, and the accuracy of the measured results has been improved [12]. Accurate charge radii are also conducive to studies in atomic physics and astrophysics. It is therefore interesting to find ways to make more accurate predictions for the RMS charge radii.
Nuclear charge radius is defined as
R_c = \sqrt{5/3} \, R_c^{rms} ,   (1)
where R_c^{rms} is the RMS nuclear charge radius. In the liquid drop model (LDM), the nucleus is regarded as an incompressible drop with an equilibrium density. Because of the uniformity, the RMS charge radius is given by
R_c^{rms} = \sqrt{3/5} \, r_0 A^{1/3} ,   (2)
where r_0 = 1.2247 fm [13] and A is the mass number. This approximation is rough for light nuclei [14] and for those away from β-stability. In Ref. [15], a formula is given as follows:
R_c^{rms} = \sqrt{3/5} \, r_0 A^{1/3} \left( 1 - a \frac{N-Z}{A} + \frac{b}{A} \right),   (3)
where r_0 = 1.2347 fm, a = 0.1428, and b = 2.0743 [13]. Although Equation (3) considerably compensates for the weaknesses of Equation (2), it still describes an approximately linear relation between charge radii and neutron numbers along an isotopic chain. To reflect the shell effects and odd–even staggering of charge radii, Ref. [13] fitted the data with A ≥ 40 from Ref. [16] and gave two new formulas:
R_c^{rms} = \sqrt{3/5} \, r_0 A^{1/3} \left( 1 - a \frac{N-Z}{A} + \frac{b}{A} + c \frac{P}{A} \right),   (4)
where r_0 = 1.2320 fm, a = 0.1529, b = 1.3768, c = 0.4286 [13] and P denotes the Casten factor [17];
R_c^{rms} = \sqrt{3/5} \, r_0 A^{1/3} \left( 1 - a \frac{N-Z}{A} + \frac{b}{A} + c \frac{P}{A} + d \frac{\delta}{A} \right),   (5)
where r_0 = 1.2321 fm, a = 0.1534, b = 1.3358, c = 0.4317, d = 0.1225 [13] and δ depends on whether the proton and neutron numbers are odd or even. In addition, nuclear charge radii can be calculated with the Hartree–Fock–Bogoliubov (HFB) [18] and relativistic mean field [19,20] theories. The success of these theories has been witnessed in reproducing more and more accurately measured values of charge radii, even though these theoretical models are not confined to extracting nuclear radii.
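As a simple illustration of how these phenomenological formulas are evaluated (this snippet is ours, not part of Refs. [13,15]), Equations (2) and (3) can be coded directly; the example nucleus 48Ca is chosen arbitrarily:

```python
from math import sqrt

def rc_rms_ldm(A, r0=1.2247):
    """Equation (2): RMS charge radius of a uniform liquid drop."""
    return sqrt(3.0 / 5.0) * r0 * A ** (1.0 / 3.0)

def rc_rms_isospin(Z, N, r0=1.2347, a=0.1428, b=2.0743):
    """Equation (3): A^(1/3) law with isospin and 1/A correction terms [13,15]."""
    A = Z + N
    return sqrt(3.0 / 5.0) * r0 * A ** (1.0 / 3.0) * (1.0 - a * (N - Z) / A + b / A)

# Example: 48Ca (Z = 20, N = 28)
print(rc_rms_ldm(48))          # about 3.45 fm from Equation (2)
print(rc_rms_isospin(20, 28))  # about 3.54 fm from Equation (3)
```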
Machine learning (ML) has an inherent advantage in processing enormous amounts of data, which has made it increasingly and successfully applied to nuclear physics [21,22] and particle physics [23]. Ref. [21] provides a snapshot of ML in nuclear physics, including nuclear theory, experimental methods, accelerator technology and nuclear data. Since the charge radius is a fundamental observable in nuclear theory, it is necessary to be able to accurately calculate the charge radii of nuclei that have not yet been measured, which can be regarded as data mining based on ML. Several ML models have already been effectively applied to describe and predict nuclear charge radii [24,25,26,27,28,29,30,31]. These applications are not limited to reducing the RMS deviation over the data set; they also aim at reproducing and extrapolating the charge radii of isotopic chains, whose trends with the neutron number could reflect underlying physical information. The calcium isotopic chain is often discussed because of the nearly equal charge radii of 40Ca and 48Ca, the apparent odd–even effects between these two isotopes, and the significantly increased radii of the isotopes beyond the neutron magic number N = 28. ML has also been applied to multiple radionuclide identification in nuclear safety [32] and to other nuclear properties such as nuclear masses [33,34], α-decay rates [35] and fission yields [36]. On the one hand, this article aims to provide a detailed and methodical review of ML for charge radii. On the other hand, CNNs offer significant advantages in image processing due to the spatial structure of the images considered [37,38]. The CNN is also favored by researchers in the field of deep learning because of its idea of shared weights and biases and the advantage of translation invariance. We therefore also introduce the CNN approach to directly calculate the RMS charge radii R_c^{rms}, and intend to achieve better predictions when more nuclear physics quantities are fed into the models.
This article is organized as follows. In the next section (Section 2), a systematic review of ML for charge radii is given. We then briefly introduce CNNs and explain how we apply them to the study of nuclear radii in Section 3. Section 4 presents the results and discussion, and a brief conclusion is given in Section 5.

2. Machine Learning for Nuclear Charge Radii

In particular, the charge radius is a fundamental physical quantity that reflects the size of the nucleus, and the application of ML methods to it keeps increasing as new experimental data are constantly updated. In 2013, S. Akkoyun et al. had already started using neural network methods for charge radius studies [24]. They tried a feed-forward artificial neural network (ANN) with Z and N as input neurons and charge radii as output neurons, and obtained an RMS deviation of 0.025 fm between the experimental charge radii and the ANN results for 20% test data from 900 nuclei. However, it is also seen that the performance on light nuclei is much worse than that on nuclei with A ≥ 40. A new mass-dependent formula, R_c^{rms} = 1.231 A^{0.28}, was obtained by least-squares fitting of the ANN outputs. When this formula is used as a parameter of the harmonic oscillator basis in the HFB model, the calculated ground-state properties of the Sn isotopes are in good agreement with the experimental values, which provides a good example of combining ML with theoretical models.
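For readers unfamiliar with this type of regression, a minimal sketch of the feed-forward ANN idea, written with scikit-learn rather than the original implementation of Ref. [24], could look as follows; the file name, layer sizes and the 80/20 split are illustrative assumptions, not the settings of that work:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical data file with columns Z, N, R_exp (fm)
Z, N, R_exp = np.loadtxt("charge_radii.dat", unpack=True)
X = np.column_stack([Z, N])

X_tr, X_te, y_tr, y_te = train_test_split(X, R_exp, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)

# RMS deviation between experiment and the ANN prediction on the test set
rms = np.sqrt(np.mean((ann.predict(X_te) - y_te) ** 2))
print(f"test RMS deviation: {rms:.4f} fm")
```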
In Ref. [25], a Bayesian neural network (BNN) is used to learn the residuals between the experimental data and theoretical predictions. The input of the neural network contains only Z and A. As a combination of the ANN and Bayes' theorem, in regression the BNN defines a model for the conditional distribution of the output values given a set of input values. The prior distributions for the parameters of the BNN are set as Gaussian distributions with zero mean and with variances determined by hyperparameters, which are given Gamma distributions. The Gaussian-form likelihood function naturally combines the raw residuals and the outputs, with the experimental errors as the variance term. The raw residuals between the experimental values and the predictions calculated by the extended liquid-drop formula, Equation (3), as well as by three relativistic energy density functionals, NL3, FSUGold and FSUGarnet, are refined individually and compared, which greatly expands the range of applying ML in nuclear theory. The data set of experimental charge radii consists of 820 nuclei with Z ≥ 20 and A ≥ 40. Although the extrapolation and interpolation results for the entire data set are improved by at least 28% and 42%, respectively, after the BNN refinement, this BNN method struggles to reproduce the Ca isotopic chain.
In Ref. [26], an ANN is used to learn the experimental data of nuclear charge radii, in which the input is extended to include the proton number, the neutron number, the electric quadrupole transition strength B(E2) from the first excited 2+ state to the ground state, and the symmetry energy. Although the total number of nuclei is only 347, the predictions for the Ca isotopes are evidently improved when the symmetry energy is included. The underlying correlation between the symmetry energy and the charge radii of the Ca isotopes is confirmed by HFB calculations with Skyrme interactions, thereby confirming the reliability of ML.
In Ref. [27], a naive Bayesian probability (NBP) classifier is trained to tune the nuclear charge radii predicted by the Skyrme–HFB model and by the semi-empirical formula in Equation (4). The classification table is made by dividing the raw residuals of the nuclear charge radii into 10 intervals. The classification value with the highest probability is exactly the refined value for the raw residual. Ref. [27] calculates the raw deviations between the experimental values and the results predicted by the Skyrme–HFB model and Equation (4), and obtains a standard deviation σ = 0.0196 fm for the validation set in the extrapolation. In the subsection on the NBP refinements for the isotopes in Ref. [27], even though the changing features of the Ca isotopes can be reproduced, interesting phenomena such as the nearly identical charge radii of 40Ca and 48Ca, as well as the evident odd–even effects for the Ca isotopes between N = 20 and N = 28, are not completely reproduced.
In Ref. [28], a BNN is used to learn the residuals between the experimental data and the predictions of Equation (3). Along with Z and A, two more terms are introduced as input, i.e., δ and P. These two terms incorporate nuclear pairing and shell closure effects. The study achieves an RMS deviation of 0.0149 fm for the entire set in the medium and heavy mass regions. When extrapolating the charge radii of the Ca isotopes, the BNN fails to predict the odd–even staggering for the nuclei with 36 ≤ A ≤ 39, although the relatively good performance for the potassium isotopes cannot be ignored.
Ref. [29] defines the distance between two nuclei by the Euclidean norm in the Z–N plane. A kernel ridge regression (KRR) model with a Gaussian kernel is applied to reconstruct the differences between the experimental values and the results calculated by six phenomenological formulae, and the RMS deviations are improved to about 0.017 fm at the global level. It should be further explained that these six formulae are not all introduced in Section 1; they also include the N^{1/3} and Z^{1/3} formulae and a formula with the quadrupole deformation, which can be found in the corresponding reference.
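A rough sketch of such a KRR refinement (our illustration, not the code of Ref. [29]) fits a Gaussian (RBF) kernel over the Euclidean distance in the Z–N plane to the residuals between experiment and a phenomenological formula; the residual values and hyperparameters below are placeholders:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

ZN_train = np.array([[20, 20], [20, 28], [28, 30]])    # (Z, N) pairs
residuals = np.array([0.012, -0.008, 0.005])           # R_exp - R_formula (fm), placeholders

krr = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)  # Gaussian kernel on the Z-N plane
krr.fit(ZN_train, residuals)

# Refined radius = formula value + learned residual correction
print(krr.predict(np.array([[20, 24]])))
```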
In Ref. [30], an ANN is used to predict the parameters c and z of a two-parameter Fermi (2pF) distribution, which is assumed for the nuclear charge distributions. Two kinds of inputs, (Z, N, Z^{1/3}) and (Z, N, Z^{1/3}, A^{1/3}), are used. The accuracy and precision of the parameter learning are improved by introducing A^{1/3} in the latter case. However, the RMS deviation between the experimental charge radii and the results of the 2pF distribution with parameters tuned by the ANN is 0.07693 fm, which is limited by the form of the 2pF model.
Following the achievements of Ref. [28], the same team goes on to apply the BNN to improve the residuals of the charge radii of medium and heavy nuclei in Ref. [31]. In contrast to Ref. [28], an isospin-dependent term and an artificial binary encoding of 181,183,185Hg, which show strong odd–even staggering, are added to the input, and an RMS deviation of 0.0139 fm for the testing set is achieved. The extrapolation capabilities of BNNs with four, five and six input features are compared by calculating the RMS deviations of the test data in terms of different mass numbers, extrapolation distances and the isospin asymmetry |N − Z|, which confirms the importance of minimizing model distortion by the manual handling of abnormal data. The BNN with six input quantities performs excellent extrapolation in the proton-rich region of the thallium isotopes and in the proton-rich and neutron-rich regions of the calcium isotopes.
Table 1 compares these ML methods and their results, where σ_in denotes the RMS deviation of the interpolation, while σ_out is the RMS deviation of the extrapolation. In general, when evaluating charge radii, the interpolation is done by selecting a random portion of the entire data set as the test set and the rest as the training set. The extrapolated data, on the other hand, are selected according to the chronological compilation of the experimental nuclear radius data, with the earlier data used for training and the later updated ones for testing. These data were obtained from Refs. [12,16,39], and the compared data for the other methods in Table 1 are from Refs. [16,39], except for the extrapolation in Refs. [28,31], which used data from Refs. [12,16,40]. A new division of test sets aimed at extrapolation along long isotopic chains is proposed in Ref. [29], in which the six most neutron-rich nuclei of each chain are classified into six test sets determined by their extrapolation distances to the nearest isotopes in the training set. It should be noted that the concept of input and output is not strictly appropriate for the NBP classifier method, but the physical quantities are grouped in Table 1 according to the categories divided by the residuals of the charge radii and the classification process in Ref. [27].

3. CNN Method

Data features can be efficiently extracted by a CNN, which is why we use the CNN method. A typical CNN consists of convolutional layers, pooling layers and fully connected layers. We want the input and output images to have the same pixel size, so only convolutional layers are used in the CNNs we construct. A normal convolutional layer requires a three-dimensional arrangement of neurons, channel × height × width (C × H × W), as input. To illustrate the convolutional layer with an example, refer to Figure 1, where an input of size 3 × 5 × 5 is mapped to a hidden layer of size 2 × 3 × 3. Each neuron in one channel of the hidden layer is connected to a 3 × 3 × 3 region of the input neurons, corresponding to 9 pixels in each input channel. That region in the input images is called the local receptive field of the hidden neuron. The convolution means starting with a local receptive field in the top-left corner, then sliding the local receptive field over by one pixel (the stride length) to the right to connect with the second hidden neuron, and so on across the whole input images to build up the hidden layer. It should be mentioned that the same weights and biases are used for each of the 3 × 3 hidden neurons in one channel, which are called shared weights and biases. In practical calculations, for the (j, k)-th hidden neuron in one channel, the output is expressed by:
f\left( \sum_{c=0}^{2} \left( b_c + \sum_{l=0}^{2} \sum_{m=0}^{2} \omega_{c,l,m} \, a_{c,\,j+l,\,k+m} \right) \right).   (6)
Here, f is the neuron activation function, the ReLU function in our case; b is a 3 × 1 × 1 array of shared biases, while ω is a 3 × 3 × 3 array of shared weights, and a_{c,h,w} denotes the input pixel value at position (c, h, w). The shared weights and biases are defined as a kernel or filter. A simple convolutional layer can be implemented by defining the number of input and output channels and the size of the kernel or filter.
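A minimal PyTorch sketch of the layer in Figure 1 (our illustration; note that the standard Conv2d uses one bias per output channel rather than the per-input-channel biases written in Equation (6)) is:

```python
import torch
import torch.nn as nn

# 3 input channels -> 2 output channels, 3x3 kernel, stride 1, no padding,
# so a 3 x 5 x 5 input is mapped to a 2 x 3 x 3 hidden layer as in Figure 1.
conv = nn.Conv2d(in_channels=3, out_channels=2, kernel_size=3, stride=1)

x = torch.randn(1, 3, 5, 5)   # a batch of one 3-channel 5x5 "image"
h = torch.relu(conv(x))       # ReLU activation, as used in this work
print(h.shape)                # torch.Size([1, 2, 3, 3])

# Shared weights and biases: one 3x3x3 kernel per output channel plus one bias each.
print(conv.weight.shape)      # torch.Size([2, 3, 3, 3])
print(conv.bias.shape)        # torch.Size([2])
```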
In our work, the CNNs readily achieve highly accurate fits, while making predictions is more difficult. Therefore, selecting appropriate neural network structures and network inputs is one of the major tasks of our work. Figure 2 shows two kinds of network structures, labeled C1 and C2, respectively. C1 is very common in deep learning; it is actually a convolutional network containing a residual block. For example, "36, 3 × 3 conv1" in C1 is read as follows: conv1 indicates the current convolutional layer, and 36 and 3 × 3 denote the number of output channels and the kernel size of this layer, respectively. The structure of C2 is borrowed from the deep convolutional networks that achieve image super-resolution [41]. CNNs are employed to process images, so the 6 × 102 × 158 matrix diagram is filled with the 6 physical quantities (Z, N, R_c^{rms}, B_av, I, P) of the nuclei according to the layout shown in Figure 3. It should be added that when filling in the R_c^{rms} matrix, we use the experimental values for nuclei with measured charge radii, while the values of R_c^{rms} are calculated by Equation (3) for those that have not been measured. The data set of the binding energy per nucleon B_av is taken from Ref. [42]. P is the Casten factor [17], defined by
P = \frac{N_p N_n}{N_p + N_n} ,   (7)
where N_p and N_n denote the numbers of valence protons and valence neutrons, respectively, counted from the nearest closed shell. In this work, the proton and neutron magic numbers are taken as Z = 2, 8, 20, 28, 50, 82 and N = 2, 8, 20, 28, 50, 82, 126. I is the relative neutron excess [15], given by
I = \frac{N-Z}{A} ,   (8)
which is associated with the isospin.
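For concreteness, a small helper computing the Casten factor of Equation (7) and the relative neutron excess of Equation (8) could be written as below; counting valence nucleons simply as the distance to the nearest listed magic number is our simplification and does not handle nuclei beyond the highest closed shells:

```python
PROTON_MAGIC = (2, 8, 20, 28, 50, 82)
NEUTRON_MAGIC = (2, 8, 20, 28, 50, 82, 126)

def valence(n, magic):
    """Number of valence nucleons, counted from the nearest closed shell."""
    return min(abs(n - m) for m in magic)

def casten_factor(Z, N):
    """Equation (7): P = Np * Nn / (Np + Nn)."""
    Np, Nn = valence(Z, PROTON_MAGIC), valence(N, NEUTRON_MAGIC)
    return 0.0 if Np + Nn == 0 else Np * Nn / (Np + Nn)

def relative_neutron_excess(Z, N):
    """Equation (8): I = (N - Z) / A."""
    return (N - Z) / (N + Z)

# Example: 120Ba (Z = 56, N = 64)
print(casten_factor(56, 64), relative_neutron_excess(56, 64))
```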
We hope to establish a connection between the charge radius of one nucleus and the physical quantities associated with itself and the surrounding nuclei in such isotope matrices. Naturally, it is not necessary to consider 248Cm when the charge radius of 9Li is calculated. We want the size of the CNN output to be consistent with that of the input, and the value in the Zth row and Nth column of the output image represents the charge radius of the nucleus with proton number Z and neutron number N. Based on the defined convolutional layers, it can be inferred which part of the data in the isotope matrices is used in the calculation of one nuclear radius. Thus, for each calculated nucleus, we select a region of size 13 × 13 centered on it in the filled isotope matrices as the input image of the CNNs. As an example, in Figure 3 a 13 × 13 region centered on 16O is framed by the green dotted box, which is the input image for 16O. This division of the input image avoids redundant filtering of the kernel over the images, saving computational effort. As mentioned before, the experimental values are used to fill in the image of the R_c^{rms} channel, so the central value of the R_c^{rms} channel is set to zero to ensure that the experimental datum is not involved in the calculation of the corresponding nucleus. In this way, we obtain a 6 × 13 × 13 input image for each nucleus.
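The construction of these input images can be sketched as follows (the data layout and names are our assumptions, not the authors' code): each physical quantity is stored as a 102 × 158 chart indexed by Z and N, a 13 × 13 window centered on the target nucleus is cut out for all six channels, pixels outside the chart are zero-filled, and the central R_c^{rms} pixel is zeroed:

```python
import numpy as np

QUANTITIES = ["Z", "N", "Rc_rms", "Bav", "I", "P"]

def make_input_patch(charts, Z, N, half=6):
    """Cut the 6 x 13 x 13 region centered on the nucleus (Z, N).

    charts[q] is a 102 x 158 array for quantity q, with row index Z - 1 and
    column index N - 1, filled as described in the text.
    """
    size = 2 * half + 1
    patch = np.zeros((len(QUANTITIES), size, size))
    for c, q in enumerate(QUANTITIES):
        chart = charts[q]
        for dz in range(-half, half + 1):
            for dn in range(-half, half + 1):
                z, n = Z + dz, N + dn
                if 1 <= z <= chart.shape[0] and 1 <= n <= chart.shape[1]:
                    patch[c, dz + half, dn + half] = chart[z - 1, n - 1]
                # pixels outside the chart stay zero, as for the light nuclei
    # Zero the central Rc_rms pixel so the experimental radius of the target
    # nucleus never enters its own prediction.
    patch[QUANTITIES.index("Rc_rms"), half, half] = 0.0
    return patch

# Example: the 6 x 13 x 13 input image of 16O (Z = 8, N = 8), with dummy charts
charts = {q: np.random.rand(102, 158) for q in QUANTITIES}
print(make_input_patch(charts, 8, 8).shape)   # (6, 13, 13)
```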

4. Results and Discussion

All the RMS charge radii are taken from the 2013 compilation [16]; we then pick out the nuclei that do not exist in the 2004 compilation [39] as the test set when extrapolating. The numbers of Y and Pb isotopes, for example, have been expanded from 1 to 16 and from 23 to 32 between the two compilations, respectively. Overall, 820 nuclei beyond 40Ca (Z ≥ 20, A ≥ 40) have been discussed based on the BNN in Ref. [25], while Ref. [27] introduces the NBP classifier to analyze 896 nuclei with A > 3. The corresponding selections can be found in those references, from which we have taken the best extrapolation results here for comparison.
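The chronological split used for the extrapolation test can be expressed compactly; the dictionaries mapping (Z, N) to radii are assumed inputs, and the listed entries are only illustrative:

```python
radii_2004 = {(20, 20): 3.4776, (20, 28): 3.4771}                    # subset of Ref. [39]
radii_2013 = {(20, 20): 3.4776, (20, 28): 3.4771, (20, 30): 3.5168}  # subset of Ref. [16]

train_keys = sorted(radii_2013.keys() & radii_2004.keys())   # nuclei already known in 2004
test_keys = sorted(radii_2013.keys() - radii_2004.keys())    # newly measured nuclei
print(len(train_keys), len(test_keys))                        # 2 1
```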
The extrapolation properties of the CNN method are discussed based on the two considered models. In addition, another kind of input image is also discussed. That is, in the numerical matrix enclosed by the green dashed line in Figure 4, only the values of the pixels framed by the red dashed line are retained for all channels except the R_c^{rms} channel. To distinguish them, we denote the previously obtained input as Input 1, and this input, which does not take into account the (Z, N, B_av, I, P) information of the surrounding nuclei, as Input 2. We individually calculate the nuclei beyond 40Ca and the nuclei with A > 8, and the results are presented in Table 2 and Table 3, respectively. When light nuclei are ignored, in Table 2, the RMS error of the training set of these CNNs is reduced by 0.01 fm compared to the result of the BNN, and a similar improvement is obtained on the test set. Comparing the two models, C1 is more applicable for medium and heavy nuclei. In the calculation of nuclei with A > 8, 897 nuclei are collected in total, but we remove 9Li and 10Be from the test set. Although removing badly predicted nuclei is not an ideal practice, their absolute errors are really large, sometimes over 0.1 fm. There is no lithium isotope in the training data, and only 9Be is involved in the learning. When the 13 × 13 input region images are segmented, the empty parts are filled with zeros after the nuclei are placed at their respective central pixels. Therefore, to obtain better results with CNNs, it should be worthwhile to adjust the input images of light nuclei. The extrapolating ability can still be seen when the 68 low-mass nuclei among the 786 training data and the 7 light nuclei among the 109 test data are taken into consideration. The model C2 seems to perform better over a broader mass range according to Table 3. Turning to the commonality of extrapolation between the two mass ranges of our study, it can be concluded from Table 2 and Table 3 that both C1 and C2 have better extrapolating abilities when using Input 1, which involves the quantities (Z, N, B_av, I, P) of the surrounding nuclei, and an extrapolating error of 0.015 fm can be obtained.
The number of nuclear charge radii has been expanded again in the 2021 compilation [12], where the latest RMS charge radii of 236 nuclei measured by laser spectroscopy experiments are compiled. Combining the three compilations, 1027 charge radii data with A > 8 are used in the subsequent calculations in aggregate. We randomly choose 80% of these data as the training set, and the remaining nuclear data are naturally classified as the test set. The two CNN models, each with the two input forms, are again discussed in such an interpolation. The results of five random divisions of the data set are listed in Table 4. It is evident that using Input 2 gives better predictions for both C1 and C2, which is different from the performance in the extrapolations. C2 has a less obvious advantage for random predictions. Figure 5 shows the deviations between the experimental charge radii and the output of C2 with Input 2; only the results of data divisions 1 and 2 are presented, which makes the discrepancy among different random test sets more explicit. The data with Z = 3 and Z = 4 in test set 1 clearly stand out, and the prediction error for 9Be is even around 0.15 fm. So, the relatively large disparity between the two test sets originates to a large extent from these very light nuclei. In more detail, the well-trained models can make good predictions close to the results of the learning, as the nearly identical tendency over the different Z ranges is captured for these four sets.
Figure 6 vividly shows the differences between the experimental data and the calculations of C2 fed with Input 2. The corresponding model results are those of the random interpolation group 2 in Table 4, and the RMS error of the entire set can be calculated as 0.0108 fm. The majority of the calculated charge radii differ from the experimental values by less than 0.01 fm. The positions of the nuclei with larger errors, labeled by red and black pixels, are distributed similarly in the training and test sets, and are mainly concentrated in the edge zones. Hence, when the data of all 1027 nuclei are fed into such a model, it is possible to predict the unknown charge radii well.
To provide a more intuitive understanding of this work and to compare with other ML methods in Table 1, Table 5 summarizes our application and results for the CNN method.
C2 with Input 2 has been used to predict the charge radii of several isotopic chains for a straightforward perception of the network's performance. The shell effect is an intriguing nuclear property, and its manifestation in charge radii has also been studied [43,44]. We choose the Sr (Z = 38) and Ba (Z = 56) isotopic chains, as well as the isotones with N = 64 and 118, to validate the prediction of the shell effect, and each chain is tested individually. In Figure 7, the C2 predictions for the charge radii of these nuclei are shown and compared with the corresponding experimental values. The transitions of the charge radii at the magic numbers are well reproduced. Almost all nuclei are perfectly reproduced for the isotones; after all, when predicting the charge radii of isotones, their isotopes participate in the learning process. As shown by 120Ba, which is forecast not only in the N = 64 chain but also in the Z = 56 chain, it is more difficult to predict a single nucleus whose isotopes are not trained, according to Figure 7. C2 performs relatively poorly in regenerating the charge radii of the Ba isotopic chain, especially for those nuclei near the drip line. For the left halves of the Sr and Ba chains below the neutron magic numbers, the C2 predictions tend to follow a smooth arc. Thus, the model slightly fails to reproduce the small fluctuations in the left half of the Ba chain.
Figure 8 sequentially compares the predicted charge radii of the four isotopic chains Ca (Z = 20), Zn (Z = 30), Zr (Z = 40) and Pb (Z = 82) with their experimental data. The predicted outputs of the Zn and Pb isotopes are in good agreement with the experimental values; the trends of the charge radii of these two chains are indeed relatively smooth. A shape transition [45] occurs at 100Zr (N = 60), which also contributes to the appearance of shape coexistence [46]. An abrupt increase of the charge radius from 99Zr to 100Zr can be seen in Figure 8. Similar behavior is known to occur at N = 60 in the chains of Rb (Z = 37), Sr (Z = 38) [47,48] and Y (Z = 39) isotopes, and at N = 90 in the chains of Nd (Z = 60), Sm (Z = 62), Gd (Z = 64) and Dy (Z = 66) [45]. However, C2 is overwhelmed by such a sudden transition, according to the predictions for the Sr and Zr isotopes in Figure 7 and Figure 8, respectively. The poor performance on the Ca isotopes is conspicuous. It is a pity that the odd–even staggering between 40Ca and 48Ca has not been reproduced successfully, but the predicted gap between 40Ca and 48Ca is acceptably small.

5. Conclusions

In this article, most of the existing works on ML for nuclear charge radii are briefly reviewed and compared, and the CNN method is employed to reconstruct the 1027 experimental charge radii. We construct two CNNs with different network structures and segment two kinds of input matrix charts from the nuclear data matrices. The extrapolations in the heavy nuclei region and over the global range are compared independently among the different models. The two constructed CNNs with the two input numerical matrices are also used to validate the 20% of nuclei selected randomly from the total set for interpolation. The CNN fed with Input 1 is suited to extrapolation, while Input 2 performs better in interpolation, and a testing RMS error of 0.015 fm can be obtained for both ways of testing the generalization ability of the CNNs. The charge radii of isotones involved in the shell effect are regenerated easily and successfully. When individually predicting whole isotopic chains, the CNN shows a great advantage for those chains with a smooth tendency, but falls slightly short for nuclei near the drip lines.
Although an overall error of 0.0108 fm has been achieved by the CNN approach, we expect that CNN predictions can give a relatively precise description for each nucleus, and that interesting phenomena such as the odd–even staggering of the Ca isotopes and the abrupt change near N = 60 of the Sr and Zr isotopes can be predicted with the help of other effective machine learning methods in the near future. In the special case where there is only one isotope map, with data lying almost exclusively on the diagonal, the proper use of CNNs is also a way to find a solution.

Author Contributions

Investigation, P.S.; writing—original draft preparation, P.S.; writing—review and editing, W.-B.H. and D.-Q.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the National Natural Science Foundation of China (Nos. 11925502, 11935001, 11961141003, 11421505, 11475244, 11927901, and 11835002), the Strategic Priority Research Program of the CAS (No. XDB34030000), the National Key R&D Program of China (No. 2018YFA0404404).

Data Availability Statement

All data that support the findings of this work are available from the corresponding authors upon reasonable request; moreover, they can be found in the cited references.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shaginyan, V.R. Coulomb Energy of Nuclei. Phys. At. Nucl. 2001, 64, 471–476. [Google Scholar] [CrossRef]
  2. Mayer, M.G. On Closed Shells in Nuclei. Phys. Rev. 1948, 74, 235–239. [Google Scholar] [CrossRef]
  3. Haxel, O.; Jensen, J.H.D.; Suess, H.E. On the magic numbers in nuclear structure. Phys. Rev. 1949, 75, 1766. [Google Scholar] [CrossRef]
  4. Brown, B.A. Mirror charge radii and the neutron equation of state. Phys. Rev. Lett. 2017, 119, 122502. [Google Scholar] [CrossRef] [PubMed]
  5. Yang, J.J.; Piekarewicz, J. Difference in proton radii of mirror nuclei as a possible surrogate for the neutron skin. Phys. Rev. C 2018, 97, 014314. [Google Scholar] [CrossRef]
  6. Sammarruca, F. Proton skins, Neutron skins and proton radii of mirror nuclei. Front. Phys. 2018, 6, 90. [Google Scholar] [CrossRef]
  7. Vries, H.D.; Jager, C.; Vries, C.D. Nuclear charge-density-distribution parameters from elastic electron scattering. At. Data Nucl. Data Tables 1987, 36, 495–536. [Google Scholar] [CrossRef]
  8. Fricke, G.; Bernhardt, C.; Heilig, K.; Schaller, L.A.; Schellenberg, L.; Shera, E.B.; Dejager, C.W. Nuclear ground state charge radii from electromagnetic interactions. At. Data Nucl. Data Tables 1995, 60, 177–285. [Google Scholar] [CrossRef]
  9. Lee, F. Changes of mean-square nuclear charge radii from isotope shifts of electronic Kα X-rays. At. Data Nucl. Data Tables 1974, 14, 605–611. [Google Scholar]
  10. Tran, D.T.; Ong, H.J.; Nguyen, T.T.; Tanihata, I.; Aoi, N.; Ayyad, Y.; Chan, P.Y.; Fukuda, M.; Hashimoto, T.; Hoang, T.H.; et al. Charge-changing-cross-section measurements of 12–16C at around 45A MeV and development of a Glauber model for incident energies 10–2100A MeV. Phys. Rev. C 2016, 94, 064604. [Google Scholar] [CrossRef]
  11. Kanungo, R.; Horiuchi, W.; Hagen, G.; Jansen, G.R.; Navratil, P.; Ameil, F.; Atkinson, J.; Ayyad, Y.; Cortina-Gil, D.; Dillmann, I.; et al. Proton distribution radii of 12–19C illuminate features of neutron halos. Phys. Rev. Lett. 2016, 117, 102501. [Google Scholar] [CrossRef] [PubMed]
  12. Li, T.; Luo, Y.N.; Wang, N. Compilation of recent nuclear ground state charge radius measurements and tests for models. At. Data Nucl. Data Tables 2021, 140, 101440. [Google Scholar] [CrossRef]
  13. Sheng, Z.Q.; Fan, G.W.; Qian, J.F.; Hu, J.G. An effective formula for nuclear charge radii. Eur. Phys. J. A 2015, 51, 40. [Google Scholar] [CrossRef]
  14. Brown, B.A.; Bronk, C.; Hodgson, P.E. Systematics of Nuclear RMS Charge Radii. J. Phys. G Nucl. Phys. 1984, 10, 1683–1701. [Google Scholar] [CrossRef]
  15. Nerlo-Pomorska, B.; Pomorski, K. A simple formula for nuclear charge radius. Z. Phys. A 1994, 384, 169–172. [Google Scholar] [CrossRef]
  16. Angeli, I.; Marinova, K.P. Table of experimental nuclear ground state charge radii: An update. At. Data Nucl. Data Tables 2013, 99, 69–95. [Google Scholar] [CrossRef]
  17. Casten, R.F.; Brenner, D.S.; Haustein, P.E. Valence p-n interactions and the development of collectivity in heavy nuclei. Phys. Rev. Lett. 1987, 58, 658. [Google Scholar] [CrossRef]
  18. Virender, T.; Shashi, K.D. A study of charge radii and neutron skin thickness near nuclear drip line. Nucl. Phys. A 2019, 992, 121623. [Google Scholar]
  19. Warda, M.; Nerlo-Pomorska, B.; Pomorski, K. Isospin Dependence of Proton and Neutron Radii within Relativistic Mean Field Theory. Nucl. Phys. A 1998, 635, 484–494. [Google Scholar] [CrossRef]
  20. Wang, J.S.; Shen, W.Q.; Zhu, Z.Y.; Feng, J.; Guo, Z.Y.; Zhan, W.L.; Xiao, G.Q.; Cai, X.Z.; Fang, D.Q.; Zhang, H.Y.; et al. RMF calculation and phenomenological formulas for the rms radii of light nuclei. Nucl. Phys. A 2001, 691, 618–630. [Google Scholar] [CrossRef]
  21. Boehnlein, A.; Diefenthaler, M.; Sato, N.; Schram, M.; Ziegler, V.; Fanelli, C.; Hjorth-Jensen, M.; Horn, T.; Kuchera, M.P.; Lee, D.; et al. Colloquium: Machine learning in nuclear physics. Rev. Mod. Phys. 2022, 94, 031003. [Google Scholar] [CrossRef]
  22. Bedaque, P.; Boehnlein, A.; Cromaz, M.; Diefenthaler, M.; Elouadrhiri, L.; Horn, T.; Kuchera, M.; Lawrence, D.; Lee, D.; Lidia, S.; et al. AI for nuclear physics. Eur. Phys. J. A 2021, 57, 100. [Google Scholar] [CrossRef]
  23. Schwartz, M.D. Modern Machine Learning and Particle Physics. Harv. Data Sci. Rev. 2021, 3, 2. [Google Scholar] [CrossRef]
  24. Akkoyun, S.; Bayram, T.; Kara, S.O.; Sinan, A. An artificial neural network application on nuclear charge radii. J. Phys. G Nucl. Part. Phys. 2013, 40, 055106. [Google Scholar] [CrossRef]
  25. Utama, R.; Chen, W.C.; Piekarewicz, J. Nuclear charge radii: Density functional theory meets Bayesian neural networks. J. Phys. G Nucl. Part. Phys. 2016, 43, 114002. [Google Scholar] [CrossRef]
  26. Wu, D.; Bai, C.L.; Sagawa, H.; Zhang, H.Q. Calculation of nuclear charge radii with a trained feed-forward neural network. Phys. Rev. C 2020, 102, 054323. [Google Scholar] [CrossRef]
  27. Ma, Y.F.; Su, C.; Liu, J.; Ren, Z.Z.; Xu, C.; Gao, Y.H. Predictions of nuclear charge radii and physical interpretations based on the naive Bayesian probability classifier. Phys. Rev. C 2020, 101, 014304. [Google Scholar] [CrossRef]
  28. Dong, X.X.; An, R.; Lu, J.X.; Geng, L.S. Novel Bayesian neural network based approach for nuclear charge radii. Phys. Rev. C 2022, 105, 014308. [Google Scholar] [CrossRef]
  29. Ma, J.Q.; Zhang, Z.H. Improved phenomenological nuclear charge radius formulae with kernel ridge regression. Chin. Phys. C 2022, 46, 074105. [Google Scholar] [CrossRef]
  30. Shang, T.S.; Li, J.; Niu, Z.M. Prediction of nuclear charge density distribution with feedback neural network. Nucl. Sci. Tech. 2022, 33, 153. [Google Scholar] [CrossRef]
  31. Dong, X.X.; An, R.; Lu, J.X.; Geng, L.S. Nuclear charge radii in Bayesian neural networks revisited. Phys. Lett. B 2023, 838, 137726. [Google Scholar] [CrossRef]
  32. Wang, Y.; Zhang, Q.H.; Yao, Q.X.; Huo, Y.G.; Zhou, M.; Lu, Y.F. Multiple radionuclide identification using deep learning with channel attention module and visual explanation. Front. Phys. 2022, 10, 1036557. [Google Scholar] [CrossRef]
  33. Niu, Z.M.; Liang, H.Z. Nuclear mass predictions based on Bayesian neural network approach. Phys. Lett. B 2018, 778, 48–53. [Google Scholar] [CrossRef]
  34. Wu, X.H.; Guo, L.H.; Zhao, P.W. Nuclear masses in extended kernel ridge regression with odd-even effects. Phys. Lett. B 2021, 819, 136387. [Google Scholar] [CrossRef]
  35. Saxena, G.; Sharma, P.K.; Saxena, P. Modified empirical formulas and machine learning for α-decay systematics. J. Phys. G Nucl. Part. Phys. 2021, 48, 055103. [Google Scholar] [CrossRef]
  36. Wang, Z.A.; Pei, J.C.; Liu, Y.; Qiang, Y. Bayesian Evaluation of incomplete fission yields. Phys. Rev. Lett. 2019, 123, 122501. [Google Scholar] [CrossRef] [PubMed]
  37. Nielsen, M.A. Neural Networks and Deep Learning; Determination Press: San Francisco, CA, USA, 2015. [Google Scholar]
  38. Murphy, K.P. Probabilistic Machine Learning: An Introduction; The MIT Press: Cambridge, MA, USA; London, UK, 2022; pp. 463–497. [Google Scholar]
  39. Angeli, I. A consistent set of nuclear rms charge radii: Properties of the radius surface R(N,Z). At. Data Nucl. Data Tables 2004, 87, 185–206. [Google Scholar] [CrossRef]
  40. Day Goodacre, T.; Afanasjev, A.V.; Barzakh, A.E.; Marsh, B.A.; Sels, S.; Ring, P.; Nakada, H.; Andreyev, A.N.; Van Duppen, P.; Althubiti, N.A.; et al. Laser Spectroscopy of Neutron-Rich 207,208Hg Isotopes: Illuminating the Kink and Odd-Even Staggering in Charge Radii across the N = 126 Shell Closure. Phys. Rev. Lett. 2021, 126, 032502. [Google Scholar] [CrossRef]
  41. Dong, C.; Loy, C.C.; He, K.; Tang, X.O. Image Super-Resolution Using Deep Convolutional Networks. IEEE T-PAMI 2016, 38, 295–307. [Google Scholar] [CrossRef]
  42. Wang, M.; Huang, W.J.; Kondev, F.G.; Audi, G.; Naimi, S. The AME2020 atomic mass evaluation (II). Tables, graphs and references. Chin. Phys. C 2021, 45, 030003. [Google Scholar] [CrossRef]
  43. Wang, N.; Li, T. Shell and isospin effects in nuclear charge radii. Phys. Rev. C 2013, 88, 011301. [Google Scholar] [CrossRef]
  44. An, R.; Jiang, X.; Cao, L.G.; Zhang, F.S. Odd-even staggering and shell effects of charge radii for nuclei with even Z from 36 to 38 and from 52 to 62. Phys. Rev. C 2022, 105, 014325. [Google Scholar] [CrossRef]
  45. Cejnar, P.; Jolie, J.; Casten, R.F. Quantum phase transitions in the shapes of atomic nuclei. Rev. Mod. Phys. 2010, 82, 2155. [Google Scholar] [CrossRef]
  46. Heyde, K.; Wood, J.L. Shape coexistence in atomic nuclei. Rev. Mod. Phys. 2011, 83, 1467. [Google Scholar] [CrossRef]
  47. Silverans, R.E.; Lievens, P.; Vermeeren, L.; Arnold, E.; Neu, W.; Neugart, R.; Wendt, K.; Buchinger, F.; Ramsay, E.B.; Ulm, G. Nuclear Charge Radii of 70–100Sr by Nonoptical Detection in Fast-Beam Laser Spectroscopy. Phys. Rev. Lett. 1988, 60, 2607–2610. [Google Scholar] [CrossRef]
  48. Rodriguez-Guzman, R.; Sarriguren, P.; Robledo, L.M.; Perez-Martin, S. Charge radii and structural evolution in Sr, Zr and Mo isotopes. Phys. Lett. B 2010, 691, 202–207. [Google Scholar] [CrossRef]
Figure 1. An example of a convolutional layer with the 2 × 3 × 3 × 3 kernel. The notation * indicates the convolution operation. 3 channels in the input layer are mapped to 2 channels in the hidden layer with the stride length of 1 pixel.
Figure 2. The structure of the constructed CNNs. C1 consists of four convolutional layers with one residual block, while C2 is a simple network with three layers.
Figure 3. The matrix layout of nuclear isotopes with 102 rows and 158 columns. The 102 × 158 matrix is filled for each of the six physical quantities (Z, N, R_c^{rms}, B_av, I, P), so a 6 × 102 × 158 numerical matrix is obtained. For each nucleus, only the 13 × 13 square matrix framed by the green dashed line is used as the input of the CNNs in practical calculations. The heaviest nucleus in the collected experimental data is 248Cm (Z = 96); when this region is centered on it, 102 × 158 matrices are required.
Figure 4. An example of the input of the CNNs. Input 1 consists of the R_c^{rms} channel without the central datum and the fully filled (Z, N, B_av, I, P) channels, while Input 2 is composed of the same R_c^{rms} channel and (Z, N, B_av, I, P) channels in which only the central data are retained.
Figure 5. The deviations between the experimental charge radii (R_exp) and the results of C2 fed with Input 2 (R_C2). The left panels are for the training sets, while the results for the test sets are shown in the right ones. The labels 1 and 2 correspond to the groups of data divisions 1 and 2 in Table 4.
Figure 6. The differences between the experimental data and the results of C2 with Input 2 fed. The data group is from the random division 2 in Table 4.
Figure 7. The comparison of C2 predictions with the experimental charge radii for isotones with N = 64, 118 and Sr, Ba isotopes.
Figure 8. The differences between predicted results and experimental values of the nuclear charge radii of Ca, Zn, Zr and Pb isotopic chains.
Table 1. The comparison of different ML models for charge radii. The number of nuclei involved in the corresponding model is marked as (Count). The RMS deviations of the interpolation and extrapolation for the data sets are denoted as σ_in and σ_out, respectively, and ΔR = R_c^exp − R_c^th is the residual between the experimental data and the values calculated by theoretical models or phenomenological formulae.
| Reference | ML Method | Data Range (Count) | Input | Output | σ_in (fm): Train / Test / Entire | σ_out (fm): Train / Test / Entire |
| Ref. [24] | ANN | A ≥ 6 (900) | Z, N | R_c^rms | 0.036 / 0.025 / – | – |
| Ref. [25] * | BNN | Z ≥ 20, A ≥ 40 (820) | Z, A | ΔR | 0.0171 / 0.0163 / 0.0169 | 0.0210 / 0.0262 / 0.0217 |
| Ref. [26] | ANN | (347) | Z, N, g(B(E2), δ) | R_c^rms | 0.0266 / 0.0231 / – | – |
| Ref. [27] * | NBPc | A > 3 (896) | Z, N | ΔR | – / – / 0.0195 | 0.0200 / 0.0196 / 0.0195 |
| Ref. [28] | BNN | Z ≥ 20, A ≥ 40 (933) | Z, A, δ, P | ΔR | – | 0.0143 / 0.0187 / 0.0149 |
| Ref. [29] * | KRR | Z ≥ 8, N ≥ 8 (884) | Z, N | ΔR | – | 0.0123 / 0.0268 / 0.0168 |
| Ref. [30] | FNN | (370) | Z, N, Z^{1/3}, A^{1/3} | c, z | – / – / 0.0769 | – |
| Ref. [31] | BNN | Z ≥ 20, A ≥ 40 (933) | Z, A, δ, P, I², L_I | ΔR | – | 0.0140 / 0.0139 / – |
* The deviations of the experimental values from a variety of theoretical models or phenomenological formulae are refined in these works, and the results in the table are for the best respective groups.
Table 2. The extrapolating standard deviations σ (fm) obtained from different neural network models using nuclei beyond 40Ca as input data. The 722 nuclei beyond 40Ca in the 2004 compilation [39] are chosen as the training set, and the remaining 98 nuclei with Z ≥ 20, A ≥ 40 in the 2013 compilation [16] form the test set. The results of the BNN are from Ref. [25].
| Method | Data | σ Train (fm) (Count) | σ Test (fm) (Count) |
| BNN [25] | Z ≥ 20, A ≥ 40 | 0.0210 (722) | 0.0262 (98) |
| C1 | Input 1 | 0.0113 (722) | 0.0145 (98) |
| C1 | Input 2 | 0.0114 (722) | 0.0160 (98) |
| C2 | Input 1 | 0.0148 (722) | 0.0145 (98) |
| C2 | Input 2 | 0.0118 (722) | 0.0180 (98) |
Table 3. The extrapolating standard deviations σ (fm) obtained from different machine learning models when nuclei from a more global mass-number region are used as input. The data of the NBP method are from Ref. [27].
| Method | Data | σ Train (fm) (Count) | σ Test (fm) (Count) |
| NBP [27] | A > 3 | 0.0200 (787) | 0.0196 (82) |
| C1 | A > 8 (Input 1) | 0.0133 (786) | 0.0161 (109) |
| C1 | A > 8 (Input 2) | 0.0133 (786) | 0.0178 (109) |
| C2 | A > 8 (Input 1) | 0.0114 (786) | 0.0149 (109) |
| C2 | A > 8 (Input 2) | 0.0115 (786) | 0.0157 (109) |
Table 4. The interpolating standard deviations σ (fm) obtained from the C1 and C2 model both with the two inputting forms. Overall, 1027 nuclei with A > 8 from Refs. [12,16] are chosen as the entire set. Then, 80% of the data are randomly selected from it five times as 5 training sets, and the corresponding remaining data are divided into test sets.
| Model | Data Set | 1 | 2 | 3 | 4 | 5 | Average |
| C1 Input 1 | σ_train | 0.0141 | 0.0124 | 0.0111 | 0.0130 | 0.0126 | 0.0126 |
| C1 Input 1 | σ_test | 0.0206 | 0.0144 | 0.0167 | 0.0211 | 0.0163 | 0.0178 |
| C1 Input 2 | σ_train | 0.0132 | 0.0128 | 0.0130 | 0.0135 | 0.0138 | 0.0133 |
| C1 Input 2 | σ_test | 0.0181 | 0.0140 | 0.0140 | 0.0165 | 0.0166 | 0.0158 |
| C2 Input 1 | σ_train | 0.0149 | 0.0115 | 0.0125 | 0.0124 | 0.0128 | 0.0128 |
| C2 Input 1 | σ_test | 0.0204 | 0.0132 | 0.0183 | 0.0190 | 0.0175 | 0.0177 |
| C2 Input 2 | σ_train | 0.0127 | 0.0101 | 0.0110 | 0.0112 | 0.0128 | 0.0116 |
| C2 Input 2 | σ_test | 0.0178 | 0.0132 | 0.0138 | 0.0138 | 0.0179 | 0.0153 |
Table 5. The summary of the CNN method for charge radii in this work. As in Table 1, the RMS deviations of the interpolation and extrapolation for the data sets are denoted as σ_in and σ_out, respectively. The RMS charge radii of the surrounding nuclei are used as input in this work, denoted as R_c^o.
| Reference | ML Method | Data Range | Input | Output | σ_in (fm): Train / Test / Entire | σ_out (fm): Train / Test / Entire |
| This work | CNN | A > 8 | Z, N, R_c^o, B_av, I, P | R_c^rms | 0.0116 / 0.0153 / – | 0.0114 / 0.0149 / 0.0118 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Su, P.; He, W.-B.; Fang, D.-Q. Progress of Machine Learning Studies on the Nuclear Charge Radii. Symmetry 2023, 15, 1040. https://doi.org/10.3390/sym15051040

