Article

A Neural-Network-Based Competition between Short-Lived Particle Candidates in the CBM Experiment at FAIR

by Artemiy Belousov 1,2,†, Ivan Kisel 1,2,3,4,*,† and Robin Lakos 1,2,*,†

1 Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany
2 Institute of Computer Science, J. W. Goethe University, 60629 Frankfurt am Main, Germany
3 GSI Helmholtz Centre for Heavy Ion Research, 64291 Darmstadt, Germany
4 Helmholtz Research Academy Hesse for FAIR, 60438 Frankfurt am Main, Germany
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2023, 16(8), 383; https://doi.org/10.3390/a16080383
Submission received: 31 May 2023 / Revised: 28 July 2023 / Accepted: 8 August 2023 / Published: 9 August 2023
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)

Abstract
Fast and efficient algorithms optimized for high-performance computers are crucial for the real-time analysis of data in heavy-ion physics experiments. Furthermore, the application of neural networks and other machine learning techniques has become more popular in physics experiments in recent years. For that reason, a fast neural network package called ANN4FLES has been developed in C++, which will be optimized for use on a high-performance computer farm for the future Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR, Darmstadt, Germany). This paper describes the first application of ANN4FLES in the reconstruction chain of the CBM experiment, replacing the existing particle competition between K s -mesons and Λ -hyperons in the KF Particle Finder with a neural-network-based approach. The raw classification performance of the neural network reaches over 98% on the testing set. Furthermore, it is shown that the neural-network-based competition reduces the background noise and thereby improves the quality of the physics analysis.

1. Introduction

Over the last few years, the remarkable growth of machine learning and neural networks has had a profound impact across multiple scientific fields, redefining approaches to problem-solving and pushing the boundaries of scientific exploration [1]. The versatility and learning capacity of neural networks have positioned them as formidable tools in computational sciences, capable of addressing a broad spectrum of challenges across various domains. This includes, for instance, solving differential equations [2], and addressing forward and inverse problems involving nonlinear partial differential equations using physics-informed neural networks [3].
The predictive prowess of neural networks has resulted in models of outstanding accuracy. In fields like physics and computational science, these methodologies are offering new perspectives and solutions to longstanding challenges [4]. Particularly in particle physics, these methodologies permit applications at different stages of experiments, enabling entirely new approaches or complementing existing algorithms. This includes particle identification, trajectory reconstruction, event classification, and others [5,6].
This powerful interplay between computational science and particle physics will be prominently featured in the future heavy-ion experiment, Compressed Baryonic Matter (CBM), at the Facility for Antiproton and Ion Research (FAIR) [7]. This experiment will leverage advancements in neural networks for improved data analysis and interpretation. CBM is planned as a fixed-target experiment using the particle accelerator at FAIR. It will provide scientists with the ability to explore states of matter in regions of high baryonic densities at moderate temperatures [8] and will allow for the search for rare short-lived particles at unprecedented collision rates of up to 10 MHz [9].
In CBM, heavy ions (e.g., gold ions) will be accelerated to almost the speed of light and collided with a target (e.g., a thin gold plate) to induce nucleus–nucleus collisions that create states of matter with extremely high density [10]. Under these conditions, a state known as the Quark–Gluon Plasma can be produced, in which quarks and gluons are freed from their usual confinement within hadrons. After that phase, during hadronization, quarks combine into new particles that burst out of the central collision point (primary vertex) into the detector setup.
Some of these newly formed particles decay almost immediately, either due to instability or due to interactions with other particles. A decaying particle is generally referred to as the mother particle, whereas the particles created in the decay are the so-called daughter particles. Mother particles that decay almost immediately are called short-lived particles; they can only be measured indirectly via the reconstruction of their decays or decay chains, since they rarely reach any detector before decaying. The daughter particles’ trajectories (tracks) are extrapolated to find the point of decay (secondary vertex), and their properties help to identify the decayed particle.
An efficient search for rare short-lived particles requires the already-mentioned high interaction rates of up to 10 MHz, which creates further computational challenges. At these interaction rates, it is not possible to store the data streams entirely, as they amount to approximately 1 TB of data per second [11]. To reduce the amount of data that has to be stored, a reduction by approximately three orders of magnitude [12] is required, which can be achieved by selecting only collisions (events) of interest to physicists. However, in CBM, there is no simple criterion for event selection, as the search for rare short-lived particles requires a full event reconstruction [13], including the reconstruction of decays and decay chains.
An algorithm package called First Level Event Selection (FLES) [14] is used to provide a full event reconstruction in real time, which allows the selection of events of interest and therefore significantly reduces the amount of data that has to be stored on disk. The algorithms used for the experiment are tested and evaluated step by step using precise Monte Carlo simulated data. Thus, the true outcome of each event is known, and reconstructed particles can be matched with the corresponding simulated particles for a performance analysis. The reconstructed particle candidates are identified with a hypothesis, which can be tested and evaluated using simulated data. In real experiments, however, this comparison is not possible, which makes a detailed performance analysis a crucial step for a successful experiment. Therefore, the presented approach uses the already-established tools of the KF Particle Finder to measure the performance within the reconstruction chain of CBM.
The Kalman Filter (KF) Particle Finder is a package included in FLES, responsible for particle and decay reconstruction, including the reconstruction of mother particles. When mother particles are created, the particle type and its properties are inferred from the properties of the daughter particles. Due to inaccuracies in the reconstruction of daughter particles, the reconstruction of multiple mother particle candidates is not uncommon. For that reason, the candidates pass through a particle competition to find the best-fitting one. That way, the background noise produced by mistakenly created mother particles can be reduced.
The presented neural network approach replaces the existing particle competition of the KF Particle Finder. For each pair of competing mother particle candidates, the neural network classifies the more probable candidate based on the reconstructed properties. Previous work [15] already investigated a multi-layer-perceptron-based approach, demonstrating that such models are generally capable of solving the problem with comparable results. The present work shows improvements in the particle competition by using a more complex model with a different topology and hyperparameters. The new model provides a raw classification performance with an error of less than 2% on the test set.

2. Materials and Methods

2.1. Particle Competition of K s and Λ

The neutral particles K s -mesons and Λ -hyperons serve as important indicators for the CBM experiment. K s consists of a down quark and a strange antiquark, whereas Λ is built from an up quark, a down quark and a strange quark. Theoretical predictions suggest that enhanced strangeness production (the production of strange, multi-strange or hyper-strange particles, consisting of one, two or three strange quarks or strange antiquarks) is an indicator of deconfined matter [16], the Quark–Gluon Plasma. Both particles are abundantly created in the energy range of the experiment [15] and are therefore a reliable source of information. The two particles decay with the given probability BR (branching ratio) as follows:
K s → π⁺ π⁻ (BR: 69.20 ± 0.05%),  Λ → p π⁻ (BR: 63.9 ± 0.5%),
(please see [17,18], respectively); therefore, both give rise to negatively charged pions π⁻. Moreover, all daughter particles of K s and Λ are charged and can therefore be measured by the detectors and reconstructed by the algorithms. The particle reconstruction procedure combines all possible daughter particles, so a “common” pion, which is a decay product of both the Λ -hyperon and the K s -meson, leads to the creation of both possible mother particles, even though only one exists in the Monte Carlo simulated data. As a result, Λ and K s create background noise for each other, which hinders the physics analysis of the particles using real experiment data. Performing a particle competition to decide on the best-fitting mother particle helps to reduce this physical background noise by removing the falsely created mother particle candidate.
The Kalman Filter (KF) Particle Finder [19] is an important package within FLES, responsible for the online reconstruction of short-lived particles and their decay chains. The package was recently extended by the Missing Mass method [20] and is now capable of reconstructing more than 150 different particle decays. Furthermore, it includes tools for performance measurements in the particle reconstruction process. The KF Particle Finder reconstructs all possible mother particle candidates when a decay is recognized. Then, χ²-cuts and a particle competition are applied to select the best-fitting reconstructed particles and to reject the others. The existing implementation of the particle competition has several stages. One of the most important steps is the evaluation of the candidates’ reconstructed mass values. Here, the algorithm examines the reconstructed masses of each pair of candidates (competitors) and checks whether one of them lies within a 3σ range of the known mass distribution peak, the so-called Particle Data Group (PDG) mass (the PDG is a global collaboration of particle physicists that defines standards and publishes reference values). If one of them is within the range, the distance of the reconstructed mass to the respective particle’s PDG mass is used to determine the best-fitting mother particle.
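To make the selection rule concrete, the following Python sketch illustrates this mass-based stage only. All names (mass_competition, the candidate attributes mass, pdg_mass and sigma) are hypothetical; the actual implementation is part of the C++ KF Particle Finder package and contains further selection stages.

```python
# Illustrative sketch of the mass-based competition described above.
# All names are hypothetical; the real implementation lives in the C++
# KF Particle Finder package and includes additional stages.

def mass_competition(candidate_a, candidate_b):
    """Return the better-fitting of two competing mother-particle candidates.

    Each candidate is assumed to provide:
      mass      -- reconstructed invariant mass
      pdg_mass  -- PDG mass of the hypothesis (K_s or Lambda)
      sigma     -- width of the reconstructed mass distribution
    """
    def in_window(c):
        # candidate lies within 3 sigma of its hypothesis' PDG mass peak
        return abs(c.mass - c.pdg_mass) < 3.0 * c.sigma

    def distance(c):
        # distance of the reconstructed mass to the PDG mass
        return abs(c.mass - c.pdg_mass)

    a_ok, b_ok = in_window(candidate_a), in_window(candidate_b)
    if a_ok and not b_ok:
        return candidate_a
    if b_ok and not a_ok:
        return candidate_b
    # if both (or neither) lie inside the window, prefer the smaller distance
    return candidate_a if distance(candidate_a) <= distance(candidate_b) else candidate_b
```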

2.2. Data Extraction Using the KF Particle Finder Package

In CBM, the performance of the algorithms within the FLES package is measured by comparing Monte Carlo simulated data with the results reconstructed by the algorithm packages. Using this approach, the reconstruction efficiency and precision of each part of the package can be evaluated. The Monte Carlo data can be considered the true outcome of each event, allowing the application of supervised learning techniques. At the stage of the KF Particle Finder, when particles and decay chains are reconstructed, they are matched with the corresponding Monte Carlo true particles. In this work, the Monte Carlo information of each matched particle is used to check whether the reconstructed particle is a true K s -meson or Λ -hyperon. If this is the case, the information of the corresponding reconstructed particle is extracted to build a dataset. In real experiments, Monte Carlo information does not exist; the neural network therefore has to perform on reconstructed information and hypotheses only.
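As a rough illustration of how such a labeled dataset can be assembled from reconstructed candidates matched to Monte Carlo truth, consider the sketch below. The field names and the particular feature set are assumptions for illustration only; the actual extraction happens inside the KF Particle Finder code.

```python
# Hypothetical sketch of the dataset extraction described above.
# 'reco' stands for a reconstructed mother-particle candidate that has
# already been matched to its Monte Carlo truth; field names are illustrative.

K_SHORT_PDG = 310     # PDG code of K_s
LAMBDA_PDG = 3122     # PDG code of Lambda

def build_dataset(matched_candidates):
    features, labels = [], []
    for reco in matched_candidates:
        # keep only candidates whose MC truth is a K_s or a Lambda
        if reco.mc_pdg not in (K_SHORT_PDG, LAMBDA_PDG):
            continue
        # reconstructed information only -- this is all that is available
        # in a real experiment, where no MC truth exists
        features.append([reco.mass, reco.hypothesis_pdg_mass])
        labels.append(0 if reco.mc_pdg == K_SHORT_PDG else 1)
    return features, labels
```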
Overall, 25,000 events generated with the UrQMD model [21] were used in this work. A total of 12,000 events were used for training and testing, whereas 13,000 were used to evaluate the network’s performance in comparison to the existing approach. All events are central Au+Au collisions at 10 GeV, and therefore within the specifications of the CBM experiment. In general, the generated particles are processed by a transport engine (e.g., GEANT4 [22]) to simulate the particles flying through the detector system, including all relevant physics processes such as decays, scattering and interactions [23]. At this point, the whole event outcome is simulated, including all decay chains and trajectories. Afterward, the detector responses are generated, resulting in measurements (hits) like those that real particles would produce when interacting with a detector. The hits are then used to reconstruct tracks, and the KF Particle Finder is applied to reconstruct short-lived particles and decays.
The first part of the dataset (12,000 events) is used to train and test the neural network architectures in the Artificial Neural Networks for First Level Event Selection (ANN4FLES) [24] standalone package. Here, the raw classification performance was measured via the accuracy and cross-entropy loss values per epoch. This dataset was divided in an 80:20 ratio into a training set and a testing set, respectively. Several fully connected neural network architectures and settings were tested by hand, and a multi-layer perceptron with three hidden layers, reaching a peak accuracy of more than 98% on the test set, was finally chosen.
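A minimal sketch of the 80:20 split described here, assuming the extracted candidate rows (features and labels, as in the sketch above) are held in plain Python lists:

```python
import random

def split_dataset(features, labels, train_fraction=0.8, seed=0):
    # shuffle and split the extracted candidate rows 80:20 into
    # a training set and a testing set
    rows = list(zip(features, labels))
    random.Random(seed).shuffle(rows)
    cut = int(train_fraction * len(rows))
    return rows[:cut], rows[cut:]
```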
The second part of the dataset, consisting of 13,000 events, is used to compare the neural-network-based approach with the existing particle competition within the KF Particle Finder. Hence, after training and testing, the pre-trained neural network is implemented and evaluated in the KF Particle Finder package as part of the reconstruction chain of the CBM experiment. That way, it is possible to use the performance measurement tools included in the KF Particle Finder, which are a well-established standard for evaluating algorithm performance in heavy-ion physics experiments.

2.3. Performance Measurements in the KF Particle Finder Package

When specific particles are investigated for physics analysis, it is important to obtain clean probes. Due to the large number of particles (up to 1000) in a collision [25], many tracks point to the collision point (primary vertex), creating a large number of possible track combinations. Due to the limited detector resolution and the inaccuracies of floating-point arithmetic in a computer system, tracks and particle decays can only be reconstructed within an acceptable range of imperfection. In several cases, multiple tracks lie within a defined error range, such that uncertainties in the reconstruction cannot be prevented. Some of the reconstructed tracks correspond to real tracks, while others produce so-called combinatorial background due to mismatched track segments in the reconstruction process. These can be identified by comparison with the simulated data.
When using Monte Carlo simulated data, particles that were reconstructed and classified correctly are called signal, whereas particles that were mistakenly reconstructed are called ghosts, and misclassified particles are generally referred to as physical background for the respective other particle. The KF Particle Finder package produces histograms for several parameters, such as the particles’ mass distributions, with separate histograms for each of these categories. This allows a detailed analysis of the reconstruction performance when working with simulated data.
Besides the histograms, metrics can be calculated based on them. These include the significance and the signal-to-background ratio (S/B ratio). The S/B ratio is simply the amount of signal divided by the amount of background, which allows the approaches to be evaluated by showing whether one approach rejects more signal relative to the amount of rejected background. The significance expresses the size of the signal peak relative to the (lower) peaks produced by background fluctuations. A significance of 1 indicates that background fluctuations are as large as the signal; in a real experiment, such a peak would therefore not be recognized as a particle signal, as it is indistinguishable from background fluctuations. Conversely, a significance of 5 is considered the threshold for regarding a peak as the signal of a particle worth investigating. However, since Λ and K s are already well known and the parameters are usually set to find them with high significance, this threshold is always reached for these particles, even without competition. Nevertheless, a large reduction in significance should be investigated.
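For reference, the S/B ratio follows directly from the signal and background counts, and the significance is commonly estimated as S/√(S+B) in heavy-ion analyses; that exact formula is an assumption here, since the text only describes the significance qualitatively.

```python
import math

def s_over_b(signal, background):
    # signal entries divided by background entries in the peak region
    return signal / background

def significance(signal, background):
    # common estimator S / sqrt(S + B); this exact formula is an assumption,
    # the text describes the significance only qualitatively
    return signal / math.sqrt(signal + background)

# Illustrative counts only (not the values behind the figures):
print(s_over_b(3580, 1000))      # -> 3.58
print(significance(3580, 1000))  # -> about 52.9
```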

2.4. Training and Testing Using ANN4FLES and PyTorch

The C++ package ANN4FLES is designed for the fast and efficient creation of various neural network architectures and will be optimized for use within the full event reconstruction chain of the CBM experiment. In its current state, ANN4FLES includes architectures such as fully connected multi-layer perceptrons, convolutional neural networks, recurrent neural networks, graph neural networks, and more. Furthermore, it provides a graphical user interface for training, testing and fine-tuning of various hyperparameters without the need for additional programming, such that pre-trained neural networks can easily be exported for use in CBM’s FLES package.
In the present work, a Multi-Layer Perceptron (MLP) [26,27] is used to solve the classification problem between the two possible mother particles in a competition, based on their reconstructed properties. The ANN4FLES standalone package is used to create the neural networks for training and testing. The chosen pre-trained network is then included in the KF Particle Finder package and used in the particle competition to classify the competitors.
Since previous work [15] has already shown that a neural network can perform comparably to the existing competition of the KF Particle Finder, ANN4FLES is used for the first time within the KF Particle Finder package to implement a more complex neural network for this classification task (see Figure 1). ANN4FLES was tested on multiple well-known datasets and offered results comparable to other neural network packages, giving confidence that the mathematics implemented in ANN4FLES is correct [24]. For comparison with a reference network implemented in PyTorch [28], the weights are initialized with the same uniform distribution method [29]. ADAM [30], with a learning rate of α = 0.003 and default β₁, β₂, was chosen as the weight optimizer, whereas the selected activation functions are Leaky-ReLU for all hidden neurons and Softmax for the output layer. The loss was calculated using binary cross entropy, and the training and testing phase was repeated over 100 epochs with a batch size of 50.
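A minimal PyTorch sketch of such a reference network, following the topology in Figure 1 and the hyperparameters listed above (three hidden layers of 8 neurons, Leaky-ReLU, a two-neuron Softmax output, Adam with α = 0.003 and default betas, cross entropy, 100 epochs, batch size 50). The input dimension and the data loader are assumptions for illustration; nn.CrossEntropyLoss is used because it combines the Softmax output with the cross-entropy loss in PyTorch.

```python
import torch
import torch.nn as nn

# Reference MLP following Figure 1: three hidden layers with 8 neurons each,
# Leaky-ReLU activations, and a two-neuron output. The input size of 2
# (e.g. reconstructed mass and PDG mass of the hypothesis) is an assumption.
N_FEATURES = 2

model = nn.Sequential(
    nn.Linear(N_FEATURES, 8), nn.LeakyReLU(),
    nn.Linear(8, 8), nn.LeakyReLU(),
    nn.Linear(8, 8), nn.LeakyReLU(),
    nn.Linear(8, 2),  # Softmax is applied inside CrossEntropyLoss
)

# Adam with learning rate 0.003 and default betas, as described above.
optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
loss_fn = nn.CrossEntropyLoss()

def train(loader, epochs=100):
    # 'loader' is assumed to yield (features, label) batches of size 50,
    # built from the 80% training split of the 12,000-event dataset
    for epoch in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
```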
To find a well-performing model and settings, different learning rates in the range of 0.001 to 0.005 were tested. The final results were obtained with a network trained with a learning rate of 0.003; lower learning rates performed almost equally well, whereas larger learning rates tended to perform slightly worse. It is assumed that a larger learning rate does not allow the optimizer to descend as deeply into a minimum as lower rates do, since it may slightly overshoot the minimum.
Besides the learning rate, different layer sizes and network depths were tested. Here, the main focus was on finding a balance between network size and results. On the one hand, a more complex structure could lead to an even better performance. However, the raw classification performance is already good, so much deeper networks have not been tested yet. A deeper neural network increases the number of parameters and calculations, slowing down the classification process, and therefore has to be balanced against the fast algorithms required for real-time event reconstruction in CBM. On the other hand, simpler architectures than the chosen one seem to perform worse, which indicates that a less complex model is not capable of learning the patterns.
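As a rough illustration of why this topology keeps the classification fast, one can count its parameters; assuming the two-feature input of the sketch above, the topology of Figure 1 amounts to only a few hundred weights and biases.

```python
# Rough parameter count for the MLP of Figure 1 (input size 2 is assumed).
layer_sizes = [2, 8, 8, 8, 2]
params = sum(n_in * n_out + n_out          # weights + biases per layer
             for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
print(params)  # -> 186
```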

3. Results

Particle Competition Based on Mass and PDG Mass

The MLP implemented using ANN4FLES provides a raw classification accuracy of up to 98.6% for the testing set (see Figure 2). An almost identically constructed neural network was implemented in PyTorch to ensure valid results. The PyTorch architecture provided equally high accuracy values and again confirmed the classification performance of ANN4FLES.
An error of only 1.4% on the testing set suggests that the more complex ANN4FLES architecture is better suited to the task, since the error rate of the previous work was reported as more than 10%. There are several possible reasons for the better results, such as the topology, a different learning rate, or the loss minimization algorithm, which was Broyden–Fletcher–Goldfarb–Shanno (BFGS) [15,31,32] in the previous work. Furthermore, although the event generators should perform equally, further investigation is suggested, as the previous work used the PHSD model [33] to generate the data for neural network training, testing and evaluation, whereas in the present work, UrQMD is the underlying model used to generate the event data.
In Figure 3, the total mass spectra histograms for K s and Λ are shown. The results using the KF Particle Finder without competition (black) and with the existing method (green) are visualized for comparison. The ANN4FLES approach is colored red, and one can see that it appears to reduce the number of entries over the whole range. In regions far from the peak, this reduction is most likely rejected background. Nevertheless, there is also a reduction in entries in the peak region for K s , where signal rejection may be the reason. In general, the approach reduces the number of particles finally classified as K s or Λ , respectively.
Investigating the signal mass histograms in Figure 4 shows that, in both cases, a slight reduction in signal is visible compared to running the KF Particle Finder without competition. For K s , the ANN4FLES approach rejected slightly more signal than the existing method, whereas for Λ , the results are reversed: the existing method rejects more signal particles than ANN4FLES. Thus, one can assume that the largest part of the rejected particles in Figure 3 corresponds to correctly rejected background.
In Figure 5, the background mass distributions are shown. Based on the histogram for Λ , it is difficult to see which competition approach is better. Both competitions reduce the background significantly, but around the peak, the existing method seems to perform better, since there is a clearly visible peak for ANN4FLES at Λ ’s PDG mass of 1.116 GeV/c², whereas over the whole range, ANN4FLES seems to reduce the background slightly more. Moreover, there is a large peak of the existing method at m < 1.11 GeV/c², indicating that less background is rejected by the existing method than by the neural-network-based approach. For Λ , this histogram therefore indicates a tie between ANN4FLES and the existing method. For K s , however, there is no peak at the PDG mass of approximately 0.498 GeV/c² for ANN4FLES. The neural network appears to perform well in rejecting K s background around the known mass peak, which, overall, improves the physics analysis for K s . Furthermore, a competition based on the mass is quite difficult if the reconstructed mass of the background-producing particle lies within the range expected for the investigated particle. These results show that the network is not only classifying by the distance to the mass distribution peak, as is done in the existing method.
In the ghost histograms (see Figure 6), ANN4FLES appears to be ahead in ghost rejection for both Λ and K s . Although, similar to the existing method, ANN4FLES has a peak around the PDG mass bins, the existing method has more ghosts over the whole range and within the peak area. Thus, the neural network rejects more ghost particles than the default approach and therefore helps to further reduce the number of mistakenly created mother particle candidates.
In general, the presented plots indicate a better background rejection performance of ANN4FLES. This can be confirmed by the S/B ratio and the significance. In Figure 7, the invariant mass spectra of K s → π⁺ π⁻ and Λ → p π⁻ for the existing competition are shown. With significances of 149 and 213 for K s and Λ , respectively, a clear signal is present in both cases using the existing competition of the KF Particle Finder. Additionally, the S/B ratio of 3.58 shows that the data visualized in the K s plot contain almost four times more signal than background, whereas for Λ there is over eight times more signal than background.
The following results were achieved with the ANN4FLES approach (Figure 8). Comparing the S/B ratio for K s , one can see that the ANN approach improved the value by around 16%. For Λ , however, the S/B ratio decreased by about 2%. Considering both particles, ANN4FLES has successfully reduced the background even further. However, for K s , the significance also decreased by about 3%, whereas the significance for Λ increased by 1%. This indicates that even though the background was reduced on average, the background fluctuations increased on average in comparison to the existing method. Nevertheless, both significance values are high enough to consider these particles a clear signal, making the minor reduction negligible.

4. Conclusions

In summary, the ANN4FLES-based competition using the reconstructed mass and the PDG mass performs comparably to the existing method. In general, using a more complex topology, it reduces the background slightly better than the existing approach in the KF Particle Finder, even though the significance decreases slightly on average. In particular, the number of ghost particles was reduced over almost the whole range for both particles, and the background reduction around the PDG mass of K s was strong, even though a similar model might show different results, depending on the learned features. The most likely reason for the good background reduction is that the existing competition is based only on the distance of the reconstructed mass to the known PDG mass of a particle, whereas the neural network can also learn patterns between the reconstructed mass and the PDG mass that can be used to classify particles correctly.
The neural network approach currently only classifies between Λ and K s , whereas the existing method additionally cleans the background by, for example, suppressing γ-decays and applying other cleanup methods. In the presented results, the network was not able to reject a particle entirely, i.e., to classify it as neither K s nor Λ . Extending the model to allow particle rejection, or to classify between more particles that act as background for each other, might require an increase in model complexity, but could also help to reduce the background even further.
The ANN4FLES package itself will now be improved with respect to its runtime for the planned applications in the real-time reconstruction chain of the future CBM experiment. After these improvements, ANN4FLES will be integrated into the physics analysis module of the FLES package.

Author Contributions

Conceptualization, I.K.; Methodology, A.B. and R.L.; Software, A.B. and R.L.; Validation, A.B. and R.L.; Investigation, A.B. and R.L.; Resources, A.B.; Data curation, A.B.; Writing original draft, A.B., I.K. and R.L.; Visualization, R.L.; Supervision, I.K.; Project administration, I.K.; Funding acquisition, I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly supported by the Federal Ministry of Education and Research (grant number 01IS21092), Germany, and Helmholtz Research Academy Hesse for FAIR (project ID 2.1.4.2.5), Germany.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260.
  2. Tsoulos, I.G.; Gavrilis, D.; Glavas, E. Solving differential equations with constructed neural networks. Neurocomputing 2009, 72, 2385–2391.
  3. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
  4. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  5. Bourilkov, D. Machine and deep learning applications in particle physics. Int. J. Mod. Phys. 2019, 34, 1930019.
  6. Shlomi, J.; Battaglia, P.; Vlimant, J.R. Graph neural networks in particle physics. Mach. Learn. Sci. Technol. 2020, 2, 021001.
  7. Sturm, C.; Stöcker, H. The Facility for Antiproton and Ion Research FAIR. Phys. Part. Nucl. Lett. 2011, 8, 865–868.
  8. Friman, B.; Höhne, C.; Knoll, J.; Leupold, S.; Randrup, J.; Rapp, R.; Senger, P. (Eds.) The CBM Physics Book, 1st ed.; Lecture Notes in Physics; Springer: Berlin/Heidelberg, Germany, 2011.
  9. Ablyazimov, T.; Abuhoza, A.; Adak, R.; Adamczyk, M.; Agarwal, K.; Aggarwal, M.M.; Ahammed, Z.; Ahmad, F.; Ahmad, N.; Ahmad, S.; et al. Challenges in QCD matter physics – The scientific programme of the Compressed Baryonic Matter experiment at FAIR. Eur. Phys. J. A 2017, 53, 60.
  10. Friese, V. The CBM experiment at GSI/FAIR. Nucl. Phys. A 2006, 774, 377–386.
  11. Friese, V. Simulation and reconstruction of free-streaming data in CBM. J. Phys. Conf. Ser. 2011, 331, 032008.
  12. Agarwal, K. The Compressed Baryonic Matter (CBM) Experiment at FAIR – Physics, Status and Prospects. Phys. Scr. 2023, 98, 3.
  13. Akishina, V. Four-Dimensional Event Reconstruction in the CBM Experiment. Ph.D. Thesis, J. W. Goethe University, Frankfurt, Germany, 2016.
  14. Kisel, I.; Kulakov, I.; Zyzak, M. Standalone First Level Event Selection Package for the CBM Experiment. IEEE Trans. Nucl. Sci. 2013, 60, 3703–3708.
  15. Banerjee, A.; Kisel, I.; Zyzak, M. Artificial neural network for identification of short-lived particles in the CBM experiment. Int. J. Mod. Phys. A 2020, 35, 2043003.
  16. Rafelski, J.; Müller, B. Strangeness Production in the Quark-Gluon Plasma. Phys. Rev. Lett. 1982, 48, 1066–1069.
  17. Zyla, P.A.; Barnett, R.M.; Beringer, J.; Dahl, O.; Dwyer, D.A.; Groom, D.E.; Lin, C.J.; Lugovsky, K.S.; Pianori, E.; Robinson, D.J.; et al. Particle Data Group. Prog. Theor. Exp. Phys. 2020, 2020, 083C01.
  18. Amsler, C.; Doser, M.; Antonelli, M.; Asner, D.; Babu, K.S.; Baer, H.; Band, H.R.; Barnett, R.M.; Beringer, J.; Bergren, E.; et al. Particle Data Group. Phys. Lett. B 2008, 667, 1–6.
  19. Zyzak, M. Online Selection of Short-Lived Particles on Many-Core Computer Architectures in the CBM Experiment at FAIR. Ph.D. Thesis, J. W. Goethe University, Frankfurt, Germany, 2016.
  20. Kisel, P. KF Particle Finder Package: Missing Mass Method for Reconstruction of Strange Particles in CBM (FAIR) and STAR (BNL) Experiments. Ph.D. Thesis, Goethe University, Frankfurt, Germany, 2023.
  21. Bleicher, M.; Zabrodin, E.; Spieles, C.; Bass, S.A.; Ernst, C.; Soff, S.; Bravina, L.; Belkacem, M.; Weber, H.; Stöcker, H. Relativistic hadron-hadron collisions in the ultra-relativistic quantum molecular dynamics model. J. Phys. G Nucl. Part. Phys. 1999, 25, 1859.
  22. Agostinelli, S.; Allison, J.; Amako, K.A.; Apostolakis, J.; Araujo, H.; Arce, P.; Asai, M.; Axen, D.; Banerjee, S.; Barrand, G.; et al. Geant4—A simulation toolkit. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2003, 506, 250–303.
  23. Friese, V.; for the CBM Collaboration. The high-rate data challenge: Computing for the CBM experiment. J. Phys. Conf. Ser. 2017, 898, 112003.
  24. Senger, P.; Friese, V. CBM Progress Report 2022; Number CBM PR 2022; GSI: Darmstadt, Germany, 2022; p. 161.
  25. Höhne, C.; Rami, F.; Staszel, P. The Compressed Baryonic Matter Experiment at FAIR. Nucl. Phys. News 2006, 16, 19–23.
  26. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408.
  27. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning internal representations by error propagation. Parallel Distrib. Process. 1986, 1, 318–363.
  28. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32; Curran Associates, Inc.: Red Hook, NY, USA, 2019.
  29. torch.nn.Linear—PyTorch 1.9.0 Documentation. 2023. Available online: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html (accessed on 30 March 2023).
  30. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, CA, USA, 7–9 May 2015.
  31. Broyden, C. A new double-rank minimisation algorithm. Preliminary report. Am. Math. Soc. Not. 1969, 16, 670.
  32. Fletcher, R. A new approach to variable metric algorithms. Comput. J. 1970, 13, 317–322.
  33. Cassing, W.; Bratkovskaya, E.L. Parton transport and hadronization from the dynamical quasiparticle point of view. Phys. Rev. C 2008, 78, 034919.
Figure 1. Multi-Layer Perceptron (MLP) topology used to classify K s and Λ , using 3 hidden layers with 8 neurons each, hidden activation function Leaky-ReLU. Output layer consists of two neurons with Softmax activation and cross entropy loss.
Figure 2. ANN4FLES and PyTorch accuracy for training and testing, using reconstructed mass and PDG mass over 100 epochs with a peak performance of 98.6% in the testing set. Both networks, ANN4FLES and PyTorch, achieved high accuracy values on the testing set.
Figure 3. Histograms of mass spectra for K s (left) and Λ (right). Comparison of no competition (black), the existing mother particle competition (green) and the competition by ANN4FLES (red).
Figure 4. Histograms of signals for K s (left) and Λ (right) masses. Comparison of no competition (black), the existing mother particle competition (green) and ANN4FLES (red).
Figure 5. Histograms of background for K s (left) and Λ (right) masses. Comparison of no competition (black), the existing mother particle competition (green) and ANN4FLES (red).
Figure 6. Histograms of ghosts for K s (left) and Λ (right) masses. Comparison of no competition (black), the existing mother particle competition (green) and ANN4FLES (red).
Figure 7. Invariant mass distributions of K s → π⁺ π⁻ and Λ → p π⁻ for the existing competition, including signal-to-background ratio and significance.
Figure 8. Invariant mass distributions of K s → π⁺ π⁻ and Λ → p π⁻, including signal-to-background ratio and significance, for the ANN4FLES approach.
