Author Contributions
Conceptualization, M.W., A.S. and E.D.M.; methodology, M.W.; software, M.W.; writing—original draft preparation, M.W. and A.S.; writing—review and editing, M.W. and A.S.; resources, M.W.; visualization, M.W.; data curation, A.S. and E.D.M.; supervision, M.W.; funding acquisition, E.D.M.
Figure 1.
The LHC tunnel. The blue cryostat contains the superconducting main dipole magnets. The protection unit (yellow rack) is visible on the floor under the magnet. The photo was taken by A.S. in 2007.
Figure 2.
The contents of the field of two PM data files for one of the superconducting magnets (with an electrical current of 600 ). The voltage range of the ADC is from to 256 . Time 0 refers to the trigger (request) time stored in the field of the PM data files.
Figure 3.
The SM18 test facilities used for testing the MQXFS inner triplet quadrupole magnet, including the rack used for data acquisition and tests (photos provided by E.M.).
Figure 4.
Samples per bin for the PM dataset channel (). Note the logarithmic scale.
Figure 5.
Full (a) and zoomed-in (b) bin edges for the PM dataset channel (). Note that the adaptive quantization algorithm effectively yields only 10 bins, since some edge values occur multiple times.
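The effect described in the caption, quantile-based ("adaptive") bin edges collapsing onto each other when the signal is strongly peaked, can be reproduced with a short sketch. The function and the synthetic data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def adaptive_bin_edges(samples, n_bins):
    """Quantile-based ("adaptive") bin edges: each bin should hold
    roughly the same number of samples.  When the data are strongly
    peaked, several quantiles coincide and edge values repeat."""
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)
    return np.quantile(samples, quantiles)

# A heavily peaked signal: most samples sit at 0, a few excursions.
rng = np.random.default_rng(0)
samples = np.concatenate([np.zeros(900), rng.normal(0.0, 1.0, 100)])

edges = adaptive_bin_edges(samples, 16)
effective = len(np.unique(edges)) - 1  # distinct bins actually usable
print(f"requested bins: 16, effective bins: {effective}")
```

Because the interior quantiles all fall on the peak, far fewer than the requested 16 bins survive, mirroring the "effectively only 10 bins" observation in the figure.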
Figure 6.
High-level system architecture. © 2018 IEEE. Reprinted, with permission, from Wielgosz, M.; Skoczeń, A.; Wiatr, K. Looking for a Correct Solution of Anomaly Detection in the LHC Machine Protection System. 2018 International Conference on Signals and Electronic Systems (ICSES), 2018, pp. 257–262 [1].
Figure 7.
Design flow for hardware implementation. © 2018 IEEE. Reprinted, with permission, from Wielgosz, M.; Skoczeń, A.; Wiatr, K. Looking for a Correct Solution of Anomaly Detection in the LHC Machine Protection System. 2018 International Conference on Signals and Electronic Systems (ICSES), 2018, pp. 257–262 [1].
Figure 8.
Proposed system. © 2018 IEEE. Reprinted, with permission, from Wielgosz, M.; Skoczeń, A.; Wiatr, K. Looking for a Correct Solution of Anomaly Detection in the LHC Machine Protection System. 2018 International Conference on Signals and Electronic Systems (ICSES), 2018, pp. 257–262 [1].
Figure 9.
Example visualization of single-series results (, , ). The red line across all subplots marks the , and the gray spans indicate the anomalies found by the system.
Figure 10.
Example visualization of single-series results (, , ). The red line across all subplots marks the , and the gray spans indicate the anomalies found by the system.
Figure 11.
Grid use for various values and . ad—adaptive, ra—recursive_adaptive, ca—cumulative_amplitude.
Figure 12.
The score as a function of for several and values. The dashed line shows the performance of the Random baseline model for the same .
Figure 13.
The ROC curve for the adaptive algorithm (, , GRU (two layers, 64 and 32 cells) + Dense).
Figure 14.
The ROC curve for the recursive_adaptive algorithm (, , GRU (two layers, 64 and 32 cells) + Dense).
Figure 15.
The ROC curve for the cumulative_amplitude algorithm (, , GRU (two layers, 64 and 32 cells) + Dense).
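For reference, an ROC curve such as those in Figures 13–15 is built by sweeping a decision threshold over the model's anomaly scores and recording the true/false positive rates. The sketch below uses toy scores and labels (illustrative, not the paper's data):

```python
import numpy as np

def roc_points(scores, labels):
    """TPR and FPR for thresholds swept from the highest score down."""
    order = np.argsort(-scores)          # sort by descending score
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return fpr, tpr

# Toy anomaly scores and ground-truth labels (1 = anomaly).
scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])

fpr, tpr = roc_points(scores, labels)
# Trapezoidal area under the curve.
auc = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2)
print(round(auc, 3))  # 0.889
```

The area under this curve (AUC) summarizes the curve in one number; 0.5 is chance level and 1.0 is a perfect detector.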
Table 1.
The parameters of the NN built with GRU cells for three different algorithms (two layers, 64 and 32 cells + Dense, ).
| Algorithm | | Accuracy | Score | Score |
|---|---|---|---|---|
| adaptive | 16 | 0.8462 | 0.6722 | 0.6167 |
| | 32 | 0.8506 | 0.7031 | 0.6687 |
| | 64 | 0.8611 | 0.7376 | 0.7124 |
| | 128 | 0.8838 | 0.7973 | 0.7835 |
| | 256 | 0.9162 | 0.8743 | 0.8796 |
| | 512 | 0.9543 | 0.9474 | 0.9522 |
| recursive_adaptive | 16 | 0.8507 | 0.6920 | 0.6481 |
| | 32 | 0.8543 | 0.7022 | 0.6561 |
| | 64 | 0.8652 | 0.7350 | 0.6928 |
| | 128 | 0.8868 | 0.8040 | 0.7939 |
| | 256 | 0.9172 | 0.8746 | 0.8749 |
| | 512 | 0.9571 | 0.9506 | 0.9560 |
| cumulative_amplitude | 16 | 0.8436 | 0.6609 | 0.5999 |
| | 32 | 0.8473 | 0.6664 | 0.5968 |
| | 64 | 0.8562 | 0.7115 | 0.6620 |
| | 128 | 0.8853 | 0.7927 | 0.7622 |
| | 256 | 0.9231 | 0.8830 | 0.8805 |
| | 512 | 0.9669 | 0.9625 | 0.9779 |
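The size of a network like the one named in the table caption can be estimated from the standard GRU parameterization (three gates, each with an input kernel, a recurrent kernel, and a bias). The layer sizes below (one input channel, 256 output bins) are assumptions for illustration, not values taken from the paper, and framework variants such as CuDNN-style double biases would change the count slightly:

```python
def gru_params(input_dim, units):
    # Standard GRU: update, reset and candidate gates, each with
    # an input kernel, a recurrent kernel and a bias vector.
    return 3 * (input_dim * units + units * units + units)

def dense_params(input_dim, units):
    # Fully connected layer: weight matrix plus bias.
    return input_dim * units + units

# Hypothetical sizes: 1 input channel, 256 output bins.
n_channels, n_bins = 1, 256
total = (gru_params(n_channels, 64)   # first GRU layer, 64 cells
         + gru_params(64, 32)         # second GRU layer, 32 cells
         + dense_params(32, n_bins))  # Dense output layer
print(total)  # 30432
```

Such a count (tens of thousands of coefficients) is what makes the bit-width reduction studied later in the paper attractive for hardware implementation.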
Table 2.
Testing accuracy (20% of the dataset). All models were run with and , using a single input channel (); the NNs were trained for 7 epochs. The best result is marked in bold.
| Model | Adaptive | Recursive_Adaptive | Cumulative_Amplitude |
|---|---|---|---|
| Random (stratified) | 0.6334 | 0.6334 | 0.6334 |
| Elliptic Envelope | 0.6700 | 0.7775 | 0.6700 |
| Isolation Forest | 0.7947 | 0.7596 | 0.8094 |
| OC-SVM (RBF kernel) | 0.3300 | 0.8232 | 0.3300 |
| OC-SVM (linear kernel) | 0.2959 | 0.7881 | 0.2528 |
| GRU (two layers, 64 and 32 cells) | 0.8928 | **0.9005** | 0.8842 |
| LSTM (two layers, 64 and 32 cells) | 0.8271 | 0.8552 | 0.7402 |
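A note on the Random (stratified) baseline in the table: a stratified dummy classifier predicts each class with its empirical training frequency, so on a two-class problem its expected accuracy is p² + (1 − p)². The prior used below is a hypothetical value chosen so that the formula lands near the reported 0.6334; the actual class split is not stated here:

```python
def stratified_baseline_accuracy(p_anomaly):
    """Expected accuracy of a classifier that guesses each class
    with its empirical frequency (two-class case)."""
    p = p_anomaly
    return p * p + (1 - p) * (1 - p)

# Hypothetical prior for illustration: a roughly 76/24 split
# would reproduce a baseline accuracy near the reported 0.6334.
acc = stratified_baseline_accuracy(0.2417)
print(round(acc, 4))  # 0.6334
```

This is why the stratified baseline sits well above 0.5 on an imbalanced dataset, and why the learned models must be compared against it rather than against chance.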
Table 3.
Testing accuracy (20% of the dataset). Models were run with and , using four input channels (, , , ); the NNs were trained for 7 epochs. The best result is marked in bold.
| Model | Adaptive | Recursive_Adaptive | Cumulative_Amplitude |
|---|---|---|---|
| GRU (two layers, 64 and 32 cells) | 0.9235 | **0.9300** | 0.8842 |
| LSTM (two layers, 64 and 32 cells) | 0.9194 | 0.9092 | 0.9023 |
Table 4.
Coefficient quantization results for the GRU (two layers, 64 and 32 cells) + Dense model, trained on four input channels. Accuracy as a function of bit width.
| Bits | Method | Adaptive | Recursive_Adaptive | Cumulative_Amplitude |
|---|---|---|---|---|
| Original model | | 0.9235 | 0.9300 | 0.8842 |
| 10 | linear | 0.9236 | 0.9287 | 0.8841 |
| | minmax | 0.9233 | 0.9300 | 0.8841 |
| | log_minmax | 0.9235 | 0.9298 | 0.8842 |
| | tanh | 0.9232 | 0.9283 | 0.9232 |
| 9 | linear | 0.9236 | 0.9279 | 0.8838 |
| | minmax | 0.9237 | 0.9295 | 0.8842 |
| | log_minmax | 0.9231 | 0.9293 | 0.8843 |
| | tanh | 0.9219 | 0.9260 | 0.8842 |
| 8 | linear | 0.9206 | 0.9257 | 0.8830 |
| | minmax | 0.9238 | 0.9311 | 0.8838 |
| | log_minmax | 0.9207 | 0.9283 | 0.8844 |
| | tanh | 0.9161 | 0.9143 | 0.8836 |
| 7 | linear | 0.9177 | 0.3989 | 0.8850 |
| | minmax | 0.9194 | 0.9250 | 0.8841 |
| | log_minmax | 0.9218 | 0.9236 | 0.8833 |
| | tanh | 0.9131 | 0.9033 | 0.8851 |
| 6 | linear | 0.8952 | 0.9008 | 0.8871 |
| | minmax | 0.9144 | 0.8839 | 0.8842 |
| | log_minmax | 0.9111 | 0.9076 | 0.8844 |
| | tanh | 0.8702 | 0.8782 | 0.8788 |
| 5 | linear | 0.3722 | 0.8442 | 0.8802 |
| | minmax | 0.9031 | 0.9058 | 0.8810 |
| | log_minmax | 0.3948 | 0.8878 | 0.8812 |
| | tanh | 0.8247 | 0.3306 | 0.8670 |
| 4 | linear | 0.8500 | 0.2745 | 0.8587 |
| | minmax | 0.8678 | 0.8702 | 0.8775 |
| | log_minmax | 0.8649 | 0.3848 | 0.8734 |
| | tanh | 0.7491 | 0.8464 | 0.3017 |
| 3 | linear | 0.7928 | 0.8135 | 0.8190 |
| | minmax | 0.3391 | 0.7900 | 0.8530 |
| | log_minmax | 0.7664 | 0.8023 | 0.8564 |
| | tanh | 0.6922 | 0.2833 | 0.7985 |
| 2 | linear | 0.3006 | 0.6700 | 0.7065 |
| | minmax | 0.7371 | 0.3391 | 0.3466 |
| | log_minmax | 0.7908 | 0.7369 | 0.3110 |
| | tanh | 0.7216 | 0.7549 | 0.2309 |
| 1 | linear | 0.6700 | 0.3300 | 0.3300 |
| | minmax | 0.6706 | 0.7003 | 0.6717 |
| | log_minmax | 0.7171 | 0.7459 | 0.2121 |
| | tanh | 0.7171 | 0.7459 | 0.2121 |
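As one plausible reading of the "minmax" method in the table, a uniform quantizer spanning the observed weight range could look like the sketch below. This is an illustration under that assumption, not the paper's exact scheme:

```python
import numpy as np

def minmax_quantize(w, bits):
    """Uniform ("minmax") quantization: snap each weight onto one of
    2**bits evenly spaced levels spanning [w.min(), w.max()], then
    map back to the original range (quantize-dequantize)."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    return np.round((w - lo) / scale) * scale + lo

# Synthetic weight tensor standing in for trained coefficients.
rng = np.random.default_rng(42)
w = rng.normal(0.0, 0.1, 1000)

for bits in (8, 4, 2):
    err = np.abs(minmax_quantize(w, bits) - w).max()
    print(bits, err)
```

The maximum per-weight error grows as the bit width shrinks, which is consistent with the accuracy in the table staying nearly flat down to about 7–8 bits and then degrading sharply.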