Firmware and Software Implementation Status of the ICBLM and nBLM Systems for the ESS Facility

Wojciech Jałmużna, Grzegorz Jabłoński, Rafał Kiełbik, Wojciech Cichalewski, Fabio Dos Santos Alves, Irena Dolenc Kittelmann, Kaj Rosengren, Clement Derrez, Viatcheslav Grishin, Thomas Shea et al.

1 Department of Microelectronics and Computer Science, Lodz University of Technology, 93-005 Lodz, Poland
2 European Spallation Source ERIC, 224 84 Lund, Sweden
3 CEA Paris-Saclay, 91190 Saclay, France
* Author to whom correspondence should be addressed.
Electronics 2023, 12(20), 4308; https://doi.org/10.3390/electronics12204308
Submission received: 27 September 2023 / Revised: 13 October 2023 / Accepted: 16 October 2023 / Published: 18 October 2023

Abstract

The European Spallation Source (ESS) is a neutron source powered by a 5 MW, 2 GeV superconducting proton linac, currently under construction in Lund. Due to the high beam current, just a few microseconds of operation with a misaligned beam can melt the accelerating cavity wall. Therefore, a Beam Loss Monitor (BLM) system is necessary to protect hardware and personnel from the consequences of beam loss. The BLM's main function is to inhibit beam production when excessively high beam losses are detected. Besides this protection functionality, the system is anticipated to provide the means to monitor beam losses during all modes of operation, with the objective of preventing excessive machine activation. This paper concentrates on the implementation of the ionization chamber Beam Loss Monitor (ICBLM) and Neutron Sensitive Beam Loss Monitor (nBLM) systems. The required microsecond reaction time has been achieved through the application of an FPGA. A single detector module was installed and successfully tested at LINAC4 at CERN, in a section where conditions are similar to those expected in the ESS linac. A summary of these tests is also included.

1. Introduction

The European Spallation Source is the world's most powerful neutron source, currently being built in Lund, Sweden. By design, this facility will produce a neutron beam from an accelerated proton beam with an average power of 5 MW, aimed at a rotating tungsten target [1]. The energy delivered to the proton beam is up to 2 GeV, and the acceleration takes place in the linear accelerator (linac) section of the infrastructure [2]. The linear accelerator consists of both normal conducting (NC) and superconducting (SC) cavity sections, as depicted in Figure 1.
The ESS linac construction phase is about to be completed, with the Beam On Target (BOT) milestone scheduled for May 2025. Nonetheless, some subsystems are in their final state and have already moved to the initial operation phase.
Since the accelerated beam current can reach up to 62.5 mA, even a small deviation of the particles from the desired trajectory can have serious consequences. Just a few microseconds of operation with a misaligned beam can melt the cavity wall [3]. Therefore, it is extremely important to switch off the beam when beam losses occur.
To ensure a high availability level of the machine, as well as to minimize the risk of hardware failure and personnel injury caused by excessive beam losses, different types of measures have been taken. A family of Beam Loss Monitor (BLM) devices is foreseen for installation in the vicinity of the linac. These devices are sensitive to the secondary particles that are generated when a lost beam activates nearby accelerator equipment. They are especially useful for protecting the machine against excessive beam current leakage, but also serve the purposes of linac beam tuning.
Two major types of BLM devices are currently being evaluated and installed at the ESS [3,4]: the ICBLM and the nBLM. The former, located mainly in the superconducting part of the linac, uses ionization chambers as detectors [5,6]. The foreseen quantity of 266 units should provide optimal coverage and an adequate level of protection for a facility of this size. To complement the ICBLM, the low-energy part of the linac will be equipped with the aforementioned nBLM detectors [7]. The nBLM installation consists of 82 neutron detectors, located mainly in the normal conducting section.
The focus of the work presented in this paper was the set-up and implementation of the firmware and software layers for both the ICBLM and nBLM systems, according to the BLM Systems Specification.
The first stage was to identify the common elements of both systems: both must perform continuous data acquisition from all measurement channels to the available memory banks, provide consistent time-stamps and interface with the protection systems.
The data streams generated by the two systems differ in the required memory bandwidth; the nBLM is the more demanding one, as it generates 3 GB/s of raw data (16-bit samples at 250 MHz on 6 channels), so the DAQ part must fulfill this requirement.
The further steps covered the implementation of system-specific elements, such as the ADC front-ends (6-channel 250 MHz data acquisition for the nBLM and 8-channel 1 MHz data acquisition for the ICBLM) and the processing algorithms (complex pulse discrimination for the nBLM and simple threshold analysis for the ICBLM), with a requirement of total loss detection latency below 1 µs. Such a short reaction time can only be achieved by data processing implemented in an FPGA.
The next sections of the paper present the hardware platform used for the nBLM and ICBLM systems, together with additional implementation details. Finally, a preliminary experimental evaluation of the resulting prototype is given.
Figure 1. The ESS linac layout. Orange indicates the ion source (which produces the protons), red represents the NC accelerator section and blue the SC one. The accelerated proton beam is converted to neutrons at the target [8].

2. The Hardware Platform

The form factor for the majority of real-time critical systems used at the ESS is Micro Telecommunications Computing Architecture (MicroTCA) [9]. This is a relatively new standard, recently applied at several high-energy physics facilities such as XFEL [10], FLASH and GSI in Germany [11], the ESS in Sweden [12,13,14] and J-PARC in Japan. Its specification addresses the most common requirements of such systems, which are:
  • High reliability;
  • Modularity;
  • Re-usability;
  • Well-defined management.
The MicroTCA standard provides redundancy of the most critical parts, such as cooling units and power modules. It is highly modular, with functional units implemented as separate hot-pluggable Advanced Mezzanine Cards (AMCs), which use well-known interfaces such as PCIe and Ethernet. This makes it easy to reuse hardware elements, as well as firmware and software implementations, in different systems.
The block diagram of the MicroTCA architecture used for the described application is presented in Figure 2. The elements of the system can be classified into two groups:
  • Infrastructure, providing the environment and support for the execution units:
      ◦ Chassis: the mechanical and electrical holder for the elements of the system. The systems at the ESS use 6-slot (nBLM) and 12-slot (ICBLM) MicroTCA chassis, depending on the application.
      ◦ Power modules: provide the main supply voltages for the modules. Two supplies are distributed in the chassis: 12 V payload power for the functional parts and 3.3 V management power used by the management controllers on the modules. The power distribution to each slot is implemented using separate power supply channels.
      ◦ Cooling units: provide cooling power in the chassis. For optimal cooling and redundancy, the chassis can support several fan trays.
      ◦ Back-plane: provides interconnects amongst the modules in the chassis. The MicroTCA back-plane provides a set of fat pipes (fast serial interfaces) used to implement Ethernet and PCIe communication, as well as a dedicated bus-like distribution of trigger signals and dedicated lines for clocks.
      ◦ Management unit, the MicroTCA Carrier Hub (MCH): a complex device implementing a dedicated controller to manage and monitor the state of the individual modules. In addition to the management functions, it is equipped with an Ethernet switch, a PCIe switch and a clock generation device to distribute these protocols to the modules in the chassis. If needed, the MCH can be made redundant using the second slot available in the chassis.
  • Execution of application-specific functions:
      ◦ Processing unit (CPU): a standard CPU, which implements the PCIe root complex and provides the computing power to run the dedicated EPICS applications.
      ◦ Event receiver (EVR): interacts with the ESS timing system and distributes events and clocks to all other modules in the chassis.
      ◦ FPGA-based FMC carriers (IFC1410): extended by dedicated ADC FMC modules, which execute all time-critical functions (BLM algorithms, fast DAQ).

2.1. IFC1410 FMC Carrier Board

The FMC carrier selected for the application is the IFC1410, a Xilinx Kintex UltraScale-based unit with 1 GB of dual-channel DDR3 memory and an additional Freescale/NXP QorIQ T2081 processor system for supporting applications. It is equipped with HPC FMC slots compatible with a wide range of data acquisition FMC boards.
The IFC1410 FMC carrier is delivered with a dedicated firmware framework called TOSCA. It provides easy-to-use firmware interfaces for all available on-board peripherals and FMCs, together with a dedicated Linux driver and command-line applications. The user of the board can focus on the custom implementation of the required functionality without detailed knowledge of the exact interfaces. The general block diagram of the framework is presented in Figure 3. The blocks indicated in green are dedicated for user implementations and the white ones are part of the framework itself. The blocks implemented in the scope of this work are marked in yellow.

2.2. nBLM Acquisition Module

Since the IFC1410 is considered a universal hardware platform, used at the ESS for different applications, the ADC acquisition module was selected to fulfill a wide range of requirements. The typical output signal of the detectors used for the nBLM systems is a negative pulse with a rise time in the range of 30–50 ns and a duration of up to 200 ns; the ADC3110 [15] selected during this process fulfills the requirements of the nBLM systems. The card is equipped with eight 250 MHz 16-bit analog-to-digital converters with an analog front-end bandwidth of around 500 MHz. In addition, it implements high-precision clock distribution with an ultra-low-noise oscillator, as well as high-performance power distribution.
The block diagram of the board is presented in Figure 4. As part of this work, a dedicated ADC interface was implemented in VHDL; it is capable of interfacing with high-speed ADCs at data rates of up to 1 Gbps per differential pair. The interface is based on ISERDES FPGA primitives with additional clock and delay calibration modules. The data are provided to the downstream FPGA blocks over a double-word data path, which halves the clocking requirements. In addition, several diagnostic and control blocks have been implemented, including PLL configuration, clock frequency monitors and pattern generators.

2.3. ICBLM Acquisition Module

The ionization chambers used as detectors for the ICBLM systems have completely different requirements. The system operates in current mode: the current produced by the ionization chambers scales with the flux of the ionizing radiation traversing the detector's active area and has a very high dynamic range, from below 1 nA to a few mA. The FMC-PICO-1M4-C1 [16], a 1 MSPS 20-bit ADC card compatible with the IFC1410 platform, was selected for the application as one of the few cards on the market that fulfills the dynamic range requirements. The block diagram of the board is presented in Figure 5. The ADC is equipped with an SPI serial interface with strict timing requirements to ensure the proper quality of the sampled data. The exact tuning of the sampling point, SPI transactions and wait times was conducted using a state machine with 3 ns resolution.

3. Data-Processing Algorithms

The BLM system's main function is to detect improper beam behavior that could damage or unnecessarily activate the linac equipment. When it detects such conditions, it stops beam operation by deasserting the BEAM_PERMIT signal, which is continuously sent to the Beam Interlock System (BIS), the core of the Machine Protection System (MPS). The system also acquires information about beam losses for monitoring purposes.

3.1. nBLM-Specific Algorithms

Each nBLM detector's analog signal is digitized and continuously processed to obtain the number of neutrons detected in each Monitoring Time Window (MTW), denoted N_n, which serves as an indicator of beam loss. The MTW is 1 µs long, chosen to meet both the system's 5 µs response time requirement for machine protection purposes and the time characteristics of the signal pulses generated by incoming neutrons [17].
The raw data have a fixed, detector-dependent baseline value (pedestal), which is subtracted at the start of the data-processing chain. The next step of the analysis is the Neutron Detection Algorithm (NDA), explained below. The NDA primarily produces a value of N_n every microsecond, which is then used as the input to the protection function algorithm, where BEAM_PERMIT is calculated. The NDA outputs are also used to obtain statistics (called "periodic data") that are usually gathered over each machine cycle (14 Hz) for monitoring purposes (for example, N_n averaged over the machine cycle). A schematic form of the protection decision is given below.
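In its simplest form, the protection decision described above reduces to a per-window threshold test. The following is a schematic formulation only; the actual protection function defined in the specification may combine several windows or thresholds:

$$
\mathrm{BEAM\_PERMIT} =
\begin{cases}
1, & N_n \le N_{\mathrm{thr}},\\
0, & N_n > N_{\mathrm{thr}},
\end{cases}
\qquad
N_n = \sum_{e \,\in\, \mathrm{MTW}} n(e),
$$

where $n(e)$ is 1 for a single neutron event and a Q_TOT-based estimate for a pileup event, and $N_{\mathrm{thr}}$ is the operator-defined loss threshold.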
The signal produced by the detector has the shape of a series of negative pulses (see Figure 6), generated directly or indirectly by incoming particles interacting with the detector's sensitive volume. If the signal drops below the predetermined event detection threshold during a pulse, this is interpreted as the start of an interesting event (Figure 7). The following parameters are obtained from interesting events for further analysis (a behavioural sketch follows the list):
  • Time Over Threshold (TOT): the time the signal remains below the threshold during the pulse.
  • Q_TOT: the area of the pulse, proportional to the pulse charge.
  • peakValue: the minimum value reached over the TOT window, representing the pulse amplitude.
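A minimal C++ sketch of how these three quantities could be extracted from a pedestal-corrected sample stream is shown below. It is a behavioural illustration, not the FPGA code: the function and field names are ours, and the definition of Q_TOT as the sum of below-threshold samples is an assumption (the firmware may integrate relative to the baseline instead).

```cpp
#include <cstdint>
#include <vector>

// Illustrative pulse descriptor; field names mirror the paper's terminology.
struct PulseFeatures {
    uint32_t tot;        // Time Over Threshold, in samples
    int64_t  q_tot;      // pulse area, proportional to the pulse charge
    int16_t  peakValue;  // minimum sample in the TOT window (pulse amplitude)
};

// Extracts the features of the first "interesting event" in a trace of
// pedestal-corrected samples. Pulses are negative, so an event starts when a
// sample drops below 'threshold' and ends when the signal rises above it.
bool extractFirstEvent(const std::vector<int16_t>& samples,
                       int16_t threshold, PulseFeatures& out) {
    bool inPulse = false;
    out = {0, 0, 0};
    for (int16_t s : samples) {
        if (!inPulse && s < threshold) {          // falling edge: event start
            inPulse = true;
            out.tot = 1;
            out.q_tot = s;
            out.peakValue = s;
        } else if (inPulse && s < threshold) {    // still inside the pulse
            ++out.tot;
            out.q_tot += s;
            if (s < out.peakValue) out.peakValue = s;
        } else if (inPulse) {
            return true;                          // rising edge: event done
        }
    }
    return false;  // no complete event found in this trace
}
```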
Figure 6. Signal recorded with the nBLM-F detector during prototype tests at LINAC4 at CERN. Two consecutive pulses, each due to a γ particle, are visible. Pedestal correction was applied offline.
Figure 7. Definition of the signal pulse characteristics extracted for each interesting event.
An interesting event can be either a neutron or a non-neutron event. Neutron events occur when either one neutron creates one distinct pulse, or many neutrons create several pulses too close together to tell apart (pileup; see Figure 8). Non-neutron events are caused by noise spikes or background particles (mostly γ- and X-rays), and their rate depends on the event detection threshold and on the detector environment under the given operating conditions.
The NDA's main task is to distinguish single neutron, pileup and non-neutron events. An event is considered a neutron event if its TOT is above a certain limit and its peakValue drops below a fixed threshold. A neutron event is recognized as pileup if its TOT is large enough; otherwise, it is a single neutron.
When a single neutron event occurs, N_n is incremented. For a pileup event, Q_TOT is used to estimate the number of neutrons.
The NDA is slightly more complex than the description above. To make it less sensitive to noise, the event detection threshold has a hysteresis used to identify interesting events (see Figure 7). The data processing is further complicated by the need to report neutron counts per time unit without excessive delay, as one cannot wait indefinitely for a pileup to end. Therefore, some events are cut off at the MTW edge and an extra event (for the pulse tail) appears in the following time window. The decision logic is sketched below.
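The classification rules can be condensed into a short sketch. All names and the "at least two neutrons per pileup" rule are our assumptions; the thresholds are detector-dependent operator settings whose values are not given here:

```cpp
#include <algorithm>
#include <cstdint>

enum class EventKind { NonNeutron, SingleNeutron, Pileup };

// Decision rule following Section 3.1: a neutron event needs a long enough
// TOT and a peak below the amplitude threshold (pulses are negative); a
// neutron event with a very long TOT is treated as pileup.
EventKind classify(uint32_t tot, int16_t peakValue, uint32_t totMin,
                   uint32_t totPileup, int16_t peakThreshold) {
    if (tot < totMin || peakValue >= peakThreshold)
        return EventKind::NonNeutron;             // noise spike or gamma/X-ray
    return (tot > totPileup) ? EventKind::Pileup : EventKind::SingleNeutron;
}

// Contribution of one event to N_n. Areas are passed as positive magnitudes;
// qSingle is the nominal area of one neutron pulse (assumed calibration).
uint32_t neutronCount(EventKind kind, int64_t qTot, int64_t qSingle) {
    switch (kind) {
    case EventKind::SingleNeutron:
        return 1;
    case EventKind::Pileup:
        // Q_TOT-based estimate; a pileup contains at least two neutrons.
        return static_cast<uint32_t>(std::max<int64_t>(2, qTot / qSingle));
    default:
        return 0;
    }
}
```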
The block diagram of the data-processing pipeline executed by the FPGA for each channel is presented in Figure 9. As a result of the processing, the BEAM_PERMIT signal is asserted to indicate that the level of beam loss detected by the detector is below the threshold. The data-processing path consists of seven blocks:
  • Preprocessor: subtracts pedestal from data samples and compares the samples with the threshold.
  • Event detector: classifies events as “interesting events” and counts ADC saturation events.
  • Event aligner: delays one event, allowing simultaneous presentation of two subsequent events to the next block.
  • Neutron counter: computes the number of neutrons causing “interesting events”, taking into account two events at the MTW boundary if necessary.
  • Neutron summarizer: produces the summary of neutrons counted within each MTW.
  • Interlock logic: generates the BEAM_PERMIT signal.
  • MPS interface: transmits BEAM_PERMIT signal to the Machine Protection System.
Figure 9. Data flow in a single channel.
As the assertion of the BEAM_PERMIT interlock signal is critical for machine safety, no stalls or delays are allowed in this processing path.
The supporting computation blocks marked on the diagram produce additional statistics called "periodic data", which are presented on the operator panel for visualization purposes. These data have a lower priority, so occasional intermittent stalls leading to data loss (resulting, e.g., from a lack of available DDR3 memory bandwidth) are acceptable.
All operational parameters of the algorithms can be set via the PCI Express interface using the TOSCA framework (TCSR register interface). All critical data generated during processing (the data used for determining the state of BEAM_PERMIT) are stored in two banks of DDR3 memory on the IFC1410 board, logically grouped into independent data streams (channels). They are available to the software layers via TOSCA-implemented DMA interfaces over PCI Express.

3.2. ICBLM-Specific Algorithms

3.2.1. Decimation and ROI

Due to the pulsed operation of the ESS facility, a dedicated acquisition module was implemented, which allows a region of interest (usually around the pulse trigger) to be selected for the data acquisition subsystem. The module can additionally decimate the data, returning both the average and the maximum/minimum values over each decimation period, as in the sketch below.
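The behaviour of such a decimator can be modelled in a few lines of C++. This is a sketch only, assuming a fixed integer decimation factor; returning min/max together with the average lets narrow loss spikes survive the rate reduction:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One decimated output point: average plus extremes over the window.
struct DecimatedSample { int32_t avg; int32_t min; int32_t max; };

// Reduces every 'factor' raw samples of the region of interest to their
// mean and min/max values (factor must be > 0).
std::vector<DecimatedSample> decimate(const std::vector<int32_t>& roi,
                                      size_t factor) {
    std::vector<DecimatedSample> out;
    for (size_t i = 0; i + factor <= roi.size(); i += factor) {
        int64_t sum = 0;
        int32_t lo = roi[i], hi = roi[i];
        for (size_t j = i; j < i + factor; ++j) {
            sum += roi[j];
            lo = std::min(lo, roi[j]);
            hi = std::max(hi, roi[j]);
        }
        out.push_back({static_cast<int32_t>(sum / static_cast<int64_t>(factor)),
                       lo, hi});
    }
    return out;
}
```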

3.2.2. Background Subtraction

The specifics of the loss detectors used for the ICBLM systems make the whole signal path vulnerable to background correlated with the operation of the RF systems. These effects must be cancelled to ensure that no false alarms occur; therefore, a special background measurement and subtraction algorithm has been implemented. It uses the no-beam diagnostic pulses present in the machine; such pulses are marked by the timing system with a dedicated trigger signal, which activates the algorithm.
Each point in the measurement window of a diagnostic pulse is filtered by an exponential filter with operator-defined coefficients and stored for later use. The filter results are subtracted from the measurements taken during beam-qualified pulses to cancel the repetitive distortions caused by RF. This results in a corrected signal, which can be used for further processing [18]. A behavioural sketch follows.
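The per-point exponential filter and the subtraction step can be summarized as follows. This is a behavioural sketch with assumed names; the coefficient alpha and the window length are operator-defined configuration values:

```cpp
#include <vector>

// Background estimation and subtraction in the style of Section 3.2.2.
// Every point of the measurement window keeps its own exponentially
// filtered background estimate, updated only on no-beam diagnostic pulses.
class BackgroundFilter {
public:
    BackgroundFilter(std::size_t windowLen, double alpha)
        : bg_(windowLen, 0.0), alpha_(alpha) {}

    // Called for a diagnostic (no-beam) pulse tagged by the timing system:
    // bg[i] <- bg[i] + alpha * (x[i] - bg[i])   (exponential filter)
    void updateOnDiagnosticPulse(const std::vector<double>& x) {
        for (std::size_t i = 0; i < bg_.size() && i < x.size(); ++i)
            bg_[i] += alpha_ * (x[i] - bg_[i]);
    }

    // Called for beam-qualified pulses: subtract the stored RF background.
    std::vector<double> correct(const std::vector<double>& x) const {
        std::vector<double> y(x.size());
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = x[i] - (i < bg_.size() ? bg_[i] : 0.0);
        return y;
    }

private:
    std::vector<double> bg_;  // per-point background estimate
    double alpha_;            // operator-defined filter coefficient
};
```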

3.2.3. HV Modulation Detection

To ensure that the detectors and their HV power supplies are operating correctly, a special low-frequency modulation is applied to the power supply voltage. Its frequency is below 1 Hz and is configurable via the HV power supply control system. The modulation is also present on all input current signals sampled by the ICBLM DAQ cards. For diagnostic purposes, the parameters of this modulation must be detected and compared with the HV control system settings for consistency.
Detecting sub-1 Hz frequencies in the 1 MHz data stream with a single FIR low-pass filter would require a few million coefficients, so the decision was made to reduce the data sampling rate to 1 Hz before further processing. This is accomplished with a dedicated decimator with anti-aliasing filters, whose structure is presented in Figure 10. The decimator consists of 20 FIR-downsampler pairs.
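A behavioural model of the cascade is given below. The paper does not state the per-stage decimation factor or the filter taps; the sketch assumes 20 identical FIR-plus-downsample-by-2 stages (2^20 ≈ 10^6, taking 1 MHz down to roughly 1 Hz) and a short binomial low-pass kernel as the anti-aliasing filter:

```cpp
#include <deque>
#include <vector>

// One FIR + downsample-by-2 stage (taps are an assumed binomial low-pass).
struct HalfRateStage {
    std::deque<double> hist;
    bool emitNext = false;

    // Returns true and sets 'out' when this stage produces an output sample.
    bool push(double x, double& out) {
        static const double h[5] = {0.0625, 0.25, 0.375, 0.25, 0.0625};
        hist.push_back(x);
        if (hist.size() > 5) hist.pop_front();
        if (hist.size() < 5) return false;     // still filling the delay line
        emitNext = !emitNext;
        if (!emitNext) return false;           // decimation by 2
        out = 0.0;
        for (int i = 0; i < 5; ++i) out += h[i] * hist[i];
        return true;
    }
};

// Feeds one raw 1 MHz sample through the whole chain (stages.size() == 20);
// 'out' is valid only when the function returns true, roughly once a second.
bool decimateTo1Hz(std::vector<HalfRateStage>& stages, double x, double& out) {
    double v = x;
    for (auto& stage : stages) {
        if (!stage.push(v, out)) return false; // this stage absorbed the sample
        v = out;
    }
    return true;
}
```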
The resulting 1 Hz data streams (one per measurement channel) are passed through a pre-processing block, which forms FFT frames and applies the selected data window (Hamming, Hanning, Bartlett or rectangular; see the sketch below). The data are then fed into the FFT algorithm to calculate the frequency components of the signal. The low bandwidth of the data stream makes it possible to multiplex a single FFT block in the FPGA logic across the channels. The results are sent to the DAQ (via framers and a circular buffer) to be presented to the operators and compared with reference data in the higher layers of the system.
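The window coefficients follow the textbook definitions; a small generator is sketched below (the firmware's fixed-point representation is not modelled):

```cpp
#include <cmath>
#include <vector>

enum class Window { Rectangular, Bartlett, Hanning, Hamming };

// Generates the coefficients of the selected FFT data window.
std::vector<double> makeWindow(Window w, std::size_t n) {
    std::vector<double> c(n, 1.0);                // Rectangular by default
    if (n < 2) return c;
    const double pi = std::acos(-1.0);
    for (std::size_t i = 0; i < n; ++i) {
        const double r = static_cast<double>(i) / (n - 1);
        switch (w) {
        case Window::Bartlett: c[i] = 1.0 - std::fabs(2.0 * r - 1.0); break;
        case Window::Hanning:  c[i] = 0.5 - 0.5 * std::cos(2.0 * pi * r); break;
        case Window::Hamming:  c[i] = 0.54 - 0.46 * std::cos(2.0 * pi * r); break;
        default: break;
        }
    }
    return c;
}
```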

4. Implementation

During the initial analysis of the design, the functional modules were divided into two groups: common infrastructure modules used by both applications, and application-specific algorithms. The modules of each group are described in the following sections.

4.1. Common Infrastructure

Since both BLM systems share many functional elements at the hardware level, several firmware components are reused in both of them. One of the common parts is the data stream controller, which provides continuous data acquisition into the external memory banks. Its structure is presented in Figure 11. It is equipped with dedicated FIFOs, which provide data buffering as well as the clock domain crossing infrastructure between the data acquisition and memory clocks. The TOSCA framework uses a custom protocol allowing data blocks to be written in bursts of a specific size at clock frequencies of up to 300 MHz. To obtain good write performance, a burst size of at least 2048 bytes is necessary; therefore, the data are stored in the FIFO until a full burst transfer can be initiated. In addition, a latency timer forces a flush of an incomplete burst if a specified amount of time has passed since the last transfer. The operation is controlled by a dedicated state machine with write and read counters. The operation status and dedicated interrupts are provided externally for monitoring.
The data in the memory banks are organized as circular buffers. If the CPU readout is too slow, the channel controller sets an appropriate data overwrite flag. The data streams are divided into frames protected by a 32-bit CRC, allowing time-stamping and integrity checking. Each frame begins with a start-of-frame signature, facilitating stream resynchronization after a circular buffer overwrite; an assumed layout is sketched below.
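An assumed frame layout and the resynchronization idea are illustrated below. The field widths and the signature value are our placeholders, not the ESS definitions:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

constexpr uint32_t kStartOfFrame = 0xFA57DA7A;  // example signature value

struct FrameHeader {
    uint32_t signature;    // start-of-frame marker used for resynchronization
    uint64_t timestamp;    // time-stamp of the first sample in the frame
    uint32_t sampleCount;  // number of payload samples following the header
    // payload: sampleCount samples, then a 32-bit CRC over header + payload
};

// After a circular-buffer overwrite, the reader scans forward until it finds
// the signature again; the CRC of the candidate frame confirms the match.
std::size_t resync(const std::vector<uint8_t>& buf, std::size_t from) {
    for (std::size_t i = from; i + sizeof(uint32_t) <= buf.size(); ++i) {
        uint32_t word;
        std::memcpy(&word, &buf[i], sizeof word);
        if (word == kStartOfFrame) return i;  // candidate frame start
    }
    return buf.size();                        // no frame start found
}
```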
The circular buffers are used together with other modules to provide DAQ capabilities for the applications. The full data flow diagram is presented in Figure 12.
The nBLM acquisition board is equipped with 8 channels, but the application requirements assume that a single system monitors only six detectors (the additional channels can be used as spares). For this reason, dedicated switching has been implemented, which allows channels to be reconnected to different algorithm blocks. The selected raw data streams are passed to framers for raw data acquisition and to the algorithm block for further processing.
The framers group subsequent data samples into frames, putting the time-stamp of the first sample in the header. To reduce the framing overhead, the framers wait until a specific number of samples has been collected. A latency timer can be used to flush an undersized frame if required. If the channel controller is out of buffer space, the frame can grow larger than usual, up to the size of the framer buffer.
The outputs from framers are sent to Data Channel Controllers, managing circular buffers in two DDR3 memory banks. The Data Channel Controllers are connected to the arbiters, allowing the sharing of the single memory access channel between different data streams. The arbiters use a round-robin scheduling algorithm.
On the CPU side, two independent threads read data from the two memory banks. To reduce the CPU load, interrupts indicate that unread data are available in any of the buffers. The threshold for the amount of data generating an interrupt is configurable. To prevent small amounts of data from being kept in the buffers indefinitely, yet another latency timer is employed; it generates an interrupt regardless of the threshold if the data rate is low.
Both ICBLM and nBLM implementations also share the following modules:
  • Memory arbiters.
  • Clock domain crossing infrastructure.
  • Dedicated TOSCA framework interfaces (such as register interface and DMA interface).

4.2. High Level Synthesis

During the implementation of the algorithms, it was decided to evaluate the most recent High Level Synthesis (HLS) tools provided by the FPGA vendor in terms of their suitability for complex algorithm implementation. The main reasons behind this decision were:
  • The complexity of the data-processing algorithm, requiring the consideration of two samples in parallel and involving many additions and multiplications on data of varying widths.
  • Fast implementation in C++.
  • Fast verification of the implementations without the need to descend to the FPGA/VHDL simulation level.
  • Ready-to-use or easily adaptable test C implementations prepared by BD at the ESS.
The large span of requirements between the ICBLM and nBLM (resource reuse vs. low latency) made the task even more interesting.
The typical work flow using HLS is presented in the technical manuals [19] and covers the following steps:
  • Design entry (C++, C or SystemC).
  • Functional verification (simple testbenches implemented in high level languages).
  • RTL generation (additional control directives allow constraints to be defined).
  • RTL simulations (as final verification step).
HLS was used to synthesize the data-processing paths. The firmware components interacting with the external hardware and with the TOSCA framework were written in VHDL, due to their specific timing and interface requirements, which are difficult to fulfill otherwise.

4.3. HLS for the nBLM

The 250 MHz sampling rate and the continuous data acquisition and processing in the nBLM systems require real-time, low-latency responses from the algorithm blocks. For this reason, individual processing stages were implemented as separate C++ functions with the latency requirement narrowed to 1 clock cycle. There was also another reason for this constraint: loosening it caused the HLS tools to generate components producing incorrect results. To decrease the effort of the implementation and of the HLS tools, two ADC samples are processed in a single clock cycle (therefore, the required logic frequency is 125 MHz), and the data flow control was implemented in external VHDL-based modules. This resulted in several simple-to-use building blocks, which were placed in the pipeline structure using the graphical schematic editor integrated into the Vivado Design Suite. A stage written in this style is sketched below.
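The pragma syntax below is that of Vivado HLS [19]; the stage shown (pedestal subtraction plus threshold comparison, two samples per call) illustrates the coding pattern and is not the production source:

```cpp
#include <cstdint>

// One nBLM pipeline stage: processes two ADC samples per call, so a new
// sample pair is accepted every 125 MHz clock cycle. The directives request
// an initiation interval of 1 and a latency of at most 1 cycle, matching the
// constraints described in Section 4.3.
void preprocessor(int16_t in0, int16_t in1, int16_t pedestal,
                  int16_t threshold, int16_t& out0, int16_t& out1,
                  bool& below0, bool& below1) {
#pragma HLS PIPELINE II=1
#pragma HLS LATENCY max=1
    out0 = static_cast<int16_t>(in0 - pedestal);  // pedestal subtraction
    out1 = static_cast<int16_t>(in1 - pedestal);
    below0 = out0 < threshold;                    // threshold comparison
    below1 = out1 < threshold;
}
```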

4.4. HLS for the ICBLM

The algorithms used in the ICBLM systems do not require such low latency; in this case, it is more important to multiplex FPGA resources so that data from all channels are processed by a single instance of each logic element. The individual algorithms (FFT input windows, ROI, background subtraction) were implemented as single C++ functions, without explicitly splitting them into pipelined stages. Each function contains a computation part and additional elements responsible for flow control and the generation of data packets. They were encapsulated into IP packages and used inside the main VHDL code of the ICBLM system. A sketch of this channel-multiplexed style follows.
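In this resource-sharing style, a single function instance loops over all channels, so one set of arithmetic resources is time-multiplexed instead of being replicated per channel. The channel count and the per-channel computation (an ROI window average) in the sketch below are illustrative:

```cpp
#include <cstdint>

constexpr int kChannels = 8;  // assumed channel count for illustration

// Accumulates ROI samples for all channels with one shared datapath and
// emits the per-channel window average when 'windowEnd' is set.
void roiAverage(const int32_t in[kChannels], int32_t acc[kChannels],
                int32_t out[kChannels], bool windowEnd, int32_t windowLen) {
    for (int ch = 0; ch < kChannels; ++ch) {
#pragma HLS PIPELINE II=1
        acc[ch] += in[ch];                  // accumulate within the ROI window
        if (windowEnd && windowLen > 0) {
            out[ch] = acc[ch] / windowLen;  // emit the window average
            acc[ch] = 0;                    // restart for the next window
        }
    }
}
```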

4.5. HLS Summary

The implementation and testing of the modules generated by the HLS tools clearly show that the toolchain is capable of producing real-time, low-latency processing blocks as well as complex data flow control. The modules were approved and integrated into the final application. Moreover, HLS will be considered as an implementation tool for future functionalities and other projects.
The latency of the nBLM data-processing chain (from the preprocessor to the neutron summarizer) is 72 ns. The resource utilization of this chain is presented in Table 1, and the resource utilization of the TOSCA framework and the nBLM application in Table 2.

5. Deployment and Higher Software Levels

The main software packages for the BLM subsystems were implemented as EPICS modules developed in the framework of the European Spallation Source EPICS Environment (E3). This framework, widely used by ESS developers, provides the full EPICS infrastructure and flexible tools for application development and deployment. The overall software layers of the different configurations are shown in Figure 13 for the ICBLM and in Figure 14 for the nBLM, respectively.
At the lowest layer (on the CPU side), the TOSCA framework provides the kernel driver and user library for communication and data acquisition from/to the hardware layer. Both configurations (ICBLM and nBLM) use the PCI Express interface for high-speed data transfer. The upper layer incorporates various EPICS layers: the ICBLM system is based on the Asynchronous Driver Support (asynDriver) [20], while the nBLM software is based on the Nominal Device Support (NDS) module [21]. More details about the software implementation for the nBLM can be found in [22]. On top of this, an EPICS API layer has been developed to cover the functional requirements concerning measured data processing and storage.
In addition to the EPICS layer, several low-level testing tools have been implemented to allow fast and efficient debugging of the firmware. They run as Linux console programs and make it possible to perform the full configuration of the algorithm blocks and data acquisition to dedicated files.
At the highest level of the software stack, dedicated graphical control panels have been created (presented in Figure 15), implemented with the Control System Studio software package. The control panels allow easy access to all functions of the individual BLM modules and provide the presentation layer for the data acquisition subsystems.

6. Experimental and Laboratory Results

To prove the correctness of the designed and developed systems (nBLM and ICBLM), they were thoroughly verified in various tests performed at different levels of the hierarchy. First, the low-level modules described in HDL were analyzed separately by means of behavioral simulations, using dedicated HDL test benches. Similarly, the code prepared for HLS was tested before synthesis using C++ test benches. After synthesis, the modules were examined together with the other components in system-level simulations. The results of these simulations were automatically verified against reference data; for this purpose, the standard Questa Advanced Simulator was integrated with project-specific software tools processing data from the real systems.
To verify the reliability of both systems, they were deployed on laboratory MicroTCA stands. Long-term tests confirmed not only the correctness of the computations (by matching the results against the provided reference data) but also the stability and compatibility of the system components (hardware, firmware and software).
Subsequently, in 2018, an nBLM-F pre-series detector module was installed at LINAC4 at CERN, in a section where conditions similar to those anticipated in the ESS DTL (Drift Tube Linac) can be encountered. The module was positioned near the beam pipe in the inter-tank region between the DTL1 and DTL2 tanks, where the H− beam energy reaches ∼12 MeV (see Figure 16). The objectives of this data-collection campaign were to:
  • Assess the detector response in a realistic environment; for this purpose, the data were recorded with an oscilloscope for offline analysis.
  • Perform the first test of the full nBLM DAQ chain, including a detector, in a realistic environment.
Preliminary conclusions of the latter are reported in this paper; more details can be found in [23]. The results from the data collected with the oscilloscope are discussed in [24].
Figure 16. Fast nBLM pre-series module installed at LINAC4 at CERN.
The nBLM DAQ prototype tested was at an early stage of its development, with a functioning NDA and the ability to manually trigger the extraction of data at various stages of processing, including the raw unprocessed data stream in a time window of up to 2 s. The prototype lacked the periodic data monitoring features, but the main processing chain was in its final version. An example of a raw signal, with pedestal correction applied, acquired during this campaign is shown in Figure 6.
The pulse characteristics of interesting events (amplitude, TOT, etc.) mentioned above were derived from several acquisition runs with durations varying from 5 min to 8 h. Beforehand, several linac cycle periods of raw data were acquired for the offline determination of the pedestal value and the selection of appropriate event detection thresholds. Due to the limited beam availability and low neutron rates, the efforts were concentrated on acquiring sufficient statistics on interesting events for offline analysis and comparison with the results obtained with the oscilloscope.
The distribution of the number of interesting events vs. pulse amplitude, extracted via the offline analysis, is shown in Figure 17. Note that some pulses had to be split in the real-time processing, so a reconstruction of the recorded events (merging events at MTW boundaries) was needed to obtain the real pulse characteristics; this explains the differences seen at higher amplitudes. The reconstructed distribution has the expected shape, with the first slope due to noise, the second slope due to γ particles (mainly caused by the RF) and neutrons dominating at amplitudes above ∼30 mV, as shown in Figure 17. This agrees with the results obtained from the oscilloscope data [24], with minor differences in shape caused by the more advanced offline analysis carried out on the oscilloscope data.
Since the hardware setups with real detectors are not yet ready for use, the ICBLM implementations were tested in the laboratory systems with external data generators.
Laboratory tests were also performed for the background calculation block using an artificial trigger setup; the timing system behavior and the exact triggering schemes are not yet defined, so some assumptions had to be made.

7. Conclusions and Future Work

The data acquisition and processing algorithms implemented in the FPGA work correctly, as confirmed by the experimental results from LINAC4 and by the laboratory tests. No pileups were observed during the tests, so full verification of the data-processing functionality will require experiments in environments with a greater neutron flux.
Further steps in the system development include the implementation and testing of the machine protection functionality and of the interaction with the MPS and the accelerator timing system, especially regarding the final time-stamping of the acquired data and its correlation with other measurement systems.

Author Contributions

Conceptualization, W.J., G.J., W.C. and I.D.K.; methodology, W.J., G.J., I.D.K., R.K., T.S. and K.R.; software, W.J., G.J. and R.K.; resources, Y.M., V.N., L.S. and T.P.; validation, W.J., G.J., R.K., I.D.K., C.D., V.G., K.R. and F.D.S.A.; formal analysis, I.D.K.; investigation, W.J., G.J. and I.D.K.; data curation, I.D.K.; writing—original draft preparation, W.J.; writing—review and editing, W.J., G.J., R.K., I.D.K. and W.C.; visualization, I.D.K.; supervision, W.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially supported by the Polish Ministry of Science and Higher Education, decision number DIR/WK/2018/2020/02-2.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not available.

Acknowledgments

The authors would like to thank Jiří Král and William Viganò for their valuable help with detector installation and realization of the test at LINAC4.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AMC       Advanced Mezzanine Card
BLM       Beam Loss Monitor
CPU       Central Processing Unit
ESS       European Spallation Source
EVR       Event Receiver
FMC       FPGA Mezzanine Card
FPGA      Field-Programmable Gate Array
ICBLM     Ionization Chamber Beam Loss Monitor
MicroTCA  Micro Telecommunications Computing Architecture
nBLM      neutron Beam Loss Monitor

References

  1. ESS. ESS Technical Design Report. In European Spallation Source; Technical Report ESS-0016915; ESS: Lund, Sweden, 2013. [Google Scholar]
  2. Eshraqi, M.; Bustinduy, I.; Celona, L.; Comunian, M. The ESS linac. In Proceedings of the IPAC 2014—5th International Particle Accelerator Conference, Dresden, Germany, 16–20 June 2014. [Google Scholar]
  3. Dolenc Kittelmann, I.; Shea, T. Simulations and detector technologies for the Beam Loss Monitoring System at the ESS linac. In Proceedings of the HB2016—The 57th ICFA Advanced Beam Dynamics Workshop on High-Intensity and High-Brightness Hadron Beams, Malmö, Sweden, 7 July 2016. [Google Scholar]
  4. Dolenc Kittelmann, I.; dos Santos Alves, F.; Bergman, E.; Derrez, C.; Grishin, V.; Rosengren, K.; Shea, T.J.; Legou, P.; Mariette, Y.; Nadot, V.; et al. Neutron sensitive beam loss monitoring system for the European Spallation Source linac. Phys. Rev. Accel. Beams 2022, 25, 022802. [Google Scholar] [CrossRef]
  5. Grishin, V.; Dehning, B.; Koshelev, A.; Larionov, A.; Seleznev, V.; Sleptsov, M. Ionisation Chambers as Beam Loss Monitors for ESS linear accelerator. In Proceedings of the IBIC2017—6th International Beam Instrumentation Conference, Grand Rapids, MI, USA, 20–24 August 2017. [Google Scholar]
  6. Dolenc Kittelmann, I. Requirements and Technical Specifications—ESS ICBLM System; Technical Report ESS-1158292; ESS: Lund, Sweden, 2019. [Google Scholar]
  7. Papaevangelou, T.; Alves, H.; Aune, S.; Gressier, V.; Beltramelli, J.; Bertrand, Q.; Bey, T.; Bolzon, B.; Chauvin, N.; Combet, M.; et al. ESS nBLM: Beam Loss Monitors based on Fast Neutron Detection. In Proceedings of the 61st ICFA Advanced Beam Dynamics Workshop on High-Intensity and High-Brightness Hadron Beams, Daejeon, Republic of Korea, 17–22 June 2018. [Google Scholar]
  8. Garoby, R.; Vergara, A.; Danared, H.; Alonso, I.; Bargallo, E.; Cheymol, B.; Darve, C.; Eshraqi, M.; Hassanzadegan, H.; Jansson, A. The European Spallation Source Design. Phys. Scr. 2018, 93, 014001. [Google Scholar] [CrossRef]
  9. PICMG. PicMG—MicroTCA Standard Description. Available online: https://www.picmg.org/openstandards/microtca/ (accessed on 15 October 2023).
  10. Branlard, J.; Ayvazyan, G.; Ayvazyan, V.; Grecki, M.; Hoffmann, M.; Jeżyński, T.; Ludwig, F.; Mavrič, U.; Pfeiffer, S.; Schlarb, H.; et al. MTCA.4 LLRF system for the European XFEL. In Proceedings of the 20th International Conference Mixed Design of Integrated Circuits and Systems—MIXDES 2013, Gdynia, Poland, 20–22 June 2013; pp. 109–112. [Google Scholar]
  11. Zappai, J.; Schlitt, B.; Schnase, A.; Schreiber, G. Development of a New Digital LLRF System for the UNILAC Based on MTCA.4; GSI Scientific Report 2015; GSI: Darmstadt, Germany, 2016; p. 309. [Google Scholar]
  12. Jamróz, J.; Cereijo García, J.; Korhonen, T.; Lee, J.H. Timing System Integration with MTCA at ESS. In Proceedings of the 17th International Conference on Accelerator and Large Experimental Physics Control Systems, New York, NY, USA, 5–11 October 2019. [Google Scholar] [CrossRef]
  13. Martins, J.P.; Farina, S.; Lee, J.H.; Piso, D. MicroTCA.4 Integration at ESS: From the Front-End Electronics to the EPICS OPI. In Proceedings of the 16th International Conference on Accelerator and Large Experimental Physics Control Systems, Barcelona, Spain, 8–13 October 2017. [Google Scholar] [CrossRef]
  14. Szewinski, J.; Golebiewski, Z.; Gosk, M.; Krawczyk, P.; Kudla, I.M.; Abramowicz, A.; Czuba, K.; Grzegrzolka, M.; Rutkowski, I. Contribution to the ESS LLRF System by Polish Electronic Group. In Proceedings of the 8th International Particle Accelerator Conference, Copenhagen, Denmark, 14–19 May 2017. [Google Scholar] [CrossRef]
  15. IOxOS. ADC3111 Product Page. Available online: https://www.ioxos.ch/produit/adc-3110-3111/ (accessed on 15 October 2023).
  16. CAENels. PICO4 Product Page. Available online: https://www.caenels.com/products/fmc-pico-1m4/ (accessed on 15 October 2023).
  17. Dolenc Kittelmann, I. Requirements and Technical Specifications—ESS nBLM System; Technical Report ESS-0044364; ESS: Lund, Sweden, 2019. [Google Scholar]
  18. Kittelmann, I.D.; Alves, F.S.; Bergman, E.; Derrez, C.; Grishin, V.; Grandsaert, T.; Shea, T.J.; Cichalewski, W.; Jabłoński, G.W.; Jałmużna, W.; et al. Ionisation Chamber Based Beam Loss Monitoring System for the ESS Linac; IBIC: Malmö, Sweden, 2019; paper MOPP023. [Google Scholar]
  19. Xilinx. ug902 HLS User Guide. Available online: https://www.xilinx.com (accessed on 5 October 2023).
  20. Rivers, M. ASYN EPICS Documentation Webpage. Available online: https://epics.anl.gov/modules/soft/asyn/ (accessed on 15 October 2023).
  21. Cosylab. NDS EPICS Repository Webpage. Available online: https://github.com/Cosylab/nds3/ (accessed on 15 October 2023).
  22. Mariette, Y.; Nadot, V.; Bertrand, Q.; Gougnaud, F.; Joannem, T.; Papaevangelou, T.; Segui, L.; Jabłoński, G.; Cichalewski, W.; Jałmużna, W.; et al. New Neutron Sensitive Beam Loss Monitor (nBLM). In Proceedings of the 17th International Conference on Accelerator and Large Experimental Physics Control Systems, New York, NY, USA, 5–11 October 2019. [Google Scholar] [CrossRef]
  23. Kittelmann, I.D.; Alves, F.S.; Bergman, E.; Derrez, C.; Grishin, V.; Rosengren, K.; Shea, T.J.; Bertrand, Q.; Joannem, T.; Legou, P.; et al. Neutron sensitive Beam Loss Monitoring system for the ESS linac. In Proceedings of the IBIC 2019—8th International Beam Instrumentation Conference, Malmö, Sweden, 8–12 September 2019. [Google Scholar] [CrossRef]
  24. Segui, L.; Alves, H.; Aune, S.; Beltramelli, J.; Bertrand, Q.; Combet, M.; Dano-Daguze, A.; Desforge, D.; Gougnaud, F.; Joannem, T.; et al. Characterization and first beam loss detection with one nBLM-ESS system detector. In Proceedings of the IBIC 2019—8th International Beam Instrumentation Conference, Malmö, Sweden, 8–12 September 2019. [Google Scholar]
Figure 2. Block diagram of the architecture of the MicroTCA system for the BLM application.
Figure 3. The TOSCA framework structure.
Figure 4. ADC3111 ADC module used for nBLM systems.
Figure 5. PICO1M4 ADC module used for ICBLM systems.
Figure 8. Simulated nBLM signal used for firmware implementation evaluation, with visible pileup, single neutron and spike or background events (from left to right).
Figure 10. The structure of the decimator used for HV modulation detection.
Figure 11. Structure of the data channel controllers.
Figure 12. An overview of data flow in firmware.
Figure 13. Overview of the ICBLM software (EPICS) layers.
Figure 14. Overview of the nBLM software (EPICS) layers.
Figure 15. General view of the user panel.
Figure 17. Distribution of the number of interesting events over amplitude, collected with the nBLM DAQ at LINAC4. The blue histogram represents the reconstructed data, while black refers to the data as recorded.
Table 1. nBLM data path resource utilization (single channel).

Component            CLB LUTs    CLB Registers    CLBs
Preprocessor         173         251              66
Event detector       526         600              132
Event aligner        52          165              36
Neutron counter      90          343              57
Neutron summarizer   182         313              43
Table 2. nBLM resource utilization.

Component           CLB LUTs    CLB Registers    CLBs      Block RAMs
TOSCA framework     35,040      41,948           7838      189
nBLM application    99,534      105,637          18,832    171