**High-Energy Gamma-Ray Astronomy: Results on Fundamental Questions after 30 Years of Ground-Based Observations**

Editors

**Ulisses Barres de Almeida • Michele Doro**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editors*

Ulisses Barres de Almeida
Brazilian Center for Physics Research (CBPF)
Brazil

Michele Doro
University of Padova
Italy

*Editorial Office*
MDPI
St. Alban-Anlage 66
4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Universe* (ISSN 2218-1997) (available at: https://www.mdpi.com/journal/universe/special_issues/gamma-ray_astronomy).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-5727-4 (Hbk) ISBN 978-3-0365-5728-1 (PDF)**

Cover image courtesy of Chiara Righi

© 2022 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

### **Contents**



### **About the Editors**

### **Ulisses Barres de Almeida**

Dr. Ulisses Barres de Almeida obtained his Ph.D. from the University of Durham, UK, in 2011, with a thesis associated with the H.E.S.S. experiment, and has since worked in various international collaborations dedicated to gamma-ray astronomy and astroparticle physics, including the Cherenkov Telescope Array (CTA). Before joining the Brazilian Center for Physics Research (CBPF) as a researcher in 2013, he was a research fellow at the Max Planck Institute for Physics in Munich for two years. In 2014, he was elected Fellow of the Institute of Advanced Studies of the University of Durham, and in 2017 he became an Affiliate Member of the Brazilian Academy of Sciences. More recently, in 2021, he was appointed Vice-Spokesperson for the Southern Wide-field Gamma-ray Observatory (SWGO). Since 2017 he has been a member of the Brazilian Delegation to the Committee on the Peaceful Uses of Outer Space of the United Nations (UN-COPUOS).

### **Michele Doro**

Prof. Michele Doro graduated in Physics at the University of Padova in 2004 and obtained his PhD in Physics there in 2009, with a thesis on mirror-facet technology for the MAGIC Cherenkov telescopes and the worldwide CTA project, and on indirect dark matter searches with gamma rays from the annihilation or decay of dark matter in astrophysical environments. He worked at the Institut de Física d'Altes Energies (IFAE, Spain, 2004 and 2014), at the Universitat Autònoma de Barcelona (UAB, Spain, 2010–2013) and at the Max Planck Institute for Physics (MPI, Munich, Germany, 2015). He is a member of the MAGIC, CTA/LST and SWGO collaborations, in which he has held several managerial roles, both technological (Telescope Operation, Telescope Safety, Mirrors) and scientific (Dark Matter and Fundamental Physics coordinator, member of the Time Allocation Committee). He is currently an associate professor in Experimental Particle Physics at the University of Padova, Department of Physics and Astronomy, where he teaches Experimental Laboratory and General Physics courses.

### **Preface to "High-Energy Gamma-Ray Astronomy: Results on Fundamental Questions after 30 Years of Ground-Based Observations"**

*What is here, is found elsewhere.*
*What is not here, is nowhere!*

Mahabharata

This Special Issue book of *Universe* is a major contribution to documenting the work conducted over the past decades and provides inspiration for present and future generations. After reading the articles in this book and learning of its many innovations, I finished with a profound feeling of admiration for the human enterprise under development and so well described here.

We are living through a transition period in ground-based gamma-ray astronomy. The Cherenkov Telescope Array (CTA) Observatory is a game changer, and the community is looking forward to analyzing its data. This moment is perfectly captured and clearly developed by the authors of the papers in this book. The papers have solid roots in the current generation of observatories and point firmly to what CTA will make possible. They allow us to foresee an even brighter future for ground-based gamma-ray astronomy.

The editors, whom I have known for a long time, offer a great variety of subjects and views by collecting research from specialists with different backgrounds. Anyone involved in the organization of scientific publications knows this is not an easy task. Choosing the right combination of subjects and finding experts with time to accept the invitation are demanding and perilous tasks. The editors accepted this challenge with courage, and the success of this publication does not come as a surprise for those who are aware of the quality of their scientific work.

I belong to the last generation of scientists who used the library to read papers during their PhD. I still remember the excitement when a new printed volume of the main journals arrived. As students, we used to spend at least one day per week in the library reading them. This was a special event and we read countless papers from each volume—not only those directly related to our thesis. Journal clubs to discuss the papers were far more common and students who did not know the basics about the related fields were often easy to spot.

I have to confess that if I had not accepted the invitation to write this foreword, I would probably have read only one or two papers in this book. As I grew older, I changed my reading habits; fortunately, the editors of this book required me to read all papers in a volume for the first time in many years. Nowadays, the amount of information reaching us is beyond the processing capacity of any human being. Choosing how to spend our time in finding valuable and relevant information has become one of the most important skills. Usually, we choose the papers we are going to read based on three pieces of information: title, authors and number of citations. If the title indicates a subject we are interested in, we ask ourselves if we know the authors. Thanks to technology, we can immediately check how many times the paper has been cited and viewed. Our preference lies with the papers with the highest numbers of citations, those written by authors we know, and research on subjects closely related to the information we seek before producing our own. Needless to say, this modern selection procedure has many biases.

It is likely that many of the scientists who come across this book will select which papers to read using a procedure very similar to the one I described above. For the papers you select this way, I need no argument to convince you to read them. However, my trust in the quality of this book is so high that I would like to propose that you read at least one more paper, by an author you are unfamiliar with and about a subject not closely related to your own research. Try selecting a paper without many citations or views. I am sure you will be surprised by its quality, and I am convinced that you will then choose to read another from the collection. I also believe there is a high probability that you will cite them in your next publication.

I am grateful to the authors who dedicated their time to writing these papers. I am grateful to the journal *Universe* for making this collection available to the next generation. Above all, I thank the editors for reminding me of my youth: reading an entire volume is, as I had forgotten, fun!

### **Luiz Vitor de Souza Filho**

### *Editorial* **Editorial to the Special Issue: "High-Energy Gamma-Ray Astronomy: Results on Fundamental Questions after 30 Years of Ground-Based Observations" †**

**Ulisses Barres de Almeida 1,\* and Michele Doro 2,\***


Gamma-ray astronomy is the observational science that studies the cosmos in the last unexplored electromagnetic window, namely, above the megaelectronvolt (MeV = 10<sup>6</sup> eV). This radiation is mostly produced by very energetic parent charged particles (cosmic rays) that have been accelerated over cosmic time to kinetic energies up to the exaelectronvolt (EeV = 10<sup>18</sup> eV) and beyond. In the presence of other particles, photons or magnetic fields, cosmic rays lose energy by emitting gamma rays and other carriers of astrophysical information, such as neutrinos. The combined observation of these probes, whose origins are closely linked, makes up the multi-messenger astronomy framework, of which gamma rays are a key ingredient.
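For readers keeping track of the unit ladder (MeV through EeV), the conversions can be summarised with a small helper (an illustrative snippet, not part of the original text; the unit names are standard, the function is ours):

```python
# Energy-unit ladder used throughout gamma-ray astronomy,
# with each unit expressed in electronvolts.
UNITS_EV = {
    "eV": 1e0,
    "keV": 1e3,
    "MeV": 1e6,   # the gamma-ray band starts here
    "GeV": 1e9,
    "TeV": 1e12,  # domain of ground-based Cherenkov telescopes
    "PeV": 1e15,  # "PeVatron" accelerators
    "EeV": 1e18,  # highest-energy cosmic rays
}

def convert(value, from_unit, to_unit):
    """Convert an energy between units, e.g. 1 EeV -> TeV."""
    return value * UNITS_EV[from_unit] / UNITS_EV[to_unit]

print(convert(1, "EeV", "TeV"))  # 1 EeV = one million TeV
```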

Since the discovery of the first TeV-emitting source a little over 30 years ago, ground-based gamma-ray astronomy, and in particular the imaging atmospheric Cherenkov technique (IACT), has been a major actor in the many revolutions witnessed in the field of astroparticle physics. Today, over 200 TeV objects of all classes have been discovered. This was complemented by the results of satellite-borne detectors, which revolutionised our view of the MeV–GeV sky by detecting thousands of high-energy emitters, as well as the diffuse emission signature of cosmic-ray propagation and interaction across the Galaxy. More recently, shower-front detectors have singled out the first several PeVatron candidates—the putative accelerators of the most energetic cosmic rays in the Galaxy.

This is the general context that motivated this Special Issue, whose goal is to document the tremendous efforts of the two generations of astrophysicists who have taken the field from its first discovery to the successful instruments in operation today. Our expectation is that the resulting volume pays tribute to the major challenges that were tackled and the solutions that were found, and reviews the main scientific and technological achievements of the field. We believe this exercise becomes ever more relevant as we approach a new era with the start of operations of the next generation of instruments, such as CTA and LHAASO. We hope the final result of this work succeeds in these objectives.

As a reading guide, one could approach this Special Issue starting from the contribution of P. Chadwick, "35 Years of Ground-Based Gamma-ray Astronomy" [1], on the history of the field and the major challenges it faced until finally establishing itself as a mature branch of astronomy. This contribution is complemented by that of R. Mirzoyan on "Technological Novelties of Ground-Based Very High Energy Gamma-Ray Astrophysics with the Imaging Atmospheric Cherenkov Telescopes" [2], which gives a clear account of the specific technical challenges and how they were conquered.

These two introductory papers put into context the work of E. Amato and B. Olmi, "The Crab Pulsar and Nebula as Seen in Gamma-Rays" [3], on the history of the observation of the Crab Nebula, which reports the knowledge accumulated so far on this key source, whose TeV detection is celebrated in this Special Issue. The statistical framework in which IACT data are analysed is a key aspect of the field, brilliantly presented by G. D'Amico in his contribution "Statistical Tools for Imaging Atmospheric Cherenkov Telescopes" [4]. To further understand the detection and data analysis techniques, the reader can proceed to the paper by C. Nigro, T. Hassan and L. Olivera-Nieto on the "Evolution of Data Formats in Very-High-Energy Gamma-Ray Astronomy" [5], as well as "The Making of Catalogues of Very-High-Energy gamma-ray Sources" [6] by M. de Naurois and "High-Energy Alerts in the Multi-Messenger Era" [7] by D. Dorner, M. Mostafá and K. Satalecka.

**Citation:** de Almeida, U.B.; Doro, M. Editorial to the Special Issue: "High-Energy Gamma-Ray Astronomy: Results on Fundamental Questions after 30 Years of Ground-Based Observations". *Universe* **2022**, *8*, 389. https://doi.org/10.3390/universe8080389

Received: 27 June 2022 Accepted: 10 July 2022 Published: 22 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The physical results achieved by IACTs are further reported in a number of science reviews. Cosmology and cosmic-ray physics are discussed by L. Tibaldo, D. Gaggero and P. Martin in "Gamma Rays as Probes of Cosmic-Ray Propagation and Interactions in Galaxies" [8], by A. Franceschini in "Photon–Photon Interactions and the Opacity of the Universe in Gamma Rays" [9] and by R. Alves Batista and A. Saveliev in "The Gamma-ray Window to Intergalactic Magnetism" [10], which reports on the limits obtained with gamma rays on the intergalactic magnetic field.

Extragalactic astrophysics has now become the territory of multi-messenger astronomy. In "Astrophysical Neutrinos and Blazars" [11], P. Giommi and P. Padovani narrate the latest results on the connections between neutrinos and gamma rays. The multi-messenger connections are further investigated by L. Nava in "Gamma-ray Bursts at the Highest Energies" [12]. P. Cristofari, in "The Hunt for Pevatrons: The Case of Supernova Remnants" [13], discusses the recent discoveries of PeV candidates and the tension with a naive supernova explanation.

Gamma-ray astronomy observations can also constrain several New Physics topics, as discussed by T. Terzić, D. Kerszberg and J. Strišković in "Probing Quantum Gravity with Imaging Atmospheric Cherenkov Telescopes" [14] for the case of Lorentz Invariance Violation searches, and by I. Batković, A. De Angelis, M. Doro and M. Manganaro in "Axionlike Particle Searches with IACTs" [15] on the probing of these elusive particles.

Other contributions depict the general context in which IACTs have developed and operated over the course of the past 30 years. K.K. Singh and K.K. Yadav, in "20 Years of Indian Gamma Ray Astronomy Using Imaging Cherenkov Telescopes and Road Ahead" [16], tell the history of pioneering projects in India; meanwhile, A. Malizia and collaborators report on the "INTEGRAL View of TeV Sources: A Legacy for the CTA Project" [17]. Last but not least, in "EAS Arrays at High Altitudes Start the Era of UHE gamma-ray Astronomy" [18], Z. Cao tells us about the new revolution underway in gamma-ray astronomy and the discovery of the first PeV source candidates.

Evidently, several relevant science topics are missing from this Special Issue. Extragalactic astrophysics with IACTs has not been described systematically here and could in itself be the subject of a dedicated volume; the interested reader is referred to [19] for recent reviews. On the Galactic scale, IACTs have been the source of many successful studies, such as those of supernova remnants, pulsars and their associated nebulae, binary systems, as well as extended regions and objects glowing at very high energies. All of these fields of research may allow us to finally build a complete theoretical model of the acceleration of Galactic cosmic rays, as well as of their diffusion through the interstellar medium, with important repercussions for our understanding of structure formation and the evolution of galaxies.

Finally, IACTs have proven to be great probes of fundamental physics topics, especially dark matter (DM) searches. DM can be traced through the gamma rays produced in its annihilation or decay. Interesting targets are both Galactic (the Milky Way centre and satellite galaxies) and extragalactic (clusters of galaxies). A recent comprehensive review can be found in [20], which also reports further searches, such as those for primordial black holes, magnetic monopoles and tau neutrinos.

In conclusion, we would like to greatly thank all our colleagues who accepted the task of contributing to this Special Issue and have done so brilliantly over the unusually challenging times of the COVID-19 pandemic. We hope that this volume will prove a relevant and lasting reference for you, the reader, and will succeed in conveying the exciting three decades that this newest of all fields of observational astronomy has undergone. We also hope that the comprehensive picture emerging from all contributions may shed some new light on the best strategies and paths for continuing to investigate the sky with very-high-energy gamma rays.

**Author Contributions:** U.B.d.A. and M.D. contributed equally to the ideation, the reviewing and the editing of this Special Issue. All authors have read and agreed to the published version of the manuscript.

**Funding:** M.D. acknowledges funding from the Italian Ministry of Education, University and Research (MIUR) through the "Dipartimenti di eccellenza" project Science of the Universe.

**Acknowledgments:** We acknowledge the support from the entire team of the MDPI *Universe* journal for their initial invitation to a Special Issue on Gamma-ray Astronomy, and their continuous and relentless support during all the phases of this project.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this manuscript:



### **References**


### *Article* **35 Years of Ground-Based Gamma-ray Astronomy**

**Paula Chadwick**

Department of Physics, Durham University, Durham DH1 3LE, UK; p.m.chadwick@durham.ac.uk

**Abstract:** This paper provides a brief, personal account of the development of ground-based gamma-ray astronomy, primarily over the last 35 years, with some digressions into the earlier history of the field. Ideas related to the imaging of Cherenkov events and the potential for the use of arrays were in existence for some time before the technical expertise required for their exploitation emerged. There has been occasional controversy, great creativity and some heroic determination—all of it part of establishing a new window into the universe.

**Keywords:** gamma-ray astronomy; astroparticle physics; Cherenkov telescopes

### **1. Introduction**

It came as something of a surprise to be asked to write this review article, until I realised that I have indeed been working in the field of ground-based gamma-ray astronomy for over 35 years, having started my PhD in Durham with Ted Turver in 1984. I have never managed to leave Durham (at least not for long), which either shows a singular lack of imagination or great dedication to the cause. While I may not have gone anywhere, ground-based gamma-ray astronomy certainly has, and this article is an attempt to give an overview of that progress from a very particular position in a small city in the far north-east of England. Others have written much more comprehensive overviews of the development of ground-based gamma-ray astronomy than I could ever hope to [1,2]. This is therefore a personal view, and I cannot claim that it is completely impartial or indeed complete at all.

### **2. The 1980s—Hunting the Snark**

As I started my PhD, the telescopes that Durham operated at the Dugway Proving Grounds in Utah, USA, had just shut down. Sundry parts arrived shortly thereafter in a couple of shipping containers, and the group got on with salvaging the useful equipment: primarily, a great deal of NIM electronics and some 5-inch and 3-inch diameter photomultiplier tubes (PMTs). There were 4 telescopes in the array at Dugway, known as the Mark I telescopes. One of these was replaced by a Mark II telescope, so it was 3 Mark I instruments and one Mark II telescope that came back to Durham in 1984. It is worth considering how those telescopes came about, as it explains much of the direction of Durham's work at the time.

The starting point is a paper published by Turver and Weekes in 1978 [3]. In it, they described some simulations that they had performed of Cherenkov light from gamma-ray and proton-initiated airshowers. To our eyes now, the number of simulations seems extremely small (there are never more than 100 simulations at a given energy and sometimes as few as 9), but bearing in mind the computing facilities available at the time, this represented a considerable effort. The proposal they made for a Cherenkov telescope system sounds familiar to us now:

> Two large reflectors of size and optical quality similar to the 10 m detector would be operated in parallel with a lateral spacing of about 100 m. Each reflector would have a matrix of 5 cm phototubes (19 or 37 in each), each tube having a field of view of 0.25° half-angle. The system would be triggered by a coincidence between one or more detectors in each reflector; the pulse heights of all the tube outputs would then be recorded digitally (6-bit accuracy), so that two "images" would be obtained of the angular distribution of the shower light with 0.5° resolution. By analysis of the "images" in the two systems, it will be possible to determine the energy and the angle of incidence of the shower to high precision.

**Citation:** Chadwick, P.M. 35 Years of Ground-Based Gamma-ray Astronomy. *Universe* **2021**, *7*, 432. https://doi.org/10.3390/universe7110432

Academic Editor: Yongquan Xue

Received: 20 September 2021 Accepted: 14 October 2021 Published: 12 November 2021

Although the prospect of using the differences in the airshowers to separate the gamma-rays from the overwhelming background of hadron events had been postulated some time before by Jelley and Porter [4], this represented a considerable step forward from the state of the art in 1978—when even the Whipple reflector had only a single 5-inch (12.5 cm) PMT at its focus. Moreover, at a Royal Society meeting in 1981, a plan for the future was developed [5]. This included an outline of what they described as a 'third generation' experiment that would use both timing and imaging techniques (Figure 1). There is also a list of potential sources in the paper; while they were not so lucky with the Galactic objects (although, of course, the Crab Nebula was included), the short extragalactic target list included Centaurus A, M87 and BL Lac, all of which are now known to be VHE (very high energy) gamma-ray emitters.

**Figure 1.** The 'third generation' gamma-ray telescope array proposed by Turver and Weekes [5]. The suggested energy range was 10 GeV to 10 TeV; each reflector would be 10–15 m in diameter and be separated by 50–100 m.

To build more than one large, sophisticated telescope was beyond any one group's budget. The Whipple team built the first multi-PMT camera, while Durham went on to experiment with the array concept, but without any imaging capability, hence the 4 Dugway telescopes. I suspect (though this is before even my time) that financial constraints played a part in this. One can only speculate as to what might have happened had the two approaches been brought together earlier.

The individual Durham telescopes (Figure 2) each consisted of three reflectors, with a PMT acting as the detector for each reflector. The 3 PMTs were operated in coincidence, as a way of reducing the noise in the system. In particular, this removed from the data stream events produced by muons passing through the detectors. There was no array trigger, but events common to more than one telescope were identified offline using event timestamps. 'Absolute' time was provided by a central crystal oscillator; this oscillator slowly drifted from the correct time, so it required regular resetting from a radio signal. The drift rate was not always the same—presumably due to differences in the ambient temperature—so it was monitored regularly and only reset when required, because the discontinuities caused by the resets were a nuisance when it came to the data analysis. Nonetheless, roughly monthly resets were required, and a large, hand-written piece of card with the characteristics of the clock drift for each reset was pinned to a door in the observatory in Durham for reference when analysing data.
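The offline coincidence search described above can be sketched as follows. This is a minimal illustration only: the function names, the linear drift model and the microsecond window are our assumptions, not the original Durham analysis.

```python
# Hedged sketch: match events from two telescopes by timestamp after
# removing a constant clock drift measured against the radio signal.

def correct_drift(timestamp, drift_rate, reference_time):
    """Remove a constant clock drift (seconds of error per second of
    elapsed time) accumulated since the last reset at reference_time."""
    return timestamp - drift_rate * (timestamp - reference_time)

def find_coincidences(events_a, events_b, window=1e-6):
    """Return pairs of events (one per telescope) whose corrected
    timestamps agree within `window` seconds.  Inputs are sorted
    lists of timestamps; a two-pointer scan keeps this O(n + m)."""
    pairs, j = [], 0
    for t_a in events_a:
        # advance past events in b that are too early to match t_a
        while j < len(events_b) and events_b[j] < t_a - window:
            j += 1
        if j < len(events_b) and abs(events_b[j] - t_a) <= window:
            pairs.append((t_a, events_b[j]))
    return pairs

a = [1.0, 2.0, 3.5]            # telescope A timestamps (s)
b = [0.4, 2.0000004, 3.9]      # telescope B timestamps (s)
print(find_coincidences(a, b))  # only the ~2.0 s events coincide
```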

**Figure 2.** One of the Durham Mark I telescopes situated at Dugway Proving Ground, Utah in the USA.

The Mark I telescope mirrors were army-surplus searchlight mirrors, much like the mirror used by Jelley and Galbraith for the first atmospheric Cherenkov detector [6], but larger (1.5 m diameter). The optics were not ideal, so they were improved by the use of a secondary Cassegrain mirror system—dual-mirror Cherenkov telescopes are not so new after all. The Mark II telescope departed from this design; there were still three reflectors, but each consisted of seven custom-built mirrors of machined and polished aluminium with a more suitable 250 cm focal length. A 3-inch (7.5 cm) PMT was placed at the focus of each reflector.

### *2.1. Telescopes Everywhere*

In August 1986, a NATO Advanced Research Workshop devoted to VHE Gamma Ray Astronomy was held in Durham. This provides a useful survey of the field at the time, and I briefly consider the science results in Section 2.3. There were many Cherenkov systems dotted around the world; some were similar to Durham's, with multiple individual reflectors on a single mount, such as the Haleakala telescope in Hawaii, with its 6 reflectors, and the array of 3 triple-reflector telescopes at Potchefstroom in South Africa. In Pachmarhi in India, there were 18 individual telescopes, 10 with 0.9 m diameter mirrors and 8 with 1.5 m diameter mirrors, each on its own mount. The largest single array in terms of mirror area was at Themis in France, where there were 7 telescopes, each 7 m in diameter. The University of Adelaide had telescopes both at White Sands (3 single-reflector telescopes of 5 m diameter) and a triple-reflector telescope in Woomera. Finally, the Whipple Observatory was in the process of adding a second telescope to the first to create HERCULES [7]. The ideas that HERCULES was designed to exploit would eventually have profound effects.

### *2.2. The Durham Mark III Telescope*

The workshop in 1986 marked, for Durham, the end of the construction phase of the Mark III telescope. Ted Turver and Keith Orford had recognised that the southern hemisphere would be, in all likelihood, a good hunting ground for gamma-ray astronomy, so the telescope was built on the old Sydney University Giant Airshower Recorder (SUGAR) site, near the small town of Narrabri in Australia. Like the Mark II telescope, the Mark III had three reflectors consisting of multiple individual mirrors on a single mount (Figure 3). In this case, each of the three 'cameras' consisted of seven 50 mm diameter PMTs arranged in a hexagonal pattern. As before, the time reference was provided by a local oscillator cross-checked with a radio signal. However, by now, we were using a rubidium oscillator (seen in Figure 4), which was much better than the crystal that had been used in Dugway. It required resetting to the Royal Australian Navy signal only rarely.

Details of the Mark III can be found in [8]; I will just mention a few important or unusual features here.

**Figure 3.** The Durham Mark III telescope under construction in Durham, with about 75% of the mirrors in situ. Note the snow on the ground! It is also possible to see the edge of the Mark II telescope at the bottom right, which was rebuilt in Durham for test purposes.

**Figure 4.** The Durham Mark III telescope control room. While the main DAQ was performed by a Motorola 68000 computer situated under the console, the system's interfaces were all BBC microcomputers. These were remarkable (and remarkably cheap) computers for their time. The copper box on top of the console contains the rubidium oscillator used for timing.

### 2.2.1. Automatic Gain Control

Before imaging existed, there were a number of observing techniques employed in the hope of detecting a signal. Tracking an object of interest was obviously possible, but did not provide a good means of detecting a source that did not have a time-varying flux. The simplest technique was 'drift scanning', in which the telescope was kept at a fixed position, and the object of interest was allowed to pass through the telescope field-of-view. A source would then be identified by the rise in count-rate as it passed through the field-of-view. This was the only method that could be used for extended objects, such as the Galactic plane, in small field-of-view instruments. The final method, and the one most often used with the Mark III, was 'chopping'. Here, the central PMT and the off-axis PMT in the same horizontal plane were alternately pointed at the source—in the Mark III's case, the ON/OFF switch was made every 15 min. This allowed for the study of both constant and time-varying objects.

The problem with all these techniques was that PMT gain could change by as much as 10%, going from dark to bright fields, resulting in a change in count-rate of a few percent. This was of the same order as an (optimistically!) expected signal. The answer was to stabilise the PMT gain by using the light from a green LED (blue LEDs were not available in the 1980s) embedded into a perspex ring placed around the PMT entrance window, thus distributing light across the PMT. The LED current was controlled via a feedback loop which kept the anode current constant at the 1% level. This system, known by us in Durham as automatic gain control, and by the Whipple folks as padding lamps, effectively removed short-term variations caused by changes in night-sky background or atmospheric conditions. The drawback was that this introduced extra noise into the system, although a suitable coincidence requirement mitigated this.

### 2.2.2. Aluminium Surface, Honeycomb Mirrors

The Mark III telescope was equipped with lightweight mirrors, made using a construction technique based on that of the antenna sections of the UK/NL millimetre-wave telescope which the staff at the Rutherford Appleton Laboratory had built. The reflective material, made by Alanod, was (and still is) designed for home interiors and lighting, and now increasingly for solar power. It wasn't clear how it would perform outside for our purposes, so samples were sent to Trevor Weekes at Whipple to put in the test system there. A letter with the results (no e-mails then) came back some time later with the comment "It's very good—what is it?". This material was used for the 120 mirrors, each 60 cm in diameter, needed for the Mark III telescope.

Each mirror surface was formed around a mould under vacuum and bonded to aluminium honeycomb (used for aircraft construction) which had been crushed to approximately the correct profile. The back of this was bonded to a flat backplate and the whole mirror was encircled by an aluminium ring to provide structural integrity. The reflectance was reasonably good—over 75% between 300 and 500 nm—and the image of a point source was around 10 mm in diameter, which corresponded to about 0.2°, more than adequate for a telescope in which the field-of-view of each PMT was 1°.
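As a quick small-angle check of those numbers (assuming a focal length of about 250 cm, the value quoted earlier for the Mark II mirrors; the exact Mark III focal length is not stated here):

```python
# Small-angle approximation: angular spot size = spot diameter / focal length.
import math

spot_mm = 10.0      # image of a point source, as quoted in the text
focal_mm = 2500.0   # assumed ~250 cm focal length
angle_deg = math.degrees(spot_mm / focal_mm)
print(round(angle_deg, 2))  # ~0.23 deg, consistent with "about 0.2 deg"
```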

These mirrors were cheap, lightweight, and turned out to be durable (particularly when tools were inadvertently dropped on them). However, one drawback was that on cold, damp, winter nights, condensation would form on the mirrors. A number of approaches were tried to obviate this, including heating the mirrors, which would have needed 120 kW for the whole telescope, and was quickly abandoned. The best approach was to spray the mirrors before observing with a solution of what was called 'high quality wetting agent' in any papers on the subject. It was, in fact, dishwasher rinse aid, purchased in quantity from the supermarket in Narrabri. Heaven knows how their stock control system coped with the apparently huge fluctuation in the washing of dishes in the area between summer and winter. If it was particularly cold and damp, condensation would start to form anyway, happily usually just as observations were finishing for the night. The result the next morning was sparkling clean mirrors.

At the time, it was thought that the condensation was due to the honeycomb structure of the mirrors. We now know that the main reason for condensation on mirrors is high emissivity of the reflective surface in the infrared, which causes them to cool rapidly when pointed to the cold night sky, so that when the relative humidity is high, the mirror temperature is soon below the dewpoint [9]. A similar composite structure employing aluminium-coated glass reflective surfaces has, of course, proved rather successful.

### 2.2.3. Signal Enhancement

Although imaging was not possible with the Mark III telescope, the hexagonal geometry of the detector package did allow for some basic background reduction to be attempted. The assumption was that all the gamma-ray events from the object being tracked would be contained within the 1◦ field-of-view of the central PMT. The centres of the outer PMTs were 2◦ from the middle of the central PMT; thus, they were used as a 'guard ring', and any events which triggered one or more off-source channels as well as an on-source channel would be rejected. This approach could have a software trigger added to it, which specified the percentage of the on-source signal that should be detected off-source.

In hindsight, this was too crude to make an appreciable difference to the signal-to-noise—but imaging was in its infancy at the time, and it was a good try.

### *2.3. Gamma-ray Sources (or Not)*

I have already alluded to the NATO Advanced Research Workshop that was held in 1986. Since detecting a constant source was a considerable challenge without imaging, there was a great deal of concentration on variable sources. At that stage, nobody had detected an active galactic nucleus, so efforts were concentrated on variable objects in the Galaxy. This meant pulsars and binary systems containing neutron stars.

Back in 1986, the main source of excitement was Hercules X-1. An X-ray binary, this is known to contain a 1.24 s pulsar and to show cyclotron lines, indicative of a strong magnetic field. The first report of gamma-ray emission from Her X-1 came from Durham in 1984 [10]. At the 1986 meeting, the Durham, Whipple, and Haleakala groups all reported the detection of pulsed emission from the object [11–13]. An episode of emission in April 1984 was observed simultaneously with both the Dugway and Whipple telescopes, which independently measured the same pulse period [11]. Most of these reports translated into journal papers, and indeed there were many further reports in the 1980s [14–17], including a report from the Pachmarhi group of a strong burst of emission from the object [18]. Other binary systems came along too, sometimes without confirmation by more than one telescope, sometimes with: SMC X-1, Vela X-1, Cen X-3, LMC X-4, 4U0115+63,... it is quite a list [19]. The emission was generally episodic in nature and sometimes pulsed. There was also a clutch of upper limits.

The most intriguing object at the time was Cygnus X-3, first detected in gamma-rays in the 1970s with the Crimean Observatory telescopes [20,21]. This was followed by a confirmation from the Whipple Observatory [22], from the solar energy facility at Edwards Air Force Base [23] and from the Dugway telescopes [24]. All these observations seemed to show the object's characteristic 4.8 h periodicity, although the exact time of the emission within the assumed orbit was not necessarily consistent. Most controversial were Durham's claims of a 12.6 ms pulsar in the system, first seen in the data from the Dugway telescopes [25,26], and later from the Mark IV telescope operating on La Palma [27]. There were several apparent confirmations of this result, but on closer inspection, most did not stand up [28–30]. There was much discussion about the statistical approach taken, both for and against [31,32].

We are jumping ahead here, but none of these apparent signals were confirmed once it was possible to identify gamma-ray-induced images reliably. So what was going on? Was everyone slightly crazy? Maybe, but it seems to me more likely that this was a case of a series of marginally significant apparent signals reinforcing one another. Although in the end we did not learn very much from the observations, Hillas [2] pointed out that it was the Cygnus X-3 controversy in particular that kept ground-based gamma-ray astronomy alive and spurred on the development of Cherenkov telescopes and particle detector arrays. Cygnus X-3 is a known *Fermi*-LAT source, primarily detected in outburst [33], and perhaps in the near future we will genuinely detect gamma-rays from the object with ground-based telescopes.

To go back to the aforementioned HERCULES detector, the intention was to add another 10 m class telescope to sit alongside the Whipple telescope and two high-resolution cameras (which then meant a pixel spacing of 0.25◦) [7]. Hillas had already shown that images produced by gamma-ray showers could be distinguished from hadron-induced showers [34]. (In his modest way, Hillas never published this fundamentally important work in a refereed journal; with 233 citations<sup>4</sup> and counting, it must be the most-cited cosmic ray conference paper in history.) The preamble to the description of HERCULES states:

Despite its obvious advantages, these ground-based techniques have not been developed to their full potential; the total investment in all such experiments on five continents since the early sixties amounts to only a few million dollars, a small percentage of the cost of GRO<sup>5</sup> (which included EGRET), DUMAND<sup>6</sup> or a major experiment in high energy physics.

Although the second telescope did not go quite to plan, with the publication of the Whipple team's ground-breaking detection of the Crab using the imaging atmospheric Cherenkov technique in 1989 [35], everything started to look different, and the prospects for more investment looked somewhat brighter. The imaging atmospheric Cherenkov telescope (IACT) had come of age.

### **3. The 1990s: Towards a Major Atmospheric Cherenkov Detector**

It has been said for many years that there is the Crab, and then there is the rest of astronomy. The worry was that the rest of astronomy did not exist in very high-energy (VHE) gamma-rays; for a couple of years the catalogue seemed to consist of only the Crab Nebula. These worries were largely dispelled by the second object detected using the imaging technique: the blazar Markarian 421 [36]. This was particularly exciting, because it represented something new and completely unexpected—an active galaxy, no less, and one which was not detected strongly in the data from the EGRET gamma-ray telescope on board the Compton Gamma-Ray Observatory. This challenged the almost unspoken assumption that whatever was detected from the ground must also be bright at lower energies. The excitement was bolstered by the detection 4 years later of the second AGN with the Whipple telescope, Mrk 501 [37]—an object that was below EGRET's detectability threshold. Here was a whole new scientific area that the ground-based telescopes could exploit. It was convincing. It was time to build some bigger telescopes—but how, exactly?

The best way to achieve the sensitivity, angular resolution and energy resolution that would enable ground-based gamma-ray astronomy to move forward was by no means clear. The options were discussed at length over a series of 6 international meetings entitled 'Towards a Major Atmospheric Cherenkov Detector', which ran from 1992 (in Paris) to 1999 (in Utah). (There was a seventh meeting in Paris some time later, in 2005.)

The starting position was that a single experiment was imminent, and a number of working groups were set up at the first meeting with this in mind: a science working group, a technical working group, and a simulations working group. These groups reported back at the second meeting in Calgary in 1993. There were updates from the various groups around the world: Whipple had just completed a camera upgrade on the 10 m telescope, which included a rotating camera head surrounded by scintillators (for recording the passage of local cosmic rays through the PMTs) [38]; Durham had started stereoscopic imaging with the Mark 3A and 5A telescopes [39]; the Nooitgedacht telescopes in South Africa were moving to a system of 6 'mini-telescopes' [40]; and the HEGRA<sup>7</sup> telescopes destined for La Palma in the Canary Islands were in preparation [41]. There were also some rather heroic experiments described, including GASP<sup>8</sup> at the South Pole [42]. Already by this stage, we can see the beginnings of discussions about silicon-based detectors, too [43].

However, at this point, the world was not ready for a single, major detector, or even a single collaboration. There was not, as yet, any consensus as to what 'the' detector would look like. As Weekes wrote in his postscript to the meeting:

...is it really obvious that the next major advances in ground-based gamma-ray astronomy will have to come with a single large "world" telescope? From my reading of the discussion at the workshop the answer was "no!"; one is bigger but more is better!

For the time being, it was on with more than one approach to the problem.

One option was to use an array of small mirrors spread across a large area. Measuring the arrival times of the Cherenkov light at each detector with sub-nanosecond resolution enabled very good directional information to be obtained, and wavefront sampling made it possible to improve the signal-to-noise ratio by exploiting the fact that the light front from a hadronic shower is much less uniform than that from gamma-rays. This approach was investigated by THEMISTOCLE in France and by PACT in India. THEMISTOCLE<sup>9</sup> consisted of 18 small (0.8 m diameter) parabolic mirrors spread over an area of around 1.7 × 10<sup>5</sup> m<sup>2</sup>. This was used to detect the Crab Nebula, but as the mirrors were small, the threshold was high (3 TeV) and a 6.5*σ* detection of the Crab required 162 h of on-source observations [44]. PACT<sup>10</sup>, operated by the Tata Institute of Fundamental Research in India, did better. Although this also used small (0.9 m) mirrors, they were deployed in 25 clusters of 7, giving a lower threshold of 0.9 TeV. A test observation with half the array produced a 12*σ* detection of the Crab Nebula in 31 h, comparable to the Whipple telescope [45]. A very similar array located at Hanle in the Himalayas, the HAGAR<sup>11</sup> Observatory, has made a number of detections of AGN [46,47].

In 1991, a start was made on the construction of the HEGRA Cherenkov telescopes on La Palma. By 1998, this consisted of 5 telescopes of relatively modest area (∼8.5 m<sup>2</sup>), but importantly, all equipped with imaging cameras, eventually consisting of 271 pixels each. This was a true stereoscopic Cherenkov telescope system, and clearly demonstrated the power of this technique, with its ability to locate gamma-rays to around 0.14◦ and to reject around 90% of hadron-induced images [48]. The telescopes proved their worth with the detection of several new objects, including Cas A and M87 [49,50]; the final array ran successfully until 2002, when other projects began to take precedence.<sup>12</sup>

The Cherenkov array at Themis (CAT) was built on a similar timescale to the HEGRA telescopes, with operations starting in 1996 and ending in 2001 [51]. Although the telescope was small, with a 4.5 m diameter equivalent mirror, it was equipped with a 600-pixel camera. High-resolution spatial information was used to distinguish the gamma-ray events; by comparing the data to a detailed model it was possible to infer the position of the source on the sky, the impact point on the ground and the energy of the gamma-ray, even without stereoscopic information. This was also the first Cherenkov camera to contain integrated readout electronics, as is now the norm.

### *The Durham Mark 6 Telescope*

Meanwhile, Durham looked at combining the imaging technique with a 3-mirror telescope system. Having constructed a small-scale prototype, the Mark 5A<sup>13</sup>, in 1992/3, the eventual result was the Durham Mark 6 telescope, shown in Figure 5 [52].

**Figure 5.** The Durham Mark 6 telescope in Narrabri, NSW, Australia. The telescope was around 20 m from end-to-end.

The Mark 6 telescope comprised 3 parabolic reflectors on a single mount, once again consisting of a honeycomb structure with an anodised aluminium skin, but this time formed into triangular sectors, so that a continuous reflective surface was created. These were too large for a vacuum chamber, but the relatively small sagitta of the mirrors, mostly in one direction, meant that it was possible simply to stretch the surface over the mould. However, we had probably reached the limit of the possible image quality with the anodised aluminium skin. Although malleable and durable, with excellent reflectance, there is a fundamental issue with the material that relates to the way it is manufactured. Because it is rolled, the material has an inherent directionality, and hence so does the reflected light—the result was considerable diffuse reflectance in the direction in which the underlying aluminium had been rolled. Some batches were better than others, but the manufacturers did not know why. We seriously considered building aluminium-surface mirrors for H.E.S.S. at one point, but the company was not able to pursue any research into the reasons for the variations in quality without considerable financial input.

The camera at the focus of the central mirror represented Durham's first serious imaging camera (Figure 6), and consisted of 91 PMTs 2.5 cm in diameter with a surrounding ring of 18 PMTs of 5 cm diameter. The flanking dishes had simpler cameras at their foci, made of 19 close-packed hexagonal PMTs, which were 5.5 cm from flat-to-flat. These PMTs had come free of charge from a medical device manufacturer as part of a huge job lot. Most of them were unused, either falling slightly out of specification, or having become parted from their test data (no manufacturer of medical equipment can afford not to have a full audit trail). This was very useful, as it enabled the best PMTs from the batch to be selected and used for the telescopes. My job, as it had been in 1984, was to test the PMTs and construct the cameras—there was a lot of testing to be done, and I spent many hours sitting in our underground 'bunker', as it was known, in which the university's seismograph had been located at one time. This was very dark and prone to mice, but at least it was warm; the central heating pipes ran through it.

**Figure 6.** The Durham Mark 6 telescope's central camera.

The telescope was triggered via a coincidence between the central camera and corresponding PMTs in the lower-resolution left and right cameras. The camera trigger required that the left and right PMTs should be in the same region as at least 2 adjacent PMTs in the central camera. This trigger, devised by the ever-ingenious Lowry McComb, enabled the energy threshold to be lower than would usually be expected for a telescope situated not much above sea level, though I think not as low as the simulations suggested. Although there was no array trigger, once again, events from the Mark 6 could be correlated with those from the other telescopes on site, the 5A and 3A (a slightly upgraded Mark III), using the event timestamps.

Working in Australia was sometimes challenging. We had no onsite technical support, which meant that if you broke something, you were the one who would be fixing it. This made us all into careful and disciplined observers! The breakage which I personally hoped would not happen on my shift was to the main drive shaft to the gearboxes on the telescope drives. Occasionally these would snap, either due to general wear and tear or because a gust of wind had caught the telescope in question. Once the gearbox was off the telescope (no mean feat in itself), the top had to be prised off and the broken driveshaft removed from the drive trains. I can still remember the noise as the cogs in a 196:1 ratio gearbox moved—and the uncomfortable realisation that all of them needed to be put back into place.

Living with the wildlife in the bush was also interesting. We had various snakes, enormous spiders (which liked to live inside the electronics), echidnas, a large goanna, geckos in the house, some fabulous birds (not so fabulous when waking up those who had been observing all night), and many wallabies that were surprisingly easy to walk into at night. The local possum population was fascinated by the telescope mirrors, and we would often wake up in the day to find they had left paw marks on them. I suspect that anyone who has been observing at a remote site has similar stories to tell.

Some useful detections were made with the Mark 6 telescope, particularly PKS 2155-304 [53]. This held the IACT redshift record for a couple of years and, of course, has turned out to provide a lot of science. There was some technical work too; in particular, Keith Orford's simulations of the effects of the geomagnetic field on Cherenkov images grew out of the Mark 6 observations [54]. However, by 2000 the funding had come to a halt and the time had come for us to pack up the telescopes in Australia. For ground-based gamma-ray astronomy in general, things were also starting to move on.

### **4. The 2000s: Opening the Window**

### *4.1. Solar Farm Telescopes*

It was—and indeed still is—very desirable to reduce the energy threshold of Cherenkov telescopes to a few 10s of GeV, in order to provide seamless energy coverage with satellite-based instruments. As so few Cherenkov photons are produced by low-energy showers, the main requirement is to have an exceptionally large mirror area. An attractive (and cheap) option to obtain the required area was to use solar power facilities. Here, large arrays of mirrors (heliostats) tracked the Sun, focusing sunlight onto a single target situated in a tower, thereby producing heat which was used to run a steam turbine. At night, of course, the heliostats were not in use, so they could be turned into large-area Cherenkov telescopes by using them to track objects of interest. This idea was first proposed in 1982 [55], but was difficult to implement due to noise created by the overlapping heliostat images. However, a suitable arrangement of mirrors or Fresnel lenses could be used to improve the optics and focus the light onto PMTs [56]. There were four such adapted arrays that started operation in the 2000s. STACEE<sup>14</sup> in New Mexico ultimately used 64 heliostats, each of area 37 m<sup>2</sup> [57]; CACTUS<sup>15</sup> in California similarly used 64 heliostats, each of area ∼40 m<sup>2</sup> [58]; CELESTE<sup>16</sup> in France eventually used 53 mirrors of area 54 m<sup>2</sup> [59]; and GRAAL<sup>17</sup> in Spain used 63 heliostats, each of area ∼38 m<sup>2</sup> [60] (this last experiment used a slightly different optical configuration to the others).

Adapted solar farm telescopes made several detections of the Crab, Mrk 421 and Mrk 501 [61–64], as well as providing a number of upper limits, including of gamma-ray bursts [65]. Indeed, CELESTE was the first Cherenkov telescope to detect an object below 100 GeV [66]. (A nice review by Smith gives a summary of the various results [67].) However, the optical system required was tricky and did not provide a large field-of-view; it became clear that more conventional, although large, instruments would likely be better for the detection of gamma-rays below 100 GeV. CELESTE was dismantled in 2004, CACTUS ceased observations in 2005, STACEE stopped operations in 2007, and GRAAL shut down at about the same time.

### *4.2. IACT Arrays*

By around 2000, it had become clear that an array of IACTs, all of 10-m class or larger and equipped with high-resolution cameras, would constitute that elusive major atmospheric Cherenkov detector. We are therefore now almost approaching the present day, and I do not propose to give a detailed summary of the next generation instruments H.E.S.S., MAGIC and VERITAS. There are a number of papers giving technical details of the telescopes [68–74], which are all in operation now. Their histories will be written by others in the future.

In addition to the arrays currently in operation, mention should also be made of the CANGAROO<sup>18</sup> telescopes. The first CANGAROO telescope was 3.8 m in diameter and had originally been designed for lunar ranging. Successive upgrades culminated in a 256-pixel camera with a field-of-view of around 3◦ in 1995. This was followed by a 7 m telescope with a 512-pixel camera, and by 2003 there were 4 telescopes of 10 m diameter, each with a 427-pixel camera, dubbed CANGAROO-III [75,76]. The telescopes' mirrors were made of aluminium-coated carbon fibre-reinforced plastic. These were vulnerable to damage in the outdoor environment, and proved to be the Achilles heel of the telescopes. Nonetheless, CANGAROO reported detections of several objects, particularly Galactic objects such as RX J0852.0-4622 [77]. The last reported observations taken with the telescopes were in 2009.

Having closed down the Narrabri site in 2000, and with Ted Turver's retirement happening at about the same time, we in Durham joined the H.E.S.S. Collaboration. The 4 telescopes of H.E.S.S. I, each 12 m in diameter and equipped with 960-pixel cameras (Figure 7), have of course provided a wealth of results over the years. We were part of H.E.S.S. for a lot of that time—until there was one of the intermittent UK funding crises, and our funds ran out. Working in a large collaboration was a new experience, since we had always run our telescopes on our own. It was a particular pleasure not to have to cover all the observing sessions; it had been quite a strain for a group of around 10 people to cover all the dark moon periods over the years. When the telescopes were switched on, one of the first objects to be observed was PKS 2155-304 [78]. Michael Punch sent round an e-mail with the resulting detection and the wry comment that "this might interest our Durham colleagues". It certainly did! I also remember Heinz Völk in a Collaboration Board meeting in (I guess) 2004 commenting that we would have to manage the expectations of our PhD students—there would not be "an object each" as he put it. Shortly afterwards, the first Galactic plane scan results arrived [79], and I think it is fair to say that, just this once, Heinz was proved wrong.

**Figure 7.** One of the H.E.S.S. I telescopes at the array's inauguration in 2004.

The successes of these third-generation telescopes have resulted in a considerable catalogue of objects; indeed, there are now considerably more classes of object detected than there were objects back in 2000. The excellent TeVCat<sup>19</sup>, maintained by Deirdre Horan and Scott Wakely (to whom the whole field owes a debt of gratitude), now lists 243 sources, a figure that nobody would have believed possible back in 1984 when I started. There had been a joke amongst gamma-ray astronomers—I am not sure where it originated—that one photon constituted a detection, two was a spectrum, and three was variability. Now, gamma-ray telescopes provided spectra and variability detection in abundance, and even images, heralded by the H.E.S.S. detection of RX J1713.7-3946 [80].

### **5. 2010 to the Present: May the Fourth Be with You**

Even in 2000, when most telescope arrays were in the final stages of design or the early stages of construction, there were ideas for the 4th generation of Cherenkov telescopes. One of these was 5@5, a proposal to build an array of 5 telescopes at 5 km above sea level, which would give an energy threshold of 5 GeV [81]. This was all part of a lively debate regarding whether it was better to go to low energies to meet—and compete—with satellite-based instruments or to do what satellites could not, and go to higher energy. The answer, of course, was to do both, and so the Cherenkov Telescope Array (CTA) concept began to emerge, with its large, medium and small telescopes covering the range from a few 10s of GeV to 100s of TeV. No doubt there will be much more detail about CTA in this volume, and there are two comprehensive guides to CTA and its scientific objectives available [82,83], so I will confine myself (once again) to a few observations from my perspective.

The arrival of Jim Hinton in the UK in 2006 gave the field a boost, and a number of groups began to coalesce around CTA. There were a few false starts, but by 2012, we had a small amount of funding to get us going, and now there are 5 groups in the UK forming a core CTA team, from Armagh Observatory & Planetarium and the universities of Leicester, Liverpool, Oxford and Durham<sup>20</sup>. In 2021, we held a 2-day meeting about CTA in the UK, which over 90 scientists attended. It was very different from that meeting in 1986. We are no longer primarily discussing what may or may not actually be producing gamma-rays; the scientific implications are taking centre-stage. There is interest from AGN modellers, cosmologists, radio astronomers and particle physicists. Ground-based gamma-ray astronomy has taken its place as one of the many tools which we use to try to understand the Universe.

I have avoided discussing the particle detector arrays, largely because they are not something with which I have been involved up until recently. However, the results from HAWC [84] have shown us the value of such arrays with their ability to view the entire overhead sky day and night, come rain or shine. The move to build a large particle detector array in the southern hemisphere in the shape of the Southern Wide-field Gamma-ray Observatory (SWGO) will be an important complement to CTA [85], but importantly, LHAASO (Large High Altitude Air Shower Observatory) has revealed more, and more energetic, objects than we might have expected, emphasising the importance of such instruments in their own right [86]. Couple these with the neutrino detectors and gravitational wave detectors and CTA, and it is easy to see that there's a very exciting time ahead in astroparticle physics. It's almost enough to make me wish I was starting again. Almost...

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Acknowledgments:** I have encountered so many people over the 30-odd years that I probably shouldn't pick out any names for fear of offending anyone by their omission. However, I am going to venture to mention a few of the people from the early days in Durham who were pivotal to the group's work, but perhaps did not always hit the headlines. Lowry McComb was not only the electronics design mastermind for our telescopes but also a simulations and physics guru, and taught me how to solder (at least passably); our technical staff Sue Hilton, Ken Tindale and Peter Cottle made up cables, welded aluminium, built consoles, kept the Observatory van going and did countless other things over the years; John Dowthwaite taught me a great deal about statistics, as well as the rules of cricket; Ian Kirkman was an exceptional data analyst who taught me how to analyse gamma-ray data; and finally, Steve Rayner's careful and meticulous approach to data-taking and ability to sing Flanders and Swann songs made many an observing trip a success. I hope that I have not forgotten anyone and equally hope that those whom I have will forgive me. Thanks are also due to the current members of the group in Durham for putting up with my muttering about this review (and indeed for putting up with me in general): Atreya Acharyya, Anthony Brown, Max Harvey, Sheridan Lloyd, Abi Peake, Alberto Rosales de Leon, Cameron Rulten, and Patrick Stowell. Finally, I would like to thank Ulisses Barres de Almeida and Michele Doro for asking me to write this review. It has been instructive to look back; I hope the result is what you wanted!

**Conflicts of Interest:** The author declares no conflict of interest.

### **Notes**


### **References**


### *Review* **Technological Novelties of Ground-Based Very High Energy Gamma-Ray Astrophysics with the Imaging Atmospheric Cherenkov Telescopes**

**Razmik Mirzoyan**

Max-Planck-Institute for Physics, 80805 Munich, Germany; razmik.mirzoyan@mpp.mpg.de

**Abstract:** In the past three decades, the ground-based technique of imaging atmospheric Cherenkov telescopes has established itself as a powerful discipline in science. Approximately 250 sources of very high energy gamma rays, of both galactic and extra-galactic origin, have been discovered largely due to this technique. The study of these sources is providing clues to many basic questions in astrophysics, astro-particle physics, the physics of cosmic rays, and cosmology. The currently operational generation of telescopes offers solid performance. Further improvements of this technique have led to the next-generation instrument known as the Cherenkov Telescope Array (CTA). In its final configuration, the sensitivity of CTA will be several times higher than that of the currently best instruments VERITAS, H.E.S.S., and MAGIC. This article is devoted to outlining the technological developments that shaped this technique and led to today's success.

**Keywords:** imaging atmospheric Cherenkov telescope; IACT; IACT technology; very high energy gamma-ray telescope; ground-based gamma-ray astrophysics

### **Citation:** Mirzoyan, R. Technological Novelties of Ground-Based Very High Energy Gamma-Ray Astrophysics with the Imaging Atmospheric Cherenkov Telescopes. *Universe* **2022**, *8*, 219. https://doi.org/10.3390/universe8040219

Academic Editor: Banibrata Mukhopadhyay

Received: 16 February 2022 Accepted: 16 March 2022 Published: 29 March 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

### **1. Introduction**

The classical book of Jelley [1] is a real jewel for researchers interested in Cherenkov radiation. It covers diverse aspects of the bluish emission in great detail. Although 65 years have passed since its first edition, it is still a standard reference for many researchers. Many highly regarded papers have been devoted to the history of Cherenkov emission and its use for ground-based very high energy (VHE) gamma-ray astrophysics [2–7]. For more details, the reader is invited to read the recent and highly interesting book by D. Fegan [8], as well as the article by this author [9].

Some of the above publications show the chronological developments and list the instruments built and operated in different countries. Unlike those publications, this paper aims to highlight the chain of important technological developments which improved the technique and allowed this branch of science to be considered mature and established.

Below, the author goes into the details of the important developments that led us to today's success.

### **2. Discovery of Cherenkov Emission in the Atmosphere**

Galbraith and Jelley fixed a 25 cm diameter parabolic mirror of short focal length inside a dustbin and set a 2-inch PMT at its focus (see Figure 1). In a series of experiments, they detected Cherenkov light flashes from air showers. Their discovery paper of 1953 laid the foundation of the atmospheric Cherenkov light detection technique [10]. Further developments of the latter gave rise to ground-based VHE gamma-ray astrophysics and led to today's powerful branch of astrophysics.

**Figure 1.** The original detector of W. Galbraith and J. V. Jelley described in [10].

### *2.1. First Generation Atmospheric Cherenkov Telescopes*

Here we pose an interesting question. Would the early researchers working in ground-based VHE gamma-ray astronomy half a century ago have dreamed of its future scale and impact as a well-established branch of science?

Before proceeding with this important question, we would like to prepare the reader with some basic information about air showers, atmospheric Cherenkov light emission, the threshold of a telescope, and some other useful information.

### 2.1.1. Extensive Air Showers and the Cherenkov Light Emission

The Earth's atmosphere is constantly bombarded by charged cosmic rays and neutral gamma photons. Charged particles and gammas with energies in excess of several to a few tens of GeV interact with air molecules and trigger avalanche-like events known as extensive air showers (EAS). To illustrate, let us imagine a 1 TeV gamma entering the atmosphere. In the vicinity of an air molecule, the incident photon can be converted into an electron–positron pair: γ → e<sup>−</sup> + e<sup>+</sup>. The pair members have very high energies and will therefore generate further gammas via *bremsstrahlung* in the electric field of the atomic nuclei. The above two-step cycle repeats in multiple rounds. After each cycle, the number of secondary charged particles roughly doubles while the original energy is shared among them. The height at which the number of secondary particles reaches its maximum is called the shower maximum. Typically, the secondary particles have an energy of ~300 MeV at the shower maximum. When the secondary particle energy decreases below the critical energy of ~84 MeV, the shower extinction phase starts.
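The doubling picture above is essentially the classic Heitler toy model, which can be sketched in a few lines. This is an illustrative aside, not part of the original paper; note that the toy model overestimates the true particle count at maximum, since it ignores ionization losses and the attenuation of the photon component:

```python
import math

E0 = 1e6   # primary gamma energy in MeV (1 TeV)
Ec = 84.0  # critical energy in air, MeV (value quoted in the text)

# Heitler toy model: the number of shower particles doubles every
# splitting length until the energy per particle falls to Ec.
n_generations = math.log2(E0 / Ec)  # doubling steps to reach shower maximum
N_max = E0 / Ec                     # particle number at shower maximum

print(f"doubling generations to maximum: {n_generations:.1f}")
print(f"particles at maximum (toy model): {N_max:.0f}")
```

The toy model gives roughly 10<sup>4</sup> particles for 1 TeV, about an order of magnitude above the ~700 e<sup>−</sup>e<sup>+</sup> pairs quoted later in the text, which illustrates why full Monte Carlo simulations were needed from early on.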

A 2 TeV gamma can produce one more generation of secondary particles than a 1 TeV one due to its twice higher energy. One can therefore assume an approximately linear relationship between the incident energy and the number of secondary particles. The secondary particles move faster than light in the atmosphere; note that this is possible because the refractive index of air is larger than that of a vacuum, e.g., 1.00029 at sea level. Such particles produce Cherenkov light. The opening angle of the Cherenkov light cone *θ* can be calculated from the simple relationship cos *θ* = 1/(*nβ*), where *n* is the refractive index of air at the given altitude and *β* = *v*/*c* is the relative velocity of the particle (*c* is the speed of light). A 1 TeV gamma photon will produce ~130 Cherenkov photons per m<sup>2</sup> in the wavelength range 300–600 nm, up to the so-called "hump" at ~130 m from the shower core (for an observation altitude of ~2 km above sea level). A typical electromagnetic shower has a time structure of 5–10 ns, as light and particles travel together (the latter being slightly faster), similar to a pancake that is a few meters thick.
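For concreteness, the relation cos *θ* = 1/(*nβ*) can be evaluated numerically; the sea-level value n = 1.00029 below is the one quoted in the text, and β ≈ 1 is assumed for an ultra-relativistic particle:

```python
import math

def cherenkov_angle_deg(n, beta=1.0):
    """Cherenkov cone opening angle from cos(theta) = 1/(n*beta)."""
    return math.degrees(math.acos(1.0 / (n * beta)))

# Ultra-relativistic particle (beta ~ 1) at sea level, n = 1.00029
theta = cherenkov_angle_deg(1.00029)
print(f"theta = {theta:.2f} deg")  # ~1.4 deg; smaller at higher altitudes, where n is lower
```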

This hump is the result of a kind of self-focusing effect in Cherenkov light production; the distance from the shower core at which a Cherenkov photon hits the ground is roughly the product of the Cherenkov angle and its production height. While the Cherenkov emission angle continues to increase as particles penetrate deeper into the atmosphere (due to the increasing density and refractive index of air), the product of the angle and the height produces a focusing effect at 60–130 m from the shower core, see Figure 2a.

**Figure 2.** (**a**) Lateral distribution of Cherenkov light emission from a single relativistic muon traversing the atmosphere. The observation height is 600 m a.s.l. One can see the light focusing effect (the "hump") on the ground, at 70–130 m distance from the core. Image courtesy V. Samoliga. (**b**) The lateral distribution of Cherenkov light from a 100 GeV gamma and 400 GeV proton. One can see the "hump" at about 125 m distance from the shower axis (observation height 2.2 km a.s.l.). The "core particles" and the "halo particles" are meant to show the impact parameter range where the light arrives from high and low altitudes in the shower development, respectively.

During the development of an air shower, the e<sup>−</sup>e<sup>+</sup> pairs emit Cherenkov light within a cone of opening angle ~0.2–1.2◦. A 1 TeV gamma-ray shower is estimated to produce ~700 e<sup>−</sup>e<sup>+</sup> pairs at the shower maximum. It may seem that the emitted Cherenkov light should be concentrated within a circle of a radius equal to the distance to the hump. In reality, the e<sup>−</sup> and e<sup>+</sup> scatter multiple times along their paths, so the scattered light smears the light distribution on the ground and can form a large angle with respect to the shower axis, sending photons well beyond the hump. The multiple scattering angle is inversely proportional to the particle energy, i.e., the lower the energy, the larger the scattering angle. A simple estimate shows that the Cherenkov and multiple scattering angles (one-sigma values) are both about 0.6–0.7◦ for an e<sup>−</sup>e<sup>+</sup> energy of ~1 GeV. For lower energies, i.e., for lower altitudes in the shower development, multiple scattering becomes the dominant mechanism spreading Cherenkov light on the ground.
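The multiple-scattering estimate above can be reproduced with the standard Highland/PDG formula; the choice of one radiation length of traversed material in the example is an illustrative assumption, not a value from the text:

```python
import math

def ms_angle_deg(p_mev, x_over_X0=1.0, beta=1.0, charge=1):
    """RMS projected multiple-scattering angle (Highland/PDG formula)."""
    t = x_over_X0  # traversed thickness in radiation lengths (assumed, not from the text)
    theta0 = (13.6 / (beta * p_mev)) * abs(charge) * math.sqrt(t) * (1 + 0.038 * math.log(t))
    return math.degrees(theta0)

# ~1 GeV electron after one radiation length of air
print(f"{ms_angle_deg(1000.0):.2f} deg")  # ~0.8 deg, comparable to the quoted 0.6-0.7 deg
```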

A hadron-initiated shower behaves similarly to one produced by a gamma, since in the hadron interaction, along with *π*<sup>+</sup> and *π*<sup>−</sup>, *π*<sup>0</sup> is also born, which immediately decays into two gammas: *π*<sup>0</sup> → 2*γ*. These two gammas initiate electromagnetic cascades as described above. The difference is that the charged *π*<sup>+</sup> and *π*<sup>−</sup> also decay, and the induced shower shows hadron interaction signatures comprising hadrons, muons, neutrons, neutrinos, etc., superimposed on the electromagnetic showers. The differences between hadron and electromagnetic showers show up in their shapes and also in their lateral distributions of Cherenkov light density (see Figure 2b). One can see that, unlike protons, gamma rays preferentially produce triggers in a measuring instrument for the impact parameter range ≤130 m. Hadron showers are much wider (due to the transverse momentum of the three-particle decay), longer, and more chaotic in structure compared to gamma showers. Thus, a snapshot of both types of showers can help differentiate them. That is exactly what the contemporary imaging atmospheric Cherenkov telescope does; it can easily suppress hadrons by a few orders of magnitude while selecting gammas with high efficiency.

An imaging Cherenkov telescope of ~10 m<sup>2</sup> mirror area will collect on average 130 ph/m<sup>2</sup> × 10 m<sup>2</sup> = 1300 Cherenkov photons from a 1 TeV gamma-ray EAS. Assuming a ~10% conversion efficiency from Cherenkov photons to photoelectrons (ph.e.), one can expect to obtain 130 ph.e. for an image. Such an image can be well parameterized and used for efficient image selection.

Researchers have long wondered whether and how, for example, 50 GeV gamma-ray showers can be observed. Please note that such a shower provides only five Cherenkov photons per square meter area within the hump.
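The photon-budget arithmetic above, using the round numbers quoted in the text (130 ph/m<sup>2</sup> at 1 TeV, 5 ph/m<sup>2</sup> at 50 GeV, 10 m<sup>2</sup> mirror, 10% conversion efficiency), makes the low-energy challenge explicit:

```python
def image_photoelectrons(photon_density_per_m2, mirror_area_m2=10.0, efficiency=0.10):
    """Expected image size in photoelectrons for a given Cherenkov photon density."""
    return photon_density_per_m2 * mirror_area_m2 * efficiency

print(image_photoelectrons(130.0))  # 1 TeV gamma: ~130 ph.e., a well-parameterizable image
print(image_photoelectrons(5.0))    # 50 GeV gamma: only ~5 ph.e., far too few for imaging
```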

The formulation of this problem was due to the fact that many interesting phenomena were expected in the energy range below 300 GeV.

In general, measuring the spectrum of a particular type of gamma-ray source over a potentially broad energy range and bridging it with the spectrum of lower-energy satellite missions could have provided a wealth of information.

Due to lower absorption by the extragalactic background light (EBL), the universe becomes increasingly transparent at lower gamma-ray energies, i.e., signals from distant active galactic nuclei (AGN), gamma-ray bursts (GRB) and other possible remote transient events were expected to become visible, see Section 6. Furthermore, the weak signals from pulsars were expected to become visible above a very low threshold of ~10–20 GeV.

For a long time, up until the mid-1990s, the research community believed that measuring such low-energy events required the operation of expensive telescope facilities with unrealistically large mirror sizes, or converted solar power plants with several thousand square meters of mirror area. This is discussed in more detail later in Sections 6 and 7.

### 2.1.2. Chudakov's Telescopes in Crimea

Starting in 1960, A. Chudakov and colleagues built the first system of 12 telescopes, with a total mirror area of 21 m<sup>2</sup>, in Crimea, near the shore of the Black Sea [11]. They intended to find out whether one could (relatively easily) measure a signal from some "prominent" celestial source candidates. These telescopes were simple parabolic searchlight mirrors of F/D = 0.6 m/1.55 m design with single 5 cm diameter PMTs at their foci. A diaphragm provided an aperture of 1.75◦ FWHM. To reduce the large aberrations and to improve the signal timing, Chudakov designed a special lens set in front of the PMTs. Each of the four independent mechanical mounts, set next to each other, carried three rigidly connected telescopes, see Figure 3. The pointing precision was 0.2◦ and 0.4◦ in elevation and azimuth, respectively. A coincidence system between these four telescopes had an advanced feature: a simple rate-stabilizing electronic circuit for counteracting the variations of the light of the night sky (LoNS).

**Figure 3.** The telescope of A. Chudakov and colleagues in Crimea near the Black Sea shore.

The physical rate of showers was in the range of 3 Hz and the gamma-ray energy threshold was estimated to be ~3.4 TeV.

Instead of continuous tracking, the so-called drift-scan mode was used for observations. These were performed by pointing in advance at the expected source position and waiting for the source to drift through the field of view. Through repeated scans, one could collect a reasonable amount of data during one night.

The list of observed sources is impressive given the limited knowledge of X-ray sources at the time. The educated guess was that radio sources should be good candidates to observe.

They observed the Crab Nebula, Cygnus A, Cassiopeia A, Virgo A, Perseus A, and Sagittarius A. Moreover, the galaxy clusters Ursa Major II, Corona Borealis, Bootes, and Coma Berenices were also observed.

The experiment was carried out for the duration of about 4 years.

Unfortunately, they did not succeed in measuring a signal from any of the observed sources and, instead, derived only upper limits. For example, in the case of the Crab Nebula, the derived upper limit was about 20 times higher compared to its currently measured flux. In principle, they could have discovered the gamma-ray emission from the Crab Nebula already ~60 years ago, but only at the expense of unreasonably long observations.

The list of sources above demonstrates that the researchers had a very smart observational program, even by today's standards.

It is not uncommon in the history of science that a chain of incremental achievements improves the technique and the technology step by step, thus paving the road towards developing a new branch of science. Not surprisingly, this was initially performed by a handful of research groups scattered around the world. Perseverance may pay off and lead to the needed technology and technique.

At the ICRC conference in Moscow in 1959, G. Cocconi claimed that, with a cosmic-ray instrument of ~1◦ resolution operating in the ~TeV energy range, one would measure a gamma-ray signal a factor of one thousand above the background from the direction of the Crab Nebula [12]. Unfortunately, this could not be confirmed by Chudakov and his crew, and it had a sobering effect.

Interestingly, in the long run such overoptimistic claims turn out to be wrong. Nevertheless, they play an important role in sparking curiosity and generating activity in a particular field.

Chudakov's installation probed the potential of a large (~20 m<sup>2</sup>) array of first-generation non-imaging telescopes operating in narrow time coincidence. The hope of easy detection of cosmic gamma-ray sources turned out to be elusive, or at least more complex than originally anticipated.

A. Chudakov became a famous and influential researcher at the international level. Colleagues close to him used to tell that, after the non-detections of sources by his instrument, he remained really skeptical about the prospects of ground-based gamma-ray astronomy for a long time.

The distinguishing technical features of Chudakov's instrument were (a) the relatively large mirror area, which is essential for achieving a low detection threshold; (b) the narrow time coincidence between somewhat separated telescopes, aiming to counteract the effect of the LoNS on signal detection; and (c) the diaphragm diameter chosen to maximize the signal-to-noise ratio.

### 2.1.3. Other First-Generation Telescopes

In the following years many smaller-scale experiments were built and operated. Several examples are listed below.

A new telescope of a very fast optical and electronic design, optimized for pulsar studies, was built in Glencullen valley, not far from Dublin, in 1967–1968. It was based on four 0.9 m diameter F/2 mirrors (total area ~2.5 m<sup>2</sup>) and fast PMTs put into coincidence with a gate width of 3.5 ns. The intention was to increase the signal-to-noise ratio by reducing the integrated charge from the LoNS. Later, that telescope was moved first to Harwell and then to Malta, where it started observations in early 1969. As a result, flux upper limits for several pulsars were set [13].

In another development, two 6.5 m diameter reflectors were set at a distance of ~120 m for stellar interferometry in Narrabri, Australia. In 1968, the researchers carried out observations of the Crab Nebula and two pulsars. No signal was measured [14].

Grindlay and colleagues made a step forward by using the "double beam" observation technique. Each telescope had two PMTs. While the main PMTs were inclined towards one another at 0.4◦ for observing the shower maximum region from a selected source direction, the other two PMTs were inclined at 1.3◦ towards each other for measuring a signal from the so-called "muon core" of the showers. Though obscure and mysterious from today's point of view, the authors claimed that in this way they could halve the hadron background [15]. That was not much, but the principle was interesting; upgrades and modifications of it would be widely used in the future. The author still wonders whether one can regard that effort as the first element of a form of two-pixel imaging.

The Haleakala telescope in Hawaii included six spherical, aluminum-coated, coplanar glass mirrors of f/1 optics with 1.5 m focus, set on a single equatorial mount. Two independent sets of 18 PMTs in the focal planes observed separate areas of the sky, sharing the same set of mirrors. Each tube within a set collected light from a different segment of the total mirror area. The PMTs in the focal plane were operated in a fast single-ph.e. detection mode [16]. When several tubes produced single-ph.e. signals in a tight coincidence window, a trigger was produced. Later, it turned out that this detection technique suffered from exhaustive trigger rates from local muons.

The Nooitgedacht MK I telescope near Potchefstroom, in the Republic of South Africa, consisted of four equatorially mounted mini-telescopes (MTs), set 55 m apart. An MT contained three light detectors, each consisting of a 1.5 m diameter, f/0.43 rhodium-coated mirror focusing light onto an XP2020Q PMT. Later, these were modified into the MK II telescope, which consisted of six MTs, set 225–322 m apart from each other. A single MT consisted of three mirrors, forming an f/1 optic with a focal length of 1.94 m. A PMT at its prime focus measured the ON-source region, whereas a Cassegrain ring mirror in the focal plane around the telescope's axis reflected the light from the 4.5◦ OFF-source region to a PMT installed at the mirror level. Thus, one could simultaneously measure the ON- and OFF-source regions. For details, please see [17] and the references therein.

Another example is the first-generation telescopes of the Durham group, arranged similarly to the Mercedes logo [18].

Please note that, due to the flatter "plateau" of the lateral distribution of Cherenkov light from gammas compared to hadrons, arrays of telescopes separated by 50–100 m could preferentially trigger on gammas at the hardware level (see Figure 2b).

The THEMISTOCLE array followed the approach of building a widely spaced array of 18 tracking telescopes in the south of France, each carrying a parabolic mirror 0.8 m in size. These provided a shower collection area in excess of 10<sup>5</sup> m<sup>2</sup>. This installation measured a signal from the Crab Nebula but, due to the small area of the individual mirrors and the wide spacing, it had a high threshold of ~3 TeV and a low sensitivity [19].

The PACT experiment in India was of a similar design to THEMISTOCLE, but with higher sensitivity due to the larger size of both the mirror area of individual stations and the cluster of distributed stations [20].

The HAGAR telescopes [21] are located in Hanle, in the Himalayas, at 4500 m a.s.l., the same location as the 21 m diameter MACE imaging Cherenkov telescope [22].

HAGAR includes seven telescopes located at the center and corners of a hexagon inscribed in a circle of 50 m radius. The total reflector area of all seven telescopes is about 31 m<sup>2</sup>. Each telescope consists of seven parabolic glass mirrors of 0.9 m diameter. Results on the detection of the Crab Nebula with this instrument were recently published [21].

The AIROBICC instrument of HEGRA on the Canary island of La Palma operated for almost ten years, starting in 1992. It comprised an array of 100 optical detector stations, each based on an 8-inch PMT coupled to a Winston-cone-type light concentrator. These stations were placed next to the particle (scintillator) detector array of HEGRA for simultaneously measuring the Cherenkov light and particle densities from air showers. AIROBICC stations measured the integrated Cherenkov light in a wide field of view of ~1 sr. The fast timing between the detector stations allowed the incoming direction of showers to be measured with high precision [23]. Due to the LoNS integration over a wide field of view, the threshold of AIROBICC for gamma rays was estimated to be a few tens of TeV. The relatively small size of the array, coupled with the high threshold, did not enable significant gamma-ray source detections.

Except for the 10 m diameter Whipple telescope, which played a central role in giving birth to gamma-ray astronomy (this will be discussed in some detail below), no major technical improvements were achieved until the 1980s. In Figure 4, one can see a photo of the 10 m diameter Whipple telescope.

As a rule, researchers used 0.6–1.5 m diameter parabolic military searchlight mirrors of F/0.5 optics, which suffered from poor angular resolution. They used coincidences between several such mirror elements, which enabled the lowering of the energy threshold of the instrument. Further, some of them used a number of PMTs for simultaneously monitoring the source and the background regions.

Most of these developments are reflected in the proceedings of the workshop series "Towards a Major Atmospheric Cherenkov Detector", as well as in the proceedings of the international cosmic ray conferences.

**Figure 4.** Photo of the 10 m diameter pioneering Whipple IACT on Mount Hopkins in Arizona.

### 2.1.4. A Short Summary on the First-Generation Telescopes

Please note that most of those experiments were based on counting the number of excess events between the ON-source and selected OFF-source regions. The fluctuations of the LoNS play an important role, because an instantaneous positive fluctuation can add up with a genuine small signal from a given shower and produce a trigger. Because of the natural differences in LoNS intensity between the ON- and OFF-source regions, such brightness differences can produce a positive or negative excess. Early on, researchers learned to counteract the brightness differences in the ON and OFF regions by inserting a weak light source next to the PMT, so that the summed light from it and the LoNS stayed constant to a few percent precision [11]. However, whether the achieved precision was sufficient for long observations remains a sensitive question.

One should keep in mind that these were (are) non-imaging telescopes, whose output signal depends solely on the count rate through the diaphragm set in front of the PMT.

The technique used in the first-generation telescopes has improved over time. The researchers understood the main issues and made important developments, which paved the road for the next generation imaging telescopes.

The majority of the first-generation telescope installations from time to time reported "signals" measured from some sources, including pulsars. While one could imagine that some upper-atmosphere light phenomena, unknown at the time, may occasionally have produced a short, sporadic excess, in the case of pulsars one of the probable reasons was the improper statistical and systematic treatment of the data (see, for example, [15]).

Unfortunately, those reports could not withstand serious criticism from today's point of view. They can be understood by considering the strong desire of the small, enthusiastic groups of researchers to over-interpret small, insignificant and/or sporadic excesses from observations as indications of the sought-after signal from sources.

It seems that, over time, an unwritten rule of "source code" (analogous to "dress code") encouraged researchers to report source detections at conferences.

Such source reports culminated by the end of the 1980s (see [24]).

The net impact of these questionable reports proved important, as it allowed the discipline to be kept alive in its infancy.

### **3. EAS Images in Cherenkov Light Obtained Using an Image Intensifier**

The measurement of air shower image shapes using an image intensifier by Hill and Porter in 1960 [25] can be considered a real milestone. They coupled a 25 cm diameter wide-angle Schmidt telescope to an image intensifier and photographed the images of air showers. At some point, they understood the potential of ground-based gamma-ray astronomy. They noticed that the elliptically shaped shower images were offset from the source direction, and understood that the image shape depends on the impact point of the shower axis. From here it was a stone's throw to the idea that, by using two telescopes separated by some distance, one can derive the incoming direction of the parent particle as well as largely suppress the background.

This important aspect was demonstrated by the stereoscopic systems of imaging air Cherenkov telescopes some 30 years later. It was the HEGRA collaboration that demonstrated the advantages of the so-called "stereo" observations, see more below. Further, the Crimean GT-48 telescopes pursued a similar goal, but their results remained more than modest due to the separation of the telescopes by only ~20 m. Because of this, the telescopes measured nearly the same images and the image parameters were strongly correlated, see more below.

In the summer of 1985, during one of his visits to Crimea, the author asked the GT-48 group leader Arnold Stepanian why he had put the telescopes so close to each other. He answered: "show me a single installation that has a higher count rate of air showers than mine". Thus, this complex array also served to provide a low detection threshold, which incidentally was ~0.9 TeV according to their simulations.

### **4. The First Monte Carlo Simulations and the "Stereo" Observations**

Victor Zatsepin (not to be confused with Georgy Zatsepin from GZK cutoff), a crew member of A. Chudakov, published a remarkable Monte Carlo study paper in 1964 [26]. He obtained the equal photon density contours of air shower images produced by gamma rays as well as their angular distribution and radial photon densities. It is striking to read in that paper "since the maximum intensity of the light from a shower does not coincide with the direction of arrival of the primary particle, in researches in which the determination of the angular coordinates of the primary particle is made by photographing the light flash from the shower one should seek improved accuracy in this determination by photographing the shower simultaneously from several positions".

From the above it can be seen that some researchers clearly understood the potential of coincidence measurements, now better known as stereoscopy, some 60 years ago.

The experiments with image intensifiers continued for some more years, but no further breakthrough occurred.

### **5. The Second-Generation Telescopes**

### *5.1. The 10 m Whipple Telescope*

In 1967, Giovanni Fazio and colleagues began constructing a 10 m diameter, F/0.7 telescope on Mount Hopkins, at the Whipple observatory, at a height of 2300 m a.s.l. [27]. The large diameter of the telescope's reflector, together with a fast PMT in the focal plane, provided a relatively low detection threshold for air showers. The telescope started operating in 1968, initially with a single 5-inch PMT in the focus. Afterwards, the number of PMTs was increased first to two and later to ten for simultaneous ON- and OFF-source observations. In 1968, Trevor Weekes co-authored a publication with G. Fazio and two more colleagues about observations of 13 gamma-ray source candidates. The Crab Nebula was prominently on the list, but M87, M82, and IC443 were also observed [28]. Only flux upper limits above the threshold of ~2 TeV were derived. As we know from later observations, the listed sources turned out to be gamma-ray emitters.

Already in 1977, T. Weekes and E. Turver suggested using a system of two telescopes separated by 100 m, equipped with 37-pixel imaging cameras. The intention was to strongly suppress the background [29]. The first imaging camera used 37 pixels of 0.5◦ size in a hexagonal configuration and covered a field of view of 3.0◦ in the sky. This camera was installed on the 10 m telescope in ~1983.

The next key development was the suggestion of Michael Hillas to parameterize the images by using the second moments of the measured charge distributions in the camera plane [30]. Interestingly, it is one of the rare cases where a conference contribution paper collected a huge number of citations.
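The essence of the Hillas approach, reducing a camera image to the second moments of its charge distribution, can be sketched as follows. This is a simplified illustration, not the Whipple analysis code; the pixel coordinates and charges in the example are hypothetical:

```python
import numpy as np

def hillas_length_width(x, y, q):
    """Length and width of a shower image: square roots of the eigenvalues
    of the charge-weighted second-moment (covariance) matrix."""
    x, y, q = (np.asarray(a, dtype=float) for a in (x, y, q))
    w = q / q.sum()                       # charge weights
    mx, my = w @ x, w @ y                 # image centroid
    dx, dy = x - mx, y - my
    cov = np.array([[w @ (dx * dx), w @ (dx * dy)],
                    [w @ (dx * dy), w @ (dy * dy)]])
    ev = np.sort(np.linalg.eigvalsh(cov))
    return np.sqrt(ev[1]), np.sqrt(ev[0])  # (length, width)

# Toy elongated image: pixels along the x axis with a small transverse spread
length, width = hillas_length_width(
    x=[-0.4, -0.2, 0.0, 0.2, 0.4],
    y=[0.05, -0.05, 0.0, 0.05, -0.05],
    q=[1.0, 2.0, 4.0, 2.0, 1.0])
print(f"length = {length:.3f} deg, width = {width:.3f} deg")
```

A gamma-ray image is narrow and elliptical (small width, axis pointing back to the source position), whereas hadron images are wider and more irregular, which is what makes these moment parameters such powerful selection cuts.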

Using this formalism, the Whipple team succeeded in measuring the famous 9 sigma signal from the Crab Nebula in 1989 [31].

This is considered the birthday of ground-based VHE gamma-ray astronomy.

The scientific intuition and perseverance of Trevor Weekes and the small team around him paid off after ~20 years of effort and gave birth to a new branch of science.

The technological novelties of the Whipple telescope were the use of the Davies–Cotton optical design [32] for the 10 m diameter reflector and the 37-pixel imaging camera in its focus. A few years later, this camera was exchanged for a finer-resolution one, employing pixels of 0.25◦ in size. This significantly improved the telescope's sensitivity and allowed its threshold to be lowered from 700 GeV down to ~300 GeV.

### *5.2. GT-48 in Crimea*

Since the late 1960s, the group at the Crimean Astrophysical Observatory (CrAO) led by Arnold Stepanian used two parabolic searchlight mirrors of 1.5 m diameter in coincidence for studying gamma-ray sources. They reported detections of Cassiopeia and Cyg X-3 in the early 1970s, with the latter getting a particularly strong response from the community. In the 1980s, the group started constructing a set of two large telescopes, separated by 20 m, named GT-48. On each mount they built six telescopes, three of the imaging type with 37 pixels and another three operating a single UV-sensitive, solar-blind PMT. Every telescope had four mirrors of 1.2 m diameter and 5 m focus. The goal of the Crimean group was to profit from stereo observations, see, for example, [33]. Because they did not want to sacrifice either the threshold or the coincidence rate, they put the telescopes at 20 m distance from each other. The relatively small reflectors and the low altitude of the site, 600 m a.s.l., resulted in a threshold of 900 GeV. The proximity of the telescopes did not allow them to fully exploit the differences in image parameters otherwise seen from widely separated detectors. In 1989, this installation was put into operation, and in subsequent years it measured a number of sources.

The technological novelties of the Crimean GT-48 were the two sets of telescopes separated by 20 m, the solar-blind PMTs used for measuring the UV content of air showers (the idea was that the muons in hadron showers produce more UV light), and the use of the coincidence technique.

### *5.3. High Energy Gamma Ray Astronomy (HEGRA)*

The first telescope of HEGRA was designed in 1990 as a somewhat modified version of the first Cherenkov telescope of the Yerevan Physics Institute (YerPhI) [34]. The latter was the prototype of a planned five-telescope "stereo" array (proposal from 1985), which was later adopted by the HEGRA collaboration. Further developments of it became known in the community as the HEGRA air Cherenkov telescopes [35].

Originally, each telescope was planned to have a 3 m diameter tessellated mirror of 5 m<sup>2</sup> area and to be equipped with a 37-pixel imaging camera in the 5 m focal plane. The pixels used light guides of conical form (focons), made of UV-transparent Plexiglas and subtending an angular aperture of 0.41◦ [36]. The imaging camera was based on the Soviet FEU-130 type special PMTs with a GaP first dynode providing a gain of 25–30, and thus a very high amplitude resolution. The very high quality glass mirrors were produced at the Yerevan Physics Institute. The mechanical mount of the first telescope was installed at the *Roque de los Muchachos* observatory on La Palma in late fall 1991, and the camera in mid-1992. A ~5 sigma hint of a first signal from the Crab appeared after two months of data taking, in late fall 1992 [37]. In the following year, the second telescope, with the same pixel size but with one more ring in the camera (61 pixels) and a larger reflector of 4.2 m, was built and put into operation at ~100 m distance from the first one. The stereo observations, the power of which had been predicted in a dedicated Monte Carlo study in 1993 [38], could start.

In the following years, four more telescopes of the same size as the second one, but with 271-pixel cameras with a pixel size of 0.25◦, were installed. In the end, the second telescope, too, was given a 271-pixel camera, and the array was completed in 1997. The last upgrade, in the same year, was the increase of the mirror area of the first telescope to 10.3 m<sup>2</sup>. In Figure 5, one can see a photo of the HEGRA array. HEGRA operated until 2002. It convincingly demonstrated the long-awaited power of stereo observations [39] and produced a wealth of scientific results.

These second-generation imaging telescopes provided only a handful of sources, but it became clear that still there was a big potential in the "stereo" technique that was just waiting to be explored more extensively.

**Figure 5.** Photo of the HEGRA array. One can see four out of the six IACTs of HEGRA.

### *5.4. The 7-Telescope Array*

The Japanese 7-Telescope Array was originally planned as a detector comprising two arrays, each with 127 imaging telescopes, operating in coincidence [40]. Each telescope had a 3 m diameter mirror and a 256-pixel camera. In 1996–1997, three out of seven such telescopes were built and installed at the Dugway proving grounds, Utah, USA. The remaining four were planned to be installed within one year. The telescopes started taking data on several interesting objects, for example, the flaring MKN-501 and 1ES1959. Unfortunately, a ~6 m long unarmed military missile missed its target and instead hit the data-taking containers. This array operated for less than one year in 1997.

The square PMTs and light guides used in these telescopes were innovative.

### *5.5. CLUE*

The Italian CLUE collaboration tried to extend the application range of the IACT technique into the deep near-UV range. They installed an array of nine 1.8 m telescopes at the HEGRA site on La Palma. They used multi-wire, UV-light-sensitive proportional chambers (MWPCs) for recording the EAS images. A matrix of electrodes on the rear side of the camera allowed the images to be read out. The imaging camera was filled with a gas mixture containing TMAE, believed to provide a quantum efficiency of 5–15% in the range 190–230 nm. TMAE turned out to be an aggressive substance, attacking the camera materials. Further, the short absorption length of Cherenkov light in the chosen wavelength range limited the imaging capabilities. These telescopes operated in 1997–1999. CLUE reported detections of the Crab Nebula, Mkn 421 and Mkn 501, and of the lunar shadow [41].

### *5.6. CAT*

The French CAT telescope, put into operation in late 1996 [42] on the same site as the earlier non-imaging ASGAT [43] and THEMISTOCLE [19] instruments, operated a 600-pixel high-resolution imaging camera with a pixel size of 0.12◦. Soon after the telescope was put into operation, the researchers found that, due to the very fast pulses from the PMTs, the detection efficiency for gamma rays was quite low. After slowing the pulses down to ≥2.5 ns, they recovered a high efficiency for triggering on gamma rays. To counteract the bending of the relatively fragile mechanical frame of the telescope, they used data from several imaging cameras installed on the structure. This was a successful telescope, which provided very interesting results.

### *5.7. CANGAROO*

CANGAROO was a collaboration between several universities from Japan and the University of Adelaide. The collaboration started by operating a single 3.8 m telescope with a parabolic reflector that had been used for lunar ranging in the past. It began operating in 1992 at a threshold of a few TeV and in the following years discovered several new sources of gamma rays. Ten years later, four telescopes of 10 m size were built. These telescopes made a number of discoveries and very interesting observations. The telescopes had some differences in design. Along with hardware problems, mostly related to the chosen type of mirrors, there were also technical and organizational problems related to the data analysis. When the H.E.S.S. telescopes became operational in 2002–2004, they could not confirm some of the CANGAROO results [44]. A few years later this array terminated its operation.

### **6. The Very Low Threshold, EBL and Solar Power Plants as Gamma-Ray Telescopes**

It was recognized rather early that mirror-based solar power plants can offer large mirror areas of several thousand m<sup>2</sup> that could be used for collecting the scarce photons from sub-100 GeV gamma showers. At the time there was no instrument covering the energy range 10–300 GeV, which was therefore considered "terra incognita". Many interesting physical phenomena were expected there.

The universe is full of photons emitted by galaxies and stars during its evolution. These EBL photons can be thought of as a kind of "gas" filling space. The complex spectrum of this light extends from the UV to the far infrared; see, for example, [45]. When a very high-energy gamma ray travels through space from a cosmologically distant source, it can interact with one of these low-energy photons. If the energy of the two photons in the center-of-mass system exceeds twice the rest energy of the electron, an electron–positron pair can be produced. This is an energy-dependent phenomenon that limits the visibility of gamma-ray sources in the universe: the higher the energy, the stronger the absorption. This weakens the flux of gamma rays from distant sources. The situation changes dramatically when moving towards the energy range below 100 GeV, down to ~10–20 GeV; the universe becomes more and more transparent, and very distant sources can be observed. Just to give the reader a feel for it: from the famous Mrk 421 and Mrk 501 sources, located at a redshift of ~0.03, the measured highest-energy photons are limited to below 20 TeV due to strong absorption. However, if a source is located at a redshift of ~1, given a strong signal, some photons with energies up to ~200 GeV could still survive. Signals from pulsars, from distant AGN, from GRBs and from various transient events were anticipated in the unexplored energy range 10–300 GeV.
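The energy dependence of the absorption follows from simple pair-production kinematics. A minimal sketch (the numerical values are illustrative back-of-the-envelope estimates, not taken from the text):

```python
# Pair production of a VHE gamma ray on a background (EBL) photon requires
#   2 * E_gamma * eps * (1 - cos theta) >= (2 * m_e * c^2)^2.
# For a head-on collision (cos theta = -1), the softest EBL photon that can
# absorb a gamma ray of energy E_gamma therefore has energy
#   eps_min = (m_e * c^2)^2 / E_gamma.

M_E_C2_EV = 0.511e6  # electron rest energy in eV

def softest_absorber_ev(e_gamma_ev: float) -> float:
    """Minimum EBL photon energy (eV, head-on) able to absorb e_gamma_ev."""
    return M_E_C2_EV ** 2 / e_gamma_ev

# A 20 TeV photon (the Mrk 421/501 regime) is absorbed already by abundant
# far-infrared EBL photons of ~0.013 eV:
print(softest_absorber_ev(20e12))
# A 200 GeV photon needs optical/UV photons of >~1.3 eV, which are much
# sparser, so the universe is far more transparent there:
print(softest_absorber_ev(200e9))
```

This is why lowering the telescope threshold below ~100 GeV opens the view to much more distant sources.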

Researchers were discussing whether and how one could lower the threshold of a Cherenkov telescope by more than one order of magnitude.

In the early 1990s the threshold energy of the 10 m Whipple telescope, of ~75 m<sup>2</sup> reflecting area, was estimated to be 300–400 GeV. The common belief was that to lower the threshold energy of a telescope by a factor of *n* one needs to increase its mirror area *n*<sup>2</sup> times. So, for example, to lower the ~1 TeV threshold of the ~10 m<sup>2</sup> HEGRA CT1 telescope by a factor of 20, down to ~50 GeV, one would need to increase its mirror area 400 times, i.e., one would need a mirror area of 4000 m<sup>2</sup>! For one very low threshold project, it was proposed to build an array of nine telescopes, each 100 m in diameter [46]. Compared to the latter option, the existing solar power plants, with their distributed mirror surface area of several thousand square meters, seemed to offer an interesting alternative. Several solar power plants were converted into gamma-ray detectors. The way of doing that differed considerably among the research teams from STACEE (NM, USA), CACTUS (CA, USA), CELESTE (France) and GRAAL (Germany and Spain). For example, while the GRAAL team [47] attempted to collect Cherenkov photons from heliostats in the field into a ~1 m size Winston cone, STACEE tried to organize a kind of imaging in the central light collection tower, directing light from individual heliostats to specific PMT channels [48]. For a comprehensive review of converted solar power plants please see [49].

Some interesting measurements were performed by using these arrays. The French CELESTE instrument measured the flux from the Crab Nebula down to ~60 GeV [50]. Comparison with today's precise measurements shows that their reported flux was 2.5 times too low.

The operation of the MAGIC telescope confirmed that the above-assumed relation between the threshold and the mirror area was wrong. As predicted, the threshold is inversely proportional to the mirror area [51,52]; i.e., to reduce the above-cited CT1 threshold from 1 TeV down to 50 GeV, one needed to increase the mirror area by only ~20 times, i.e., to build a telescope with ~200 m<sup>2</sup> of mirror area (see the next section for more details). It became obvious that the "classical" imaging method was able to provide much higher efficiency than the solar power plant detectors, so shortly afterwards they ceased their operation.
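The two competing scaling laws lead to very different mirror-area requirements; the arithmetic quoted above for HEGRA CT1 can be checked directly (a sketch using the numbers from the text):

```python
# Threshold scaling of a Cherenkov telescope: the old belief was that
# lowering the threshold by a factor n costs a factor n^2 in mirror area
# (E_th proportional to 1/sqrt(A)), whereas for an imaging telescope the
# threshold is inversely proportional to the area (a factor n costs only n).

def area_needed(a0_m2: float, e0: float, e_target: float,
                quadratic: bool = False) -> float:
    """Mirror area (m^2) needed to move the threshold from e0 to e_target."""
    n = e0 / e_target
    return a0_m2 * (n ** 2 if quadratic else n)

# HEGRA CT1: ~10 m^2 at ~1 TeV; target threshold 50 GeV (factor 20 lower).
print(area_needed(10, 1000, 50, quadratic=True))   # 4000 m^2 (old belief)
print(area_needed(10, 1000, 50, quadratic=False))  # 200 m^2 (correct scaling)
```

The linear scaling is what made a single ~17 m class telescope, rather than a 4000 m<sup>2</sup> light collector, sufficient for the sub-100 GeV range.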

### **7. The Threshold of an Imaging Air Cherenkov Telescope**

It is interesting to note that the MAGIC-I telescope, which has only 236 m<sup>2</sup> of mirror area, could successfully perform measurements also in the sub-100 GeV energy range, down to 50 GeV. This is in striking contrast to the above-mentioned belief about the dependence of the threshold on the mirror area.

The issue of the threshold is well illustrated by an example from a 1981 publication by K. E. Turver and T. C. Weekes [18]. There one can read:

*"The energy threshold of a simple detector is inversely proportional to the diameter of the light collector. An energy threshold of 10<sup>11</sup> eV requires an effective aperture of 5–10 m. To reach 10<sup>10</sup> eV requires an aperture of 50–100 m; such apertures would have been out of the question a few years ago but the development of large concentrators for solar energy research makes this energy threshold a realistic possibility".*

The above-mentioned dependence of the threshold, which even today circulates in some publications, is not correct.

Please note that for sub-100 GeV gamma-ray astronomy the authors refer to the use of large concentrators for solar energy, which in fact happened some 20 years later (see Section 6).

Unlike for the non-imaging detectors, the lower threshold of an imaging telescope is simply inversely proportional to the used mirror area (or to the squared diameter). This can be explained by the fact that for an imaging telescope it is not the fluctuations of the LoNS that set the lower threshold, because the LoNS in the field of view is "split" between a large number of pixels, which in addition are put into some coincidence scheme. The higher-level requirement is instead that for analyzing an image one needs some minimum amount of charge, on the order of ~100 photoelectrons [51].
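The ~100-photoelectron requirement translates directly into the linear area scaling. A rough illustration, with an assumed Cherenkov light-pool density and an assumed photon-to-photoelectron conversion efficiency (both illustrative numbers, not from the text):

```python
# For an imaging telescope the threshold is set by the minimum analyzable
# image charge (~100 ph.e.) rather than by LoNS fluctuations.  The Cherenkov
# photon density at the ground grows roughly linearly with gamma-ray energy,
# so the collected charge is
#   N_phe ~ rho(E) * A_mirror * eff,  with  rho(E) ~ k * E,
# and requiring N_phe >= N_min gives  E_th ~ N_min / (k * A * eff),
# i.e. a threshold inversely proportional to the mirror area A.

K_PHOTONS_PER_M2_PER_GEV = 0.1  # assumed light-pool density (illustrative)
EFF = 0.1                       # assumed mirror x QE x light-guide efficiency
N_MIN = 100                     # minimum analyzable image charge (ph.e.)

def threshold_gev(area_m2: float) -> float:
    """Energy at which the collected charge reaches N_MIN photoelectrons."""
    return N_MIN / (K_PHOTONS_PER_M2_PER_GEV * area_m2 * EFF)

print(threshold_gev(236))  # ~42 GeV: the right ballpark for MAGIC-I (236 m^2)
print(threshold_gev(10))   # ~1 TeV: the right ballpark for HEGRA CT1 (10 m^2)
```

With these (assumed) constants the estimate reproduces both the ~1 TeV threshold of the 10 m<sup>2</sup> CT1 and the ~50 GeV threshold of the 236 m<sup>2</sup> MAGIC-I, as discussed in the text.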

The realization of the latter relation played a key role in enabling the successful operation of the IACT technique in the sub-100 GeV energy range, down to 10–20 GeV. This was substantiated by proposing and building the pioneering 17 m diameter MAGIC telescope for sub-100 GeV gamma-ray astrophysics [52].

### **8. The Third Generation Telescopes**

The third-generation telescopes were designed before the potential of the second-generation telescopes was fully exploited. Already in 1995, the first presentations of the concrete concept of the 17 m diameter MAGIC were made [53,54]. These were followed by the VERITAS letter of intent in fall 1996 and, in the next year, by H.E.S.S. Both VERITAS and H.E.S.S. pursued the goal of conducting astrophysics with a stereo system of 10 m diameter telescopes, based on well-proven technologies, familiar from the Whipple telescope and the fresh experience of HEGRA. Instead, the design of MAGIC aimed to move into the sub-100 GeV energy range, down to 20–30 GeV, into the "terra incognita". Obviously, this task was significantly more demanding and challenging, and several novel techniques and technologies were necessary to make it possible.

When HEGRA stopped operating in 2002, the collaboration split into two parts. One part, together with scientists from France, largely people from the CAT experiment, formed the core of the H.E.S.S. collaboration and built their instrument in Namibia. The other part stayed in La Palma, at the original site of HEGRA, and together with scientists from Spain and Italy founded the MAGIC collaboration.

### *8.1. H.E.S.S.*

The application of the H.E.S.S. collaboration was supported by the German and French funding agencies (while the VERITAS team had to wait several more years to secure financial support). The H.E.S.S. collaboration built their telescopes and started operation in Namibia in 2002–2004. At the beginning, the H.E.S.S. team performed a scan along the galactic plane and reaped a rich harvest of galactic sources. This array has turned out to be a very successful instrument, making a large number of important discoveries and measurements above energies of 160–200 GeV; see, for example, [55]. A very large telescope of 28 m diameter was set in the center of H.E.S.S. in 2012. This also allowed them to perform observations in the very low energy range of several tens of GeV [56].

The design and construction of the H.E.S.S. telescopes followed a conservative approach, based on proven technologies.

### *8.2. VERITAS*

The VERITAS telescopes, unlike those of H.E.S.S., which operate imaging cameras of ~5◦ geometrical aperture, use cameras with a 3.5◦ field of view. Otherwise, both instruments are similar, and both have increased the originally planned 10 m diameter of their telescopes to 12 m. For some time, the exact location of these telescopes in Arizona remained uncertain. In the end VERITAS was built next to the administrative building of the Harvard-Smithsonian Center for Astrophysics, not far from the foot of Mount Hopkins, and was inaugurated in 2007. As one would expect, VERITAS also turned out to be a very successful instrument that, in recent years, has made a large number of important discoveries and measurements; see, for example, [57].

The design of the VERITAS telescopes, similar to that of H.E.S.S., followed a conservative approach.

One should mention the upgrade of VERITAS with high QE bialkali PMTs in 2012 [58], which lowered the threshold down to ~90 GeV.

### *8.3. MAGIC*

In the mid-1990s, the energy range below ~300 GeV, down to 10–20 GeV, was considered "terra incognita", simply because there was no instrument, either on the ground or in space, that could observe it.

The intention behind building MAGIC was to operate a ground-based instrument in the energy domain of ≤300 GeV, down to ~10 GeV. In the mid-1990s that was considered impossible.

Initially, mostly because of financial and organizational constraints, the 17 m diameter MAGIC was proposed as a stand-alone telescope [52–54]. Several innovations were necessary for operating a single telescope, especially in the very low energy range of <100 GeV, where a strong background from local muons was expected. The MAGIC researchers hoped to strongly suppress the different backgrounds by using an ultra-fast opto-electronic design for the telescope. For this purpose, a reflector of parabolic shape was chosen, which can provide, for example, a time resolution of ≤140 ps within the 1◦ field of view. Spherical mirrors of 11 different radii of curvature, laid on the reflector, provided a good approximation to the intended parabola. Along with this, very fast hemispherical PMTs were developed for the needs of MAGIC by the company *Electron Tubes* from England.

In combination with the light guides and a matte lacquer coating, these provided an enhanced quantum efficiency [59].

The PMT analog signals were converted into light by using Vertical Cavity Surface Emitting Laser (VCSEL) diodes operating at ~850 nm. This light, by using optical fibers, was transported to the electronic room, where it was converted back into electrical pulses with practically no degradation of time features.

The MAGIC-I telescope was built and put into operation in 2003–2004.

The fast signals were initially read out by using 300 MSample/s custom-built FADCs. Starting in 2007, MAGIC-I used 2 GSample/s fast multiplexed FADCs for the readout [60].

The measurements showed a bandwidth of ~230 MHz for the signal channels. This ultra-fast timing allowed MAGIC to further suppress the contribution from the LoNS, as well as the hadron-induced background, by a factor of 2–3. For the first time the sensitivity of a detector could be significantly enhanced based on fast timing [61]. A single telescope measures a given shower projected onto its imaging camera as a two-dimensional image; due to fast timing (one image every 500 ps), it can also obtain information along the third direction, perpendicular to the camera plane, i.e., it can scan the image in three dimensions, coming close to stereoscopic imaging.

By developing a special so-called SUM-trigger configuration (see more below), the researchers could operate the stand-alone MAGIC-I telescope even at a very low threshold of ≥25 GeV. This allowed them to discover a pulsed signal from the Crab pulsar, which made a strong impact on the pulsar theory models [62].

The next serious improvement of MAGIC's sensitivity came with the construction and operation of an almost clone telescope, at 85 m distance from the first one, in 2009; see Figure 6. This essentially doubled the sensitivity of the first telescope. By using the standard trigger, MAGIC performed observations of selected sources such as, for example, the Crab Nebula and its pulsar, at energies as low as ~50 GeV [63].

**Figure 6.** The two 17 m diameter MAGIC IACTs at the ORM observatory in La Palma. In the center one can see the experimental house. The white dome on top of the garages harbors the LIDAR instrument. In the top left corner one can see the famous 4.2 m diameter William Herschel telescope. The 2.5 m diameter Nordic Optical Telescope is located on the top-right summit. In the lower right corner one can see the 4 m FACT telescope.

The imaging cameras of both MAGIC-I and MAGIC-II were upgraded in 2012. Along with novel, higher-QE hemispherical PMTs from Hamamatsu, developed for MAGIC, a new capacitive-memory-based readout system using the DRS4 chip, operated at ~2 GSample/s, was introduced.

The MAGIC telescopes introduced a number of novelties into the field, some of which later became the standard.

While some of the novelties were important from the point of view of operational reliability (a fully sealed imaging camera, actively temperature-controlled and stabilized by circulating a liquid coolant in a closed loop), others helped to further improve the technique: analog signal transmission via optical fibers, preserving the fast pulse shape, and PMTs with hemispherical input windows, coupled to light guides of special design tailored to them, intended to enhance the photon detection efficiency via a double crossing of the PMT photocathode by the impinging light.

Special attention was paid to producing a light-weight reflector frame from reinforced carbon fiber, for a reduced telescope weight and the low torque needed for fast repositioning. This was considered an important step for promptly reacting to alerts from satellite missions on transient sources such as, for example, GRBs. Though delayed, this feature fully paid off in January 2019, when for the first time a most intense gamma-ray signal at VHE was measured from GRB 190114C only one minute after its explosion [64,65]. Of course, the light-weight reflector frame did not come for free; it bends under varying gravitational loads when tracking a source. To counteract the deformations, an Active Mirror Control system was developed. While tracking a source, it adjusts the direction of every single mirror of ~1 m<sup>2</sup> area under computer control, providing the best optical point spread function (PSF) in the focus [66].

The other novelty was the ultra-fast 2 GSample/s readout of the data. Sampling every 500 ps allows one to obtain multiple images of one and the same shower, tracing its development in time. Moreover, by integrating the signal charge in a time window of only ~3 ns, one effectively suppresses the LoNS contribution. All this has enhanced the background rejection power, allowing for the operation of the telescopes in the energy domain of ≥20 GeV (see below).

### 8.3.1. Sum-Trigger for MAGIC

One of the main obstacles for obtaining a low-threshold setting for an IACT is the adverse effect of after-pulsing in PMTs [67].

The standard trigger threshold of the two MAGIC telescopes was halved by using the so-called Sum-Trigger. Along with a circuit to suppress the impact of the after-pulsing, the Sum-Trigger detects weak, loose images in ~0.5◦ wide patches. As already mentioned, in 2008 it allowed the revelation of pulsations from the Crab pulsar at energies of ≥25 GeV [61].

The novel Sum-Trigger-II, developed for the two MAGIC telescopes working in coincidence, recently allowed the detection of a very weak signal from the Geminga pulsar at energies of ≥15 GeV [68].

The physical novelty introduced by MAGIC was extending the threshold of the IACT technique down to the ~20 GeV domain.

Technological novelties introduced by MAGIC include the light-weight carbon fiber reflector frame for fast repositioning, the Active Mirror Control, the analog signal transmission via optical fibers, the ultra-fast 2 GSample/s readout, the high-QE hemispherical PMTs with specially designed light guides, and the Sum-Trigger.


### **9. Fourth Generation Instruments**

### *9.1. Cherenkov Telescope Array—The Major Instrument*

The series of "Towards a Major Atmospheric Cherenkov Detector" workshops, taking place between 1989 in Crimea (historically seen as workshop number "0") and 2005 in Paris (the last one), served its purpose. The researchers reached a consensus that one needed to unify the efforts of the different collaborations, and of the entire community, and to move towards one major instrument. In 2006 a new collaboration was formed for building the Cherenkov Telescope Array. This collaboration, which counts ~1500 researchers, includes practically all researchers worldwide working with the atmospheric Cherenkov technique, as well as many newer groups with an interest in exploring the sky in gamma rays with unprecedented sensitivity. In the meantime, the CTA collaboration has produced advanced prototypes of its constituent telescopes and moved into the construction phase. Originally, about 100 telescopes of 23 m, 12 m and 4–7 m size were planned for the southern and northern observatories, covering the energy range from 10 GeV to more than 100 TeV [69,70]. This is going to be the major ground-based instrument for conducting astrophysics by means of gamma rays for the next few decades.

The first Large Size Telescope (LST) is in the final stage of commissioning. It has already measured gamma-ray signals from dozens of sources, and the first publications from this telescope are expected soon [71].

Similarly, the prototypes of the Medium Size Telescope (MST), of the double-mirror, 9.7 m diameter Schwarzschild-Couder Telescope (SCT) in Arizona and of the small size (SST) 4 m prototype ASTRI telescope in Sicily have been built and successfully commissioned; see [71] and the links therein.

One of the advances of the CTA can be considered the wide fields of view of its telescopes. The study of wide-FoV prime-focus telescopes began in 2005 with publication [72]. Soon this was expanded by studies of even wider-FoV IACTs of a more complex design, including two optical elements [73,74]. The ASTRI and SCT telescopes followed the design of [73].

The CTA is planning to operate LST, MST and SST telescopes of ~4.5◦, ~8◦ and ~10◦ apertures, respectively. The technology of these novel, fourth-generation telescopes has been refined practically everywhere. Having saturated the gamma-ray detection efficiency of individual telescopes, the CTA is pursuing the plan of using a large number of such telescopes to cover a large area, providing an exceptionally high sensitivity.

Another technological advance of the CTA is the use of advanced light sensors, such as classical PMTs with strongly improved parameters as well as the so-called SiPMs; see more on these below.

### 9.1.1. Enhanced Quantum Efficiency PMTs

In the first stage of the development work, initiated in ~2004, researchers from MAGIC, cooperating with the companies Electron Tubes Enterprises (London), Photonis (France) and Hamamatsu (Japan), succeeded in increasing the peak QE of PMTs with a bialkali photocathode from the value of 25–27%, where it had stagnated for ~40 years, up to ~32–35%. Subsequently those PMTs were dubbed the "Superbialkali" type [75].

The second stage of the development was started by a group of researchers cooperating with Hamamatsu and Electron Tubes in the framework of the CTA collaboration in 2009. One of the novel technologies applied in the MST and LST telescopes of the CTA is the use of novel 1.5-inch size PMTs with significantly improved parameters. At the end of the development work, the PMTs from Hamamatsu showed a somewhat better performance than those from Electron Tubes and were thus selected for use in CTA. It should be noted that these developments revised all aspects of a classical PMT, including, for example, the light emission of its dynode system; see Figure 7a. The latter effect caused a high rate of after-pulses [76]. A dedicated re-design reduced that negative effect. The novel PMTs became commercially available from Hamamatsu in 2014. They show an average peak quantum efficiency of ~43%, an electron collection efficiency on the first dynode of 94–98% (for wavelengths ≥ 400 nm) and an after-pulsing rate of ≤0.02% for a set threshold of ≥4 ph.e. The pulses from the 7-dynode PMTs measure ~2.5 ns at the Full Width at Half Maximum (FWHM) level. The achieved record parameters currently make this PMT the best in the world [77].

Use of such PMTs allows one to significantly lower the energy threshold of both the MST and LST telescopes. In the latter case a threshold of ~20 GeV has already been reported.

### 9.1.2. SiPM-Based Imaging Camera

A new semiconductor light sensor technology, dubbed the SiPM (Silicon Photomultiplier), emerged at the end of the 1990s and received a strong development boost, especially for its possible use in the MAGIC IACT and the EUSO space mission [78]. The parameters of the sensors started improving rapidly; already in 2008–2010 the majority of parameters were almost saturated in pilot productions, with, for example, a peak photon detection efficiency (PDE) of ~60% along with cross-talk at the ~2.5% level being reported [79]. One of the remaining issues, existing still but to a lesser degree, is the problem of cross-talk; see Figure 7b [80,81]. Researchers started building the first custom segments of imaging cameras [80]. The first full-scale SiPM-based camera was built and installed on the left-over mechanical mount of the third HEGRA telescope in La Palma ten years ago. Since then, the telescope, dubbed FACT, has been in successful operation, in recent years in a robotic regime [82]. The potential of those relatively old SiPMs could not be fully explored, mostly due to the mentioned cross-talk effect, which limited the PDE. With time the SiPM parameters have significantly improved, and both double-mirror prototype telescopes, the ASTRI SST and the 9.7 m Schwarzschild-Couder Telescope (SCT) next to the VERITAS telescopes in Arizona, use SiPM-based cameras [71].

**Figure 7.** (**a**) This photo shows the light emission leaking through the space between the dynodes of the Hamamatsu R8619 PMT. Part of this light can arrive at the photocathode, causing high-level after-pulsing. Later, the company installed baffles, which reduced the negative effect; see [76] for details. (**b**) Light emission microscopy of a pilot SiPM sample (produced by B. Dolgoshein and team) under operational voltage. A narrowly focused (<4 μm), weak laser beam shoots at the location of the yellow-black dot on the surface of a 100 μm size cell. One can see that neighboring cells also emit light. This is the essence of the cross-talk effect; a single incident photon can fire more than one cell.

The size of the largest SiPMs is limited to ≤10 mm. This limits their direct application mostly to SSTs. For using them in MSTs or LSTs, one needs either to use a significantly higher number of sensors and readout channels (compared to the number of current PMTs), or to group a large number of SiPMs to imitate a single larger-size sensor [83]. Please note that it does not make much sense to use sensors of a size much smaller than the optical PSF of a given telescope. Which of the above options will turn out to be viable in the future will depend on the cost evolution of these sensors and of the readout channels, as well as on the ready availability of integrated readout solutions.

### *9.2. TAIGA*

The TAIGA pilot instrument in the Tunka valley near Lake Baikal has chosen an interesting hybrid approach. One of its main goals is to explore the gamma-ray energy range from a few to hundreds of TeV.

TAIGA includes 120 HiSCORE stations (an improved version of the former timing array AIROBICC) deployed over an area of 1 km<sup>2</sup> [84], two 4 m class IACTs with a 9.6◦ FoV [85] and other types of Cherenkov light and particle detectors. The number of IACTs is planned to be brought to four by the end of next year.

The combination of the timing and imaging air shower detection techniques allows one to observe in a novel "hybrid stereo" mode: the core position and the incoming direction of a given shower can be obtained from HiSCORE, while its images from the IACTs help to measure its type (gamma or hadron) and its energy.

Placed at distances of about 600 m from each other, the four IACTs together with HiSCORE will compose a sensitive detector with a collection area in excess of 1 km<sup>2</sup>.

The TAIGA approach has the promise to offer a cost-effective solution for building a highly sensitive detector of a very large area.

### *9.3. LHAASO*

LHAASO is a multi-component, very large size cosmic-ray and gamma-ray detector. It is located in Sichuan Province, China, at a mountain altitude of 4410 m a.s.l. It is designed to measure cosmic and gamma rays in the energy ranges of ≥10<sup>12</sup> eV and 10<sup>11</sup>–10<sup>15</sup> eV, respectively, detecting electrons, muons, Cherenkov and fluorescence light. Recently it made an important discovery, with a dozen so-called PeVatron sources identified [86]. The Wide Field-of-view Cherenkov Telescope Array (WFCTA) of LHAASO includes eighteen telescopes, based on reflectors of 5 m<sup>2</sup> area [87]. Composite SiPM-pixel imaging cameras with a FoV of 16 × 16◦ are installed at their focal planes. Interestingly, the telescopes are portable, so their configuration and location can be easily changed.

### **10. Conclusions**

It is remarkable to see the progress made from the first detection of cosmic rays via Cherenkov light emission in the atmosphere in 1953 until the present day. The original tiny Cherenkov telescope has served its purpose. After about 70 years, the VERITAS, MAGIC, H.E.S.S. and now the LST/CTA telescopes allow one to measure a significant gamma-ray signal from the Crab Nebula in less than a minute. Essentially, the speculative presentation of Cocconi from 1959 came true, if not exactly in the way he had predicted. Today we have instruments with resolutions of 0.05–0.1◦, which can measure the gamma-ray signal from the Crab Nebula with a signal-to-noise ratio of 300:1 for energies above 100 GeV. In the near future, the completed CTA instrument will further enhance that signal-to-noise ratio. The hundreds of new discoveries made at very high energies have established the firm place of ground-based gamma-ray astrophysics as one of the rapidly evolving, successful branches of astronomy. One can anticipate many more important results in cosmic rays, in multi-wavelength and multi-messenger astrophysics and in cosmology to become available within the next ~10 years.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The author wants to express his gratitude to Derek Strom for critically reading the manuscript.

**Conflicts of Interest:** The author declares no conflict of interest.

### **References**


### *Review* **The Crab Pulsar and Nebula as Seen in Gamma-Rays**

**Elena Amato1,2,\* and Barbara Olmi 1,3**


**Abstract:** Slightly more than 30 years ago, the Whipple detection of the Crab Nebula marked the start of Very High Energy gamma-ray astronomy. Since then, gamma-ray observations of this source have continued to provide new surprises and challenges to theories, with the detection of fast variability, pulsed emission up to unexpectedly high energy, and the very recent detection of photons with energy exceeding 1 PeV. In this article, we review the impact of gamma-ray observations on our understanding of this extraordinary accelerator.

**Keywords:** ISM: supernova remnants; ISM: individual objects—Crab Nebula; pulsars: general; radiation mechanisms: nonthermal; gamma rays: general; acceleration of particles; astrophysical plasmas; MHD

### **1. Introduction**

The remnant of the supernova explosion of AD 1054 is likely the best-studied astrophysical system after the Sun [1]. It consists of two distinct, bright, nonthermal sources: the pulsar and the nebula. Both objects have played a key role in the development of high-energy astrophysics. Thanks to their bright emission at all wavelengths, they have been observed by virtually all new astronomical instruments and have been at the origin of a wealth of important scientific discoveries.

The Crab pulsar was one of the first pulsars detected and, in fact, the one that provided smoking-gun evidence for the identification of these radio sources as neutron stars. The Crab nebula had long been known to be the result of a SN explosion [2]; in 1934, Baade and Zwicky [3] suggested that supernova explosions might signal the transformation of an ordinary star into a neutron star, but the prospects for revealing these objects (small and presumably very dim) had been considered poor; in 1967, Pacini [4] suggested that a fast-spinning, highly magnetized neutron star could be the energy source powering the activity of the Crab Nebula; in 1968, the first pulsar was discovered and suggested to be a white dwarf or a neutron star [5]. The discovery of pulsations from one of the two stars at the center of the Crab nebula [6] served as the last piece of the pulsar puzzle.

The contribution of the Crab pulsar and nebula to the progress of science did not end there, however. It is from this system that we have learned the basic physics behind the energy release by a young neutron star—the star spins down due to the electromagnetic torque and most of its rotational energy goes into the production of a relativistic magnetized wind; if this wind is effectively confined, as is the case for the Crab pulsar, the neutron star energy becomes detectable in the form of nonthermal emission by a surrounding nebula—the Pulsar Wind Nebula (PWN hereafter). This class of sources, of which the Crab nebula is the prototype, typically has a very broad nonthermal spectrum, often extending from low radio frequencies (tens of MHz) to Very High Energy gamma-rays (*E* > 100 GeV photons; VHE hereafter). In fact, they account for the majority of galactic sources emitting TeV gamma-rays; further, a number of unidentified gamma-ray sources are likely to be

**Citation:** Amato, E.; Olmi, B. The Crab Pulsar and Nebula as Seen in Gamma-Rays. *Universe* **2021**, *7*, 448. https://doi.org/10.3390/universe7110448

Academic Editors: Ulisses Barres de Almeida and Michele Doro

Received: 18 October 2021 Accepted: 13 November 2021 Published: 19 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

associated with unobserved pulsars [7]. Finally, very recent measurements by the LHAASO telescope [8] might indicate that PWNe are also the most numerous class of Extremely High Energy (*E* > 100 TeV photons; EHE hereafter) gamma-ray emitters.

How exactly the star's rotational energy is converted into wind energy, and what the composition of the wind is, are questions with only partial answers. At the same time, the importance of these questions goes beyond pulsar physics and, as we will discuss in this article, has implications for our understanding of particle acceleration in extreme conditions and up to the highest achievable energies, and for the origin of cosmic rays. Gamma-ray emission offers a privileged window onto these questions.

On the other hand, gamma-ray observations of the Crab pulsar and nebula have continued to surprise us with unpredicted discoveries, such as pulsations extending to unexpectedly high energies, extremely fast variability at GeV energies, and detection of photons at PeV energies. In the following, we discuss these discoveries and their implications for our understanding of pulsars, the physics of relativistic plasmas, and particle acceleration up to the highest energies. The article is structured as follows: In Section 2, we review our present understanding of the properties of pulsar magnetospheres, with particular reference to the implications for pair production that come from the detection of VHE pulsed emission. In Section 3, we review how modeling of the nebular plasma has evolved, pushed by the improvement of observational capabilities at increasingly high energy. In particular, we illustrate how 3D MHD modeling guided by high-resolution X-ray data has affected our understanding of the wind properties and estimates of its parameters, and the kind of information that gamma-rays can provide. In Section 3.2, we discuss the problems in explaining particle acceleration in the Crab nebula and the insight that can be gathered from modeling the time variability of the source. The two major surprises that observations of the Crab nebula have offered us in recent years are presented in Sections 3.3 and 3.4: the gamma-ray flares and the detection of PeV emission. In Section 4, we discuss in what respects the Crab nebula is different from most other objects in this source class, and how these differences might reflect in gamma-rays. Finally, we provide our summary and outlook in Section 5.

### **2. The Crab Pulsar in Gamma-Rays: Origin of the Emission and Pair Multiplicity**

As mentioned above, the Crab pulsar is a source whose existence had been predicted even before its discovery [4], based on the need for an energy source to power the Crab nebula. Indeed, most of the pulsar spin-down energy, *Ė* ≈ 5 × 10³⁸ erg s⁻¹, ends up in a magnetized wind expanding with relativistic bulk speed. At some distance from the star, the wind is slowed down to match the conditions of nonrelativistic expansion of the conducting cage of supernova ejecta that confines it. This transition is thought to occur at a termination shock (TS hereafter), where the bulk energy of the outflow is dissipated and particles are accelerated, giving rise, thereafter, to the bright nonthermal nebula. We will worry about the bulk of the energy and address the nebular emission later in this article, while this section is devoted to the ∼1% of *Ė* that goes into direct electromagnetic radiation, with a non-negligible fraction emitted in gamma-rays [9].
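As a quick check on the quoted number, the spin-down power follows from the standard magnetic-dipole braking estimate *Ė* = 4π²*I* *Ṗ*/*P*³. A minimal sketch in Python, assuming the canonical neutron-star moment of inertia of 10⁴⁵ g cm² and approximate published Crab timing values (illustrative, not fitted):

```python
import math

# Spin-down luminosity E_dot = 4 * pi^2 * I * P_dot / P^3 (cgs units).
# I is the canonical fiducial moment of inertia; P, P_dot are approximate
# Crab pulsar timing values.
I = 1.0e45          # neutron-star moment of inertia [g cm^2]
P = 0.0334          # rotation period [s]
P_dot = 4.2e-13     # period derivative [s/s]

E_dot = 4 * math.pi**2 * I * P_dot / P**3   # [erg/s]
print(f"E_dot ~ {E_dot:.1e} erg/s")          # of order 5e38 erg/s, as quoted
```

The result, a few times 10³⁸ erg s⁻¹, reproduces the order of magnitude quoted in the text.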

The Crab pulsar is the source in this class with the broadest detected emission spectrum, extending from a few ×100 MHz to TeV photon energies [10]. While the advent of *Fermi*-LAT has revealed that High Energy (100 MeV–300 GeV photons; HE hereafter) gamma-ray pulsations are not uncommon among pulsars [11], despite recent efforts [12], no other pulsar has been firmly detected at VHE. The detection of the Crab pulsar in gamma-rays of progressively higher energy has had a tremendous impact on our ideas about pulsar magnetospheres and the mechanisms behind their emission in the different wavebands.

In spite of the fact that pulsars were first recognized as pulsating radio sources (to which they actually owe their name), and only later identified at shorter wavelengths, pulses of radio emission have always been the most challenging to account for in terms of theory, due to the coherent nature of their emission (see [13] for a review and [14,15] for recent work on the subject). On the other hand, higher energy emission, from infrared

frequencies upwards, is not coherent and has always appeared easier to understand as the result of classical emission processes—such as synchrotron, curvature, and/or inverse Compton (IC) radiation—depending on the frequency and on the model. While near-infrared through optical-UV—and often also nonthermal X-ray—emission is commonly accepted to be of synchrotron origin (see e.g., [16]), the process behind gamma-ray emission has long been debated [17]. Different emission mechanisms and different regions of origin are assumed by the different models, and in fact, gamma-ray emission has long been thought to hold the key to understanding the hidden workings of the star magnetosphere [18]. Indeed, fundamental constraints have come from gamma-ray observations, especially in the VHE range.

The general picture of the pulsar's immediate vicinity is thought to be as follows. A pulsar is an excellent, highly magnetized, and fast-spinning conductor. Inside the star, charges organize themselves so as to screen the electric field; at the surface, however, the unscreened field is strong enough to extract electrons and possibly even ions from the star, generating a corotating magnetosphere around it [19]. The corotating magnetosphere can only extend up to a distance from the pulsar such that corotation does not imply superluminal motion: this defines the light cylinder radius *RLC* = *cP*/2*π*, with *c* the speed of light and *P* the star rotation period. Magnetic field lines originating close enough to the pulsar magnetic axis (the so-called polar cap region) will not close within *RLC* and will form the open magnetosphere. Particles flowing along these lines meet regions of unscreened electric potential where they are accelerated and emit high-energy radiation that subsequently leads to pair production. It is through this process that each electron extracted from the star gives rise to *κ* electrons, with *κ* ≫ 1 the so-called pulsar multiplicity. The open field lines are finally loaded with orders of magnitude more particles than originally extracted from the star surface: these particles flow away from the pulsar, carrying with them most of the star's rotational energy in the form of a magnetized relativistic wind, as we discuss further below. The exact multiplicity, i.e., the exact amount of pair production that should be expected from the magnetosphere of a given pulsar, is still a controversial subject (e.g., [20]). A way to estimate *κ* from observations is by observing and modeling the PWNe, when possible. However, even in the case of the Crab nebula, the results obtained from this kind of observation are controversial, as we will discuss in more detail later in this article.
Alternative constraints on the magnetospheric models and on the number of pairs they produce can be derived from gamma-ray observations.
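The light cylinder radius introduced above is tiny by astronomical standards, as a one-line estimate shows (a sketch assuming the approximate Crab period of ~33 ms):

```python
import math

c = 2.998e10        # speed of light [cm/s]
P = 0.0334          # Crab rotation period [s] (approximate)

# Light-cylinder radius: the distance at which corotation with the star
# would reach the speed of light, R_LC = c * P / (2 * pi).
R_LC = c * P / (2 * math.pi)    # [cm]
print(f"R_LC ~ {R_LC:.2e} cm")  # ~1.6e8 cm, i.e. ~100 stellar radii
```

About 1600 km, i.e. only a couple of hundred neutron-star radii: the entire magnetospheric machinery discussed here operates on scales minuscule compared with the nebula.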

The big expectation in terms of the information that pulsed VHE emission might hold relates exactly to the topic of pair production. Particles extracted from the star quickly accelerate during the extraction process and emit high-energy photons. In the intense magnetic field close to the star, photons with sufficiently large energies are absorbed and initiate a pair production cascade. The threshold energy for photons to escape rather than be absorbed, and give rise to a new generation of pairs, depends on the magnetic field strength; therefore, it will be different at different locations in the magnetosphere. This is why the detection of high-energy gamma-rays was long awaited as a probe of the location of cascade development and the pair emission process. For the former, three main possible locations have been suggested since the early times of pulsar studies—the polar caps [21,22], the slot gaps [23,24], and the outer gaps [25]. In the first model, gamma-ray emission would come from the pulsar vicinity and should show a superexponential cut-off at ∼ GeV energies, while in the latter two, it would come from larger distances from the pulsar and be the result of curvature or Inverse Compton radiation, rather than synchrotron.

In addition to gap models, another scenario that satisfies this constraint is one in which particle acceleration and subsequent gamma-ray emission occurs in the equatorial current sheet of the pulsar wind, as a consequence of magnetic reconnection in the striped wind, taking place at distances from the pulsar comparable to *RLC* or larger, e.g., [26]. If this process occurs close to *RLC*, for young and energetic pulsars, such as Crab, it can come with associated pair creation: accelerated particles emit synchrotron gamma-ray photons that may create pairs through *γ*–*γ* interaction [27].

The detection by *Fermi* of a large number of gamma-ray pulsars immediately seemed to disfavor polar caps as the main site of gamma-ray emission [28]: the simplest argument in this sense is the large number of detected pulsars, easier to reconcile with the wider beam of radiation predicted by models locating the emission further from the pulsar. More stringent constraints came from the detection of VHE pulsations from the Crab pulsar by MAGIC [29,30] and VERITAS [31]: starting from 2008, the two telescopes detected pulsed emission from Crab at progressively higher energy, with the current record being 1.5 TeV [32].

These data enforce the view that gamma-ray emission comes from distances of order *RLC* or larger, with VHE gamma-rays most likely resulting from IC scattering of lower-energy photons. At lower gamma-ray energies, the physical mechanism behind the emission is still debated between curvature [33], synchrotron [34], and synchro-curvature [35]. The spatial location of the emission, however, seems better established. Indeed, in the last 15 years, there has been enormous progress in terms of modeling the pulsar magnetosphere and in the detailed comparison between models and data. Numerical studies of the pulsar magnetosphere have been evolving from the force-free and full MHD regime towards global PIC simulations including pair creation (see [36] for a review, and references therein for further details). These latter studies are clearly the frontier in a complex multiscale problem such as that of the pulsar magnetosphere. The general consensus is that whenever the pair supply is sufficient to screen the electric field, the magnetosphere is globally well-described by the force-free solution ([37] and references therein), with the formation of a Y point near the light cylinder, where the equatorial current sheet connects with the two curved current sheets that form along the separatrix between open and closed field lines. In this case, different prescriptions about the location of pair creation lead to similar results [38–40]. This is expected to be the case for young, fast-spinning pulsars [41], such as the Crab and most gamma-ray-emitting pulsars. For these objects, current numerical simulations predict, in fact, that most of the high-energy radiation results from synchrotron emission in the vicinity of the light cylinder [26,27,34].
In the case of the Crab pulsar, this idea also gains support from the fact that detailed modeling of the light curve and optical polarization [34] yields values of the inclination between the pulsar's magnetic and rotation axes and of the viewing angle that agree with estimates based on completely different considerations related to the morphology of the nebula in X-rays [42].

The VHE emission from the Crab pulsar has never been computed within the refined global approach to magnetospheric dynamics and emission modeling discussed above. However, phenomenological modeling of phase-resolved spectra above 60 MeV [43] strongly suggests that emission above 60 GeV comes from regions near or even beyond the light cylinder. In addition, even before the detection of pulsed TeV radiation, Mochol and Petri [44] predicted multi-TeV gamma-rays as a distinctive signature of gamma-ray production via synchrotron self-Compton emission at tens of *RL*.

### **3. The Crab Nebula: What We Learn from Gamma-Rays**

The Crab nebula has been known as a source of VHE gamma-rays since the late 1980s [45], and was detected, for the first time, at MeV photon energies in the early 1990s [46]. The observed emission was readily interpreted as the result of IC scattering between the relativistic leptons populating the nebula and ambient photons, mainly contributed by the cosmic microwave background (CMB), thermal dust emission, and nebular synchrotron emission [47,48].
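The characteristic energies involved in this interpretation can be estimated with the Thomson-regime inverse-Compton relation, by which a lepton of Lorentz factor *γ* upscatters a target photon of energy *ε* to roughly (4/3)*γ*²*ε*. A minimal sketch, with illustrative (not fitted) values for the CMB target and lepton energy:

```python
# Thomson-regime inverse-Compton upscattering: a lepton with Lorentz factor
# gamma boosts a target photon of energy eps to ~ (4/3) * gamma^2 * eps.
# Illustrative numbers: mean-energy CMB photons scattered by a ~5 TeV
# nebular electron (gamma ~ 1e7). Values are indicative, not a fit.
eps_cmb_eV = 6.3e-4          # mean CMB photon energy [eV]
gamma = 1.0e7                # lepton Lorentz factor (~5 TeV electron)

E_ic_eV = (4.0 / 3.0) * gamma**2 * eps_cmb_eV
print(f"E_IC ~ {E_ic_eV / 1e9:.0f} GeV")   # tens of GeV for this population
```

TeV-scale electrons on the CMB thus naturally yield the tens-of-GeV to TeV IC photons observed; PeV IC photons require correspondingly more energetic leptons (and Klein–Nishina corrections, neglected in this sketch).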

In the last 15 years, the advent of the current generation of HE (*Fermi*-LAT and AGILE) and VHE (MAGIC, VERITAS, H.E.S.S., HAWC, Tibet AS-*γ*, LHAASO) gamma-ray telescopes has allowed us to gain much deeper insight into the properties of the Crab nebula at these highest energies, and has also brought two big surprises: variability in the MeV range [49–51] and detection up to unexpectedly high energies [52]. In Figure 1, we show the most recent measurements of the Crab nebula gamma-ray spectrum, including LHAASO data points, showing emission beyond 1 PeV—about the highest energy we think

achievable by galactic accelerators, based on measurements of the cosmic ray spectrum at the Earth (see e.g., [53] for a recent review). Before discussing the most impressive surprises that came from gamma-rays and how they have impacted our understanding of the Crab nebula, we briefly review the physical picture of the nebular dynamics and emission properties that has been built through time, thanks to constant improvements in the quality of observations, theories, and numerical modeling.

**Figure 1.** Focus on the gamma-ray spectrum of the Crab nebula. Data from different instruments are shown with diverse symbols/colors—namely, green rectangles for HEGRA data [54], blue squares for HESS data [55], pink circles for *Fermi*-LAT ones [56], red diamonds for MAGIC data [57,58], orange stars for HAWC [59], brown triangles for Tibet AS-*γ* [60], and violet ones for LHAASO data [61]. Figure courtesy of Michele Fiori.

### *3.1. Modeling the Nebular Plasma*

The Crab nebula is the PWN for which most models were developed and on which much of our understanding of the entire class is based. As we mentioned in Section 1, most of the rotational energy lost by the pulsar goes into accelerating a relativistic outflow, mostly made of pairs (though the presence of ions is not excluded, as we will discuss later) and a toroidal magnetic field. The outflow starts out cold (low emissivity, as highlighted by the presence of an underluminous region surrounding the pulsar [1]) and highly relativistic, until it reaches the termination shock (TS). Since the outflow is electromagnetically driven, it must start out highly magnetized at *RLC*: the ratio between Poynting flux and particle kinetic energy flux, *σ*, is thought to be *σ*(*RLC*) ≈ 10⁴ [62,63]. In contrast, the magnetization must be much lower at the TS, in order for the flow to be effectively slowed down. Initial estimates of *σ* at the TS, based on steady-state 1D magnetohydrodynamics (MHD) modeling, would give *σ*(*RTS*) ≈ 10⁻³, equal to the ratio between the nebular expansion velocity and the speed of light. This estimate has later been revised towards larger values of *σ* in light of 3D MHD numerical modeling, as we discuss below, but the general consensus is still that *σ*(*RTS*) cannot be much larger than unity. How the conversion of the flow energy from magnetic to kinetic occurs, between *RLC* and *RTS*, is still a matter of debate—the so-called *σ*-problem—and some of the suggested mechanisms could show radiative signatures in the gamma-ray band (e.g., [26]), while remaining dark in other wavebands. In fact, at least at low latitudes around the pulsar rotational equator, a plausible mechanism for energy conversion in the wind is offered by the existence of a magnetically striped region [64].
In an angular sector, whose extent depends on the inclination between the pulsar spin and magnetic axes, *θi*, a current sheet develops between toroidal field lines of alternating polarity [37]: this is an ideal place for magnetic reconnection to occur and transfer energy

from the field to the plasma [64]. Where along the flow this energy conversion occurs, and whether it proceeds efficiently enough, is an open question, the answer to which depends on the pair-loading of the flow [65]—namely, on the pulsar multiplicity *κ*—again, a parameter to be preferentially investigated in gamma-rays. This latter statement is true in two respects: constraints on pair production in the magnetosphere can be gained from pulsed gamma-ray emission, as discussed in Section 2, but a more direct estimate of the number of pairs injected in the nebula can be obtained from detailed modeling of the nebular emission spectrum and morphology. This is discussed in the following.

The morphology of the synchrotron nebula is known in great detail, at photon energies from radio to X-rays (see Figure 2), and hence, represents both a driver and a very challenging test for theoretical and numerical models. The size of the nebula is observed to vary noticeably with the energy of the emitting electrons, and consequently, with the observation waveband: the higher the energy of the electrons, the shorter the distance they travel before losing most of their energy to synchrotron radiation.
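This energy-dependent shrinkage follows directly from synchrotron cooling times. A rough estimate, assuming an average nebular field of ~150 μG (an illustrative value in line with the estimates discussed later in this section):

```python
import math

# Synchrotron cooling time t_syn = 3 m_e c / (4 sigma_T gamma U_B), with
# magnetic energy density U_B = B^2 / 8pi (cgs). Illustrative comparison of
# an X-ray-emitting lepton (gamma ~ 1e9) with a radio-emitting one (~1e4).
m_e, c, sigma_T = 9.109e-28, 2.998e10, 6.652e-25   # electron mass, c, Thomson cross-section
B = 150e-6                                          # assumed nebular field [G]
U_B = B**2 / (8 * math.pi)                          # magnetic energy density [erg/cm^3]

def t_syn_years(gamma):
    return 3 * m_e * c / (4 * sigma_T * gamma * U_B) / 3.15e7  # seconds -> years

print(f"X-ray emitters: ~{t_syn_years(1e9):.1f} yr")   # cool within ~a year
print(f"radio emitters: ~{t_syn_years(1e4):.0f} yr")   # ~1e5 yr: effectively uncooled
```

X-ray-emitting leptons burn off within roughly a year and so cannot travel far from their acceleration site, while radio-emitting ones outlive the nebula itself, which is why the nebula shrinks with increasing observation frequency.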

**Figure 2.** Left panel: The Crab nebula as seen in radio with the National Radio Astronomy Observatory (credits: M. Bietenholz, T. Burchell NRAO/AUI/NSF; B. Schoening/NOAO/AURA/NSF). Right panel: The Crab nebula in X-rays, as seen by *Chandra* (credits: *Chandra* X-ray Observatory NASA/CXC/SAO/F.Seward et al.).

The most advanced modeling of the Crab nebula available so far is based on the assumption that beyond the TS, MHD provides a good description of the flow dynamics. 1D MHD models, both stationary and self-similar, have been proposed since the 1970s [66–68], as well as stationary 2D solutions [69]. These models could generally account for the size shrinkage of the nebula with increasing frequency as a result of advection and synchrotron losses, for an average magnetic field in the nebula close to the equipartition value, and for the synchrotron luminosity of the nebula, assuming a wind magnetization *σ* ≈ 3 × 10⁻³, a wind Lorentz factor Γ*w* ≈ 3 × 10⁶, and an injection rate of particles in the nebula *Ṅ* ≈ 10³⁸ s⁻¹. Particles responsible for radio emission could not be accounted for with these values of the parameters.
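These 1D-model parameters can be checked for energetic consistency: with *σ* ≪ 1, the kinetic luminosity carried by the pairs, *Ṅ*Γ*w* *m*e*c*², should be comparable to the pulsar spin-down power. A quick sketch using the values quoted above:

```python
# Consistency check on the 1D-model parameters quoted in the text: the kinetic
# luminosity carried by the pair wind, N_dot * Gamma_w * m_e c^2, should be
# comparable to the spin-down power E_dot ~ 5e38 erg/s (since sigma << 1,
# nearly all the wind energy is kinetic at the termination shock).
m_e_c2 = 8.187e-7      # electron rest energy [erg]
N_dot = 1.0e38         # pair injection rate [1/s]
Gamma_w = 3.0e6        # wind bulk Lorentz factor

L_kin = N_dot * Gamma_w * m_e_c2
print(f"L_kin ~ {L_kin:.1e} erg/s")   # a few 1e38 erg/s, close to E_dot
```

The result, ~2.5 × 10³⁸ erg s⁻¹, is indeed of the order of the spin-down luminosity, showing why these particular values of Γ*w* and *Ṅ* were singled out.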

The discovery by *Chandra* of a jet-torus morphology of the inner nebula [70] prompted efforts to model the system with 2D axisymmetric MHD simulations, assuming a latitude dependence of the pulsar outflow [71,72]. The latter was taken in agreement with the split-monopole solution proposed by [73], and later proved to provide a very good description of the force-free pulsar magnetosphere [37]: the pulsar wind flows along streamlines that become asymptotically radial beyond *RLC* and has an embedded magnetic field that is predominantly toroidal, with alternating polarity in a region 2*θ<sup>i</sup>* around the equator. In this angular sector, magnetic dissipation is usually assumed to occur before the TS.

The energy flux in the wind has a latitude-dependent distribution, with most of the energy concentrated in the pulsar equatorial plane. As a consequence, the pulsar wind TS does not have a spherical surface, but rather, a highly oblate shape, being much closer to the pulsar along the rotational axis than at the equator. The obliquity of the shock front plays a key role in explaining the X-ray observations of polar jets. These appear to originate so close to the pulsar position that, if the shock were spherical, they would have to be collimated directly in the highly relativistic plasma upstream of the shock, where known mechanisms are inefficient [74]. 2D MHD simulations proved that collimation happens, in fact, in the downstream plasma, as soon as magnetic hoop stresses are sufficiently strong, namely, as soon as the magnetic field in the nebula can reach equipartition. This translates into a lower limit on the wind magnetization for the jets' formation: *σ* ≳ 10⁻² [71,72,75], about one order of magnitude larger than the value provided by 1D models.

A schematic representation of the flow geometry can be seen in the left panel of Figure 3. In the right panel of the same figure, we show a simulated X-ray image of the Crab nebula.

**Figure 3.** Left panel: Cartoon of the inner nebula geometry (the oblate TS, jets formation, striped wind) with the identification of the accelerating regions for particles responsible for the *wisps* emission at different wavelengths. Right panel: Surface brightness map at X-ray energies (1 keV), with intensity normalized to the maximum value and expressed in logarithmic scale. Reprinted with permission from Del Zanna et al. (2006) © 2006 ESO.

2D axisymmetric models have proven very successful at accounting for the morphological properties of the Crab nebula emission. They reproduce most of the observed brightness features of the inner nebula in very fine detail, including the X-ray rings and the *knot* [70,76]. On a larger scale, they account reasonably well for the shape of the nebula (elongated along the pulsar rotation axis [77]) and for its size shrinkage with increasing frequency, from radio to X-rays [75].

As far as gamma-rays are concerned, no detailed morphological information is available, due to the very limited angular resolution of gamma-ray telescopes. For a long time, the only available information simply constrained the gamma-ray nebula to lie within the radio synchrotron one [55,78,79]. The first direct measurement of the Crab nebula extension in gamma-rays became available last year, thanks to the H.E.S.S. telescope [80]. With the analysis of 22 h of observations collected during 6 years of operation, the PWN radial extension was finally determined: it turns out to be ∼52″ in the 700 GeV–5 TeV energy range, and hence, smaller than in the UV (where the extension is ∼2.5′) and very similar to the X-ray size (∼50″), which is perfectly consistent with a picture in which TeV gamma-rays are produced by the synchrotron X-ray-emitting particles. This is also in very good agreement with the results of the only available effort at computing simulated gamma-ray emission maps of the Crab nebula [81]. In this work, the IC emission was computed on top of a 2D

MHD numerical model and maps were produced for different photon energies, showing a shrinkage with increasing energy similar to that observed between radio and X-rays. This can be seen in Figure 4, which also clearly shows how the jet-torus structure should become visible again at TeV energies. Probing the nebular morphology at this level of detail in VHE gamma-rays is however beyond the reach of current and planned instruments [82].

**Figure 4.** IC surface brightness maps at various energies in the gamma-ray range. Each map is normalized to its maximum and plotted in logarithmic scale. Reprinted from Volpi et al. (2008) © 2008 ESO.

One thing that gamma-rays can readily probe, however, is how well 2D MHD models describe the energy content of the nebula: in fact, the main limitations inherent in the assumption of axisymmetry become apparent as soon as one compares the IC spectrum computed from simulations with the available data. As shown in Figure 5, the 2D MHD simulations largely overpredict the IC flux. Indeed, the limits of axisymmetric models are evident when trying to describe the large-scale properties of the PWN, primarily the global magnetic field structure. The imposed symmetry results in an artificial pile-up of magnetic loops along the polar axis and an enhanced compression of the magnetic field in the inner nebula. In order to reproduce the nebular morphology, one is then forced to adopt an artificially low magnetization of the flow (*σ* ≤ 0.1), and as a result, the overall magnetic energy in the nebula is underestimated. In order to reproduce the synchrotron spectrum, one is then forced to inject into the nebula a larger number of particles than in reality, which is readily revealed by the IC flux. The particle energy losses are also underestimated, and this forces one to assume an injection spectrum for high-energy particles that is steeper than what is deduced from X-ray spectral index maps of the inner nebula [75].

**Figure 5.** Total integrated spectrum of the Crab nebula computed on top of the 2D MHD numerical model by [77]. The zoom-in on the gamma-ray spectrum highlights the fact that the IC emission can be correctly reproduced if the magnetic field strength is artificially rescaled so as to ensure an average value of ∼200 μG (this is how the spectrum in the inset is obtained). Different symbols-colors reproduce data at the different energy bands, as taken from [83] and references therein.

The solution to many of these problems appeared with results from the first 3D MHD simulations [84]. With the third spatial dimension available, kink-type plasma instabilities produce considerable mixing of the magnetic field in the entire nebula, with an ensuing high level of magnetic dissipation. This definitely allows the initial magnetization in the pulsar wind to be increased to values of order unity [84–87]. The main limitation of 3D models is that they require huge numerical resources and long computation times. For this reason, in [84], only a very initial phase of the evolution of the Crab nebula was investigated, for a total of ∼70 years, so that the self-similar expansion phase was not yet reached. A longer simulation, fully reaching the self-similar expansion phase, was presented in [85]. Synchrotron emission maps computed on top of these simulations show that, for parameters appropriate to reproduce the X-ray morphology, the surface brightness distribution at radio and optical frequencies becomes much more uniform in 3D, reflecting the structure of the magnetic field, which appears to be rather different from what was originally found based on 2D models [88], with differences increasing with distance from the shock and from the equatorial plane.

In Figure 6, we show color maps of the magnetic field strength in 2D (left) and 3D (right), corresponding to *σ* = 0.025 and *σ* = 1, respectively. The first thing to notice is that in 3D, the pile-up of field lines around the polar axis is much reduced and their filling factor in the nebula much more uniform. This is due to the fact that, even though a purely toroidal magnetic field is injected at the shock surface, the mixing is so efficient that a poloidal component immediately develops, becoming comparable in magnitude to the toroidal one within a distance from the pulsar of order 2–3 times the TS radius. On the other hand, the magnetic field remains almost toroidal in the inner nebula, so that predictions from 2D axisymmetric models remain valid in this region.

**Figure 6.** Comparison of the magnetic field intensity (in logarithmic scale and units of G) between a 2D MHD model and a 3D one, which both reproduce the X-ray morphology (from original simulations presented in [85,88]).

The second noticeable thing is that, in spite of the much higher magnetization adopted for the 3D simulation (a factor of 40 larger *σ*), the average magnetic field in the nebula is only about a factor of 2 higher than in 2D. This is a result of efficient magnetic dissipation: [85] found that magnetic dissipation is so high that even an initial magnetization of order unity is not enough to produce an average magnetic field of the expected strength, ∼150–200 μG, so that the actual wind magnetization might have to be even larger than unity, revising the initial estimate based on 1D steady-state modeling upwards by more than 3 orders of magnitude and strongly mitigating the *σ*-problem.

Before concluding this section, we think it is important to remark that in current 3D simulations, magnetic dissipation has a purely numerical origin, while the actual physical process at work in the Crab nebula plasma remains unconstrained. In reality, how much of the injected toroidal field is left at any point in the nebula can be constrained by comparison of polarization maps with observations (see e.g., [84,89]). Important new insights in this respect will soon be provided by the availability of X-ray polarimetric observations [90].

### *3.2. Time-Variability and Particle Acceleration*

The era of multi-D MHD simulations also opened up the possibility of using spatially resolved time-variability as an additional, powerful diagnostic for the physical properties of the plasma in the nebula and, most notably, for the processes responsible for particle acceleration within it. Brightness variations of the nebular structures have been known to occur, at optical frequencies, for a long time: the so-called *wisps* were first identified by [91]. These features, strongly resembling outward propagating plasma waves, appear at distances from the pulsar comparable with the TS radius in the equatorial plane, and then progressively fade while moving outward, on time-scales from weeks to months [92]. Similar features were later observed both in the X-rays [70] and in the radio band [93,94]. In spite of these morphological variations, however, the integrated emission was found to vary only by a few percent per year [95].

The *wisps'* appearance and time evolution, however, are not the same at all wavelengths [96], and vary in a way that, within the MHD framework, can only be interpreted as due to differences in the particle spectrum at different locations along the shock front, or, in other words, to particles in different energy ranges being accelerated in different places [88]. On the other hand, the plasma conditions along the TS front are expected to be highly nonuniform, especially in terms of the magnetization of the flow (see Figure 3), which is an important parameter in determining the kind of acceleration process that can be locally at work, as we discuss below.

In fact, how particle acceleration occurs in the Crab nebula in different energy ranges is not understood (see e.g., [97,98] for a review). The nebular synchrotron spectrum is consistent with a broken power-law, with a particle spectral index *γ<sup>R</sup>* = 1.6 for radio-emitting particles and *γ<sup>X</sup>* = 2.2 for X-ray-emitting ones (see e.g., [99]). At the highest energies, particles must be accelerated at the TS; otherwise, the decrease in size of the nebula with increasing frequency could not be explained. On the other hand, radio-emitting particles could, in principle, be accelerated anywhere in the nebula. Evidence for the coexistence of two different particle populations was suggested by Bandiera et al. [100] after a comparison of radio, millimetric, and X-ray maps of the Crab nebula. The observation of *wisps* at radio frequencies seemed to exclude this possibility [93], but on closer inspection, this phenomenon can well be accounted for within the MHD framework as simply due to the structure of the magnetic field and of the MHD flow: [77] showed that radio emission maps and time-variability can be reproduced even assuming that radio-emitting particles are uniformly distributed in the nebula, as would be the case for diffuse acceleration in the body of the nebula, associated with stochastic magnetic reconnection or with a Fermi-II process due to MHD turbulence. The frequency-dependent behavior of the *wisps* can only be accounted for, within MHD transport, if X-ray-emitting particles are accelerated in the equatorial sector of the TS, while lower-energy particles are predominantly accelerated elsewhere, either in the body of the nebula or at high latitudes at the TS [88]. In Figure 7, we show, on the left, the radio emission map obtained by [77], assuming a uniform distribution of radio-emitting particles in the nebula. 
The right panel of the same figure shows, instead, the time-evolution of the surface brightness peak at radio (orange) and X-ray (blue) frequencies when particles are injected in the sectors of the TS shown in Figure 3, highlighted with the corresponding colors.

With an estimated Lorentz factor of the wind in the range 10<sup>4</sup>–10<sup>7</sup>, the shock in the Crab nebula is among the most relativistic known. The mechanism usually invoked for particle acceleration in astrophysical sources, diffusive shock acceleration, or first-order Fermi process (Fermi-I), can only work at such a shock if the magnetization of the wind is low enough, *σ* ≲ 10<sup>−3</sup> [101] (see [102] for a review). This condition can only be realized in a small equatorial sector of the wind, assuming efficient magnetic reconnection in the striped wind upstream of the shock, or in the vicinity of the polar axis, where the magnetic field naturally decreases and O-point-type reconnection is also possible. The results found by [88] concerning the preferred location of X-ray-emitting particle acceleration are consistent with acceleration occurring mainly in the equatorial region, and *γ<sup>X</sup>* is consistent with the outcome of Fermi-I acceleration. A question that remains open, and waits to be addressed in the framework of 3D MHD simulations, is whether a sufficiently large fraction of the flow satisfies the condition of low *σ* required by the Fermi-I process.

Other possible acceleration mechanisms that have been suggested are associated with driven magnetic reconnection occurring at the TS [103] or with resonant absorption of ion-cyclotron waves [104,105]. The former requires very large wind magnetization (*σ* ≳ 30 at the TS) and pair multiplicity (*κ* ≳ 10<sup>8</sup>), while the latter requires the presence of ions in the pulsar wind. Both questions are again to be addressed by gamma-ray observations (see e.g., [98]).

As far as requirements on *κ* are concerned, from the point of view of pulsar theory, a value as large as *κ* ≈ 10<sup>8</sup> seems very difficult to account for, in spite of the recent and ongoing evolution of pulsar magnetospheric models (Section 2). In addition, with *κ* ≈ 10<sup>8</sup>, the wind would reconnect before the TS [65] (with possible signatures in gamma-rays [26]) and the magnetization could not be as high as required. Finally, even ignoring all the theoretical difficulties, and simply counting the number of particles that have accumulated in the nebula during its history, through combined modeling of the synchrotron and IC spectrum, that value of *κ* is too large by ≈2–3 orders of magnitude [106]. Of course, the lack of evidence and/or motivation for large *κ* does not exclude the possibility for magnetic reconnection to be responsible for acceleration in a limited energy range, as we further discuss in Section 3.3.

Concerning acceleration via ion-cyclotron absorption, this mechanism requires a sizable fraction of the wind energy to be carried by ions [105], and hence, that the pulsar multiplicity be not too large, *κ* ≲ 10<sup>4</sup> [98]. The implied population of ions would be made of particles with a Lorentz factor equal to that of the wind, 10<sup>4</sup> < Γ*<sup>w</sup>* < 10<sup>7</sup>, and the only direct probe of their presence can come from gamma-ray or neutrino emission [107]. Recent LHAASO observations of the Crab nebula might hold important clues in this respect [61]. This aspect will be further discussed in Section 3.4.

Of course, the possibility of analyzing spatially resolved time-variations in the gamma-rays would provide essential clues to the acceleration mechanism, but this type of analysis is currently out of reach due to the poor spatial resolution of the observations. According to the picture discussed above, variations in the TeV domain are not expected to be dramatic in the case of Crab: since the emission is mostly due to the interaction between radio-emitting particles and the internal synchrotron radiation [81], a radio-*wisp*-like behavior is expected, accompanied by very small variations of the integrated flux. However, the situation is completely different in the GeV range, where one is looking at the cut-off of the synchrotron spectrum and, hence, in the case of Crab, at particles whose acceleration times are comparable with their radiation loss times. The dramatic consequences that this fact has on the Crab-integrated emission in the GeV energy range will be the subject of the next section.

**Figure 7.** *Left panel:* Surface brightness map at the radio frequency of 1.4 GHz. Small scales have been subtracted and the map convolved with the VLA PSF. The intensity is given in linear scale and in mJy/arcsec<sup>2</sup> units. The emitting particles are assumed to be uniformly distributed in the nebula. *Right panel:* Non-coincidence of the X-ray (aquamarine circles) and radio at 5 GHz (orange diamonds) *wisps*, produced by particles accelerated in the regions highlighted with the same colors in the left panel of Figure 3. More discussion on this can be found in [88]. The map in the left panel is reprinted with permission from Olmi et al. (2014) © 2014 Olmi et al.

### *3.3. The Crab Flares and Their Implications for Particle Acceleration*

A wholly unexpected discovery that came from gamma-ray observations of the Crab nebula was that of episodes of extremely fast gamma-ray variability, the so-called gamma-ray flares. Global variations of the emissivity were predicted in the *Fermi* band as a consequence of rapid synchrotron burn-off of particles at the high-energy cut-off of the distribution [108]. Assuming radiation-reaction-limited acceleration, the maximum energy up to which electrons can be accelerated is

$$E_{\rm max,rad} = m_e c^2 \left(\frac{6\pi e\eta}{\sigma_T\, B}\right)^{1/2} \approx 6\ {\rm PeV}\ \eta^{1/2}\, B_{-4}^{-1/2} \tag{1}$$

where *c* is the speed of light; *e* and *m<sub>e</sub>* are the electron charge and mass, respectively; *σ<sup>T</sup>* is the Thomson cross section; and we have assumed the acceleration to be due to an electric field of strength *η B*, with *B* the magnetic field strength. The second equality provides an estimate of the maximum achievable energy for the magnetic field strength expressed in units of *B*<sub>−4</sub> = *B*/(10<sup>−4</sup> G), with 10<sup>−4</sup> G the value estimated as the nebular average. One can easily see that PeV energies can only be reached for *η* ≈ 1 and magnetic field strengths not much in excess of 10<sup>−4</sup> G.
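As a quick numerical illustration (ours, not part of the original analysis), Equation (1) can be evaluated directly in Gaussian (CGS) units; the constant values below are standard:

```python
import math

# Standard CGS constants
E_CHARGE = 4.8032e-10   # electron charge [esu]
SIGMA_T = 6.6524e-25    # Thomson cross section [cm^2]
ME_C2 = 8.1871e-7       # electron rest energy [erg]
ERG_PER_PEV = 1.6022e3  # erg in 1 PeV

def e_max_rad(eta, b_gauss):
    """Radiation-reaction-limited electron energy of Eq. (1), in PeV."""
    gamma_max = math.sqrt(6.0 * math.pi * E_CHARGE * eta / (SIGMA_T * b_gauss))
    return gamma_max * ME_C2 / ERG_PER_PEV

# eta = 1 and B = 1e-4 G (the nebular average) recover the quoted ~6 PeV
print(round(e_max_rad(1.0, 1e-4), 1))  # -> 6.0
```

Note the weak *B*<sup>−1/2</sup> dependence: doubling the field lowers the attainable energy by only √2, so the estimate is robust against the exact value of the nebular-average field.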

In this synchrotron-loss limited regime, it is easy to see that the maximum energy of synchrotron-emitted photons only depends on *η* and reads

$$
\epsilon_{\rm max,sync} = \frac{3}{2}\,\hbar\,\frac{e B}{m_e c}\left(\frac{E_{\rm max,rad}}{m_e c^2}\right)^2 = \eta\,\frac{9 \pi \hbar\, e^2}{m_e c\,\sigma_T} \approx 230\ \eta\ {\rm MeV}\,. \tag{2}
$$

Global emissivity variations are therefore expected [108] in the hundreds of MeV range on time-scales:

$$t_{\rm var} \approx m_e c\,\sqrt{\frac{6\pi}{e\,\sigma_T}}\,\eta^{-1/2}\, B^{-3/2} \approx 2.5\ \eta^{-1/2}\, B_{-4}^{-3/2}\ {\rm months}\,. \tag{3}$$
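A short numerical check (ours) of Equations (2) and (3), again with standard CGS constant values, confirms the quoted ∼230 MeV cutoff and ∼2.5-month time-scale:

```python
import math

# Standard CGS constants
HBAR = 1.0546e-27       # erg s
E_CHARGE = 4.8032e-10   # esu
SIGMA_T = 6.6524e-25    # cm^2
ME = 9.1094e-28         # g
C = 2.9979e10           # cm/s
ERG_PER_MEV = 1.6022e-6
MONTH = 30 * 86400.0    # s

def eps_max_sync_mev(eta):
    """Synchrotron cutoff photon energy of Eq. (2), in MeV; independent of B."""
    return 9 * math.pi * HBAR * E_CHARGE**2 * eta / (ME * C * SIGMA_T) / ERG_PER_MEV

def t_var_months(eta, b_gauss):
    """Synchrotron burn-off time-scale of Eq. (3), in months."""
    return ME * C * math.sqrt(6 * math.pi / (E_CHARGE * SIGMA_T)) \
        * eta**-0.5 * b_gauss**-1.5 / MONTH

print(round(eps_max_sync_mev(1.0)))        # -> 236, the quoted ~230 MeV
print(round(t_var_months(1.0, 1e-4), 1))   # -> 2.6, the quoted ~2.5 months
```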

The big surprise came with *AGILE* [49] and *Fermi* [50] observations showing, on top of continuous small variations, some dramatic events in which not only does the flux increase by a factor of several (up to 30 for the most spectacular event, in April 2011) over a period of one to a few weeks, but the emission extends well beyond *ε*<sub>max,sync</sub>, reaching GeV photon energies. In addition, the amount of energy released is typically non-negligible, and in the biggest detected flare it was remarkable, corresponding to an isotropic luminosity of *L*<sub>max</sub> = 4 × 10<sup>36</sup> erg/s ≈ 0.01 *E*˙. At present, 17 flares have been clearly identified [109], with a flare rate of 1.5 per year. In addition to episodes of sudden increase of the gamma-ray flux, dips are observed in the same energy band [110].

The flares are not easy to interpret, and up to now, there is still no accepted model to explain them. First of all, emission beyond 230 MeV implies *η* > 1, which cannot be accommodated within ideal MHD. The possible solutions to this puzzle are as follows: (1) the acceleration is due to a nonideal mechanism allowing *η* > 1, as can be the case for magnetic reconnection; (2) the acceleration occurs in a region of low magnetic field and the emission then occurs in a more magnetized region; (3) the emission comes from particles with mildly relativistic bulk motion, so that the frequency and power of the radiation are actually Lorentz-boosted. All these possibilities have been widely explored in the literature. In the first suggested scenario, acceleration of the particles responsible for the flare would be part of the process of magnetic reconnection occurring in the vicinity of the TS. This idea has been thoroughly investigated by means of numerical simulations [111,112]. The general conclusion of these works is that acceleration by X-point magnetic reconnection would in fact explain emission beyond the synchrotron cut-off and a highly variable flux. In the brightest flare, the flux doubles in less than 8 h [113]. Such a short time-scale implies emission from a very compact region, of size *L* ≈ 3 × 10<sup>−4</sup> pc; in addition, if interpreted in terms of Equation (3), it implies *B* ≈ *η*<sup>−1/3</sup> 3.7 mG. Clearly, this finding is challenging for any value of *η* < 1, and in fact, as we will discuss later in more detail, it is challenging even for *η* ≈ 1, in light of the recent LHAASO observations (see Section 4).
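These flare constraints follow from simple order-of-magnitude arithmetic, which we sketch below (our check; the Crab spin-down luminosity *E*˙ ≈ 5 × 10<sup>38</sup> erg/s is an assumed, commonly quoted value):

```python
import math

C = 2.9979e10           # cm/s
PC = 3.0857e18          # cm per parsec
MONTH = 30 * 86400.0    # s
T_DOUBLE = 8 * 3600.0   # s, flux-doubling time of the brightest flare

# Light-crossing estimate of the emitting-region size
size_pc = C * T_DOUBLE / PC

def b_flare_gauss(t_var_s, eta=1.0):
    """Field implied by a burn-off time t_var, inverting Eq. (3)."""
    return 1e-4 * (2.5 * MONTH / t_var_s) ** (2.0 / 3.0) * eta ** (-1.0 / 3.0)

print(f"{size_pc:.1e} pc")                      # -> 2.8e-04 pc, i.e., ~3e-4 pc
print(f"{b_flare_gauss(T_DOUBLE)*1e3:.1f} mG")  # -> 3.7 mG

# Energetics: L_max = 4e36 erg/s against the assumed Edot = 5e38 erg/s
print(round(4e36 / 5e38, 3))                    # -> 0.008, i.e., ~0.01 Edot
```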

In a reconnection scenario, the fast time-scale can be associated with the high level of fragmentation of the reconnection layer, made of a chain of magnetic islands, or plasmoids. Furthermore, these move with relativistic bulk speeds, which helps enhance the intensity and frequency of the emitted radiation via Doppler boosting. Additional beaming is also provided by kinetic effects associated with the anisotropy of the particle distribution in the reconnection layer [114]. Despite all these promising features, 3D PIC simulations of magnetic reconnection indicate that the process is not fast enough to fully account for the properties of Crab flares [115]: the reconnection rate is typically found to be *v*<sub>rec</sub>/*c* ≲ 0.1 [116], likely translating into too weak an electric field.

A possible alternative is provided by explosive magnetic reconnection [117–120], where the process occurs on a dynamical time-scale. Very high Lorentz factors can be reached, because the highest energy particles are accelerated by the parallel electric field in the current layers and only suffer radiation losses after leaving the layer, building a scenario in which acceleration and radiation occur separately and the requirement *η* > 1 imposed by Equation (1) is not an issue anymore. In addition, the radiation is beamed, which helps with fast variability, and also with the implied energetics.

Besides scenarios invoking magnetic reconnection, a different class of models has attempted to explain the flares within the standard picture of Fermi acceleration. An early suggestion by [121] is that the flare emission be interpreted as synchrotron emission in the cut-off regime in a magnetic field with stochastic fluctuations, such as is expected downstream of a shock that is efficiently accelerating particles. An interesting aspect of this picture is that it has been shown to explain not only flux increases, but also depressions [110]. The magnetic field strength required to explain the flares is in the mG range. The highly turbulent structure invoked by [110,121] could be the outcome of another scenario that has received much attention: that of a corrugated shock with mildly relativistic motion [122,123]. Of course, the constraint from Equation (3) would be relaxed if the variability has a different origin (unrelated to the acceleration time-scale) or if the emission comes from regions where the plasma is moving at a mildly relativistic speed, in which case the intrinsic time-scale of the variations would be longer by a factor equal to the flow Lorentz factor. More recently, a modified picture of the shock, taking into account the latitudinal dependence of the magnetic field, has been numerically investigated [124], showing that mildly relativistic bulk motion develops, with Lorentz factor Γ*<sup>w</sup>* ∼ 3–4, enough to strongly relax all the constraints on the frequency, time-variability, and energetics of the flares. In particular, with Γ*<sup>w</sup>* in this range, the previously discussed mechanism of ion-cyclotron absorption also provides values in the right ballpark for the abovementioned quantities, even with a magnetic field around 100 μG.

### *3.4. Constraints on the Pulsar Wind Composition from >100 TeV Emission*

As discussed above, the pulsar wind is generally considered to be mostly composed of electron–positron pairs, while the possible presence of a hadronic component is still a matter of debate [48,107,125,126]. If present, despite being a minority by number, hadrons could even be energetically dominant in the wind, drastically changing our understanding of the pulsar wind properties. The relativistic hadrons possibly present in the Crab nebula could generate electromagnetic emission in the form of VHE gamma-rays deriving from the decay of neutral pions produced in nuclear collisions with the gas in the SN ejecta. This spectral contribution is only expected to become detectable above ∼100–150 TeV, where the IC scattering emission starts to be suppressed by the Klein–Nishina effect.

The current IACTs (Imaging Atmospheric Cherenkov Telescopes), such as H.E.S.S. and MAGIC, could find no evidence of hadronic emission up to their sensitivity limit around tens of TeV. Emission beyond 100 TeV is currently only accessible with sufficient sensitivity by water Cherenkov detectors and air shower detectors. Indeed, the Crab nebula was detected above 100 TeV by HAWC employing the former technique [59] and by Tibet AS-*γ* [60] employing the latter. Very recently, LHAASO, combining both techniques, has obtained the record-breaking detection of >PeV photons from this source [8], opening up a window to finally see the possible emergence of the hadronic contribution. In fact, the increasing uncertainties above 500 TeV make the LHAASO spectrum still consistent with a purely leptonic origin of the emission. Under such an assumption, the PeV-range data can be effectively used to constrain the strength of the magnetic field at the shock, which cannot exceed (112 ± 15) μG: otherwise, as one can readily see from Equation (1), even assuming maximally efficient acceleration (*η* = 1), radiation reaction would make it impossible to accelerate particles up to the 2.8 PeV energy needed to explain the highest-energy data point as due to IC scattering in the Klein–Nishina regime. A side remark is that in such a field, even a 2.8 PeV electron would emit synchrotron radiation at 50 MeV; even a Lorentz boost by a factor Γ*<sup>w</sup>* ∼ 3–4 would not be enough to account for the Crab gamma-ray flares. In other words, the flares should come from a different region of the nebula, with a higher magnetic field, or otherwise imply the presence of ≳10 PeV electrons, extremely close to the maximum potential drop available from the Crab pulsar, which is also the limiting energy for particles accelerated anywhere in the nebula (see Section 4).
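The side remark above is easy to verify numerically (our check, using the characteristic synchrotron photon energy *ε* = (3/2)ħ(*eB*/*m<sub>e</sub>c*)*γ*<sup>2</sup> with standard CGS constants; exact prefactors vary between conventions, so the result should be read at the factor-of-two level):

```python
# Characteristic synchrotron photon energy of a 2.8 PeV electron
# in the ~112 uG field quoted above (Gaussian/CGS units)
HBAR = 1.0546e-27       # erg s
E_CHARGE = 4.8032e-10   # esu
ME = 9.1094e-28         # g
C = 2.9979e10           # cm/s
ME_C2 = 8.1871e-7       # erg
ERG_PER_MEV = 1.6022e-6
ERG_PER_PEV = 1.6022e3

def sync_photon_mev(e_electron_pev, b_gauss):
    """eps = (3/2) hbar (e B / m_e c) gamma^2, in MeV."""
    gamma = e_electron_pev * ERG_PER_PEV / ME_C2
    return 1.5 * HBAR * (E_CHARGE * b_gauss / (ME * C)) * gamma**2 / ERG_PER_MEV

print(round(sync_photon_mev(2.8, 112e-6)))  # -> 58, i.e., the ~50 MeV quoted above
```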

On the other hand, taken at face value, the LHAASO data seem to suggest that a new component might be showing up at the highest energies. This new component is consistent with a quasi-monochromatic distribution of protons with energy around 10 PeV (as discussed in Vercellone et al. in preparation). This is exactly what would be expected by models assuming that protons are part of the wind emanating from the Crab pulsar with a Lorentz factor Γ*<sup>w</sup>* ≈ 10<sup>7</sup>: in this case, their Larmor radius in a 100 μG field is of order *R*TS, so large that their energy distribution would not be much altered at the shock [105]. Of course, smoking-gun evidence would be the detection of neutrinos [107], likely possible with the upcoming sensitivity improvement of dedicated experiments.
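The Larmor-radius argument can be checked with one line of arithmetic (ours; the value *R*TS ≈ 0.1 pc for the Crab termination shock is an assumed, commonly quoted figure):

```python
# Larmor radius r_L = E/(eB) of an ultra-relativistic particle (CGS units)
E_CHARGE = 4.8032e-10   # esu
PC = 3.0857e18          # cm per parsec
ERG_PER_PEV = 1.6022e3

def larmor_pc(e_pev, b_gauss):
    return e_pev * ERG_PER_PEV / (E_CHARGE * b_gauss) / PC

# A 10 PeV proton in a 100 uG field: r_L ~ 0.1 pc, comparable with R_TS ~ 0.1 pc
print(round(larmor_pc(10.0, 100e-6), 2))  # -> 0.11
```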

As far as gamma-ray data alone are concerned, in order to find clear evidence for the emergence of a hadronic component, more precise data and better modeling of the IC emission are needed, as well as a better understanding of the possible systematics entailed by the different techniques of VHE photon detection. The high-altitude detectors provide flux measurements that are usually below those of IACTs (comparing H.E.S.S. and MAGIC Crab data points with those of Tibet AS-*γ* and LHAASO). While the discrepancy is not large, the error bars attached to the points do not overlap (see Figure 1), which is somewhat puzzling, given that the Crab nebula is the primary calibration source in this energy range. This lack of overlap might be due to systematic errors not being included in the error bars. On the other hand, multiple independent measurements of the Crab nebula spectrum in this energy range offer the perfect opportunity to properly assess the systematics of these complex observations. Decisive insight will be provided by next-generation IACTs with good sensitivity beyond 100 TeV, such as the CTA SSTs (Small-Sized Telescopes) in the southern hemisphere and the ASTRI Mini-Array in the north.

Before concluding this section, we notice that the Crab nebula is not the only source to have been detected at EHE. Very recently, LHAASO [8] has also detected about ten more EHE emitters in the Galaxy (partially overlapping with the sources already detected by HAWC [127] beyond 56 TeV). For the majority of these sources, the distance between the center of the emission and the nearest pulsar is smaller than or comparable with the instrument PSF, so it is plausible that almost all these PeVatrons are associated with pulsars (and are possibly leptonic in nature [128]). The much better spatial resolution of IACTs might also help to shed light on the real nature of these extreme accelerators, and to assess whether acceleration of particles to PeV energies and beyond is a generic property of PWNe powered by energetic pulsars, rather than a unique property of Crab.

### **4. The Crab Nebula and the Other PWNe**

While considered the prototype PWN, the Crab nebula differs from all other sources in this class in many respects, especially when it comes to gamma-rays. The first noticeable difference is that Crab is the only known PWN whose gamma-ray spectrum is partly formed with internal synchrotron radiation as a target. This is a consequence of its very bright synchrotron emission, due to the young age and high magnetic field. In addition, for the same reason, particle acceleration here is limited by radiation reaction, which is likely not the case for older objects with lower magnetic fields. In the latter, electrons can in principle be accelerated up to higher energies, comparable with the entire pulsar potential drop. In fact, the maximum achievable energy in the dissipation region, assumed to be located at a distance *R*TS from the pulsar, is *E*max = *eB*TS*R*TS, where an electric field of the same strength as the magnetic field has been assumed. On the other hand, the magnetic field at *R*TS can be estimated from pressure balance between the ram-pressure-dominated flow upstream of *R*TS and the downstream plasma: *B*TS = *ξ*<sup>1/2</sup>(*E*˙/*c*)<sup>1/2</sup>/*R*TS, with *ξ* ≤ 1 the fraction of wind energy that is turned into magnetic energy. As a result, *E*max ≈ *ξ*<sup>1/2</sup>*e*(*E*˙/*c*)<sup>1/2</sup>, namely, a fraction *ξ*<sup>1/2</sup> of the energy associated with the pulsar potential drop, *E*drop = *e*(*E*˙/*c*)<sup>1/2</sup>.
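Evaluating the potential-drop limit for the Crab gives a few tens of PeV (our estimate; *E*˙ ≈ 5 × 10<sup>38</sup> erg/s is an assumed, commonly quoted value for the Crab spin-down luminosity, and the exact prefactor depends on convention):

```python
import math

E_CHARGE = 4.8032e-10   # esu
C = 2.9979e10           # cm/s
ERG_PER_PEV = 1.6022e3
EDOT_CRAB = 5e38        # erg/s, assumed Crab spin-down luminosity

def e_drop_pev(edot, xi=1.0):
    """E_max ~ xi^(1/2) e (Edot/c)^(1/2), in PeV."""
    return math.sqrt(xi) * E_CHARGE * math.sqrt(edot / C) / ERG_PER_PEV

print(round(e_drop_pev(EDOT_CRAB)))  # -> 39: a few tens of PeV for xi = 1
```

This is consistent with the earlier statement that ≳10 PeV electrons would be "extremely close" to the maximum potential drop available from the Crab pulsar.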

The fact that in the majority of the observed PWNe the maximum particle energy is not limited by radiation losses might have something to do with the lack of flare observations from any source other than the Crab. It certainly has important implications for the escape of particles from evolved systems. At the same time, the fact that in evolved sources the VHE spectrum is uniquely due to upscattering of CMB photons (and occasionally of local IR) has important consequences for the ratio between emission in different energy bands. Particles responsible for the IC emission are generally less energetic than those responsible for the high-energy synchrotron emission: a ∼10-TeV electron produces gamma-rays at 1 TeV with the CMB as a target, while 1-keV synchrotron emission is produced by ∼50-TeV electrons in an ambient magnetic field of 10 μG. This difference in energy of the emitting electrons is reflected in the different lifetimes of a PWN in gamma-rays and in X-rays: a PWN can still be bright in gamma-rays when its X-ray emission is very weak or has faded away entirely.
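The two electron energies quoted above can be estimated as follows (our sketch; the mean CMB photon energy ≈6.3 × 10<sup>−4</sup> eV is an assumed value, and the prefactors differ between conventions, so both numbers should be read at the order-of-magnitude level):

```python
import math

# Standard CGS constants
HBAR = 1.0546e-27       # erg s
E_CHARGE = 4.8032e-10   # esu
ME = 9.1094e-28         # g
C = 2.9979e10           # cm/s
ME_C2_MEV = 0.511
ERG_PER_KEV = 1.6022e-9

def e_ic_tev(e_gamma_tev, eps_ev=6.3e-4):
    """Electron energy (TeV) upscattering photons of mean energy eps_ev
    to e_gamma_tev in the Thomson limit: E_gamma ~ (4/3) gamma^2 eps."""
    gamma = math.sqrt(3.0 * e_gamma_tev * 1e12 / (4.0 * eps_ev))
    return gamma * ME_C2_MEV * 1e-6

def e_sync_tev(e_photon_kev, b_gauss):
    """Electron energy (TeV) whose characteristic synchrotron photon
    (3/2) hbar (e B / m_e c) gamma^2 falls at e_photon_kev."""
    gamma = math.sqrt(e_photon_kev * ERG_PER_KEV
                      / (1.5 * HBAR * E_CHARGE * b_gauss / (ME * C)))
    return gamma * ME_C2_MEV * 1e-6

print(round(e_ic_tev(1.0)))          # -> 18, i.e., ~10 TeV in order of magnitude
print(round(e_sync_tev(1.0, 1e-5)))  # -> 39, i.e., ~50 TeV in order of magnitude
```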

Considering a birth rate of one pulsar every 100 years in our Galaxy [129], and an average lifetime of PWNe in gamma-rays of order 100 kyr, the total number of PWNe possibly detectable at TeV energies is of the order of 1000. Most of these would be too old to be observed at other frequencies. Evolved PWNe have in fact extended and diffuse radio emission, difficult to detect against the background, while their X-rays are hardly detectable due to the burn-off of the emitting particle population. Moreover, old systems have gone through the so-called *reverberation* phase, when the SN reverse shock, traveling towards the center of the SN explosion, interacts with the PWN, likely causing a contraction of the nebula, with the consequent compression of the magnetic field and increase of the particle radiation losses [130–132]. Due to the system geometry and/or to the properties of the surrounding medium, the reverse shock is likely to be nonspherical, causing an asymmetric deformation of the nebula [133]. Additional deformation is likely induced by the PSR proper motion: the mean kick velocity in the PSR population is of order *V*PSR ∼ 350 km/s [129], so that in a large fraction of sources, the pulsar accumulates a sizable displacement from the TeV-emitting nebula during the system evolution. The expected asymmetries and the displacement from the parent pulsar position are therefore an important complication for the gamma-ray identification of PWNe. As an example, out of the 24 extended sources revealed in the H.E.S.S. galactic plane survey [7], only 14 have a multi-wavelength counterpart that allows for a firm association of the source with a PWN. In the *Fermi*-LAT 3FGL catalog [134], unidentified sources represent around 20% of the detections at VHE. It seems plausible that many of these unidentified, bright gamma-ray sources are actually PWNe: the implication is that this class could account for up to 40% of the total gamma-ray sources in the sky. 
A property of evolved PWNe that has attracted much attention in recent times is the release in the ISM of relativistic electron–positron pairs. This process has implications that go beyond PWN physics, since the pairs released by PWNe are currently the best candidates to provide an astrophysical explanation for the so-called *positron excess* observed in cosmic rays at energies above ∼10 GeV [135–137].

The most energetic particles in the nebula, with energy close to *E*drop, have been shown to efficiently escape from the head of the bow shock that forms at the interface between the PWN and the ISM, once the pulsar has emerged from the SNR (*bow shock* PWN). Those particles have large Larmor radii, comparable with the bow shock thickness in the head of the system, and can stream in the outer medium along the magnetopause at the contact discontinuity between the nebula and the ISM [138].

Depending on their energy and on the properties of the surrounding ISM, the escaping particles are expected to form diffuse halos around the bow shock head, or extended and collimated jets, possibly misaligned with the pulsar direction of motion (see Figure 8). Structures of this kind have been observed in recent years to emerge from many bow shock nebulae in the X-rays [133,139–144].

The escaping particle flux also shows evidence of effective charge separation.

This property could play a key role in understanding the formation of the so-called *gamma-ray halos*. This new class of sources was first identified by HAWC, which detected extended halos of multi-TeV emission surrounding two evolved systems: Geminga (PSR B0633+17) and Monogem (PSR B0656+14) [145]. The size of the halo around Geminga is much larger than the observed size of the nebula in X-rays (∼25 pc vs. ∼0.2 pc), so that it must be produced by particles that have escaped from the system. On the other hand, the extension is too small to be produced by particles propagating with the standard Galactic diffusion coefficient, since the expected size would in that case be a factor of ∼100 larger. A possible explanation has been searched for in a modification of the diffusion properties around the source, possibly caused by self-generated turbulence associated with the electrons and positrons leaking from the nebula [146], or by the injection of MHD turbulence by the parent SNR [147]. At present, understanding the formation of TeV halos is one of the big challenges in high-energy astrophysics (see e.g., Lopez-Coto et al. in preparation), both for their possible implications for galactic cosmic ray transport and for their implications for future gamma-ray observations. In fact, these sources could provide an important source of confusion, being weak and extended and not easy to identify. The number of detectable halos expected in the TeV sky is also still a matter of debate, with estimates ranging from many (∼50–240) [148] to a few [149]. The need for better theoretical understanding and physically motivated predictions of their abundance and location is apparent.

In this respect, the Crab nebula is certainly not a prototype, and a better understanding of this source is unlikely to help.

**Figure 8.** Maps of bow shock nebulae from 3D MHD simulations, with density contours (in gray) and the flux of escaping leptons (of two different energies). Dots of different colors indicate particles injected at different locations in the pulsar wind: the majority of escaping particles are injected in the polar region of the wind (red and green), while very few of them come from the equatorial region. In both plots, the PSR direction of motion is aligned with the *Z* direction, while the magnetic field, with strength *B*ISM = 0.01*ρ*ISM*V*<sup>2</sup>PSR (with *ρ*ISM the ISM mass density), lies in the orthogonal plane. Plots are based on the simulations presented in [138]. The figure on the left is reprinted from Olmi & Bucciantini 2019 © 2019 Olmi & Bucciantini.

### **5. Summary and Future Prospects**

The Crab nebula and its pulsar are certainly among the most-studied astrophysical sources in the sky, and as such, they provide an excellent laboratory to investigate many aspects of high-energy astrophysics and relativistic plasma physics. At the same time, this system has proven to be an endless source of surprises. The discovery of the Crab pulsar was the confirmation that radio pulsars are actually rotating neutron stars, while the study of the Crab nebula has taught us that most of the pulsar spin-down energy goes into a highly relativistic and magnetized outflow. In this article, we reviewed what we have learned about the pulsar and the nebula in the last two decades. While both objects have a very broad emission spectrum, high-energy observations, and gamma-ray observations in particular, have played a special role in recent developments.

We have seen in Section 2 how HE and VHE observations have put stringent constraints on the origin of the pulsed gamma-ray emission, enforcing the view that it is produced far from the pulsar, at distances ≳ *RLC*, and suggesting new scenarios for the related process of pair creation.

In Section 3, we reviewed how our understanding of the PWN plasma dynamics has changed in recent years, thanks to a combination of improved modeling and high-quality observations. We discussed how 2D and 3D MHD models of the nebular dynamics have allowed researchers to solve (or alleviate) some of the mysteries of the Crab nebula, such as the *wisps* activity, the origin of the X-ray emitting jet, and the *σ*-problem. The jet is explained as a result of an anisotropic energy flow from the pulsar (higher along the pulsar rotational equator than along the polar axis) and of the dynamical effect of the hoop stresses associated with the toroidal magnetic field. This explanation requires the wind magnetization *σ* to be sufficiently large. The latter must be much larger than the value of *σ* ∼ 10<sup>−3</sup> that was originally estimated, and likely *σ* of order a few, in order for the gamma-ray spectrum of the nebula to be correctly accounted for. The variability of the *wisps* is naturally found in time-dependent MHD modeling, and the *wisps'* appearance at different wavelengths implies different locations for the acceleration of particles in different energy ranges: in particular, X-ray-emitting particles must be accelerated in the equatorial sector of the shock, while lower-energy particles can be accelerated anywhere. What mechanisms are responsible for particle acceleration in the different energy ranges is an unsettled question, because all the proposed mechanisms have strengths and weaknesses, and none can be completely ruled out, given our limited knowledge of the wind composition and magnetization at different locations along the shock front.

In spite of our ignorance of which process is actually at work, in Sections 3.3 and 3.4 we have shown what an extraordinary accelerator the Crab nebula is, as highlighted in the last decade by the gamma-ray flares and, very recently, by the detection of PeV photons. Several different scenarios have been proposed to explain the flares, with their emission beyond the synchrotron cut-off frequency and extremely fast variability. However, most of these proposals assume the emission to come from a region with a magnetic field of mG strength. Such a value of the field is one order of magnitude larger than implied by the detection of PeV emission, if the latter is of leptonic origin and due to IC scattering.

The PeV data are also especially intriguing because there is a suggestion that a new component might be showing up at the VHEs, consistent with a quasi-monochromatic distribution of protons with energy ∼10 PeV. The presence of hadrons in the pulsar wind would be a paradigm-changing discovery—not only would it change the current view of the pulsar outflow (with effects on the modeling of both the pulsar magnetosphere and the nebula), but it would also have consequences on cosmic ray astrophysics, lending support to the idea that fast-spinning, highly magnetized neutron stars can be major contributors of ultra-high-energy cosmic rays.

However, as we discussed in Section 3.4, smoking gun evidence for the presence of hadrons in the Crab pulsar wind requires more precise data and possibly better control on the systematics at VHE. The contribution of hadrons in the Crab spectrum is expected to emerge above around 150–200 TeV, where IC starts to be suppressed by the Klein–Nishina effect. The next generation of IACTs (CTA and ASTRI Mini-Array), with sensitivity extended to this energy range, is likely to play a crucial role in finally answering this question.
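As a back-of-the-envelope check of this energy scale (our illustration, not part of the article), the electron energy at which IC scattering on CMB photons enters the Klein–Nishina regime can be estimated as $(m_e c^2)^2/(4\epsilon)$:

```python
# Back-of-the-envelope estimate (ours, not from the article): the electron
# energy at which inverse-Compton scattering on CMB photons enters the
# Klein-Nishina regime, roughly E_e ~ (m_e c^2)^2 / (4 * eps_target).
M_E_C2_EV = 0.511e6     # electron rest energy [eV]
EPS_CMB_EV = 6.3e-4     # mean CMB photon energy, ~2.7 kT at T = 2.725 K [eV]

def kn_transition_energy_ev(eps_target_ev):
    """Electron energy [eV] above which 4 E_e eps / (m_e c^2)^2 > 1."""
    return M_E_C2_EV ** 2 / (4.0 * eps_target_ev)

e_kn = kn_transition_energy_ev(EPS_CMB_EV)
print(f"KN transition for the CMB target: ~{e_kn / 1e12:.0f} TeV")
```

Since in the deep Klein–Nishina regime the upscattered photon carries away most of the electron energy, the suppression of the IC spectrum sets in at comparable photon energies, broadly consistent with the 150–200 TeV scale quoted above; the exact value depends on the mix of target photon fields in the nebula.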

As we discussed in Section 4, the gamma-ray astronomy community has long been interested in PWNe as the dominant class of galactic sources, and this interest has been recently increased by the discovery of gamma-ray halos around pulsars. The advent of the new generation of high-sensitivity and high-resolution IACTs, with special reference to CTA, will give us access to a huge amount of new data. PWNe will be the largest population of gamma-ray sources in future surveys (possibly up to 40% of the total). The expected number of newly detected PWNe by CTA is of order 200, while the number of detectable halos is right now very uncertain.

In terms of the population of gamma-ray-emitting PWNe, the Crab cannot be considered as prototypical: due to its young age and high magnetic field, the Crab is, in fact, the only known PWN whose IC spectrum is partly due to synchrotron self-Compton emission, and one of the few gamma-ray-emitting PWNe in which the maximum particle energy is determined by radiation losses, rather than by shortage of available potential. The latter condition, in particular, is critical in determining the presence or absence of a halo, since only particles close to the maximum pulsar potential drop are expected to efficiently escape from the nebula and form an IC scattering halo. Based on available simulations, efficient particle escape at lower energy is only possible from the tail of pulsar bow-shock nebulae. This is an important aspect to assess quantitatively in view of explaining the cosmic ray positron excess as due to pulsars. Measurements of the total lepton spectrum at VHE, which will be possible with next-generation IACTs, will contribute to clarifying this issue. On the other hand, more refined modeling of the highest-energy particle escape and associated plasma instabilities should help clarify the nature of gamma-ray halos and their expected abundance. These are again crucial problems for cosmic ray physics, since they could imply a change of our description of particle transport in the Galaxy. At the same time, the detectability of gamma-ray halos, as well as that of evolved PWNe, is a major challenge for gamma-ray astronomy: these weak and extended sources are not only scientifically interesting, but also need to be taken into account carefully as background contributors against the detection of other sources, most notably potential hadronic PeVatrons, whose identification is one of the main science goals of upcoming facilities.

Going back to the Crab, this source is very different, in many respects, from the evolved PWNe that future IACTs will detect in very large numbers. In this sense, the Crab is not the source to look at if one wants to learn about the average properties of gamma-ray-emitting PWNe. On the other hand, it remains the best place to learn about the processes that make these objects such extreme accelerators, both in terms of efficiency and of achievable energies. By looking at this ever-surprising source, future IACTs might even be able to tell us that PWNe are themselves hadronic PeVatrons.

**Funding:** This research was funded by the Italian Space Agency (ASI) and the National Institute for Astrophysics (INAF) under the agreement ASI-INAF n. 2017-14-H.0, and by INAF under the grants "PRIN SKA-CTA", "INAF Mainstream 2018", and "PRIN-INAF 2019".

**Acknowledgments:** We acknowledge Rino Bandiera and Niccolò Bucciantini for continuous collaboration during the years, and for providing useful comments on this manuscript. We also thank Michele Fiori for providing us with the updated gamma-ray spectrum of the Crab Nebula shown in Figure 1.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this manuscript:


### **References**


### *Review* **Statistical Tools for Imaging Atmospheric Cherenkov Telescopes**

**Giacomo D'Amico**

Department for Physics and Technology, University of Bergen, NO-5020 Bergen, Norway; giacomo.damico@uib.no

**Abstract:** The development of Imaging Atmospheric Cherenkov Telescopes (IACTs) unveiled the sky in the teraelectronvolt regime, initiating the so-called "TeV revolution" at the beginning of the new millennium. This revolution was also facilitated by the implementation and adaptation of statistical tools for analyzing the shower images collected by these telescopes and for inferring the properties of the astrophysical sources that produce such events. Image reconstruction techniques, background discrimination, and signal-detection analyses are just a few of the pioneering studies applied in recent decades in the analysis of IACT data. This (succinct) review has the intent of summarizing the most common statistical tools used for analyzing data collected with IACTs, focusing on their application in the full analysis chain, with references to the existing literature for a deeper examination.

**Keywords:** statistical analysis; gamma rays; IACTs; likelihood; Bayes

**Citation:** D'Amico, G. Statistical Tools for Imaging Atmospheric Cherenkov Telescopes. *Universe* **2022**, *8*, 90. https://doi.org/10.3390/universe8020090

Academic Editor: Francesca Calore

Received: 24 December 2021; Accepted: 26 January 2022; Published: 29 January 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

### **1. Introduction**

Any scientific experiment would be incomplete if only the collected data were reported. A statistical analysis is needed in order to interpret the data and to draw conclusions from the experiment. This is the case for experiments that image the Cherenkov light emitted by the cascade of secondary particles produced by the interaction of gamma rays and cosmic rays within the Earth's atmosphere. These instruments are called Imaging Atmospheric Cherenkov Telescopes, and popular examples are MAGIC [1], HESS [2], VERITAS [3], and CTA [4], which is currently under construction.

By "statistic" we mean any function *S*<sup>1</sup> computed from the observed data assuming the truth of a model. Very well-known examples of such functions are the mean, the variance, and the $\chi^2$. As the observed data consist of random variables, the statistic itself is a random variable, whose distribution can be derived either from theoretical considerations or empirically using Monte Carlo (MC) simulations. A statistical analysis is therefore performed by comparing the observed value of the statistic with the frequency distribution of its values over hypothetical infinite repetitions of the same experiment, assuming a given model of interest. This approach is usually called the "classical" or "frequentist" approach. This comparison (referred to as the test statistic) between the observed statistic and its long-run distribution allows the analyzer to draw a conclusion from the observed data with a procedure that is right<sup>2</sup> $(1-\alpha)\cdot 100\%$ of the time. The value $(1-\alpha)\cdot 100\%$ is referred to as the confidence level (CL). It is important to underline that it is the procedure, not the conclusion, which is correct $(1-\alpha)\cdot 100\%$ of the time. To better clarify this point, we can consider the following claim: "a flux of $10^{-13}\ \mathrm{cm^{-2}\,s^{-1}}$ from the observation of a gamma ray burst is excluded at 95% CL". Claiming that such a value of the flux is excluded is obviously always wrong in infinite experiments in which the true flux of the observed gamma ray burst is $10^{-13}\ \mathrm{cm^{-2}\,s^{-1}}$. However, in these infinite experiments, the procedure would lead the analyzer to this wrong conclusion only 5% of the time<sup>3</sup>. The procedure, i.e., the test statistic and the value $\alpha$ for its significance, is usually dictated by many factors, such as the assumptions about the underlying model, the way the data have been collected, and, sometimes, also the biased conclusions<sup>4</sup> one is willing to derive from the experiment. A common principle [6] is to choose the test statistic with the maximum power, where the power of a statistic is the probability of rejecting a hypothesis that is false. According to the Neyman–Pearson lemma, the most powerful statistic is the likelihood ratio, usually defined in the literature, for reasons that will become clear soon, as follows:

$$-2\log\frac{\mathcal{L}(\theta|D\_{obs})}{\mathcal{L}(\hat{\theta}|D\_{obs})},\tag{1}$$

which by definition can only take values greater than or equal to zero, since $\hat{\theta}$ denotes the values of the model parameters that maximize the likelihood $\mathcal{L}$. The likelihood is a function of the model parameters $\theta$, defined as the probability of obtaining the observed data $D_{obs}$ assuming $\theta$ to be true:

$$
\mathcal{L}(\theta | D\_{\text{obs}}) = p(D\_{\text{obs}} | \theta). \tag{2}
$$

Searching for the parameter values $\hat{\theta}$ that maximize the likelihood in Equation (2) is also referred to as fitting the model to the data. If nuisance parameters<sup>5</sup> $\pi$ are present in the model, $\pi$ is maximized to the value $\hat{\hat{\pi}}(\theta)$ in the numerator of Equation (1) for each value of $\theta$, resulting in the so-called profile likelihood

$$
\mathcal{L}(\theta|D\_{\text{obs}}) = \mathcal{L}(\theta, \hat{\hat{\pi}}(\theta)|D\_{\text{obs}}).\tag{3}
$$

Taking the log of the likelihood ratio as done in Equation (1) allows making use of Wilks' theorem [7], which states that under certain circumstances this random variable follows a $\chi^2$ distribution with a number of degrees of freedom given by the number of free parameters. This property makes the likelihood ratio a very appealing statistic with very broad applications, and it will indeed appear many times in this manuscript. Yet, it must be used cautiously: the interested reader may refer to Ref. [8] for a critical review of the use and abuse of the likelihood ratio.
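As an illustration we add here (not part of the original analysis), a quick MC can verify that the statistic of Equation (1), computed for a single Poisson count with known mean, is distributed approximately as a $\chi^2$ with one degree of freedom:

```python
import math
import random

# Illustration (not from the article): MC check of Wilks' theorem for a single
# Poisson count n with known true mean s. The MLE is s_hat = n, so the
# likelihood ratio statistic of Equation (1) should follow chi2 with 1 d.o.f.

def sample_poisson(lam, rng):
    """Knuth's algorithm; adequate for moderate lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def neg2_log_lr(n, s):
    """-2 log[L(s|n) / L(s_hat|n)] for one Poisson observation."""
    if n == 0:
        return 2.0 * s
    return 2.0 * (n * math.log(n / s) + s - n)

rng = random.Random(42)
s_true = 20.0
trials = 20000
stats = [neg2_log_lr(sample_poisson(s_true, rng), s_true) for _ in range(trials)]
coverage = sum(t < 3.841 for t in stats) / trials  # 3.841 = chi2(1) 95% quantile
print(f"fraction below the chi2(1) 95% quantile: {coverage:.3f}")  # close to 0.95
```

The residual deviation from exactly 95% is due to the discreteness of the Poisson counts, a reminder of the "certain circumstances" required by the theorem.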

The frequentist theory described so far may be considered unsatisfactory by some [9], because of its dependence on long-run distributions from infinite experiments and its arbitrariness in the choice of the statistic. An alternative is provided by the "Bayesian" or "probabilistic" approach, in which probabilistic statements about hypotheses and model parameters are made through the Bayes theorem

$$p(\theta|D\_{\rm obs}) = \frac{p(D\_{\rm obs}|\theta)p(\theta)}{p(D\_{\rm obs})}.\tag{4}$$

The prior probability *p*(*θ*) captures the available knowledge about the parameters, or, more generally, about the hypotheses under study. The so-called evidence *p*(*Dobs*) can be seen as a normalization factor. It follows from probability theory that

$$p(D\_{\rm obs}) = \sum\_{\theta} p(D\_{\rm obs}|\theta)p(\theta). \tag{5}$$

In this case nuisance parameters are treated via the marginalization

$$p(\theta|D\_{\rm obs}) = \sum\_{\pi} p(\theta, \pi|D\_{\rm obs}),\tag{6}$$

or, in other words, instead of profiling the likelihood by fixing $\pi$ to $\hat{\hat{\pi}}(\theta)$, one marginalizes the likelihood by integrating out $\pi$. Another way of looking at Equation (4) is to consider the odds of a hypothesis, defined as the ratio between its probability of being true and that of not being true

$$o(\mathcal{H}) = \frac{p(\mathcal{H})}{1 - p(\mathcal{H})} \equiv \frac{p(\mathcal{H})}{p(\bar{\mathcal{H}})},\tag{7}$$

where $\bar{\mathcal{H}}$ and $\mathcal{H}$ are mutually exclusive and collectively exhaustive hypotheses. Using the odds formalism, the Bayes theorem takes the following form

$$o(\mathcal{H}|D\_{\rm obs}) = \text{BF} \cdot o(\mathcal{H}), \quad \text{with} \quad \text{BF} = \frac{p(D\_{\rm obs}|\mathcal{H})}{p(D\_{\rm obs}|\bar{\mathcal{H}})},\tag{8}$$

where BF stands for the Bayes factor, i.e., the likelihood ratio of two competing hypotheses. After the measurement, the Bayesian approach leads us to update the odds we assign to a given hypothesis by multiplying them by the BF. Unlike the frequentist approach, where the goal is to provide a statement about the long-run performance of a test statistic, in the Bayesian theory we are not interested in hypothetical infinite experiments, but in calculating the probability of hypotheses from the observed data and from our prior knowledge of them.
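A minimal numerical illustration of Equations (7) and (8), with invented numbers: for two simple hypotheses about a Poisson count, background-only versus signal-plus-background, the Bayes factor is just the ratio of the two Poisson likelihoods:

```python
import math

# Toy illustration of Equations (7)-(8) with invented numbers: H = "source
# present" (Poisson mean s + b) versus H_bar = "background only" (mean b).
def poisson_pmf(n, mu):
    return mu ** n * math.exp(-mu) / math.factorial(n)

b, s, n_obs = 10.0, 15.0, 22              # hypothetical expectations and count

bf = poisson_pmf(n_obs, s + b) / poisson_pmf(n_obs, b)    # Bayes factor
prior_odds = 1.0                          # indifferent prior: o(H) = 1
posterior_odds = bf * prior_odds          # Equation (8)
posterior_prob = posterior_odds / (1.0 + posterior_odds)  # invert Equation (7)
print(f"BF = {bf:.1f}, posterior P(H) = {posterior_prob:.3f}")
```

With these numbers the data favor the signal hypothesis by a factor of a few hundred; with a different prior, the posterior odds scale accordingly.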

Going deeper into the details of statistical analysis and of the difference between the frequentist and Bayesian theories is beyond the scope of this paper; the interested reader may refer to Refs. [10,11] and references therein. Yet, this brief introduction of the basic principles and definitions used for performing an inference analysis in both the frequentist and the Bayesian approach is necessary for reviewing the statistical tools used in IACT analyses in the next sections.

The typical workflow of an IACT analysis is schematically shown in Figure 1, where, starting from the shower images, the variable *s* (the expected number of signal events) is derived and then used for inferring the flux Φ of gamma rays, taking into account the instrument response function (IRF) of the telescopes.

**Figure 1.** Schematic workflow of the inference analysis performed in order to estimate the intrinsic flux Φ of gamma rays (and the values of its parameters *θ*) from the recorded images. The acronym IRF stands for Instrument Response Function (see Section 4). The bold arrows (going from the right to the left) show the relation of cause and effect. The aim of the inference analysis (shown as a thin arrow going from the left to the right) is to invert such relation.

In the remaining part of this section, the structure of the paper is outlined. First, we discuss in Section 2 the most common techniques implemented for performing the event reconstruction from the shower images detected with IACTs. These techniques yield a list with the estimated energy, direction, and discriminating variables of each candidate gamma ray event.

Using this event list, it is then shown in Section 3 how to estimate the strength of the signal *s* and how confidently we can claim that a gamma ray source is producing part of the recorded events.

The final result of the statistical analysis is the differential gamma ray flux Φ, which corresponds to the number *Nγ* of expected photons per unit energy (*E*), time (*t*), and area (*A*):

$$\Phi(E, t, \hat{\mathbf{n}}) = \frac{dN\_\gamma(E, t, \hat{\mathbf{n}})}{dE\,dA\,dt},\tag{9}$$

where $\hat{\mathbf{n}}$ is the photon direction. We denote with Φ the observed flux, i.e., the flux of events actually observed by the telescopes when the IRF of the telescope is included. The expected counts *s* are connected to Φ by taking into account the exposure of the observation, i.e., by integrating Φ over the temporal, energy, and spatial ranges in which the events have been collected. This is the topic of Section 4, in which the way the source flux and its model parameters are obtained is also discussed.
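The folding of Φ with the exposure can be sketched numerically as follows (toy numbers and a deliberately oversimplified, energy-independent effective area; real IRFs depend on energy, direction, and observing conditions and are the subject of Section 4):

```python
# Illustrative only: expected signal counts s from a power-law flux folded
# with a toy, energy-independent effective area (all numbers invented).
PHI0 = 1e-11         # flux normalization [cm^-2 s^-1 TeV^-1] at E0
E0 = 1.0             # reference energy [TeV]
GAMMA = 2.5          # spectral index
A_EFF = 1e9          # toy effective area [cm^2] (~10^5 m^2)
T_OBS = 10 * 3600.0  # observation time [s]

def flux(e_tev):
    """Power-law differential flux Phi(E)."""
    return PHI0 * (e_tev / E0) ** (-GAMMA)

def expected_counts(e_min, e_max, n_steps=10000):
    """Trapezoidal integral of Phi(E) * A * T over log-spaced energies."""
    edges = [e_min * (e_max / e_min) ** (i / n_steps) for i in range(n_steps + 1)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        total += 0.5 * (flux(lo) + flux(hi)) * (hi - lo)
    return total * A_EFF * T_OBS

s = expected_counts(0.1, 100.0)
print(f"expected signal counts: {s:.0f}")
```

For a power law the integral is also available in closed form, which provides a useful cross-check of the numerical quadrature.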

### **2. Event Reconstruction Techniques**

The first statistical analysis one has to face in IACTs is the reduction of the recorded images in the camera of the telescopes to a few parameters of interest. The Cherenkov light from the shower of secondary particles is reflected by mirrors and focused on a camera with photomultipliers composing the pixels of the shower image (see Figure 2). The event reconstruction consists therefore in extracting from the photo-electron (PhE) counts and arrival time of each pixel the following variables:


The role of these discriminating variables is to provide information on how likely it is that an event is associated with a gamma ray rather than with the background, composed mainly of hadronic cosmic rays. The background estimation and the signal extraction are discussed in Section 3, while in the remaining part of this section the most commonly used event reconstruction tools are reviewed.

**Figure 2.** Difference between the images of gamma-induced (**left**) and hadron-induced (**right**) showers in the camera of an IACT. Reprinted with permission from Ref. [12]. Copyright 2009 Völk et al.

### *2.1. Hillas Method*

The most common event reconstruction technique is based on the moments (up to the second order) of the pixel amplitudes in the camera, referred to as Hillas parameters [13]. This technique can be thought of as fitting an ellipse to the pixels: a likelihood function that depends on the Hillas parameters is maximized under the assumption that the Cherenkov light from a shower initiated by a gamma ray would produce an elliptical shape in the camera. The set of parameters includes variables such as the total PhE count in all pixels, the PhE-weighted barycenter of the pixel positions, and the time gradient of the pixel arrival times. If more than one telescope is involved, these single-image parameters can be combined in order to obtain stereoscopic parameters, giving a 3-dimensional reconstruction of the event [14]. The calculation of these parameters is easily affected by image noise and night-sky background, which requires a cleaning procedure in order to remove the pixels that do not contain the shower image. Moreover, the dim and small shower images below 100 GeV can result in parameter values affected by large fluctuations and systematic uncertainties, which is the reason why the instrument response function of IACTs deteriorates at lower energies. Techniques based on Hillas parameters have been implemented since the 1980s and are used in a variety of experiments such as MAGIC [14] and HESS [15], demonstrating the robustness and reliability of the method.
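The moment computation underlying the Hillas parametrization can be sketched in a few lines (a simplified illustration with a hypothetical interface, not any collaboration's implementation; real pipelines add cleaning, timing, and stereo combination):

```python
import math

# Simplified sketch of a Hillas-style moment analysis (hypothetical interface).
def hillas(x, y, amp):
    """x, y: pixel coordinates; amp: PhE counts per pixel (after cleaning)."""
    size = sum(amp)                                      # total PhE count
    mx = sum(a * xi for a, xi in zip(amp, x)) / size     # barycenter x
    my = sum(a * yi for a, yi in zip(amp, y)) / size     # barycenter y
    # central second moments of the amplitude distribution
    sxx = sum(a * (xi - mx) ** 2 for a, xi in zip(amp, x)) / size
    syy = sum(a * (yi - my) ** 2 for a, yi in zip(amp, y)) / size
    sxy = sum(a * (xi - mx) * (yi - my) for a, xi, yi in zip(amp, x, y)) / size
    # eigenvalues of the 2x2 covariance matrix give the ellipse semi-axes
    d = math.hypot(sxx - syy, 2.0 * sxy)
    length = math.sqrt((sxx + syy + d) / 2.0)
    width = math.sqrt(max((sxx + syy - d) / 2.0, 0.0))
    psi = 0.5 * math.atan2(2.0 * sxy, sxx - syy)         # major-axis angle
    return {"size": size, "cog": (mx, my), "length": length,
            "width": width, "psi": psi}

# A perfectly straight toy "image" along the diagonal: zero width, 45 deg angle.
h = hillas([0, 1, 2, 3], [0, 1, 2, 3], [1, 2, 2, 1])
print(h["length"], h["width"], h["psi"])
```

The *length*/*width* ratio of the fitted ellipse is exactly the kind of quantity that discriminates the elongated gamma images from the more irregular hadronic ones.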

After the parametrization of the event is completed, the gamma ray energy is estimated from the shower impact parameter and from the photon density measured with each telescope. This is done by constructing look-up tables for different observational conditions, filled with MC information about the true energy of the gamma ray as a function of the simulated image amplitude and impact parameter. The arrival direction is obtained from the crossing point of the main ellipse orientations in the individual cameras. A weighted combination of some Hillas parameters can be used as a discriminating variable [15]. More refined techniques have been developed, aimed at improving the inference analysis of the gamma ray properties starting from the Hillas parameters (see Section 2.5).

### *2.2. Semi-Analytical Method*

Despite the robustness and stability under different conditions of the Hillas method, additional reconstruction procedures have been explored in order to exploit more information from the recorded image. The so-called semi-analytical method consists of fitting to the shower images a model of the Cherenkov light produced by a gamma ray shower as seen by the camera. A first implementation of this method can be found in Ref. [16] from the CAT collaboration in the late 1990s. In this pioneering implementation, the 2D models are stored in a look-up table and compared to the observed image via a $\chi^2$ function of the gamma ray energy, the impact parameter, and the source position in the focal plane. This function is defined as the sum of the squared differences, over all pixels, between the expected PhE content and the actually observed one, weighted according to the quadratic error on the PhE count. A $\chi^2$ minimization is performed to obtain the best-fit parameters of the gamma ray, while the resulting $\chi^2$ is then used as a discriminating variable. This method has been re-implemented and subsequently improved by the HESS collaboration [17], where the $\chi^2$ minimization has been replaced by the minimization of a log-likelihood defined as

$$-2\ln \mathcal{L} = -2\sum\_{\text{pixels } i} \ln \mathcal{L}\_i = -2 \sum\_{\text{pixels } i} \ln p(n\_i | \theta). \tag{10}$$

The variable $n_i$ is the observed PhE count in pixel $i$, while $\theta$ are the shower model parameters. This method is referred to as semi-analytical because the template library of shower images is produced with MC simulations, which are usually carried out with dedicated software such as KASKADE [18] and CORSIKA [19]. Compared with the Hillas method, this reconstruction technique provides a more precise estimation of the energy and direction of the primary gamma ray, especially at low energies.
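The core of such a template fit can be sketched as follows (a deliberately minimal illustration with made-up templates and counts; real implementations fit energy, impact parameter, and direction continuously, over thousands of pixels):

```python
import math

# Schematic template fit (invented templates): pick the shower template that
# minimizes -2 ln L of Equation (10), with Poisson pixel likelihoods.
def neg2_log_like(observed, template):
    total = 0.0
    for n, mu in zip(observed, template):
        mu = max(mu, 1e-9)                 # avoid log(0)
        # log of the Poisson pmf up to the n!-term, which is model-independent
        total += -2.0 * (n * math.log(mu) - mu)
    return total

templates = {                              # mock library: energy -> image
    "0.5 TeV": [2, 5, 9, 5, 2],
    "1.0 TeV": [4, 10, 18, 10, 4],
    "2.0 TeV": [8, 20, 36, 20, 8],
}
observed = [5, 11, 16, 9, 3]               # mock pixel PhE counts

best = min(templates, key=lambda k: neg2_log_like(observed, templates[k]))
print("best-fit template:", best)
```

Dropping the $n!$ term is harmless here because it is the same for every template, so it cannot change which one minimizes the statistic.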

### *2.3. 3D-Gaussian Model*

An additional approach is given in Refs. [20,21], where the single-pixel PhE counts are fitted to an analytical Gaussian air-shower model. This method, referred to as the 3D model or 3D Gaussian model, assumes an isotropic angular distribution of the shower, and its rotational symmetry with respect to the incident direction is used to select gamma ray events. As usual, a likelihood function is maximized with respect to the shower parameters. This maximization process is rather fast thanks to the simple assumptions of the 3D Gaussian model. More recently [22], the 3D model was combined with a multivariate analysis that makes use of the so-called *Boosted Decision Tree* (see Section 2.5) and adapted to the detection needs of IACTs, particularly for the discovery of new faint sources.

### *2.4. MC Template-Based Analysis*

The previously mentioned methods in Sections 2.2 and 2.3 strongly rely on a model fit that becomes more difficult as we reach higher energies. The more energetic the gamma ray, the more particles are produced, and a large fraction of the latter is capable of reaching the ground. This causes strong fluctuations in the fit model above ∼10 TeV. Moreover, in these approaches, the quality of the model fit is inevitably worsened by instrumental effects and atmospheric conditions, which require approximations in order to be taken into account. To overcome these issues and improve the accuracy of the analysis, the authors of Ref. [23] proposed an Image Pixel-wise fit for Atmospheric Cherenkov Telescopes (ImPACT). In this approach, the template shower images are produced using more detailed and time-consuming MC simulations. The simulation chain consists in simulating the air shower with CORSIKA [19], which is then combined with *sim\_telarray*<sup>6</sup> [24] to reproduce the instrumental effects of the telescopes. The sensitivity is improved by a factor of 2 when the ImPACT reconstruction method is implemented, relative to the Hillas-based method (see Section 2.1). Compared with the 2D model, some improvements were shown at higher energies. A similar implementation for the VERITAS telescopes can be found in Ref. [25]. The role of realistic MC simulations has been very recently emphasized by the authors of Ref. [26], who proposed a new simulation and analysis framework as an alternative to the current way MC templates are obtained. In the existing paradigm, simulations are generated from pre-defined observation and instrument settings, such as the zenith of the observation or the configuration of the camera. Each simulation can then be seen as a grid point in the space of setting parameters. The analyzer willing to use a "run"<sup>7</sup> performed with some given settings has to look for the adjacent grid points, either interpolating between them or taking the closest one. In the *run*-wise simulation approach described in Ref. [26], instead, simulations are generated on a run-by-run basis. In this way, observational conditions are fully taken into account, leading to more realistic MC simulations that reduce systematic uncertainties and improve the scientific output of the statistical analysis.

### *2.5. Multivariate Analysis*

So far, we have described techniques for parametrizing an event detected by IACTs. These parameters are then used for inferring the energy and arrival direction of the primary gamma ray, as well as discriminating variables. The latter are used to tell how likely it is that an event is associated with a gamma ray. The usage of discriminating variables is quite simple: all the events with values of the parameter larger (or smaller, depending on the kind of variable) than a pre-defined threshold are retained and considered to be *gamma-like* events, i.e., originating from a gamma ray. A different approach, which avoids cutting the data by exploiting the full probability distribution function (PDF) of the discriminating variable, will be discussed in Section 3. Once a discriminating variable is chosen and a fixed threshold is defined, the separation (or discrimination) power can be obtained from the so-called *Q* value

$$Q = \frac{\epsilon\_{\gamma}}{\sqrt{\epsilon\_{h}}},\tag{11}$$

where $\epsilon_x$ is the efficiency of the selection procedure, given by the fraction of events belonging to the population $x$ that survive the selection ($h$ stands for *hadrons*, which compose the background population). This classification problem becomes considerably more difficult when more than one parameter can be used for discriminating signal events from the background population. Multivariate methods consist of combining several of the shower parameters into one single discriminating parameter. The main advantages of these approaches are that


A detailed review and comparison of different multivariate methods for event classification in IACTs can be found in Ref. [27]. In this section, we provide a brief description of the currently most used methods, the *Boosted Decision Tree* (BDT) and the *Random Forest* (RF) [28]. The BDT approach, implemented for the HESS [29,30] and VERITAS [31] telescopes, is a binary tree where events are sorted into small subsets by applying a series of cuts until a given condition is fulfilled. This condition might be given by requiring that the number of events in a leaf is smaller than a predefined value, or that the signal-over-background ratio in a leaf is large enough. The term "boosted" refers to the fact that several individual decision trees are combined into a single classifier by performing a weighted average. Boosting improves the stability of the technique with respect to fluctuations in the training sample and can considerably enhance the performance of the gamma/hadron separation compared to a single decision tree. Like the BDT approach, the RF method, implemented for the MAGIC telescopes [32], also relies on decision trees, which are built up and combined with some elements of random choice. As for the BDT, training samples of the two classes of population (signal and background) are needed for constructing the decision trees. Once the classifier has been properly<sup>8</sup> trained, the algorithm can be used to assign to each event a single discriminating variable, whose distribution for a test gamma and hadron population can be seen in Figure 3.

**Figure 3. Left panel**: distribution for background events (hatched red) and simulated *γ* (blue filled) of the discriminating variable given in output from the BDT method implemented by the HESS collaboration. Reprinted with permission from ref. [30] Copyright 2010 Fiasson et al. **Right panel**: distribution for background events (black) and simulated *γ* (blue) of the discriminating variable (called *hadronness*) given in output from the RF method implemented by the MAGIC collaboration. Reprinted with permission from Ref. [33] Copyright 2009 Colin et al.
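Whatever method produces the discriminating variable, from a single Hillas parameter to the RF *hadronness*, its separation power can be quantified with the *Q* value of Equation (11). A toy sketch with invented Gaussian populations:

```python
import random

# Toy illustration of Equation (11) with invented numbers: a mock "width"-like
# discriminating variable, smaller on average for gammas than for hadrons.
rng = random.Random(1)
gammas = [rng.gauss(0.10, 0.03) for _ in range(10000)]   # simulated gammas
hadrons = [rng.gauss(0.25, 0.08) for _ in range(10000)]  # background events

def q_factor(cut):
    """Q value for the selection 'keep events with variable < cut'."""
    eff_g = sum(w < cut for w in gammas) / len(gammas)    # epsilon_gamma
    eff_h = sum(w < cut for w in hadrons) / len(hadrons)  # epsilon_h
    return eff_g / eff_h ** 0.5 if eff_h > 0 else 0.0

cuts = [c / 100.0 for c in range(5, 31)]
best_cut = max(cuts, key=q_factor)
print(f"best cut: {best_cut:.2f}, Q = {q_factor(best_cut):.2f}")
```

Scanning the cut value makes the trade-off explicit: a tighter cut loses signal efficiency, while a looser one lets in too much background, so *Q* peaks at an intermediate threshold.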

### *2.6. Deep Learning Methods*

The multivariate methods described in Section 2.5 for discriminating signal events from the background have shown a great capability of improving the sensitivity of IACTs. This effort has recently been pushed forward by Deep Learning (DL) [34] techniques for object recognition in images. Such algorithms, which require more computational power, are receiving more and more attention thanks to the improvements achieved in recent decades in the usage of CPUs and GPUs for matrix operations. When it comes to image processing, the leading DL algorithm is the Convolutional Neural Network (CNN), whose first application in the context of IACTs can be found in Ref. [35], where a CNN was applied to the simple case of muon-ring events. This work served as a pathfinder for the application of CNNs to gamma/hadron separation from the raw recorded images.

A CNN is made of many connected layers, which in turn consist of different nodes. The first layer is the input image, whose pixels represent its nodes. The inputs in a new layer are convolved with kernels that have to be trained. Each new layer is in general much smaller than the input one, and allows identifying features in the previous layer. Adding more and more layers, one aims to extract more and more complex features, which can possibly be used to identify those discriminating features in the images that would otherwise not be considered by other event-classifier algorithms. For a more detailed review and description of DL and CNN algorithms, we refer the reader to Refs. [34,36] and references therein. The training process of a CNN can be computationally demanding, due mainly to the very large number of parameters. The main advantage of DL relative to previous event reconstruction methods in IACTs is that CNNs do not need the image parameters (such as the Hillas ones): all the features contained in the image are fully exploited, while they might get lost or suppressed during the parametrization. Recent applications of CNNs to the image processing of IACT data can be found in Refs. [37–40], where the algorithm was also implemented for the estimation of the energy and arrival direction of the gamma rays.

### **3. Detection Significance and Background Modeling**

The final result of the statistical analysis described in Section 2 is a list containing useful information about the candidate gamma ray events, such as their estimated energy and arrival direction. Neglecting any background contamination, the total number $n$ of events in such a list would be a random variable distributed according to the Poisson probability mass function (PMF)

$$\mathcal{P}(n|s) = \frac{s^n}{n!} e^{-s},\tag{12}$$

where *s* is the expected number of signal counts. The problem is that the majority of the observed events are actually generated by hadronic cosmic rays, while only a small fraction (which for a bright source such as the Crab Nebula is of the order of 10−3) can be associated with gamma rays. By applying a signal-extraction selection to the data based on a discrimination variable, it is possible to reduce the background by a factor of 100 or more, thus bringing the signal-to-noise ratio close to 1 for a bright source. In order to infer the gamma-ray flux from the resulting event list, it is essential to first estimate the remaining background contamination. We can consider three different scenarios (which are the topic of Sections 3.1–3.3, respectively) where the expected background count *b* in the target region is:

- zero or negligible;
- known precisely;
- estimated from a measurement performed in a signal-free OFF region.

The latter case is the most common one and requires the definition of two regions: a region of interest (ROI), also referred to as the target, test or ON region, and a background control region, called the OFF region. The ON and OFF regions provide, respectively, independent *Non* and *Noff* counts, where the latter is ideally void of any signal event. A normalization factor *α* is introduced to account for differences (such as the acceptance and the exposure time) between the ON and OFF regions. It can be defined as

$$\alpha = \frac{\int_{ON} A(\mathbf{x}, t)\, d\mathbf{x}\, dt}{\int_{OFF} A(\mathbf{x}, t)\, d\mathbf{x}\, dt},\tag{13}$$

where *A*(*x*, *t*) is the instrument acceptance, which is a function of the observation time *t* and of all observational parameters (such as the FoV position or the zenith angle) here denoted for simplicity with *x*. The goal of the background modeling analysis is to provide the values of *α* and *Noff* that are then used for estimating the signal *s* along with the detection significance.

### *3.1. The Background Is Zero or Negligible*

Although very rare, in some analyses the background *b* in the ON region may be assumed to be zero or negligible relative to the signal *s*. Given the simplicity of this case, it is worth dwelling on it, discussing with examples the statistical conclusions that can be drawn from a measurement using the frequentist and Bayesian approaches. In this scenario the likelihood function is trivially

$$\mathcal{L}(\mathbf{s}) = \mathcal{P}(N\_{on}|\mathbf{s}),\tag{14}$$

where P is the Poisson distribution (see Equation (12)) with observed and expected counts *Non* and *s*, respectively. Using the likelihood ratio defined in Equation (1) as a statistic, and taking into account that *s*ˆ = *Non* is the value of *s* that maximizes the likelihood, we get for any *s* > 0 the following statistic

$$\mathcal{S} = 2 (s - N_{on}) - 2N_{on} \left(\log s - \log N_{on}\right).\tag{15}$$

Such a statistic, known in the literature as the Cash statistic or C-statistic [41], has a straightforward meaning: if we measure *Non* counts in the ON region and assume that the true signal rate is *s*, then, according to Wilks' theorem, the value of S is a random variable that follows a *χ*<sup>2</sup> distribution with 1 degree of freedom. This can be checked by performing MC simulations, as shown in the left plot of Figure 4. The smaller the true value of *s*, the less accurate this approximation becomes: due to the discreteness of the Poisson distribution, S can no longer be assumed to be a *χ*<sup>2</sup> variable. For small expected signal counts *s*, it is therefore necessary to obtain the CDF of S from MC simulations.

**Figure 4.** A comparison between the frequentist (**left panel**) and Bayesian (**right panel**) conclusion from the experiment result *Non* = 82 on the hypothesis that the gamma ray expected counts is 65.4 and no background is present. Left panel: in black the cumulative distribution function (CDF) of the statistic defined in Equation (15) from 10<sup>6</sup> simulations assuming *s* = 65.4, while in grey the expected CDF of a *χ*<sup>2</sup> random variable. The step shape of the CDF of the statistic is due to the discrete nature of the Poisson distribution. Dashed lines show the point in which the statistic is 3.9 and the CDF is 0.952. Right panel: evolution of the BF defined in Equation (19) as a function of the expected counts *s*2, using *s*<sup>1</sup> = 65.4 and *Non* = 82. Dashed lines show the point in which the expected counts are 65.4 and by definition the BF is 1.

We can be interested, for instance, in the hypothesis H0: "the number of expected signal events (for a given temporal and energetic bin and surface area) is *s* = 65.4". After having performed the experiment, we obtain from the measurement *Non* = 82 events. In this scenario, by computing the statistic in Equation (15) we get S = 3.9. If H<sup>0</sup> were true, we would have observed a value of the statistic equal to or greater than 3.9 only 4.8% of the time (see the left panel of Figure 4). The conclusion of the frequentist approach is therefore that H<sup>0</sup> can be excluded at a 95.2% CL or, in other words, with a 1.98 *σ* significance. The latter is obtained by expressing the CL in multiples of the standard deviation *σ* of a normal distribution via the inverse of the error function<sup>9</sup>:

$$
\sqrt{2}\,\text{erf}^{-1}(\text{CL}).\tag{16}
$$
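This worked example can be checked numerically in a few lines (a minimal sketch; the function name and rounding are ours):

```python
import math

def cash_statistic(s, n_on):
    """Cash statistic of Eq. (15) for expected signal s and observed counts n_on."""
    return 2.0 * (s - n_on) - 2.0 * n_on * (math.log(s) - math.log(n_on))

# Worked example from the text: H0 states s = 65.4, the experiment gives N_on = 82.
S = cash_statistic(65.4, 82)            # ≈ 3.9
# For one degree of freedom, sqrt(S) is the significance in Gaussian sigmas,
# and the confidence level follows from the chi2(1) CDF, erf(sqrt(S/2)).
significance = math.sqrt(S)             # ≈ 1.97 sigma
cl = math.erf(math.sqrt(S / 2.0))       # ≈ 0.952
```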

The aim of the Bayesian approach is instead to provide a probabilistic statement about *s*, which is not fixed as in the frequentist approach. By applying the Bayes theorem, we get that the PDF of *s* is (up to a normalization factor)

$$p(s|N\_{on}) \propto \mathcal{P}(N\_{on}|s) \cdot p(s),\tag{17}$$

where *p*(*s*) is the prior PDF of *s*, which encloses the prior knowledge the analyzer has on the source's signal. In the Bayesian context we can compare two hypotheses *s* = *s*<sup>1</sup> and *s* = *s*<sup>2</sup> as follows

$$\frac{p(s\_1|N\_{on})}{p(s\_2|N\_{on})} = \text{BF} \cdot \frac{p(s\_1)}{p(s\_2)},\tag{18}$$

where

$$\text{BF} = \left(\frac{s_1}{s_2}\right)^{N_{on}} e^{-s_1 + s_2}. \tag{19}$$

The evolution of the BF as a function of the expected counts *s*2, using *s*<sup>1</sup> = 65.4 (which is our hypothesis H<sup>0</sup> of interest) and the experiment result *Non* = 82 is shown in the right panel of Figure 4. It is worth noticing that the BF is connected to the statistic in Equation (15) via

$$-2\log \text{BF} = \mathcal{S},\tag{20}$$

in which *s*<sup>1</sup> = *s* and *s*<sup>2</sup> = *Non*. Lastly, it can be shown that assuming a uniform prior<sup>10</sup> the PDF of *s* in Equation (17) is

$$p(s|N_{on}) = \mathcal{P}(N_{on}|s) = \frac{s^{N_{on}}}{N_{on}!}e^{-s}.\tag{21}$$
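The relation of Equation (20) can be verified numerically for the example above (a minimal sketch; function names are ours):

```python
import math

def bayes_factor(s1, s2, n_on):
    """Bayes factor of Eq. (19) comparing the hypotheses s = s1 and s = s2."""
    return (s1 / s2) ** n_on * math.exp(-s1 + s2)

def cash_statistic(s, n_on):
    """Cash statistic of Eq. (15)."""
    return 2.0 * (s - n_on) - 2.0 * n_on * (math.log(s) - math.log(n_on))

bf = bayes_factor(65.4, 82, 82)    # s2 = N_on, the value that maximizes the likelihood
# Eq. (20): -2 log BF equals the Cash statistic evaluated at s = s1.
check = -2.0 * math.log(bf)
S = cash_statistic(65.4, 82)
```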

### *3.2. The Background Is Known Precisely*

This is the case in which we know, from theoretical or experimental considerations, the true value *b*¯ of the background. This happens for instance in the *field-of-view*-background model, where the entire FoV (excluding positions where *γ*-ray events are expected) is used for modeling the background. Since the OFF region is composed of the entire FoV and the ON region of a small portion of it, we have *α* ≪ 1. Therefore the Poissonian fluctuations of the background contamination in the ON region can be neglected, being given by

$$\sigma(\alpha N_{off}) = \alpha\sqrt{N_{off}}.\tag{22}$$

Indeed, the detection significances of the "known" and "unknown" background cases coincide for *α* ≪ 1 (see Section 3.3). Thus, in the *field-of-view*-background model we can make the assumption of knowing the background precisely, given by the product *αNoff* .

The likelihood function is

$$\mathcal{L}(s) = \mathcal{P}(N_{on}|s + \bar{b}),\tag{23}$$

which reaches its maximum value for *s*ˆ = *Non* − *b*¯. The statistic is obtained from the Cash one defined in Equation (15), in which *s* has been substituted with *s* + *b*¯. An important difference with respect to the previous case, in which the background was assumed to be zero, is that now the statistic is also defined for the hypothesis *s* = 0. For this no-source hypothesis it is common to slightly modify the Cash statistic by taking its square root and by introducing a sign that is arbitrarily chosen to be positive when the excess *Non* − *b*¯ is positive, yielding

$$\mathcal{S} = \pm \sqrt{2 (\bar{b} - N_{on}) - 2N_{on} \left(\log \bar{b} - \log N_{on}\right)}.\tag{24}$$

In this way, for large enough counts (*Non* ≳ 10) the statistic is a random variable that follows a normal distribution with mean zero and variance 1 (as shown in the left panel of Figure 5). This allows immediately converting the output of S into a significance level. If for instance we assume *b*¯ = 10 and we observe *Non* = 21 events, then S = 3.0, which means that the no-source hypothesis can be excluded with a significance of 3 *σ*.
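The example above can be reproduced directly (a minimal sketch; the helper name is ours):

```python
import math

def cash_significance(n_on, b):
    """Signed square-root Cash statistic of Eq. (24) for the s = 0 hypothesis,
    with the background known precisely to be b."""
    S2 = 2.0 * (b - n_on) - 2.0 * n_on * (math.log(b) - math.log(n_on))
    # S2 is non-negative by construction; the sign follows the excess n_on - b.
    return math.copysign(math.sqrt(S2), n_on - b)

sig = cash_significance(21, 10.0)   # ≈ 3.0 sigma, as quoted in the text
```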

**Figure 5.** Distribution of the Cash statistic in Equation (24) (**left panel**) and the Li&Ma statistic in Equation (28) (**right panel**) from measurements in which the background is known precisely to be *b*¯ = 10 or must be estimated from the OFF counts *Noff* with *α* = 1, respectively. In both simulations, the true values of *s* and *b* are 0 and 10, respectively. In both plots, the PDF of a normal distribution with mean zero and variance 1 is shown in grey as reference. In both cases 10<sup>6</sup> simulations were performed. The distribution in the left panel looks less populated due to the fact that having the background fixed to 10 (and not estimated from an OFF measurement) limits the number of possible outcomes of the statistic.

### *3.3. The Background Is Estimated from an OFF Measurement*

Let us consider the most common scenario, in which we do not know the background *b* and therefore need to estimate it by performing OFF measurements, supposedly void of any signal. Such OFF measurements can be performed following one of the below procedures:

- dedicated OFF observations of a nearby sky region containing no known gamma-ray source;
- the *wobble* (or reflected-region) method, where the OFF regions are taken in the same FoV at the same angular distance from the camera center as the ON region;
- the *ring*-background method, where the OFF counts are taken from a ring around the target position.

For a more detailed review of the background modeling and comparison of the different methods see Ref. [46].

Once we have obtained the values of *α* and *Noff* from one of the above-mentioned procedures, the inference analysis on *s* is performed using the following likelihood:

$$\begin{split} \mathcal{L}(s,b) &= \mathcal{P}(N_{on} \mid s+\alpha b) \cdot \mathcal{P}(N_{off} \mid b) = \\ &= \frac{(s+\alpha b)^{N_{on}}}{N_{on}!} e^{-(s+\alpha b)} \cdot \frac{b^{N_{off}}}{N_{off}!} e^{-b} . \end{split} \tag{25}$$

We are not directly interested in knowing the background, which is therefore a nuisance parameter. Thus, in the frequentist approach we have to profile the likelihood (see Equation (3)) by fixing *b* to the value *b*ˆˆ that maximizes L for a given *s* (see for instance Ref. [47] for its derivation), i.e.,

$$\hat{\hat{b}} = \frac{N + \sqrt{N^2 + 4(1 + 1/\alpha)\,s\,N_{off}}}{2(1+\alpha)},\tag{26}$$

with *N* ≡ *Non* + *Noff* − (1 + 1/*α*)*s*. Taking, as usual, minus twice the logarithm of the likelihood ratio, we have

$$\mathcal{S} = -2\log\frac{\mathcal{L}(s,\hat{\hat{b}})}{\mathcal{L}(\hat{s},\hat{b})} = 2\left[N_{on}\log\left(\frac{N_{on}}{s+\alpha\hat{\hat{b}}}\right) + N_{off}\log\left(\frac{N_{off}}{\hat{\hat{b}}}\right) + s + (1+\alpha)\hat{\hat{b}} - N_{on} - N_{off}\right],\tag{27}$$

where *s*ˆ = *Non* − *αNoff* and *b*ˆ = *Noff* are the values<sup>11</sup> that maximize the likelihood in Equation (25), while *b*ˆˆ is given in Equation (26) and maximizes L for a given *s*. The statistic in Equation (27) depends only on the free parameter *s* and, according to Wilks' theorem, it follows a *χ*<sup>2</sup> distribution with 1 degree of freedom. It can then be used for hypothesis testing, in particular for the "*s* = 0" hypothesis<sup>12</sup>, from which we can obtain the detection significance. Similarly to Equation (24), we can take the square root of Equation (27) and set *s* = 0, yielding the statistic

$$\mathcal{S} = \pm \sqrt{2} \left[ N_{on} \log \left( \frac{1+\alpha}{\alpha} \frac{N_{on}}{N_{on} + N_{off}} \right) + N_{off} \log \left( (1+\alpha)\frac{N_{off}}{N_{on} + N_{off}} \right) \right]^{1/2} \tag{28}$$

where the sign is arbitrarily chosen to be positive when the excess *Non* − *αNoff* is positive. This expression is the well-known "Li&Ma" [48] formula for computing the detection significance in ON/OFF measurements. As shown in the right panel of Figure 5, for large enough counts (*Non*, *Noff* ≳ 10) the statistic in Equation (28) distributes according to a normal distribution with mean zero and variance 1. We can again consider the example in which *Non* = 21 counts have been observed in the ON region, but instead of assuming a known background *b*¯ = 10, the background is now estimated from the OFF measurement *Noff* = 10 with *α* = 1. Using the statistic in Equation (28) we get a detection significance of 2 *σ*, which is smaller than the 3 *σ* obtained from the Cash statistic, where the background is assumed to be known precisely. A comparison between the Cash (Equation (24)) and Li&Ma (Equation (28)) statistics for different values of *Non* is shown in Figure 6, where one can see that the former becomes bigger than the latter as more events are observed in the ON region. This is due to the fact that the Li&Ma statistic accounts for the Poissonian fluctuations in the observed counts *Noff* . These fluctuations make the association of the gamma-ray excess with the source signal less likely. When *α* ≪ 1 the two statistics give the same result.
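The ON/OFF example above can be reproduced with a short script (a minimal sketch; function names are ours):

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li&Ma significance of Eq. (28) for an ON/OFF measurement."""
    n_tot = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * math.log((1.0 + alpha) * n_off / n_tot)
    # The bracket is non-negative (likelihood ratio); the sign follows the excess.
    return math.copysign(math.sqrt(2.0 * (term_on + term_off)),
                         n_on - alpha * n_off)

# Example from the text: N_on = 21, N_off = 10, alpha = 1.
sig = li_ma_significance(21, 10, 1.0)                 # ≈ 2.0 sigma
# Simple signal-to-noise ratio of Eq. (29), for comparison:
snr = (21 - 1.0 * 10) / math.sqrt(21 + 1.0**2 * 10)   # ≈ 1.98
```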

The Li&Ma expression in Equation (28) based on the likelihood ratio is not the only statistic used for rejecting the background-only hypothesis in Poisson counting experiments. One can find in the literature the so-called "signal-to-noise ratio"

$$\mathcal{S} = \frac{N_{on} - \alpha N_{off}}{\sqrt{N_{on} + \alpha^2 N_{off}}},\tag{29}$$

which has the disadvantage of following a normal distribution only for values of *α* close to 1 and for large enough counts [48]. Another approach is to compute the *p*-value from the observed *Non* counts, i.e., the probability of observing a total count equal to or bigger than *Non* under the background-only hypothesis. If we ignore uncertainties in the background we have

$$p\text{-value} = \sum_{n=N_{on}}^{\infty} \mathcal{P}(n|\alpha b) = \frac{\Gamma(N_{on}, 0, \alpha b)}{\Gamma(N_{on})},\tag{30}$$

written in terms of the incomplete gamma function Γ. When we want to include the fact that the background is estimated from an OFF measurement, it is convenient to introduce the variable *Ntot* = *Non* + *Noff* ; it can be shown [49] that the observed quantity *Non* follows (for a given *Ntot*) a binomial distribution B with success probability *ρ* ≡ *α*/(1 + *α*) and total number of attempts *Ntot*. The *p*-value is

$$p\text{-value} = \sum_{n=N_{on}}^{\infty} \mathcal{B}(n|N_{tot}, \rho) = \frac{\beta(\rho, N_{on}, N_{off} + 1)}{\beta(N_{on}, N_{off} + 1)},\tag{31}$$

with *β* the incomplete and complete beta functions (distinguished by the number of arguments). Finally the statistic is defined from the above *p*-values using

$$\mathcal{S} = \sqrt{2} \,\mathrm{erf}^{-1}(1 - 2 \cdot p \text{-value}). \tag{32}$$
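Equations (31) and (32) can be evaluated exactly with standard-library tools for the same ON/OFF example (*Non* = 21, *Noff* = 10, *α* = 1; a sketch, with function names ours). Note that √2 erf<sup>−1</sup>(1 − 2*p*) is the (1 − *p*) quantile of the standard normal distribution, available as `statistics.NormalDist().inv_cdf`:

```python
import math
from statistics import NormalDist

def on_off_p_value(n_on, n_off, alpha):
    """p-value of Eq. (31): probability of n >= N_on successes out of
    N_tot = N_on + N_off binomial trials with rho = alpha / (1 + alpha)."""
    n_tot = n_on + n_off
    rho = alpha / (1.0 + alpha)
    return sum(math.comb(n_tot, n) * rho**n * (1.0 - rho)**(n_tot - n)
               for n in range(n_on, n_tot + 1))

p = on_off_p_value(21, 10, 1.0)        # ≈ 0.035
# Eq. (32), written as the one-sided Gaussian quantile of (1 - p):
sig = NormalDist().inv_cdf(1.0 - p)    # ≈ 1.8 sigma
```

The resulting significance of about 1.8 *σ* is close to, but not identical with, the Li&Ma value for the same counts.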

A review and comparison of these statistics applied to the ON/OFF measurement can be found in Refs. [48–51]. Finally, it is worth mentioning that different extensions and refined versions of the Li&Ma expression in Equation (28) have been introduced, each one with its application to a particular problem. The problem of including the PSF<sup>13</sup> information in the likelihood is addressed in Refs. [52–54], while the problem of including the prior knowledge of the source light curve is considered in Ref. [55]. Assessing the role in the detection significance of systematic uncertainties, especially those arising from the normalization factor *α*, is performed in the studies of Refs. [51,56,57].

**Figure 6.** Comparison between the Li&Ma statistic in Equation (28) (x-axis) and the Cash statistic in Equation (24) (y-axis). In both plots, each point shows the significance for a different *Non* ranging from 10 (where the significance is zero in both cases) to 56. For the Cash formula *b*¯ is fixed to 10 in both plots, while for the Li&Ma formula *Noff* is 10 with *α* = 1 in the left plot, and *Noff* = 1000 with *α* = 0.01 in the right plot. As a reference the equation *y* = *x* (dashed line) is shown. One can see that the Li&Ma statistic converges to the Cash one when *α* ≪ 1.

Following the prescriptions of probability theory, in the Bayesian approach, the signal *s* is estimated by defining its PDF in which the nuisance parameter *b* has been marginalized:

$$p(s \mid N_{on}, N_{off}; \alpha) \propto \int_0^\infty db \, \mathcal{P}(N_{on}|s + \alpha b)\, \mathcal{P}(N_{off}|b)\, p(b) \, p(s) . \tag{33}$$

Assuming flat priors *p*(*s*) and *p*(*b*) (with *s* > 0 and *b* > 0) it can be shown [58] that the integral in Equation (33) is

$$p(s \mid N_{on}, N_{off}; \alpha) \propto \sum_{N_s=0}^{N_{on}} \frac{(N_{on} + N_{off} - N_s)!}{(1 + 1/\alpha)^{-N_s}(N_{on} - N_s)!} \cdot \frac{s^{N_s}}{N_s!} e^{-s},\tag{34}$$

where *Ns* is a bound variable whose physical meaning will become clear shortly. Thus, the PDF of the expected signal counts *s* is given by a weighted sum of Poisson distributions with observed counts ranging from 0 to *Non*. One can recognize (see Refs. [58,59] for a detailed explanation) in Equation (34) a marginalization over the variable *Ns*. The weights in the sum of Equation (34) are indeed the PMF of the number of signal events *Ns* in the ON region:

$$p(N_s \mid N_{on}, N_{off}; \alpha) \propto \frac{(N_{on} + N_{off} - N_s)!}{(1 + 1/\alpha)^{-N_s}(N_{on} - N_s)!} \,. \tag{35}$$

In the left plot of Figure 7 the PDF of *s* and the PMF of *Ns* from Equations (34) and (35), respectively, are shown. The best estimate of *s* can then be obtained from the mode of the PDF in Equation (34). The evolution of the Bayesian estimate of *s* as a function of the excess *Non* − *αNoff* can be found in the right plot of Figure 7.
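A minimal sketch evaluating Equations (34) and (35) for one configuration of Figure 7 (*Non* = 80, *Noff* = 120, *α* = 0.5, excess 20; function names and the search grid are ours), working in log space with `math.lgamma` for numerical stability:

```python
import math

def log_weights(n_on, n_off, alpha):
    """Log of the unnormalized weights of Eq. (35), for N_s = 0 ... N_on."""
    r = math.log(1.0 + 1.0 / alpha)
    return [math.lgamma(n_on + n_off - ns + 1) + ns * r - math.lgamma(n_on - ns + 1)
            for ns in range(n_on + 1)]

def posterior(s, n_on, n_off, alpha):
    """Unnormalized posterior of Eq. (34) at s > 0: weighted sum of Poisson terms."""
    lw = log_weights(n_on, n_off, alpha)
    m = max(lw)
    return sum(math.exp(w - m + ns * math.log(s) - s - math.lgamma(ns + 1))
               for ns, w in enumerate(lw))

# Figure 7 configuration: the posterior mode lands close to the excess of 20.
grid = [0.1 * k for k in range(1, 801)]
mode = max(grid, key=lambda s: posterior(s, 80, 120, 0.5))
```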

**Figure 7. Left panel**: Points show the PMF defined in Equation (35) of the number of signal events *Ns*, while lines show the PDF defined in Equation (34) of the expected signal counts *s*. Different colors are used to distinguish the different counts *Noff* (160 in red, 120 in blue and 10 in black), while *Non* and *α* are fixed to 80 and 0.5, respectively. **Right panel**: The mode of the PDF defined in Equation (34) as a function of the excess *Non* − *αNoff* . As a dashed line the equation *x* = *y* is shown for reference.

We can now compare the two hypotheses H*s*+*<sup>b</sup>* and H*b*: respectively, "the observed counts in the ON region are produced by the source and the background" and "the observed counts in the ON region are produced by the background only". In this case the BF is (see Ref. [60], assuming again flat priors for *s* and *b*)

$$\text{BF} = \frac{p(N_{on}|N_{off}; \alpha, \mathcal{H}_{s+b})}{p(N_{on}|N_{off}; \alpha, \mathcal{H}_b)} = \frac{1}{s_{max}} \frac{N_{on}!}{(N_{on} + N_{off})!} \cdot \sum_{N_s=0}^{N_{on}} \frac{(N_{on} + N_{off} - N_s)!}{(1 + 1/\alpha)^{-N_s}(N_{on} - N_s)!},\tag{36}$$

where *smax* is the maximum prior value of *s*, i.e., *p*(*s*) = 1/*smax*. From the above expression, one can then compute the odds of H*s*+*<sup>b</sup>* following

$$o(\mathcal{H}_{s+b}|N_{on}, N_{off}; \alpha) = \text{BF} \cdot o(\mathcal{H}_{s+b}),\tag{37}$$

with

$$o(\mathcal{H}_{s+b}) = \frac{p(\mathcal{H}_{s+b})}{1 - p(\mathcal{H}_{s+b})} = \frac{p(\mathcal{H}_{s+b})}{p(\mathcal{H}_b)},\tag{38}$$

and *p*(H*s*+*b*) and *p*(H*b*) the priors of the two competing hypotheses H*s*+*<sup>b</sup>* and H*b*, respectively. The above odds can be expressed in a "frequentist-fashion" way by converting the posterior probability of H*<sup>b</sup>* into a significance value using the inverse error function:

$$\mathcal{S} = \sqrt{2} \cdot \text{erf}^{-1}\left(1 - p(\mathcal{H}\_b | \mathcal{N}\_{on}, \mathcal{N}\_{off}, \alpha)\right) \tag{39}$$

as shown for instance in Ref. [61], where both constant and Jeffreys's [62] priors are assumed and a comparison with the Li&Ma significance (see Equation (28)) is shown. More recently this effort has been pushed forward in Ref. [63], where an objective Bayesian solution is proposed and compared to the result of Ref. [61]. The main advantage of these Bayesian solutions is that there are no restrictions on the number of counts *Non* and *Noff* , while the frequentist ones hold only when the counts are large enough. Yet, it is important not to confuse the two approaches, since they aim at solving two different problems: studying the long-run performance of a statistic in the frequentist approach and deriving the probability of hypotheses in the Bayesian approach.

Lastly, it is worth mentioning that it is possible [59] to extend the PDF of *s* in Equation (34) by including the information on how the discriminating variables distribute for a signal or background population. The authors of Ref. [59] showed that by performing such an extension not only can one avoid discarding data based on a discrimination variable (which inevitably discards also part of the signal events), but one can also increase the resolution of the signal estimation.

### *3.4. Bounds, Confidence and Credible Intervals*

We have shown so far how, given the number of events observed in the ROI, one can estimate the source signal *s* and its significance. However, the statistical analysis would be incomplete without also reporting lower and upper bounds on the inferred parameters. In the former case they are referred to in the literature as lower limits (LLs), while in the latter as upper limits (ULs), with the interpretation that values of the parameters below the LL or above the UL are less likely to be true. They are particularly useful when the detection is not significant, for instance when the source is too dim, and therefore one would like to provide an UL on the strength of the signal *s*.

In the frequentist approach, these bounds are obtained by looking at the long-run behavior of the statistic: a threshold value S<sup>∗</sup> of the statistic is defined such that, in infinite experiments with the parameter fixed to its true value *θ*¯, we would observe S ≤ S<sup>∗</sup> only *x*% of the time. The lower or upper bound *θ<sup>x</sup>* is then defined such that S(*θx*) = S<sup>∗</sup>. In other words, we look for the values of the parameter that are excluded with a *x*% CL. The statistic S is generally constructed to increase monotonically for values of *θ* smaller or bigger than the best estimated value *θ*ˆ, which is by definition the value whose exclusion can be claimed with a 0% CL. For an UL (LL) this means that values of *θ* bigger (smaller) than *θ<sup>x</sup>* are excluded with higher CL and are therefore less likely to be true. This is schematically shown in the left panel of Figure 8, where the statistic S is shown as a function of the parameter *θ* for different experiments in which *θ* is fixed to the true value *θ*¯ (vertical line).

By searching for *θ<sup>x</sup>* such that S(*θx*) = S<sup>∗</sup> we obtain a LL *θx*<sup>LL</sup> and an UL *θx*<sup>UL</sup>. By construction, only *x*% of these curves have S(*θ*¯) ≤ S<sup>∗</sup>; they are shown in black in the left plot of Figure 8, while the remaining curves are shown in grey. This implies that the true value *θ*¯ lies in the interval [*θx*<sup>LL</sup>, *θx*<sup>UL</sup>] *x*% of the time. Such an interval is referred to as a confidence interval (CI) and it is said to *cover* the true value of *θ* *x*% of the time.

**Figure 8.** Evolution of the statistic S as a function of the model parameter *θ* from different pseudo-experiments with fixed true value *θ*¯. The vertical line shows the true value of *θ*, while the horizontal one shows the threshold S<sup>∗</sup> for the statistic such that S(*θ*¯) ≤ S<sup>∗</sup> only *x*% of the time. Black curves are those that fulfill this condition, while grey ones are those that do not. In the left plot the intersection between the curves and the line S = S<sup>∗</sup> defines CIs which by construction *cover* the true value of *θ* *x*% of the time. These CIs cannot be constructed anymore for the curves in the right plot, where the statistic has more than one minimum below S<sup>∗</sup>. In both plots the curves shown are not specific to any particular problem but only serve as a schematic representation.

The condition that values more extreme than the obtained bounds are rejected with higher CL holds for the analyses described in Sections 3.1–3.3, but in general it is not always true, as shown schematically in the right plot of Figure 8. In such cases LLs, ULs and CIs do not have a straightforward interpretation. This is the reason why it is good practice to also report, as a result of the statistical analysis, the likelihood shape as a function of the free parameters.

If *x* is chosen to be 68, the interval between the lower and upper bounds defines the so-called 68% CI. When the background is estimated from an OFF measurement (see Section 3.3) we can use the statistic defined in Equation (27), which is a *χ*<sup>2</sup> random variable with 1 degree of freedom<sup>14</sup>. By looking for the bounds at which S = 1 we obtain the 68% CI of *s*, which for large count numbers is given by

$$\left[\hat{s} - \sqrt{N_{on} + \alpha^2 N_{off}}\ ,\ \hat{s} + \sqrt{N_{on} + \alpha^2 N_{off}}\right] \tag{40}$$

where *s*ˆ is the estimated signal given by *Non* − *αNoff* .

When looking for an UL, the CL is usually set to 95%, with other common values being 90% or 99.9%. In this case the UL *s*<sup>95</sup> is obtained by solving S(*s*95) = 3.84. The coverage of the 68% and 95% CIs is shown in Figure 9 for different true signal and background counts. As one can see from this figure, by imposing S = 1 or 3.84 we have a good coverage (of 68% and 95%, respectively) for large enough counts. However, when the counts are too small the CIs tend to *undercover* the true value of *s*. Such a problem is well known and requires ad hoc adjustments [47,64] in order to recover the desired coverage.
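The procedure can be sketched end to end with Equations (26), (27) and (40) (illustrative counts and function names are ours; the CI edges are found by bisection on the profile-likelihood statistic):

```python
import math

n_on, n_off, alpha = 150, 200, 0.5   # illustrative numbers
s_hat = n_on - alpha * n_off         # maximum-likelihood signal estimate = 50

def profiled_b(s):
    """Background value maximizing the likelihood for fixed s (Eq. 26)."""
    N = n_on + n_off - (1.0 + 1.0 / alpha) * s
    return (N + math.sqrt(N * N + 4.0 * (1.0 + 1.0 / alpha) * s * n_off)) \
        / (2.0 * (1.0 + alpha))

def statistic(s):
    """Profile-likelihood-ratio statistic of Eq. (27)."""
    bb = profiled_b(s)
    return 2.0 * (n_on * math.log(n_on / (s + alpha * bb))
                  + n_off * math.log(n_off / bb)
                  + s + (1.0 + alpha) * bb - n_on - n_off)

def solve(target, a, b):
    """Bisection for statistic(s) = target, assuming monotonicity on [a, b]."""
    fa = statistic(a) - target
    for _ in range(60):
        m = 0.5 * (a + b)
        fm = statistic(m) - target
        if (fm > 0) == (fa > 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

ci_low = solve(1.0, 1e-3, s_hat)     # 68% CI edges, where S = 1
ci_high = solve(1.0, s_hat, 300.0)
ul95 = solve(3.84, s_hat, 300.0)     # 95% UL, where S = 3.84
# For comparison, the half width of the Gaussian approximation of Eq. (40):
half_width = math.sqrt(n_on + alpha**2 * n_off)   # = sqrt(200) ≈ 14.1
```

For these counts the exact profile-likelihood edges land close to the Gaussian approximation, as expected in the large-count regime.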

In the Bayesian context, the concept of *coverage* is meaningless, since the objective of the analysis is not to look at the long-run performance of a given statistic, but to provide a probabilistic statement on the model parameters. In this case, CIs are replaced by credible intervals, whose purpose is to provide the analyzer with an interval where the model parameter lies with a given probability. Let us assume that we are interested in finding the 68% credible interval [*s*1,*s*2] of the signal *s*. By using the PDF of *s* in Equation (33), such an interval is defined as follows

$$\int\_{s\_1}^{s\_2} p(s \mid N\_{on}, N\_{off}; \alpha) \, ds = 0.68 \tag{41}$$

where *s*<sup>1</sup> and *s*<sup>2</sup> can be chosen<sup>15</sup> to include the values of highest probability density. Similarly the 95% UL *sUL* on *s* is obtained from

$$\int\_{s\_{UL}}^{\infty} p(s \mid N\_{on}, N\_{off}; \alpha) \, ds = 0.05. \tag{42}$$
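Under flat priors, the posterior of Equation (34) can be evaluated on a grid and integrated numerically to obtain Bayesian bounds like that of Equation (42) (a minimal sketch; the example counts match the earlier ON/OFF example and the function name is ours):

```python
import math

def posterior_grid(n_on, n_off, alpha, s_max, n_pts):
    """Normalized grid evaluation of the posterior of Eq. (34), flat priors."""
    r = math.log(1.0 + 1.0 / alpha)
    lw = [math.lgamma(n_on + n_off - k + 1) + k * r - math.lgamma(n_on - k + 1)
          for k in range(n_on + 1)]
    m = max(lw)
    ss = [s_max * (i + 0.5) / n_pts for i in range(n_pts)]
    ps = [sum(math.exp(w - m + k * math.log(s) - s - math.lgamma(k + 1))
              for k, w in enumerate(lw)) for s in ss]
    norm = sum(ps)
    return ss, [p / norm for p in ps]

# ON/OFF example used earlier in the text: N_on = 21, N_off = 10, alpha = 1.
ss, ps = posterior_grid(21, 10, 1.0, s_max=60.0, n_pts=2400)
mode = ss[ps.index(max(ps))]
# 95% UL in the spirit of Eq. (42): smallest s with 95% of probability below it.
acc, s_ul = 0.0, ss[-1]
for s, p in zip(ss, ps):
    acc += p
    if acc >= 0.95:
        s_ul = s
        break
```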

**Figure 9.** Evolution of the coverage of the CIs obtained from solving S = 1 (grey line) or S = 3.84 (black line) as a function of the signal *s*. S is the statistic defined in Equation (27). The dashed horizontal lines show the expected coverage from the assumption that <sup>S</sup> is a *<sup>χ</sup>*2-random variable with 1 degree of freedom. In each MC simulation the observed counts *Non* and *Noff* were simulated from a Poisson distribution with expected counts *s* + *αb* and *b*, respectively. **Left panel**: the expected background count is fixed to 1. **Right panel**: the expected background count is fixed to 10. In both plots *α* = 0.5 is assumed.

A comparison between the confidence and credible intervals, computed with the frequentist and Bayesian approaches, respectively, can be found in Refs. [59,61,63]. When comparing them it is, however, important to remember that the two approaches provide the answer to two completely different questions. In the frequentist case the analyzer is given a procedure for computing an interval that in infinite experiments will cover the true value of the parameters a desired fraction of the time. The parameter is fixed in these infinite experiments and the coverage is usually checked by performing MC simulations. In the Bayesian approach, instead, the model parameters are not fixed and they lie in the computed interval with a given probability.

### **4. Flux Estimation and Model Parameter Inference**

We have reached the final step of the inference analysis (see Figure 1), which started in Section 2 from the shower image data: estimating the source flux and model parameters. The starting point of this analysis is the expected signal count *s*, whose estimation from the event list is described in Section 3. Taking into account the exposure of the observation, given by the energy (*E*), time (*t*) and solid angle (Ω) range (hereafter denoted by Δ) in which the events have been collected, we have

$$s = \int_{\Delta} \Phi'(E_r, \hat{\mathbf{n}}_r, t)\, dE_r\, d\hat{\mathbf{n}}_r\, dt \tag{43}$$

where Φ′ is the differential observed flux with units of 1/(solid angle × time × energy), while *Er* and **n**ˆ *<sup>r</sup>* are the reconstructed energy and arrival direction (a perfect temporal resolution is assumed, being of the order of hundreds of nanoseconds). The observed flux is given by the convolution of the differential source flux Φ with the IRF of the telescope:

$$\Phi'(E_r, \hat{\mathbf{n}}_r, t) = \int_E \int_{\Omega} \Phi(E, \hat{\mathbf{n}}, t) \cdot \text{IRF}(E_r, \hat{\mathbf{n}}_r, E, \hat{\mathbf{n}}) \, dE \, d\hat{\mathbf{n}}. \tag{44}$$

The IRF can be thought of as the probability of detecting a photon with energy *E* and arrival direction **n**ˆ and assigning to it a reconstructed energy *Er* and arrival direction **n**ˆ *<sup>r</sup>*, respectively. Following the rules of conditional probability, the IRF can be expanded as follows:

$$\text{IRF}(E_r, \hat{\mathbf{n}}_r, E, \hat{\mathbf{n}}) = \text{IRF}(E_r \mid \hat{\mathbf{n}}_r, E, \hat{\mathbf{n}}) \cdot \text{IRF}(\hat{\mathbf{n}}_r \mid E, \hat{\mathbf{n}}) \cdot \text{IRF}(E, \hat{\mathbf{n}}). \tag{45}$$

Since *Er* and **n**ˆ *<sup>r</sup>* are conditionally independent variables<sup>16</sup> given *E* and **n**ˆ, the above expression can be rewritten as

$$\text{IRF}(E_r, \hat{\mathbf{n}}_r, E, \hat{\mathbf{n}}) = D(E_r \mid E, \hat{\mathbf{n}}) \cdot \text{PSF}(\hat{\mathbf{n}}_r \mid E, \hat{\mathbf{n}}) \cdot \varepsilon(E, \hat{\mathbf{n}}), \tag{46}$$

where we have identified the first term with the energy dispersion *D*, the second one with the point spread function (PSF) and the last term with the collection efficiency *ε* of the telescopes. So far, we have made the assumption that the observation parameters (such as the zenith angle of the observation) are (or can be assumed to be) constant during the observation.

We can further simplify the IRF expression by ignoring (i.e., by integrating out) the information on the arrival direction **n**ˆ, thus reducing the dimension of the problem from 3 to 1. This assumption is justified if, for instance, the observation is performed on a point-like source, which will also be assumed hereafter to be steady. Having simplified our problem<sup>17</sup>, Equations (43) and (44) can then be rewritten, respectively, as

$$s = \int\_{\Delta} \Phi'(E\_r) dE\_r \tag{47}$$

$$\Phi'(E_r) = \int_E \Phi(E) \cdot D(E_r \mid E) \cdot \varepsilon(E) \, dE. \tag{48}$$

In order to obtain the flux Φ from the expected counts $s$, two approaches are used: *unfolding* and *forward folding*.
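Before turning to the two approaches, Equations (47) and (48) can be illustrated numerically. The following sketch (all spectral parameters, resolutions, and the efficiency curve are hypothetical, chosen only for illustration) discretizes the true-energy axis and convolves an assumed power-law flux with a Gaussian energy dispersion and a toy collection efficiency:

```python
import numpy as np

# Toy ingredients (all values hypothetical, for illustration only)
E = np.logspace(-1, 2, 200)              # true-energy grid in TeV
dE = np.gradient(E)                      # grid spacing for the discrete integral
phi = 1e-11 * E**-2.5                    # assumed power-law source flux Phi(E)
eff = 1.0 / (1.0 + (0.2 / E)**3)         # toy collection efficiency eps(E)

def D(E_r, E_true, res=0.15):
    """Toy Gaussian energy dispersion D(E_r | E) with relative resolution `res`."""
    sigma = res * E_true
    return np.exp(-0.5 * ((E_r - E_true) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Equation (48), discretized: Phi'(E_r) = sum_E Phi(E) D(E_r|E) eps(E) dE
E_r = np.logspace(-1, 2, 50)
phi_obs = np.array([np.sum(phi * D(er, E) * eff * dE) for er in E_r])

# Equation (47): expected counts over the full reconstructed-energy range
s = np.sum(phi_obs * np.gradient(E_r))
```

The observed spectrum `phi_obs` is suppressed at low energy by the efficiency threshold and smeared by the dispersion, which is exactly the effect the two approaches below try to undo or account for.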

### *4.1. Unfolding*

If we divide the energy range into bins, the expected count of gamma rays in the $i$-th bin $\Delta_i$ of reconstructed energy is (combining Equations (47) and (48))

$$s_i = \int_{\Delta_i} dE_r \sum_j \int_{\Delta_j} dE \, \Phi(E) \, D(E_r|E) \, \varepsilon(E) \equiv \sum_j R_{i,j} \, \bar{s}_j, \tag{49}$$

where $\bar{s}_j$ is the expected number of gamma rays from the source flux in the $j$-th bin $\Delta_j$, i.e.,

$$\bar{s}\_{\dot{\gamma}} = \int\_{\Lambda\_{\dot{\gamma}}} \Phi(E) dE. \tag{50}$$

The matrix $R_{i,j}$ is the *response matrix*: the probability of detecting (due to the collection efficiency $\varepsilon$) a photon with energy in the range $\Delta_j$ and assigning to it (due to the energy dispersion $D$) a different energy bin $\Delta_i$. From Equation (49), its expression is given by

$$R_{i,j} = \frac{1}{\bar{s}_j} \int_{\Delta_i} dE_r \int_{\Delta_j} dE \, \Phi(E) \, D(E_r|E) \, \varepsilon(E), \tag{51}$$

while in practice $R_{i,j}$ is estimated from MC simulations as

$$R_{i,j} = \frac{n_{i,j}^{\gamma}}{N_j^{\gamma}}, \tag{52}$$

where $N_j^{\gamma}$ is the total number of gamma-ray events simulated (according to an assumed source flux Φ) with true energy in the range $\Delta_j$, and $n_{i,j}^{\gamma}$ the number of those same events that have been detected by the telescope with a reconstructed energy in the range $\Delta_i$. Clearly, by summing over $i$ we recover the binned collection efficiency

$$\varepsilon_j = \sum_i R_{i,j}. \tag{53}$$

An example of the binned collection efficiency along with the energy dispersion from the same experiment can be found in Figure 10.

**Figure 10. Left panel**: evolution in energy of the collection efficiency *ε*(*E*) multiplied by the collection area of the telescope, which for IACTs is generally of the order of 10<sup>5</sup> m<sup>2</sup>. **Right panel**: evolution in reconstructed/estimated energy and true energy of the binned energy dispersion (or migration matrix). Both figures are reprinted with permission from Ref. [67]; Copyright 2007 Albert et al.
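The MC estimate of Equations (52) and (53) can be sketched with made-up toy simulations (the assumed spectrum, efficiency curve, and 15% energy resolution below are invented): draw true energies, apply a toy detection probability and Gaussian smearing, then histogram reconstructed versus true energy.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy MC: true energies drawn from an assumed power law (index 2.5, E > 0.1 TeV)
N = 100_000
E_true = 0.1 * (1 - rng.random(N)) ** (-1 / 1.5)

# Toy detection: efficiency rising with energy, 15% Gaussian energy smearing
detected = rng.random(N) < 1.0 / (1.0 + (0.3 / E_true) ** 2)
E_reco = E_true * (1 + 0.15 * rng.standard_normal(N))

edges = np.logspace(-1, 2, 21)                  # shared true/reco binning
N_j, _ = np.histogram(E_true, bins=edges)       # simulated events per true bin
n_ij, _, _ = np.histogram2d(E_reco[detected], E_true[detected],
                            bins=[edges, edges])  # detected: reco (rows) vs true (cols)

R = n_ij / np.maximum(N_j, 1)                   # Equation (52), guarded against empty bins
eps_j = R.sum(axis=0)                           # Equation (53): binned collection efficiency
```

The columns of `R` encode the energy migration, and their sums `eps_j` reproduce the rising efficiency curve, mirroring the two panels of Figure 10.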

The goal of the unfolding procedure is to find a solution of Equation (49) by inverting the response matrix:

$$\bar{s}_j = \sum_i R_{j,i}^{-1} \, s_i. \tag{54}$$

Thus, unfolding is essentially a deconvolution problem and shares its typical issues, such as the fact that the response matrix is, in general, non-invertible. As with all ill-posed problems, *regularization* procedures are required in order to find a solution and to prevent overfitting. In the context of IACT analysis, common regularization procedures are those of Tikhonov [68], Bertero [69] and Schmelling [70]. For a more detailed discussion and comparison of these approaches, with applications to data collected with the MAGIC telescopes, see Ref. [67]. It is good practice to show the unfolding result obtained with several of these approaches to cross-check the reconstructed flux, and to also report, along with the reconstructed flux points $\bar{s}_j$, their correlation matrix. Such a correlation matrix is needed if one wishes to fit the flux points $\bar{s}_j$ with a spectral model.
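The need for regularization can be demonstrated with a toy example (all numbers invented; the smoothing operator below is a generic second-derivative Tikhonov term, not any experiment's actual implementation): a naive inversion of Equation (54) amplifies Poisson noise, while a Tikhonov-regularized least-squares solution suppresses it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix: strong Gaussian migration between neighbouring bins
n = 10
i_idx, j_idx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
R = np.exp(-0.5 * ((i_idx - j_idx) / 1.5) ** 2)
R /= R.sum(axis=0)                      # columns normalised (full efficiency)

s_true = np.linspace(10, 100, n)        # toy true counts per bin
s_obs = rng.poisson(R @ s_true)         # folded counts with Poisson fluctuations

# Naive inversion: formally solves Eq. (54) but amplifies the statistical noise
naive = np.linalg.solve(R, s_obs.astype(float))

# Tikhonov regularization: minimise |R x - s|^2 + tau |L x|^2,
# with L the discrete second-derivative (curvature) operator
tau = 1.0
L = np.diff(np.eye(n), 2, axis=0)       # shape (n-2, n)
A = np.vstack([R, np.sqrt(tau) * L])
b = np.concatenate([s_obs, np.zeros(n - 2)])
reg, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Varying `tau` trades bias against variance, which is why results are typically cross-checked with several regularization schemes as noted above.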

So far, the unfolding approach has been discussed as a geometrical problem: given a known vector $s$ and a known matrix $R$, we wish to invert $R$ in order to find the unknown vector $\bar{s}$, as shown in Equation (54). In the *Bayesian unfolding* approach, the problem is instead a probabilistic one: given our prior knowledge $I$ and the expected counts $s_i$ in the reconstructed energy bins, we wish to obtain the probability distribution of $\bar{s}_j$:

$$p(\bar{s}_j \mid s_i, I) = \frac{p(s_i \mid \bar{s}_j, I) \cdot p(\bar{s}_j \mid I)}{\sum_j p(s_i \mid \bar{s}_j, I) \cdot p(\bar{s}_j \mid I)}. \tag{55}$$

The *prior* $p(\bar{s}_j \mid I)$ is the binned, normalized ($\sum_j p(\bar{s}_j \mid I) = 1$) flux that we initially assume for the source, while the term $p(s_i \mid \bar{s}_j, I)$ is the probability of measuring an expected signal count in the reconstructed energy bin $\Delta_i$ given the *true* signal count $\bar{s}_j$ in the energy bin $\Delta_j$. This term is related to the response matrix defined in Equation (51).

An *iterative* method for obtaining the *posterior* in Equation (55), which takes as prior $p(\bar{s}_j \mid I)$ the posterior obtained from the previous iteration, can be found in Ref. [71], later revised and improved by the same author in Ref. [72]. More recently, the author of Ref. [73] proposed a *fully Bayesian unfolding* with applications to numerous examples.
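The iterative scheme of Ref. [71] can be sketched as follows; the response matrix, counts, flat initial prior, and fixed iteration count are all illustrative simplifications (a real analysis would include a convergence criterion and uncertainty propagation).

```python
import numpy as np

def bayesian_unfold(R, counts, n_iter=4):
    """Iterative Bayesian unfolding sketch.
    R[i, j] = P(reco bin i | true bin j); columns of R sum to eps_j (Eq. (53))."""
    n_true = R.shape[1]
    eps = R.sum(axis=0)
    prior = np.full(n_true, 1.0 / n_true)          # flat initial prior
    for _ in range(n_iter):
        # Bayes' theorem (Eq. (55)) per reco bin i: P(true j | reco i)
        post = R * prior
        post /= post.sum(axis=1, keepdims=True)
        # Redistribute observed counts to true bins, correct for efficiency
        unfolded = (post * counts[:, None]).sum(axis=0) / eps
        prior = unfolded / unfolded.sum()          # posterior becomes next prior
    return unfolded

# Toy example: 3 true bins, mild migration, full efficiency (invented numbers)
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
counts = np.array([100.0, 150.0, 80.0])
s_bar = bayesian_unfold(R, counts)
```

With full efficiency the unfolded counts conserve the total observed counts by construction, a useful sanity check for any implementation.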

### *4.2. Forward Folding*

The main advantage of the unfolding algorithm is its ability to produce a distribution that is as equivalent as possible to the observed distribution of events, with physical and instrumental effects removed. Although some assumptions about the flux are inevitable, as discussed in Section 4.1, the desired outcome of the unfolding procedure is to "interpret" the data as little as possible.

At the opposite end, we find the forward folding approach. In this case, a parametric model for the intrinsic flux is assumed, and the final result is an estimate of the free model parameters $\theta$. When it is reasonable to believe that the source flux can be described by one or a family of parametric functions, forward folding is always preferable to unfolding, being easier to implement and free of those problems, typical of unfolding methods, that have to be cured by regularization. The problem falls therefore within the realm of "fitting" problems: searching for the values $\hat{\theta}$ that maximize the likelihood function, defined as the probability of obtaining the observed data given the model parameters $\theta$. The observed data are the list of $N_{on}$ events (obtained from the shower image as discussed in Section 2) with their reconstructed energy, arrival direction and time (hereafter denoted for simplicity $\mathbf{x}$). If the background is estimated from an OFF region (see Section 3.3), one has to also take into account the observed counts $N_{off}$ in the OFF region and the normalization factor $\alpha$. The likelihood function is

$$\mathcal{L}(\theta \mid \pi, N_{on}, N_{off}, \alpha, \mathbf{x}_1, \dots, \mathbf{x}_{N_{on}}) = p(\pi \mid \theta) \cdot \mathcal{P}(N_{on} \mid s + \alpha b) \cdot \mathcal{P}(N_{off} \mid b) \cdot \prod_{i=1}^{N_{on}} \left( \frac{f_s(\mathbf{x}_i \mid \theta, \pi) + \alpha f_b(\mathbf{x}_i)}{s + \alpha b} \right), \tag{56}$$

where $\pi$ are the nuisance parameters of the model and $p(\pi \mid \theta)$ their probability distribution given $\theta$. The function $f_s$ is the differential source flux with the IRF of the telescopes taken into account, such that

$$\int d\mathbf{x} \, f_s(\mathbf{x} \mid \theta, \pi) = s, \tag{57}$$

where $s$ is the expected signal count in the ON region. The expected background count $b$ in the OFF region is instead provided by

$$\int d\mathbf{x} \, f_b(\mathbf{x}) = b, \tag{58}$$

where $f_b$ is the differential background template model. The function $\mathcal{P}$ is the Poisson distribution defined in Equation (12). The likelihood function defined in Equation (56) is usually referred to as the "unbinned likelihood", to distinguish it from its binned version

$$\mathcal{L}(\theta \mid \pi, N_{on}^{(1)}, \dots, N_{off}^{(1)}, \dots, \alpha) = p(\pi \mid \theta) \cdot \prod_{i=1}^{\text{all bins}} \mathcal{L}_i(\theta \mid \pi, N_{on}^{(i)}, N_{off}^{(i)}, \alpha), \tag{59}$$

where $\mathcal{L}_i$ is the likelihood of the single $i$-th bin (in which $N_{on}^{(i)}$ and $N_{off}^{(i)}$ counts have been observed in the ON and OFF regions, respectively), given by

$$\mathcal{L}_i(\theta \mid \pi, N_{on}^{(i)}, N_{off}^{(i)}, \alpha) = \mathcal{P}(N_{on}^{(i)} \mid s_i(\theta, \pi) + \alpha b_i) \cdot \mathcal{P}(N_{off}^{(i)} \mid b_i). \tag{60}$$

The variable $b_i$ has to be treated as a nuisance parameter: fixed to the value given in Equation (26) if the frequentist approach is implemented, or integrated out in the Bayesian one. The variable $s_i(\theta, \pi)$ is instead obtained from the integral in Equation (57), with integration limits on $\mathbf{x}$ defined by the bin under consideration.
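A minimal forward-folding fit in the spirit of Equations (59) and (60) can be sketched as follows. Everything here is a toy: the binning, the "true" parameters used to generate counts, and, as a simplification, the background nuisance parameters fixed to the measured OFF counts rather than the profile-likelihood value of Equation (26).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(1)

# Toy setup: 8 reconstructed-energy bins, ON/OFF normalisation alpha
E = np.logspace(-1, 1, 9)
E_c = np.sqrt(E[:-1] * E[1:])                     # geometric bin centres
alpha = 0.2

def expected_signal(theta):
    """Toy s_i(theta): power law integrated crudely over each bin."""
    norm, index = theta
    return norm * E_c ** (-index) * np.diff(E)

# Generate toy data with invented "true" parameters and a flat background
b_true = np.full(E_c.size, 50.0)
N_on = rng.poisson(expected_signal([100.0, 2.0]) + alpha * b_true)
N_off = rng.poisson(b_true)

def neg_log_like(theta):
    norm, _ = theta
    if norm <= 0:
        return np.inf
    s = expected_signal(theta)
    b = N_off          # simplification: nuisance b_i fixed to the OFF counts
    return -(poisson.logpmf(N_on, s + alpha * b).sum()
             + poisson.logpmf(N_off, b).sum())

fit = minimize(neg_log_like, x0=[50.0, 1.5], method="Nelder-Mead")
norm_hat, index_hat = fit.x
```

With large counts the recovered spectral index lands close to the injected value of 2.0; confidence intervals would follow from the likelihood-ratio profile around $\hat{\theta}$.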

Applications of the forward folding procedure in IACTs can be found, for instance, in fundamental physics studies, such as searches for dark matter [74] or tests of the equivalence principle from the time of flight of cosmic gamma rays [75].

### **5. Discussion**

Since the discovery of TeV emission from the Crab Nebula in 1989 by the Whipple collaboration [76], IACTs have given birth to a mature field of gamma-ray astronomy. Instruments such as MAGIC, H.E.S.S., and VERITAS have discovered numerous astrophysical sources at TeV energies, allowing investigation of the physics of remote cosmic objects. Apart from the technological development needed for the construction and maintenance of these telescopes, a huge effort has been carried out in recent decades to adapt and explore statistical tools aimed at extracting all of the information contained in the collected data.

The most challenging issue in the analysis of IACT data is the predominant presence of background events, which requires detailed studies such as the estimation of the background from OFF regions discussed in Section 3.3. Gamma rays compose only a small fraction of the cosmic flux that hits our atmosphere producing the Cherenkov light observed by the telescopes. Techniques such as multivariate analysis (see Section 2.5) or CNNs (see Section 2.6) are the current *state of the art* for discriminating gamma-ray events from the background.

Another important factor that makes the statistical analysis so important and challenging for these instruments is that, unfortunately, we do not have a pure source of gamma rays that is steady and under our control, and can therefore be used for calibrating the telescopes. The closest we have to a steady and bright gamma-ray source is the Crab Nebula [77], which is indeed used as a standard for calibrating the instrument whenever an IACT observatory starts its operations. In order to infer instrumental properties, such as the energy threshold, a signal from the Crab Nebula is collected and then compared with the expected response (obtained from MC simulations). MC simulations are therefore of huge importance for IACTs and, moreover, they also provide a way of training the BDT and RF algorithms briefly discussed in Section 2.5. The problem is that instrumental effects (such as the reflectance of the mirrors) and the atmospheric conditions have to be taken into account in these MC simulations, which in most cases requires approximations and idealized instrumental parametrizations. This is the reason why different efforts have been made, as discussed in Section 2.4, to make these MC simulations as realistic as possible.

Once the above issues are overcome, we have to quantify, given the list of detected events, how likely it is that a flux of gamma rays has been detected and how confidently we can assign values to such a flux. This has been discussed in Section 3, where we showed, using the frequentist and Bayesian approaches, different solutions to this problem, emphasizing their differences with examples and mentioning some of the most recent developments. Lastly, in Section 4, the IRF is taken into account in order to provide a flux estimate that is as similar as possible to the intrinsic flux of gamma rays.

With the construction of CTA [78], the next generation of IACTs will have ten times the sensitivity of the current instruments, and the statistical tools described in this review will become more and more indispensable in order to push the capabilities of these telescopes to their limits.

**Funding:** This review work was funded by the Research Council of Norway grant, project number 301718.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We would like to thank Julia Djuvsland for useful comments on an early version of this manuscript and the anonymous reviewers for their insightful inputs and suggestions.

**Conflicts of Interest:** The author declares no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this manuscript:


### **Notes**


### **References**

1. Aleksi´c, J.; Ansoldi, S.; Antonelli, L.A.; Antoranz, P.; Babic, A.; Bangale, P.; Barceló, M.; Barrio, J.; González, J.B.; Bednarek, W.; et al. The major upgrade of the MAGIC telescopes, Part II: A performance study using observations of the Crab Nebula. *Astropart. Phys.* **2016**, *72*, 76–94. [CrossRef]


### *Review* **Evolution of Data Formats in Very-High-Energy Gamma-Ray Astronomy**

**Cosimo Nigro 1,\*, Tarek Hassan <sup>2</sup> and Laura Olivera-Nieto <sup>3</sup>**


**Abstract:** Most major scientific results produced by ground-based gamma-ray telescopes in the last 30 years have been obtained by expert members of the collaborations operating these instruments. This is due to the proprietary data and software policies adopted by these collaborations. However, the advent of the next generation of telescopes and their operation as observatories open to the astronomical community, along with a generally increasing demand for open science, confront gamma-ray astronomers with the challenge of sharing their data and analysis tools. As a consequence, in the last few years, the development of open-source science tools has progressed in parallel with the endeavour to define a standardised data format for astronomical gamma-ray data. The latter constitutes the main topic of this review. Common data specifications provide equally important benefits to the current and future generation of gamma-ray instruments: they allow the data from different instruments, including legacy data from decommissioned telescopes, to be easily combined and analysed within the same software framework. In addition, standardised data accessible to the public, and analysable with open-source software, grant fully-reproducible results. In this article, we provide an overview of the evolution of the data format for gamma-ray astronomical data, focusing on its progression from private and diverse specifications to prototypical open and standardised ones. The latter have already been successfully employed in a number of publications paving the way to the analysis of data from the next generation of gamma-ray instruments, and to an open and reproducible way of conducting gamma-ray astronomy.

**Keywords:** very-high-energy gamma-ray astronomy; astroparticle physics; open science; data format

### **1. Introduction**

Gamma-ray astronomy, currently observing the non-thermal universe over more than 7 decades in energy, is conducted with different classes of instruments operating in two complementary energy ranges [1]. Space-borne telescopes, sensitive in the so-called high-energy regime (HE, 100 MeV < *E* < 100 GeV), directly detect the gamma rays through their pair-conversion in an instrumented volume [2]. Ground-based telescopes, sensitive in the so-called very-high-energy regime (VHE, *E* > 100 GeV), detect the particle cascade (or shower) generated by gamma rays interacting with atmospheric nuclei (via e± pair production and Bremsstrahlung) using two different techniques [3]. Imaging atmospheric Cherenkov telescopes (IACTs) use a large reflector (∼10 m) and a photomultiplier camera to image the Cherenkov light emitted by the charged component of the shower. Particle samplers rely on an array of detectors (distributed over a surface up to ∼1 km²) to directly sample the charged component using, for example, scintillators or water tanks in which further Cherenkov light is produced and detected (water Cherenkov detectors, WCD). VHE astroparticle physics will be revolutionised in this decade by an upcoming generation of ground-based instruments built with the objective to improve by an order of magnitude the sensitivity of the current ones: the Cherenkov telescope array (CTA) [4] for IACTs; the Large High Altitude Air Shower Observatory (LHAASO) [5] and the Southern Wide-field Gamma-ray Observatory (SWGO) [6] for particle samplers.

**Citation:** Nigro, C.; Hassan, T.; Olivera-Nieto, L. Evolution of Data Formats in Very-High-Energy Gamma-Ray Astronomy. *Universe* **2021**, *7*, 374. https://doi.org/10.3390/universe7100374

Academic Editors: Ulisses Barres de Almeida, Michele Doro and Binbin Zhang

Received: 9 September 2021; Accepted: 30 September 2021; Published: 8 October 2021

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In addition to different detection techniques, the current generations of HE and VHE instruments adopt distinct data and software policies. As is typical for space observatories, HE gamma-ray telescopes kept their data proprietary for a limited amount of time (usually one year) before releasing them publicly. This has been the case for both currently operating HE gamma-ray telescopes: the *Fermi* Large Area Telescope (*Fermi*-LAT) [7] and the Astrorivelatore Gamma ad Immagini Leggero (AGILE) [8]. Their data are nowadays made promptly available via web-based platforms, referred to as science data centers, providing astronomers with an interface to retrieve the data and the science tools to perform their analysis [9,10]. VHE telescopes of the current generation, on the other hand, have been operated under stricter data and software policies. Telescopes like the high energy stereoscopic system (H.E.S.S.) or the major atmospheric gamma-ray imaging Cherenkov (MAGIC) telescopes have traditionally produced scientific results with proprietary data and closed-source software [11,12]. Few examples of public VHE data or software exist; worth mentioning are: the very energetic radiation imaging telescope array system (VERITAS), which has publicly released one of its analysis chains under an open-source license [13]; the first G-APD Cherenkov telescope (FACT), which has made public its analysis chain [14], a small sub-sample of its data and quick-look analysis results on all the data collected [15]; and the high-altitude water Cherenkov (HAWC) observatory, which has provided some high-level data, mostly meant to reproduce the results of major publications [16]. More recent efforts of data sharing in a standardised format will be covered later in this review. Generally speaking, besides sparse endeavours, VHE gamma-ray data largely remain inaccessible to astronomers outside the collaborations gathering them.
This situation is bound to change with the forthcoming CTA that will represent the first gamma-ray experiment operated as a proposal-driven open observatory [17]. External scientists will be able to submit observational proposals; data will be proprietary to the principal investigators typically for one year and then released to the public. This implies, as in the case of HE gamma-ray instruments, the necessity to produce accessible data and tools for users external to the collaboration to perform their scientific analyses.

In light of these requirements, VHE gamma-ray astronomers have started developing open-source data-analysis tools (e.g., ctools [18] and Gammapy [19]) and, in parallel, a standardised format for astronomical gamma-ray data. This review will focus on the latter. The data level expected to be shared by the next generation of VHE observatories with external observers (as already routinely done by *Fermi*-LAT and AGILE) is a *high* data level whose purpose is the production of scientific results (i.e., measurements of the properties of an astrophysical source: flux, morphology, etc.). It contains a reduced amount of information compared to the *low* (or calibrated) data level, which is strictly connected with the particular detection or analysis technique. Specifically, it contains lists of detected photons with their estimated physical observables (energy, direction, etc.) and a characterisation of the response of the system. It is abstract enough to represent data from instruments employing diverse detection techniques, such as IACTs and WCDs. Since it is difficult to detach the discussion of the high-level data format from the software provided to analyse it, we will also comment on aspects of software development and policies.

This review is structured as follows: the progression of the data format from previous specifications is discussed in Section 2, along with its current status and working principles. In Section 3, we review some projects that have already successfully employed the format, either to validate the capabilities of the science tools, to illustrate the possibility of multi-instrument analysis with current gamma-ray instruments, or to extend the format to particle samplers. In Section 4, we gather some ideas for the future of the format and its possible expansion. We provide our conclusions in Section 5.

### **2. Data Formats for Very-High-Energy Gamma-Ray Astronomy**

*2.1. Background: Data Model in the Current Generation of VHE Instruments*

VHE gamma-ray astronomy inherited, along with the hardware techniques, the software solutions of particle physics. In the late 1990s and early 2000s, C++ and the ROOT [20] framework dominated the field. Hence, software for VHE data reduction and analysis has mostly been built in this environment. As already mentioned, even if some of these tools are accessible, little documentation is publicly available about the private analysis chains and the data they produce. Nonetheless, from the available material, a common data reduction workflow can be inferred for VHE gamma-ray telescopes, sketched in Figure 1.

**Figure 1.** Schematisation of the progressive data reduction and data levels of an IACT. Raw data contain the signal sampled from the photomultipliers at the occurrence of a trigger event (Data Level 0). Calibrated data (Data Level 1) contain the pixelated image of the Cherenkov light of the shower. The latter can be parametrised with few geometrical quantities and used to determine the observables of the original shower, including its probability of being a gamma-ray shower (Data Level 2). The detected events can be gathered in a list of gamma ray candidates, together with the functions representing the response of the system (the so-called instrument response function, IRF), e.g., the collection area of the system as a function of the energy or the bias of its energy reconstruction (Data Level 3). This information can be used to perform a statistical analysis obtaining the so-called science products, in this case the spectrum of the source (Data Level 4).

In the case of an IACT, the raw output of the data acquisition typically consists of binary files containing the waveforms of all the camera pixels, sampled at the occurrence of a trigger event. The raw data are reduced to a list of quantities per pixel (e.g., charge and arrival time) aggregated in the so-called *calibrated* files, with a size of several GB for each observational run, typically around ∼30 min (in what follows, the sizes indicated for each data level are taken from [21], so they refer to VERITAS; one can compare with similar figures reported in [22] for MAGIC). The Cherenkov light of the shower typically illuminates a few pixels in the camera; this pixelated image, representing the distribution of Cherenkov photons, can be parametrised with simple geometrical quantities [23] connected to the shower properties. Image parameters can be fed, at the next data level, to algorithms estimating these properties (e.g., energy and direction of the primary) and classifying the showers initiated by gamma rays against those initiated by cosmic rays, the irreducible background of ground-based gamma-ray telescopes. In the case of particle samplers, such as WCDs, the data reduction workflow is similar, but instead of camera images, the information is extracted from the pattern of the charge deposited by the shower across the array, as well as from its time evolution. Raw parameters derived from this charge distribution are fed into reconstruction algorithms that, in turn, estimate the relevant shower parameters, such as those mentioned above (see [24] for an overview of the HAWC data reduction pipeline). Having estimated the properties of the shower and of the primary particle generating it, a list of gamma-ray candidates can hence be assembled at the next data level.
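The geometrical image parametrisation mentioned above essentially amounts to charge-weighted image moments: the centroid gives the image position, and the eigenvalues of the second-moment matrix give the major/minor axis spread (length and width). A minimal sketch, with invented pixel coordinates and charges rather than a real camera geometry:

```python
import numpy as np

def image_moments(x, y, q):
    """Centroid, length and width of a shower image from charge-weighted moments."""
    w = q / q.sum()
    cx, cy = np.sum(w * x), np.sum(w * y)
    dx, dy = x - cx, y - cy
    cov = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                    [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    eigvals = np.sort(np.linalg.eigvalsh(cov))      # minor, major axis variances
    width, length = np.sqrt(eigvals)
    return cx, cy, length, width

# Toy elongated image: pixels scattered along a tilted line, Gaussian charges
rng = np.random.default_rng(3)
t = rng.standard_normal(200)
x = 0.5 + t * np.cos(0.4) + 0.02 * rng.standard_normal(200)
y = -0.2 + t * np.sin(0.4) + 0.02 * rng.standard_normal(200)
q = np.exp(-0.5 * t ** 2)
cx, cy, length, width = image_moments(x, y, q)
```

Gamma-ray images tend to be compact ellipses (small width relative to length, pointing back to the source), which is what makes these few numbers so effective for the classification step described above.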

At this stage, the information stored within the data products, generally denoted as *high-level*, is independent of the detection technique, as well as the calibration and analysis methods. High-level data typically consist of a list of gamma-ray events along with a parametrisation of the response of the system, the so-called instrument response function (IRF). The latter provides the information necessary to perform a statistical analysis estimating, for example, the significance of the signal, the flux spectrum, or the light curve of the source, which we refer to as science products.

Throughout the current-generation closed-source analysis chains, the progressively reduced data are stored in the format associated with the ROOT framework, with each collaboration reiterating the effort of defining custom specifications for a data model that shares several commonalities between different experiments. Moreover, even if readable via ROOT, the content of these data products cannot be interpreted by a non-expert analyser. There are noticeable efforts to provide analysis tools wrapping these diverse analysis software packages, such as the multi-mission maximum likelihood framework [25]. The ultimate limitation of these tools is, though, the willingness of the experiments to expose their closed-source software and data format, and the necessity to implement a new plug-in for each of the instruments considered.

Without a common data model or a general software tool oriented to external users, the current generation of VHE instruments faces different concerns over different time perspectives. At present, multi-instrument analyses simply cannot be performed within a common analysis framework using their proprietary data products. As for the future, as the end of their operation approaches, it is worth starting to consider the access to the wealth of data they gathered. If their *legacy* data are to be made public, then a release in their original format will necessitate a release of the analysis software as well, which, in turn, has to be maintained. In addition to not being designed for usage by a large community, this software can rely on libraries that will eventually become deprecated.

### *2.2. GADF: A Unifying Effort*

In the second half of the 2010s, partly to prototype the high-level data format of the forthcoming CTA and partly to exploit newly available open-source data-analysis software such as ctools and Gammapy, VHE astronomers started to explore several software-independent implementations of these high-level data. In 2016, in order to coordinate the parallel efforts and to foster the definition of a common and standardised data model, the *Data Formats for Gamma-ray Astronomy* forum (referred to in short as the "gamma astro data formats", GADF) [26] was established. A community-driven initiative, the GADF consists of documentation [27] hosted on GitHub [28] (Figure 2), specifying the naming scheme, the content, and the metadata of the files containing high-level gamma-ray observations. Though high-level products are the focus of the initiative, specifications for science products are also under discussion. The documentation, openly provided with a Creative Commons Attribution 4.0 license, evolves with the typical GitHub workflow: any interested user can propose changes via *issues* that will be discussed among the active members of the initiative, and implemented via *pull requests* that will be ultimately merged once a consensus is reached. Despite the bias towards IACTs, the flexible development of the format allows it to accommodate data from other types of instruments, such as space-borne telescopes or WCDs. The format has achieved a stable definition and already counts two minor releases, the present being 0.2 [29].

This section illustrates the guiding principles adopted in the development of the GADF specifications, gives an overview of their actual content, and highlights the features that make them generalisable to different gamma-ray instruments. The first version of the GADF was designed for IACTs, since the major contributors were VHE astronomers preparing for CTA. The data model and the breakdown of the data levels foreseen for CTA are presented in [30], introducing the following naming convention (see also Figure 1): the raw output of the data acquisition is defined as data level 0 (DL0); calibrated files as data level 1 (DL1); reconstructed shower parameters as data level 2 (DL2); sets of selected gamma-ray events and the instrument response as data level 3 (DL3); science products (spectra, light curves, sky maps) as data level 4 (DL4); and observatory results, such as catalogues, as data level 5 (DL5). This nomenclature is used within the GADF and will also be adopted in the following text.


**Figure 2.** (**Left**): GitHub repository hosting the development of the *data formats for gamma-ray astronomy* specifications. (**Right**): The repository contains a documentation written in sphinx whose html version can be explored on readthedocs.

### 2.2.1. Format Specifications

As the GADF is currently the only provider of standardised specifications for high-level VHE gamma-ray data, science tools such as ctools and Gammapy base their data structures on them. Compatibility with open-source data-analysis software is not the only objective of the standardisation effort. One of the guiding principles of the GADF is to produce data whose content is clearly documented and easy to interpret. The file format chosen to host the data is the flexible image transport system (FITS) [31], representing a long-time standard in astronomy at all wavelengths. Another fundamental requirement in the design of the data specifications was to rely as much as possible on well-established standards already used in other FITS file productions, such as those by the missions gathered under NASA's High Energy Astrophysics Science Archive Research Center (HEASARC) [32]. NASA's Office of Guest Investigator Program (OGIP) FITS working group [33] already disseminates recommendations on FITS data production to the high-energy astrophysics community. These include standards on keyword usage in metadata, on the storage of time information, and on the representation of response functions, which the GADF extensively follows. The adherence of the GADF to widely used standards ensures additional compatibility with tools already in use by high-energy astrophysicists, such as the FTOOLS [34].

As pointed out, the aim of the GADF initiative was to produce specifications for high-level data; it therefore mostly focuses on DL3. Nonetheless, the forum also discusses data levels higher than DL3. For example, the OGIP spectral file format [35] is adopted to represent VHE gamma-ray one-dimensional (energy-dependent) spectral data. Compatibility with the OGIP standards ensures that DL3 products can be reduced to spectral data digestible by other established multi-mission analysis tools such as sherpa [36,37]. Prototypical specifications for DL4 products (such as sky maps, flux points and light curves) are under discussion and not yet stable.

### 2.2.2. GADF DL3 Data

The DL3 is the data level that contains a list of gamma-ray event candidates and the response of the system. All the information in the DL3 files is therefore post-calibration, i.e., it already incorporates all the low-level information related to the detector (calibration, gain corrections, digital-count-to-photo-electron conversion), which is hence omitted. A FITS file consists of several extensions, called header data units (HDUs). Each HDU is composed of a header unit, typically containing metadata, and a data unit, containing an n-dimensional array (an image) or a table (in ASCII or binary format). All data units in DL3 files are stored as binary tables.

One of the file extensions contains the event list and, in the associated data unit, a flat table with a column for each event property (see Figure 3). In the current specifications, columns listing the event identification number (in the DAQ system), energy, sky coordinates (right ascension and declination) and timestamp are mandatory. Optional columns might include results of the classification algorithms (e.g., a *gammaness* score) and quantities related to the reconstruction (e.g., image or shower parameters). Each file corresponds to a single observing run; the events header unit therefore contains the identification number of the data acquisition run, the type and number of telescopes used in the observation, and information about the location of the instrument and its observation mode, along with the time and duration of the observation. Another HDU is dedicated to a list of good time intervals (GTI), specifying the time periods within the event list with adequate scientific quality.


**Figure 3.** Example of a DL3 file compliant with the GADF specifications. Top: the header data units (or extensions) of the file contain the event lists, under (EVENTS), followed by those representing the good time intervals (GTI) and the instrument response components: effective area (AEFF), energy dispersion (EDISP), point spread function (PSF) and background (BKG). Bottom: event list table and its content.

The response of the system is needed to properly relate the reconstructed events with astrophysical source properties. It is assumed that this response can be factorised into different components: the effective area, describing the acceptance of the system to gamma-ray events; the energy dispersion (or migration matrix), describing the probability distribution of the energy estimator; and the point spread function (PSF), describing the probability distribution of the direction estimator. The background rate (measuring the rate of cosmic-ray events misclassified as gamma rays) may be included among the IRF components, but it is not mandatory. The IRF components depend on observational quantities (e.g., atmospheric conditions, zenith and azimuth angle of the pointing) and physical quantities (e.g., the energy or direction of the showers). The IRF components considered in the format are valid for a single exposure, which is typically defined by constant observational conditions (e.g., zenith range, atmospheric quality, etc.); any such dependency of the IRF is hence considered averaged out. In the current specifications, the physical dependencies considered are the photon energy and the offset of its position from the centre of the instrument field of view (a response symmetric in the offset coordinate is assumed). As an example, Figure 4 illustrates the energy and offset dependency of the effective-area component for a H.E.S.S. observation stored in the GADF DL3 format. IRF components are not stored in flat tables: energy and offset bin edges are stored in separate columns, and a last column contains a multi-dimensional array corresponding to the response in each bin. OGIP specifications are followed in storing both events and IRF components.
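The bin-edges-plus-array layout can be mimicked with plain numpy. In the sketch below, the names echo the spec's effective-area columns (`ENERG_LO`/`ENERG_HI`, `THETA_LO`/`THETA_HI` and a 2D response array), but the numbers are invented, and a nearest-bin lookup stands in for the interpolation a real science tool would perform:

```python
# Minimal sketch of an AEFF-style IRF table: bin edges in dedicated columns,
# the response in a multi-dimensional array (all numbers invented; a real
# table would be read from the DL3 FITS file).
import numpy as np

energ_lo = np.array([0.1, 0.3, 1.0, 3.0])   # TeV
energ_hi = np.array([0.3, 1.0, 3.0, 10.0])
theta_lo = np.array([0.0, 1.0, 2.0])        # deg, offset from FoV centre
theta_hi = np.array([1.0, 2.0, 3.0])
# effarea[i_theta, i_energy] in m^2: grows with energy, drops with offset
effarea = np.array([
    [5e4, 1e5, 2.0e5, 2.5e5],
    [4e4, 8e4, 1.6e5, 2.0e5],
    [2e4, 4e4, 0.8e5, 1.0e5],
])

def aeff(energy_tev, offset_deg):
    """Nearest-bin lookup of the effective area (no interpolation)."""
    i_e = np.searchsorted(energ_hi, energy_tev)
    i_t = np.searchsorted(theta_hi, offset_deg)
    return effarea[i_t, i_e]

print(aeff(1.5, 0.5))  # -> 200000.0
```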

**Figure 4.** Example visualisation of the effective area of an IACT and its dependency on energy and offset angle. The IRF component is read from DL3 files and displayed using Gammapy.

### **3. Projects Successfully Using the Standardised Data Format**

To illustrate the maturity of the GADF standardisation effort, we review, in the following sections, projects that have successfully employed its specifications.

### *3.1. The H.E.S.S. First Public Test Data Release*

The H.E.S.S. collaboration was the first to publicly release a test dataset in a DL3 format compliant with the GADF specifications. A few observations, amounting to roughly 50 h of observation time and gathered between 2004 and 2008, were published in the so-called H.E.S.S. DL3 Data Release 1 (H.E.S.S. DL3 DR1) [38,39] to promote the standardisation effort, but also to allow the open-source science tools in development to be tested with actual IACT data. The data release contains 30 h of observations of sources representing different galactic and extragalactic science cases, and 20 h of observations of fields of view empty of known gamma-ray emitters, also labelled *off* data, to be used for background estimation. Table 1 summarises the content of this data release.

**Table 1.** Content of the H.E.S.S. Data Level 3 Data Release 1.


### *3.2. The Joint-Crab Project*

With multi-instrument analyses being one of the main objectives of the standardisation effort, after the first public release of GADF-compliant DL3 data, the natural next step in the format validation was the combination of data from different experiments. In the so-called *joint-crab* project [40], Crab Nebula observations from *Fermi*-LAT and four of the currently operating IACTs, produced in a GADF-compliant format, were combined in the first multi-instrument and fully reproducible gamma-ray analysis. The datasets used were:


To illustrate a prototypical analysis example, the Crab Nebula spectrum (Figure 5 right) was estimated by combining all the observations in an energy-dependent (or one-dimensional) joint binned likelihood. In this analysis technique, classically employed by IACTs, source and background events are extracted via aperture photometry (Figure 5 left); an energy-dependent analytical flux model is then folded with the response of the system to predict the number of counts, and the model parameters are estimated by maximising the Poissonian likelihood describing the counts in each energy bin. The *joint-crab* project relied only on open-source software for its statistical analyses (Gammapy). Datasets, scripts reproducing all the analysis steps and tutorial notebooks are publicly provided on GitHub [41], along with a conda environment freezing the exact dependencies used in the paper and a docker container [42] to guarantee long-term reproducibility. The entire package was also archived on Zenodo [43]. Given the approach proposed and the assets openly made available, this work not only implements the first fully reproducible gamma-ray analysis but also constitutes the first joint public release of IACT DL3 data.
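The logic of a joint binned likelihood can be sketched with a toy model: two mock datasets with different exposures share a single power-law amplitude, fitted by maximising the summed Poisson log-likelihood. All numbers, the exposures and the fixed spectral index 2.5 are invented; a real analysis would fit all spectral parameters and fold the model with the full IRFs using Gammapy or ctools.

```python
# Toy sketch of a joint binned Poisson likelihood across two "instruments".
import numpy as np

rng = np.random.default_rng(0)
edges = np.geomspace(0.1, 10.0, 11)          # TeV, common energy binning
centres = np.sqrt(edges[:-1] * edges[1:])

def expected_counts(amplitude, exposure):
    """Counts predicted by a power law (index fixed at 2.5 for simplicity)."""
    flux = amplitude * centres ** -2.5        # dN/dE, arbitrary units
    return flux * exposure * np.diff(edges)   # model folded with exposure

# two mock datasets with different exposures (stand-ins for two instruments)
exposures = [2e4, 5e3]
true_amp = 3.0
datasets = [rng.poisson(expected_counts(true_amp, e)) for e in exposures]

def joint_loglike(amplitude):
    ll = 0.0
    for counts, e in zip(datasets, exposures):
        mu = expected_counts(amplitude, e)
        ll += np.sum(counts * np.log(mu) - mu)  # Poisson log-likelihood (no const.)
    return ll

# brute-force grid maximisation, instead of a real optimiser
amps = np.linspace(1.0, 5.0, 401)
best = amps[np.argmax([joint_loglike(a) for a in amps])]
print(f"fitted amplitude: {best:.2f}")        # close to the true value 3.0
```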

**Figure 5.** (**Left**): Source counts vs. estimated energy extracted via aperture photometry, for each of the instrument datasets in [40]. (**Right**): Estimated flux spectrum of the Crab Nebula obtained from the individual instrument datasets (same colour code as in the figure on the left) and considering all the datasets in the same joint likelihood (red). The grey dashed line represents a bibliographic reference. In all cases, the analytical flux model considered in the likelihood is a curved power law. Figures from [40].

### *3.3. Analysis of the H.E.S.S. Public Data Release with Ctools*

In addition to evolving in parallel with the GADF, the open-source science tools can ingest data following its specifications. In [44], the H.E.S.S. DL3 DR1 (Section 3.1) was used to test the capabilities of ctools, until then mainly used to analyse simulated CTA observations and calculate prospects for its observational capabilities. The authors presented a method to build a parametric model describing the spatial and spectral distribution of the background events in the H.E.S.S. DL3 DR1. The latter was used to perform a spectro-morphological (three-dimensional) analysis estimating the spectrum of the four sources included in the data release. Unlike in the one-dimensional analysis described in Section 3.2, the source positions and morphology are included among the parameters of the model used to estimate the flux. Source and background counts are not separated; rather, the background is included among the components of a model that, in this case, predicts the flux in the entire field of view, allowing multiple sources to be taken into account at a time (see [18] Section 2 for a detailed explanation). This approach has been successfully used by the *Fermi*-LAT collaboration for all its scientific publications. The results of binned and unbinned three-dimensional likelihood analyses are compared against the simpler one-dimensional binned analysis, also implemented in ctools, and against bibliographic references obtained for the same sources. The consistency of the results obtained with ctools across the different statistical methods applied and with the literature (see Figure 6 left) testifies to the maturity not only of the science tool, but also of the GADF scheme, which correctly encapsulates all the information needed for the correct reproduction of scientific results. The paper finally illustrates the capability of ctools, being built on the gammalib library [18], to simultaneously analyse gamma-ray data with different specifications, i.e., to analyse *Fermi*-LAT data in their own high-level format (without the reduction described in Section 3.2) together with IACT DL3 data compliant with the GADF specifications.

**Figure 6.** Flux spectrum of the extended gamma-ray source RXJ1713.7-3946. (**Left**): Comparison of the result obtained from the H.E.S.S. DL3 DR1 using ctools' three-dimensional unbinned likelihood analysis (red) against the literature (blue). Figure from [44]. (**Right**): Comparison of the result obtained from the H.E.S.S. DL3 DR1 using ctools' (green) and Gammapy's (blue) three-dimensional likelihood analyses against a result obtained on the very same data sample with the H.E.S.S. private analysis chain (red), performing a one-dimensional analysis. A bibliographic reference, the same as in the figure on the left, is given in grey. Figure from [45].

### *3.4. Validation of Open-Source Science Tools and Background Model Construction in γ-ray Astronomy*

Expanding on the project described in Section 3.3, ref. [45] aims at testing both Gammapy and ctools using the H.E.S.S. DL3 DR1. The results of the one-dimensional and three-dimensional analyses provided by both science tools are validated against each other. For the three-dimensional analysis, a novel background model is used, not parameterised from the *off* sources within the H.E.S.S. DL3 DR1, but built using ∼4000 h of H.E.S.S. private observations. For this work, the results of the science tools are validated not only against the literature, but also against the results obtained with one of the closed-source analysis chains of the H.E.S.S. collaboration, performing a classical one-dimensional analysis on the exact same observations included in the H.E.S.S. data release (see Figure 6 right). The agreement of the results of the different science tools with each other and with the private analysis chain represents a landmark in the validation of analysis tools and data formats for future VHE gamma-ray analyses.

### *3.5. Open and Standardised Formats for γ-Ray Analysis Applied to HAWC Observatory Data*

The GADF specifications were primarily developed by and for the IACT community. However, due to their generality, it is possible to use them to format data from WCDs, such as the HAWC observatory, as shown by [46]. In this work, the authors presented the first GADF-compliant production of event lists and instrument response functions for a ground-based wide-field instrument. These data products were then used to reproduce, with excellent agreement, the published spectrum of the Crab Nebula as measured by HAWC. This result, shown in Figure 7, was obtained using the open-source software Gammapy. As highlighted in Section 3.2, a common data format and shared analysis tools allow multi-instrument joint analyses and effective data sharing. This synergy between experiments is particularly relevant given the complementary nature of pointing and wide-field instruments, and will be especially relevant for the joint scientific exploitation of future observatories such as SWGO and CTA.

**Figure 7.** Estimated flux spectrum of the Crab Nebula derived from the GADF data production using Gammapy (blue) compared against the reference HAWC spectrum from [47] (orange). The bottom panel shows the residual comparison of the obtained flux points with the reference spectrum. Figure from [46].

### **4. Discussion**

The future of data formats in gamma-ray astronomy will very likely be linked to the future of the GADF initiative. As discussed throughout the text, this community-driven initiative has proposed the first available set of specifications for high-level data for the current and next generation of ground-based gamma-ray instruments. In this section, we discuss the main limitations affecting the current specifications, as well as foreseeable ways in which they will evolve over the next decade.

One of the main drivers of the evolution and improvement of the GADF will be the requirements imposed by the future ground-based observatories. These will require high-level data (and especially the IRFs) to be described and parameterised in more complex ways, also directly benefiting the current generation of instruments. Possible extensions of the format to meet these requirements include: a better field-of-view binning approach, removing the assumption of radial symmetry; the inclusion of time dependency in the IRF components; and the distinction between different event types based on the hardware, reconstruction or analysis settings. Mature format specifications will be crucial for defining and testing current instruments' legacy data, as they face the challenge of digesting decades of data (taken by instruments with evolving capabilities) and ensuring their proper use and interpretation.

In order to confront these challenges and to ensure the long-term viability of the GADF specifications, a more formal governance structure is needed. For this reason, a body of representatives from the high-energy ground-based community will be defined to act as a coordination committee. This governance definition effort, currently in progress, will draw on the evolution of similar community-driven initiatives (for instance, the Astropy Project role responsibilities [48]).

Even if the GADF specifications were inspired by high-energy satellites and primarily developed by and for the IACT community, they are able to represent high-level data products from other event-based high-energy astrophysical instruments. As shown in Section 3.5, other high-energy gamma-ray observatories, such as WCDs (like HAWC or the future SWGO), naturally fit the GADF specifications, allowing the use of the available open-source data-analysis tools. In the coming years, the inclusion of other observatories will be explored, especially in the context of high-energy multi-messenger astronomy: the inclusion of data from neutrino or even gravitational-wave observatories would require some changes to the specifications, but would at the same time naturally allow the use of common science tools for joint multi-messenger analyses.

### **5. Conclusions**

This review presented an outlook on the evolution of the data formats in VHE gamma-ray astronomy from private and diverse specifications to the open and standardised ones proposed under the GADF initiative. The GADF initiative is presented as a community-driven effort to provide a common and open high-level data format for gamma-ray instruments. The specifications proposed within the GADF refer to high-level data products that allow the production of scientific results: they are independent of the particular detection technique, thus making it possible to accommodate data from different telescopes (e.g., IACTs and WCDs). The format definition was driven by the requirement to operate the next generation of gamma-ray instruments (such as CTA) as open observatories, with the consequent need to provide non-expert external users with open data products that are easy to interpret. Another aspect of this demand was the development of open-source gamma-ray data-analysis tools, whose evolution is now also linked to the data standardisation effort.

The current GADF specifications have been proven robust by several publications analysing GADF-compliant data with these open-source science tools and validating their results against those obtained with the established closed-source software in use by current collaborations. These publications confirmed not only the correctness of the information incorporated in the format specifications but also the capabilities of this new generation of open-source science tools. Other publications have instead proven the feasibility of multi-instrument and fully reproducible analyses once the common format and open software are used. Even if future instruments are driving the open data and software development, the current generation can significantly benefit from their advancement. Their adoption ensures a larger user and maintainer base for the legacy data of current instruments, and, eventually, more sophisticated data storage and analysis techniques. The H.E.S.S. collaboration has already pioneered a first public release of GADF-compliant data. All currently operating VHE gamma-ray experiments are nowadays able to produce GADF-compliant data products, though for the moment these have mostly been used internally. Multi-instrument scientific projects using these data products are under way, sharing data among collaborations through the use of memoranda of understanding.

The standardisation effort remains open to the inclusion not only of more gamma-ray instruments but also of telescopes observing the universe with other messengers. With the initiative being community-driven, high-energy astrophysicists in need of new extensions to the format are able to propose them. The recent efforts reviewed in this issue, successfully employing GADF-compliant data and open-source analysis tools, will surely foster their usage in further scientific projects. The GADF does not represent an isolated effort and aims at maintaining compatibility with other established standards in high-energy astronomy, such as the OGIP recommendations (on which the GADF largely draws) or those used for high-level products within the virtual observatory [49]. Promoting the use of open-source analysis tools, as well as common open data formats, will distinguish high-energy astrophysics in the future as one of the few branches of modern science untouched by the reproducibility crisis affecting many other disciplines [50].

**Author Contributions:** C.N. conceptualized the paper and wrote the original draft. All authors contributed equally to the review of the relevant literature, to writing and editing the paper. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the European Commission's Horizon 2020 Program under grant agreement 824064 (ESCAPE—European Science Cluster of Astronomy and Particle Physics ESFRI Research Infrastructures), by the ERDF under the Spanish Ministerio de Ciencia e Innovación (MICINN, grant PID2019-107847RB-C41), and by the CERCA program of the Generalitat de Catalunya.

**Acknowledgments:** We are grateful to Gernot Maier, Maximilian Nöthe, Bruno Khélifi, Catherine Boisson and Karl Kosack for their suggestions and for reviewing the manuscript.

**Conflicts of Interest:** The authors declare no conflicts of interest.

### **References**


### *Article* **The Making of Catalogues of Very-High-Energy** *γ***-ray Sources**

**Mathieu de Naurois**

LLR Ecole Polytechnique, Av. Chasles, 91120 Palaiseau, France; denauroi@in2p3.fr

**Abstract:** Thirty years after the discovery of the first very-high-energy *γ*-ray source by the Whipple telescope, the field has experienced a revolution mainly driven by the third generation of imaging atmospheric Cherenkov telescopes (IACTs). The combined use of large mirrors, of the imaging technique invented at the Whipple telescope, of stereoscopic observations developed by the HEGRA array, and of the fine-grained cameras pioneered by the CAT telescope led to a jump in sensitivity by a factor of more than ten. The advent of advanced analysis techniques led to a vast improvement in background rejection, as well as in angular and energy resolutions. Recent instruments already have to deal with a very large amount of data (petabytes), containing a large number of sources that are often very extended (at least within the Galactic plane) and overlap each other, and the situation will become even more dramatic with future instruments. The first large catalogues of sources emerged during the last decade; they required numerous dedicated observations and developments, but also made the first population studies possible. This paper is an attempt to summarise the evolution of the field towards the building of source catalogues, to describe the first population studies already made possible, and to give some perspectives in the context of the upcoming new generation of instruments.

**Keywords:** very-high-energy *γ*-ray astronomy; atmospheric Cherenkov telescopes; source catalogues

### **1. Introduction**

Soon after the discovery of cosmic rays by Victor Hess in 1912 [1], it was realised that very-high-energy *γ* rays could allow the identification of their sources, mainly because, in contrast to charged cosmic rays, neutral *γ* rays are unaffected by extragalactic and galactic magnetic fields, and therefore travel undeflected in space. Direct observation of high-energy *γ* rays from space is, however, limited to energies below ∼100 GeV due to the steeply falling source flux as a function of increasing energy. At the same time, due to the overall thickness of the atmosphere (≈1 kg cm<sup>−2</sup>), high-energy particles (*γ* rays or charged nuclei) entering the atmosphere do not reach the ground, but interact at high altitudes and trigger the development of a so-called "*extensive air shower* (EAS)" of particles. These showers contain numerous ultra-relativistic electrons and positrons, travelling faster than light in the air and consequently emitting ultrashort (nanosecond) flashes of Cherenkov light [2]. After an initial suggestion from Blackett [3], the first attempts to detect the Cherenkov light emitted by atmospheric showers date back to 1953 [4]. It took, however, several decades before the emergence of ground-based very-high-energy gamma-ray astronomy. The Whipple collaboration established the imaging atmospheric Cherenkov technique [5], whereby large telescopes, equipped with an ultra-fast camera, capture the Cherenkov light emitted by ultra-relativistic electrons and positrons in the atmospheric showers, and form the image of the latter. A detailed analysis of the shower image allows the reconstruction of the parameters of the incoming particle: direction of arrival, impact point on the ground, and energy; on a statistical basis, it also allows for the discrimination of *γ* rays from the much more numerous charged cosmic rays.
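As a back-of-the-envelope illustration of why these showers shine in Cherenkov light, one can compute the energy above which an electron travels faster than light in air; the refractive index n ≈ 1.0003 used below is an approximate sea-level value, not taken from the text:

```python
# A particle radiates Cherenkov light when beta > 1/n. For air at sea level
# (n ~ 1.0003) the electron energy threshold works out to ~21 MeV.
import math

n = 1.0003                  # refractive index of air at sea level (approx.)
m_e = 0.511                 # electron rest energy in MeV

beta_threshold = 1.0 / n
gamma_threshold = 1.0 / math.sqrt(1.0 - beta_threshold ** 2)
e_threshold = gamma_threshold * m_e

print(f"Cherenkov threshold for electrons in air: {e_threshold:.1f} MeV")
```

Shower electrons with energies of tens of MeV and above, abundant in an EAS, therefore comfortably exceed this threshold, which is why the cascade produces the nanosecond Cherenkov flashes exploited by IACTs.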

During the last decades, the field of very-high-energy (VHE) *γ*-ray astronomy, covering energies above 100 GeV, evolved from the observation of isolated, well-defined sources to very large projects, spanning several years and covering a large fraction of the sky. These projects resulted in very large and inhomogeneous data sets, with deep exposures on specific regions of interest (ROI) and much shallower exposures on other ones. The resulting large exposure gradients are tricky to handle in analysis pipelines and imposed the development of new acceptance determination and background subtraction techniques. In addition, these large data sets are acquired across several years, resulting in very diverse observational conditions in terms of array configuration (number of operational telescopes, trigger settings, etc.), zenith angle, night sky brightness (NSB), etc. Dedicated analysis techniques have been developed to permit the consistent analysis of such data sets, which is a key ingredient for the build-up of catalogues. The first catalogues, elaborated in the last decade, revolutionised our view of the VHE sky and initiated the statistical analysis of populations of sources of the same type, revealing some of their evolutionary schemes. The next generation of instruments, and in particular the upcoming Cherenkov Telescope Array (CTA), will sample the sky with unprecedented sensitivity and is expected to make quantitative studies of source populations a major activity, thus pushing forward our understanding of particle acceleration and *γ*-ray production in VHE sources.

**Citation:** de Naurois, M. The Making of Catalogues of Very-High-Energy *γ*-ray Sources. *Universe* **2021**, *7*, 421. https://doi.org/10.3390/universe7110421

Academic Editors: Ulisses Barres de Almeida and Michele Doro

Received: 17 September 2021 Accepted: 28 October 2021 Published: 5 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

This paper is divided into three sections. The first is dedicated to the technical aspects of catalogue construction. The second describes the existing major surveys, together with the first population studies that they made possible. The third and last section presents the upcoming projects and some personal perspectives.

### **2. Technical Aspect of Survey and Catalogue Constructions**

In the VHE *γ*-ray domain, the construction of catalogues arises essentially from two different observational strategies. On the one hand, observations were historically mainly conducted on sources of particular interest, identified from observations at different wavelengths (targeted observations). This mode of observation is still valid for extragalactic observations, where the density of sources of sufficient brightness is not high enough to undertake systematic surveys. Such targeted observations result in sparse and incomplete catalogues with very heterogeneous depth. On the other hand, a few large-scale surveys (survey observations) have been conducted, essentially in the Galactic plane (see Section 3.1), allowing for (partially) unbiased samples of sources. These two observational strategies have numerous implications, first for the way the array of telescopes is operated, but also for the way in which the analysis pipeline is constructed and run. In this section, we review the technical aspects of the catalogue construction. The important steps towards a source catalogue are:


### *2.1. Observational Strategies*

Observations of IACTs are usually divided into chunks of ∼30 min, called runs, which correspond to what would be called an exposure in other domains of astronomy. This typical duration results from a trade-off between opposing constraints: on the one hand, it takes some time to slew the telescopes to a different target, to configure the system and to start the observations, so a run should not be too short (at least a few minutes). Since the instrument trigger rate ranges from a few hundred Hz to a few kHz, it takes at least a few minutes to collect enough events to be able to assess the instrument performance and stability (and to be able to estimate the background, see Section 2.4). On the other hand, the instrument response function varies strongly with the observational conditions (in particular with the zenith angle and meteorological conditions, both on time scales of a few minutes), making very long runs more prone to systematics and more complicated to analyse.

Since any astronomical source can only be observed for a few hours every night, and only during certain periods of the year, and given the very low flux of very-high-energy *γ* rays, even from the strongest known sources, many runs, spread over days, months and even years, have to be combined in a consistent manner in the analysis procedure to produce a *stacked* data set. This observation procedure also requires the performance of the instrument and the atmosphere to be monitored precisely over very long periods of time. The current generation of IACTs operates two main pointing modes, corresponding to the targeted and survey modes of observation:


**Figure 1.** (**Left:**) Classical Wobble pointing mode, where the source is offset in the field of view. (**Right:**) Survey pointing mode, with observations overlapping with each other. The black "+" mark denotes the pointing direction for the various runs, and the blue circles the instrument field of view.

These two modes of observation also correspond to different possible pointing optimisation schemes. In the wobble observation mode, one usually wants to reach the best possible sensitivity. To achieve that, all telescopes are pointed in the same direction (*parallel* pointing, Figure 2a), or even pointed at the altitude of the maximum development of the showers (convergent pointing, Figure 2b) to maximise the collection of light. In contrast, in the survey mode of observation, one might want to increase the sky coverage at the expense of point-like sensitivity. This can be achieved by splitting the array into several groups of telescopes pointing in different directions, or even, although this has not yet been used in practice, by implementing a divergent pointing mode (Figure 2c), where telescopes point in directions slightly offset from each other to increase the effective field of view. Technically, divergent pointing can easily be implemented as a convergent pointing to a negative altitude. Convergent pointing at very low altitude (a few km above the ground, Figure 2d), also denoted *skewed pointing* here, can also be used, and is technically not more difficult to implement.
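The equivalence between divergent pointing and convergent pointing at a negative altitude can be illustrated with a toy geometry; the 120 m telescope distance and the ±6 km altitudes below are arbitrary example values, not taken from the text:

```python
# Toy sketch: a telescope at ground position (x, 0, 0) is aimed along the
# line through the convergence point (0, 0, h). For h > 0 the axes lean
# toward the array centre (convergent); for h < 0 they fan out (divergent).
import math

def axis_tilt_deg(x_m, h_m):
    """Tilt of the telescope axis from the vertical, in degrees.
    Positive = toward the array centre (convergent),
    negative = away from it (divergent, i.e., h < 0)."""
    tilt = math.degrees(math.atan2(abs(x_m), abs(h_m)))
    return tilt if h_m > 0 else -tilt

# a telescope 120 m from the array centre:
for h in (6000.0, -6000.0):
    print(f"h = {h:+.0f} m -> tilt {axis_tilt_deg(120.0, h):+.2f} deg")
```

The tilt magnitude is the same in both cases; only its sense flips, which is why a divergent scheme can reuse the convergent-pointing machinery with a negative altitude parameter.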

**Figure 2.** From **left** to **right**: (**a**) parallel pointing, (**b**) convergent pointing at high altitude, (**c**) divergent pointing, (**d**) convergent pointing at low altitude, also denoted skewed pointing.

Depending on the telescope angular separation, divergent pointing can result in a non-flat exposure across the sky, which can significantly complicate the subsequent steps of the analysis. To investigate the merits of each telescope pointing strategy, we performed a simulation of an array of 37 H.E.S.S.-I telescopes (5° FoV each) placed on a square grid with lines of 3, 5, 7, 7, 7, 5, 3 telescopes at the altitude of the H.E.S.S. site (1800 m a.s.l.) and separated by 120 m each (for a total array size of 720 × 720 m²) (Figure 3). Pointing altitudes ranging from 3 km to 10 km above site level were used in both convergent and divergent (negative altitude) modes, and parallel pointing was also included for reference. Diffuse *γ*-rays between 100 GeV and 10 TeV (20° cone opening angle) were simulated on a circle of 700 m (enclosed in the array), using the kaskade/Smash simulation chain developed for H.E.S.S. [6]. Data were analysed using the Model++ analysis [7] within the H.E.S.S. software framework. Results of this simulation are presented in Figures 4 and 5.

**Figure 3.** Array of 37 H.E.S.S.-I telescopes used in the simulation of various pointings.

Figure 4a shows that convergent pointing at high altitude maximises the event multiplicity (number of triggered telescopes). As a consequence, this mode of observation also maximises the precision of the reconstruction (angular and energy resolution in particular), as shown in panel **d**. In contrast, it yields a rather modest effective field of view, as measured by the squared angular distance of the observed events to the optical axis (panel **b**).

As expected, the largest effective fields of view are obtained by pointing at low altitude, either in divergent or convergent (skewed) modes, as shown in panel **b**, with rather similar and quite flat distributions. Panel **c** shows the distribution of squared impact distance with respect to the centre of the array, which is used to derive the effective area of the array. Low altitude pointings (convergent or divergent) tend to select mostly events close to the array centre, whereas convergent pointing at high altitude tends to maximise the effective area.

**Figure 4.** Comparison of performances of various pointing strategies. From left to right: (**a**) Event multiplicity, (**b**) Squared angular distance to optical axis, (**c**) Squared impact distance to the centre of the array, (**d**) Event reconstruction precision, measured as the fit uncertainty on the event direction.

From the squared angular distance and squared impact distance distributions (panels **b** and **c**), an *integrated aperture* can be derived, which corresponds, up to a normalisation factor, to the rate of detected *γ*-rays. The integrated aperture for the presented simulation is displayed in Figure 5 as a function of the inverse of the pointing altitude (such that parallel pointing is at the origin of the X axis, negative values correspond to divergent pointings, and positive ones to convergent pointings). It turns out that for this particular simulated array configuration, the detection rate is maximised for moderate divergent pointing (at an altitude of −6 km), thus confirming the potential of the divergent observation mode. In particular, for a uniform distribution of sources with sufficient density, as expected in the extragalactic sky, divergent pointing might indeed be the most effective mode of observation. On the other hand, distant sources are affected by *γ*-ray absorption by pair creation on the EBL. Limiting this absorption requires reducing the energy threshold to its minimum possible value, which is better achieved in convergent pointing mode. These preliminary results, although consistent with the findings of other authors (e.g., [8]), need to be confirmed by a full-scale simulation using realistic, next-generation arrays (CTA) and investigating not only the integrated aperture, but also the event reconstruction, *γ*-hadron separation and background subtraction. The question of background modelling and subtraction might become complicated to handle (due to possible non-trivial variations across the FoV), and will certainly require further studies before such alternate pointing strategies can be used in large-scale surveys.

**Figure 5.** Integrated aperture for the various pointing strategies.

### *2.2. Event Separation and Classification*

Genuine *γ*-rays represent a tiny fraction (≈0.01%) of the events recorded by IACTs; the vast majority are charged cosmic rays, composed mainly of protons and nuclei, but also including a small fraction of cosmic electrons. The details of event reconstruction and *γ*-hadron separation are covered in an extensive bibliography, could be the subject of a review of their own, and will not be covered here. A very large spectrum of techniques is indeed used in the field, ranging from simple image parametrisation to template fitting, and even deep learning on camera images. Whatever method is used to reconstruct the events, one or several discriminating parameters are constructed to separate *γ*-ray events from the charged cosmic rays. The probability density functions (PDFs) for the *γ*-ray and charged cosmic-ray events always overlap, rendering a perfect separation impossible. In particular, a small (∼10⁻³) fraction of protons generate a *π*⁰ high in the atmosphere, which initiates the development of an electromagnetic shower very similar to that induced by *γ*-rays. Similarly, electrons also initiate electromagnetic showers and are therefore almost indistinguishable from genuine *γ*-rays. The discriminating parameters can be used to construct several, well separated event classes used in the subsequent steps of the analysis. Two main event classes are usually used:

- *γ-like* events, whose discriminating parameters are compatible with those expected for genuine *γ*-rays;
- *hadron-like* events, whose discriminating parameters are compatible with the charged cosmic-ray background.

Due to the overlap of the PDFs, this classification is incomplete, with many events falling between the two classes. The separation also remains imperfect, as some background events always survive the selection. Alternatively, one can make use of the full PDFs to derive a *"gammaness"* or *"hadronness"* parameter (e.g., [9]), giving the probability for an event to originate from a *γ*-ray or a charged cosmic ray, respectively. So far, the subsequent steps of the analysis, and in particular the background subtraction, have not really been adapted to the use of continuous probability distributions, so the use of event classes remains the state of the art for IACTs.

Different event selection strategies can be used, which can be visualised in an efficiency vs. purity plane (Figure 6), where the efficiency denotes the fraction of *γ*-rays that are retained in the analysis, while the purity is the relative fraction of *γ*-rays in the selected sample, i.e., one minus the background contamination fraction. It is, in general, possible to achieve a very high *γ*-ray efficiency (retain almost all *γ*'s), but at the price of a large background contamination (bottom-right in the plot). It is also possible to reach a rather high purity of the sample (almost no background), but at the price of a very low efficiency to *γ*-rays (top-left). One usually denotes as "*loose selection*" a selection corresponding to the first case, while "*hard selection*" is used for a high-purity, low-efficiency selection strategy. In general, low-energy showers are subject to more statistical fluctuations, and are therefore more difficult to distinguish from hadronic showers. As a consequence, hard selections usually lead to a higher energy threshold than loose selections.
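The efficiency-purity trade-off can be made concrete with a small sketch (the function `efficiency_purity` is a hypothetical name, and the gammaness distributions used below are synthetic, not from a real instrument): a low gammaness threshold acts as a loose selection, a high one as a hard selection.

```python
import numpy as np

def efficiency_purity(gammaness_gamma, gammaness_bkg, cut):
    """Efficiency and purity of a 'gammaness > cut' event selection.
    Efficiency: fraction of true gamma rays retained; purity: fraction
    of gamma rays in the selected sample."""
    g = np.asarray(gammaness_gamma)
    b = np.asarray(gammaness_bkg)
    n_g = np.count_nonzero(g > cut)     # gamma rays surviving the cut
    n_b = np.count_nonzero(b > cut)     # background surviving the cut
    eff = n_g / g.size
    purity = n_g / (n_g + n_b) if (n_g + n_b) else 0.0
    return eff, purity
```

Scanning the cut value from 0 to 1 traces out the efficiency-purity curve of Figure 6.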

**Figure 6.** Efficiency-purity plot.

The question of the optimal selection strategy is highly non-trivial, as it is intimately linked to the questions of background subtraction (Section 2.4.1) and background systematics (Section 2.4.9). In terms of pure statistics, a theoretical optimal selection exists along the curve (red point), which maximises the statistical significance of the detection of a given *γ*-ray source. This optimal point, however, differs for each and every source, as it depends on the source intensity and spectral shape. While it is possible to adapt the selection to the source characteristics in the case of individual, targeted observations, large-scale surveys used in catalogue construction require, in contrast, a homogeneous selection applied consistently throughout the whole data set. One general trend that can guide the choice is the fact that, due to the steeper energy spectrum of cosmic rays compared to that of galactic *γ*-ray sources, the background is reduced faster than the signal when moving towards harder selection cuts. In order to maximise the detection potential of faint sources, rather hard selection cuts were used in most surveys so far, with the drawback of a reduced efficiency at low energies. Hard cuts also have the advantage of significantly mitigating the problems arising from imperfect modelling of the acceptance and uncontrolled background systematics.

Since the population of VHE sources might actually vary with the energy domain, future surveys might be optimised also towards low energy, imposing the use of looser cuts. It is also possible to release several sub-versions of the same catalogue, corresponding to different selection schemes, as has already been done in other experiments such as Fermi-LAT or HAWC.

### *2.3. Acceptance—Background Model*

The term background model, or acceptance, denotes the shape of the distribution of events across the field of view in the absence of genuine *γ*-ray sources. It can be determined for the various event classes (Section 2.2), and needs to be determined in particular for *γ*-like events prior to background subtraction (Section 2.4). For genuine *γ*-rays it is usually determined from Monte Carlo simulations, whereas for cosmic-ray events it is usually determined directly from the data, either from the considered data set or from a different, control data set. The background distribution across the field of view depends on multiple parameters, and must be derived for each and every analysis configuration. It depends of course on the array geometry (number of telescopes and positions), on the reconstruction method and on the event selection, but also on the observational conditions (zenith angle) and on the energy. The deeper the observations, the more accurate the background model needs to be to avoid uncontrolled systematics across the field of view.

The background model is usually determined on a run-by-run basis, and is then reprojected onto the sky to compute the background model for the full data set, as shown in Figure 7 (left): background models are determined for every run, and then stacked together. Several algorithms have been developed for the computation of the acceptance:

- the radial acceptance (Section 2.3.1);
- the 2D acceptance (Section 2.3.2);
- the simulated acceptance (Section 2.3.3).

**Figure 7.** (**Left**) Stacking of background models for a data set with different pointing. (**Right**) Radial acceptance determination in the presence of known or putative *γ*-ray sources.

### 2.3.1. Radial Acceptance

The radial acceptance model is the simplest acceptance model, and the easiest to implement. It assumes a rotational symmetry of the instrument response around the pointing direction, which is an acceptable assumption for not-too-deep data sets. Thanks to its simplicity, it can be determined easily in different energy slices, thus providing the input for a 3D analysis. Radial acceptance curves usually depend also on the zenith angle range. The incorporation of both zenith angle bands and energy slices results in a 3-dimensional model which represents the current state-of-the-art.
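As an illustration of the principle (the helper `radial_acceptance` is a hypothetical name, not the H.E.S.S. implementation), a radial acceptance curve is simply the event rate per unit solid angle in concentric offset rings, normalised here to the innermost ring; in a real analysis it would be built separately per zenith-angle band and per energy slice.

```python
import numpy as np

def radial_acceptance(offsets, n_bins=10, fov_radius=2.5):
    """Radial acceptance from event offsets (deg) to the pointing direction:
    counts per unit solid angle in concentric rings, normalised to the
    innermost ring (small-angle approximation for the ring areas)."""
    edges = np.linspace(0.0, fov_radius, n_bins + 1)
    counts, _ = np.histogram(offsets, bins=edges)
    ring_area = np.pi * (edges[1:]**2 - edges[:-1]**2)  # area grows as r^2
    rate = counts / ring_area
    if rate[0] > 0:
        rate = rate / rate[0]
    return edges, rate
```

A flat curve (all values near 1) indicates a uniform response across the field of view; the real curves of Figure 8 fall off towards the FoV edge.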

To avoid contamination of the acceptance, known or putative *γ*-ray sources, provided they do not overlap with the centre of the FoV, can be excluded by removing a sector from the radial acceptance determination (Figure 7, right). Additional gradients, due in particular to the variation of the zenith angle across the field of view, can also be taken into account.

The evolution of the radial acceptance curves with zenith angle (left) and energy (right) is shown in Figure 8 for the H.E.S.S.-I array of 4 telescopes, and for a given reconstruction (Model++ Std). For a different reconstruction and/or a different set of cuts, the curves will be different but will exhibit a similar trend. As can be seen from the figure, differences of more than 20% between different bands can easily exist, stressing the fact that the use of zenith angle bands is mandatory to avoid systematic effects.

**Figure 8.** (**Left**) Radial acceptance curves for different zenith angle bands. (**Right**) Radial acceptance curves for different energy bands.

The advantages and drawbacks of the radial acceptance model are the following:

### **Advantages**

- It is simple, easy to implement, and requires only a modest amount of data;
- It can easily be determined in zenith angle bands and energy slices, providing the input for a 3D analysis.

### **Drawbacks**

- It assumes rotational symmetry of the instrument response, which breaks down for deep data sets and for sparse or asymmetric arrays;
- Sources overlapping the centre of the field of view cannot be excluded from its determination.

### 2.3.2. 2D Acceptance

Bi-dimensional acceptance (or "2D" acceptance) is relatively similar in principle to radial acceptance, but without the assumption of radial symmetry. The response of the array is computed in the nominal frame (i.e., in the frame attached to the pointing direction) for every run, and then reprojected onto the celestial coordinates. Instead of a radial description of the instrument response, a 2D representation is used. Since the input statistics are spread over a wider phase space, 2D acceptance needs more data than radial acceptance to be produced with a similar level of precision.

The exclusion of known and/or putative *γ*-ray sources is also more complicated than for the radial acceptance, because sources move in the field of view during the observations. One working algorithm is depicted in Figure 9: throughout the observations, an exposure map is computed by counting the fraction of time during which each pixel does not fall within an excluded region (top left). The exposure maps of each run are stacked together (top right), with a weight corresponding to the total number of events per run. An event map is computed at the same time for each run, excluding the events in the corresponding regions (bottom left). The event maps of all runs are summed up. The final acceptance map is then computed by taking the ratio of the stacked event map to the exposure map (bottom right). The whole procedure can be performed in parallel for different event classes (*γ*-like, hadron-like), for different zenith angle bands, or for different energy slices. An implementation of the 2D acceptance model has recently been made available in gammapy [10].
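The stacking scheme described above can be sketched as follows (a simplification: a static boolean exclusion mask per run stands in for the time-dependent exposure fraction, and `acceptance_2d` is a hypothetical name):

```python
import numpy as np

def acceptance_2d(runs):
    """2D acceptance from a list of (event_map, mask) pairs, one per run:
    event maps (with excluded pixels zeroed) are summed, exposure masks are
    stacked with a weight equal to the total number of events in the run,
    and the acceptance is the ratio of the two stacks."""
    event_stack = None
    exposure_stack = None
    for event_map, mask in runs:          # mask: True where pixel usable
        n_tot = float(event_map.sum())    # per-run weight
        ev = np.where(mask, event_map, 0.0)
        if event_stack is None:
            event_stack = np.zeros_like(ev, dtype=float)
            exposure_stack = np.zeros_like(ev, dtype=float)
        event_stack += ev
        exposure_stack += n_tot * mask
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(exposure_stack > 0, event_stack / exposure_stack, 0.0)
```

Pixels masked in some runs are still recovered from the runs where they are usable, which is the point of the exposure-weighted ratio.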

The advantages and drawbacks of the 2D acceptance model are the following:

### **Advantages**

- No assumption of radial symmetry, so non-radial structures of the instrument response can be captured.

### **Drawbacks**

- It requires more data than the radial acceptance to reach a similar level of precision;
- The exclusion of *γ*-ray sources is more complicated, as sources move in the field of view during the observations.

**Figure 9.** 2D acceptance determination. (**Top**) Determination of the exposure map by stacking of maps from individual runs. (**Bottom**) Determination of the final acceptance map from the ratio of the event maps to the exposure map. Reproduced from [11].

Both the radial and the 2D acceptance models assume some underlying symmetry. In particular, they assume that the distribution of events in the field of view does not vary with the azimuth angle of the observation (for a given zenith angle band). This assumption is in practice reasonable for arrays with a sufficient number of telescopes and a high degree of symmetry. For very sparse or very asymmetric arrays (or when some telescopes are non-operational), it becomes a limitation. For instance, in the case of a two-telescope system such as MAGIC, the acceptance exhibits an elongated, altitude/azimuth-dependent shape, which can be partially corrected using Monte Carlo simulations ([12] and references therein). In addition, the asymmetry caused by the direction of the magnetic field and the induced asymmetric broadening of showers can introduce additional acceptance systematics, particularly at low energy. Generating acceptance models for different array sub-configurations and for different azimuth bands can quickly become prohibitive, as it further increases the amount of required data. In very deep observations, the imperfections of the acceptance models can be readily observed [11].

### 2.3.3. Simulated Acceptance

Since the advent of so-called *RunWise simulations* [13], the possibility of generating an acceptance model exclusively from simulations has been investigated [14]. While theoretically possible, the simulation of the cosmic-ray background is in practice prohibitive in terms of computing time, due to the extremely large phase space and rather low triggering efficiency. Moreover, we are mostly interested in the *γ*-like acceptance, corresponding to the tiny fraction of background events surviving selection cuts. It was instead assumed that the *γ*-like acceptance would be rather close to the genuine *γ*-ray acceptance, and could be derived from *γ*-ray simulations. For this purpose, diffuse *γ*-ray simulations over the field of view are generated for each individual run, using settings as close as possible to the real observations. Actual calibration coefficients per pixel are used (gains, flat-fielding, non-operational pixels, level of NSB, pixel threshold, . . . ) and the evolution of the pointing direction during the run (due to the rotation of the sky) is reproduced in the simulation. It was already shown in [13] that RunWise simulations offer a more precise modelling of the instrument response than classical simulations performed on specific grid points of the phase space. It now appears that RunWise simulations can also be used to generate more precise acceptance models, by properly taking into account any inhomogeneity of response, as well as varying atmospheric conditions.

One important point to address in this scheme is the aforementioned difference between the cosmic ray *γ*-like and *γ*-ray events, which might exhibit a different distribution across the field of view. It has been shown however that diffuse *γ*-ray simulations reproduce fairly well the *γ*-like hadronic background, and that a radial correction, obtained by comparing simulations and actual data from fields free of *γ*-ray emission, can account for the difference and lead to a usable background model. The advantages and drawbacks of the simulated acceptance model are the following:

### **Advantages**

- Run-specific conditions (calibration coefficients, non-operational pixels, NSB level, pointing evolution, atmospheric conditions) are properly taken into account.

### **Drawbacks**

- It assumes that the *γ*-like hadronic background is distributed like genuine *γ*-rays across the field of view, requiring a radial correction derived from source-free data;
- It requires the production of diffuse *γ*-ray simulations for every run, at a significant computing cost.

### 2.3.4. Comparison Elements and Limits

For most moderately deep observations, the radial and 2D models usually perform similarly well. Figure 10 shows a comparison between the radial (top) and 2D acceptance (middle) models for a very large data set of more than 5000 runs (2500 h of observations) in the inner galactic plane (*l* ∈ [−50°, 50°]). The two models agree within ≈1% (bottom panel), which is generally sufficient for standard analyses. This value is similar to what is quoted in [15], where a typical detector acceptance inhomogeneity of the order of 3% is also mentioned, with possibly larger values in specific fields that have large NSB variations and/or large zenith angles.

Analysis of deep fields with the current generation of instruments is, however, already dominated by background systematics arising from, amongst others, an imperfect determination of the acceptance. The actual layout of the telescopes has an altitude/azimuth-dependent imprint on the acceptance, which is fully captured by neither the radial nor the 2D acceptance model. Improving the precision of the acceptance model is a major but mandatory challenge for the next generation of instruments. CTA, with a factor of ten larger effective area, will require the acceptance to be determined at the sub-percent level. This will require significant efforts to include the various sources of systematic differences, arising in particular from the actual array layout or the variation of NSB across the field of view.

**Figure 10.** Comparison of the radial and 2D acceptance determinations.

### *2.4. Background Subtraction*

The next step in the data analysis is the comparison of the recorded number of *γ*-like events in a region of interest (ROI) with an expected number of background events, in order to assess the presence of a significant excess signalling a *γ*-ray source. The expected number of background events can be estimated in several ways: from *γ*-like events in different parts of the field of view or in different regions of the sky, with various reprojection techniques; from hadron-like events at the same location; or from Monte Carlo simulations. Throughout the history of VHE *γ*-ray astronomy, a variety of techniques have been developed, some suitable for source detection and morphology determination, some also used to derive the energy spectrum of the sources. This dichotomy arises because the detector response varies with observational conditions and, in particular, depends strongly on the zenith angle: to be able to determine the energy spectrum of the source, the background subtraction needs to be performed in different energy slices ("*Cube*" analysis). Some background subtraction techniques work on a run-by-run basis; some use the complete stacked data set. The main algorithms used in the field are:

- the reflected regions background (Section 2.4.3);
- the On-Off background (Section 2.4.4);
- the ring and adaptive ring backgrounds (Sections 2.4.5 and 2.4.6);
- the template background (Section 2.4.7);
- the field-of-view background (Section 2.4.8).

### 2.4.1. Basics of Background Statistics

When subtracting some background estimate from the number of recorded *γ*-like events in an ROI, one needs to assess the significance of the resulting excess (or deficit). The computation of this significance depends on the way in which background is estimated.

When the background is estimated from the number of events in a different region of the phase space (i.e., from a different direction, or from a different event class), the number of background events is subject to Poisson fluctuations, just like the number of *γ*-like events in the ROI. In that case, the Li & Ma statistics [16] apply. Denoting by N<sub>on</sub> the number of *γ*-like events in the ROI and by N<sub>off</sub> the number of background events, with a normalisation ratio *α*, the significance of an excess N<sub>on</sub> − *α* × N<sub>off</sub> is given by *S* = √(−2 ln *λ*), where *λ* is the likelihood ratio between the null (background only) and the (signal + background) hypotheses:

$$\lambda = \frac{P_0(\mathrm{N}_{\mathrm{on}}, \mathrm{N}_{\mathrm{off}} \mid \bar{B}_0)}{P(\mathrm{N}_{\mathrm{on}}, \mathrm{N}_{\mathrm{off}} \mid \bar{S}, \bar{B})} = \left[\frac{\alpha}{1+\alpha} \left(\frac{\mathrm{N}_{\mathrm{on}} + \mathrm{N}_{\mathrm{off}}}{\mathrm{N}_{\mathrm{on}}}\right)\right]^{\mathrm{N}_{\mathrm{on}}} \times \left[\frac{1}{1+\alpha} \left(\frac{\mathrm{N}_{\mathrm{on}} + \mathrm{N}_{\mathrm{off}}}{\mathrm{N}_{\mathrm{off}}}\right)\right]^{\mathrm{N}_{\mathrm{off}}} \tag{1}$$

This method applies to the reflected, On-Off, ring and template backgrounds (see Sections 2.4.3–2.4.7). In contrast, when the background is estimated from a model, and is thus not subject to Poisson fluctuations (as in the field-of-view background), one should use the so-called Cash statistics [17], from which a similar formula can be derived:

$$\lambda = \frac{P_0(\mathrm{N}_{\mathrm{on}} \mid \bar{B}_0)}{P(\mathrm{N}_{\mathrm{on}} \mid \bar{S}, \bar{B})} = \left(\frac{\bar{B}}{\mathrm{N}_{\mathrm{on}}}\right)^{\mathrm{N}_{\mathrm{on}}} \exp\left(\mathrm{N}_{\mathrm{on}} - \bar{B}\right) \tag{2}$$

This method, however, assumes perfect knowledge of the background model, an assumption that never holds exactly in practice. Some ways to take into account the uncertainty in the background model are discussed in Section 2.4.9.
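Both statistics are straightforward to evaluate numerically; the sketch below (function names are illustrative) implements *S* = √(−2 ln *λ*) from Equations (1) and (2), with the sign of the excess attached:

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma significance for counts N_on in the ROI and N_off in the
    background regions, with normalisation ratio alpha (Equation (1))."""
    n_on, n_off = float(n_on), float(n_off)
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    s = np.sqrt(2.0 * (term_on + term_off))
    return np.sign(n_on - alpha * n_off) * s   # signed excess/deficit

def cash_significance(n_on, b):
    """Significance from Cash statistics (Equation (2)), for a background
    model with expectation b and negligible statistical uncertainty."""
    n_on, b = float(n_on), float(b)
    m2lnl = 2.0 * (n_on * np.log(n_on / b) + b - n_on)   # -2 ln(lambda)
    return np.sign(n_on - b) * np.sqrt(m2lnl)
```

Note that the expressions are only valid for strictly positive counts; production code would guard the logarithms against N<sub>on</sub> = 0 or N<sub>off</sub> = 0.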

### 2.4.2. Excluded Regions

When subtracting a background estimate from the number of events in the ROI, it is of prime importance to ensure that the background estimate is not itself contaminated by *γ*-rays. This issue is relevant when the background is estimated from the population of *γ*-like events in different regions of the sky, as for the reflected or ring backgrounds in particular. It also applies when a normalisation factor between the number of *γ*-ray candidates and hadron-like events is required, as is the case for the template or field-of-view backgrounds. All in all, the definition of proper *excluded regions* (regions possibly contaminated by genuine *γ*-ray events from an astrophysical source) is essentially mandatory.

The definition of excluded regions is usually done manually, at least for targeted observations or for modest regions of the sky. In some cases, however, such as when constructing a complete catalogue over a large region of the sky populated with many sources, an automatic procedure becomes mandatory to avoid biases. Such an iterative procedure was used in the H.E.S.S. Galactic plane survey [18] (see Section 3.1), excluding all regions with a statistical significance above 5*σ*, augmented by a margin of 0.3° around them.

### 2.4.3. Reflected Regions

The reflected background uses *γ*-like events from the same observation, taken from regions located at the same angular distance from the centre of the field of view (Figure 11). For each observation (pointing direction displayed as a black star in Figure 11), the ROI (red circle) lies at a given angular distance from the pointing direction. OFF regions (blue circles) of identical shape are spaced evenly in the field of view, at the same angular distance from the pointing direction. Regions which intersect one or several excluded regions (grey areas) are then eliminated from the background estimate.

**Figure 11.** Illustration of the reflected regions background, for two different observation positions (shown as black stars). The positions of the selected OFF regions are shown as filled circles. The OFF regions which overlap with excluded regions (displayed in grey) are shown as dashed circles.

Besides its technical simplicity, the main advantage of the reflected background resides in the fact that, with all regions being located at the same distance from the pointing direction, no radial dependence of the acceptance has to be taken into account. Only gradients caused by the variation of the zenith angle across the field of view need to be accounted for in the *α* normalisation factor. Moreover, with the acceptance being essentially the same in all regions (with identical energy dependence), the reflected background is very well suited to the determination of the energy spectrum of the source.

In contrast, since the background regions differ for every test position, and are different for each run, the determination of sky maps using this technique appears non-trivial (the author is not aware of any implementation of the reflected background algorithm suitable for the production of sky maps). Note that there is one case in which the reflected background cannot be used: when the ROI overlaps with the pointing direction, no OFF regions can be found with this algorithm. This imposes careful planning of the observations.
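A flat-sky sketch of the OFF-region placement follows (the helper `reflected_regions` is a hypothetical name; a real implementation works in spherical coordinates and would also drop regions intersecting excluded areas):

```python
import math

def reflected_regions(pointing, roi_centre, roi_radius, n_max=12):
    """Candidate OFF positions for the reflected-regions background:
    the ROI is rotated around the pointing direction in even steps, so all
    regions share the same offset. The step is chosen large enough that
    neighbouring regions do not overlap, capped at n_max slots."""
    dx, dy = roi_centre[0] - pointing[0], roi_centre[1] - pointing[1]
    offset = math.hypot(dx, dy)
    if offset == 0:
        return []   # ROI on the pointing axis: no reflected regions exist
    phi0 = math.atan2(dy, dx)
    step = max(2 * math.asin(min(1.0, roi_radius / offset)),
               2 * math.pi / n_max)
    n = max(1, round(2 * math.pi / step))
    return [(pointing[0] + offset * math.cos(phi0 + k * 2 * math.pi / n),
             pointing[1] + offset * math.sin(phi0 + k * 2 * math.pi / n))
            for k in range(1, n)]        # k = 0 is the ROI itself
```

The empty return for an on-axis ROI mirrors the limitation noted above: when the ROI overlaps the pointing direction, the method has no OFF regions to offer.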

### 2.4.4. On-Off Background

The On-Off background is somewhat similar in spirit to the reflected background. It also uses *γ*-like events in the field of view, but instead of taking the control (OFF) regions from different positions in the **same** run, it uses **pairs** of runs with the same observing conditions. This was one of the first methods used in the field [5], as it is particularly robust to systematics. Observations were paired in right ascension, such that the telescope trajectory on the sky was completely identical in both runs, thus cancelling the effect of the varying zenith angle.

The On-Off background also allows energy spectra to be derived, and is suitable for very extended sources, but presents two main disadvantages. First, the amount of data needed is at least doubled, since for every ON run a paired OFF run is needed. Using a single OFF run for each ON run gives *α* = 1 in Equation (1), meaning that the fluctuations in the background dominate the calculation of the significance. To limit this effect, one might need 5–10 OFF runs per ON run, which further increases the amount of data and leads to very poor efficiency. Second, it requires the OFF run to be clear of *γ*-rays. With the large increase in the number of known *γ*-ray sources in recent decades, this becomes tricky, if not impossible, in crowded regions such as the galactic plane. Nowadays, the On-Off background is rarely used. It survives in very specific projects concerning very extended sources (covering most of the field of view), for which other methods fail, e.g., [19]. OFF runs are no longer taken from dedicated observations, but from archival observations of extragalactic fields taken under similar conditions and empty of *γ*-ray sources. This variant, where archival data are used instead of paired observations, is also called the matched run background.

### 2.4.5. Ring Background

The ring background [15] also only uses *γ*-like events, but from a different part of the phase space. The overall idea is to compute the expected number of background events in the ROI using a ring around its position (see Figure 12). The radius and thickness of the ring have a direct influence on the normalisation ratio *α*, and thus on the final statistics. In general, the size (area) of the ring should be set to a value that is large compared to the size of the ROI (to limit the statistical fluctuations in the OFF regions), but should not exceed a significant fraction of the size of the field of view, to avoid introducing additional systematics.

**Figure 12.** Principle of the ring and adaptive ring backgrounds. From left to right: (**a**) ring background in camera frame, (**b**) ring background in astronomical frame, (**c**) ring background in astronomical frame with excluded regions, (**d**) adaptive ring background.

Two different versions of the ring background currently exist, corresponding to different use cases:

- a ring defined in the camera frame, applied on a run-by-run basis (Figure 12a);
- a ring defined in the astronomical frame, applied to the stacked data set (Figure 12b,c).

By averaging the background over a large region around the ROI, the ring background is rather robust against localised background systematics in the OFF region caused, in particular, by small-scale variations of the NSB (bright stars, . . . ). It also permits small values of the normalisation ratio *α* in Equation (1), thus reducing the effect of the background fluctuations and improving the statistical power of the analysis. The drawback is that it tends to remove any large structure of *γ*-ray emission, such as the large-scale galactic diffuse emission.

### 2.4.6. Adaptive Ring Background

In very crowded regions, such as the Galactic plane, the presence of (very) extended sources can make large fractions of the ring unusable, as shown in Figure 12d. The normalisation ratio *α* then takes very different values depending on the position of the ROI, leading to inhomogeneous performance. In some cases, the full ring would be excluded, leading to holes in the significance map. For that reason, the concept of the adaptive ring background was introduced in [18]: for a given test position, the size of the ring is increased progressively until the acceptance integrated within the ring (and outside excluded regions) reaches at least four times the acceptance integrated in the ROI.
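On a pixel grid, the ring-growing rule can be sketched as follows (a hypothetical function with radii in pixels and a boolean exclusion mask; the factor of four follows the criterion quoted above):

```python
import numpy as np

def adaptive_ring_inner_radius(acc_map, excl_mask, ix, iy, roi_px,
                               thickness_px, target_ratio=4.0, r_max=200):
    """Grow the inner radius of a ring around test position (ix, iy) until
    the acceptance integrated in the ring, outside excluded pixels, reaches
    `target_ratio` times the acceptance integrated in the ROI."""
    yy, xx = np.indices(acc_map.shape)
    r = np.hypot(xx - ix, yy - iy)
    acc_roi = acc_map[r <= roi_px].sum()
    r_in = 2 * roi_px                      # start just outside the ROI
    while r_in < r_max:
        ring = (r > r_in) & (r <= r_in + thickness_px) & ~excl_mask
        if acc_map[ring].sum() >= target_ratio * acc_roi:
            return r_in
        r_in += 1                          # enlarge the ring and retry
    return None                            # no valid ring within r_max
```

Returning `None` corresponds to the pathological case mentioned above, where the whole ring is excluded and the significance map would have a hole.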

### 2.4.7. Template Background

The template background [20] differs completely from the previous methods. It makes use of the fact that only a small fraction of the events in the ROI are *γ*-like events, the vast majority being cosmic-ray events, also denoted hadron-like events, which can be used to estimate the background. It assumes that the rates of *γ*-like and hadron-like events are proportional, up to a predictable factor. The ratio between the two is estimated from the ratio of the relative acceptances to *γ*-like and hadron-like events calculated previously (Section 2.3).

Until relatively recently, the template background was only used to derive the morphology of *γ*-ray sources. Spectrum determination appears very challenging, since the population of events is made from the superposition of three categories, for which the response functions have to be determined, either from Monte Carlo simulations or from OFF data:

- genuine *γ*-rays;
- *γ*-like hadronic cosmic rays;
- *γ*-like cosmic-ray electrons.

A method was proposed in [21], in which the template background normalisation is done in reconstructed energy bands, and various lookup corrections are applied to account for the different shapes of the acceptance for *γ*-like and hadron-like events. Although it provides results consistent with classical methods and can be applied in crowded regions where there are no *γ*-ray-free regions (a clear advantage in the context of the upcoming CTA), its complexity might introduce new systematics which are not easy to assess. This is a substantial problem at low energies, where the ratio of *γ*-like to hadron-like events degrades. In general, such methods work rather well with hard selections, but are subject to large systematics when using loose selections.
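The core of the template normalisation is a per-pixel rescaling; a minimal sketch (hypothetical function, ignoring the energy-band treatment and lookup corrections of [21]):

```python
import numpy as np

def template_background(hadron_map, acc_gamma_like, acc_hadron_like):
    """Template-background estimate: the hadron-like event map is scaled,
    pixel by pixel, by the ratio of the gamma-like to hadron-like
    acceptances, giving the expected gamma-like background map."""
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio = np.where(acc_hadron_like > 0,
                         acc_gamma_like / acc_hadron_like, 0.0)
    return hadron_map * ratio
```

The acceptance maps would come from Section 2.3; pixels with no hadron-like acceptance contribute no background estimate.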

### 2.4.8. Field-of-View Background

In the field-of-view background [15], the acceptance is directly used as the background model, with a normalisation factor usually derived from specific regions in the field of view (regions assumed to be free from *γ*-ray emission, such as side bands in the case of the Galactic plane). The acceptance can be derived from the same data set, or from OFF observations. Since much larger statistics are used to derive the acceptance at each test position, the statistical fluctuations of the background model are usually considered negligible, and the Cash statistics are used (Equation (2)). The field-of-view background can be applied to very extended sources, or even to diffuse structures, and has the largest statistical power (equivalent to the limit *α* → 0), but is prone to systematics induced by the imperfect determination of the acceptance.

The field-of-view background has so far rarely been used in VHE *γ*-ray astronomy. It was recently employed in a detailed comparison between the H.E.S.S. and HAWC views of the Galactic plane [22].

### 2.4.9. Assessment of Systematics

When the background is properly modelled, the significance distribution derived from Equation (1) or (2) (depending on the algorithm used) should follow a normal distribution. In the presence of non-negligible systematic differences between the actual background distribution and its model, the distribution is widened. Denoting by *σ*sig the Gaussian width of the significance distribution, the relative level of background systematics *f*syst in the field of view can be estimated simply as

$$f\_{\text{syst}} = \sqrt{\frac{\sigma\_{\text{sig}}^2 - 1}{\langle B \rangle}} \tag{3}$$

where ⟨*B*⟩ is the average number of background events per sky bin. This simple evaluation provides an easy-to-calculate, single number per field, but does not take into account the fact that the number of background events varies significantly across the field of view, notably in the presence of strong acceptance gradients. More elaborate models have been developed to quantify the level of background systematics more precisely, e.g., [23]. State-of-the-art analyses of IACT data reach background systematics of the order of 1–2%. Background systematics can arise, in particular, from variations of the night-sky background across the field of view, variations of the calibration coefficients (high voltage, pixel gains, . . . ) across the camera, changing atmospheric conditions, and also from the pointing direction with respect to the Earth's magnetic field (which affects the lateral development of showers). When using the simulated acceptance (Section 2.3.3), the systematics level should not depend much on the FoV, because most of the predictable, field-of-view dependent effects are properly taken into account in the acceptance. For other acceptance models, the field-of-view effects are expected to be the dominant source of systematics.
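As a minimal illustration of Equation (3), the sketch below converts the width of a significance distribution into a systematics level; the input numbers are invented for the example:

```python
import math

def syst_level(sigma_sig, mean_bkg):
    """Relative background systematics f_syst from Equation (3).

    sigma_sig -- Gaussian width of the significance distribution
    mean_bkg  -- average number of background events per sky bin
    """
    if sigma_sig <= 1.0:
        return 0.0  # distribution no wider than normal: no measurable systematics
    return math.sqrt((sigma_sig**2 - 1.0) / mean_bkg)

# Example: a significance map of width 1.1 with ~400 background events per bin
print(f"f_syst = {syst_level(1.1, 400):.3%}")  # ~2.3%, the state-of-the-art regime
```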

Background systematics are already the limiting factor for very deep exposures and/or very extended sources in the current generation of instruments, and have been identified as a major challenge for next-generation instruments, in particular for CTA. In this context, several strategies for the mitigation of background systematics have already been investigated. In [24], it is proposed to take the systematics into account by adding an uncertainty to the *α* factor in Equation (1), and by modelling the resulting significance distribution. This restores the correct statistical behaviour of the significance across the field of view (and in particular its normal distribution), but the price to pay is a significant reduction in sensitivity. In [25], a joint likelihood is used to compute the total significance instead of stacking the individual observations together. The *α* parameter is modelled as a random variable for each observation. This solves some of the problems that occur when stacking observations with very different values of *α*, for which the error propagation appears problematic, while offering equivalent or superior sensitivity, but it requires a good knowledge of the *α* distribution for each observation. In [12], no assumption on the shape of the acceptance is made. Instead, observations are grouped by similar observational conditions (array configuration, zenith angle, . . . ), and a generalised likelihood ratio is used to derive simultaneously the signal and the background at a given position, assuming identical relative acceptance shapes for the observations belonging to the same group. It does not, however, solve the problem of field-of-view systematics which vary from observation to observation, even within the same group.
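The *α*-dependent significance discussed here is classically the Li & Ma (1983, Eq. 17) formula. A minimal sketch for a single observation makes the role of *α* explicit (the ON/OFF counts below are invented):

```python
import math

def li_ma(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) significance for ON/OFF counts with exposure ratio alpha."""
    if n_on == 0 and n_off == 0:
        return 0.0
    tot = n_on + n_off
    term_on = n_on * math.log(((1 + alpha) / alpha) * (n_on / tot)) if n_on > 0 else 0.0
    term_off = n_off * math.log((1 + alpha) * (n_off / tot)) if n_off > 0 else 0.0
    s = math.sqrt(2.0 * max(term_on + term_off, 0.0))
    # Negative significance for a deficit
    return s if n_on >= alpha * n_off else -s

print(li_ma(50, 500, 0.1))   # ~0: N_on fully consistent with alpha * N_off
print(li_ma(120, 500, 0.1))  # several sigma: clear excess over the background
```

Stacking runs with very different *α* before applying this formula is precisely where the error propagation becomes problematic, motivating the per-observation joint likelihood of [25].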

### 2.4.10. Comparison

A comparison of three background subtraction algorithms, using the same data set as for Figure 10 (100◦ of the H.E.S.S. Galactic plane survey, with a top-hat smoothing of 0.25◦), is shown in Figure 13. The panels look very similar overall; however, the template and field-of-view backgrounds tend to produce more "diffuse" emission or "bridges" between the well-identified, localised sources, and exhibit consistently larger systematics than the ring background. Note, however, that in this example, the acceptance model (here the 2D acceptance) was determined from the same data set, and might therefore contain some residual contamination from large-scale galactic diffuse emission. Moreover, the excluded regions were not re-optimised for this analysis and might be undersized. This example should therefore serve as an illustration of the sensitivity differences between the algorithms, and not as an input for a scientific discussion.

**Figure 13.** Comparison of three background subtraction algorithms for the inner 100◦ of the H.E.S.S. Galactic plane survey, with a source size of 0.25◦. (**Top**) Ring background. (**Middle**) Template background. (**Bottom**) Field-of-view background.

### *2.5. Toward Template Fitting*

Template fitting is the state of the art in high-energy *γ*-ray astronomy, and is the default in Fermi-LAT data analysis: the counts maps or photon lists are compared in an iterative procedure to a composite model using a likelihood analysis. The model describing the data is built from the following ingredients:


Additional models for large-scale components, such as the Fermi Bubbles for instance, can be incorporated as well. The source model is usually constructed iteratively, by adding new sources until the likelihood converges. In contrast to the high-energy domain, template fitting is so far still in its infancy in very high-energy *γ*-ray data analysis, but will certainly become one of the major, if not the default, analysis procedures in the coming years.

Building on its success in high-energy *γ*-ray astronomy, the MAGIC collaboration recently implemented such a template fitting procedure [27]. Open-source software packages such as gammapy [10] and ctools [28] already offer template fitting. One very important difference with respect to high-energy *γ*-ray astronomy lies in the way the background model is generated: high-energy *γ*-ray instruments are signal-dominated, and the so-called background consists mostly of genuine *γ* rays of diffuse origin. This model can be incorporated directly in the final part of the analysis, using the standard instrument response functions. In contrast, IACTs are background-dominated, and the remaining background consists mostly of hadronic or electronic cosmic rays, which are much more complicated to evaluate. The model used in template fitting analyses must, therefore, incorporate such a background model, or acceptance, produced by the procedure described in Section 2.3.

### *2.6. Catalogue Pipelines*

In this section, the tools used in the final part of the catalogue construction are described.

### 2.6.1. Requirements

Until recently, the analysis of large data sets was done in a completely supervised way, with most tasks being the responsibility of the scientist. In particular, the excluded regions were defined manually, based on the known existing sources and on previously obtained results. Similarly, source identification was based on spatial overlap and similarity in shape with counterparts at other wavelengths, and was subject to human judgement. With the increasing exposure and consequent depth of the data sets, the problem of source confusion and overlap has also become crucial, pushing for fully automated catalogue pipelines. The main tasks of an automated catalogue pipeline are:


The whole procedure usually needs to be executed several times in an iterative way: when new sources are identified at step 5, the excluded regions from step 2 need to be refined, and the whole loop needs to be performed again. Some quantitative criteria are also needed to decide when to stop the iterations. The analysis pipeline can also incorporate additional tasks, such as automatic searches for transient events and for source variability, as well as searches for counterparts at other wavelengths, which are currently still mainly done manually, since they require some physics expertise.

### 2.6.2. Completeness, Angular Resolution and Horizon

As mentioned already, IACTs are background-dominated instruments. This has numerous implications for large-scale surveys and for the construction of catalogues. For sufficiently high statistics and a low signal-to-background ratio (a reasonable assumption), the significance of a detection scales with the source intensity and observation time as:

$$
\sigma \approx \phi \, A \sqrt{\frac{t}{B}} \tag{4}
$$

with *φ* the source flux (at Earth), *A* the effective area of the array, *t* the observation time and *B* the background rate, which depends on the detector characteristics and, therefore, indirectly on the effective area. The minimum detectable flux thus scales as 1/√*t*, which usually limits the depth of existing surveys.
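Rearranging Equation (4) for a fixed detection threshold gives the minimum detectable flux; the sketch below (arbitrary units, invented numbers) makes the 1/√*t* scaling explicit:

```python
import math

def min_flux(area, t, bkg_rate, sigma_min=5.0):
    """Minimum detectable flux from Equation (4): phi_min = sigma_min * sqrt(B/t) / A.

    All quantities are in arbitrary, purely illustrative units.
    """
    return sigma_min * math.sqrt(bkg_rate / t) / area

base = min_flux(area=1.0, t=25.0, bkg_rate=1.0)
# Quadrupling the observation time only halves the minimum detectable flux
print(min_flux(area=1.0, t=100.0, bkg_rate=1.0) / base)  # 0.5
```

This diminishing return of exposure is why survey depth saturates long before background systematics are even considered.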

The background rate, *B*, depends on various instrumental characteristics (array geometry, background rejection power, and angular resolution, among others), but also on the source extent. In the context where most of the galactic sources are (very) extended, as demonstrated by the results accumulated over recent years, one can neglect the effect of the angular resolution and assume that the background rate scales as the source solid angle (*B* ∼ *b* × Ωs). Assuming a source at a distance *d* with a physical extent *R* and an intrinsic luminosity *L*, the scaling of the significance for point-like and extended sources becomes, respectively:

$$
\sigma\_{\text{pt-like}} \propto \frac{LA}{d^2} \sqrt{\frac{t}{B}}; \qquad \sigma\_{\text{extended}} \propto \frac{LA}{d^2R} \sqrt{\frac{t}{b}} \tag{5}
$$

It follows that:


### **3. A New View on the Milky Way**

Over the last 20 years, major collaborations in the field have conducted several surveys of varied angular extent, completeness and depth, which have led to the discovery of many sources and allowed the first population studies to be performed.

### *3.1. Existing IACT Surveys*

### 3.1.1. Early Times

The HEGRA collaboration conducted the first systematic survey of modern TeV *γ*-ray astronomy [29]. It consisted of 176 h of observations covering one quarter of the Galactic plane (−2◦ < *l* < 85◦) and resulted in no source detection, thus placing upper limits ranging from 0.15 to a few Crab units, depending on the observational conditions. Source stacking on selected source populations (bright GeV sources, nearby supernova remnants, powerful and nearby pulsars) was used to derive more constraining, so-called ensemble limits.

### 3.1.2. Galactic Plane Surveys

Following this, the H.E.S.S. collaboration conducted the most comprehensive survey of the Milky Way so far, as part of a decade-long observational program, dubbed the HGPS. Nearly 2700 h of good-quality data were accumulated between 2004 and 2013, in the longitude range (−110◦ < *l* < 65◦), with a sensitivity better than 1.5% of the Crab flux. The survey was published in three successive papers [18,30,31], comprising data sets of increasing size. Whereas the first two papers used a manual source identification procedure, the last paper proposed, for the first time, a semi-automated pipeline, similar to that described in Section 2.6.1. The resulting flux map is shown in Figure 14 and comprises 78 firmly identified VHE sources.

Out of these 78 sources, the majority (47) are associated with an energetic pulsar, and 12 of them correspond to a firmly identified pulsar wind nebula (PWN). The second population by frequency corresponds to supernova remnants (SNRs), with 24 sources associated with a shell-type SNR (although the number of chance coincidences is non-negligible, due to the number and the large angular extent of such objects). Six VHE sources are firmly identified as SNRs, with two additional candidates based on their shell-type morphologies. Finally, three binary systems form the only class of variable galactic sources at these energies (so far).

It should be noted that a large number of sources (36) cannot be firmly identified with the rather strict association criteria used in the process (positional evidence and, depending on the source class, energy-dependent morphology consistent with other wavelengths, variability, . . . ). In most cases there are, however, plausible counterparts. Eleven sources, denoted as "not associated", had no plausible association at the time of publication of the paper.

While a rather large fraction of the galaxy has been sampled down to 10% of the Crab flux (point-like sensitivity), a flux limit of 1% Crab has only been reached in the solar system's neighbourhood. From the log *N*–log *S* distribution, an estimate of ∼600 sources in the Galaxy above 1% Crab was obtained (with a statistical error of a factor of 2). The HGPS included a large-scale emission model, accounting for both unresolved sources and genuine diffuse emission, due to the interaction of cosmic rays with the interstellar medium. This "diffuse" component, already established in [32], has a latitude distribution similar to that of the HGPS sources. Based on a source population synthesis, [33] estimated that a significant fraction (13–32%) of the *γ*-ray emission within the HGPS is due to yet unresolved sources. They estimate the total number of VHE sources in the Galaxy to be in the range from 800 to 7000.
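The extrapolation behind such estimates can be sketched with a simple power-law cumulative flux distribution; the slope and source counts below are illustrative choices, not the values fitted in the HGPS paper:

```python
def n_above(flux, n_ref, flux_ref, slope=1.0):
    """Cumulative logN-logS: N(>S) = N_ref * (S / S_ref)**(-slope).

    slope=1.0 is an illustrative choice for a disk-like population.
    """
    return n_ref * (flux / flux_ref) ** (-slope)

# Illustrative only: 30 sources above 10% Crab extrapolate to 300 above 1% Crab
print(n_above(0.01, n_ref=30, flux_ref=0.1))  # 300.0
```

The factor-of-2 statistical error quoted above reflects how sensitive such extrapolations are to the fitted slope and to the completeness correction.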

**Figure 14.** H.E.S.S. Galactic Plane Survey: Integral flux above 1 TeV. Reproduced from [18].

H.E.S.S. also performed a deep survey of the Large Magellanic Cloud (LMC) [34], with 210 h of data. Although the LMC is located much further away than the Galactic Centre, the survey resulted in the detection of three sources of exceptional intrinsic luminosity: the superbubble 30 Dor C, the energetic pulsar wind nebula N 157B, and the radio-loud supernova remnant N 132D. Since the LMC is seen almost face-on, source confusion is not the problem it can be in the Milky Way. N 157B and N 132D belong to source classes that are represented in the HGPS, but they stand out by their distinguishing characteristics: N 157B is powered by the most energetic young pulsar known so far, while N 132D is one of the oldest VHE *γ*-ray emitting SNRs, with possibly a very high cosmic-ray acceleration efficiency.

### 3.1.3. Particular Regions

A few years ago, VERITAS published a survey of the Cygnus region [35], based on 300 h of data collected over 7 years. This region, where the Cygnus arm of the Galaxy is observed tangentially, is the brightest region of diffuse *γ*-ray emission in the northern sky, and could also exhibit one of the highest densities of *γ*-ray sources. The VERITAS survey covered a region of 15◦ by 5◦ (Galactic longitude *l* ∈ [67◦, 82◦]) and reached a point-like sensitivity of 3% Crab. Four already known *γ*-ray sources (three of which are significantly extended) are detected in this survey. Detailed analysis of the significance distribution did not indicate the presence of additional, sub-threshold sources. Upper limits on a large number of potential targets were derived (including, in particular, energetic pulsars and supernova remnants). Many Fermi-LAT sources visible at lower energies were not detected at VHE in this survey, and the ratio of VHE to HE sources appears rather similar to that in the H.E.S.S. survey region.

### *3.2. Results from Particle Array Survey Instruments*

Non-imaging particle array instruments such as Milagro and its successor, HAWC, rely on a completely different technique. Instead of detecting the Cherenkov light emitted by the charged particles in the atmospheric showers, they detect the particles of these air showers that reach the ground. Various techniques have been investigated in the past, including very large surfaces of resistive plate chambers [36], plastic scintillators [37], and water Cherenkov detectors [38,39]. More recently, LHAASO started to operate a system consisting of three interconnected detectors, combining water and air Cherenkov with scintillators [40].

Particle array survey instruments are confronted with very large amounts of raw data collected over many years, which poses some specific challenges for the analysis. Data analysis usually relies on a likelihood formalism, e.g., [41], in which a physics model (sky position of *γ*-ray sources, spectrum, angular extent, etc.) is confronted with the data through a likelihood maximisation routine that takes into account the detector response. The number of background events (hadronic events passing the selection cuts) in each sky bin is usually estimated directly from the data, either prior to the maximisation procedure (using off-source data), or within the procedure itself, via an additional nuisance parameter inserted in the log-likelihood.

Compared to IACTs, survey instruments have a much better duty cycle (close to 100%) and very large fields of view (nearly half of the sky), but poorer hadronic rejection and reconstruction capabilities, leading to a poorer angular resolution (of the order of 1◦) and limited spectral performance. They are, however, well suited to the analysis of extended sources in general, and to the study of large-scale diffuse emission in particular. HAWC, with a sensitivity improved by one order of magnitude compared to Milagro, started to provide a very complementary view of the Milky Way at VHE with unbiased, large-scale surveys.

While the Milagro survey only yielded two sources of *γ* rays, the Crab Nebula and Mrk 421 [42], the first HAWC catalogue [43], obtained with an incomplete array, already contained ten sources and candidate sources, three of them detected with significances >5*σ* (post-trials). The two subsequent catalogues, 2HWC [44] and 3HWC [45], contain, respectively, 39 and 65 sources (17 of which are considered secondary sources, not being well separated from neighbouring sources). As the first large-scale, unbiased catalogue, this constitutes a major contribution to the field. The all-sky significance map, under the assumption of point-like sources, is displayed in Figure 15: most VHE sources are, as for H.E.S.S., concentrated along the Galactic plane.

**Figure 15.** All-sky significance map of the third HAWC Catalogue of Very-high-energy Gamma-Ray Sources. Reproduced from [45].

The overall analysis and catalogue construction is very different from that in use for IACTs, and is not the main subject of this paper. In general, the shower core is reconstructed using the density of particles on the ground, while the timing provides the shower axis, and thus the reconstruction of the direction. The homogeneity of the particle density is used to discriminate between *γ*-rays and charged cosmic rays, and a likelihood ratio procedure is used to produce the significance map.

One of the most interesting features of the HAWC data is the presence of very extended *γ*-ray emission around young pulsars (and in particular Geminga and Monogem [46]), which indicates that such extended pulsar "halos" could be a rather common feature, even for old pulsars which could have already left their SNR shell, or whose shell could have already vanished. Indeed, out of the 65 detected HAWC sources, 56 have a pulsar as a plausible counterpart. This could open new prospects for the quite numerous unidentified VHE sources.

Since the results from H.E.S.S. and HAWC, in the part of the sky that is visible to both instruments, appeared rather different at first glance (due to the different instrumental performances), it became mandatory to compare the results more thoroughly. This was done in [22], using the field-of-view background (suitable for very extended sources) and smoothing the H.E.S.S. data to mimic the HAWC angular resolution. The results, shown in Figure 16, indicate a reasonable agreement, with some remaining, intriguing differences.

**Figure 16.** The Galactic plane as seen by H.E.S.S. and HAWC with the same angular resolution. Adapted from [22].

### *3.3. Meta-Catalogues and Population of VHE Sources*

Meta-catalogues are online catalogues collecting the results of several instruments in a single database. IACTs sometimes publish such catalogues (e.g., [47]) summarising many years of observations. In the field of VHE astronomy, TeVCat [48] is the standard tool (http://tevcat2.uchicago.edu/, accessed on 28 October 2021). Although the collected data correspond to different thresholds and uneven exposures, these catalogues are useful for statistical studies; they do not, however, constitute unbiased and/or complete samples, and need to be filled manually (for the time being). TeVCat only reports positive detections and not upper limits, which could be useful to study source variability and, for instance, examine a transition from an emitting state to a non-emitting state or vice versa.

The populations of VHE sources, as of September 2021, were extracted from TeVCat, and are displayed in Figure 17 for galactic sources (left) and extragalactic sources (right). Whereas PWNs comprise the largest population by number in the galactic plane, followed by SNRs and binary systems, it should be noted that the majority of the sources remain unidentified. Most galactic sources are (very) extended, and thus several plausible counterparts exist. In contrast, the extragalactic sky is currently largely dominated by well-identified BL Lacs (plus some other AGNs), which might well result from a selection bias, since no systematic survey of the extragalactic sky has been conducted so far. The identification of extragalactic sources is, except in very rare cases, not problematic, due to their (mostly) point-like nature and to the lower density of possible counterparts.

**Figure 17.** Population of established VHE sources extracted from the TeVCat [48] meta-catalogue. (**Left**) Galactic sources. (**Right**) Extragalactic sources.

### *3.4. Population of VHE Sources*

Unbiased surveys are essential tools for the analysis of source populations, which can identify global trends and possible evolution schemes within one source class. For the first time, VHE *γ*-ray astronomy is now opening this possibility with large-scale surveys. The moderate depth and relative incompleteness of the existing surveys, however, make these first studies not fully conclusive and subject to future improvements. Two main population studies have already been performed based on the H.E.S.S. HGPS data.

### 3.4.1. Population of Pulsar Wind Nebulae

The H.E.S.S. HGPS data have been used in a systematic population study of pulsar wind nebulæ [49]. In addition to the 14 HGPS sources firmly identified as PWNs, 10 additional sources are found likely to be PWNs. In fact, most young and energetic pulsars are found to be associated with a plausible PWN candidate (Figure 18, left). The data showed, for the first time, a correlation of the TeV surface brightness with the pulsar spin-down power *E*˙ , which can be quite well explained by a rather simple evolutionary model of PWNs, indicated by blue bands in the various plots: assuming simple dipole-like radiation, the pulsar spin-down power decreases with increasing age (as measured by its characteristic age *τ*c = *P*/(2*P*˙)). The dynamical evolution of PWNs is then modelled in three distinct phases: first the free expansion phase, which lasts for a few kyr, followed by the reverse shock interaction (until some tens of kyr), and finally the relic stage. A one-zone, time-dependent injection model is then used for the population of electrons, from which the TeV luminosity is computed using standard radiative models. The results of this model are reproduced in Figure 18. The extension increases quickly in the free expansion phase (middle, *R* ∼ *t*^1.2) and then slows down at later stages (*R* ∼ *t*^0.3). The TeV luminosity vs. characteristic age (right) shows a rather large scatter in the data, still compatible with the varied model band (blue bands in the plot). This scatter might reflect the intrinsic variability of the PWNs and their environments.
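The quoted two-stage expansion can be sketched as a broken power law with the exponents given above; the break time and radius normalisation below are purely illustrative, not fitted values:

```python
def pwn_radius(t_kyr, t_break=3.0, r_break=10.0):
    """PWN radius vs. age: R ~ t^1.2 in free expansion, R ~ t^0.3 afterwards.

    t_break (kyr) and r_break (pc) are illustrative normalisations only.
    """
    if t_kyr <= t_break:
        return r_break * (t_kyr / t_break) ** 1.2   # free expansion phase
    return r_break * (t_kyr / t_break) ** 0.3       # post-reverse-shock evolution

# The two branches join continuously at the break, then growth flattens sharply
print(round(pwn_radius(3.0), 2), round(pwn_radius(30.0), 2))  # 10.0 19.95
```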

This study is a first attempt to model, in a rather comprehensive way, the TeV emission of PWNs. It suffers, however, from several selection biases, due to the incompleteness of the survey and the difficulty of detecting very extended nebulæ. Going beyond this result requires the use of a population synthesis model to address these biases properly. Future, deeper surveys will also help to improve the precision of the modelling.

**Figure 18.** Results of PWN population study. (**Left**) Population of pulsars in the spin-down power *E*˙ vs. characteristic age plane. Young and energetic pulsars, clustering at the top-left, are associated with plausible PWN candidates. (**Middle**) PWN extension evolution with time. (**Right**) Evolution of the PWN TeV luminosity with characteristic age. Reproduced from [49].

### 3.4.2. Supernova Remnant Populations

The second HGPS population study is related to the second galactic population by frequency, namely the supernova remnants [50]. In this study, upper limits are computed for all SNRs that fall in the HGPS region and are not detected at VHE (i.e., not overlapping with a significant excess). A sample of 108 SNRs is constructed this way, biased towards low flux, since the detected SNRs are excluded from the sample. Using the canonical cosmic-ray paradigm, constraints on the typical ambient density around SNR shells (*n* ≤ 7 cm⁻³) and on the electron-to-proton energy fraction (*ε*ep ≤ 5 × 10⁻³) are derived. A shift of the mean of the significance distribution (1.01) is observed, which might be due to the cumulative effect of sub-threshold SNR shells and the galactic diffuse component. Using the SNR shells that are detected in the VHE band, some constraints on the luminosity evolution of SNRs in the radio and VHE bands are also derived. The *L*VHE/*L*radio luminosity ratio exhibits a clear correlation with source age, which is interpreted as being due to the fact that, in the first several thousand years, the radio-synchrotron emission of SNRs decreases quickly, while the VHE emission decreases only slowly.

Here again, the understanding of SNR evolution will greatly benefit from future, deeper surveys.

### **4. Perspectives and Outlook**

After the tremendous success of the third-generation IACTs during the last two decades, driven essentially by H.E.S.S., VERITAS and MAGIC, a new step towards an international facility is currently being taken, merging the efforts of the different collaborations into a single, world-wide project, named "The Cherenkov Telescope Array" (CTA). Lessons learnt from the various concepts tested in the third-generation instruments are being used to design a new array, focusing on (i) performance, (ii) reliability and (iii) flexibility, with some major challenges ahead. Recent developments established particle array survey instruments as a viable and complementary technique to IACTs, particularly suited to large-scale surveys and all-sky monitoring. New technical developments and upcoming projects are expected to further boost performance and, on a longer timescale, provide nearly full sky coverage.

### *4.1. CTA, the Next Generation IACT*

The Cherenkov Telescope Array (CTA) is the next-generation array of imaging atmospheric Cherenkov telescopes, currently in the prototyping stage. It aims to transform our understanding of the VHE universe. It will consist of two arrays, one in the Canary Islands and one in Chile [51], with different telescope layouts, for a total of ∼100 telescopes. In order to increase the dynamic range in energy, telescopes of three different sizes will be combined in the same array: large-sized telescopes (LSTs), with a field of view of >4.5◦ and a dense layout, will focus on the lowest energies; medium-sized telescopes (MSTs), with a larger field of view of >7◦ on a sparser layout, will provide the sensitivity in the core of the energy domain; and small-sized telescopes (SSTs), with an even larger field of view (>8◦) spread over a very large area, will explore the highest energies. These arrays of telescopes of different sizes have been designed to provide an improvement by a factor of ∼10 in sensitivity compared to the previous generation, with substantial improvements in angular and energy resolution, but at the cost of a much higher (∼×10) event rate [52] and a huge data volume (∼PB/year).

Amongst the key science projects that have been identified for the first years of operation, large surveys play a particular role in providing unbiased samples of particle accelerators, but also in searching for the unexpected. In the design of the telescopes and of the array, a strong focus has been placed on survey capabilities, in particular through the conception of large field-of-view cameras and the first investigations of alternative pointing schemes, such as divergent pointing (Section 2.1). Three major surveys are currently foreseen [53]:


The characteristics of the three surveys differ in terms of physics goals. Most likely, the configuration of the array will have to be optimised accordingly. Since the density of sources in the extragalactic sky is fairly low, and the sources are (almost) point-like, the angular resolution and background systematics requirements are not a major concern for the extragalactic survey (except perhaps at the lowest energies). To quickly cover a large fraction of the sky, and to increase the chances of catching transient events, one might want to increase the effective field of view by using, for instance, the divergent or skewed pointing mode (Section 2.1). On the other hand, due to the absorption of VHE *γ*-rays by pair creation on the extragalactic background light (EBL), one might want to achieve the lowest possible energy threshold, which is best obtained in the convergent pointing mode (at high altitude), because it maximises the collection of light. The use of LSTs is, therefore, being considered to lower the energy threshold, but they have a smaller field of view than MSTs and SSTs, resulting in (i) a longer time being required to cover the survey region and (ii) a possibly complicated acceptance shape when used in conjunction with the other telescopes. Further optimisation (e.g., grid spacing on the sky, run duration, . . . ) is still ongoing.

For the galactic (Figure 19) and LMC surveys, the angular resolution is of prime importance to mitigate the source confusion problem. The angular resolution is optimal in the convergent pointing mode (with the maximum telescope multiplicity), and improves in the core of the energy range (∼TeV), thus calling mainly for the use of MSTs and SSTs. Moreover, in recent years it has been observed that, for many galactic sources (and PWNs in particular), the extension decreases at high energy, again arguing for the best possible angular resolution. Two points remain of particular concern for the galactic survey:


**Figure 19.** Simulated results from the CTA Galactic Plane Survey in very high-energy *γ* rays for half of the plane. Reproduced from [54].

### *4.2. Next Generation Particle Array Survey Instruments*

Survey instruments are also preparing new upgrades to boost their sensitivity. In 2018, HAWC completed a major upgrade [55], consisting of the addition of a sparse array of 345 small water Cherenkov tanks spread over a large area. By improving the rejection of showers that are not well contained in the main array, this upgrade allowed the core resolution to be improved by a factor of ∼3 above 1 TeV, and the effective area to be increased substantially, particularly at the highest energies [56]. Further optimisation of the analysis to include these data is under way.

The Southern Wide-field Gamma-ray Observatory (SWGO) [57] aims to become the next generation, large-scale survey instrument in the southern hemisphere, covering an energy range from 100 GeV to hundreds of TeV. It is similar in concept to HAWC, but ∼4× larger (for the inner array), and would include a sparser, outer array of ∼1 km<sup>2</sup> to expand the energy range towards the highest-energy frontier. Planned for installation in South America, it will cover the central regions of the Galaxy with an unprecedented sensitivity, and will complement the CTA view. SWGO is not yet funded for construction.

LHAASO, currently being deployed at high altitude (4410 m above sea level) in the Sichuan Province, China, is a novel concept combining three interconnected detectors: an array of underground water Cherenkov detectors, a one-square-kilometre array of plastic scintillators, and an array of wide field-of-view Cherenkov telescopes. Early data from LHAASO demonstrate the presence of at least twelve sources of petaelectronvolt *γ* rays in the Galaxy [40], thus boosting interest in the extremely-high-energy frontier. It should be noted that LHAASO is a multi-messenger observatory with unprecedented capabilities in the field of cosmic-ray physics. Its deployment should be completed by the end of 2021.

Particle array survey instruments are currently becoming invaluable companions to IACTs. They can provide an unbiased view of the *γ*-ray emission from the Galactic plane (Section 3.2), whereas IACTs can perform deeper observations, revealing the details of the cosmic-ray acceleration and *γ*-ray emission mechanisms. Through their all-sky monitoring capabilities, particle array survey instruments can also monitor the long-term activity of variable sources, and alert the community to particular eruptive events that IACTs can sample with much greater precision. The synergy between targeted IACT observations and long-term particle array monitoring has recently been put under the spotlight with the detailed and much-anticipated H.E.S.S.–HAWC comparison [22]. These efforts should gain additional visibility in the coming years.

### **5. Conclusions**

Over the course of the last ∼20 years, the field of VHE astronomy has experienced an incredible and somewhat unexpected blooming, driven by (i) the developments in high-speed acquisition techniques, (ii) the advent of third-generation instruments building on the success of the previous ones, and (iii) the increased capabilities in image classification and pattern recognition. This has translated into an exponential increase in the number of VHE *γ*-ray sources detected over time. The so-called "Kifune plot" (Figure 20), named after T. Kifune, who showed a first version of this figure at the 1995 ICRC conference in Rome, indicates that the number of X-ray, HE and VHE sources detected has not yet saturated, and the CTA simulations predict a continuation of this trend.

Moving away from the analysis of single, well-targeted sources, scientists have developed new algorithms to map the *γ*-ray emission of large regions of the sky under varying observational conditions. The analysis of very large, heterogeneous data sets, comprising observations spread over several years at very diverse positions, has led to major developments in acceptance determination (Section 2.3) and background subtraction techniques (Section 2.4), and has paved the way towards the first VHE source catalogues. Recent large-scale surveys of unexplored regions were the main driver for the discovery of new sources, and finally made the first population studies possible (Section 3.4).

At the same time, many developments in pattern recognition and image analysis led to elaborate reconstruction and separation techniques, which are now rather close to the fundamental limits, leaving only moderate room for improvement in the future. Tremendous efforts were made to improve the shower and detector simulation codes by including subtle instrumental and atmospheric effects. State-of-the-art, realistic simulations are now able to reproduce the background with sufficient precision to open a new paradigm, replacing the classical background subtraction techniques with a modern template-fitting approach, including a fully simulated background model (Section 2.3.3).

Particle array survey instruments (Section 3.2) recently demonstrated their maturity and their strong synergies with IACTs, delivering a complementary and unbiased view of the VHE sky. The exponential rise in the number of sources shown in Figure 20 indicates that the number of sources is currently not limited by their scarcity, but by the sensitivity of the instruments. The next generation of instruments, and in particular the Cherenkov Telescope Array, will most likely have to deal with hundreds, if not thousands, of sources. Major projects, such as deep surveys of the Galactic plane, but also the first survey of a significant fraction of the extragalactic sky with unprecedented sensitivity (Section 4.1), will deliver large and unbiased catalogues of VHE sources, enabling the statistical analysis of source populations and the clarification of the underlying evolution models. They will, however, face fundamental challenges caused by the huge amount of acquired data. The background will need to be understood and modelled with sub-percent precision to avoid uncontrolled background systematics, which would limit the sensitivity of the instruments. Proper background estimation will require very detailed monitoring of the instrumental and atmospheric conditions, extensive simulations of the instrument response to varying conditions, and the incorporation of these effects in the acceptance determination algorithms. Source confusion will most likely become a major issue in regions of the sky with a large source density, such as significant parts of the Galactic plane. Improved angular resolution will be of little help due to the large size of most VHE sources. Modern analysis approaches, including template fitting and 3D modelling of the VHE sources, provide promising paths that are currently being explored.

**Figure 20.** Number of established sources as a function of time in different energy domains, also dubbed the "Kifune plot", in honour of Prof. Tadashi Kifune, who produced the first version of this figure.

The status of VHE *γ*-ray astronomy is, in fact, similar to that of X-ray or high-energy *γ*-ray astronomy: every time a new astronomical window is opened and a sensitivity threshold is reached, an exponential rise in the number of sources follows. Hence, there is little doubt that the field of VHE astronomy can look forward to a very bright future.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Contained in the main text.

**Informed Consent Statement:** Contained in the main text.

**Data Availability Statement:** Contained in the main text.

**Acknowledgments:** We thank S. Wagner, spokesman of the H.E.S.S. Collaboration and O. Reimer, chairman of the Collaboration board, for allowing us to use H.E.S.S. data in this publication. We are grateful to D. Horan for carefully reading the manuscript and for providing us with very useful suggestions.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this manuscript:



### **References**


### *Review* **High-Energy Alerts in the Multi-Messenger Era**

**Daniela Dorner <sup>1</sup>, Miguel Mostafá <sup>2,3,4,</sup>\* and Konstancja Satalecka <sup>5</sup>**


**Abstract:** The observation of electromagnetic counterparts to both high energy neutrinos and gravitational waves marked the beginning of a new era in astrophysics. The multi-messenger approach allows us to gain new insights into the most energetic events in the Universe such as gamma-ray bursts, supernovae, and black hole mergers. Real-time multi-messenger alerts are the key component of the observational strategies to unravel the transient signals expected from astrophysical sources. Focusing on the high-energy regime, we present a historical perspective of multi-messenger observations, the detectors and observational techniques used to study them, and the status of the multi-messenger alerts and the most significant results, together with an overview of the future prospects in the field.

**Keywords:** multi-messenger; real-time; high-energy; alerts

**Citation:** Dorner, D.; Mostafá, M.; Satalecka, K. High-Energy Alerts in the Multi-Messenger Era. *Universe* **2021**, *7*, 393. https://doi.org/ 10.3390/universe7110393

Academic Editors: Ulisses Barres de Almeida, Michele Doro and Ezio Caroli

Received: 22 September 2021 Accepted: 15 October 2021 Published: 20 October 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

### **1. Introduction**

The last century was marked by a fast evolution in our understanding of the Universe from the discovery of cosmic rays in 1912 to the first direct observation of gravitational waves in 2015. Particle physics and astrophysics went hand in hand towards discoveries of new particle species (antimatter, muons, pions, kaons, etc.) and the development of detection techniques (from cloud chambers to semiconductor detectors). Nowadays, not only photons but also charged particles (cosmic rays, CRs), neutrinos (*ν*), and gravitational waves (GWs) act as cosmic messengers and bring us news from distant corners of the Universe. We entered the era of multi-messenger astrophysics.

Very-high-energy (VHE, >100 GeV) *γ* rays play an important role in the multi-messenger picture. They can be produced in both leptonic and hadronic interactions and, therefore, together with neutrinos, can reveal the localization and properties of the sources of CRs. They tell us about the highest particle energies (of protons or electrons) reachable in these objects. The clues from their spectral shape reveal information about the environment (photon or matter fields) and the magnetic field strength both internal and external to the source. Recent discoveries of VHE *γ*-ray emission from Gamma-Ray Bursts (GRBs) provide us with evidence for their connection to GW-emitting objects.

In this review, we summarize the last 30 years of involvement of ground-based *γ*-ray telescopes in multi-messenger astrophysics, and give our perspective on the future developments in this field. We put a special emphasis on the follow-up of real-time alerts, which has already proven to be a very successful approach to study the multi-messenger universe.

### **2. Historical Perspective**

The source 1ES 1959+650, a near-by (z = 0.048) BL Lac type object, was the third extragalactic source discovered in VHE *γ* rays [1]. Therefore, regular observations of

this object were carried out, also in the form of coordinated multiwavelength campaigns, which were a novelty at the time. One of them, performed in 2002 by the Whipple and HEGRA VHE *γ*-ray instruments together with the Rossi X-Ray Timing Explorer, the Boltwood and Abastumani optical observatories, as well as VLA and UMRAO, brought a rather unexpected result. As reported in Ref. [2]: "Although the X-ray and *γ*-ray fluxes seemed to be correlated in general, we found an *orphan γ*-ray flare that was not accompanied by an X-ray flare." This short statement, indicating a departure from a purely leptonic emission scenario and the possibility of hadronic interactions at play [3], stirred the astro-particle community. This was particularly good news for the scientists looking for cosmic neutrino emission with the AMANDA-II neutrino telescope. This small (∼0.01 km<sup>3</sup>) predecessor of IceCube had recently been completed (in 2000) and was taking data. Soon, reports of five neutrino events observed from the direction of 1ES 1959+650, two of which arrived around the time of the orphan VHE *γ*-ray flare, followed [4,5]. Unfortunately, due to the trial factors arising from *a posteriori* choices of the time windows to be used for the statistical test, the significance of this coincidence cannot be properly estimated. Nevertheless, it sparked interest in simultaneous VHE *γ*-ray and neutrino observations—the idea for neutrino-based target-of-opportunity (ToO) triggers was born [6]. The reverse is also true: real-time alerts of VHE *γ*-ray flares are of significant interest for the neutrino community.

While neutrino telescopes continuously monitor a large fraction of the sky, Imaging Air Cherenkov Telescopes (IACTs) are pointed instruments. For simultaneous observations, either a monitoring program or ToO observations are needed. As the sources of astrophysical neutrinos are not yet clearly identified, the ToO approach had to be chosen. One such program is the Gamma-ray Follow-Up (GFU), run since 2012 by IceCube [7] in collaboration with MAGIC, VERITAS, and H.E.S.S. [8]. The key idea is to identify neutrino source candidates among the *γ*-ray emitters and search in real time for time-variable neutrino emission to reduce the overwhelming (six orders of magnitude) CR-induced background. In the current implementation, there are more than 330 monitored sources (mostly Active Galactic Nuclei, AGN) and the duration of the potential neutrino flare can vary from a few seconds to six months.

The joint work between the neutrino and electromagnetic (EM) observatories, as in [9] for example, demonstrated the power of a multi-messenger approach in astrophysics, and led to the concept of the Astrophysical Multimessenger Observatory Network (AMON) [10,11]. AMON is a continuous, real-time system designed to enable the discovery of the sources of transient multi-messenger signals by sifting through streams of sub-threshold events from several multi-messenger facilities, and correlating them in real-time in search of spatial and temporal coincidences. The first prototype of the AMON real-time system was built in 2013. The AMON server was designed to receive data (including sub-threshold events) from the member observatories, and to send alerts to the follow-up facilities when coincident signals were found. The prototype of the AMON server went online (i.e., started processing archival data) in July 2014, and the HTTPS client started running successfully at IceCube in August 2014. IceCube transmitted simulated neutrino events to AMON from August 2014 until February 2015, when AMON started receiving neutrino data from IceCube in real-time. This was the first real-time subthreshold data stream. The connection with the Gamma-ray Coordinates Network (GCN) was established in May 2015, and in August 2015, AMON started issuing real-time alerts via GCN to some collaborators (e.g., VERITAS and MASTER) for developing and testing their follow-up software and capabilities. By September 2015, archival data from Auger, ANTARES, IceCube, *Swift*-BAT, *Fermi*-LAT, and aLIGO (S6) were written to the AMON databases. AMON started the first coincidence analysis of archival data from participating neutrino (IceCube) and EM observatories (*Fermi*-LAT and *Swift*-BAT) in December 2015. The two high-uptime AMON production servers became fully operational in February 2016. These redundant servers were designed to have less than one hour of downtime per year. 
AMON started distributing in real time the IceCube high-energy starting event (HESE) [12,13] and the extremely high energy (EHE) [12] alerts to the broader astrophysical community via GCN in April 2016.

### *A New Multi-Messenger Era*

The first direct observation of gravitational waves from the merger of two black holes occurred on 14 September 2015, and it was announced by the LIGO/Virgo Collaboration (LVC) on 11 February 2016 [14]. Then, a GW event from the merger of two neutron stars was observed on 17 August 2017 [15], and the aftermath of this merger was seen across the electromagnetic spectrum [16]. This significant breakthrough in multi-messenger astrophysics clearly emphasized the urgency to establish a real-time alert system for GW events. The LVC started issuing public alerts through the GCN/TAN system from the beginning of the third observing run (O3) in April 2019. A significant percentage of O3 candidate events detected by LIGO were accompanied by corresponding triggers at Virgo. The values of the false alarm rate (FAR) were spread over a large range, with more than half of the events having a FAR greater than one per 20 years. The Kamioka Gravitational Wave Detector (KAGRA) in Japan became operational on 25 February 2020. However, KAGRA does not report its signals in real time as LIGO and Virgo do. The O3 run (and therefore the GW alerts) ended earlier than scheduled, on 27 March 2020, due to the COVID-19 pandemic.

### **3. Paving the Way**

### *3.1. Ground-Based Gamma-Ray Telescopes*

IACTs are pointed instruments with a field of view (FoV) of ∼3◦ to 5◦. Therefore, when receiving alerts, they require accurate information on the arrival direction of the multi-messenger events. Fast repositioning is an important feature of these instruments, allowing them to follow up on neutrino and other real-time alerts.

An example of IACT adaptation for rapid follow-up is the MAGIC telescope system. The MAGIC telescopes were designed to detect VHE *γ*-ray emission from GRBs and were therefore built with an extremely light-weight structure to allow for fast telescope movements. The system for repositioning and tracking sources was designed for rapid follow-up and has been equipped with calibration devices to track possible deformations of the structure [17]. Based on the fast repositioning, an automatic follow-up procedure was set up [18]. This 'GRB mode' needs to be enabled at the beginning of the night. To ensure the safety of the shift crew, there are switches that stop the motion of the telescopes when the door in the fence around the telescopes is opened. If an alert occurs while the 'GRB mode' is enabled, the automatic alert system takes control of all the subsystems to point to the position of the alert and start data taking as quickly as possible. Sub-minute times have been achieved from receiving the alert to taking data [19,20].

Other IACTs have also implemented similar automatic follow-up systems; see, for example, [21,22]. In particular, the H.E.S.S. collaboration constructed the largest IACT in the world to obtain the lowest possible energy threshold, which is important for these extragalactic observations. The large size is combined with rapid slewing capabilities (<1 min to anywhere on the sky) and the deployment of a fully automatic alert system [21].

Although these automatic follow-up procedures were originally designed for GRB alerts, it is possible to adapt them to respond to neutrino and GW alerts, for example, [23].

### *3.2. Neutrino Telescopes*

The largest (1 km<sup>3</sup>) operating neutrino observatory is IceCube [24], located at the South Pole. Two smaller detectors—ANTARES (∼0.01 km<sup>3</sup>) [25] and Baikal (∼0.4 km<sup>3</sup>) [26]—are immersed in the Mediterranean Sea and Lake Baikal, respectively. All of them detect neutrinos using the Cherenkov radiation from the secondary particles (muon, electron, tau, and hadronic showers) that are produced in neutrino interactions in water or ice. Neutrino telescopes use the bulk of the Earth as a shield against the atmospheric particle background. Therefore, they are most sensitive to the sources located on the hemisphere

opposite to their location (e.g., the Northern hemisphere in the case of IceCube). The first astrophysical neutrino events were discovered with IceCube in 2013 [13], and more than 100 astrophysical candidates have been detected so far [27]. Although several types of searches for neutrino sources have been performed, including correlation studies with different source catalogs [28,29] as well as blind searches in space and time [30,31], no unambiguous identification of a neutrino source has been made at the discovery level. The most significant excess (4.2*σ* pre-trials) in the IceCube 10-year exposure map is at the location of the starburst galaxy NGC 1068. No VHE *γ*-ray excess has been detected from this source, despite a deep observation campaign performed by MAGIC [32].

The first neutrino events identified as astrophysical in origin, the High Energy Starting Events (HESE), were mostly of the cascade topology. This made the estimation of the arrival direction difficult, especially in ice, where the scattering length is shorter than in water. HESE events have localization uncertainties of the order of hundreds of square degrees. Due to the improvements in background rejection and event reconstruction, the diffuse astrophysical neutrino flux was also measured with track-like events, the so-called Extremely High Energy (EHE) events [33]. The localization uncertainty of EHE events was reduced to a few square degrees, that is, an area comparable to the typical field of view of an IACT. The experience gained with these two streams and the advances in the online event reconstruction allowed IceCube to establish the first public real-time neutrino channels: the EHE and HESE alerts. The IceCube collaboration issued the first real-time alert (EHE-160424A) on 24 April 2016, and it was followed up by all IACTs [34].

### *3.3. The GCN/TAN System*

As the duration of GRBs is of the order of <2 s (∼10–100 s) for short (long) bursts, follow-up observations must rely on fast communications. For this reason, the GRB Coordinates Network (GCN)/Transient Astronomy Network (TAN) was started in 1992 [35]. The GCN/TAN system currently distributes the locations of GRBs and other transients (in the Notices) and also the reports (called Circulars) of the follow-up observations. The GCN/TAN Notices consist of the real-time (and near real-time, due to telemetry downlink delays) distribution of GRB/transient locations (including images, spectra, and light curves) detected by various spacecraft such as *Swift*, *Fermi*, *MAXI*, *INTEGRAL*, *IPN*, *MOA* (gravitational lensing events), and others. The latency for missions with real-time downlinks is in the 2–10 s range. The GCN/TAN Circulars are natural-language, prose-style messages (as opposed to the "TOKEN: value" style of the Notices, easily readable by a machine) from follow-up observers (ground-based and space-based optical, radio, X-ray, TeV, and other particle observers) reporting on their results (detections or upper limits). The GCN/TAN system is a very important tool in the context of multi-messenger astrophysics for the communication of alerts and the coordination of follow-up observations.
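Because Notices are machine-readable "TOKEN: value" records, a follow-up pipeline can turn one into a lookup table in a few lines. The sketch below is a generic illustration; the function name and the field names in the sample text are ours, not the exact GCN schema:

```python
def parse_gcn_notice(text: str) -> dict:
    """Parse a 'TOKEN: value' style Notice body into a dict.
    Lines without a colon are skipped."""
    fields = {}
    for line in text.splitlines():
        token, sep, value = line.partition(":")
        if sep:
            fields[token.strip()] = value.strip()
    return fields

# illustrative notice body (field names are examples only)
sample = """TITLE:       GCN/AMON NOTICE
NOTICE_TYPE: Track-like neutrino candidate
SRC_RA:      77.43d
SRC_DEC:     +5.72d
FAR:         0.3 per year"""
coords = (parse_gcn_notice(sample)["SRC_RA"], parse_gcn_notice(sample)["SRC_DEC"])
```

A real client would then convert the coordinate strings to floats and check them against the instrument's visibility window before triggering a repointing.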

Looking forward, the Time-domain Astronomy Coordination Hub (TACH) [36] is a new effort to expand upon the GCN/TAN system. TACH will add new capabilities to GCN to enable community coordination of follow-up observations including improved user configuration flexibility, output protocols, reliability, speed, and cross-correlation between missions. TACH will also provide the infrastructure for joint *γ*-ray mission localizations in an open source platform, which will be especially relevant for the upcoming generation of GRB satellites.

### *3.4. ATels*

Discoveries of supernovae and other transient objects are usually made public via Astronomer's Telegrams (ATels [37]). ATels are short web-based notifications commonly used to report and comment upon new astronomical observations of transient sources and to eventually trigger follow-up observations. The prompt nature of the ATels facilitates the distribution of observational results in the context of an unfolding transient astrophysical event. In many cases, the triggering observation has more information, for example, the full light curve, which is not included in the original ATel. Besides the daily emails with the selected areas of interest, readers may request the Instant Email Notices, which are used to report the discovery of one of several different types of objects, as well as new outbursts of previously known transients. The Instant Email Notices are sent immediately upon receipt.

Both GCN and ATels are widely used by all IACTs to coordinate their observations and to announce source discoveries or alert follow-up results.

### **4. Current Situation**

In this section, we present the state of the alerts from the point of view of *γ*-ray telescopes. We first describe the alerts that are being sent from/within the *γ*-ray community. We then introduce the alerts that are being sent to the *γ*-ray community both from individual-messenger channels (either through AMON or direct-communication channels) and from multi-messenger real-time coincidence analyses. Finally, we discuss how IACTs receive and react to these alerts.

### *4.1. Sending Alerts*

The *γ*-ray community communicates alerts very actively in the context of transients and the coordination of multiwavelength follow-ups for interesting events such as AGN flares. For those events, and based on a memorandum of understanding (MoU), *γ*-ray instruments such as FACT, *Fermi*, HAWC, H.E.S.S., MAGIC, and VERITAS exchange information directly. Different trigger thresholds are set depending on the brightness of the sources. These alerts are communicated via email. In the case of events with a wider range of interest, like the detection of a new *γ*-ray source or an exceptional flaring activity, an additional ATel is issued. (ATels are also used to request multiwavelength observations.) As described in Section 3.3, the GCN Circulars are the method of communication for follow-up observations or upper limits in the case of non-detections.

In addition to the emails under the MoU, GCN Circulars, and ATels, FACT informs all interested astronomers about flaring activities of blazars. A trigger threshold of three times the flux of the Crab Nebula (Crab Unit, CU) at TeV energies is used for the brightest blazars, Mrk 421 and Mrk 501, while for other blazars 0.5 CU is used [38]. These alerts are also sent to AMON. As the correlation of sub-threshold events is an important feature of AMON, more elaborate trigger thresholds and algorithms are under investigation [39].
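Such per-source thresholds amount to a simple lookup-with-default; the toy sketch below only encodes the two values quoted above, and the function and dictionary names are ours, not FACT's actual software:

```python
# trigger thresholds in Crab Units (CU), as quoted in the text
BRIGHT_BLAZAR_THRESHOLD_CU = {"Mrk 421": 3.0, "Mrk 501": 3.0}
DEFAULT_THRESHOLD_CU = 0.5  # all other monitored blazars

def should_alert(source: str, flux_cu: float) -> bool:
    """True if the measured flux exceeds the per-source trigger threshold."""
    return flux_cu > BRIGHT_BLAZAR_THRESHOLD_CU.get(source, DEFAULT_THRESHOLD_CU)

assert should_alert("Mrk 421", 3.5)       # bright blazar above 3 CU
assert not should_alert("Mrk 421", 2.0)   # below its 3 CU threshold
assert should_alert("1ES 1959+650", 0.6)  # other blazars trigger at 0.5 CU
```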

The HAWC collaboration sends two types of alerts under the MoU. The first alert stream is an online flare monitor that is based on the detection of variability using Bayesian blocks [40,41]. This stream comprises four channels that are used for different source classes. The alerts are issued with a FAR of 7.5 events per year for each channel, resulting in a total FAR of 30 events per year. One channel is dedicated to Mrk 421 and Mrk 501, and a second channel is used for the rest of the known TeV sources. The other two channels are for sources from the Fermi-LAT 2FHL catalog [42]: one for nearby sources (with known redshift smaller than 0.3) and the other for the rest of the sources in that catalogue. The second alert stream from HAWC is based on significant excesses (>5*σ* post-trials) in the all-sky map on different time scales (0.5, 1, 2 and 3 transits across the field of view of HAWC) [43]. These alerts are sent by email to MoU partners using different, pre-established FARs.
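The Bayesian-blocks idea can be illustrated with a minimal sketch of Scargle's change-point algorithm for point measurements with Gaussian errors. This is a generic illustration of the technique, not HAWC's actual implementation; the function name, the toy data, and the `ncp_prior` value (which in practice must be calibrated to the desired FAR) are our assumptions:

```python
import numpy as np

def bayesian_blocks_measures(t, x, sigma, ncp_prior=4.0):
    """Optimal piecewise-constant segmentation of measurements (x, sigma)
    taken at times t, following Scargle et al. (2013). Returns the start
    times of the blocks; a flare shows up as a block with an elevated mean."""
    t, x, sigma = (np.asarray(a, dtype=float) for a in (t, x, sigma))
    n = len(t)
    # candidate block edges: halfway between consecutive measurements
    edges = np.concatenate([[t[0]], 0.5 * (t[1:] + t[:-1]), [t[-1]]])
    w = 1.0 / sigma**2                      # inverse-variance weights
    best = np.zeros(n)                      # best total fitness up to each point
    last = np.zeros(n, dtype=int)           # start index of the final block
    for r in range(n):
        # suffix sums give the fitness of every block (k..r) at once
        a_k = 0.5 * np.cumsum(w[:r + 1][::-1])[::-1]
        b_k = -np.cumsum((w * x)[:r + 1][::-1])[::-1]
        fit = b_k**2 / (4.0 * a_k) - ncp_prior      # penalised block fitness
        total = fit + np.concatenate([[0.0], best[:r]])
        last[r] = int(np.argmax(total))
        best[r] = total[last[r]]
    cps, i = [], n                          # backtrack the change points
    while i > 0:
        cps.append(last[i - 1])
        i = last[i - 1]
    return edges[np.array(cps[::-1])]

# a toy light curve: steady flux, then a "flare" in the last 20 bins
t = np.arange(40.0)
flux = np.concatenate([np.zeros(20), 5.0 * np.ones(20)])
err = 0.5 * np.ones(40)
block_starts = bayesian_blocks_measures(t, flux, err)
```

With these toy data, the algorithm places a single change point between t = 19 and t = 20. An operational monitor would convert the block means into fluxes (e.g., in Crab Units) and alert when a block exceeds a FAR-calibrated threshold.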

IACTs are, in general, follow-up instruments. They can, however, produce their own alerts: for example, an AGN that flares while it is being observed can trigger an alert. In any case, no alerts are produced automatically, as can be done with a high-duty-cycle instrument like HAWC.

### *4.2. AMON Pass-Through Alerts*

Among the data constantly being sent to AMON, there are also single-messenger events that the triggering observatories want to distribute to the community in real time. In these cases, AMON passes the event information on as an alert to enable follow-up observations in real time. An example of the importance of these pass-through channels is the event IceCube-170922A [44], discussed below in Section 5.2.

### 4.2.1. IceCube Tracks

AMON distributes alerts in real-time via GCN Notices containing the information about neutrino events observed with IceCube that are likely of astrophysical origin [33]. There are two pass-through streams from IceCube. The first one is based on track-like neutrino candidates. For each candidate event, the alert includes the localization in the sky (with the 50% and 90% uncertainty radii), an estimate of the neutrino energy, a likelihood that the event is an astrophysical neutrino (*signalness*), and the corresponding FAR.

The IceCube collaboration has developed three event selections that result in these prompt high-energy track alerts. The Gamma-ray Follow-Up (GFU) track selection uses a machine learning algorithm that, based on the muon energy or the deposited charge, identifies coincidences of two or more tracks with a time window of up to three weeks [7]. These searches target sources from a predefined source catalog. Thus, GFU alerts are optimized for *γ*-ray follow-up as they are likely to be of astrophysical origin. The first GFU alert was sent in 2012. The HESE selection identifies high-energy neutrino events where the interaction vertex is contained within the instrumented volume [13]. The EHE track selection identifies PeV neutrinos. This selection was improved from the diffuse EHE neutrino search [45] by optimizing the requirements on the quality of the fit, the declination, and the total charge observed in the event (as a proxy for the neutrino energy). This modification increased the astrophysical purity of track-like candidates and the sensitivity to PeV neutrinos. These three selections combined provide a sample of likely-astrophysical track-like neutrino events, predominantly arising from muon neutrino charge-current interactions. The median angular resolution of these events is energy-dependent, but is better than 0.25◦ for neutrino energies above 200 TeV.

These alerts are further classified (since 2019) as either Gold or Bronze alerts depending on their average astrophysical purity. Gold (Bronze) alerts are well-reconstructed, high-energy, track-like neutrino candidates with a *signalness* greater than 50% (30%).

The expected all-sky annual alert rate is 28, of which eight alerts are expected from astrophysical neutrino events and 20 from atmospheric backgrounds. These rates are consistent with the observed rates of alert-qualifying events in the 7-year archival IceCube data. These alerts are not expected to be uniformly distributed in declination because there is a zenith dependence of the atmospheric backgrounds, and the distribution itself is energy dependent due to the absorption of high-energy neutrinos in the Earth's core. The declination distribution of these alerts is expected to peak just above the equator in the Northern hemisphere.

### 4.2.2. IceCube Cascades

The high-energy track events were the first real-time alerts from IceCube because of the good angular resolution and fast reconstruction of these events. In recent years, the selection and reconstruction of high-energy cascade events has also been improved (and made possible in near real time) by the use of two neural networks.

A first neural network [46] is used to select cascade events contained inside the detector; that is, the cascade events in the HESE sample are the starting point for this alert stream. The inputs of this classifier are the digitized waveforms (charge vs. time) recorded at each IceCube optical module. The architecture of this deep neural network is described in detail in Refs. [46,47]. This improved event selection is able to better reject tracks; thus, it reduces the amount of atmospheric muons and neutrinos and leads to ∼8 events per year with an astrophysical purity larger than 85%. It is important to mention that this event classifier does not make a selection based on angular resolution. However, follow-up observatories are able to select events based on the angular uncertainty (provided in the alerts) that best fits the capabilities of their instrument.

A second neural network [48], used for the event reconstruction, improves the angular resolution. It takes a few seconds to run (instead of about a day for the previous reconstruction), allowing the reconstruction of cascade events in real time. Similarly to the event classifier, the input of the reconstruction network consists of a three-dimensional grid approximating the detector, as well as two other grids for the lower and upper parts of DeepCore. These grids also have a fourth dimension containing the time and charge information for each optical module. This neural network outputs the direction and the energy of the incoming neutrino, and the uncertainties on both parameters. The improved reconstruction results in 50% of the events having an angular resolution (68% containment) better than 7◦, including systematic uncertainties. The 50% and 90% containment angular error radii, corresponding to a circularized error, are reported in the alerts. However, the neural network is able to compute more sophisticated error contours that can be asymmetrical. These contours are also reported in the alerts as FITS files containing the probability density of the neutrino direction.
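As an illustration of a circularized containment radius like the 50% and 90% radii reported in the alerts, one can rank the angular offsets of posterior direction samples from the best-fit position; this sketch (function name and toy data are ours, not IceCube's actual procedure) does exactly that:

```python
import math

def containment_radius(offsets_deg, fraction):
    """Smallest angular radius (deg) that contains the given fraction
    of the reconstructed-direction offsets from the best-fit position.
    `offsets_deg` stands in for posterior samples of angular distance."""
    ranked = sorted(offsets_deg)
    # index of the smallest radius enclosing `fraction` of the samples
    k = max(0, math.ceil(fraction * len(ranked)) - 1)
    return ranked[k]

# Toy usage: 100 offsets uniformly spread between 0.1 and 10 degrees
offsets = [0.1 * i for i in range(1, 101)]
r50 = containment_radius(offsets, 0.50)
r90 = containment_radius(offsets, 0.90)
```

For this toy sample, r50 and r90 come out near 5◦ and 9◦, as expected for a uniform spread.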

Cascade events from the Southern sky have a larger astrophysical purity because atmospheric neutrinos can be rejected there. Additionally, the number of cascade events of astrophysical origin is lower in the Northern sky because the Earth is not transparent to high-energy neutrinos. Thus, the probability of astrophysical origin reported in the alerts is computed in three bins in zenith distance. Alerts are produced for events with an astrophysical likelihood higher than 0.9, which corresponds to a FAR of 0.311 events per year.

### 4.2.3. HAWC GRB-like Triggers

Another important pass-through stream is the one dedicated to alerts from short-timescale excesses in HAWC data. HAWC monitors the multi-TeV sky with an instantaneous field of view of 2 sr and a duty cycle greater than 90%. These alerts contribute to the ongoing searches for VHE emission from GRBs, especially for multiwavelength and multi-messenger studies.

The search for excesses uses a fixed-width sliding time window over the ∼25 kHz of air showers reconstructed online at the HAWC site. The spatial search is done up to 50◦ in zenith distance (i.e., declination approx. between −31◦ and 69◦), in ∼2◦ × 2◦ square bins (in right ascension and declination), and for time windows of 0.2, 1, 10 and 100 s. In each time window, all points in the FoV are tested against the null hypothesis that the local air-shower count comes from the rate of cosmic rays that pass the background suppression (∼500 Hz). Significant upward fluctuations from the expected number of background counts are interpreted as candidate VHE photons from GRB emission.
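The per-window test described above can be illustrated with a Poisson upward-fluctuation probability (a simplification of the actual HAWC analysis; the numbers in the usage example are toy values, not HAWC's per-bin rates):

```python
import math

def poisson_upper_pvalue(n_obs, mu):
    """P(N >= n_obs) for a Poisson background expectation mu: the
    chance that a background-only bin fluctuates up to n_obs counts."""
    # P(N >= n) = 1 - sum_{k < n} e^-mu mu^k / k!
    term, cdf = math.exp(-mu), 0.0
    for k in range(n_obs):
        cdf += term
        term *= mu / (k + 1)
    return max(0.0, 1.0 - cdf)

# Toy numbers: if 2 background counts are expected in a spatial bin
# for one time window, observing 10 counts is a very unlikely fluctuation.
p = poisson_upper_pvalue(10, 2.0)
```

Significant upward fluctuations correspond to very small values of this p-value, which is then compared against the post-trials threshold.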

The uncertainty in the position of the excess reported in the alerts was derived from Monte Carlo simulations. The 68% containment radius is between 0.4◦ and 0.8◦ depending on the number of background events.

The threshold for sending these alerts was set at one event per year. This high false-positive threshold is due to the number of trials associated with searching the entire field of view for all time windows. The trials calculation takes into account the fact that the search algorithm compares probabilities from multiple bins to select the result that is least consistent with the null hypothesis. The search algorithm and, more importantly, the calculation of this post-search false-positive rate are explained in more detail in Ref. [49].
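The effect of trials can be illustrated with the standard look-elsewhere formula; this is a simplification of the actual post-search calculation in Ref. [49], and the trial count below is invented:

```python
def post_trials_probability(p, n_trials):
    """With n_trials independent searches, the chance that at least one
    background-only trial reaches a per-trial p-value p."""
    return 1.0 - (1.0 - p) ** n_trials

# Toy numbers: ~10^7 (bin, window) trials per day means a per-trial
# p-value of 1e-9 still produces a ~1% chance of a false alert each day.
daily = post_trials_probability(1e-9, 10**7)
```

Tuning the per-trial threshold so the post-trials rate is about one false alert per year is the logic behind the threshold quoted above.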

### *4.3. Gravitational Wave Alerts*

The LIGO Scientific Collaboration and the Virgo Collaboration (LVC) jointly analyze their data in real time to detect and localize transients from compact binary mergers and other possible sources of gravitational waves. When a signal candidate is found, an alert is distributed through the GCN/TAN System to search for possible counterparts such as electromagnetic waves or neutrinos. It is important to note that the LVC does not use any broker; the GW alerts are sent directly to GCN.

In some cases, an *Early Warning Notice* may be issued up to tens of seconds *before* the merger. *Early Warning* alerts are rare, and only possible for exceptionally loud and nearby coalescence events. A *Preliminary Notice* is then issued automatically a few minutes *after* the GW candidate is detected. In both cases, the signal must have passed automated data quality checks, and there is no accompanying GCN Circular. Thus, these notices may be retracted later. An *Initial Notice* is issued only after human vetting, and is accompanied by a GCN Circular. If further analysis of the data results in improved estimates of the sky localization, significance, and/or event classification, an *Update Notice* is issued. There may be multiple updates for the same GW event, and these updates may be issued within hours, days, or even weeks after the event. As described in Section 4.5, further analysis may also conclude that the event is unlikely to be of astrophysical origin. In these cases, the candidate is withdrawn and a *Retraction Notice* is issued.
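The notice sequence described above can be summarized as a small state machine; this sketch (structure and names are ours, distilled from the text) checks that a sequence of notices for one candidate follows the allowed ordering:

```python
# Allowed successor notice types for a single GW candidate.
NEXT_ALLOWED = {
    "EarlyWarning": {"Preliminary", "Retraction"},
    "Preliminary":  {"Initial", "Retraction"},
    "Initial":      {"Update", "Retraction"},
    "Update":       {"Update", "Retraction"},  # multiple updates possible
    "Retraction":   set(),                     # terminal state
}

def valid_sequence(notices):
    """Check that a list of notice types follows the allowed ordering."""
    return all(b in NEXT_ALLOWED[a] for a, b in zip(notices, notices[1:]))
```

For example, Preliminary → Initial → Update → Update is a valid history, while an Initial notice followed by a Preliminary one is not.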

GW alerts contain a unique identifier, a FAR, and a sky localization. The FAR quantifies the significance of the event, and the sky localization consists of the posterior probability distribution of the position of the source in the sky. Additionally, alerts for coalescence events contain information about the luminosity distance, and inferred source classification and properties.

The sources of GWs are localized in the sky using the observed time delays, amplitude and phase consistency of the GW signals at the different sites. Two interferometers can constrain the sky location to a broken annulus, and the presence of additional detectors in the network improves the localization further. For example, the average sky localization area (90% credible region) was 655 deg<sup>2</sup> for the eleven confident signals detected during O1 and O2, while the expected median for all types of binary systems during O3 with the Advanced LIGO and Virgo network is ∼300 deg<sup>2</sup> [50].
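The timing part of the localization follows from simple geometry: an arrival-time difference Δt between two sites constrains the angle θ from the inter-detector baseline via cos θ = cΔt/d, which traces an annulus on the sky. A minimal sketch (the baseline value is an approximation for the two LIGO sites):

```python
import math

C = 299792.458        # speed of light, km/s
BASELINE_KM = 3002.0  # approx. LIGO Hanford--Livingston separation

def annulus_angle_deg(dt_seconds, baseline_km=BASELINE_KM):
    """Opening angle (deg, from the detector baseline) of the sky
    annulus consistent with an arrival-time difference dt."""
    cos_theta = C * dt_seconds / baseline_km
    if abs(cos_theta) > 1.0:
        raise ValueError("time delay exceeds the light travel time")
    return math.degrees(math.acos(cos_theta))

# A zero delay puts the source on the plane perpendicular to the baseline
theta = annulus_angle_deg(0.0)
```

Amplitude and phase consistency then select parts of this annulus, and each additional detector contributes further baselines.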

### *4.4. Multi-Messenger Alerts*

AMON receives and stores astrophysical data, searches for multi-messenger coincidences, and distributes electronic alerts for follow-up observations. As mentioned before, AMON issues two distinct types of alerts: the pass-through alerts described in the previous section, in which AMON serves simply as a conduit for propagating the single-messenger event information, and a second type resulting from the coincidence analyses of two or more data streams.

A coincidence analysis combines two or more data streams using a likelihood calculation to quantify the degree of correlation between different events. An FAR is determined from scrambled datasets to build a representative distribution of random coincidences, and a test-statistic value is used to rank the coincidences. The FAR thresholds are verified using archival data [51]. Finally, the collaborations contributing the streams used in the coincidence analysis specify the thresholds for distributing the alerts.
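A scrambled-dataset FAR estimate of this kind can be sketched as follows; the event times, the stand-in test statistic, and all names are ours, and real analyses scramble within the detector livetime and use the full likelihood TS:

```python
import random

def far_from_scrambles(obs_ts, times_a, times_b, ts_of_pair,
                       n_scrambles=500, livetime_years=1.0, seed=1):
    """Estimate the FAR of a coincidence with observed test statistic
    obs_ts: scramble stream A's arrival times and count how often a
    background-only sky produces an equal or larger TS."""
    rng = random.Random(seed)
    t_min, t_max = min(times_a), max(times_a)
    n_above = 0
    for _ in range(n_scrambles):
        fake_a = [rng.uniform(t_min, t_max) for _ in times_a]
        best = max(ts_of_pair(abs(ta - tb)) for ta in fake_a for tb in times_b)
        if best >= obs_ts:
            n_above += 1
    # each scramble emulates livetime_years of background data
    return n_above / (n_scrambles * livetime_years)

# Toy stand-in TS: larger for smaller time separation
toy_ts = lambda dt: 1.0 / (1.0 + dt)
```

An observed TS that no scramble ever reaches yields an estimated FAR of zero, i.e., only an upper limit set by the number of scrambles.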

### 4.4.1. ANTARES-Fermi

This alert stream is a real-time search for coincidences between *γ* rays observed with *Fermi*-LAT and neutrinos detected with ANTARES. The ANTARES collaboration sends their neutrino data to AMON in real time over a private stream. Photon data from the LAT are downloaded as they become publicly available on the LAT FTP server<sup>2</sup>. The LAT data are filtered to select photons with energies between 100 MeV and 300 GeV, a LAT zenith angle of less than 90◦, and arrival times within the boundaries of the good time intervals provided by the LAT satellite files. A *γ*−*ν* coincidence is defined as any photons arriving within 5◦ and 1000 s of an ANTARES neutrino. A pseudo-log-likelihood test statistic (TS) is calculated for each coincidence, as described in Ref. [52]. This TS takes into account the point spread function (PSF) of each photon and each neutrino, and the number of neutrinos and gamma rays in the coincidence. There is also a temporal weighting function for each neutrino and *γ* ray in the coincidence that is equal to one for particles within 100 s of the (average) arrival time. Between 100 s and 1000 s, this function scales inversely proportional to the time difference to allow for possible longer-timescale associations (as might result from low-luminosity GRBs, for example) while maintaining a preference for shorter-timescale associations. The TS also accounts for the *γ*-ray backgrounds for each photon at the coincidence location, and a factor (established by the ANTARES Collaboration for each individual neutrino candidate) that represents its likelihood of being of astrophysical origin. Coincidences more significant than a FAR threshold of 4 per year are sent out via GCN to the AMON follow-up partners.
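The temporal weighting described above (flat within 100 s, falling as the inverse of the time difference out to 1000 s) can be written compactly; the exact normalization used in Ref. [52] may differ, so treat this as a sketch:

```python
def temporal_weight(dt_seconds):
    """Weight for a gamma-ray/neutrino pair separated by dt in time:
    flat within 100 s of the (average) arrival time, then falling as
    1/dt out to the 1000 s coincidence window, and zero beyond it."""
    dt = abs(dt_seconds)
    if dt <= 100.0:
        return 1.0
    if dt <= 1000.0:
        return 100.0 / dt  # continuous at dt = 100 s
    return 0.0             # outside the coincidence window
```

A pair separated by 200 s, for instance, receives half the weight of a prompt pair, preserving the preference for shorter-timescale associations.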

The latency of these alerts is, for the time being, of the order of one day because the ANTARES collaboration manually reviews the neutrino(s) that participate in the coincidence.

### 4.4.2. IceCube-HAWC

These AMON alerts are also based on the sub-threshold detections of *γ* rays and neutrinos. More specifically, they are produced using data from the daily monitoring stream of the HAWC Observatory [49] and the track stream from IceCube [33]. The HAWC data consist of the daily excesses with a significance higher than 2.75*σ*. These so-called "hotspots" are locations in the sky with a cluster of events above the estimated cosmic-ray background level, and are identified online during a full transit of that sky location above the detector. Most of these events are expected to be background fluctuations. The IceCube data are generated by the online event selection (described in Section 4.2.1) and reconstruction algorithms that are tuned to select track-like, through-going neutrino events. They are also dominated by background events, which in this case are atmospheric neutrino events in the Northern hemisphere, and high-energy atmospheric muons in the Southern hemisphere.

The coincidences are defined by temporal and spatial criteria. Namely, the selected neutrinos must be detected during the transit time of the HAWC hotspot, and their reconstructed arrival directions must be within a radius of 3.5◦ from the HAWC hotspot localization. A ranking statistic, based on Fisher's method, is then calculated by combining the spatial uncertainties of the *γ*-ray and neutrino events, the probability of the HAWC event being compatible with a background fluctuation, the probability of seeing more than one background neutrino in the HAWC transit period, and the probability of measuring such an energy (or higher) for the IceCube event assuming it is a background event. The uncertainty on the best position of the coincidences is O(0.2◦), dominated by the uncertainty on the location of the *γ*-ray excesses. The alerts include the 50% and 90% containment radii around the best position of the coincidence. The FARs were determined from scrambled datasets, and the reported FARs are derived for each coincidence based on the ranking statistic described in detail in Ref. [53]. The rate of alerts being sent to GCN is four per year. The latency of these alerts can be about 6 h because the location of the hotspot has to transit above the HAWC detector before the analysis can start. The coincidence analysis in the AMON servers takes less than a minute to run after the sub-threshold triggers are received.
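Fisher's method itself, on which the ranking statistic is based, combines k independent p-values into X = −2 Σ ln p_i, which follows a χ² distribution with 2k degrees of freedom under the null hypothesis. A self-contained sketch (the full AMON statistic in Ref. [53] combines additional, correlated terms):

```python
import math

def fisher_statistic(p_values):
    """Fisher's method: X = -2 * sum(ln p_i) over k independent p-values."""
    return -2.0 * sum(math.log(p) for p in p_values)

def fisher_combined_pvalue(p_values):
    """Combined p-value: survival function of chi2 with 2k dof, which
    has the closed form P(X >= x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    half = fisher_statistic(p_values) / 2.0
    term, total = 1.0, 0.0
    for i in range(len(p_values)):
        total += term
        term *= half / (i + 1)
    return math.exp(-half) * total
```

For a single p-value the combination reduces to the p-value itself, while two moderately small p-values combine into a noticeably smaller one.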

### *4.5. Offline Updates, Revisions of Alerts*

The real-time distribution of transient events is possible through rapid computations on-site (in space, at the South Pole, etc.) without human intervention. Thus, it may occur that an offline, more precise and more time-consuming reconstruction differs from the original information in the notice. In these cases, either a correction or a retraction is issued. For example, if a high-energy neutrino is no longer identified as astrophysical, then a retraction notice is issued. Similarly, the automated on-board *Fermi* flight software generates GRB position notifications in real time, but if the ground-based, human-involved processing results in a revised position, the updated information is transmitted within a few hours after the burst. This feature, common to all the alert streams, is crucial to conduct efficient follow-up observations.

### *4.6. Receiving Alerts, Follow-Ups*

The ground-based *γ*-ray telescopes receive the various alerts by GCN, VOEvent and/or email. Details on the follow-up programs of the current generation of IACTs are described in [7,34,44,54–58]. In the initial stages, follow-up observations were manually scheduled. Currently, if an alert arrives during a dark night, IACT telescopes use a fast, automatic re-pointing that allows for an immediate observation. In the case of full Moon or bad weather, the observations are sometimes postponed for up to a few days after the alert and then scheduled manually.

A recent report discusses the single-neutrino alerts that have been followed up by IACTs since October 2017 [8]. No detection of VHE *γ*-ray emission was announced in connection with a single-neutrino alert, and no change in the source emission was observed when following up a flare detected within the GFU program.

H.E.S.S. is the only operating IACT located in the Southern hemisphere, and the ANTARES neutrino detector also obtains its best sensitivity in the Southern sky. Therefore, the two experiments complement each other very well. Their alert systems are linked directly using the VOEvent protocol [59,60], which allows for a rapid exchange of alerts between the two experiments. So far, H.E.S.S. has reported follow-up observations of two ANTARES alerts, one of them (ANT170130) with a record response time of 32 s after the neutrino alert [61]. This experience highlights the synergy between KM3NeT and CTA South, and will facilitate the implementation of an early system for mutual real-time follow-up observations, and exchanges of data such as flares, spectral/angular shapes, and so forth.

In the early phase of GW searches (corresponding to the O1 and O2 runs), the alerts were private; that is, GCN Notices were sent only to those observers that had an MoU with the LVC. Notices were made public after the LVC published the corresponding GW events. Ad hoc follow-up observations during the O1 and O2 interval include the BH–BH merger GW151226 [62] by MAGIC [63] and GW170104 [64] by VERITAS<sup>3</sup>. The first complete scans of the uncertainty regions were possible during O2 when GW events started being detected by all three interferometers. For example, the binary black hole merger GW170814 [65] was followed up by H.E.S.S.<sup>4</sup>

Although during O3 the Notices were immediately publicly distributed without the need of an MoU, the issue of the large localization maps remained. Thus, dedicated follow-up procedures needed to be devised. Examples of these procedures include the MAGIC Automatic Alert System [66] and the H.E.S.S. automatic GW follow-up chain [23]. The latter was optimized to initiate GW follow-up observations less than 1 min after receiving the alert. As a consequence, H.E.S.S. observed six GW events out of the 67 non-retracted GW events detected during the first three observation runs of LIGO and Virgo.

### **5. Highlight Results**

Multi-messenger observations allow us to exploit the synergies that are inherent in the signals emitted from cosmic sources via the electromagnetic, gravitational wave, high-energy neutrino, and cosmic-ray channels. The candidate sources expected to emit in two or more of these messengers include active galactic nuclei, gamma-ray bursts, supernovae, white dwarfs, and neutron stars. The EM counterparts have already made possible the first identification of the source of a binary neutron star (BNS) merger and have provided evidence of an association between a high-energy neutrino and a gamma-ray flare from a blazar. More importantly, we think that multi-messenger observations have the potential to illuminate questions of fundamental physics and to provide unique measurements and independent insights that will completely revolutionize our understanding of high-energy astrophysics. The results that we highlight in this section emphasize the need for a systematic approach to move forward from the "pioneering" phase into the "expansion" stage, where Multi-Messenger Astrophysics becomes a field of its own.

### *5.1. Gamma-Ray Bursts*

GRBs are the most luminous explosions in our Universe. Their energy release in *X* rays and *γ* rays can be ∼10<sup>51</sup>−10<sup>54</sup> erg in a matter of seconds (assuming that GRBs radiate isotropically) [67]. Known since the 1970s and observed at a rate of ∼1/day by X-ray satellites, GRBs were the reason behind the implementation of automatic alert follow-up systems by the IACTs. As described in Section 3.1, the process evolved through several stages. For example, MAGIC obtained lightweight, fast-moving telescopes from a dedicated hardware design. Trigger and software solutions also lowered the energy threshold, and allowed for the detection of distant sources up to redshifts of ∼1.0 [68–70]. These factors and the observation strategies honed for more than a decade resulted in the discovery of GRB 190114C by MAGIC [19,71] and the detection of the GRB 180720B afterglow by H.E.S.S. [72]. The former was obtained with observations starting as fast as 52 s after the trigger. The latter came from observations ∼11 h after the trigger. These two significant results demonstrate how the differentiation of follow-up strategies allows us to study the temporal evolution of transient events. Both observations confirmed that the *γ*-ray emission from GRBs can reach up to TeV energies and revealed that this component is as powerful as the already known low-energy synchrotron component.

Since those breakthrough results, two more detections were announced: GRB 190829A by H.E.S.S. [73], and GRB 201216C by MAGIC<sup>5</sup>. The H.E.S.S. observations of the former were performed on three consecutive nights, between 4 h and 56 h after the trigger. The similar characteristics in the X-ray and *γ*-ray bands make it difficult to describe the emission with simple one-zone models. The latter, at *z* = 1.1, is the most distant source detected by an IACT so far. All the discussed detections concern long GRBs, which are associated with supernova explosions. So far, only hints of VHE *γ*-ray emission from a short GRB have been reported, for GRB 160821B [20]. Short GRBs are particularly interesting due to their connection to GW-emitting BNS mergers. For a more detailed discussion of GRBs and the role of IACT follow-up observations, we refer the reader to another review in this issue [74].

### *5.2. TXS 0506+056—Neutrino Blazar?*

On 22 September 2017, a high-energy track event was detected by the IceCube real-time alert system. An automated alert<sup>6</sup> was distributed to the community within less than a minute, including an initial estimate of the arrival direction and energy (∼120 TeV) of the event. Subsequent multiwavelength EM observations revealed a high-probability coincidence with the known blazar TXS 0506+056, which was flaring in *γ* rays at that time [44]. The chance coincidence for the correlation was excluded at a confidence level of 99.73%, making this the most significant association between a HE neutrino and an astrophysical *γ*-ray source to date.

The H.E.S.S. telescopes were the first IACT on target, with a follow-up delay of only 4 h with respect to the neutrino event arrival time. No significant *γ*-ray emission was detected [75]. VHE *γ*-ray emission from this source was discovered with the MAGIC telescopes thanks to their low energy threshold and persistent observation strategy [76,77], and later confirmed by the VERITAS observatory [78]. IACT observations during this flaring period show TXS 0506+056 as a highly variable VHE *γ*-ray emitter [54].

Various theoretical explanations followed these exciting results [9,79–81]. Among others, the MAGIC Collaboration explored the multiwavelength broad-band emission from this source, and the connection of the observed *γ* rays with the observed neutrino and cosmic rays in the jets of the AGN [77]. In fact, this work shows numerical evidence that TXS 0506+056 is able to accelerate CRs up to energies of ∼10<sup>18</sup> eV. The assumptions on the geometrical structure of the TXS 0506+056 jets, which were made to explain the neutrino emission, were confirmed with 15 GHz very long baseline interferometry (VLBI) imaging [82].

A dedicated study of past IceCube observations of this object revealed another highly significant period of neutrino activity in 2014–2015, establishing it as the first potential source of cosmic neutrinos [83]. It is worth mentioning that this flare did not trigger a GFU alert only because the source redshift was unknown at that time [84] and the GFU program relies on sources with known redshift. TXS 0506+056 was added to the source list during the GFU upgrade in 2019.

Unfortunately, there is very little multiwavelength data, in particular no VHE *γ*-ray data, for the 2014–2015 neutrino flaring period. IACTs started to regularly monitor this source after the neutrino alert of 2017. (VERITAS observed TXS 0506+056 in 2016–2017; that is, prior to IC170922A, with no detection.) Both MAGIC and VERITAS, together with multiwavelength partners, perform unbiased monitoring observations to study the long-term behavior of TXS 0506+056 [85,86]. Open questions include the object's nature (FSRQ vs. BL Lac [87]) and its duty cycle, which is an important ingredient to derive the coincidence probability between the different messengers.

### *5.3. GW170817*

The detection of the first BNS system GW170817 [15] marked the start of a new era of multi-messenger astrophysics [16]. The multi-year follow-up campaigns of this single event enabled observations across the entire EM spectrum that confirmed that the merger of two neutron stars is able to power high-energy transients, such as short GRBs [88] and kilonovae. The H.E.S.S. observations of this event started on 17 August 2017 at 17:59 UTC, only ∼5 min after the localization map was made public by the LVC, and 5.3 h after the original GW170817 alert. They found no significant *γ*-ray emission. Their upper limits on the VHE *γ*-ray flux constrained for the first time the non-thermal, high-energy emission following a BNS merger [89]. H.E.S.S. also performed a long-term campaign (between 124 and 272 days after the GW event) on the remnant of the BNS merger, covering the maximum in the X-ray emission. They derived limits on the magnetic field of the remnant in the context of different source scenarios [90].

Results from this single multi-messenger event clearly demonstrate the potential of multi-messenger astrophysics. This event also highlights the need for population studies to disentangle the physics at the source and the interactions in the environment.

### **6. Discussion**

The multi-messenger detections in recent years have produced an enormous leap forward in our understanding of highly energetic, transient events. On the other hand, the observation of astrophysical sources via non-electromagnetic messengers has faced equally enormous challenges. New solutions are needed to meet the increased demand for low-latency analyses, and reach the promised greater rewards in the years ahead.

### *6.1. Real-Time Tools, Alerts, and Follow-Up Campaigns*

Multi-messenger astrophysics relies heavily on sharing alerts in real time. In turn, these alerts have to be prioritized by each experiment considering their features, capabilities, and scientific objectives. It would be ideal to develop a set of common tools to identify potential sources of GWs and high-energy neutrinos, and a community-wide coordination to maximize the number of multi-messenger observations. Only automated, systematic campaigns will guarantee unbiased coincidences and an optimized use of wide-field, all-sky survey instruments and target-of-opportunity observations with narrow field of view telescopes. Given the number of alerts expected from the next generation of neutrino, GW, and electromagnetic observatories, this automation of the observing campaigns has become a requirement.

Although many resources for EM counterpart identification exist, such as source catalogues and archival repositories (e.g., ASDC<sup>7</sup>, OpenUniverse<sup>8</sup>, VO<sup>9</sup>), they are still not fully connected to the real-time alert brokers. The first steps toward this connection are starting to be realized. One example is the novel tool called AstroCOLIBRI [91] that collects in one place the real-time alerts and the relevant information about the persistent and transient sources in the vicinity of the event. There is still some work to be done to fully integrate the natural-language resources (e.g., GCN Notices and ATels) into the system.

It is also clear that, thanks to the increasing number of more sensitive telescopes that are planned or under construction (e.g., Vera Rubin Observatory, SKA), astronomy is entering the era of Big Data. Processing large amounts of information very fast will be crucial to enable immediate follow-up of potential counterparts.

### *6.2. The Big Data Challenge and Real-Time Analyses*

On the receiver end, efficient observation routines need to be implemented. This is especially important for pointing instruments that face hard choices such as: where to point? For how long? Is the origin of dark matter a more important topic than GW follow-up? It is important to solve these issues before the next generation of experiments brings us an order of magnitude more real-time alerts.

In fact, the difficulties do not stop with an established multi-messenger coincidence. To estimate the significance of such coincidences, we will need data on the long-term behavior of the sources or reliable statistics of the transient occurrences. Currently, only a small number of VHE *γ*-ray emitters are monitored regularly, and most of them are nearby blazars (see, for example, [92]). Moreover, the monitoring is often biased towards flaring events, which prevents a proper estimation of the emission duty cycle.

### **7. Outlook**

### *7.1. Multi-Messenger Alerts*

There are several AMON coincidence alert streams under development. Similarly to the channel described in Section 4.4.2, AMON is in the final stages of preparations to start issuing alerts using sub-threshold detections of *γ* rays from HAWC and neutrinos from ANTARES [93]. The AMON Team is working to generate and distribute low-latency alerts from the real-time coincidence analysis between *Fermi*-LAT data and the sub-threshold stream from IceCube. The archival analysis [94] showed hints of a possible correlation between the neutrino positions and persistently bright portions of the *γ*-ray sky. These AMON alerts will prompt rapid-response follow-up observations that could test a possible signature of *γ*-ray correlated structure in the high-energy neutrino sky. The AMON Team is also developing a correlation analysis using a new capability of the *Swift* Observatory that provides event-level data from the BAT on demand [95] to respond in real time to transient triggers from sub-threshold GW events [96]. The goal of this coincidence stream is to identify short GRB-like emission with an arcminute localization for the sub-threshold GW triggers during O4. Finally, the AMON Team is developing an outlier detection method [97] to make a model-independent combination of the sub-threshold data from multiple neutrino and gamma-ray experiments [98].

While cosmic rays have so far played a small role in real-time multi-messenger alerts, the Pierre Auger Collaboration is implementing a new data stream that will send ultra-high-energy photon candidates to AMON. This stream will contain events with energies above 1 EeV that satisfy a certain sub-threshold photon selection based on a multivariate analysis [99].

### *7.2. VHE γ Rays*

The IACT community will also soon possess a new instrument to investigate the Universe at very high energies, with a sensitivity an order of magnitude higher than the current generation of telescopes. The future Cherenkov Telescope Array (CTA) [100] is under construction in two locations. CTA-North will be hosted at the Canary Island of La Palma (the same site as FACT and MAGIC) and CTA-South near Paranal in Chile. Both will consist of several telescopes whose capabilities are adapted to efficiently cover different energy ranges: the 23-meter Large Size Telescope (LST), the 12-meter Medium Size Telescope (MST) and the 4-meter Small Size Telescope (SST). CTA has several performance characteristics which are important in the context of the follow-up of transient astrophysical messengers, like neutrinos and GWs. The first one is the ability to rapidly re-position to any location in the sky. For example, the LSTs can re-position to anywhere in the sky above 30◦ in just 20 s. The second one is a large FoV: for the LSTs it is ∼4◦, for the MSTs ∼8◦ and for the SSTs ∼10◦. What is more, the CTA real-time analysis will search for transients in the whole CTA FoV and send alerts [101,102]. Therefore, CTA will not only passively receive and follow up alerts but be an active player among the multi-messenger observatories. CTA will explore the GeV−TeV sky with a deeper sensitivity than previous IACT instruments. The larger field of view, the flexibility to map very large and arbitrary sky patches, and the rapid response time will make CTA an ideal instrument to detect possible VHE *γ*-ray counterparts to GW and neutrino events.

The Large High Altitude Air Shower Observatory (LHAASO) has recently started operations and has already reported about a dozen UHE cosmic accelerators in our Galaxy [103]. Although LHAASO has also detected *γ* rays with energies above 1 PeV [104], its real-time alert system is still under construction. On a longer timescale, the Southern Wide-field Gamma-ray Observatory (SWGO) will complement the existing and planned instruments to conduct these crucial observations in the Southern Hemisphere. SWGO will participate in archival analyses and in real-time searches for transients at different timescales, covering a region of the sky not accessible to either HAWC or LHAASO [105].

### *7.3. HE Neutrinos*

After ten years of data taking, the IceCube detector has started to see hints of neutrino emission from several sources, one of them being the famous blazar TXS 0506+056 [30]. However, those hints are at the level of ∼3 standard deviations and do not yet reach the discovery level of ≥5 standard deviations. In the next five years or so, neutrino astronomy will get a significant boost through the installation of several km<sup>3</sup>-scale facilities, such as Baikal-GVD [26] and KM3NeT [106]. By ∼2025, each one of these should have an effective detection volume comparable to the current IceCube observatory. A new initiative called the Planetary Neutrino Monitoring System (PLE*ν*M [107]) has been proposed to move beyond the signal hints into source detections and neutrino astronomy. The goal is to combine the exposures of current and future neutrino telescopes distributed around the world. PLE*ν*M will have full-sky coverage and will reach up to four times the exposure available today. On a longer timescale, an extension of the existing IceCube experiment, called IceCube-Gen2 [108], is planned to be deployed in the 2030s. The hybrid design, including radio and light-sensor arrays in ice and a surface array incorporated as a veto, will boost the discovery potential at ultra-high energies (>100 PeV) not yet accessible to IceCube. On a similar timescale, the Pacific Ocean Neutrino Experiment (P-ONE [109]) will deploy a multi-cubic-kilometre neutrino telescope in the Pacific Ocean.

### *7.4. Gravitational Waves*

On the GW front, the LIGO/Virgo/KAGRA collaboration is making good progress in preparation for O4. Although there is no definitive start date for O4 at the time of this writing, it is clear that O4 will not begin before August 2022. Assuming no unexpected obstacles, the four-detector network is expected to achieve design sensitivity, with a range of almost 200 Mpc, in O4. Further down the line, the LIGO-India detector may come online and join the international GW network at some point during O5.

As seen in past observations, source localization using only timing for a two-site network yields an annulus on the sky. Adding the signal amplitude and phase narrows this to parts of the annulus; even then, sources are localized only to regions of hundreds to thousands of square degrees [110,111]. For three detectors, the time delays restrict the source to two sky regions that are mirror images with respect to the plane passing through the three sites. Requiring consistent amplitudes and phases in all the detectors typically eliminates one of these regions [112], yielding regions with areas of several tens to hundreds of square degrees. A large improvement in localization capability is expected for O4, where the expanded network of detectors is accompanied by higher sensitivities. With four (or more) interferometers, timing information alone is sufficient to localize to a single sky region, while the additional baselines help localize some signals to within regions smaller than ten square degrees.
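The timing annulus for a two-site network can be illustrated with a minimal sketch. For a plane gravitational-wave front, the arrival-time delay between two detectors satisfies Δt = (d/c) cos θ, where θ is the angle between the source direction and the baseline; the timing uncertainty sets the annulus width. The baseline (~3000 km, roughly the Hanford–Livingston separation) and the ~0.1 ms timing accuracy used below are illustrative assumptions, not values taken from the text:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def timing_annulus(baseline_m, dt_s, sigma_t_s):
    """Sky annulus implied by the arrival-time delay between two detectors.

    dt = (d/c) * cos(theta), with theta the angle between the source
    direction and the detector baseline.  Returns the ring's opening
    angle and its 1-sigma half-width (both in radians).
    """
    light_time = baseline_m / C  # light travel time along the baseline
    cos_theta = np.clip(dt_s / light_time, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    # Propagate the timing uncertainty: d(theta) = c*sigma_t / (d*sin(theta))
    width = sigma_t_s / (light_time * max(np.sin(theta), 1e-6))
    return theta, width

# ~3000 km baseline, 5 ms measured delay, ~0.1 ms timing accuracy
theta, width = timing_annulus(3.0e6, 5.0e-3, 1.0e-4)
```

With these numbers the ring sits near 60◦ from the baseline with a half-width of order half a degree, which, integrated around the full ring, reproduces the hundreds-of-square-degrees localizations quoted above.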

The LIGO/Virgo/KAGRA collaboration expects ∼10 detections of BNS mergers per year in O4. (The expected number of O4 detections assumes an intermediate sensitivity curve for KAGRA and the target sensitivity curves for Advanced LIGO and Advanced Virgo.) The median 90% credible region for the localization area of a BNS is 33 deg<sup>2</sup>, while 38–44% (12–16%) of the events are expected to have a 90% credible region smaller than 20 deg<sup>2</sup> (5 deg<sup>2</sup>). In addition, ∼80 detections of binary black hole (BBH) mergers are expected. The median 90% credible region for the localization area of a BBH is 41 deg<sup>2</sup>. Similarly, 35–39% (11–14%) of the BBH events are expected to have a 90% credible region smaller than 20 deg<sup>2</sup> (5 deg<sup>2</sup>) [50].

It is natural to expect an increasing number of multi-messenger detections of binary mergers even in the near future. This will make it possible to determine the equation of state of neutron stars, to probe the properties of the different components of the ejected mass, to answer whether BNS mergers are the primary channel for the formation of heavy elements, and to understand the structure of relativistic jets and the physics behind their formation. Joint GW–neutrino observations with IceCube and KM3NeT may also reveal coincident emission of high-energy neutrinos from BNS mergers or other energetic astrophysical phenomena.

### *7.5. Supporting Infrastructure*

Besides the current efforts to combine *γ*-ray data with the detections of gravitational waves and high-energy neutrinos, there are several initiatives to support and enhance multimessenger astrophysics. One example of these projects is the Scalable Cyberinfrastructure to support Multi-Messenger Astrophysics (SCiMMA) [113], which is funded by the U.S. National Science Foundation. The main goal of this community-wide project is to provide the necessary scalable cyberinfrastructure to support multi-messenger astrophysics by rapidly handling, combining, and analyzing the very large-scale distributed data from all the types of astrophysical measurements. Their proposed cyberinfrastructure will allow the community to take full advantage of the current facilities and also the next-generation projects for multi-messenger astrophysics.

Similarly, on the other side of the Atlantic, the European Open Science Cloud (EOSC) is an open multi-disciplinary environment for hosting and processing research data to support EU science. The EOSC Future project plans to develop an environment with interoperable research data sets and other research outputs, including publications and code, professional data services, access to compute and storage resources, and services such as data discovery and archiving. EOSC Future will support European researchers in managing the entire lifecycle of data, from sharing, managing, and exploiting their own data to discovering, re-using, and recombining the data sets of others [114].

**Funding:** M.M. acknowledges support from the U.S. National Science Foundation (NSF) under grants PHYS-1506232, PHY-1708146, PHY-1806854, and PHY-2110809, and from the Institute for Gravitation and the Cosmos (IGC) of the Pennsylvania State University. D.D. acknowledges the support of the German Bundesministerium für Bildung und Forschung (BMBF) and the Institut für Theoretische Physik und Astrophysik (ITPA). K.S. acknowledges the support of the European Union's Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF, IGC, BMBF, ITPA, or the Horizon 2020 Programme.

**Data Availability Statement:** Data sharing is not applicable to this article as no new data were created or analyzed in this study.

**Acknowledgments:** We would like to thank Fabian Schüssler and Marcos Santander for their careful reading of an early draft, and for their thoughtful comments that provided insight and expertise and helped us shape the paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this manuscript:


### **Notes**


### **References**


### *Review* **Gamma Rays as Probes of Cosmic-Ray Propagation and Interactions in Galaxies**

**Luigi Tibaldo <sup>1,\*</sup>, Daniele Gaggero <sup>2</sup> and Pierrick Martin <sup>1</sup>**


**\*** Correspondence: luigi.tibaldo@irap.omp.eu

**Abstract:** Continuum gamma-ray emission produced by interactions of cosmic rays with interstellar matter and radiation fields is a probe of non-thermal particle populations in galaxies. After decades of continuous improvements in experimental techniques and an ever-increasing sky and energy coverage, gamma-ray observations reveal in unprecedented detail the properties of galactic cosmic rays. A variety of scales and environments are now accessible to us, from the local interstellar medium near the Sun and the vicinity of cosmic-ray accelerators, out to the Milky Way at large and beyond, with a growing number of gamma-ray emitting star-forming galaxies. Gamma-ray observations have been pushing forward our understanding of the life cycle of cosmic rays in galaxies and, combined with advances in related domains, they have been challenging standard assumptions in the field and have spurred new developments in modelling approaches and data analysis methods. We provide a review of the status of the subject and discuss perspectives on future progress.

**Keywords:** gamma rays; cosmic rays; interstellar medium; Milky Way; galaxies; radiation mechanisms: non-thermal

### **1. Context and Scope of the Review**

Cosmic rays (CRs) are energetic particles first observed around the Earth with energies ranging from MeV to above 10<sup>20</sup> eV and with approximately isotropic arrival directions. They are composed mainly of completely ionised nuclei, with protons accounting for a total fraction >90% at GeV energies. They also include electrons, positrons, and antiprotons. The overall CR spectrum follows an approximate power-law distribution, which attests to the non-thermal origin of the particles. A most remarkable change of the power-law spectral slope occurs around 10<sup>15</sup> eV, the so-called knee of the CR spectrum. Below the knee, the standard paradigm holding since the 1960s [1] states that the particles originate in the Milky Way, very likely from shock acceleration in supernova remnants (SNR), and diffuse on turbulent magnetic fields in a kpc-sized halo encompassing the disk of the Galaxy for durations exceeding one Myr (see Gabici et al. [2] for a recent critical review).
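The approximate power-law shape with a break at the knee can be sketched numerically as a broken power law. The slopes used below (∼2.7 below the knee, ∼3.1 above, with the knee at a few times 10<sup>15</sup> eV) are indicative textbook values, and the normalisation is purely illustrative rather than a fit to data:

```python
import numpy as np

def cr_flux(E_eV, norm=1.8e4, E_knee=3e15, g1=2.7, g2=3.1):
    """Schematic all-particle CR flux as a broken power law.

    Indicative slopes: ~E^-2.7 below the knee, ~E^-3.1 above.
    The normalisation (at 1 GeV) is illustrative only.
    """
    E = np.asarray(E_eV, dtype=float)
    flux = norm * (E / 1e9) ** -g1           # below-knee power law
    # Match the two segments continuously at the knee, then steepen
    flux_knee = norm * (E_knee / 1e9) ** -g1
    return np.where(E > E_knee, flux_knee * (E / E_knee) ** -g2, flux)
```

Per decade in energy the flux drops by a factor 10<sup>2.7</sup> ≈ 500 below the knee and 10<sup>3.1</sup> ≈ 1260 above it, which is why direct detection of the highest-energy particles requires ever larger collecting areas.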

CRs interact with interstellar matter and fields, producing secondary particles and radiation that are indirect means to study CRs in distant locations of the Milky Way, as well as in other galaxies. These observables usefully complement direct measurements of CRs in the heliosphere and allow us to develop our understanding of CR propagation and interactions. Among these indirect probes we find continuum gamma-ray emission produced by inelastic nucleon–nucleon collisions, Bremsstrahlung of CR electrons and positrons interacting with matter, and inverse-Compton (IC) radiation by CR electrons and positrons scattering off low-energy photons.

Overall, such probes show a fairly good agreement with the standard CR paradigm. However, many aspects are still debated or even largely uncertain, including the range of relevant transport and interaction mechanisms, their uniformity within galaxies, if and how they change based on galactic conditions, the microphysics foundation of all of these aspects, and the role played by CRs in galactic ecosystems. For the latter aspect, let us just

**Citation:** Tibaldo, L.; Gaggero, D.; Martin, P. Gamma Rays as Probes of Cosmic-Ray Propagation and Interactions in Galaxies. *Universe* **2021**, *7*, 141. https://doi.org/ 10.3390/universe7050141

Academic Editor: Ulisses Barres de Almeida

Received: 30 March 2021 Accepted: 27 April 2021 Published: 11 May 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

briefly mention that CR astrophysics is receiving increasing attention for its importance to other astronomical disciplines: astrochemistry and star formation (for a review see [3]), galaxy formation and evolution, e.g., [4–6], and astrobiology, e.g., [7].

As early as 1952, Hayakawa [8] predicted that the decay of *π*<sup>0</sup> produced in inelastic nucleon–nucleon collisions in the Galactic disk would produce a measurable gamma-ray flux. This was confirmed in the 1960s and 1970s thanks to the *OSO-3* [9] and *SAS-II* [10] satellites, which detected a gamma-ray signal associated with Galactic interstellar matter. The breakthrough in the field came thanks to the *COS-B* satellite (1975–1982), whose observations in the 50 MeV–5 GeV energy range enabled a detailed study of the correlation of gamma-ray emission with interstellar medium (ISM) tracers and provided the first measurements of the large-scale distribution of CRs in the Milky Way [11].

The *Compton Gamma-Ray Observatory* (*CGRO*, 1991–2000) fully covered the energy range 1 MeV–30 GeV thanks to its two instruments COMPTEL and EGRET. *CGRO* data led to many in-depth studies of CRs in the Milky Way. EGRET also provided the first gamma-ray probe of CRs in external galaxies by detecting emission from the Large Magellanic Cloud (LMC) [12] and by setting an upper limit on emission from the Small Magellanic Cloud (SMC) [13]. The latter constrained GeV CR densities in the SMC to less than one third of the value observed locally in the Milky Way, and thereby established observationally the galactic origin of the particles, as suggested twenty years earlier by [14].

The twenty-first century brought numerous and rapid advances in the domain of continuum interstellar gamma-ray emission studies:


The next decade holds the promise for further observational advances thanks to new facilities already in the making or still in the planning/proposal phase, with guaranteed steps forward to be made in the TeV domain.

The rapid advances of the past few years and upcoming facilities make it timely to have a new review focussing on gamma rays as probes of CR propagation and interactions in galaxies, as an update to previous reviews touching on these subjects [2,11,15,16]. In Section 2 we will briefly summarise the status of observational techniques, CR transport theory, modelling and data analysis tools, as well as the complementary multi-wavelength and multimessenger observations necessary to interpret the gamma-ray data. The following sections will review recent observations, their implications for CR physics, and future perspectives. They are organized around four broad questions.

**Section 3** What do we learn from observations of the local interstellar medium near the Sun and how can we use them to connect direct and indirect CR measurements?

**Section 4** What does interstellar emission tell us about the large-scale distributions of CRs in galaxies and what does it teach us about CR transport?

**Section 5** What do we know about particle propagation and interactions in the vicinities of sources and what role does this phase play in the CR life cycle?

**Section 6** What are the properties of gamma-ray emitting galaxies as a population and what do we learn about the variety of CR transport under different environmental conditions?

We will conclude with some final remarks and an outlook on perspectives on the coming years in Section 7.

This review will not cover the important and closely related aspects of CR origin and particle acceleration, treated in a companion paper in this volume [17], nor the propagation of CRs in the heliosphere (for a review see [18]).

Before entering the main matter, we define here some terminology that will be used in the paper.

**Interstellar/diffuse emission:** we will refer to gamma-ray emission produced by CR interactions with interstellar matter and fields as interstellar emission. Conversely, we will use the term diffuse emission to refer to all emission that cannot be associated with a localized object (e.g., a pulsar, a binary system, etc.) that is individually detected. Based on this definition, diffuse emission comprises interstellar emission plus collective emission from populations of sources not detected individually (due, e.g., to instrumental sensitivity limitations).

**Large-scale galactic CR population:** we will use this term to refer to the CR population in a galaxy on spatial scales much larger than those where an individual CR source or sink (or localized groups thereof) can influence significantly the CR properties. The fact that this definition is useful in practice is based on observations of the Milky Way and local-group galaxies that will be discussed later in the paper. Conversely, we will avoid the term "CR sea", sometimes used in the literature with an ambiguous meaning that can refer to the CR population around the Earth, the large-scale galactic CR population (according to our definition), or the large-scale galactic CR population with an implicit assumption that it is uniform within (or even beyond) the galaxy.

### **2. The Toolbox to Study Interstellar Gamma-Ray Emission**

### *2.1. The Progress of Observational Techniques in Gamma Rays*

Historically the most important facilities for the observations of interstellar gamma-ray emission have been space-based pair-tracking telescopes that cover the energy range from a few tens of MeV to tens of GeV and beyond. This is due to a combination of instrumental and intrinsic characteristics. Pair-tracking telescopes have a large field of view and a lower background than other gamma-ray detectors. Therefore, they are ideally suited to study diffuse emission, and for a long time they have been unrivalled in terms of sensitivity in the gamma-ray domain. Furthermore, in the GeV energy range we find the peak of energy output from CR interactions in the ISM, interstellar emission prevails over discrete sources, and it is dominated by hadronic emission correlated with interstellar gas (characterised by a well-defined morphology known from observations at other wavelengths, and, therefore, easier to separate from other emission components). The energy range covered by these instruments is often referred to as high-energy (HE) gamma rays.

In the past decade, advances in the HE range have been driven by the *Fermi* LAT [19]. Thanks to the use of silicon tracking devices, the LAT has reached unprecedented sensitivity in the GeV domain, with a field of view of 2.4 sr and an angular resolution better than ∼0.8◦ at energies >1 GeV and better than ∼0.15◦ at energies >10 GeV. The LAT has also extended the energy reach of this observing technique up to the TeV range owing to a combination of instrumental improvements, notably the use of a segmented anticoincidence detector for CR background rejection.

Gamma-ray observations at lower energies require the use of space-borne telescopes exploiting different detection techniques: coded masks in the energy range from hundreds of keV to MeV and Compton detectors at MeV energies. In this domain the state-of-the-art instruments date back twenty or even thirty years, to *INTEGRAL* SPI [20] (for *INTEGRAL* legacy results see also [21]) and COMPTEL [22]. Their performance cannot compete with the level reached by the LAT in the GeV domain, e.g., [23]. New missions have been proposed to improve observational capabilities in the MeV–GeV energy range thanks to silicon detectors that can carry out Compton and high-angular-resolution pair-tracking measurements at the same time, most notably ASTROGAM and AMEGO [24–26]. Alternatively, GECCO is a concept for a combined dual-mode telescope that can improve measurements in the sub-MeV to MeV energy range thanks to an innovative imaging calorimeter acting as a standalone Compton detector and, at the same time, as a focal-plane detector for a coded aperture mask [27].

The limited size of space-borne instruments makes measurements at energies beyond several tens of GeV more and more difficult. Therefore, at higher energies ground-based instruments are used. Their energy range is often referred to as very-high-energy (VHE) gamma rays, or ultra-high-energy (UHE) gamma rays beyond 100 TeV. Observational techniques in this energy range are covered in detail in a companion paper in this volume [28]. Below we will summarise the most important aspects with emphasis on the study of interstellar emission.

Ground-based Imaging Air Cherenkov Telescope (IACT) arrays have proven to be a very effective way to study gamma rays above a few tens of GeV, e.g., [29]. IACTs have fields of view of a few degrees, a high level of background due to CRs misclassified as gamma rays, and an angular resolution of several to a few arcminutes. This technique has reached maturity with the current generation of arrays comprising 2 to 5 IACTs, namely H.E.S.S., MAGIC, and VERITAS. Among them, H.E.S.S., which is located in the southern hemisphere and therefore has access to the inner part of the Milky Way, has undertaken a systematic survey of the Galactic plane and has achieved the detection of diffuse emission that is likely to be, at least in part, of interstellar nature [30,31]. In this energy range, however, discrete sources prevail, so we expect a sizeable fraction of the diffuse emission to be due to unresolved sources not yet detected individually given current sensitivity limitations (Section 2.4).

The field of IACTs is going to be revolutionised in the next few years by the advent of the Cherenkov Telescope Array (CTA) [32,33]. CTA will feature more than one hundred IACTs of different sizes located on two sites in the northern and southern hemispheres, thus it will be able to observe the entire sky. It will cover the energy range from a few tens of GeV to >300 TeV with a sensitivity one order of magnitude better than current IACTs, a field of view reaching 10◦, reduced CR background, and an angular resolution of a few arcmin.

A complementary observing technique for TeV gamma rays consists of ground-based shower particle detectors. Milagro pioneered the use of water Cherenkov detectors [34], currently exploited by HAWC, which provides the best sensitivity among all existing instruments at energies >10 TeV [35]. Alternative approaches are the use of scintillator detectors, adopted by the Tibet Air Shower Array [36], or of resistive-plate counters, adopted by ARGO-YBJ [37]. These instruments have a large field of view, corresponding to the entire sky not occulted by the Earth, and a high duty cycle (contrary to IACTs, which operate only at night). Conversely, their angular resolution is not as good as that of IACTs. For example, for HAWC the angular resolution varies between 1◦ and 0.2◦ [35].

The LHAASO observatory, still under construction, combines shower particle detectors and IACTs. The expected steady-source sensitivity will be superior to that of CTA above a few tens of TeV [38]. All of the ground-based shower particle detectors mentioned so far were or are located in the northern hemisphere. A new project has been proposed to install a water Cherenkov shower particle detector system in the southern hemisphere, SWGO, which will then be able to observe the inner Milky Way and the Magellanic Clouds [39].

To illustrate the status of observations Figure 1 shows some recent maps of the Milky Way from different instruments. The all-sky observing capabilities of the *Fermi*-LAT make it an invaluable source of information to study the entire range of manifestations of CR propagation and interactions. We note that features correlated with interstellar structures in the Milky Way are clearly visible in the map in Figure 1a even though no background subtraction has been applied beyond event-wise selection of candidate photons.

**Figure 1.** Images of the Milky Way from different instruments. (**a**) *Fermi*-LAT, 12 years of P8R3 data, energies > 1 GeV, *Source*/PSF3 event class/type, zenith angles <100◦, smoothed with a Gaussian kernel of *σ* = 0.25◦. (**b**) H.E.S.S. Galactic plane survey, map described in [40] (0.2◦ correlation radius). (**c**) HAWC survey, map construction described in [41] (test source with 0.5◦ extension and a power-law spectral index of 2.5). The footprint of map (**b**) is overlaid to the all-sky maps in (**a**,**c**) as a dashed rectangle. Map (**a**) displays observed counts not corrected for residual CR background, while maps (**b**,**c**) are given in units of significance of gamma-ray emission above the residual CR background. For maps (**b**,**c**) the actual energy threshold varies across the map and the figure provides a representative value.

*Fermi*-LAT observations are complemented at higher energies by observations with IACTs, notably H.E.S.S., which for the moment can cover much more limited regions of the sky, albeit in greater detail thanks to their superior angular resolution. Another limitation of IACTs is the much larger background due to CR interactions with the Earth's atmosphere. The traditional analysis techniques employed to deal with residual CR background are based on "Off" regions, most often chosen within the same field of view as the position of interest. The map in Figure 1b is constructed using "Off" regions in the shape of rings centred at the same position as the region of interest, or "On" region, which is a circle of fixed radius sometimes referred to as the correlation radius (see Figure 4 of [42] for an illustration, as well as for a review of traditional background-estimation techniques). Due to the limited field of view of IACTs, this results in a reduced sensitivity to extended emission (see, e.g., the discussion in [43]). Alternative background-estimation methods, either data-driven or simulation-based, are being sought [44–47]. The long observation times necessary to map large portions of the sky with IACTs, together with the reduced sensitivity to extended emission, make their contribution to the study of large-scale interstellar emission less rich than that of wide-field-of-view instruments.
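Significance maps such as Figure 1b,c quantify the gamma-ray excess over the residual CR background estimated from "Off" regions. The standard statistic for such On/Off counting measurements is the Li & Ma (1983) likelihood-ratio formula (their Eq. 17), sketched below; the counts in the example are invented for illustration:

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) significance of an On/Off measurement.

    n_on:  counts in the "On" (signal) region
    n_off: counts in the "Off" (background control) region
    alpha: On/Off exposure ratio; the excess is n_on - alpha * n_off
    """
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    sign = np.sign(n_on - alpha * n_off)  # negative for a deficit
    return sign * np.sqrt(2.0 * (term_on + term_off))

# Illustrative: 130 On counts vs. 400 Off counts with alpha = 0.25,
# i.e. a background estimate of 100 and an excess of 30 counts
s = li_ma_significance(130, 400, 0.25)
```

The example yields roughly 2.5 standard deviations, showing how a sizeable raw excess can still fall well short of a detection when the background is large.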

Shower particle detectors kick in at even higher energies, offering a wide-field view of the sky as illustrated by the HAWC skymap in Figure 1c. In spite of a large CR background, the large field of view of these detectors makes it possible to routinely employ a data-driven background-estimation method known as the "direct integration" technique [48], which exploits the fact that the CR background is stable in time and varies smoothly as a function of conditions in the atmosphere and the detector (e.g., trigger rate). Background estimation is performed independently in large declination bands, with an accuracy of order 10<sup>−4</sup> limited by anisotropies in the primary CR arrival directions [48]. This technique is therefore well suited for the study of large-scale emission.

### *2.2. Data Complementary to Gamma Rays: Recent Step Forwards*

The study of interstellar gamma-ray emission is deeply intertwined with other multimessenger/multi-wavelength measurements and observations, which we can group into five broad categories:


In this section we very briefly review these five domains and highlight some recent results, with emphasis on aspects of particular importance for the subjects covered in the review.

**Direct CR measurements** assess the spectra, composition, and arrival direction anisotropies of charged particles around the Earth. For sub-knee CRs this is prevalently achieved using satellite- or balloon-borne particle detectors, although ground-based instruments studying the byproducts of CR interactions in the atmosphere can also explore the energy range around the knee.

The past few years have been marked by high-precision measurements by AMS-02, on the International Space Station, of the spectra of a wide range of CR species, including light nuclei [49,50], heavier nuclear species [51–53], electrons [54], and secondary species produced by CR interactions in the ISM [55]. They are complemented by new results on the abundances of heavy nuclei, e.g., [56]. Linking these measurements with gamma-ray observations is complicated by the fact that charged particles near the Earth below rigidities of ∼100 GV are affected by the solar wind, which modulates their spectra as a function of the solar cycle phase. This limitation has been overcome only recently, for rigidities below ∼1 GV, thanks to measurements of light CR species in interstellar space with the Voyager 1 probe [57]. On the higher-energy portion of the spectrum, the TeV domain

has also witnessed a significant advance, thanks to the measurements of several different balloon/satellite/ground experiments, both for hadronic species (e.g., from DAMPE [58], ATIC [59], and NUCLEON [60]) and for leptons (e.g., from H.E.S.S. [61,62], CALET [63], and DAMPE [64]).

**Interstellar matter** constitutes a target for the production of gamma-ray emission via nucleon–nucleon inelastic collisions and electron Bremsstrahlung. The gamma-ray yield is proportional to the mass, which resides predominantly in gas, the most important contributor being hydrogen in its atomic, molecular, or ionized form. *Atomic hydrogen* (H I) is widely distributed in galaxies and can be traced thanks to the 21 cm hyperfine transition line. The velocity-integrated brightness temperature of the 21 cm line is directly proportional to the gas column density in the optically thin limit. Often we need to account for H I opacity, which is typically done under the approximation of a uniform spin temperature. Recent years brought remarkable advances in the observations, both for the Milky Way at large [65,66] and for specific Galactic or extragalactic regions. *Molecular hydrogen*, H2, mostly concentrated in cold clouds, is difficult to observe directly and is most often traced indirectly using molecular lines of other species. The mm rotational lines of the second most abundant interstellar molecule, CO, with its different isotopes, have been a major tool in gamma-ray astronomy. While for the Milky Way on large scales we still rely on the survey by Dame et al. [67], high-resolution surveys of specific regions or external galaxies are becoming increasingly available, e.g., [68,69]. It is empirically established that molecular hydrogen column densities, *N*(H2), are approximately proportional to the velocity-integrated brightness temperature of the <sup>12</sup>CO *J* = 1 → 0 line, *W*CO, via the factor *X*CO = *N*(H2)/*W*CO, for which, however, variations are both observed and expected [70] (see also the discussion in Section 3). *Ionized hydrogen*, H II, is found around star-forming regions and in a kpc-wide layer around the Galactic disk. The H*α* recombination line in the visible band is heavily absorbed in the ISM. Alternative tracers are provided, under different kinds of hypotheses and approximations, by microwave free-free emission [71], pulsar dispersion measurements [72], and radio recombination lines [73]. Owing to the Doppler shift from Galactic rotation, lines of all kinds can be used to separate different structures along the line of sight and approximately locate them.
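The conversions from line intensities to column densities described above can be sketched numerically. The 21 cm coefficient (1.823 × 10<sup>18</sup> cm<sup>−2</sup> (K km/s)<sup>−1</sup>, optically thin limit) is standard; the default *X*CO ≈ 2 × 10<sup>20</sup> cm<sup>−2</sup> (K km/s)<sup>−1</sup> is a typical Milky Way value, with variations observed and expected as noted in the text:

```python
def hydrogen_column_densities(w_hi, w_co, x_co=2.0e20):
    """Column densities (cm^-2) from velocity-integrated line intensities.

    w_hi, w_co: 21 cm and 12CO J=1->0 intensities, in K km/s.
    Optically thin 21 cm limit: N(HI) = 1.823e18 * W_HI.
    Molecular gas via the X_CO factor: N(H2) = X_CO * W_CO,
    with X_CO ~ 2e20 cm^-2 (K km/s)^-1 as a typical Galactic value.
    """
    n_hi = 1.823e18 * w_hi
    n_h2 = x_co * w_co
    # Total hydrogen nucleon column: each H2 molecule carries two nucleons
    return n_hi, n_h2, n_hi + 2.0 * n_h2

# e.g. W_HI = 100 K km/s, W_CO = 5 K km/s
n_hi, n_h2, n_h = hydrogen_column_densities(100.0, 5.0)
```

Even modest CO intensities dominate the total column in this example, illustrating why the choice of *X*CO (and any DNM correction) directly scales the predicted hadronic gamma-ray emissivity.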

An alternative approach to tracing interstellar matter relies on *dust*. Dust grains make up a tiny fraction of the mass in the ISM, but they produce bright thermal emission in the infrared and are responsible for stellar extinction in the near-infrared to visible domain. They are thought to be well mixed with gas, so their emission/extinction can be used as a tracer of total ISM masses. Recent observational developments include the improved mapping of thermal dust emission thanks to the *Planck* satellite [74] and strong advances in 3D dust mapping based on stellar extinction measurements combined with stellar population synthesis models, e.g., [75,76]. As for *X*CO, variations in the ratios between dust observables and matter column densities are both observed and expected, e.g., [77]. The combination of gamma-ray and dust observations (both tracers of the total masses in the ISM) with the H I and CO lines has demonstrated that the aforementioned lines fail to trace the totality of the neutral interstellar medium. The excess with respect to the H I- and CO-bright gas is known as dark gas or *dark neutral medium* (DNM). It is predominantly located at the interface between the molecular-dominated and atomic-dominated parts of interstellar clouds, and is likely made of a combination of optically thick H I and CO-poor H2 in the outer layers of the molecular regions less shielded from UV photo-dissociation (for a recent review see [16]). The existence of the DNM is confirmed by alternative molecular tracers, e.g., [78,79], and by emission from ionized carbon, e.g., [80], and is also supported by numerical simulations, e.g., [81].

**Interstellar radiation fields (ISRFs)** constitute a target for the production of gamma-ray emission via inverse-Compton scattering by CR electrons and positrons. They include the cosmic microwave background, thermal emission from dust grains heated by stellar radiation from sub-mm to infrared, and radiation from stars from near-infrared to UV. Radiative transfer techniques can be used to link the ISRFs to the measured spectral energy distributions and other observational constraints on the spatial distribution of stars and interstellar dust. Recent years have seen significant advances in this field both on the observational front, notably with the improved measurements of thermal dust emission thanks to the *Planck* satellite [74], and on the modelling front for the Milky Way [82–84] as well as for nearby external galaxies, e.g., [85,86]. The importance for gamma-ray observations was discussed recently by Niederwanger et al. [87].

**Magnetic fields** are relevant to CRs both as a target for synchrotron energy losses/radiation and as the agent of diffusion (see Section 2.3 for a discussion of the latter). We can separate interstellar magnetic fields into a large-scale regular component and a turbulent component. In external galaxies they are known to follow a spiral structure similar to that of interstellar matter and stars (for a recent review see [88]). The origin of the regular field is still debated. It is constrained through Faraday rotation of polarized emission from background sources, e.g., pulsars, polarized synchrotron emission, and polarized dust emission. Many recent works have used observational constraints to model the large-scale Galactic magnetic field, e.g., [89–93], and these models have been revisited in the light of *Planck* results [94]. Magnetic turbulence is thought to be driven by supernova explosions, stellar winds and outflows, shocks and instabilities induced by galactic rotation, and shear instabilities and baroclinic effects in the ISM. It is related to interstellar turbulence in velocity, matter density, and free-electron density. It is therefore constrained observationally using high-resolution spectroscopy of interstellar lines, high-resolution imaging of interstellar matter, intensity and polarization of synchrotron and dust thermal emission, dispersion of pulsar signals, interstellar scintillation, and rotation of polarized emission from background sources (for a recent review see, e.g., [95]). The combination of these observations reveals an overall power-law spectrum as a function of wavenumber with Kolmogorov slope over spatial scales from thousands of km to a few pc [96].

**Indirect CR tracers** other than gamma-ray continuum emission include the already mentioned *synchrotron emission*, observed from radio to microwaves. The microwave sky was recently studied in unprecedented detail by the *Planck* satellite. Studies of synchrotron emission are used to reconstruct the broadband spectrum of CR electrons and to inform the interpretation of observations of IC emission in gamma rays [97,98]. The first detection of *astrophysical high-energy neutrinos* with IceCube only a few years ago [99] has opened a new window that may provide a complementary tracer of CR nuclei in galaxies, but for the moment only upper limits to a Galactic neutrino signal exist, combining data from IceCube and ANTARES [100]. *Molecular line emission* driven by CR ionization, which yields, e.g., H<sub>3</sub><sup>+</sup>, OH<sup>+</sup>, H<sub>2</sub>O<sup>+</sup>, and H<sub>3</sub>O<sup>+</sup>, is observed at infrared and mm wavelengths and provides information on the low-energy part of the CR spectrum (for a recent review see [16]). Recent calculations show that the observed molecular ion line emission suggests an average ionization rate a factor of 10 larger than what is expected from directly measured CR spectra (including results from *Voyager 1*) [101,102]. This may point to the existence of an additional CR component emerging at low energies, different from those observed directly or through gamma rays, although it seems more likely that ionization sources other than CRs play a role more prominent than previously thought [103]. Furthermore, we note that the methodology used to infer the ionization rate from the data is very sensitive to the composition of the ISM, e.g., to the presence of polycyclic aromatic hydrocarbons [104].
An alternative way to study the CR nuclei population in remote locations below the pion production threshold (kinetic energies of ∼300 MeV/nucleon) would be to observe *nuclear de-excitation lines* in the 0.1–10 MeV range induced by CR collisions with interstellar matter [105,106] thanks to a future MeV telescope [107].

**Energetic objects** play a twofold role: they are potential CR accelerators and they inject energy into the ISM in other forms, e.g., radiation and magnetic turbulence. Knowledge of their census and of its recent history is therefore essential to model interstellar gamma-ray emission and interpret the gamma-ray observations. Different challenges must be faced to determine the distribution in space and time of energetic objects at galactic scales against observational uncertainties and biases, or to establish a detailed picture of individual remarkable regions. Among the different classes of energetic objects, massive stars are interesting both in themselves and as the progenitors of all other relevant classes, such as SNRs and pulsars and their wind nebulae. Our view of stellar populations in the Milky Way is in a transformative phase thanks to the data collected by the *Gaia* satellite in the visible/near-infrared band, which provide precision measurements of positions, parallaxes, and proper motions of over 1.4 billion stars out to >4 kpc around the Solar system [108]. This makes it possible to paint a portrait of the history of stellar clusters in the disc of the Milky Way over the past billion years [109], which complements information from observations in the near-infrared, e.g., [110], or at lower frequencies, e.g., [111]. At the cluster or star-forming-region level, this enables us to go beyond simple models of coeval and colocated star formation, and embrace more realistic descriptions of its spatial [112] and temporal distributions [113]. For SNRs, pulsars, and pulsar wind nebulae (PWNe), the most important wavebands for the observations are radio, X rays, and gamma rays. For the first two bands, the coming decade is expected to be marked by the results from the Square Kilometer Array [114] and its precursors and pathfinders (https://www.skatelescope.org/precursors-pathfinders-design-studies/ (accessed on 29 March 2021)), and the eROSITA space telescope [115]. The role of gamma-ray observations in understanding particle acceleration in energetic objects is treated in a companion paper in this volume [17].

**Hadronic cross sections** for gamma-ray production are an essential ingredient to model and interpret gamma-ray observations. While leptonic cross sections in principle can be calculated exactly, the modelling of hadronic interactions relies heavily on experimental constraints. Accelerator data provide information with a somewhat limited coverage in terms of energies, angular distributions, and interacting species (mostly p-p), which are then used to model the cross sections, resulting in non-negligible uncertainties (for a recent review see [116]). Accelerator data in the crucial energy range above the pion production threshold and around the Δ(1232) isobar resonance, and up to centre-of-mass energies of 10<sup>3</sup> TeV, mostly date back to between the 1950s and 1980s and have been compiled by Lock and Measday [117], Stecker [118], and Dermer [119]. The energy coverage was recently extended up to 10<sup>8</sup> TeV in the centre-of-mass frame thanks to the LHCf experiment [120,121], and improved also at hundreds of GeV energies thanks to the NA61/SHINE experiment [122]. Recent cross-section derivations exploiting these data include Kamae et al. [123], Kelner et al. [124], Kachelrieß and Ostapchenko [125], Kafexhiu et al. [126], Mazziotta et al. [127], Kachelrieß et al. [128], and, with a focus on the contributions of species heavier than protons, Mori [129] and Kachelrieß et al. [130].

### *2.3. A Glimpse at the Basics of Cosmic-Ray Transport*

Gamma-ray emission from the galactic ISM is intimately associated with the physical problem of CR acceleration and transport. The problem of CR acceleration is not covered in this review. We just recall that the SNR paradigm is widely considered as the reference guideline, although other classes of sources powered by a variety of mechanisms have been proposed as well (e.g., OB associations, X-ray binaries, and pulsar wind nebulae for leptonic CRs). Within the SNR scenario, the theory of diffusive shock acceleration [131–134] and its non-linear extension [135] describe the process of acceleration of cosmic particles that are diffusing around the forward shock in an SNR, and are able to reproduce the correct CR energetics and, overall, many of the CR observables. We refer to Blasi [136] for an extensive review on the origin of CRs and the SNR paradigm, and also to Cristofari [17] in this volume for the role of gamma-ray observations in this context.

In this Section we focus instead on the basics of the problem of galactic CR transport. Let us start by mentioning that a large body of evidence demonstrates that high-energy charged CRs are confined in the Milky Way for a timescale that is much longer than the ballistic crossing time. In fact, the analysis of the properties of the CR fluxes that reach Earth outlines the following key features:


The picture is corroborated by the ubiquitous observation of magnetic turbulence in the interstellar environment that we briefly mentioned in the previous section. The multiple, random interactions of charged CRs with these perturbations naturally provide a mechanism to explain why the motion of these particles should be described as a diffusive process. These considerations, together with an increasing amount of data in different channels discussed in the previous section, corroborate the standard paradigm that seems to capture the most relevant aspects of Galactic cosmic-ray physics, as recently extensively reviewed in [2], in which CRs are diffusively confined within a kpc-sized, magnetized, and turbulent Galactic halo.

The simplest way to describe this phenomenon, widely used in the past literature, is provided by so-called *leaky-box models* [143]. In this framework, the galaxy is modelled as a cavity with almost perfectly reflecting "walls". The cosmic particles are allowed to move freely within this environment. The physics of their propagation and escape is entirely embedded in the energy-dependent parameter *τesc*, i.e., the mean residence time; the probability of particle escape per unit time is thus 1/*τesc*. The model is described by the following equation for the particle density *N*:

$$\frac{\partial N}{\partial t}(E) = -\frac{N}{\tau\_{\rm esc}(E)} + Q(E),\tag{1}$$

where *Q*(*E*) is the source function.
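As a minimal numerical illustration (not part of the original analysis), setting ∂*N*/∂*t* = 0 in Equation (1) gives the equilibrium solution *N*(*E*) = *Q*(*E*) *τesc*(*E*); the injection slope and escape index below are illustrative round numbers, not fitted values:

```python
import numpy as np

# Steady-state leaky box: dN/dt = 0  =>  N(E) = Q(E) * tau_esc(E).
# Illustrative power laws: injection Q ~ E^-2.3, escape tau_esc ~ E^-0.5.
E = np.logspace(0, 3, 50)       # energy grid, GeV (arbitrary units)
Q = E**-2.3                     # source spectrum (arbitrary normalization)
tau_esc = E**-0.5               # energy-dependent mean residence time
N = Q * tau_esc                 # equilibrium density

# The equilibrium slope is the injection slope steepened by the escape index:
slope = -np.polyfit(np.log(E), np.log(N), 1)[0]
print(round(slope, 2))          # 2.8 = 2.3 + 0.5
```

This reproduces the familiar result that the observed spectrum is steeper than the injected one by the energy dependence of the escape time.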

Recently, a description in terms of diffusion has become prevalent. We want to emphasize that it is very challenging to obtain a general expression for the diffusion tensor from first principles. A widely used guideline in this context is the *quasi-linear theory of pitch-angle scattering* onto magnetic fluctuations, presented in the pioneering papers [144,145]. The rationale of the theory is to consider the interaction of a charged particle of momentum *p* = *γmv* with magnetic inhomogeneities *δB*. The key assumptions behind this theoretical framework are the following:


The key result of this approach is that the particles mainly diffuse along the regular magnetic field. The process is resonant, i.e., the Alfvén wavepackets that contribute to the process have a wavelength comparable to the gyroradius of the particle. It is useful to notice that the length scales associated with the energy range usually covered by current CR observations are typically very small compared to the size of a galaxy, and to the scale of injection of turbulence (10–100 pc). For instance, GeV particles resonate with fluctuations with wavelengths of the order of a few AU.

The resulting scattering rate can be written as [1,136,146]:

$$\nu = \frac{\pi}{4} \frac{k\_{\rm res} P(k\_{\rm res})}{B\_0^2/(8\pi)}\, \Omega\_g,$$

where Ω<sub>*g*</sub> = *qB*<sub>0</sub>/(*γmc*) is the gyration frequency and the resonant wavenumber is *k*<sub>res</sub> = Ω<sub>*g*</sub>/*v* (*v* is the velocity component along the coherent magnetic field *B*<sub>0</sub>).

Starting from this expression, it is possible to obtain a (parallel) spatial diffusion coefficient of this form:

$$D(p) = \frac{v^2}{3\Omega\_g} \frac{B\_0^2/(8\pi)}{k\_{\rm res}P(k\_{\rm res})}.$$

It is useful to recast this expression into:

$$D(p) = \frac{1}{3} \frac{r\_L v}{\mathcal{F}(k\_{\rm res})},$$

where *r*<sub>L</sub> = *p*<sub>⊥</sub>/(*qB*<sub>0</sub>) is the Larmor radius of the particle and we have defined

$$\mathcal{F}(k) \equiv \frac{kP(k)}{B\_0^2/(8\pi)}.$$

This expression shows that a larger power in magnetic fluctuations at a certain scale is associated with a lower diffusion coefficient for the resonating particles, hence a more effective confinement. The dependence on the Larmor radius, both direct and indirect via the resonant wavenumber *k*<sub>res</sub>, and the empirical power-law dependence on wavenumber of the magnetic turbulence spectrum observed at large scales drive a dependence of the diffusion coefficient on particle rigidity *R*. Standard implementations for the Milky Way feature a diffusion coefficient *D*(*R*) = O(10<sup>27</sup>) *β* (*R*/1 GV)<sup>1/3</sup> cm<sup>2</sup> s<sup>−1</sup>, in reasonable agreement with a reference estimate of the random field at the injection scale and extrapolation down to the resonant scale. We emphasize that the theory is typically built on an isotropic picture of turbulence. However, the resulting process is highly anisotropic. We will elaborate more on these key concepts in the next Section.
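The quasi-linear estimate can be evaluated numerically under stated assumptions; in this sketch the Kolmogorov form F(*k*) = *η*(*kL*<sub>inj</sub>)<sup>−2/3</sup>, the field strength, the turbulence level *η*, and the injection scale are all illustrative order-of-magnitude choices, not measured values:

```python
import numpy as np

# Quasi-linear estimate D = (1/3) r_L v / F(k_res) with a Kolmogorov
# spectrum, F(k) = eta * (k * L_inj)**(-2/3).  All parameter values are
# illustrative order-of-magnitude assumptions.
c = 3.0e10                 # cm/s; v ~ c for relativistic particles
B0_muG = 3.0               # regular field strength, microgauss (assumed)
R_GV = 1.0                 # particle rigidity, GV
eta = 1.0                  # turbulent energy fraction dB^2/B0^2 at L_inj (assumed)
L_inj = 10.0 * 3.086e18    # turbulence injection scale: 10 pc in cm (assumed)

r_L = 3.3e12 * R_GV / B0_muG              # Larmor radius in cm
k_res = 1.0 / r_L                         # resonant wavenumber
F_res = eta * (k_res * L_inj) ** (-2.0 / 3.0)
D = r_L * c / (3.0 * F_res)               # diffusion coefficient, cm^2/s

print(f"D(1 GV) ~ {D:.1e} cm^2/s")        # ~1e27, matching the text's estimate
```

With these numbers the estimate lands close to the O(10<sup>27</sup>) cm<sup>2</sup> s<sup>−1</sup> quoted above, and the *R*<sup>1/3</sup> scaling follows directly from the Kolmogorov slope of F(*k*).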

Diffusive confinement is certainly a key feature characterising CR propagation. However, all CR species interact in many different ways with the different components of the ISM, and a variety of other processes occur during their random walk across the parent galaxy. Let us briefly recap the most relevant ones.


• **Energy losses and secondary production:** CR nuclei undergo ionization and Coulomb losses in interactions with interstellar matter, and production of secondary particles in nucleon–nucleon inelastic collisions, most notably pions (these processes being overall more effective at low energy, in particular in the sub-GeV domain). On the other hand, leptons suffer strong losses mostly due to IC scattering onto low-energy photons, synchrotron emission (with a rate that increases with increasing energy, following a ∝ *E*<sup>2</sup> scaling), and Bremsstrahlung. We refer for instance to [148] for a compilation of the relevant formulae associated with these processes.

• **Spallation:** A complex network of nuclear reactions and decays transform heavier CR nuclei into lighter species as a consequences of inelastic interactions with matter in the ISM. A combination of semi-empirical parametrizations and rescaling procedures to nuclear data (mostly available in the GeV domain) is typically adopted to model these phenomena. See for instance [149–154] and references therein for the modelling of the hadronic nuclear network.

Remarkably, a joint description of the most relevant phenomena mentioned above is possible in the form of a relatively compact *transport equation* that can be solved for each CR species of interest. This general reacceleration-diffusion-loss equation is usually written as follows [1,146]:

$$\frac{\partial N\_i}{\partial t} - \nabla \cdot \left( D\, \nabla N\_i - \mathbf{v}\_w N\_i \right) - \frac{\partial}{\partial p} \left[ p^2 D\_{pp} \frac{\partial}{\partial p} \left( \frac{N\_i}{p^2} \right) \right] + \frac{\partial}{\partial p} \left[ \dot{p}\, N\_i - \frac{p}{3} \left( \nabla \cdot \mathbf{v}\_w \right) N\_i \right] = Q + \sum\_{j>i} \left( c \beta n\_{\rm gas} \sigma\_{j \to i} + \frac{1}{\tau\_{j \to i}} \right) N\_j - \left( c \beta n\_{\rm gas} \sigma\_i + \frac{1}{\tau\_i} \right) N\_i \tag{2}$$

In this equation: *p* is the particle momentum; *N*<sub>*i*</sub> is the CR density for species *i*; *D* is the spatial diffusion tensor; *D*<sub>*pp*</sub> the diffusion coefficient in momentum space, associated with reacceleration; **v**<sub>*w*</sub> the velocity associated with advection; *Q* is the source term; (*σ*<sub>*j*→*i*</sub>, *σ*<sub>*i*</sub>) are the spallation cross sections associated, respectively, with the creation of species *i* from a parent nucleus *j* and with the destruction of species *i*; (*τ*<sub>*j*→*i*</sub>, *τ*<sub>*i*</sub>) are the decay times, respectively, for the unstable species *j* creating *i* and for *i* creating lighter nuclei; *n*<sub>gas</sub> is the density of target gas and *β* = *v*/*c*. For a detailed discussion of each term, we refer to the technical papers cited above.
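To make the structure of such a transport problem concrete, here is a deliberately stripped-down numerical sketch that keeps only the spatial-diffusion and source terms in one dimension: a thin-disc source inside a halo with free-escape boundaries, *D* d²*N*/d*z*² = −*q*(*z*), *N*(±*H*) = 0. Geometry, units, and the grid are illustrative choices, not a model of the Milky Way:

```python
import numpy as np

# Steady-state 1D diffusive halo: D N'' = -q(z), free escape at z = +-H,
# CR injection confined to a thin disc at z = 0.  Illustrative parameters.
H = 4.0          # halo half-height (kpc)
D = 1.0          # diffusion coefficient (arbitrary units)
n = 401          # grid points (odd, so z = 0 is on the grid)
z = np.linspace(-H, H, n)
dz = z[1] - z[0]

q = np.zeros(n)
q[n // 2] = 1.0 / dz                  # thin-disc source with unit total rate

# Tridiagonal discrete Laplacian with Dirichlet (free escape) boundaries.
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
A[0, :] = 0.0;  A[0, 0] = 1.0         # N(-H) = 0
A[-1, :] = 0.0; A[-1, -1] = 1.0       # N(+H) = 0
rhs = -q * dz**2 / D
rhs[0] = rhs[-1] = 0.0

N = np.linalg.solve(A, rhs)

# Analytic solution for this toy problem: triangular profile.
N_exact = (H - np.abs(z)) / (2.0 * D)
print(np.max(np.abs(N - N_exact)))    # small discretization error
```

Full codes such as GALPROP or DRAGON solve the complete Equation (2) in two or three spatial dimensions plus momentum, with all the loss, reacceleration, and spallation terms included; the sketch above only illustrates the boundary-value character of the problem.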

### *2.4. Evolutions of Modelling and Data Analysis Techniques*

Extracting properties of CRs from the gamma-ray data always involves some type of modelling. The modelling of CR propagation and interactions and the associated nonthermal emission can follow different avenues. We review in this Section different methods and discuss the most remarkable achievements and open questions.

The **template fitting** method is a widely used technique aimed at modelling observations of interstellar gamma-ray emission. In its simplest form, the key assumption is that CR densities vary mildly on the spatial scales characteristic of interstellar gas complexes. Therefore, gamma-ray emission associated with interstellar gas can be modelled as a linear combination of maps (templates) of gas column density, or a proxy thereof, split into different regions along the line of sight by using the Doppler-shift information of interstellar lines. The original implementation of this method used templates derived from the H I 21 cm line to account for atomic gas and from the 2.6 mm CO line as a surrogate tracer of molecular gas [155]. The linear combination coefficients, known as gamma-ray emissivities, encode information on CRs. Notably, the H I emissivity is the gamma-ray emission rate per hydrogen atom, i.e., the convolution of the CR densities with the gamma-ray production cross sections. The gamma-ray analysis can be performed over several independent energy bins to reconstruct the underlying CR spectrum. In recent years the template fitting method has been extended to account for other forms of interstellar gas (dark neutral medium, ionized gas) and other components of interstellar emission, notably IC emission; for the latter, templates need to be obtained using predictive models. More recently, the SkyFACT tool [156] introduced the possibility of allowing pixel-by-pixel variations within each template, guided by a penalized likelihood maximization that combines methods of image reconstruction and adaptive regression. Crucial uncertainties affecting the template fitting technique come from approximations in the construction of the templates, and from the cascade effects that those approximations induce in the component separation procedure.
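The core of the technique can be sketched in a few lines: model the expected counts in each pixel as a linear combination of templates and fit the coefficients by Poisson maximum likelihood. Everything below (templates, "true" emissivities, pixel count) is synthetic, invented purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Toy template fit: counts = q_HI * T_HI + q_CO * T_CO + iso * T_iso,
# with coefficients recovered by Poisson maximum likelihood.
npix = 2000
T_HI = rng.gamma(2.0, 1.0, npix)       # mock H I column-density template
T_CO = rng.gamma(1.0, 0.5, npix)       # mock CO intensity template
T_iso = np.ones(npix)                  # isotropic background template
true = np.array([1.5, 3.0, 0.8])       # "true" coefficients (invented)
counts = rng.poisson(true[0] * T_HI + true[1] * T_CO + true[2] * T_iso)

def neg_loglike(theta):
    # Poisson log-likelihood up to a theta-independent constant.
    m = theta[0] * T_HI + theta[1] * T_CO + theta[2] * T_iso
    return np.sum(m - counts * np.log(m))

res = minimize(neg_loglike, x0=[1.0, 1.0, 1.0],
               bounds=[(1e-6, None)] * 3, method="L-BFGS-B")
print(res.x)    # recovered coefficients, close to [1.5, 3.0, 0.8]
```

In real analyses the fit is performed per energy bin, the model is convolved with the instrument response, and the templates themselves carry the systematic uncertainties discussed above.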

A complementary, mirror-image technique consists of making assumptions about the spectra of the different gamma-ray emission components, based either on theory or on observations, and using the gamma-ray data to infer the morphology of the components. This technique is known as spectral component analysis or spectral template fitting [157,158]. This alternative incarnation of template fitting has been used less widely, owing to the absence of sharp spectral features in typical gamma-ray spectra and because the ∼5–10% energy resolution of gamma-ray telescopes makes it less effective. We warn the reader that caution is needed in dealing with the energy-dependent point spread function (PSF) of the instruments when applying this technique.

Another, completely **data-driven** technique aimed at studying gamma-ray emission is represented by the D3PO (denoised, deconvolved, and decomposed) inference algorithm presented in [159]. This method performs a Bayesian inference without the use of templates. Instead, it is designed to remove the shot noise, deconvolve the instrumental response, and finally provide estimates for the different flux components separately. This method is particularly suited to identifying and subtracting point sources from the data in order to study the remaining emission.

Let us now turn our attention to another widely adopted approach to gamma-ray modelling, which is the use of **predictive models** connected to the physics of CR propagation in galaxies. The rationale of these methods is to compute the equilibrium CR distribution in the galaxy by solving the transport equation (Equation (2)) presented in Section 2.3. We have discussed how this equation captures the variety of physical processes shaping the transport of the cosmic particles from their production at the accelerator sites to their eventual escape from the large-scale diffusive halo. Today we have at our disposal several public numerical codes, equipped with different numerical methods and astrophysical ingredients, aimed at solving it for all CR species in the Milky Way. The most important are (in chronological order): GALPROP [160–162], DRAGON [148,154,163,164], and PICARD [165,166]. A semi-analytical approach is instead followed by the USINE project [167].

The models based on this concept were remarkably successful in reproducing a variety of local CR data, and, for some of them, in modelling the non-thermal emission from the radio band all the way up to the GeV–TeV gamma-ray domain. In standard implementations the CR transport is typically described as isotropic, homogeneous diffusion characterised by a scalar diffusion coefficient with a power-law rigidity dependence. The amplitude and slope of the diffusion coefficient, together with a set of parameters associated with the astrophysical ingredients of the model (for instance, the *X*CO conversion factor between the CO emission intensity and the molecular gas column density) are typically fitted to a variety of data, including the accurate dataset of secondary/primary nuclei provided by AMS-02, and the gamma-ray maps provided by the *Fermi*-LAT.

However, a number of anomalies have been highlighted over the recent years both in CR and gamma-ray data: a break in the local proton, Helium, light and heavy nuclei spectra at ∼200 GV [49,50,168], an excess of high-energy positrons first highlighted by PAMELA [169] and then confirmed by *Fermi*-LAT and AMS-02 (with better statistics) [170], a hint of an antiproton excess near 80 GeV [171,172], and several observations in gamma rays that are discussed in detail in Section 4.

Some of these anomalies posed a challenge to standard implementations of Galactic CR models, and have spurred a variety of new developments.

• **Three-dimensional modelling.** The transport equation is usually solved under the assumption of cylindrical symmetry. In this widely used setup, which is a distinctive feature of standard Galactic CR model implementations, most astrophysical ingredients that enter the problem and influence the different types of CR interactions (for instance, the distribution of CR accelerators, the magnetic field strength, the interstellar gas, and low-energy photon distributions) are implemented in the form of (smoothly varying) functions of the Galactocentric radius *R* and the vertical coordinate *z* (perpendicular to the Galactic plane). A more realistic description of the interstellar medium, featuring a three-dimensional model for the spiral arm pattern of the Milky Way in the CR source term, was first introduced in the context of numerical modelling of leptonic CR species in [164], showing a relevant impact on the local electron spectrum. The consequences of such a three-dimensional pattern for CR hadronic species and for gamma-ray modelling were later discussed in [173,174]. The authors of [84] further investigated the phenomenological consequences of a spiral arm pattern in both the CR source distribution and the interstellar radiation field (see also [98]).


• **Anisotropic transport.** The anisotropic cascade of turbulent fluctuations can yield a strongly anisotropic CR transport. Moreover, since in the cascade most of the power is transferred to scales perpendicular to the mean-magnetic-field direction, the Alfvénic waves may actually be highly inefficient in confining CRs, as discussed extensively in [184,185], and pitch-angle scattering onto magnetosonic modes may play the dominant role. A non-linear theory of CR scattering onto magnetosonic modes was presented in [186,187]. Very recently, after the seminal attempt presented in [188], the authors of [189] provided the first comprehensive phenomenological study of such a theory, and showed how local CR data above the AMS break can be reproduced by solving the aforementioned diffusion equation with the diffusion coefficients computed ab initio from the theory, under reasonable assumptions on the free parameters involved. Recent radio observations support the notion that magnetosonic modes, under some circumstances, may drive CR transport [190].

• The **Monte Carlo approach.** The approach of solving the transport equation in a continuous setup, where all the relevant terms are provided as smoothly varying functions of the position, provides a well-defined prediction of the expected *average* flux of cosmic rays. However, the stochastic nature of sources may play a relevant role in several cases, depending on the type of particle and on the energy range. For instance, high-energy leptons may be highly sensitive to this aspect, especially at energies at which the characteristic time and length scales associated with their momentum losses and spatial diffusion become comparable with the mean spatial and time distance between two different CR injection episodes. An important question is therefore to assess the expected *variance* of the CR flux, and a useful technique to address this question is a Monte Carlo simulation. In this approach, many stochastic realisations are considered. In each of them, a random set of acceleration events is drawn, and the CR flux from each event is typically evaluated by means of an analytic formula and added up. Some relevant examples of works based on this technique are [191–194]. The observables that are investigated are the fluctuations of the CR spectrum and normalization, anisotropy, and chemical composition (especially around the knee).
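A minimal sketch of this Monte Carlo strategy: in each realisation we draw random burst ages and positions in a disc around the observer and add up the analytic point-burst diffusion Green's function, G(*r*,*t*) = exp(−*r*²/4*Dt*)/(4π*Dt*)<sup>3/2</sup>, at the observer's location. The burst rate, diffusion coefficient, and sampling cuts are invented toy values, and losses are ignored:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Monte Carlo of flux variance from source stochasticity.
D = 0.03        # diffusion coefficient, kpc^2/Myr (~1e28 cm^2/s, assumed)
rate = 100.0    # bursts per Myr in the sampled disc (toy value)
t_max = 10.0    # look-back time window, Myr
r_max = 3.0     # radius of the sampled disc around the observer, kpc

def one_realisation():
    n_src = rng.poisson(rate * t_max)
    t = rng.uniform(0.01, t_max, n_src)             # burst ages, Myr
    r = r_max * np.sqrt(rng.uniform(0, 1, n_src))   # uniform in the disc
    # Point-burst diffusion Green's function evaluated at the observer:
    g = np.exp(-r**2 / (4 * D * t)) / (4 * np.pi * D * t) ** 1.5
    return g.sum()

flux = np.array([one_realisation() for _ in range(500)])
print(flux.std() / flux.mean())   # fractional scatter between realisations
```

The scatter between realisations is dominated by rare, nearby, recent bursts, which is exactly why high-energy leptons (with short loss-limited horizons) are the species most affected by source discreteness.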

To conclude, let us discuss briefly another broad class of modelling/data analysis methods that concerns the treatment of **populations of unresolved sources** detected by instruments in the form of diffuse emission that needs to be disentangled from interstellar emission. A first approach to this challenge is to develop source population synthesis models constrained by bright sources already detected, and then use them to predict the unresolved component, e.g., [195–197]. More recently some authors have proposed to use the non-Poissonian spatial fluctuations in photon counts to infer the properties of unresolved sources in data [198]. However, the susceptibility of this technique to systematic uncertainties in the modelling of interstellar emission seems to be sizable, therefore the results should be taken with caution [199].

### **3. Gamma-Ray Emission from the Local Interstellar Medium: The Rosetta Stone of Cosmic-Ray Astrophysics**

Gamma-ray emission from local (within ∼1 kpc) interstellar matter is a valuable source of information to link direct CR measurements with measurements of interstellar gamma-ray emission. If the CR population were fully uniform on these spatial scales in the ISM around the Sun, then CR measurements and gamma-ray measurements would be the expression of the same local interstellar spectrum (LIS) of CRs, and could be derived from one another based on the theories of the gamma-ray production processes and of solar modulation. In practice, given the uncertainties existing on all fronts, we can combine data and theories to obtain the best observational constraints and answer questions such as: how representative are direct CR measurements of the average LIS, and how much do CR densities and spectra vary in the surroundings of the Sun, and on which spatial scales?

The latter question is key to assessing the validity of some hypotheses often made in standard implementations of CR propagation models, namely that the sources are smoothly distributed in space and time, and that transport properties are homogeneous. If those hypotheses were true we would expect CR nuclei densities to vary mildly on spatial scales corresponding to the O(kpc) diffusion lengths for the GeV–TeV CR population built up over durations >1 Myr. Non-uniformities in the local interstellar space therefore can inform us on the relevance of local effects on transport, and on the clustering in time and space of sources.
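The O(kpc) figure quoted above follows from the diffusion length *l*<sub>d</sub> = (4*Dt*)<sup>1/2</sup>; a quick order-of-magnitude check, assuming an illustrative *D* ∼ 10<sup>28</sup> cm² s⁻¹ at GeV rigidities:

```python
import numpy as np

# Diffusion length l_d = sqrt(4 D t) for GeV CRs, with an illustrative
# (not fitted) diffusion coefficient D ~ 1e28 cm^2/s.
cm_per_kpc = 3.086e21
Myr = 3.156e13                 # seconds per Myr
D = 1.0e28                     # cm^2/s (assumed typical value at ~GeV)

l_d = {t: np.sqrt(4.0 * D * t * Myr) / cm_per_kpc for t in (1.0, 10.0)}
for t, l in l_d.items():
    print(f"t = {t:4.1f} Myr  ->  l_d ~ {l:.2f} kpc")
# a population built up over ~1-10 Myr smooths over ~0.4-1 kpc scales
```

Any CR gradient on smaller scales than this would therefore signal localized sources or locally modified transport rather than the smooth, homogeneous picture.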

### *3.1. The Emissivity of Atomic Hydrogen*

The gamma-ray emissivity per H <sup>I</sup> atom *q*H I is the most important observable quantity in this context. On one hand, H I is diffuse and dense enough that it can probe CRs on different spatial scales with a lesser influence from localized CR sources. On the other hand, the H I brightness temperature provides a direct estimate of gas column densities, albeit with uncertainties in the optical depth correction (spin temperature).

Figure 2a,c show some recent measurements of the local H <sup>I</sup> emissivity *q*LIS based on *Fermi*-LAT data. Casandjian [200] derived an average emissivity within a few hundred pc from the solar system by analysing LAT data at intermediate Galactic latitudes (10◦ < |*b*| < 70◦). We compare it to the H <sup>I</sup> emissivity for several local clouds in three complexes that sample the same spatial region from recent studies: Chameleon [201], the Galactic anticentre region [202], and the Orion–Eridanus superbubble [203]. These three studies have been selected among the numerous results on the emissivity of local H I (including also, e.g., [204–211]) for a twofold reason. (1) They employ the same sophisticated methodology to separate emission from clouds along the line of sight and from different phases of interstellar gas, which should minimize biases in the template fitting procedure due to pile-up effects and variations of *X*CO and dust specific opacity from cloud to cloud and within clouds themselves [202]. (2) They are distributed over different distances from the solar system between ∼150 pc and ∼400 pc, they lie in different directions on opposite sides of the Sun, and they span a large range of column densities and rates of star formation.

In spite of this wide range of locations and properties, the dispersion among the different emissivity estimates is only ∼25%. The emissivity differences do not relate to the cloud altitude above the Galactic disc, nor to the cloud location with respect to the local spiral arm. Conversely, a sizeable fraction of the differences can be attributed to uncertainties in the H <sup>I</sup> optical depth correction. The average emissivity *q*ref was obtained for a uniform H I spin temperature of 140 K, while measurements from the individual regions are based on different uniform values ranging from 100 K to the optically thin approximation (according to the best likelihood in the gamma-ray fit). When the comparison is made using the same or similar spin temperature values the differences are largely reduced [201,203]. Another noteworthy aspect is that the emissivity of H I is separated from other components using the template fitting technique, with limitations due to the modelling of the other emission components and to the angular resolution of the data. Therefore, at present we cannot claim any significant variations of *q*LIS in intensity or spectrum in atomic gas within a few hundred pc around the solar system, and we conclude that the data constrain any fluctuations to within 25%. In order to use gamma-ray data to probe for smaller fluctuations in the local emissivity spectrum and intensity it is crucial to reduce uncertainties in the derivation of interstellar masses, in particular in the H I optical depth correction. Constraints could also be strengthened in the future by reducing the uncertainties in the measurement of the emissivity thanks to data with improved angular resolution, and by extending it to lower energies by means of one of the proposed MeV gamma-ray missions [107].

**Figure 2.** Top panels: gamma-ray emissivity per hydrogen atom in the local interstellar medium. Both panels: reference measurement *q*ref with the *Fermi* LAT that provides an average emissivity within a few hundred pc from the solar system [200] (combination of statistical and systematic uncertainties). (**a**) Measurements from a set of individual interstellar clouds (statistical uncertainties only) in Chameleon (blue [201]), Galactic anticentre (orange [202]), and the Orion–Eridanus superbubble (green [203]). (**b**) Calculations of the emissivity based on direct CR measurements for different choices of the H and He CR spectra, and of the hadronic gamma-ray production models: model A is based on CR nuclei spectra from [212] and hadronic production models from [128] and [123]; model B on CR nuclei spectra from [213] and the same hadronic production models as in model A; model C is based on the same CR nuclei spectra as in model A and on the hadronic production models from [123] and [129]. In all models Bremsstrahlung is calculated based on the electron and positron LIS by [97] and on the cross-section formulae by [214]. Hadronic emission is shown by the dashed lines, while the total including electron Bremsstrahlung is shown by the solid lines. See the text for more details on the models. Bottom panels: relative deviation of all measurements (**c**) and models (**d**) with respect to *q*ref.

Figure 2b,d compare the average emissivity measurement *q*ref to calculations based on estimates of the CR LIS inferred from CR direct measurements combined with models of solar modulation and other multi-wavelength constraints. We calculated the Bremsstrahlung emissivity based on the CR electron and positron LIS derived by Orlando [97] using the GALPROP code tuned by using the most recent experimental results from *Voyager 1* [57] outside the heliosphere, from AMS-02 [54] above a few tens of GeV, as well as complementary constraints from radio, microwave, X-ray, and gamma-ray emission, so that the estimate is independent of solar modulation models. As discussed by the author, the local emissivities measured by the LAT bring important constraints to the derivation of the all-electron LIS within their framework: we use the spectrum from the plain-diffusion (PDDE) model, which best fits the gamma-ray and multi-wavelength/messenger data. We adopt the formulas for the Bremsstrahlung cross section given in [214] for electron kinetic energies >2 MeV. We took into account target H and He in the ISM by assuming that the He/H fraction in the ISM is 9.6% [129]. Figure 2b shows that, for the all-electron LIS considered, Bremsstrahlung is relevant only below ∼1 GeV and is the dominant component below ∼100 MeV.

In the literature there is a controversy about the compatibility of direct measurements of CR nuclei and the emissivity above ∼1 GeV measured by the LAT, and it is not always clear whether this depends on the assumptions made on the CR LIS and hadronic interaction cross sections. Therefore, for hadronic emission produced in inelastic nucleon–nucleon collisions we have considered a few different models taken from the recent literature on the subject. Model A features the LIS of CR H, He, C, Al, and Fe derived in Boschini et al. [212] based on data from *Voyager 1* [57], PAMELA [215], AMS-02 [49,50], HEAO-3-C2 [216], and other instruments, using the GALPROP and HELMOD codes to model Galactic propagation and solar modulation, respectively. The CR LIS are used to calculate the gamma-ray emissivity per H I atom using the H-H, H-He, He-H, He-He, C-H, Al-H, and Fe-H cross sections by Kachelrieß et al. [128], derived using the QGSJET-II-04m Monte Carlo generator, an update of QGSJET-II [217] taking into account the most recent accelerator data. The cross sections are only available above minimum energies ≥5 GeV, depending on the interaction considered. For H-H we combined the results from QGSJET-II-04m at proton energies >20 GeV with the nondiffractive part of the cross-section model by Kamae et al. [123] at lower energies, since this combination best reproduces experimental low-energy accelerator data [125]. For other interactions, below the minimum energy available in Kachelrieß et al. [128] we rescaled the H-H cross sections by the ratio of the cross section at the minimum available energy to the H-H cross section at the same energy per nucleon. We again assumed a He/H fraction in the ISM of 9.6%.
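The low-energy extension of the heavier-species cross sections just described can be sketched in a few lines. This is a minimal illustration only: `sigma_pp` below is a toy power-law stand-in, not the actual Kamae et al. or QGSJET-II-04m parametrisation, and all numbers are assumptions.

```python
import numpy as np

# Toy stand-in for the p-p gamma-ray production cross section at a fixed
# gamma-ray energy, as a function of energy per nucleon [GeV]; a real
# calculation would use the Kamae et al. / QGSJET-II-04m tables instead.
def sigma_pp(e_nuc):
    return 1.0e-26 * (np.asarray(e_nuc, dtype=float) / 10.0) ** 0.1

def sigma_low_energy(e_nuc, sigma_x_emin, e_min):
    """Extend a nucleus-nucleus cross section below its minimum tabulated
    energy e_min by rescaling sigma_pp with the fixed ratio
    sigma_X(e_min) / sigma_pp(e_min), as described in the text."""
    ratio = sigma_x_emin / sigma_pp(e_min)
    return ratio * sigma_pp(e_nuc)
```

By construction, the extension joins continuously onto the tabulated value at `e_min`.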

Model B features the same gamma-ray production models but different H and He LIS, derived by Corti et al. [213] based on a broadly overlapping set of experimental data but on a much simpler approach, which consists of seeking a parametrized formula to encode the LIS and treating solar modulation with a modified rigidity-dependent force-field approximation. The resulting CR LIS differ by at most 10% with respect to those by [212] in the rigidity range 1–10 GV, where solar modulation still plays a significant role but there are no direct constraints on the LIS from *Voyager*. This translates into a 5% difference in the resulting gamma-ray emissivity spectrum below a few GeV after convolution with the gamma-ray production cross sections. An additional source of difference between models A and B is that in model B we neglect the interactions of CR species heavier than He, which contribute ∼1–3% of the emissivity in model A.
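For reference, the plain force-field approximation mentioned above can be written in a few lines. This sketch assumes CR protons (Z = A = 1) and a generic LIS function; it is not the modified rigidity-dependent variant actually used in [213].

```python
import numpy as np

M_P = 0.938  # proton rest mass [GeV]

def force_field(j_lis, e_kin, phi):
    """Solar-modulated proton intensity in the force-field approximation:
    j_mod(E) = j_lis(E + phi) * E (E + 2 m_p) / [(E + phi)(E + phi + 2 m_p)],
    with kinetic energy E [GeV] and modulation potential phi [GV]
    (for Z = A = 1 the potential energy loss equals phi in GeV)."""
    e_kin = np.asarray(e_kin, dtype=float)
    e_lis = e_kin + phi
    return (j_lis(e_lis) * e_kin * (e_kin + 2.0 * M_P)
            / (e_lis * (e_lis + 2.0 * M_P)))

# A falling power-law LIS (toy example) is suppressed at low energies:
lis = lambda e: e ** -2.7
```

Setting phi = 0 recovers the LIS exactly, and the suppression grows towards low energies, which is why *Voyager 1* data from outside the heliosphere are so valuable below a few GeV.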

Model C employs the same CR H and He LIS spectra as in model A, but different gamma-ray production models, namely the H-H cross sections by Kamae et al. [123], including the diffractive part, and the scaling factors by Mori [129], based on the DPMJET-3 Monte Carlo generator, to scale the H-H emissivity in order to account for heavier elements, namely CR He nuclei and He, CNO, Mg-Si, and Fe in the ISM. The scaling factors are available only for CR energies of 10 GeV/nucleon and above, therefore we neglect their variations with energy, which should be modest at least in the energy range >10 GeV [130] but are expected to become more relevant in the less understood energy range <10 GeV (see Figure 1 of [129], which only shows results down to energies of 5 GeV/nucleon, below which they become inaccurate). We note that the contribution from target nuclei heavier than He, not accounted for in models A and B, represents ∼2% of the total emissivities in model C.
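Schematically, models A–C all evaluate the same quantity: the emissivity per H atom is the CR intensity convolved with the differential gamma-ray production cross section, optionally scaled by a constant nuclear enhancement factor as in model C. The sketch below uses the trapezoidal rule on a toy grid; the intensities, cross-section values, and the factor 1.84 are all illustrative assumptions.

```python
import numpy as np

def emissivity(e_p, j_p, dsigma, eps_nuc=1.84):
    """Gamma-ray emissivity per H atom at one gamma-ray energy:
        q = eps_nuc * 4*pi * Integral dE_p  J(E_p) dsigma/dE_gamma(E_p),
    with the integral done by the trapezoidal rule on the grid e_p.
    j_p and dsigma are arrays sampled on e_p; eps_nuc is a constant
    nuclear enhancement factor (model C style, value illustrative)."""
    integrand = j_p * dsigma
    return eps_nuc * 4.0 * np.pi * np.sum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e_p))

# Toy check: constant intensity and cross section over 1-10 GeV.
e_p = np.linspace(1.0, 10.0, 91)
q = emissivity(e_p, np.ones_like(e_p), 1.0e-27 * np.ones_like(e_p))
```

For constant inputs the integral reduces to the width of the energy interval, which makes the sketch easy to verify by hand.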

To summarise, in the hadronic-dominated energy range >100 MeV we find variations of ∼10% between the different predictions of the emissivity based on direct CR measurements, larger at lower energies. The differences are comparable to the uncertainties in the most precise emissivity measurements and a factor of ∼2 smaller than the dispersion between measurements for different individual clouds.

For gamma-ray emission in the energy range >10 GeV, where the differences between alternative estimations of the CR LIS and between alternative hadronic production models are small, the average emissivity *q*ref exceeds the predictions by 20–30%. This broadly agrees with several independent results in the literature [97,116,218]. The recent precise measurements by AMS-02, in particular of He [50], rule out earlier claims [200] that the average emissivity measured by the LAT is consistent within uncertainties with direct CR measurements, as pointed out by Orlando [97]. We also stress that the evaluation of the contributions from species heavier than H must take these measurements into account, which, for instance, is not necessarily the case when using *enhancement factors* taken from the literature such as the one provided by [129]. The latter aspect is discussed in greater detail by [130].

However, the difference between the predictions and the average emissivity is comparable to the dispersion between different clouds, which, as discussed above, is largely attributable to uncertainties in the H I optical depth correction performed assuming a uniform spin temperature, and possibly to other analysis features. Once the dispersion of ∼25% among different regions and clouds is taken into account, whether it is interpreted as a systematic uncertainty related to the extraction of the emissivity or as an effect of real fluctuations in CR densities on spatial scales smaller than a few hundred pc, it should for the moment still be considered viable that direct CR measurements are in agreement with the local H I gamma-ray emissivities.

Therefore, to better connect direct CR measurements and gamma-ray observations of the local ISM, it is crucial to reduce the uncertainties in the extraction of the emissivities and in the derivation of interstellar masses. At the same time, to reduce the uncertainties in the derivation of the gamma-ray emissivities from direct CR measurements it is paramount to improve our knowledge of the gamma-ray production cross sections, especially at low energies approaching the kinematic threshold for pion production, where currently only an approximate scaling of the H-H cross sections is readily accessible to account for interactions of heavier species. To this end it would be extremely useful to pursue experimental accelerator campaigns going beyond those already undertaken, to further improve Monte Carlo generators, and to make more results (e.g., [127]) publicly available in the form of parametric/tabulated cross sections for gamma-ray production.

### *3.2. Molecular Clouds and Their Spectra*

An alternative approach involves studying gamma-ray emission from molecular clouds. Molecular clouds provide localized (few tens of pc) targets for CR interactions with large masses. However, with observations of molecular clouds two additional classes of complications arise.


the most important effect. Streaming instabilities driven by CR pressure gradients can alter the CR transport regime from diffusive to advective and so suppress the CR pressure by 10% at GeV energies [221]. The peculiar magnetic field configuration in molecular clouds, via magnetic mirroring coupled with energy losses, can lower the CR densities by a factor of 2–3 in molecular cores in the MeV energy range, while CR enhancements due to magnetic focussing appear to be a less important effect [222]. Damping of magnetic turbulence by ion-neutral friction, again coupled with energy losses, can reduce the CR fluxes below 100 MeV kinetic energies, especially for electrons [102]. We note that all propagation effects mainly affect CRs at energies <1 GeV, for which constraints from gamma-ray observations are looser.

To date there is a controversy in the literature about the uniformity of the spectral shape of gamma-ray emission from molecular clouds in the local interstellar space as observed with the *Fermi* LAT. Although most studies agree on the uniformity of the spectra and on their compatibility with the spectrum of H I, Yang et al. [223] claimed low-energy CR enhancements in the Orion A, Orion B, and Chameleon clouds, not confirmed by subsequent studies [201,224]. More recently, Baghmanyan et al. [225] claimed spectral deviations with respect to the LIS for the molecular clouds in the Aquila rift, rho Ophiuchi, and Cepheus. For the latter two, the results are at odds with previous studies based on smaller datasets [205,209,224]. The devil seems to lie in the details, that is, in how the different studies account (or not), at the analysis level, for variations of CR densities in different structures along the line of sight and in the Galactic background, and for variations/uncertainties in *X*CO and/or dust-to-gas conversion factors. Analysis assumptions about all of these aspects, combined with the energy-dependent PSF of the LAT, may easily produce distortions in the derived spectra via cascade effects in the template-fitting procedure. The claims of spectral variations therefore require further investigation to assess their robustness.

By stacking the spectra of several nearby molecular clouds measured by the *Fermi* LAT, Neronov et al. [224] could highlight the existence of a break in the CR nuclei spectrum at a rigidity of 18<sup>+7</sup><sub>−4</sub> GV, which is consistent with the direct measurements from *Voyager 1* [57] and AMS-02 [49,50] in a rigidity range where the impact of solar modulation is most relevant. Neronov et al. [224] speculated that this coincides with the transition from the Galactic-scale steady-state population observed throughout the disk of the Milky Way (see Section 4) to a local population of CRs driven by stochastic injection around the Sun, localized in space and time. We note, however, that uncertainties in the hadronic cross sections were not taken into account in this work.

First results at TeV energies were recently published by the HAWC collaboration [226]. No detections are reported for a set of seven nearby molecular clouds. Upper limits on their gamma-ray fluxes are less than an order of magnitude larger than expectations based on the extrapolation of direct measurements by AMS-02. For a stacked analysis assuming a power-law CR spectrum with index 2.7, the upper limit on the CR density is approximately at the level predicted by direct measurements. This implies that, if the CR spectrum follows a simple extrapolation of what is directly measured at lower energies, HAWC should reach a detection of nearby clouds by doubling the exposure, or even sooner taking into account the ongoing detector upgrades. Complementary results can be expected from LHAASO and, eventually, from SWGO (although most of the nearby molecular clouds in the Gould Belt are visible from the northern hemisphere).
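The "doubling the exposure" estimate follows from simple counting statistics: in the background-dominated regime the significance of a signal grows as the square root of the exposure. A back-of-the-envelope sketch (the current significance used in the example is illustrative, not a number from the HAWC analysis):

```python
def exposure_for_detection(sig_now, sig_target=5.0, t_now=1.0):
    """Exposure needed to reach sig_target, given a significance sig_now
    after exposure t_now, assuming significance scales as sqrt(exposure)
    (background-dominated regime)."""
    return t_now * (sig_target / sig_now) ** 2

# A hypothetical stacked signal currently at ~3.5 sigma would need
# (5 / 3.5)**2 ~ 2 times the exposure for a 5 sigma detection.
```

Detector upgrades that raise the effective area or suppress background shorten this further, hence "or even sooner".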

### **4. Large-Scale Interstellar Gamma-Ray Emission: Tracing Cosmic Rays throughout Galaxies**

The large-scale distribution of CRs in galaxies encodes information on their injection sites and spectra, and on the transport mechanisms and their interactions with other components of the ISM. It has long been known that gamma-ray data point to the existence of a large-scale population of CRs in the Milky Way disk with properties similar to those observed near the Earth (see, e.g., [227]), and of CR populations with somewhat different properties in the Magellanic Clouds [12,13]. Recent years have seen impressive developments in GeV observations of the Milky Way and local-group galaxies. Their CR populations show diverse and sometimes unexpected properties. Variations in the densities and spectra of CRs across the Galactic disks and halos, with departures from expectations based on standard implementations of CR models, combined with ubiquitous unexplained residual features, call into question our understanding of particle transport and its microphysical foundation. At the same time, the first sub-MeV and TeV observations have opened new and complementary windows to constrain the large-scale CR distribution in the Milky Way.

### *4.1. Cosmic-Ray Distributions through the Disks of Galaxies*

### 4.1.1. The Milky Way

Much of the recent progress is based on data from the *Fermi* LAT. A first avenue to infer the large-scale CR distribution from the observations is through the emissivity of interstellar gas, which can be extracted using the template-fitting technique for multiple regions along the line of sight thanks to the distance proxy provided by the Doppler shift of atomic or molecular lines (Section 2.4). This approach has been applied to two regions towards the outer Galaxy in the second and third Galactic quadrants, for which the separation of different structures and spiral arms along the line of sight is remarkably good and free from ambiguity [205,206]. These studies strengthened the observational evidence for the long-known *gradient problem* [227], i.e., the fact that the gamma-ray emissivity only mildly decreases from the position of the Sun up to Galactocentric radii of ∼15 kpc, while the number density of putative CR sources shows a steep decline. Furthermore, in the third quadrant the data allow only a decrease <20% in the low-density region between the local and Perseus arms, ruling out the strong coupling between CR and ISM densities invoked by some early modelling efforts [228].

The distribution of CRs throughout the entire Galactic disk was derived by several authors by analysing LAT data for the entire sky with gas templates built by separating H I and CO in Galactocentric rings [229–231] and using the H I emissivity to infer the underlying CR spectra. These studies show that the CR proton density above 10 GeV varies by a factor of a few across the disk of the Milky Way, with a peak at ∼4 kpc from the Galactic centre and a mild decrease as a function of radius beyond 5 kpc, which confirms and extends the trend inferred from the dedicated studies of the outer Galaxy. More surprisingly, LAT data have revealed a progressive hardening of the CR proton spectrum toward the inner Galaxy, with the power-law spectral index at 10 GeV going from the local value of ∼2.7 near the Sun to ∼2.5 at a few kpc from the Galactic centre. For these studies, as well as for the outer Galaxy, a major source of uncertainty is due to the H I opacity correction (spin temperature), which is associated with a ∼30% systematic uncertainty in the H I emissivities [206,229]. Another important limitation is the possible contamination by diffuse emission from populations of unresolved sources. The fraction of unresolved sources in the total diffuse emission is estimated to be 3% at 1 GeV [232]. We expect this fraction to increase as a function of energy, but based on reasonable assumptions about the source populations the conclusions on the general trends in CR densities and spectra inferred from LAT data remain unchanged [231]. Other sources of uncertainty include the modelling of IC emission and detected gamma-ray sources.

Furthermore, the impact of the assumption that CR densities are axisymmetric, and, in particular, that the near and far regions within each ring at radii smaller than the solar circle share the same CR densities, needs to be checked against observations not affected by the kinematic ambiguity. A complementary approach that may overcome this limitation is the use of well-localized targets. Aharonian et al. [233] derived the emissivity of nineteen giant molecular clouds located at Galactocentric distances up to 12 kpc. The trend in the inferred CR proton densities as a function of Galactocentric radius is generally consistent with earlier works based on ring emissivities [229,230], but the large uncertainties due to the separation of the clouds from foreground/background gas and to the conversion of CO intensity to H2 column density make the results too uncertain to draw robust conclusions.

A few clouds show spectral deviations from the general trend that may point to localized CR excesses, e.g., due to a nearby accelerator. More recently the same authors analysed LAT observations of nine clouds located at Galactocentric distances of 1.5–4.5 kpc, employing dust opacity (inferred from thermal emission) as a total gas tracer (with no separation along the line of sight), and obtained results at odds with the ring analyses [234]. However, we warn the readers that the latter results rely on the assumption of a constant dust specific opacity across the Milky Way with an uncertainty of 20%, while local clouds show that variations of a factor ∼3 related to the evolution of dust grain properties are possible (Section 3.2); moreover, an increase of the dust specific opacity by a factor of a few in the inner Galaxy is expected from the correlation of dust-to-gas ratio with the metallicity gradient as a function of Galactocentric radius observed for external galaxies [235–237]. The different measurements of the CR proton density and spectrum across the Galactic disk are summarised in Figure 3.

A second avenue to constrain the large-scale CR distribution consists of comparing the data directly to the outcome of predictive models. Ackermann et al. [238] compared predictions by GALPROP to the LAT data for the entire sky, varying hypotheses on astrophysical inputs such as the CR source distribution or *X*CO. The results are consistent with the trends seen in the template analyses, notably the flat CR profile in the outer Galaxy and the higher/harder emission toward the inner Galaxy. Although the data/model agreement is reasonable and demonstrates that standard implementations of Galactic CR propagation models describe the gamma-ray data within ∼30%, regions of extended residuals appear for all of the models considered. Residual features are discussed later in Section 4.3. The study [238] also demonstrates the high level of degeneracy between different inputs to predictive models, and therefore the importance of using different complementary approaches to analyse and interpret the data.

Predictions from GALPROP were also compared to data from *INTEGRAL* SPI, which reveals the existence of diffuse emission from the inner 60◦ of the Galactic disk in the soft gamma-ray band from 20 keV to 2 MeV. The diffuse gamma-ray emission above 60 keV is consistent with a dominant origin from inverse-Compton scattering of CR electrons, and connects spectrally with emission measured at higher energies by COMPTEL and pair-conversion telescopes [239–241]. These data tend to favor models with an important contribution from secondary electrons and positrons, which is not necessarily the case for the electron spectrum measured near the Earth [97]. A new space mission for MeV astronomy holds the potential to deepen our understanding of the large-scale properties of IC emission and bridge *INTEGRAL* and *Fermi* observations at a sensitivity largely improved compared to COMPTEL [107].

Recently we gathered the first measurements of diffuse emission in the energy range from hundreds of GeV to one PeV in different regions of the Galactic disk, thanks to Milagro [242], H.E.S.S. [31], ARGO-YBJ [243], HAWC [244], and Tibet AS*γ* [245]. In this energy range the key challenge, besides separating diffuse gamma-ray emission from the charged-particle background, is to disentangle the interstellar component from the contributions of unresolved source populations. For example, Steppa and Egberts [197] estimate that ∼30% of the diffuse emission measured by H.E.S.S. can be attributed to unresolved source populations, and Amenomori et al. [245] estimate the same fraction to be 13% for the measurement at energies above 100 TeV with Tibet AS*γ*. While the measurements generally exceed expectations based on the local CR spectrum and there is now firm evidence of emission from CRs up to the knee, there are tensions between observations with different instruments [246], and better constraints on the unresolved source contribution become key to using TeV data to investigate CR properties such as the spectral hardening in the inner Galaxy [247]. At the same time, upcoming measurements with CTA and LHAASO [248,249] and, possibly, SWGO will provide much improved sensitivity, and enable even greater complementarity with neutrino measurements.

### 4.1.2. Implications of the Gradient Problem and Inner-Galaxy Hardening

The gradient problem and inner-Galaxy hardening observed by *Fermi* have stimulated a lively debate on our understanding of CR transport in the Milky Way. As far as the gradient problem is concerned, the assumption of a very extended ∼10 kpc diffusive halo seems to alleviate the discrepancy. However, this solution is in tension with the most recent observations of CR isotopic abundances and, possibly, with the radio/gamma-ray emission from the Milky Way, as discussed in Section 4.2 and also covered, for instance, in Evoli et al. [175] and references therein. Ad hoc assumptions on the distribution of CR sources may help to mitigate the problem, but suffer from similar difficulties when confronted with catalogues of SNRs and pulsars, which are expected to trace the radial distribution of the injected CR power.

**Figure 3.** (**a**) Gamma-ray emissivity/CR proton density as a function of Galactocentric radius derived for the entire Milky Way using ring analyses [229–231], for the outer Galaxy [205,206], and for a sample of giant molecular clouds [233]. Values are normalised for each set of measurements by the value for the region including the solar circle (*R* = 8.5 kpc). For giant molecular clouds the normalization is the average over all clouds in the Gould Belt (*R* between 8.2 kpc and 9.1 kpc). (**b**) CR proton spectral index as a function of Galactocentric radius derived in a subsample of the analyses shown in the top panel. We remark that the gamma-ray energy range considered varies between the different analyses, but, whenever readily available, we show in the plot the inferred CR proton density and spectral index at 10 GeV. In both panels we include predictions from some of the models discussed in the text, namely a standard CR model implementation [238] (red), two models with non-homogeneous diffusion tuned to reproduce the CR density profile in the outer Galaxy [175] (purple) and the inner-Galaxy hardening [250] (pink), and the non-linear transport model by Recchia et al. [251] with an exponential cutoff of the magnetic field strength at large *R* (grey). In the top panel we also show the putative CR source profile taken from [252] (black) as a useful reference.

A promising attempt to solve the gradient problem based on non-homogeneous diffusion, also in connection with the anisotropy problem (i.e., the larger dipole anisotropy predicted by standard implementations of the Galactic CR transport model with respect to the observed one) was presented in Evoli et al. [175]. In that paper, a correlation between the CR source density and the normalization of the diffusion coefficient is invoked. The proposed solution (visualized in Figure 3, upper panel, "non-homogeneous diffusion") stems from the following physical argument: a larger turbulence level is expected in the regions of the Galaxy characterised by a larger density of CR accelerators, in particular, along the Galactic plane, in the range of Galactocentric radii close to the so-called molecular ring. Given the topology of the large-scale regular magnetic field, and the overall geometry of the problem, the CRs accelerated in the Galactic plane mainly escape in the vertical direction, perpendicular with respect to the regular field. Given the increase of the perpendicular diffusion coefficient with increasing turbulence level that is highlighted in several numerical simulations (see for instance [253–255]), the aforementioned correlation naturally follows from these considerations.

As far as the hardening problem is concerned, a phenomenological model where the trend is obtained as a result of a smoothly varying scaling of the diffusion coefficient with respect to rigidity was presented in Gaggero et al. [250] (see Figure 3, lower panel, "non-homogeneous diffusion"). A physical interpretation of this trend in terms of specific aspects of transport physics was recently presented in Cerri et al. [177]. In this analysis, the argument is once again based on the nature of perpendicular escape and the geometry of the magnetic field. The starting points are the following considerations: *(i)* the numerical simulations that aim at characterising CR transport in pre-existing (Alfvénic) turbulence, already mentioned above, suggest a different scaling with rigidity as far as parallel and perpendicular diffusion coefficients are concerned, with the parallel transport featuring a harder rigidity dependence; *(ii)* the state-of-the-art models of the large-scale Galactic magnetic field (see for instance [91]) seem to suggest the presence of an X-shaped poloidal component in the inner Galaxy; hence, the vertical escape of CRs may be parallel in the inner Galaxy, and perpendicular in the outer Galaxy, where the field is expected to follow the spiral pattern on the Galactic plane, with a less prominent vertical component. These facts imply a progressively harder scaling of the propagated CR spectral index, as simulated by [177] with an axisymmetric, fully anisotropic version of the DRAGON code.

Following a very different line of thought, another recent work [251] attempts to explain both the gradient and the spectral hardening problems at the same time. In this work the non-linear effects mentioned in Section 2.4 are exploited to provide an explanation for both anomalies. The idea is that the regions with a larger density of accelerators feature a larger CR gradient. Hence, the turbulence growth rate associated with the streaming instability is larger and, as a consequence, the non-linear phenomenon of CR self-confinement is greatly enhanced. In the GeV domain CR escape is actually the result of the competition between rigidity-dependent (possibly self-generated) diffusion and rigidity-independent advection. More efficient self-confinement implies a lower diffusion coefficient associated with self-generated turbulence. Hence, advection takes over up to larger rigidities, and the propagated spectrum is less modified owing to the energy-independent nature of this process. The inner regions of the Galaxy are therefore expected to feature a harder CR spectral slope, closer to the one initially injected in the ISM. A careful numerical treatment of this problem actually shows that the gradient problem may also be fixed, provided that the magnetic field is assumed to drop exponentially at large radii. The model is represented in Figure 3, in both the lower and upper panels ("non-linear transport").
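The competition invoked here can be made quantitative by comparing escape timescales: advection dominates as long as t_adv = H/v_A is shorter than the diffusive escape time t_diff = H^2/D(E). A toy estimate of the transition energy, with all parameter values purely illustrative:

```python
def transition_energy(D0, delta, E0, vA, H):
    """Energy above which diffusive escape (t_diff = H**2 / D(E)) becomes
    faster than advective escape (t_adv = H / vA), i.e. where
    D(E*) = vA * H, for a power-law diffusion coefficient
    D(E) = D0 * (E / E0)**delta."""
    return E0 * (vA * H / D0) ** (1.0 / delta)

# Illustrative numbers: D0 = 1e28 cm^2/s at E0 = 1 GeV, delta = 0.5,
# Alfven speed vA = 10 km/s and halo half-height H = 4 kpc (~1.23e22 cm).
E_star = transition_energy(1.0e28, 0.5, 1.0, 1.0e6, 1.23e22)
```

A lower (self-generated) diffusion coefficient pushes the transition to higher energies, which is the essence of the argument above: stronger self-confinement keeps transport advection-dominated, and hence spectrally harder, up to larger rigidities.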

The interpretation of the spectral hardening in terms of the interplay between advection and non-linear CR confinement has an important consequence. This kind of solution is valid only in the low-energy range where the streaming instability plays a major role, and cannot be invoked if the spectral trend were to be clearly confirmed at rigidities above ∼100 GV. The current data do not allow us to reach a firm conclusion on this point. However, the analysis presented in [231], already discussed above, seems to point towards the presence of a spectral hardening even in high-energy *Fermi*-LAT gamma-ray data. On the other hand, the explanations based on the different scaling relations of perpendicular and parallel transport are expected to hold at all energies.

The different explanations of the inner-Galaxy hardening are expected to result in different phenomenological implications for the multi-TeV gamma-ray interstellar emission. In particular, a harder hadronic CR spectrum towards the Galactic Centre that extends up to the multi-TeV domain (not covered by *Fermi*-LAT data) would imply a significantly larger diffuse gamma-ray emission from the inner Galactic plane. Different scenarios were put forward in order to bracket the uncertainties in this context, mainly due to the extrapolation of the analyses based on *Fermi*-LAT data, and the most "optimistic" predictions were shown to saturate the diffuse emission from the Galactic Ridge measured by H.E.S.S., leaving less room for the contribution from a central accelerator [256]. These scenarios featuring the hardening in the multi-TeV domain are compatible with the interpretation based on the idea of anisotropic transport, and show some tension with the interpretation that stems from the non-linear propagation models, given that self-generated diffusion models predict a transition to transport dominated by background turbulence at ∼100 GeV, while, as mentioned before, the peculiar scaling relations associated with anisotropic transport extend to larger energies. However, more data and further studies on the modelling side are needed. Currently operating and forthcoming air-shower and Cherenkov experiments (in particular LHAASO, HAWC, SWGO, and CTA) will help to shed light on this issue. In particular, if the presence of a harder spectrum in the inner Galaxy were to be confirmed in the TeV–PeV range, the interpretation of this effect in terms of the interplay between advection and self-confinement would be disfavored. On the modelling side, as pointed out for instance in [257], more advanced simulations of the multi-TeV gamma-ray emission from the Galactic plane will be needed, also taking into account the crucial effect of absorption, which plays a dominant role especially above 50–100 TeV.

We remark that, if confirmed, the presence of a progressive spectral hardening of the hadronic CR population in the inner Galaxy also has interesting phenomenological consequences in a multi-messenger context. In fact, as pointed out for instance in [258], phenomenological models that reproduce this trend by featuring a radially dependent slope of the diffusion coefficient predict a neutrino flux in the inner Galaxy that is 2–5 times larger compared to conventional predictions based on a constant spectrum across the Galactic plane (see also [259]). This may explain up to 25% of the neutrino events measured by IceCube, and may suggest that a detection of a positive correlation of the IceCube events with the Galactic plane might be just around the corner. This hypothesis was extensively tested by the ANTARES and IceCube Collaborations. The recent analysis [100] provided joint constraints that start to challenge this scenario. The constraints on a Galactic component are based on 10 years of ANTARES showers and tracks (218 showers in total), and 7 years of IceCube tracks (730,130 events, with 191 events expected from the optimistic Galactic model). The results are in mild tension with the most "optimistic" versions of these phenomenological models, and open the possibility of testing this kind of prediction in the near future. A more recent analysis [260], based on seven years of IceCube cascade data (characterised by an interaction vertex inside the detector), outlined a 2*σ* hint for a Galactic component consistent with the optimistic models. Future studies are needed to shed light on this issue. A firm detection of a Galactic neutrino component would represent a remarkable confirmation of the presence of a hard hadronic CR population extending above the TeV domain, and would greatly help in shedding light on the microphysical processes that give rise to this anomaly.

### 4.1.3. Local-Group Galaxies

Moving to external galaxies, the LMC is the best target when it comes to studying how gamma-ray emission connects to the components of a galactic ecosystem, owing to its proximity (distance of ∼50 kpc) and favourable geometry (a disk-like structure with a low inclination angle of ∼30◦). The emission at 1 GeV is dominated by radiation seemingly correlated with the gas disk, as could have been expected based on what we see in the Milky Way. The gamma-ray distribution in the disk could be fitted assuming an emissivity profile decreasing by a factor of 2–3 from the centre to the outskirts, with a peak value of about 30% of the emissivity measured in the solar system neighbourhood [261]. This lower value is thought to arise from the smaller size of the LMC and a correspondingly smaller confinement volume. Again, this is very reminiscent of the Milky Way (see above in this section). More surprising, though, is the fact that about 50% of the emission at 10 GeV is contributed by extended components of unknown origin, not evidently correlated with gas or recent star formation, therefore implying localized enhancements in the CR densities by factors of 2 to 6, or an alternative explanation not related to interstellar emission [261]. To date, the origin of such features is still unexplained, and revisiting the gamma-ray emission from the LMC with the *Fermi*-LAT data set, now twice as large, and the increased exposure with H.E.S.S. would be timely.

The SMC is another promising target due to its proximity (distance of ∼62 kpc), but the geometry of the galaxy is much more intricate, with an irregular shape elongated along the line of sight over 20 kpc [262], which complicates the interpretation of observations. Significant gamma-ray emission is detected, but no obvious correlation with gas or star formation is observed [263,264]. If nevertheless interpreted as interstellar emission, the observed flux implies an average CR density of about 15% of that measured in the Solar system neighbourhood. However, it was estimated that a large fraction of the measured flux could be accounted for by an unresolved population of pulsars, which would imply an even lower CR density [265].

At a much larger distance of 785 kpc, another target of choice is M31 (Andromeda), which, as a grand-design spiral galaxy, more closely resembles our Milky Way and may allow a more direct analogy. Extended emission from the galaxy is detected, but the signal is confined to the inner regions, within 5 kpc from the centre. It does not fill the disk of the galaxy and, in particular, does not correlate with the regions rich in gas or star-formation activity that are mainly located in a large ring at ∼10 kpc from the centre. Emission from the disk at large is, however, not strongly excluded, and may be present at a level of up to 50% of the currently detected flux [266]. The different gamma-ray emission distribution in M31, compared to the Milky Way, can be interpreted as resulting from its global properties: the star-formation rate in M31 is about 10 times lower than in the Galaxy, which would decrease the contribution from the star-forming disk, while its bulge is 5–6 times more massive, which would enhance the contribution from old stellar populations gathered in the central regions.

### *4.2. Cosmic-Ray Distributions through the Halos of Galaxies*

As briefly described in Section 2.3, within the standard galactic CR paradigm it is assumed that particles are confined diffusively in a magnetized halo with a size of a few kpc for the Milky Way. Yet, the formation of the halo and the escape at its boundary are still poorly understood, and are usually modelled by imposing free escape at a fixed height *zmax*. The value of *zmax* is then treated as a free parameter in CR models adjusted to reproduce CR elemental/isotopic abundances as well as the spectra of CR species measured locally. In this framework, recent measurements with AMS-02 and other instruments point to values of *zmax* = 5 kpc, with sizeable uncertainties of a few kpc [194,267].

Until recently, gamma-ray and radio emission from CR interactions was used to constrain propagation in the Milky Way halo only indirectly, via aggregate properties (longitude, latitude, and radial profiles and spectra), which is tricky due to severe degeneracies with other unknowns such as the source distribution, e.g., [93,238]. Conversely, kpc-wide synchrotron halos around external edge-on galaxies have been observed in radio for almost thirty years, e.g., [268].

The last decade has seen two major advances in the understanding of the halo and related observational constraints, especially from gamma rays: more direct observations in gamma rays, thanks to LAT observations of clouds at large distances from the Galactic plane and, possibly, of the halo of M31; and the emergence of models of galactic CR halos based on the microphysics of magnetic turbulence and particle transport.

The improved sensitivity of the LAT made it possible to measure the emissivity of atomic clouds at large distances from the disk. These notably include a set of high- and intermediate-velocity clouds that span heights from a few hundred pc to several kpc [269], with robust distance determinations based on stellar brackets [270]. Their emissivities, shown in Figure 4, testify to a significant decrease of CR densities as a function of distance from the Galactic plane. On one hand, this represents a new and robust observational test of the Galactic origin of CRs below the knee. On the other hand, the emissivity measurements can be compared to predictions from CR propagation models and directly constrain the CR gradient in the halo. A fit to these measurements yields a *zmax* value of ∼2 kpc, with an uncertainty of a few kpc [269]. In particular, the upper intermediate-velocity Arch, at a height of 0.7–1.7 kpc above the disk, was found to have an emissivity <45% of the local value at 95% confidence level.
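The kind of constraint just described can be illustrated with the simplest steady-state picture: a thin CR source disk at z = 0, pure diffusion, and free escape at |z| = *zmax* give a linearly falling density, n(z)/n(0) = 1 − |z|/*zmax*, so an emissivity upper limit at a known height directly bounds *zmax*. A minimal sketch (illustrative only; the actual analysis in [269] is more elaborate):

```python
# Toy plane-parallel halo: thin CR source disk at z = 0, pure diffusion,
# free escape at |z| = zmax => linear profile n(z)/n(0) = 1 - |z|/zmax.
# (Illustrative only; real fits include losses, advection, geometry.)

def relative_density(z_kpc, zmax_kpc):
    """Relative CR density at height z for a free-escape halo."""
    return max(0.0, 1.0 - abs(z_kpc) / zmax_kpc)

# Invert an emissivity upper limit into a bound on zmax:
#   n(z)/n(0) < f  =>  zmax < z / (1 - f)
z_mid, f_limit = 1.2, 0.45   # kpc (mid-height of the Arch), 45% limit
zmax_bound = z_mid / (1.0 - f_limit)
print(f"zmax < {zmax_bound:.1f} kpc under this toy profile")
```

With the 45% limit at the Arch's mid-height of ∼1.2 kpc, this toy inversion gives *zmax* ≲ 2.2 kpc, in line with the fitted value of ∼2 kpc quoted above.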

**Figure 4.** Vertical gradient of gamma-ray emissivities or cosmic-ray densities in the Milky Way halo. The points correspond to emissivity measurements derived from *Fermi* LAT data for intermediate- and high-velocity clouds with distance brackets based on stellar probes [269], and for the Eridu cirrus [203]. The horizontal band shows the 25% dispersion of emissivity measurements for nearby clouds (Section 3). The lines correspond to CR densities from two models: the z-dependent part of the solution of the diffusion equation in plane-parallel geometry (infinitely thin Galactic plane) with a uniform source distribution, when only ionization losses are assumed, for the halo height *zmax* = 5 kpc inferred from recent direct CR measurements [194,267]; and the model by Evoli et al. [181], which describes CR vertical propagation based on a mixture of advected turbulence and CR self-generated waves (we show the CR densities at 10 GeV, which are most representative of the gamma-ray energies considered).

Even more recently, Joubaud et al. [203] reported an emissivity of 0.657 ± 0.031 relative to the local value for the Eridu cirrus, at a modest altitude of 200–250 pc (also shown in Figure 4). We note that the distance to the Eridu cirrus is based on dust reddening [271], a method more indirect than the use of stellar brackets. For the moment, it remains unclear whether this observation should be interpreted in terms of the large-scale vertical gradient of CRs or of a peculiar magnetic-field configuration in this cloud [203].

It is therefore essential to study in gamma rays a broader sample of clouds at large distances from the disk, in order to map the large-scale distribution of CRs and probe correlations with localized magnetic structures and outflows, especially in the key altitude range between a few hundred pc and a few kpc. Robust estimates of their distances, e.g., using brackets based on Gaia data [272], would greatly aid the interpretation of the gamma-ray data in terms of CR gradients. Furthermore, we note that observations of emission from the halo of the Milky Way may be corroborated in the near future by observations of the nearby edge-on Andromeda galaxy: claims of a detection of gamma rays from its halo already exist, but for the moment it remains unclear whether the emission is due to CR interactions, either in the form of an escaping CR flux interacting with the intergalactic medium or of nonthermal lobes analogous to those observed in galaxies with an active nucleus, or whether it is of exotic nature, related to hypothetical dark-matter particle annihilations [273–275].

From the theoretical point of view, the last few years have seen a renewed interest in treating CR propagation in the halo on more physical grounds, rather than imposing a boundary condition at a height *zmax* and a diffusion coefficient adjusted to reproduce the data. Several attempts have been made to explain CR diffusion as a result of the non-linear interaction with plasma waves self-excited by the CR streaming instability [5,276–278], as briefly introduced in Section 2.4. In particular, Evoli et al. [181] discussed a scenario in which the diffusion properties of CRs are derived from a combination of wave self-generation and advection from the Galactic disc, with a halo of a few kpc naturally arising as a consequence. All of these models are non-linear, and explore the feedback from CRs on galaxy evolution via the formation of winds and the subsequent impact on star formation, while succeeding in reproducing to some extent the observed properties of primary CRs. Predictions by [181] are shown for illustration in Figure 4. Their model overpredicts CR densities at a few kpc from the disk compared to gamma-ray emissivities. However, the tuning of the model parameters did not take gamma-ray measurements into account, which nicely illustrates the importance of halo emissivity measurements in this context. Further observables that can complement the halo gamma-ray emissivities are the isotropic gamma-ray background and the flux of high-energy neutrinos [275,279,280].

### *4.3. New Frontiers: Residual Gamma-Ray Emission*

One of the most interesting and surprising results from *Fermi*-LAT observations has been that extended residual emission, as large as ∼30% and on a variety of different scales, appears on top of the large-scale interstellar emission from the Milky Way, as accounted for via either template fitting or standard implementations of CR propagation models [229,281]. There are clear indications that the large-scale CR trends described in the previous sections are not sufficient to capture the richness of the CR phenomenology in the Galaxy, which also mirrors the gamma-ray enhancements observed in the LMC (Section 4.1). In other words, the sensitivity and breadth of gamma-ray observations is now such that we may have reached the limits of standard basic modelling approaches.

Some of the features emerging in the residuals are considered amongst the most important results from *Fermi*. The *Fermi* bubbles are a large bipolar structure seemingly emanating from around the Galactic centre [282–284], which has been interpreted, for instance, as the result of a hadronic wind that advects particles out of the Galactic disk, or of a leptonic jet accompanied by anisotropic diffusion along magnetic field lines that drape around the bubble surface (for a review, see, e.g., [285]). The Galactic centre excess is a large feature with approximate spherical symmetry around the centre of the Galaxy, which has attracted a lot of attention owing to its possible interpretation as the result of annihilation/decay of hypothetical dark-matter particles, but which could also find a more mundane explanation, for example in terms of unresolved populations of sub-threshold gamma-ray sources such as millisecond pulsars (see, e.g., [286] and references therein). Observations and interpretations of these features are already covered in full reviews of their own, including those just referenced, so they are left out of this paper.

Aside from speculations on possible exotic phenomena such as radiation from dark matter, three main families of solutions to the puzzle of residuals have been proposed, invoking either (*i*) contributions to diffuse emission from unresolved gamma-ray sources, (*ii*) undetected or poorly modelled interstellar gas and radiation fields acting as targets for CR interactions, or (*iii*) the effect of CR injection into the ISM localized in space and time, possibly accompanied by peculiar conditions of particle transport. While in the context of this review we are mainly concerned with the latter, let us remark that a mix of the three is the most likely explanation for the observations. We cover in detail observations of gamma-ray emission in the vicinity of sources and their implications for CR transport in the next section.

However, the discretized nature of CR sources may also explain large-scale residual emission. In a recent work, Porter et al. [287] assessed how discrete and steady-state CR injection differ when it comes to predicting gamma-ray interstellar emission. Even using simplified prescriptions for discrete injection, i.e., no localized self-confinement around sources and no spatial and temporal clustering from OB associations, the work illustrates the wealth of effects that can be expected. Compared to the corresponding steady-state case, stochastic injection can give rise to intensity features with sizes from a few to a few tens of degrees, both excesses and deficits, with amplitudes reaching 50% or more of the steady-state prediction. The effect is more pronounced at higher energies, at intermediate/high longitudes and latitudes, and for leptonic radiation processes, which constitutes an interesting challenge for the new and future VHE instruments that will provide extended coverage of the Galactic plane and give access to larger angular scales (e.g., HAWC, LHAASO, or CTA). In that endeavour, one should keep in mind that simply extracting excess or residual emission in a reliable way may be challenging. Porter et al. [287] illustrate the biases that can result from using a reference interstellar emission model based on a mismatched steady-state smooth source distribution (e.g., including spiral arms or not). Very-large-scale emission structures can ensue, some of which are reminiscent of the *Fermi* bubbles. In addition, isolating excess emission from residual significance maps can lead to various problems, such as the splitting of a true component into several substructures and a biased determination of other emission components.
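The flavour of such stochastic effects can be conveyed with a deliberately crude Monte Carlo (burst-like sources, free 3D diffusion, no halo boundary, no energy losses; all parameter values below are our assumptions, not those of [287]):

```python
import math
import random

random.seed(1)

# Toy Monte Carlo of CR-density fluctuations from discrete sources, in
# the spirit of (but far simpler than) Porter et al. [287]. Each source
# contributes the Green's function of free 3D diffusion at the observer.
D_KPC2_MYR = 0.1     # diffusion coefficient (~3e28 cm^2/s; assumed)
RATE_MYR = 100.0     # injection rate within the sampled sphere (assumed)
T_MIN_MYR, T_MAX_MYR = 0.5, 10.0  # source ages kept in the sum
R_MAX_KPC = 3.0      # only sources within this distance of the observer

def one_realization():
    """CR density at the observer (arbitrary units) for one random
    realization of source ages and positions."""
    n = 0.0
    for _ in range(int(RATE_MYR * (T_MAX_MYR - T_MIN_MYR))):
        t = random.uniform(T_MIN_MYR, T_MAX_MYR)
        r = R_MAX_KPC * random.random() ** (1.0 / 3.0)  # uniform in sphere
        s2 = 4.0 * D_KPC2_MYR * t                       # squared diffusion scale
        n += math.exp(-r * r / s2) / (math.pi * s2) ** 1.5
    return n

vals = [one_realization() for _ in range(200)]
mean = sum(vals) / len(vals)
rel = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) / mean
print(f"relative density fluctuation between realizations ≈ {rel:.0%}")
```

The printed scatter depends strongly on the assumed injection rate, diffusion coefficient, and sampling volume, and localized features in sky maps can be much larger than this volume-averaged figure.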
The most promising avenue to address such issues is to incorporate multi-wavelength information and try to constrain the CR injection and propagation history from a large set of observations, from radio/microwave to X-rays and gamma rays.

### **5. Gamma-Ray Emission in the Vicinity of Sources: The Early Steps of a Long Journey for Cosmic Rays**

Fully understanding the life cycle of galactic CRs requires connecting the properties of particles still confined within their sources and/or undergoing acceleration to the characteristics of a galactic population built up over Myr time scales from thousands of transient accelerators of various subclasses. Recent observations highlight that the early phases, in which CRs are still wandering around their original source, play a special role within this life cycle, and have a remarkable impact on the associated non-thermal emission. There is no unambiguous terminology for this particular moment in the CR life cycle: it is sometimes referred to as the escape of CRs, but that term may also refer to the specific process by which CRs are released from the accelerator. For that reason, and also because what happens close to the sources can be more complex than CRs escaping from a single, well-defined accelerator, and because sources may have an impact on the older galactic CR population roaming around them, we focus in a more generic fashion on the phenomenology of CRs in the source vicinity.

As will be developed below, there are indications that CRs are not swiftly and seamlessly transferred from their source to the ISM at large. Instead, they likely experience some confinement in the vicinity of the source, and the duration and extent of this confinement will influence their early interaction with the ISM, with possible consequences for several of the observables through which we probe the CR phenomenon: for instance, the isotopic and spectral properties of the local flux of CRs, or the morphology and spectrum of the large-scale interstellar emission [288–290]. Before diving into gamma-ray observations, let us mention that, although they constitute less direct evidence, features in the local CR flux can also be interpreted as resulting from processes related to the early stages of CR propagation. This possibility is important as it competes with dark-matter interpretations of anomalies in the CR signal. This is true for instance of the local positron flux, which can be influenced by the dynamics of pair release from nearby pulsars [291–293]. In the context of this review, however, we leave such considerations aside.

In the context of gamma-ray observations, CRs in the vicinity of sources will give rise to emission structures lying at intermediate spatial scales between isolated objects, such as SNRs or PWNe, and the large-scale diffuse emission of the ISM. Here we focus on how the phenomenon fits into the global gamma-ray emission of a star-forming galaxy, with particular emphasis on the Milky Way. CRs in the vicinity of sources can be associated with specific populations of gamma-ray sources or specific regions of the ISM, and we review below the current knowledge on such objects for three categories: emission beyond the shock in SNRs, emission around pulsars and their nebulae, and extended emission coincident with star-forming regions (SFRs). This categorization does not necessarily correspond to specific kinds of acceleration sites, as mixed scenarios are likely to occur: particle escape from a PWN influenced by conditions inherited from the parent SNR, or the superposition of processes in rich SFRs. Before going deeper, let us mention that the topic of CRs in the vicinity of sources is still actively being explored, owing to the complexity of the physical problem, the diversity of possible astrophysical setups, and the difficulty of giving a clear-cut interpretation to existing observations. Rapid evolution is therefore expected in this field in the coming years.

### *5.1. Physical Problem*

The release of CRs from a source is most likely more than just a localized and temporary enhancement in CR density. First of all, the release of the non-thermal particle content of a source will in general not be instantaneous, but spread over time with some energy dependence [294,295]. Second, the energy density associated with the CR enhancement around a source will exceed the typical energy density of the ISM for long durations [193], and it will be hard to avoid some dynamical feedback of escaping CRs on the surrounding medium [296]. Both considerations point to CR escape being a complex problem of non-linear dynamics and plasma physics, with a strong dependence on both the actual history of particle release from the source and the environmental conditions around it.

The non-thermal particle yield of an accelerator is expected to be released progressively, over time scales comparable to the lifetime of the source, and in an energy-dependent way. In the specific case of diffusive shock acceleration in SNRs, the highest-energy particles in the 100 TeV–1 PeV regime detach from the accelerator within the very first few 10<sup>2</sup> yr of the SNR expansion, while 1–10 GeV particles are released after 10<sup>5</sup> yr, when the SNR enters the radiative stage [297]. Particle escape is intimately connected to the acceleration process [298,299], so the lack of a fully consistent and effective theory for the latter necessarily impacts our understanding of the former. That difficulty, however, can be turned into an opportunity. Since escape is so deeply rooted in the acceleration process, studying the vicinity of sources provides a complementary opportunity to address several key questions of CR phenomenology. What is the exact CR spectrum fed into the ISM by the (different categories of) sources? What was the maximum energy attained by accelerated particles and how did it evolve in time? What fraction of the source power/energy went into CRs? What is the nature of the accelerated particles? We illustrate below how some of these questions can be addressed in practice. Before doing so, we emphasise the relevance, and the challenges (see [300]), of searching for signatures of CRs around sources when trying to elucidate the question of acceleration up to the so-called knee region of the local CR spectrum, essentially because the time spent by such very-high-energy particles in the accelerator is small, and the probability of finding active sources accelerating particles to PeV energies (PeVatrons) in the Galaxy is accordingly limited [301].
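These two anchor numbers can be connected by the power-law parametrization of the escape energy commonly used in the literature, E_esc(t) ∝ t^(−δ); a quick sketch (with our toy anchor values, not a fit from [297]) shows the implied index:

```python
import math

# Toy power-law parametrization of energy-dependent escape from an SNR,
#   E_esc(t) = E0 * (t / t0) ** (-delta),
# anchored to the indicative numbers quoted in the text: ~PeV particles
# released within a few 1e2 yr, ~10 GeV particles after ~1e5 yr.
t0, E0 = 300.0, 1.0e6   # yr, GeV: ~1 PeV released at ~3e2 yr
t1, E1 = 1.0e5, 10.0    # yr, GeV: ~10 GeV released at ~1e5 yr

delta = math.log(E0 / E1) / math.log(t1 / t0)
print(f"implied escape index: delta ≈ {delta:.2f}")

def E_esc(t_yr):
    """Escape energy in GeV at SNR age t under this toy parametrization."""
    return E0 * (t_yr / t0) ** (-delta)

# E.g., the energy of particles leaving the remnant at ~10 kyr:
print(f"E_esc(1e4 yr) ≈ {E_esc(1.0e4):.0f} GeV")
```

With these anchors, δ ≈ 2, i.e., the escape energy drops by roughly two decades per decade of SNR age; the exact value is of course sensitive to the assumed release times.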

Once decoupled from the acceleration zone, fresh CRs will influence the transport conditions in the medium surrounding the source, via the same processes that governed their confinement in the source, i.e., the self-generation of magnetic turbulence through resonant and non-resonant instabilities [299,302]. This modifies the conditions of particle transport around the source for long durations and over large scales, with a strong energy dependence. In [295], it is estimated that CRs escaping into a hot, fully ionized medium will experience diffusion suppressed by a factor of up to 20 with respect to the ISM at large, within 50–100 pc of the source and over durations of the order of several 10 kyr for 1 TeV particles and several 100 kyr for 10 GeV particles. For a medium containing neutral species, the damping of the self-generated turbulence by ion-neutral friction reduces the confinement duration by nearly a factor of 10 [294,303]. In addition to this impact on the local turbulence, if the density of energy and momentum carried by escaping particles is comparable to or well in excess of what is found in the medium, major dynamical effects can result, such as the clearing of the surrounding medium by an overpressurised bubble of trapped CRs [296].
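As an order-of-magnitude check of these scales, the diffusion length r ≈ √(4Dt) can be evaluated with a typical Galactic diffusion-coefficient normalization and energy scaling (our assumptions, not values from [295]):

```python
import math

PC_CM = 3.086e18   # cm per parsec
YR_S = 3.156e7     # seconds per year

# Order-of-magnitude check of the confinement scale: diffusion length
# r ~ sqrt(4 D t) for 1 TeV particles with diffusion suppressed by a
# factor of 20 relative to the ISM average. The normalization and the
# E**(1/3) scaling are typical assumptions, not values from [295].
def D_ism(E_GeV):
    """Assumed Galactic-average diffusion coefficient, cm^2/s."""
    return 3.0e28 * E_GeV ** (1.0 / 3.0)

suppression = 20.0
E_GeV, t_yr = 1.0e3, 3.0e4   # 1 TeV particles, ~30 kyr of confinement

D = D_ism(E_GeV) / suppression
r_pc = math.sqrt(4.0 * D * t_yr * YR_S) / PC_CM
print(f"diffusion length ≈ {r_pc:.0f} pc")
# -> a few tens of pc, consistent with the 50-100 pc scale quoted above
```

The result lands in the quoted 50–100 pc range, though it shifts by factors of a few for other plausible choices of normalization, energy scaling, or confinement time.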

These processes can be expected to give rise to a large variety of observable situations depending on the actual parameters of the problem: the stage of the process being witnessed, the particle energy being probed, the physical scales accessible to the observation, the objects and processes involved in particle acceleration, and the interstellar conditions in which escape takes place and by which it is made visible to us. On the latter point, one should mention that anisotropic diffusion around sources, which is physically well motivated and results either from the orientation of the regular background magnetic field or from the actual topology of the large-scale turbulence modes, can produce non-trivial emission patterns that are hard to identify and interpret [304]. Indeed, as illustrated below, the observational evidence associated with escaping CRs is very diverse, and its interpretation is far from unified.

### *5.2. Emission beyond the Shock in Supernova Remnants*

SNRs remain the leading candidates for the acceleration of Galactic CRs, if not necessarily for PeV-energy particles, at least for the bulk of the lower-energy population; searching around them for particles in the process of merging with the galactic population is therefore a promising avenue, and may provide useful information complementary to that inferred from ongoing acceleration. Observationally, however, the picture can be very diverse: emission from the immediate shock upstream, from particles that are detaching from the shock precursor [305]; emission from shocks crushing into nearby clouds, possibly causing a sudden release of the CRs trapped downstream [306] or the reacceleration of ambient CRs trapped in the clouds [307,308]; and emission from escaped particles that are well detached from the shock, have diffused out to some distance, and illuminate large gas clouds [309,310]. In the last two cases, the overall picture can be complicated by the fact that escaping particles can diffuse back into their parent SNR, even without participating anymore in the acceleration, and contribute to the gamma-ray emission of the object [311].

Let us focus first on emission relatively close to the shock. Deep H.E.S.S. observations of RX J1713.7-3946 illustrate the challenge of studying the early stages of CR release from an SNR, i.e., particles just detaching from the shock, even with an unprecedented angular resolution allowing sub-parsec scales to be probed [305]. A significant extension of the gamma-ray emission beyond the shock (traced by X-ray synchrotron emission) is detected, but it cannot clearly be attributed to the shock precursor or to diffusive escape, nor can it be used to determine the exact nature of the emitting particles. It is unclear whether next-generation gamma-ray instruments will provide sufficient improvement in angular resolution to revolutionize such analyses, and here it seems that multi-wavelength studies will be key to advancing our knowledge. On slightly larger scales, and for a slightly older object more prone to significant escape, gamma-ray emission beyond the shock of *γ* Cygni was detected over a broad spectral range thanks to *Fermi*-LAT and MAGIC observations [312]. Interpreted in a coherent framework linking acceleration and escape along the SNR's history, the observations illustrate the joint constraints that can be derived on both processes, e.g., the time evolution of the maximum CR energy, the acceleration efficiency, or the diffusion coefficient in the vicinity of the remnant. The results also point to a diffusion coefficient two orders of magnitude smaller than in the Galaxy at large.

Looking at larger physical scales, SNRs interacting with molecular clouds have become, over the past decade, a growing class of gamma-ray sources. Currently, eight TeV sources are classified as such in the TeVCat catalogue (see http://tevcat2.uchicago.edu, database version tevcat2\_test.3437 (accessed on 29 March 2021)), and eleven GeV sources were classified as such in Acero et al. [313], based on coincident molecular line emission, especially OH maser emission at 1720 MHz. The two sets overlap, and we summarise the full sample in Table 1 for convenience.

**Table 1.** List of the main established or candidate interacting SNRs detected in gamma rays.


Notes: Right ascensions and declinations in degrees were recovered from the CDS and rounded to two decimals. Associations with gamma-ray sources were obtained from the Centre de Données astronomiques de Strasbourg (CDS) and, for the TeV counterparts, we favoured the HESS naming when available and otherwise used the TeVCat naming. Distance estimates in kpc were reproduced from Acero et al. [313], or from TeVCat when not available in the former reference, except for the distance to the LMC, which was set to 50 kpc. Note that LHA 120-N132D was clearly detected as a GeV source but has no specific 4FGL name.

Although the sample is still limited and there is large scatter in the observed or inferred properties, interacting SNRs seem to be older and more luminous systems, with softer emission spectra, compared to younger SNRs that are still in their Sedov phase and feature high-velocity shocks [313]. Along this evolutionary trend, there may be a gradual shift in emission processes, with younger systems being dominated by, or having a more significant contribution from, IC scattering, while older interacting systems would be dominated by pion-decay emission. The increased population statistics provided by the future Galactic plane survey with CTA may well reveal a less clear-cut separation between the two classes. In a few cases, the data allow a (model-dependent) estimation of the diffusion coefficient around the SNR, which is found to be a factor of a few to a hundred lower than in the ISM at large [310,314].

In the VHE range, more interacting SNRs are likely to be found among the ∼60 unidentified low-latitude TeV sources, for instance HESS J1852-000 [40] or HESS J1702-420 [315]. At intermediate energies between the GeV and TeV ranges, Eagle et al. [315] find a dozen unassociated low-latitude, hard-index objects in the 2FHL catalogue of LAT sources emitting above 50 GeV, and argue that some of them may be SNRs interacting with gas clouds, via direct shock interaction or escaping CRs illuminating distant gas; such an interpretation is proposed for two sources with emission coincident with the edge of an SNR [315,316]. In the HE range, Acero et al. [313] indicate that an additional ∼50 significant sources only marginally overlapping with radio SNRs are found, and these may be interpreted as regions of high-density gas illuminated by escaping CRs that propagated away from their source. Overall, the detected source population at GeV energies is consistent with being mostly composed of SNRs interacting with dense material, with effective densities of the order of tens of H cm<sup>−3</sup>. The numbers quoted above show the potential of CRs in the vicinity of sources to account for some fraction of the currently unidentified gamma-ray sources, in both the GeV and TeV ranges.

Tang [317] investigated 10 *Fermi*-LAT SNRs, i.e., the sample of 11 sources from Acero et al. [313] minus HB 21, which is not detected above 10 GeV [318] owing to its spectral turnover at ∼1 GeV [319,320]. Tang [317] tested two competing scenarios for the origin of the emission: direct interaction of the SNR shock with dense gas clouds, or escaped CRs illuminating nearby molecular clouds. The author concludes that the observed properties of the sample are inconsistent with the escape scenario, because the latter would imply a variety of spectral shapes, especially low-energy cutoffs, that is not observed. Instead, direct interaction, involving the reacceleration of ambient, potentially harder CRs and adiabatic compression, is claimed to explain the diversity of spectral shapes of the sample, in particular the variety of high-energy breaks (as observed in W49B, W51C, or G349.7+0.2). Clearly, a consensus on the physics at play in these objects has not yet been reached, and it is not obvious that gamma-ray observations alone will suffice to lift the ambiguities in the interpretation.

The object W28 alone exemplifies how complex the emission scenario may be: the emission observed around the SNR is composed of 4–5 components, most of which are detected at both GeV and TeV energies, and currently proposed interpretations involve a combination of direct shock interaction, possibly triggering leakage of CRs from the remnant, escaped CRs illuminating distant clouds, and a contribution from background CRs [306,310].

### *5.3. Emission around Pulsars and Their Nebulae*

Pulsars and their wind nebulae are highly efficient factories of non-thermal electron/positron pairs, which are produced and accelerated in the magnetosphere, the relativistic wind, and its termination shock. PWNe are a good example of the complexity of studying particles freshly detached from their source, with a strong impact from both the original source and its surroundings, and they are a remarkable constituent of the non-thermal landscape of the Milky Way, especially in the VHE range [321].

Recently, the phenomenon has acquired a new dimension with the discovery, in a few systems, of very extended emission components beyond what were held to be the boundaries of PWNe. The pulsars concerned have ages of the order of 100 kyr and have reached an evolutionary phase in which a large fraction of the accelerated electron/positron pairs can rapidly escape from the shocked pulsar wind into the surrounding medium, for instance by leakage from a bow-shock PWN [322], instead of being trapped for a long time in a hot and magnetized nebula (see a possible evolutionary path in [323]). These so-called halos were originally discovered with HAWC in the TeV range around pulsars B0633+17 (Geminga) and B0656+14 [291]. The most natural interpretation of the observed signal is radiation from energetic electrons/positrons IC scattering off ambient photons.

A major result was that the intensity distribution and flux level indicate a very strong confinement of particles around the source, with diffusion being suppressed by a factor of a few hundred compared to the average ISM value inferred from local CR measurements [291]. Halos are a growing source class that may account for a large fraction of currently unidentified extended VHE sources [41,324–326] and that, as a population, potentially makes a non-negligible contribution to the diffuse emission from the Galaxy, or at least from some regions of it [327,328]. The phenomenon can naturally be expected to give rise to emission in other bands, and indeed the Geminga halo was later found at 10–100 GeV energies using *Fermi*-LAT data [329]. A broadband picture of pulsar halos is, however, largely missing today, and searches are currently aiming at uncovering or expanding the population of halos in the radio, X-ray, and GeV bands.
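The logic behind this inference can be conveyed with a back-of-the-envelope estimate: if the halo size reflects diffusion over the radiative cooling time of the emitting pairs, the implied diffusion coefficient falls orders of magnitude below the Galactic average (all numbers below are indicative, not the published HAWC fit values):

```python
import math

PC_CM = 3.086e18   # cm per parsec
YR_S = 3.156e7     # seconds per year

# Back-of-the-envelope estimate of diffusion suppression in a pulsar
# halo: if the TeV halo extent is set by diffusion over the cooling
# time of the emitting pairs, then D ~ r^2 / (4 t_cool).
# All numbers below are indicative, not the published fit values.
r_halo_pc = 25.0    # approximate extent of the TeV emission
t_cool_yr = 3.0e3   # IC+synchrotron cooling time of ~100 TeV pairs
D_ism = 1.0e30      # assumed Galactic-average D at ~100 TeV, cm^2/s

D_halo = (r_halo_pc * PC_CM) ** 2 / (4.0 * t_cool_yr * YR_S)
print(f"D_halo ≈ {D_halo:.1e} cm^2/s, i.e. ~{D_ism / D_halo:.0f}x below"
      " the Galactic average")
```

Even this crude estimate yields a suppression of roughly two orders of magnitude; the detailed fits, which model the full intensity profile, land in the few-hundred range quoted above.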

Pulsar halos constitute a good opportunity to study particle transport around sources, especially because the radiating pairs are energetically subdominant in the medium [330]. Self-confinement by the streaming pairs could be responsible for diffusion suppression in the early phases, but the challenge in the case of middle-aged pulsars like Geminga is to sustain that confinement at later times, when the pulsar spin-down power has decreased substantially [331]. An alternative explanation is the presence of fluid turbulence injected at small scales, guaranteeing sufficient power at the scales relevant for 100 TeV particle scattering [332]. Such conditions could be inherited from the expansion of the parent SNR. Another possibility is that escaping pairs experience the turbulence imprinted in the vicinity of the system by CRs escaping from the parent SNR, although here again the question of how such turbulence is maintained over several hundred kyr should be further investigated. In any case, pulsar halos offer a great opportunity to study CR transport in the vicinity of some accelerators.

### *5.4. Emission Coincident with Star-Forming Regions*

SFRs, especially the most extensive ones involving massive stars, are expected to be prominent objects in the gamma-ray sky. First, because a large fraction of the most promising sites for particle acceleration are found clustered in SFRs (colliding-wind binaries, pulsars and their nebulae, SNRs). Second, because SFRs are rich in targets for CR interactions: massive gas remnants from the parent molecular clouds, and radiation fields enhanced by the many luminous stars, which guarantee an efficient conversion of CR energy into gamma rays. SFRs therefore seem to offer optimal conditions for studying how CRs are transferred from accelerators to the galactic population, by providing frequent injection of accelerated particles and favourable conditions to observe them.

The flip side, however, is that the clustering of high-energy objects in the same region of space and time makes it difficult to unambiguously interpret gamma-ray observations of limited angular resolution and to clearly associate a gamma-ray source with particles released from a specific object. From the stellar evolution data used in [333], supernova explosions will be nearly uniformly distributed in time between 3 Myr after the initial burst of star formation (for the most massive, 120 M⊙ stars) and 37 Myr (for the least massive, 8 M⊙ stars). As soon as the SFR hosts more than a few hundred massive stars, the average time interval between supernova explosions is less than the gamma-ray lifetime of many high-energy sources (∼50 kyr for SNRs or PWNe and ∼1 Myr for pulsars and their halos or colliding-wind binaries), so a superposition of gamma-ray-emitting objects can be expected.
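The pacing argument can be made concrete with a short sketch, using the uniform-in-time approximation quoted above from [333] (the cluster sizes are illustrative):

```python
# Toy estimate of the mean interval between supernovae in a young massive
# cluster, assuming (after [333]) that explosions are spread roughly uniformly
# between 3 Myr (~120 Msun progenitors) and 37 Myr (~8 Msun progenitors).
t_first, t_last = 3.0, 37.0  # Myr

def mean_sn_interval_kyr(n_massive_stars):
    """Average time between successive supernovae, in kyr."""
    return (t_last - t_first) / n_massive_stars * 1e3

for n in (100, 700, 2000):
    print(f"{n:5d} massive stars -> one SN every ~{mean_sn_interval_kyr(n):.0f} kyr")
# With ~700 massive stars the interval falls to ~50 kyr, comparable to the
# gamma-ray lifetime of SNRs and PWNe, so successive sources overlap in time.
```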

Furthermore, it has long been speculated that the clustering of high-energy objects may play a distinctive role in the origin of CRs [334–336]. This may occur through a variety of possible processes, for instance long-lived particle acceleration at the termination shocks of stellar winds [337,338], repeated shock acceleration or turbulent reacceleration in the interior of superbubbles and/or at their bounding shell [333,339,340], or acceleration to the highest energies in converging shock flows [341]. Although solid observational evidence for this class of phenomena is still largely missing, a link between CR origin and SFRs is supported by the isotopic composition of CRs [56,342,343]. The potential of SFRs to answer key questions in CR astrophysics therefore justifies a continued effort to detect and characterise their gamma-ray emission. Yet, as illustrated below, isolating that specific contribution amidst a variety of concurrent particle accelerators and gamma-ray-emitting sources is a real challenge.

As of today, GeV emission has been detected in the direction of about half a dozen SFRs. The most prominent SFRs studied in our Galaxy are listed in Table 2: the Cygnus region [344], NGC 3603 [345], Westerlund 1 [346], Westerlund 2 [347], and W 43 [348]. Several of these targets are also associated with emission in the TeV range, for instance Cygnus [349,350], Westerlund 1 [351], or Westerlund 2 [352]. We can also include in this category the Galactic centre, in which diffuse TeV emission is detected in the direction of three massive star clusters, although the CR source there may alternatively be connected to the central black hole [353]. Outside of the Milky Way, a handful of SFRs may be studied with existing gamma-ray instruments, for instance 30 Doradus or the N11 region in the LMC, and NGC 602 and NGC 346 in the SMC. The 30 Doradus region was observed at both GeV and TeV energies [261,354]. The entire region is detected only at GeV energies, although with no apparent specific feature, while at TeV energies emission was detected only in the direction of the more peripheral 30 Doradus C superbubble.


**Table 2.** List of the most prominent star-forming regions with established or potential detections in gamma rays.

Notes: Right ascensions and declinations in degrees were recovered from the CDS and rounded to two decimals. Associations with gamma-ray sources were obtained from the CDS, complemented by the 4FGL [308] and the H.E.S.S. Galactic Plane Survey [40] catalogues. Some SFRs are associated with multiple 4FGL pointlike sources when no dedicated extended template was included in the automated catalogue analysis. For the TeV counterparts, we favoured the H.E.S.S. naming when available and appropriate, and otherwise used the TeVCat naming. Typical distance estimates come from determinations and/or literature review in [112] for Cygnus, in [355] for NGC 3603, in [113] for Westerlund 1, in [356,357] for Westerlund 2, and in [358] for W 43. Note that the ranges of distances result from both the difficulty in accurately locating the region in the Milky Way and its actual spread along the line of sight (e.g., substructures in the Cygnus OB2 association). Typical distances of 8.5 and 50 kpc are used for the GC and LMC, respectively.

Over recent years, the list has been rapidly expanding thanks to more candidates emerging in gamma-ray source catalogues and a growing number of dedicated studies. The latest 4FGL-DR2 revision of the main *Fermi*-LAT catalogue contains five associations of gamma-ray sources with SFRs: besides Cygnus and Westerlund 2, already mentioned, we find *ρ* Ophiuchi and the H II region Sh 2-152 in our Galaxy, and NGC 346, the brightest star-forming region in the SMC [359]. The general LAT catalogues are mainly aimed at pointlike sources, so important complementary information is gathered by catalogues targeting extended sources, which revealed emission in the direction of W 30 [318], and of NGC 7822, NGC 1579, and IC 1396 [360]. The latter work, however, illustrates the difficulties of disentangling the potential emission of a SFR from the foreground and background emission in a catalogue-like analysis.

Among dedicated studies investigating the link between gamma-ray emission and SFRs, we mention for instance those of W 30 [361], W 40 [362], or the H II region G41.1-0.2 [363]. In the direction of region G25.0+0.0, extended gamma-ray emission was detected and, based on similarities with Cygnus and a positional correlation with gas structures and energetic sources, it was proposed to be associated with a candidate OB association [364].

A common feature of several of these gamma-ray sources is their extended morphology and relatively hard spectrum, which is exactly what one would expect from young CRs freshly detached from their sources and diffusing away into the ISM. Where both GeV and TeV detections are available, however, mismatches and offsets between the GeV and TeV morphologies are frequently observed (see e.g., [346] for Westerlund 1). This is often explained in terms of a superposition of sources, as anticipated above (e.g., an SNR interacting with a molecular cloud and a pulsar/PWN system in the case of W 30; see [361]). To date, the most convincing association between a gamma-ray source and a SFR remains Cygnus, where the morphology of the gamma-ray emission at GeV energies shows a striking resemblance to the cavities carved in the ISM by the stellar winds and ionization fronts [344].

For some of these sources, the gamma-ray data were used to infer a radial distribution of CRs around the presumed accelerator and to derive joint constraints on the CR injection history and transport properties. A 1/*r* radial profile determined for a handful of SFRs was recently invoked as evidence that SFRs continuously release CRs over several Myr, presumably from the conversion of stellar wind power into CRs with 1–10% efficiency [365]. For one of these regions, Westerlund 2, the diffusion coefficient was estimated to be 100 times smaller than in the large-scale ISM. While the radial CR distribution is indeed a powerful discriminant to characterise the acceleration site and transport process, establishing it from gamma-ray observations is a highly non-trivial task.
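The diagnostic power of the 1/*r* profile can be illustrated with a minimal sketch (arbitrary units; a point source and constant isotropic diffusion are assumed): steady injection yields n(r) = Q/(4πDr), the 1/*r* profile referred to above, whereas an impulsive burst gives a quasi-flat core within the diffusion radius.

```python
import math

# Steady-state CR density around a continuously injecting point source with
# isotropic diffusion: n(r) = Q / (4 pi D r). Q and D are placeholder values.
def n_continuous(r, Q=1.0, D=1.0):
    return Q / (4.0 * math.pi * D * r)

# Impulsive (burst) injection a time t ago: a Gaussian of width
# r_d = sqrt(4 D t), nearly flat for r << r_d.
def n_burst(r, t=1.0, Q=1.0, D=1.0):
    r_d = math.sqrt(4.0 * D * t)
    return Q / (math.pi**1.5 * r_d**3) * math.exp(-(r / r_d) ** 2)

# Density contrast between r = 0.1 and r = 1 (units where r_d = 2):
print(n_continuous(0.1) / n_continuous(1.0))  # factor 10 for steady injection
print(n_burst(0.1) / n_burst(1.0))            # ~1.3 for a burst (quasi-flat)
```

Measuring the slope of the CR density profile thus discriminates between continuous release over Myr timescales and a recent impulsive event.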

First, the extended gamma-ray emission from the region of interest needs to be separated from the foreground and background emission along the line of sight in the Galaxy. Second, converting the gamma-ray intensity profile into a CR density profile requires a robust knowledge of the gas mass distribution in and around the SFR. Unfortunately, both steps can be shaky. The first point was recently illustrated in the case of NGC 3603, where an updated Galactic interstellar emission model and a better treatment of other extended sources in the field resulted in the source associated with NGC 3603 not being significantly extended [345], contrary to earlier claims [347], even without resorting to dedicated interstellar emission models such as those developed for an analysis of the Cygnus region [344]. As to the second point, the lack of apparent correlation of the gamma-ray emission with the ambient gas distribution, e.g., for Westerlund 1 and 2 or Cygnus, even accounting for some radial CR distribution, should at the very least call for caution in the interpretation of the observations. In that respect, the ∼1–10 TeV emission from the Galactic centre and its correlation with gas in the central molecular zone may still constitute the most convincing evidence for a 1/*r* radial CR density profile [353]. Interestingly, even in such conditions the identification of the CR source remains elusive: the CR gradient at the centre of the Galaxy can be equally well explained by continuous CR injection from a central stationary source such as Sgr A\* active over several Myr [353], or by stochastic CR injection from dozens of SNRs over 100 kyr [366,367].

In an effort to identify the specific role of SFRs in the CR lifecycle, it is equally interesting to consider those objects that have not yet been detected. For instance, *Fermi*-LAT observations of eight young star clusters with ages below a few Myr constrain the particle acceleration efficiency in stellar winds to be below 10% in four of them and below 1% in two of them [368]. For more evolved objects, it is striking that the extraordinary 30 Doradus region in the LMC does not stand out in GeV gamma rays: after removing the emission from two very powerful pulsars lying in the field, the region seems to fit perfectly into the larger-scale emission of the galaxy, or at least does not stand out in proportion to its ionizing luminosity [261,369]. Looking at even more evolved structures, it was recently established that the Orion–Eridanus superbubble does not harbour any significantly enhanced CR population at GeV energies with respect to the solar neighborhood [203]. It is not yet clear whether this should be attributed to time evolution, stellar content, or its less compact nature compared to, e.g., Cygnus, but this non-detection needs to be taken into account when assessing the potential of superbubbles for turbulent acceleration/reacceleration [203,339].

### **6. The Population of Gamma-Ray Emitting Galaxies: Different Realisations of the Cosmic-Ray Phenomenon**

In this section we turn our attention to the population of external galaxies whose gamma-ray emission is not dominated by the activity of a central supermassive black hole. We frequently refer to these as star-forming or starburst galaxies (hereafter SFGs and SBGs), although we recognize that such naming is not fully appropriate, as there are numerous examples of active galactic nuclei accompanied by star formation (establishing the relationship between the two phenomena being an active field of research).

Going beyond the three external galaxies spatially resolved by current gamma-ray telescopes (covered in Section 4), the interest in studying the integrated emission of gamma-ray-emitting galaxies as a population is at least threefold:


In the following, we focus on the first two items. We first review the current status of gamma-ray observations before addressing how they can improve our understanding of CRs. Concerning the last item, an example of recent progress in incorporating gamma-ray observations into galaxy evolution studies can be found in [4].

### *6.1. A Growing Source Class: The Gamma-Infrared Luminosity Correlation and Its Implications*

SFGs and SBGs were foreseen as a distinct GeV to TeV gamma-ray source class well before their actual detection (see e.g., [370,371]). As of today, gamma-ray emission has been established from about a dozen SFGs and SBGs, mostly at GeV energies using the *Fermi*-LAT, and among these only two starbursts have also been observed at TeV energies. In a recent systematic search for gamma-ray emission from a sample of 588 SFGs using the *Fermi*-LAT, 11 objects were firmly detected in the ∼0.1–100 GeV range [372]. For two of the 11 (NGC 4945 and NGC 1068) and two additional candidates (NGC 2403 and NGC 3424), the detected emission cannot be dominantly attributed to processes linked to star formation and might be contaminated by an active galactic nucleus [281,372,373]. Among the 11 galaxies, M 82 and NGC 253 were also detected at TeV energies with the current generation of IACTs [374,375].

Although still limited, the detected sample covers a variety of galactic properties, ranging from dwarfs (SMC and LMC) to large spirals (M31), and from relatively quiescent to starbursting objects (Arp 220). Investigating how the inferred total gamma-ray luminosity of detected and non-detected objects scales with global galactic properties, evidence was found for a correlation with the star-formation rate, as traced for instance by the infrared luminosity (in the 8–1000 μm band). Initially investigated in [281,376], the correlation was recently revisited in [372] and is now more firmly established, with a significance close to 5*σ*. The relation between the two quantities appears mildly non-linear, with the gamma-ray luminosity evolving as the infrared luminosity to the power ∼1.3. When fitting the observed correlation with a power law, however, a large dispersion is obtained, and it is not yet clear whether it is intrinsic, due to biases and uncertainties in the adopted galactic parameters (e.g., distances or infrared luminosities), or a combination of both. The gamma-ray to infrared luminosity correlation is reminiscent of the far-infrared to radio correlation, both in form and in commonly accepted physical explanation, and several authors have investigated them in a unified approach [377,378].
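The mildly non-linear scaling can be made explicit with a one-liner (α ≈ 1.3 from [372]; the normalisation is arbitrary and omitted):

```python
# L_gamma ∝ L_IR^alpha with alpha ≈ 1.3 [372]; normalisation left arbitrary.
alpha = 1.3

def gamma_boost(ir_boost):
    """Factor by which L_gamma grows when L_IR grows by `ir_boost`."""
    return ir_boost ** alpha

# A galaxy 100x more infrared-luminous is predicted to be ~400x brighter in
# gamma rays, not 100x: this super-linear excess is what the calorimetry
# argument discussed below seeks to explain.
print(round(gamma_boost(100)))  # -> 398
```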

The correlation is thought to be driven by massive-star formation, which is at the origin of both the strong UV/optical light and the CRs that, upon interaction with the ISM, power the infrared and gamma-ray emission, respectively. The commonly accepted interpretation of the non-linear relation between gamma-ray emission and star-formation rate (traced by infrared) is an increasing calorimetric efficiency of the galaxies with respect to CRs. The calorimetric efficiency is defined as the fraction of the initially injected CR power that is deposited in the ISM. High star-formation rates are reached in galaxies harbouring large amounts of dense molecular gas packed in small regions, typically ∼1–10 × 10⁸ M⊙ in a volume spanning a few hundred pc. The average gas volume density in such regions can reach a few 1000 H cm⁻³ and, due to the high density of young stars, the interstellar radiation field energy densities can exceed a few 1000 eV cm⁻³ (for comparison, typical values for the Milky Way are 1 H cm⁻³ and 1 eV cm⁻³). As a consequence, CRs lose their energy much more efficiently, in particular through radiative processes such as nucleon–nucleon inelastic collisions and IC scattering. The Milky Way is thought to be a poor proton calorimeter but a good electron calorimeter, with efficiencies of the order of 1–2% and 40–80%, respectively, depending on the transport scenario assumed [379] (these efficiencies were computed for radiative processes only; in terms of emission, the low calorimetric efficiency of protons is compensated by the fact that they are a factor of a hundred more numerous than electrons in CRs). In contrast, a starburst galaxy like Arp 220 can reach a calorimetric efficiency above 80% for protons in some propagation models [380].
The argument for increasing calorimetry in high star-formation rate galaxies seems backed up by the fact that their gamma-ray spectrum is observed to be hard, or at least harder than in the Milky Way, with a photon index ∼2.2–2.3 [372,375], which is expected if the emission is mostly hadronic in origin and CR transport is loss-dominated.
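A minimal leaky-box sketch makes the calorimetry argument quantitative. All numbers below are illustrative assumptions, not values from the cited works: we take a pp energy-loss time of ∼5 × 10⁷ yr at n = 1 H cm⁻³, scaling as 1/n, and compare it to an assumed escape time.

```python
T_PP_AT_N1 = 5e7  # yr; assumed pp loss time at n = 1 H cm^-3, scaling as 1/n

def f_cal(n_gas, t_esc):
    """Fraction of injected CR proton power lost in the ISM before escape:
    f_cal = t_loss^-1 / (t_loss^-1 + t_esc^-1)."""
    t_loss = T_PP_AT_N1 / n_gas
    return (1.0 / t_loss) / (1.0 / t_loss + 1.0 / t_esc)

# Milky-Way-like: n ~ 1 H cm^-3, diffusive escape in a few Myr -> few percent
print(f"MW-like : f_cal ~ {f_cal(1.0, 3e6):.2f}")
# Starburst-like: n ~ 1000 H cm^-3, wind escape in ~0.3 Myr -> near-calorimetric
print(f"SBG-like: f_cal ~ {f_cal(1e3, 3e5):.2f}")
```

Even with these crude inputs, the contrast between a few-percent proton calorimeter (Milky-Way-like) and a >80% one (Arp-220-like) emerges naturally from the density dependence of the loss time.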

Although such a scheme is very reasonable as a general trend, there are several caveats in the interpretation of the still limited sample of galaxies available today. First of all, there is not yet a clear consensus on how CR transport evolves or differs among galaxies, and this inevitably affects our ability to predict the corresponding gamma-ray emission. For example, advection-dominated transport in high-star formation rate galaxies with powerful winds could provide an alternative explanation to the hard spectra. This is discussed in more detail in the next section. Furthermore, most galaxies in the sample are detected as point-like objects at GeV energies, and as such the gamma-ray emission is a blend of various sources: interstellar emission, populations of sources such as SNRs or young and recycled pulsars, etc. In that respect, the only spatially resolved galaxies so far show diverse and rather unexpected pictures. The distribution and flux levels of the emission from the LMC, SMC, and M 31, as summarised in Section 4.1, call for caution in interpreting the existing observations of the population of SFGs and SBGs, especially at low star formation rates. Continuous efforts are needed to extend the sample of gamma-ray-emitting SFGs, both in number and spectral coverage.

In that respect, extending the spectra towards the MeV and TeV ranges would certainly be instrumental in securing our understanding of the emission from SFGs and SBGs. On one hand, observations in the hard X-ray/soft gamma-ray bands, with NuSTAR or future instruments like ASTROGAM/AMEGO/GECCO, can probe the population of secondary and tertiary leptons in SFGs and SBGs, which is an indirect diagnostic of the degree of calorimetric efficiency in individual systems. On the other hand, in the VHE range, more detections of galaxies are crucially needed to enlarge the sample beyond M 82 and NGC 253 and allow a deeper broadband population study. Higher-sensitivity observations at >100 GeV with CTA and >1 TeV with HAWC, LHAASO, and, possibly, SWGO are expected to provide the extended spectral coverage relevant to study all those effects that are more likely to show up at >TeV energies in SBGs: energy-dependent escape, photon–photon absorption, and emission from unresolved populations of sources such as PWNe and pulsar halos. Incorporating 100 MHz–10 GHz radio observations, with their much better angular resolution, into the interpretation of gamma-ray observations of SFGs and SBGs can help to separate emission components and add useful information, e.g., on the intensity distribution in the galactic wind, for those systems that are viewed mostly edge-on [381].

Beyond individual detections towards infrared-selected targets, SFGs and SBGs are expected to contribute to the extragalactic gamma-ray emission as an unresolved component, integrated over cosmological distances. Because their emission results from hadronic interactions (only partially for moderately star-forming galaxies like the Milky Way, but predominantly for starbursts like M 82 or Arp 220), they are also expected to contribute some background emission of neutrinos. Over recent years, advances in the understanding of the composition of the extragalactic gamma-ray sky above 50 GeV have seriously constrained the possible contribution from SFGs and SBGs. From improved source catalogues and photon-counting statistics, the extragalactic source population is now constrained to consist of blazars at the 86% level, leaving SFGs and SBGs as a subdominant component [382]. Forward modelling of that contribution, based on luminosity functions for star-forming galaxies and the observed gamma-ray to infrared luminosity correlation, yields a contribution of the order of 10% of the extragalactic gamma-ray background [372,383,384], and a corresponding contribution at the 1% level to the 10 TeV–10 PeV astrophysical neutrino flux detected with IceCube [385]. It should be noted, however, that the latter constraint relies on the very strong assumption that the CR/gamma-ray properties inferred in the *Fermi*-LAT energy range can be readily extrapolated up to >10 PeV; here, more gamma-ray observations at the highest energies with LHAASO, HAWC, CTA, and SWGO are crucially needed. In addition, a more realistic modelling of SBGs, taking into account the diversity of spectra in the population, as actually observed in [372], allows a larger contribution to the neutrino background while remaining consistent with the extragalactic gamma-ray background [386].

### *6.2. How Do Cosmic-Ray Transport Properties Vary with Galactic Environment?*

The sample of detected SFGs and SBGs spans 3–4 orders of magnitude in star-formation rate, average gas density, and interstellar radiation field intensity (see [387] for some examples of extreme values). This set of very diverse conditions can be expected to give rise to markedly different CR populations, in particular because CR transport in such ISM conditions is very unlike that prevailing in the Milky Way (so far, to our knowledge, more limited attention has been paid to whether and how extreme galactic conditions could alter the population of accelerated particles fed by sources into the ISM). As such, the study of SFGs and SBGs in gamma rays is a very good test bed for CR physics. About a decade after the very first detections of external SFGs and SBGs in gamma rays (beyond the LMC, already observed with EGRET), however, it is fair to say that this new class of gamma-ray sources has not dramatically modified our understanding of CRs in galaxies, and there is still room for deeper studies. Below, we review our present ideas on how CRs may evolve in galaxies different from the Milky Way.

The efforts in modelling or interpreting the observed population can be broadly separated into two kinds: those aiming at reproducing the spectra of one or several detected individual objects, and those concerned with reproducing the gamma-ray to infrared luminosity correlation. In both cases, as already mentioned, CR injection into the ISM was treated with prescriptions similar to those used for the Milky Way, and it is rather on the side of CR transport that alternative scenarios were explored. The main processes regulating CR transport in the ISM were assessed in the specific context of SFGs and especially SBGs. Examples of relevant questions are:


In the following, we illustrate how some of these questions were addressed in recent works and with what conclusions. We note that having at our disposal an open-source, versatile model, analogous to GALPROP, DRAGON, or PICARD (see Section 2.4), that can be easily configured and adapted to different galaxies would be ideal to go beyond simple leaky-box models and allow easier comparison of various transport scenarios.

The extreme conditions encountered in some SFGs, and especially in SBGs, translate into much stronger energy losses than those experienced by CRs in the average ISM of the Milky Way. This effect alone already has significant implications. In [378], CR transport is assumed to be very similar to that in the Milky Way (e.g., the same diffusion coefficient), and the author assessed how the global interstellar gamma-ray emission is influenced by interstellar conditions, gas mass and distribution, and the related radiation and magnetic fields, for molecular gas densities increasing from 5 to 500 H₂ cm⁻³ (which falls somewhat short of describing extreme SBGs like Arp 220). The impact is evaluated in three energy ranges: 0.1–10 MeV, 0.1–100 GeV, and 0.1–100 TeV. As gas density increases (and with it star formation and the radiation and magnetic fields), the MeV range sees a strong increase of IC and Bremsstrahlung emission, with secondary electrons and positrons dominating as radiating particles. In the GeV range, emission from nucleon–nucleon inelastic collisions is the dominant process, but its intensity increases more slowly than linearly with gas density and flattens as the propagation of CR protons and nuclei becomes loss-dominated, which yields a harder spectrum with a 2.3–2.4 photon index. In the TeV range, while IC emission from primary electrons can be dominant or comparable to that from nucleon–nucleon inelastic collisions at the lowest average densities, the latter overwhelms the former at large densities because it increases almost linearly with density, the transport of CR protons and nuclei being largely diffusion-dominated at the highest energies.

One effect not investigated in [378] is that of photon–photon opacity in the densest radiation fields typical of SBGs. This was included in works such as [387] or [388], and the most interesting effects are a softening of the emission spectrum beyond TeV energies, and the generation of an additional population of CR electrons at TeV energies that, for sufficiently hard injection and strongly inhibited diffusion, yields a dominant IC contribution in hard X-rays. Both [378,387] (see also [389]) illustrate the potential of the hard X-ray/soft gamma-ray range as a diagnostic of CR interactions in dense galaxies, in particular for an indirect evaluation of the calorimetric efficiency of the system. The works of [387,390] show that improving current hard X-ray constraints from NuSTAR by a factor of a few may start to be constraining (in the case of NGC 253).

The above statements depend on the respective diffusion and advection properties assumed, and it remains to be clarified how both processes evolve among different galaxies and in which regime one or the other dominates CR transport. Works like [387] or [391] succeeded in reproducing the gamma-ray emission spectrum of selected SBGs with a model in which wind advection is the dominant spatial transport mechanism (either by neglecting diffusion or by using a small diffusion coefficient). In contrast, the model of [380] uses dedicated diffusion schemes but no advection, and achieves the same goal. At the population level, the observed gamma-ray to infrared luminosity correlation can be fairly well reproduced with a Milky-Way-like diffusion scheme [378], but an advection scheme with a star-formation-dependent wind velocity may be a viable alternative [392].

Figure 5 shows the most recently published version of the gamma-ray to infrared luminosity correlation, based on data from Ajello et al. [372]. Observations are compared to predictions from [378], based on a GALPROP-like model, and from Pfrommer et al. [393], based on an alternative modelling approach. In the latter work, the authors evaluated the gamma-ray emission in a series of two-fluid MHD galaxy formation simulations for a variety of dark matter halo masses and distributions. CRs are injected as a relativistic fluid following SN explosions, and two CR transport schemes are considered: pure advection with gas motions, and advection plus anisotropic diffusion along magnetic field lines (with a single, energy-independent value for the diffusion coefficient). Such an approach allows one to track self-consistently and in a time-dependent way the interplay between CRs, gas motions and outflows, and the galactic magnetic field structure, but it comes at the expense of approximations in the CR physics. Nevertheless, the authors show that, if a typical 10% of the supernova kinetic energy goes into CRs, the observed gamma-ray to infrared luminosity correlation can be fairly well reproduced, and they argue that it depends little on uncertainties in CR transport. At high star-formation rates, most of the CR energy is lost to hadronic interactions, in agreement with the calorimetry argument put forward in other works; at low star-formation rates, however, the model overpredicts the emission, and the authors suggest that more realistic multi-phase ISM descriptions would be needed in that regime. Such works support a trend towards numerical models in which the complexity of CR physics is dynamically coupled to the galactic environment, models whose interest reaches far beyond high-energy astrophysics (e.g., cosmological simulations of galaxy formation).

**Figure 5.** Gamma-ray to infrared luminosity correlation. The observed luminosities, plotted as black crosses and orange triangles, come from Ajello et al. [372] and were converted to the 0.1–100 GeV range under the assumption that the emission spectrum is a power law with photon index 2.2 (the average value measured by the authors). The correlation band plotted in gray corresponds to the so-called combined fit. Overlaid are predicted luminosities: from [378], as blue dots, for a GALPROP-like model of galaxies with different sizes and gas distributions under the assumption of a Milky-Way-like CR diffusion scheme; and from Pfrommer et al. [393], as red hexagons and cyan squares, for a series of two-fluid magneto-hydrodynamical galaxy formation simulations for a variety of dark matter halos under two different assumptions for CR transport: pure advection, and advection plus anisotropic diffusion (see text).

In exploring CR spatial transport in external galaxies, the work of [380] offers an interesting alternative to the generic prescriptions mainly in use so far. The authors argue that starburst regions are dominated by volume-filling, cold, and weakly ionized gas, unlike the Milky Way, where the ISM is mostly filled with hot and warm ionized gas. This implies an efficient damping of MHD turbulence modes by ion-neutral friction, and a cutoff of the externally driven turbulence cascade at scales such that particles with energies below a few hundred TeV are not efficiently scattered. The streaming of CRs is then controlled by self-generated turbulence only, which is demonstrated to occur at the Alfvén speed for particles ≤1 TeV and to increase with energy above that, with a power-law dependence steeper than linear. The corresponding macroscopic diffusion scheme implies a specific flavour of energy-dependent diffusive escape, such that the resulting hadronic-dominated gamma-ray spectrum follows the CR injection spectrum up to ∼100 GeV and gradually falls off at higher energies (also because photon–photon absorption comes into play). This is shown to provide a convincing description of the SBGs M 82, NGC 253, and Arp 220. In that picture, advection in a galactic wind is not needed, but not excluded either. The authors argue that, while most CRs should be injected into the cold gas phase to ensure a high calorimetric efficiency and a sufficient level of gamma-ray emission, some CRs can be advected away from the plane with the hot ionised phase; these will contribute few gamma rays (and retain the hard injection spectrum anyway).

### **7. Summary and Perspectives**

In the past decade, gamma-ray observations have shed new light on the richness and diversity of the processes that govern the build-up of CR populations in galaxies. While there is an overall agreement with the foundations of the standard Galactic CR paradigm, gamma-ray data, along with direct CR measurements and other related observables, have highlighted the limits of some standard assumptions and modelling approaches, and have therefore spurred many new developments in the field. We summarise below the main points highlighted in this review, before discussing forthcoming prospects and challenges.


extended components of a nature yet to be understood. Extended emission from the SMC lacks any remarkable correlation with gas densities and star-formation sites, while in M31 the gamma-ray emission is concentrated in the innermost 5 kpc, possibly due to the larger concentration of an old stellar population in the bulge. These studies have so far been limited to the GeV range, and the topic has hardly been explored observationally in the TeV range.


of galactic environments, but it is fair to say that our understanding of the CR life cycle still lacks extensive testing for other galaxies.

In the GeV range, where the entire sky was surveyed by the *Fermi* LAT, interstellar emission has been explored in depth. In the TeV range, however, we have just started to scratch the surface. In the next decade, the deep and extensive surveys of the Galactic plane and LMC with CTA, combined with the continued operation of HAWC, the starting exploitation of LHAASO, and, possibly, the advent of SWGO (Section 2.1), will unveil what interstellar emission looks like in the TeV range and above. This will shed light on crucial questions such as the origin of the inner-Galaxy hardening. It also has the potential to reveal how interstellar emission from >TeV CRs connects to ISM structures and source regions on a variety of different scales. Very-high-energy observations are of particular interest for studying the early phases of the CR life cycle. In particular, for the study of gamma-ray emission in the source vicinity, developments in the very-high-energy range may alleviate the problems encountered when trying to disentangle these objects from large-scale diffuse emission, since we expect that at energies above 100 GeV large-scale interstellar emission should be comparatively less intense. The good angular resolution of CTA is essential for the proper identification of emission components in complex regions, or for resolving fine structures close to the shock in SNRs. At the same time, TeV instruments are expected to increase the number of external galaxies detected in gamma rays and provide a broader view of different realisations of the CR phenomenon at very high energies.

Toward the lower end of the gamma-ray spectrum, a new (sub-)MeV mission such as ASTROGAM, AMEGO, or GECCO (see again Section 2.1) has a huge potential to better reveal the spectral and spatial properties of IC emission from the Galaxy and, therefore, to infer the distribution of CR electrons, which better sample CR inhomogeneities since they are affected by energy losses more strongly than nuclei and remain much closer to their sources. The improved performance in the MeV to GeV energy range of ASTROGAM or AMEGO would also allow us to probe CR nuclei in nearby clouds at sub-pc scales to test how low-energy CRs get depleted in their interiors, to follow the release and diffusion of particles around SNRs and pulsars, and to infer the spectral energy distribution of the bulk of CR nuclei in SFRs in order to estimate the CR pressure in these environments.

Besides developments in gamma-ray instruments, major progress in the years to come is expected to stem from a more extensive integration of gamma-ray observations in a broad multi-wavelength/multi-messenger context. This is necessary to unambiguously interpret gamma-ray observations, as illustrated throughout the review, and it is also needed to explore the cross effects with related fields (e.g., star formation, galaxy formation and evolution, and astrobiology). At the same time, major theoretical and numerical developments are already warranted and under way (Section 2.4), and a challenge to the community is to ensure their open dissemination and use, in order to fully exploit the existing and forthcoming experimental data and better connect macroscopic observables to the microphysics of non-thermal particle transport.

**Author Contributions:** The three authors contributed equally to conceptualization, literature review, visualisation, and writing. All authors have read and agreed to the published version of the manuscript.

**Funding:** L.T. and P.M. acknowledge financial support by CNES for the exploitation of *Fermi*-LAT observation and from CNRS-INSU for their work on CTA, as well as from the French Agence Nationale de la Recherche under reference ANR-19-CE31-0014 (GAMALO project). D.G. has received financial support through the Postdoctoral Junior Leader Fellowship Programme from la Caixa Banking Foundation (grant n. LCF/BQ/LI18/11630014). D.G. was also supported by the Spanish Agencia Estatal de Investigación through the grants PGC2018-095161-B-I00, IFT Centro de Excelencia Severo Ochoa SEV-2016-0597, and Red Consolider MultiDark FPA2017-90566-REDC.

**Acknowledgments:** The authors acknowledge discussions during the preparation of the review with Peter von Ballmoos, Henrike Fleishack, Michael Kachelrieß, Elena Orlando, and Andy Strong. Special thanks to Sarah Recchia for providing the model curve in Figure 3, to Carmelo Evoli for providing the model curve in Figure 4, to Marco Ajello and Mattia Di Mauro for providing the data in Figure 5, and to Reshmi Mukherjee for the comments on the manuscript. This work has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and of NASA's Astrophysics Data System Bibliographic Services. The preparation of the figures has made use of the following open-access software tools: APLpy [394], Astropy [395], Matplotlib [396], NumPy [397], SciPy [398].

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the review, in the literature selection, in the writing of the manuscript, or in the decision to publish.

### **Abbreviations**

The following abbreviations are used in this manuscript:


### **References**


### *Article* **Photon–Photon Interactions and the Opacity of the Universe in Gamma Rays**

**Alberto Franceschini †,‡**

Department of Physics and Astronomy, University of Padova, I-35122 Padova, Italy; alberto.franceschini@unipd.it; Tel.: +39-049-827-8247

† Current Address: Dipartimento di Fisica e Astronomia, Vicolo Osservatorio 3, I-35122 Padova, Italy.

‡ Paper submitted to the Journal *Universe*, the issue on *Gamma Ray Astronomy*.

**Abstract:** We discuss the transparency of the Universe to gamma rays due to the extragalactic background light, and its cosmological and physical implications. Rather than a review, this is a personal account of the development of 30 years of this branch of physical science. An extensive analysis of the currently available information appears to us to reveal a global coherence among the astrophysical, cosmological, and fundamental-physics data, or, at least, no evident need so far of substantial modification of our present understanding. Deeper data from future experiments will verify to what extent and in which directions this conclusion should be modified.

**Keywords:** high energy astrophysics; background radiation; photon–photon interaction; pair production

### **1. Introduction**

Photons are by far our fundamental channel of information for investigating the Universe, its structure, and its origin. Fortunately, after the optically thick phase ending at recombination, 380,000 years after the Big Bang, and thanks to the disappearance of free electrons from the cosmic fluid, photons over a large range of frequencies have been allowed to travel almost freely across the Universe.

However, Thomson scattering, which was so effective before recombination, is not the only process limiting the photon path. Once the first cosmic sources—either stellar populations, galaxies, or gravitationally accreting Active Galactic Nuclei (AGN)—started to shine at about redshift *z* ∼ 10 (e.g., [1]), a large flow of low-energy photons progressively and homogeneously filled up the entire Universe. This low-energy photon field, covering a wide frequency range from the far-UV to the millimeter (0.1 < *λ* < 1000 μm) and growing with time down to the present epoch, is referred to as the Extragalactic Background Light (EBL). It adds to the already present and much brighter relic of the Big Bang, the Cosmic Microwave Background (CMB).

As a consequence of the presence of such a high-density diffuse background radiation, high-energy particles—both cosmic rays and photons—have a high chance of interaction. Gamma ray photons, in particular, have a significant probability, increasing with energy, of colliding with background photons and annihilating into an electron–positron pair [2,3],

$$
\gamma_{VHE} + \gamma_{EBL} \to e^- + e^+ ,
$$

hence essentially disappearing from view. Subsequent pair cascades, also depending on the strength of the magnetic field, would scatter the secondary gamma rays to lower gamma ray energies. Quantum electrodynamics makes precise predictions [4] about such a probability, which peaks when the product of the two photon energies equals, on average, the square of the rest-mass energy, $(m_e c^2)^2$. As a consequence, Very High Energy (VHE) spectra of extragalactic sources show high-energy exponential cutoffs $\propto e^{-\tau_{\gamma\gamma}}$, where $\tau_{\gamma\gamma}$ is the optical depth to photon–photon interaction. Once the source distance is known, a spectral analysis of the gamma ray source, complemented with some assumptions about the intrinsic spectrum, allows us to measure $\tau_{\gamma\gamma}$, and hence to infer the background photon density.

**Citation:** Franceschini, A. Photon–Photon Interactions and the Opacity of the Universe in Gamma Rays. *Universe* **2021**, *7*, 146. https://doi.org/10.3390/universe7050146

Academic Editors: Ulisses Barres de Almeida and Michele Doro

Received: 5 April 2021 Accepted: 7 May 2021 Published: 14 May 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In conclusion, on one side the *γγ* interaction analysis is required to infer the intrinsic source spectrum for physical investigations of its properties; on the other, this opacity effect offers the potential to constrain an observable, the EBL background, which is of great cosmological and astrophysical interest. The EBL collects, in an integrated fashion, all radiation processes by cosmic sources between 0.1 and 1000 μm during the whole lifetime of the Universe, and thus sets a fundamental constraint on its evolutionary history.
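As a back-of-envelope illustration of the numbers involved (a sketch of my own, not taken from the article), the peak-interaction condition quoted above, $E_\gamma \, \epsilon \approx (m_e c^2)^2$, can be inverted to estimate which EBL wavelength a given VHE photon preferentially interacts with, and the $e^{-\tau}$ attenuation applied to an intrinsic flux:

```python
import math

# Sketch: which EBL photons does a VHE gamma ray preferentially interact with?
# Uses the peak condition E_gamma * eps ~ (m_e c^2)^2 quoted in the text.
M_E_C2_EV = 0.511e6   # electron rest energy, eV
HC_EV_UM = 1.23984    # h*c in eV * micron

def target_ebl_wavelength_um(e_gamma_tev):
    """EBL wavelength (micron) at which gamma-gamma absorption peaks."""
    eps_ev = M_E_C2_EV**2 / (e_gamma_tev * 1e12)  # target photon energy, eV
    return HC_EV_UM / eps_ev

def attenuate(intrinsic_flux, tau):
    """Observed flux after absorption with optical depth tau."""
    return intrinsic_flux * math.exp(-tau)

# A 1 TeV photon is mostly absorbed by near-/mid-IR EBL photons (~5 micron),
# a 100 GeV photon by optical-UV ones (~0.5 micron).
print(target_ebl_wavelength_um(1.0))   # ~4.7 micron
print(target_ebl_wavelength_um(0.1))   # ~0.47 micron
```

This simple scaling is why TeV spectra probe the IR EBL while sub-TeV spectra probe its optical-UV part; the full cross-section averaged over angles shifts these numbers by factors of order unity.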

Because direct measurements of such radiations are very difficult, or even impossible, the possibility of constraining them via gamma ray photon-opacity analyses is highly valuable, and forms an interesting bridge between the high-energy physics and the low-energy astrophysics and cosmological domains.

Photon–photon interactions and opacity effects are also of relevance for other questions of cosmology and fundamental physics. One is related to the existence of tiny magnetic fields on large cosmic scales. The latter might originate during the earliest inflationary phases, or alternatively in dynamo effects during the growth of large-scale structure. Such low-intensity fields are not directly measurable, e.g., with Faraday rotation, while a potential probe is offered by magnetic deflections of the electron-positron pairs produced by gamma–gamma interactions, followed by Inverse Compton (IC) scattering of background photons. This effect produces both extended halos from the reprocessed gamma rays and spectral bumps and holes, potentially measurable with Fermi/LAT and Imaging Air Cherenkov Telescopes (IACT) [5,6].

Opacity measurements have also been considered for constraining cosmological parameters, like the Hubble constant *H*0 ([7–9], among others). In our view, however, the degeneracy in the solutions due to the large number of parameters involved is such as to make this application of very limited value compared to the many alternative observational approaches, at least until a new generation of IACTs appears with significantly better spectral resolution, able to identify sharp absorption features, e.g., due to the integrated Polycyclic Aromatic Hydrocarbon emissions in the infrared (IR) EBL. Similar problems limit the possibility to measure the redshifts of distant blazars <sup>1</sup> lacking a spectroscopic measurement [10].

Processes involving VHE photons and opacity measurements in distant gamma ray sources allow us important tests for possible deviations from the Standard Model predictions, and for new physics, well beyond the reach of the most powerful terrestrial accelerators (e.g., the Large Hadron Collider).

In particular, possible violations of a fundamental physical law such as the special-relativistic Lorentz Invariance are testable in principle. Such violations may arise in the framework of alternative theories of gravity and quantum gravity [11,12].

While quantum gravity effects are expected to manifest themselves in the proximity of the extreme Planck energy $E_{QG} = \sqrt{hc^5/2\pi G} \simeq 10^{19}$ GeV, it turns out that they may be testable even at much lower energies. In particular, the quantization of space–time may affect the propagation of particles, and a modification of the dispersion relation for photons in the vacuum would appear at an energy scale set by $E_{QG}$, leading for example to a relation

$$c^2 p^2 = E^2 \left(1 + \lambda E/E_{QG} + \mathcal{O}\left((E/E_{QG})^2\right)\right) \tag{1}$$

with *E* the photon energy and *λ* a dimensionless parameter that, if different from zero, would violate the Lorentz invariance [13]. Such a dispersion relation leads to an energy-dependent propagation velocity of photons, $v = dE/dp = c\left(1 - \lambda E/E_{QG}\right)$ to first order, with a consequent energy-dependent time of arrival that is testable with VHE observations of fast transient sources like variable blazars or even Gamma Ray Bursts (GRB) [14].

<sup>1</sup> Blazars are Active Galactic Nuclei hosting nuclear jets of plasma in directions close to the observer's line-of-sight. They make, together with flat-spectrum radio sources, the most numerous population of extragalactic sources at HE and VHE energies.

The interesting point here is that an anomalous dispersion as above would also affect the interactions of high-energy particles, like the photon–photon collisions. The threshold condition for pair creation, given one photon of energy and momentum $E_1$ and $P_1$ and a second with $E_2$ and $P_2$, given by $(E_1 + E_2)^2 - (P_1 - P_2)^2 \geq 4 m_e^2 c^4$, would be modified by the anomalous dispersion as $2 E_1 E_2 (1 - \cos\theta) - \lambda E_1^3/E_{QG} \geq 4 m_e^2 c^4$, assuming $E_1$ is the gamma photon energy and $E_2$ that of the low-energy background photon. Deviations from Lorentz Invariance, with $\lambda \neq 0$, would then be testable, assuming the characteristic EBL energy $E_2 \simeq 1$ eV, with gamma ray energies in the range $E_1 \sim 10$ to 100 TeV, that is, well below the Planck energy scale [15]. An observational consequence would be the suppression or reduction of the photon–photon interaction and absorption in the spectra of distant blazars, potentially testable above 10 TeV (see Section 5.3 below).
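The modified threshold can be checked numerically; the sketch below (an illustration under the assumptions *λ* = 1, a head-on collision, and a 1 eV target photon) shows that pair production proceeds as usual at 10 TeV but switches off at a few hundred TeV, where the cubic LIV term overwhelms the kinematic term:

```python
M_E_C2_EV = 0.511e6   # electron rest energy, eV
E_QG_EV = 1.22e28     # ~Planck energy, eV

def pair_production_allowed(e1_ev, e2_ev, lam=1.0, cos_theta=-1.0):
    """LIV-modified threshold: 2 E1 E2 (1 - cos theta) - lam E1^3/E_QG >= 4 m_e^2 c^4."""
    lhs = 2.0 * e1_ev * e2_ev * (1.0 - cos_theta) - lam * e1_ev**3 / E_QG_EV
    return lhs >= 4.0 * M_E_C2_EV**2

# Against a 1 eV EBL photon, head-on:
print(pair_production_allowed(10e12, 1.0))    # True  (10 TeV: absorbed as usual)
print(pair_production_allowed(300e12, 1.0))   # False (300 TeV: absorption suppressed)
```

With these assumptions, suppression sets in near $E_1 \approx 2\sqrt{E_2 E_{QG}} \sim 200$ TeV; near-threshold collisions with higher-energy background photons are affected at correspondingly lower gamma-ray energies.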

Similarly, quantum-gravitational effects would also influence hadronic and mesonic processes, such as the pion decay *π*<sup>0</sup> → *γ* + *γ* at energies in the PeV range, as well as many other processes involving high-energy particles [16,17].

Among the various attempts to achieve a unified theory of all four fundamental interactions, including gravity, super-symmetric models and particularly super-string theories have been considered. A common prediction of these is the existence of spin-zero, neutral, very light bosons that are the generalization of the axion particle (see [18] for a review). Axion-like particles (ALP) are predicted to interact with two photons or with a photon and a static electromagnetic E and B field. So, in the presence of a magnetic field, high energy photons and ALPs would oscillate, as is the case for solar neutrinos: a VHE photon emitted by a gamma ray source would transform into an ALP by interacting with an intergalactic B field, and the latter would be reconverted into a photon after a subsequent interaction with another B field. Since during the ALP phase there is no interaction with background photons and no pair production, this would overall reduce the photon–photon opacity. Observations of distant VHE sources then offer the potential to constrain the existence of ALPs through the analysis of their gamma ray spectra.

The paper is structured as follows. A brief historical account of the photon–photon interaction process in the astrophysical context is given in Section 2. The Extragalactic Background Light, as the low-energy photon field responsible for the high-energy cosmic opacity, is discussed in Section 3: its cosmological significance, the issues related with its measurement, and the modelling of the known-source contributions to the EBL are all discussed in the corresponding subsections. The high-energy opacity of the Universe to photon–photon interaction is reported in Section 4, while specific problems concerning the ultraviolet and optical EBL, the near-IR, and the far-IR EBL are addressed in Sections 4.2–4.4. Various aspects and consequences, including current constraints on ALPs and Lorentz Invariance Violations (LIV), are discussed in Section 5, together with a mention of the relevant prospects for improvement expected from forthcoming and future instrumentation. Conclusions appear in Section 6.

### **2. Photon–Photon Interaction in the Astrophysical Context: Brief Historical Account**

The discovery of the CMB radiation in 1965, in addition to being a game-changer in our understanding of the Universe, prompted a number of reflections about its physical and astrophysical implications. It was immediately realised that the propagation of high-energy particles across it might be impeded to some extent (e.g., [19,20]). This was found to apply to cosmic ray particles, but also to photons [2,3,21,22], still in relation to the CMB cosmological background, and it was found that extragalactic gamma rays with energies >100 TeV cannot reach the Earth.

With the early development of radio and IR astronomy (see [23] for a review), it became clear that the Universe hosts not only the dense CMB photon field, but also a rich variety of other diffuse extragalactic radiations. While the radio background is of no relevance for gamma ray astronomy, the IR background, at wavelengths shorter than the CMB, was indicated by [24,25] as an important component. Based on the scanty data of the pre-IRAS era, Puget, Stecker, & Bredekamp [24] predicted quite realistic radiation intensities in the far- and mid-IR, results that were largely confirmed by the first all-sky investigation by the IRAS satellite in 1984 (see [26] for a review).

However, it was only with the launch of the first gamma ray observatory in space, the Compton GRO in 1991, that the first ideas of a profound relationship between low-energy astrophysics and high-energy physics started to be considered. In particular, soon after the launch, a large flare of the distant blazar 3C279 at *z* = 0.54 was detected by the Compton GRO at energies between 70 MeV and 5 GeV, showing a perfect power-law spectrum. Stecker, de Jager, & Salamon [27] argued that, assuming such a HE spectrum continues at higher energies and a far-IR (FIR) background intensity consistent with the IRAS data of the epoch, an exponential cutoff due to photon–photon interaction would be measured in the VHE spectra between 0.1 and 1 TeV by the new generation of air Cherenkov detectors. They also argued, for the first time, that this would provide an opportunity to obtain a measurement of, or at least a severe constraint on, the extragalactic IR background radiation field.

Indeed, a year later, the detection of the low-*z* (*z* = 0.03) blazar MKN421 by the Whipple Cherenkov observatory [28], showing a pure power-law spectrum up to 3 TeV, prompted Stecker & de Jager [29] to set an upper limit to the near-IR (NIR) EBL intensity of *νI*(*ν*) < 10<sup>−8</sup> W/m<sup>2</sup>/sr in the wavelength interval from 1 to 5 μm: this remarkable constraint on the NIR EBL was essentially confirmed by many later analyses.

Shortly afterwards, MacMinn & Primack [30] suggested that photon–photon opacity measurements could be used to constrain the processes of galaxy formation and evolution, and offered an interpretation of the preliminary evidence for a cutoff above 3 TeV in MKN421 based on their EBL estimate.

In the meantime, progress was made both on the theoretical side, concerning the EBL intensity [31], and on the Cherenkov instrumental facilities. The combination of Compton-GRO, HEGRA, and Whipple observations produced a remarkably extended and well-sampled HE and VHE spectrum of MKN421, showing a significant cutoff above 1 TeV that was interpreted by Stecker & de Jager [32] as preliminary evidence in favour of photon–photon absorption. Two EBL model solutions by [31] appeared to be consistent with the observations.

At the same time, in 1997, a huge flare of the other local (*z* = 0.03) blazar MKN501 was observed with HEGRA by Aharonian et al. [33] up to an energy of 10 TeV, thanks to the extreme luminosity of the source. On one side, some improper IR-EBL corrections and a too low adopted value for *H*0 led to the claim of a potential "TeV gamma ray crisis" [34]. On the other, Stanev & Franceschini [35] simply inferred from this a significant upper limit on the IR EBL between *λ* = 3 and 20 μm (*νI*(*ν*) < 5 × 10<sup>−9</sup> W/m<sup>2</sup>/sr), very close to the lower limits set by deep extragalactic counts in the same wavelength interval by the Infrared Space Observatory's [36] deep mid-IR surveys. This was the first evidence of very little room left for a truly diffuse IR EBL in addition to the contribution of known sources. These results were confirmed by similar analyses by Stecker & de Jager [37], Renault et al. [38], and Malkan & Stecker [39]. Instead, Konopelko et al. [40] and de Jager & Stecker [41], using a high normalization of the IR EBL from a model by [37], found an indication for a very large absorption correction (×20–40) in the HEGRA spectrum of MKN501 at the energy of 20 TeV.

Mazin & Raue [42] attempted to infer constraints on the spectral shape of the EBL over a large wavelength interval, 0.4 to 80 μm, through a joint analysis of 14 blazar TeV spectra and using EBL educated guesses from [43]. Although this analysis was limited by the strong degeneracy in the solutions, it already allowed them to constrain the EBL in the near-IR to only a factor of 2 to 3 above the absolute lower limits set by source counts, hence ruling out the IRTS measurements of the EBL.

The start of operations of the HESS large Cherenkov array led to the discovery in 2006 of two very distant blazars, H 2356-309 and 1ES 1101-232 at *z* = 0.165 and *z* = 0.186, which allowed Aharonian et al. [44] to set an important constraint on the EBL around 1 μm, essentially ruling out the large EBL excesses indicated by IR space telescope observations [45]. For the first time, photon–photon opacity measurements led to a rather fundamental achievement for cosmology. This issue will be further discussed below.

More recently, thanks to important developments allowed by relevant new observational facilities and a much better knowledge of the cosmic source populations from astronomical telescopes, the field has achieved maturity during the last decade. Major imaging Cherenkov telescope arrays (HESS, VERITAS, MAGIC) operating at VHE have been implemented, while the multi-epoch all-sky surveys by the Fermi space observatory have extensively monitored the sky at HE energies in an extremely complementary and synergistic fashion. Ultimately, the use of high-quality datasets has allowed us to talk of real measurements of the EBL with credible significance, instead of mere upper limits (see in particular [46,47]).

Finally, the operation of a plethora of astrophysical observatories in space (HST, ISO, Spitzer, Herschel) and from the ground has offered a deep understanding of cosmic low-energy source populations over a substantial fraction of the Hubble time, hence setting hard lower limits to the EBL over four photon-frequency decades from 0.1 to 1000 μm. All this will be the subject of our later discussion in Section 3.3.

### **3. The Low-Energy Extragalactic Background Radiation (EBL)**

*3.1. Origin and Cosmological Significance of the EBL*

The Extragalactic Background Light in the wavelength interval between 0.1 and 1000 μm is an important constituent of the Universe, permeating it quite uniformly (e.g., Longair [48]). The EBL is the collection of all photon emission processes at these wavelengths from the Big Bang until today, and offers integrated information on all of them. For this reason, it constitutes a fundamental constraint on our past history.

After recombination, a likely dominant source of energy is thermonuclear burning in stars, whose past integral peaks in the EBL at around *λ* ∼ 1 μm, as a consequence of the surface temperatures of stars, from a few thousand to about 100,000 degrees. However, stellar activity often takes place in dust-opaque media, as is particularly the case for young massive and luminous stars, which form inside dusty molecular clouds and exit them at later stages of their evolution. As a consequence, a significant fraction of the short-wavelength UV-optical stellar photons is absorbed by dust grains and re-emitted in the IR via a quasi-thermal process at the equilibrium temperature of a few to several tens of degrees.

The EBL then has two fairly well characterized maxima, at *λ* ∼ 1 μm and *λ* ∼ 100 μm, corresponding to the integrated stellar photospheric and dust-reprocessed emissions, with a minimum between the two at *λ* ∼ 10 μm.

Another important energy-generation process originates from gas accretion onto massive collapsed objects, as typically happens onto super-massive black holes (SMBH) in Active Galactic Nuclei. Similarly to stellar emission, gravitational accretion also emits copiously in the UV and the optical from the hot accreting plasmas, but again a significant part of this radiation is absorbed by dust in the accreting matter and emerges in the IR. In spite of the higher mass-to-energy transformation efficiency *η* of the latter process compared to stellar nucleosynthesis, because only about one part in a thousand of the processed baryonic material goes to accrete onto the SMBH, AGN accretion is deemed to contribute a minor fraction of the EBL compared to stars [49,50]. Assuming a stellar efficiency of *η* ∼ 0.001, an AGN efficiency of *η* ∼ 0.1, and similar evolutionary histories for the two [49], the average observed ratio of stellar mass to SMBH mass in galaxies of about 1000 [51] implies that only about 10% of the EBL intensity can be ascribed to AGN activity.
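The ~10% figure follows from simple bookkeeping: the energy released per unit processed mass scales with the efficiency *η*, so the AGN-to-stellar contribution ratio is (*η*<sub>AGN</sub>/*η*<sub>*</sub>) × (*M*<sub>BH</sub>/*M*<sub>*</sub>). A minimal check of the arithmetic, using the numbers quoted above:

```python
# AGN-to-stellar EBL ratio = (eta_agn / eta_star) * (M_bh / M_star),
# since the radiated energy per unit processed mass scales with eta.
eta_star = 0.001            # stellar nucleosynthesis efficiency
eta_agn = 0.1               # accretion efficiency onto SMBHs
mass_ratio = 1.0 / 1000.0   # average SMBH-to-stellar mass ratio in galaxies

agn_fraction = (eta_agn / eta_star) * mass_ratio
print(round(agn_fraction, 3))   # → 0.1, i.e. about 10% of the EBL from AGN
```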

The astrophysical processes in individual sources described so far are easily detectable with current imaging telescopes above the map background. A question, however, remains of how much these imagers can detect of more diffuse emissions, such as those from low-density regions in the outskirts of galaxies, or even from stellar populations distributed in the intergalactic medium: all these would easily sink into the background and possibly remain completely undetectable. Extreme occurrences of this kind might be a diffuse medium of decaying particles emerging from the Big Bang, or early populations of stars, like the so-called Population III stars often invoked to explain the early metal enrichment of the Universe [52–54].

These cosmological signals are in any case all registered in the EBL spectral intensity. Therefore, any attempt to retrieve the history of energy-production events in the past history of the Universe has to be confronted with the available EBL constraints, such as not exceeding the total measured intensity. In particular, the history of star formation is significantly constrained by EBL observations.

### *3.2. Observational Issues Related with the Background Radiations*

Unfortunately, direct measurements of the EBL over its whole wavelength range are very difficult, if not largely impossible. Excellent reviews of the subject can be found in [55] for the IR part and in [56] for the optical-UV one. The situation is well illustrated in Figure 1, showing how the Earth is immersed, more than within the EBL photon field, in a variety of radiations of local origin. The major component is the Zodiacal light, including both sunlight scattered by Inter-Planetary Dust (IPD) particles and their quasi-thermal emission peaking at ∼10 μm. Contributions by the integrated emission of faint stars and by high-galactic-latitude dust (cirrus) are also indicated.

**Figure 1.** Overview of the various components of the total night sky background at high galactic and ecliptic latitudes. The Zodiacal Inter-Planetary Dust emission, Zodiacal scattered light, and starlight (bright stars excluded) are indicated. The interstellar Galactic (cirrus) emission is normalized to the minimum column density observed at high Galactic latitudes (*N*<sub>H</sub> = 10<sup>20</sup> H atoms/cm<sup>2</sup>). Atmospheric O<sub>2</sub> air glow and OH emissions in the near-IR, as well as the CMB, are also indicated. (Figure taken from Leinert et al. [57].)

As shown by the figure, even outside the terrestrial atmosphere these emissions are so bright compared to the expected level of the EBL (around 10<sup>−8</sup> W/m<sup>2</sup>/sr) that any attempt at a direct measurement is prone to huge uncertainties.

One possible exception is the spectral window from *λ* ∼ 200 to 1000 μm, which lies at the minimum of all such radiations. It is exactly there that credible claims of an extragalactic background radiation signal have been reported by Puget et al. [58] and Lagache et al. [59]. While in this case too the total intensity is still dominated by cirrus and CMB emissions, the EBL can be safely extracted thanks to the a priori knowledge of the CMB spectral intensity and the clear dependence of the cirrus on Galactic coordinates.

At all other wavelengths, the foregrounds are so dominant as to prevent reliable EBL determinations. This is particularly the case from 5 to 100 μm, because of the IPD brightness and its weak dependence on the ecliptic coordinates (about a factor of 2 from pole to equator), which prevent it from being reliably subtracted from the total sky maps.

A particularly interesting situation has emerged in the range from about 1 to 5 μm (the near-IR EBL, NIR EBL), where completely different experiments (DIRBE on COBE [60], Spitzer [61], AKARI [62], IRTS [63], and CIBER [64], among others) indicated high levels of the NIR EBL intensity (but see also [65]). Such an intense isotropic radiation would have a spectrum like the Rayleigh-Jeans tail of a thermal radiation (as also shared by the Zodiacal scattered light) and a sharp cutoff at about *λ* ∼ 1 μm, consistent with processes taking place at high redshift (*z* ∼ 10), whose light is redshifted to that peak wavelength. Primeval (Population III) stars have been considered as a possible origin [45,66,67].
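The consistency of a ~1 μm cutoff with emission at *z* ∼ 10 follows from the redshift relation *λ*<sub>obs</sub> = *λ*<sub>rest</sub>(1 + *z*); a quick sketch of my own, taking the rest-frame Lyman limit (below which photons are absorbed by neutral hydrogen) as the cutoff wavelength:

```python
LYMAN_LIMIT_UM = 0.0912   # rest-frame Lyman limit, micron

def observed_wavelength_um(rest_um, z):
    """Cosmological redshift of an emitted wavelength: lambda_obs = lambda_rest * (1+z)."""
    return rest_um * (1.0 + z)

# Light blueward of the Lyman limit is absorbed at the source, so emission
# from z ~ 10 imprints a sharp cutoff near 1 micron in the observed background.
print(observed_wavelength_um(LYMAN_LIMIT_UM, 10))   # ~1.0 micron
```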

These estimates of the EBL are based on detailed modeling of the measured total intensity and the subtraction of the local Zodiacal and stellar contributions. The results show quite high values of the NIR EBL, of the order of *νI*(*ν*) ≃ 40–60 nW/m2/sr (see e.g., the review by [68]). Unfortunately, they are possibly compromised by uncertainties in the precise level of the Zodiacal foreground.

A way to circumvent this difficulty was identified in the study of the background fluctuations *δIν*/*Iν* instead of the total intensity *Iν*, under the well-supported hypothesis that the Zodiacal light is uniform on angular scales below a degree (see [69] for a general review). Various authors report excess fluctuations in deep maps in various near-IR and optical bands, typically showing wavelength dependencies consistent with the Rayleigh-Jeans law (among others [61,62,64,70–72]). These results have been alternatively interpreted in terms of sources present during re-ionization at redshift *z* > 8 or primordial black holes, or in terms of stellar emission from tidally stripped intergalactic stars residing in dark matter halos or extended stellar halos at low z.

Mitchell-Wynne et al. [72] have expanded this search to emissions specifically at redshifts *z* > 8, where the primeval sources responsible for the cosmological re-ionization are expected to be found, via an ultra-deep multi-wavelength investigation with HST. They find faint excess fluctuation signals above the current constraints based on Lyman-dropout galaxy surveys and low-z galaxies, but with low significance. Their conclusions appear to disfavour the very high values for the EBL found by the analyses of the foreground-subtracted total light intensity (see above), being instead consistent with lower EBL values of *νI*(*ν*) ∼ 10 nW/m2/sr. Another relevant outcome of the multiwavelength analysis by [72] was that a good part of the large near-IR intensity fluctuations found by other teams (e.g., by CIBER [64]) is likely attributable to diffuse light from our Galaxy, with the consequence of also lowering the inferred EBL flux.

Fluctuation studies, in any case, are anything but free of significant uncertainties. Apart from the problem of the residual local foreground contribution, these measurements are sensitive to the details of the instrumental point spread function of the imager. One further difficulty comes from the determination of the level of spatial clustering of the various source populations contributing to the total fluctuation signal. For example, Helgason & Komatsu [73] have suggested that some of the claimed excess fluctuations may be entirely explained by the clustering of ordinary galaxies.

The direct measurement of the optical-UV EBL is no easier, not only because of faint stars and Sun-scattered light, but also due to the diffuse high-Galactic-latitude dust reflecting starlight. Important data have recently been obtained from the photometric camera onboard the New Horizons spacecraft, observing at >40 AU from the Sun so as to get rid of the Zodiacal light. The data analysis by Lauer et al. [74] (see also [75]) has derived a total EBL intensity at the band center *λ* = 0.6 μm of (17.4 ± 5) nW/m2/sr, about half of which is due to the integrated emission of galaxies and half to a diffuse unresolved component of unknown origin. It will be interesting to check whether such a high background can be reconciled with the constraints set by the photon–photon opacity determinations.

In summary, despite being a fundamental cosmological component of great significance, and in spite of the enormous effort dedicated to its determination, the EBL has escaped any reliable direct measurement over most of its waveband range of definition. The next Sections will be dedicated to inferring entirely independent constraints on its value.

### *3.3. Modelling the Known-Source Contributions to the EBL*

Thanks to the variety of astronomical facilities, both on the ground and in space, operating over the full wavelength range of 0.1 to 1000 μm, we can at least set a reliable lower bound on the extragalactic radiation field from known sources, which in turn sets a minimal threshold for the cosmic photon–photon opacity.

Many attempts have been published to model the known-source contributions to the EBL based on the statistics of the multi-wavelength populations of galaxies and AGNs. The present-time EBL intensity can be obtained from the differential source number counts *N*(*Sν*) [sources/unit flux-density interval/unit solid angle], *Sν* being the source flux density [erg/s/cm2/Hz]:

$$I(\nu) = \int\_{S\_{\nu, \min}}^{S\_{\nu, \max}} N(S\_{\nu}) S\_{\nu} \, dS\_{\nu} \, \left[ \text{erg/s/cm}^2/\text{sr} \right] \tag{2}$$

or in the equivalent time-honoured units of W/m2/sr. *N*(*Sν*) is the immediate observable obtained from a sky survey based on observations at the frequency *ν*. If instead we are interested in the evolution of the intensity with cosmic time, which is needed to estimate the opacity for a distant gamma ray source, then the approach has to be complicated to account for the progressive production of photons by sources and for their redshift effects. In this case, the background intensity at redshift *z*∗ reads:

$$I\_{\nu\_0}(z^\*) = \frac{1}{4\pi} \frac{c}{H\_0} \int\_{z^\*}^{z\_{max}} dz \, j[\nu\_0(1+z)](1+z)^{-1}[(1+z)^3 \Omega\_m + \Omega\_\Lambda]^{-1/2},\tag{3}$$

for a flat universe with Ω*<sup>m</sup>* + ΩΛ = 1, with *j*[*ν*0] being the galaxy comoving volume emissivity:

$$j[\nu\_0] = \int\_{L\_{\rm min}}^{L\_{\rm max}} d \log L \cdot n\_c(L, z) \cdot K(L, z) \cdot L\_{\nu\_0} \tag{4}$$

with *K*(*L*, *z*) the K-correction, *K*(*L*, *z*) = (1 + *z*) *L*<sub>*ν*0(1+*z*)</sub>/*L*<sub>*ν*0</sub>, and *nc* the comoving luminosity function at redshift *z*, expressed in number of galaxies per Mpc<sup>3</sup> per unit logarithmic interval of the luminosity *L* at frequency *ν*0, with *L* in [erg/s/Hz]. The local background intensity of Equation (2) coincides with Equation (3) for *z*∗ = 0.
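As a concrete illustration (the power-law SED is an assumption of ours, not a case treated in the text), the K-correction above admits a closed form that is easy to check numerically:

```python
def k_correction_powerlaw(z, alpha):
    """K(L, z) = (1 + z) * L_{nu0(1+z)} / L_{nu0} evaluated for a power-law
    SED L_nu proportional to nu**(-alpha): the luminosity ratio is
    (1 + z)**(-alpha), so the K-correction reduces to (1 + z)**(1 - alpha)."""
    return (1.0 + z) ** (1.0 - alpha)
```

For *α* = 1 (flat in *νLν*) the K-correction is unity at every redshift, while steeper SEDs (*α* > 1) are dimmed with redshift beyond the pure luminosity-distance effect.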

From Equation (3), the photon differential proper number density [photons/cm3/Hz] at the redshift *z*∗ is given by:

$$\frac{d n\_{\gamma}(\epsilon\_0, z^\*)}{d \epsilon} = \frac{4 \pi}{c} \cdot \frac{I\_{\nu\_0}(z^\*)}{\epsilon\_0} \,, \tag{5}$$

where *ε*<sub>0</sub> = *hν*<sub>0</sub> is the photon energy.
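Equation (5) is essentially a unit conversion from specific intensity to proper photon number density; a minimal sketch in cgs units (the function name is ours, for illustration):

```python
import math

C_CGS = 2.99792458e10  # speed of light [cm/s]

def photon_density(I_nu, eps0):
    """Eq. (5): proper photon number density [photons/cm^3/Hz] from the
    background specific intensity I_nu [erg/s/cm^2/Hz/sr] at photon energy
    eps0 = h * nu0 [erg]. The 4*pi steradians of an isotropic field divided
    by c convert intensity into an energy density; dividing by eps0 counts
    photons."""
    return 4.0 * math.pi / C_CGS * I_nu / eps0
```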

Two main strategies have been followed to model the source contributions to the EBL.

### 3.3.1. Empirical Models

One approach to modelling the EBL is to stay as adherent as possible to the multi-wavelength observational data, including the source number counts, redshift distributions, and redshift-dependent luminosity functions. The models here have to identify the main population components, like star-forming and quiescent galaxies and Active Galactic Nuclei of various categories, each with its own statistical properties and its (somehow physically motivated) redshift evolution. The latter are fitted with simple parametric functions, needed to interpolate binned and discretized data (like the redshift dependencies) and to extrapolate them to regions of the parameter space where they are not directly measured (like the luminosity functions at the lowest, unmeasurable, *L*).

An important aspect supporting this approach has been emphasized by Madau & Pozzetti [76]: the observational number counts of extragalactic sources are so deep from the UV to the IR, and cover such a large fraction of the flux-density range at the various *λ* in Equation (2), that the undetected sources below the flux detection limits give completely negligible contributions to the EBL.
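The convergence argument can be reproduced with a toy integration of Equation (2): real deep counts flatten below the Euclidean slope at faint fluxes, so the integrand *N*(*S*)·*S* vanishes toward *S* → 0 and sources below the detection limit contribute negligibly. The slopes, break flux, and "detection limit" below are illustrative choices of ours, not survey values.

```python
import numpy as np

def counts(S, S_break=1.0):
    """Toy differential counts N(S): Euclidean slope -2.5 at the bright end,
    flattening to -1.5 below S_break (slopes illustrative; continuous at
    the break)."""
    return np.where(S > S_break, S**-2.5, S_break**-1.0 * S**-1.5)

S = np.logspace(-8, 2, 4000)            # flux-density grid, arbitrary units
integrand = counts(S) * S               # integrand of Eq. (2)
pieces = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(S)  # trapezoids
total = pieces.sum()                    # full counts integral
faint = pieces[S[1:] < 1e-4].sum()      # part below a deep "detection limit"
# faint / total is well below 1%: the unresolved sources are negligible.
```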

The quality of these empirical models of the EBL then rests on their ability to offer a precise, fine-grained description of the data and to account for all the available observational constraints. Models of this sort are discussed in particular in [31,39,43,50,77–83]. The proper photon density and its redshift variation, as well as the modelled local EBL spectral intensity, based on [82], are reported in Figure 2.

**Figure 2.** (**Left**): the energy-weighted proper number density of EBL photons as a function of the photon energy *ε*. The various curves correspond to different redshifts, as indicated. The contribution of CMB photons appears as the fast rise below 0.01 eV. (**Right**): the corresponding EBL spectral intensity (thick line). The data points correspond to the integration of the known-source number counts as in Equation (2). (Taken from [82]).

### 3.3.2. Physical Models

In a somewhat complementary fashion, models have been devised that predict the emissivity of cosmic sources from an a priori treatment of their origin and cosmic evolution, following physical prescriptions. In particular, an approach of this kind was pioneered by Primack et al. [84] and further developed by Gilmore et al. [85], based on semi-analytic ΛCDM modelling of galaxy formation [86].

An advantage of this is that the effects of different assumptions about the values of the cosmological parameters can be investigated, and that interpolations and, particularly, extrapolations outside the observationally constrained parameter space are physically motivated. A serious drawback is that it is almost impossible by these means to achieve full compliance with the observational statistics: the model suffers some rigidity in reproducing them, because it combines constraints from observational data with the physical prescriptions.

### 3.3.3. Other Approaches

Of some historical interest are EBL models less directly related to the data, built instead on some parametrization of the history of the star-formation rate in galaxies. The latter is complemented with prescriptions about the source spectral energy distributions, dust extinction, and their evolution. An advantage here is that the model uncertainties can be discussed through simple variations of the model parameters, which is definitely not the case for the more elaborate models mentioned previously.

Mazin & Raue [42] adopted a new approach, constraining a kind of free-form representation of the EBL spectrum and its redshift dependence directly from observations of the photon–photon opacity based on the VHE spectra of 14 extragalactic sources. Useful limits on the EBL have been inferred in this way. As an interesting development of this line of thought, Biteau & Williams [7] have analysed a very large sample of 106 blazar spectra with some 300,000 detected photons: by assuming an EBL spectral shape as in [77,85] and a simplified treatment of the redshift dependence of the EBL intensity, they found remarkable agreement among all these data with an EBL local spectrum as reported in Figure 3.

**Figure 3.** EBL intensity at *z* = 0 from the analysis of Biteau & Williams [7]. The best-fit spectra derived there are shown with light blue (gamma rays only, four point spectrum) and blue points (gamma rays + direct constraints, eight-point spectrum) based on best-fit scaled-up models by [77,85], 1*σ* confidence. Lower and upper limits are shown with orange upward-going and dark-brown downward-going arrows, respectively. The results by [47] and the H.E.S.S. Collaboration (2013) are shown for comparison. (Kind permission by Biteau & Williams [7]).

### **4. EBL and the Cosmological Photon–Photon Opacity**

One clearly established fact about the EBL is the existence of a minimum intensity threshold imposed by the numerous sources populating the sky and by the condition of general homogeneity and isotropy of the Universe. This is an unavoidable floor, on top of which other radiations of more diffuse origin can add. The latter are mostly impossible to measure, given the previously mentioned foreground problem and the lack of absolute photometric measurement capabilities of typical astronomical facilities. Our approach will then be to calculate photon–photon opacities for this minimal EBL and to test such predictions against gamma ray source spectra.

This can be immediately performed assuming an EBL spectral intensity and its cosmic evolution (see Figure 2), complemented with Standard Model treatment of the photon– photon interaction and pair creation process [4].

### *4.1. Cosmic Opacity due to Known Sources*

The optical depth as a function of the gamma ray source distance and photon energy is given, for an EBL photon density *dnγ*(*ε*, *z*)/*dε* as in Figure 2, by

$$\tau(E, z\_e) = c \int\_{0}^{z\_e} dz \, \frac{dt}{dz} \int\_{0}^{2} dx \, \frac{x}{2} \int\_{0}^{\infty} d\epsilon \, \frac{dn\_{\gamma}(\epsilon, z)}{d\epsilon} \, \sigma\_{\gamma\gamma}(\beta) \tag{6}$$

where *σγγ* is the pair-creation cross section and the argument *β* is computed as *β* ≡ (1 − 4*m*<sub>*e*</sub><sup>2</sup>*c*<sup>4</sup>/*s*)<sup>1/2</sup>, with *s* ≡ 2*Eεx*(1 + *z*)<sup>2</sup> and *x* ≡ (1 − cos *θ*), *θ* being the angle between the colliding photon directions, and, for a flat universe,

$$dt/dz = H\_0^{-1} (1+z)^{-1} \left[ (1+z)^3 \Omega\_m + \Omega\_\Lambda \right]^{-1/2}.$$

The intrinsic spectrum *Sint* of a gamma ray source at redshift *ze* is then absorbed as a function of energy as: *Sabs* = *Sint* exp [−*τ*(*E*, *ze*)].
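As a consistency check of Equation (6), the triple integral can be evaluated numerically for the special case in which the target photons are the CMB alone, whose blackbody spectrum is known exactly; the cosmological parameters, grid sizes, and helper names below are illustrative choices of ours, not those of the EBL model cited in the text.

```python
import numpy as np

SIGMA_T = 6.652e-25     # Thomson cross section [cm^2]
ME_C2   = 5.11e5        # electron rest energy [eV]
HC      = 1.23984e-4    # h*c [eV cm]
KT0     = 2.3482e-4     # k_B * 2.725 K [eV]
C_H0    = 1.32e28       # Hubble length c/H0 for H0 = 70 km/s/Mpc [cm]
OM, OL  = 0.3, 0.7      # flat cosmology, Om + OL = 1

def sigma_gg(beta):
    """Pair-creation cross section sigma_gg(beta) [cm^2]."""
    b = np.clip(beta, 0.0, 1.0 - 1e-12)
    return (3.0 * SIGMA_T / 16.0) * (1.0 - b**2) * (
        2.0 * b * (b**2 - 2.0) + (3.0 - b**4) * np.log((1.0 + b) / (1.0 - b)))

def trapz(y, x, axis=-1):
    """Plain trapezoidal rule along `axis` (avoids NumPy version issues)."""
    y1 = np.moveaxis(y, axis, -1)
    return np.sum(0.5 * (y1[..., 1:] + y1[..., :-1]) * np.diff(x), axis=-1)

def tau_cmb(E, z_e, nz=60, nx=40, ne=200):
    """Eq. (6) with dn/de restricted to CMB photons: optical depth for a
    photon observed at energy E [eV], emitted at redshift z_e."""
    z = np.linspace(0.0, z_e, nz)
    x = np.linspace(1e-4, 2.0, nx)          # x = 1 - cos(theta)
    tau_z = np.zeros(nz)
    for i, zi in enumerate(z):
        kt = KT0 * (1.0 + zi)
        e_lo = 0.1 * ME_C2**2 / (E * (1.0 + zi)**2)   # below threshold
        e_hi = max(30.0 * kt, 1e3 * e_lo)
        eps = np.logspace(np.log10(e_lo), np.log10(e_hi), ne)
        with np.errstate(over='ignore'):
            # blackbody proper photon density [photons / cm^3 / eV]
            dn = 8.0 * np.pi * eps**2 / HC**3 / np.expm1(eps / kt)
        s = 2.0 * E * eps[None, :] * x[:, None] * (1.0 + zi)**2
        beta2 = 1.0 - 4.0 * ME_C2**2 / s
        sig = np.where(beta2 > 0.0,
                       sigma_gg(np.sqrt(np.clip(beta2, 0.0, 1.0))), 0.0)
        inner = trapz(dn[None, :] * sig, eps, axis=1)   # integral over eps
        tau_z[i] = trapz(0.5 * x * inner, x)            # integral over x
    c_dt_dz = C_H0 / ((1.0 + z) * np.sqrt(OM * (1.0 + z)**3 + OL))
    return trapz(c_dt_dz * tau_z, z)                    # integral over z
```

For a PeV photon from a source at *z* = 0.01 this toy calculation returns *τ* of order 10<sup>3</sup>, consistent with the CMB-driven rise of *τ* discussed below, while for 100 GeV photons the CMB alone yields a vanishing opacity.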

This *σγγ* cross-section implies that the absorption of a gamma ray of energy *Eγ* is maximal for target photon energies

$$
\epsilon\_{\rm max} \simeq 2(m\_e c^2)^2 / E\_{\gamma} \simeq 0.5 \left(\frac{1 \text{ TeV}}{E\_{\gamma}}\right) \text{eV},
$$

or in terms of wavelength

$$
\lambda\_{\rm max} \simeq 1.24 \, (E\_{\gamma} [\text{TeV}]) \text{ } \mu \text{m}. \tag{7}
$$
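These relations make the energy-to-wavelength bookkeeping trivial; the helpers below simply encode Equation (7) and the companion energy relation quoted above (function names are ours):

```python
def ebl_wavelength_um(E_gamma_TeV):
    """Eq. (7): EBL wavelength [micron] most effectively absorbing a gamma
    ray of observed energy E_gamma [TeV]."""
    return 1.24 * E_gamma_TeV

def ebl_energy_eV(E_gamma_TeV):
    """Target photon energy [eV] of maximal absorption,
    eps_max ~ 0.5 * (1 TeV / E_gamma) eV."""
    return 0.5 / E_gamma_TeV
```

So a 1 TeV photon is mostly absorbed by near-IR EBL photons around 1.24 μm, while a 10 TeV photon probes the mid-IR EBL around 12 μm; this mapping underlies the waveband-by-waveband constraints of Sections 4.2–4.4.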

The plot in Figure 4 shows the optical depth as a function of the gamma ray energy for a range of values of the redshift of the source. In addition to our modelled EBL we include here also the contribution to the opacity coming from the high density of CMB photons, assuming a temperature of *T* = 2.728 K. This shows up as a fast increase of *τ* at high values of *Eγ*.

**Figure 4.** The optical depth by photon–photon collision as a function of the photon energy for sources located at *z* = 0.003, 0.01, 0.03, 0.1, 0.3, 0.5, 1, 1.5, 2, 2.5, 3, 4, from bottom to top. The fast rise at the high *τ* and *Eγ* values is due to the large volume density of CMB photons. The graph is based on the model by [82].

The effects of the CMB and radio backgrounds are further reported in Figure 5, showing the redshifts at which the photon–photon optical depth assumes the values of *τ* = 1, 2, 3, and 4.6, as a function of the gamma ray energy. All over the interval from 10<sup>5</sup> to 10<sup>10</sup> GeV the uncertainties are virtually absent, thanks to the precision with which the CMB spectrum is known [87]. At all other energies the uncertainties are also small if we assume that the *γ*–*γ* optical depths are only due to known sources, for which the EBL and radio backgrounds are set by the source number counts, available with good precision at all frequencies.

**Figure 5.** Graphical representation of the global photon–photon opacity. The graph shows the source redshifts *zs* at which the optical depth takes fixed values as a function of the observed hard photon energy *E*0; the y-scale on the right side shows the distance in Mpc for nearby sources. The curves from bottom to top correspond to a photon survival probability of *e*<sup>−1</sup> = 0.37 (the horizon), *e*<sup>−2</sup> = 0.14, *e*<sup>−3</sup> = 0.05 and *e*<sup>−4.6</sup> = 0.01. For *D* < 8 kpc the photon survival probability is larger than 0.37 for any value of *E*0. (Kind permission by [88]).

### *4.2. Constraining the Near-IR EBL (NIR-EBL)*

As mentioned in Section 3.2, background radiations at near-IR wavebands have been intensively investigated by several independent experiments, with claims of intensities largely in excess of the baseline EBL from known sources, as shown in the right panel of Figure 2. Such excesses would amount to several tens of nW/m2/sr in the figure. Thanks to the very sensitive instrumentation (HST and Spitzer from space, very large telescopes from the ground), the baseline EBL is very well constrained and understood at these wavelengths. At the same time, from Equation (7), these background photons produce opacity effects in the VHE spectra of sources at gamma ray energies of ∼1 TeV, where IACTs are maximally efficient. This combination then offers a good chance to test the excess NIR-EBL hypothesis via the pair-production effect.

The analysis of two distant blazars by Aharonian et al. [44] offered the first important test exploiting pair-production effects; it ruled out the reality of the excess at the levels previously indicated, and hence the possibility that such a large signal might originate from the first light sources at *z* ∼ 10. All subsequent analyses have fully confirmed this result, leaving little room for any truly diffuse background at such wavelengths (e.g., [77]). Eventually, this EBL level turned out to be consistent with the recent developments on the background signals in [72,73].

### *4.3. Constraining the UV-Optical EBL (UV-EBL)*

Over the last decade and counting, the Fermi space observatory has been the first major facility to promote gamma ray astronomy to a fully-fledged and mature field. Its *LAT* instrument detected several thousand extragalactic sources between 20 MeV and 300 GeV, including many high-z ones. Since the cutoff energy due to pair production scales with redshift approximately as

$$E\_{\gamma, \textit{cutoff}}(z) \sim 800(1+z)^{-2.4} \,\text{GeV},$$

the observatory turned out to be in the ideal position to investigate how this cutoff evolves with *z* and hence, from Equation (7), how the UV-EBL intensity evolves with time. This analysis was performed by Ackermann et al. [47], who carried out a detailed comparison and found excellent agreement with the model predictions by [77], and also good agreement with [78,79,85,89]. The analysis was further expanded by [90] to include the spectra of as many as 739 active galaxies and one GRB up to a redshift *z* ≃ 4.35. The inferred constraints on the UV-EBL are so precise and detailed that, assuming recipes for the dust extinction from the literature, these data were used to estimate the evolutionary UV emissivity and the history of star formation in the Universe per average comoving volume, given the tight relation between UV light and the rate of star formation.
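The quoted scaling gives a quick estimate of where attenuation sets in for a source at a given redshift; the helper below just encodes the approximate relation above, an order-of-magnitude guide rather than a fit:

```python
def e_cutoff_GeV(z):
    """Approximate pair-production cutoff energy for a source at redshift z:
    E_cut ~ 800 * (1 + z)**-2.4 GeV (order-of-magnitude scaling)."""
    return 800.0 * (1.0 + z) ** -2.4
```

By *z* ≃ 3 the cutoff has dropped to ∼30 GeV, well inside the LAT band, which is why Fermi spectra of high-z blazars constrain the UV-EBL.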

These results appear remarkably consistent, over the full range of EBL wavelengths 0.1 < *λ* < 5 μm, with the mere integrated emissions of known galaxies. Instead, they are not entirely consistent with the latest direct evaluation of the local EBL at *λ* = 0.608 μm by the New Horizons interplanetary explorer, *νI*(*ν*) ≃ 17.5 ± 4.2 nW/m2/sr (see Section 3.2), against the intensity of *νI*(*ν*) ≃ 6<sup>+2</sup><sub>−1</sub> nW/m2/sr allowed by the pair-production constraint. This is a significant inconsistency that has to be resolved in some way. It cannot be excluded that this may indicate some improper calibration of the New Horizons photometers.

### *4.4. Constraining the Far-IR EBL (FIR-EBL)*

The wide wavelength interval from 5 to 300 μm hosts a large portion of the integrated radiant energy by cosmic sources (Figure 2). This is radiation by dust extinguishing short-wavelength emission by galaxies and AGNs, and re-processing it as quasi-thermal radiation. Indeed, major episodes in the formation of galaxies, of their stellar populations, and of AGN gravitational accretion happen inside dust-opaque media, where extinction of energetic radiation favours the coalescence and collapse of the primordial gas [49,50,91].

We have seen in Section 3.2 that over that wavelength interval direct observations of the Infrared EBL are precluded by the dominance of the Inter-Planetary Dust and Galactic dust emissions ([55] and Figure 1). Infrared telescopes from space can detect point sources, but are blind to diffuse emission, like extended halos of dust emission or truly diffuse processes, because of the huge background noise. Also, the source confusion problem due to the limited angular resolution at such long wavelengths prevents the detection of faint sources.

Clearly, pair-production opacity effects detectable in the spectra of distant gamma ray sources offer an interesting tool to indirectly measure the IR-EBL [92]. From Equation (7), the FIR-EBL can be constrained by VHE observations at energies above several TeV. With the current IACT instrumentation, the highest energy photons so far detected from extragalactic sources came from the two lowest-z prototypical blazars, MKN421 and MKN501. Aharonian et al. [33], Stanev & Franceschini [35], Stecker & de Jager [37], and Aharonian et al. [93–95] took advantage of exceptional flaring events of the two sources in 1999–2001 and 1997 to constrain their spectra up to 10–20 TeV.

Equation (7) tells us, however, that constraining the EBL over a larger portion of the dust-reprocessing region in Figure 2 requires probing VHE spectra well above 10 TeV and possibly up to 50–70 TeV. From Figure 4 it is evident that such high energy photons are detectable (say with *τ*[*Eγ*, *z*] < 10) only from very low redshift sources, say *z* < 0.03, meaning that even MKN421 and MKN501 are too far away to be suitable, while better chances are offered by long VHE observations of local radio-loud AGNs, like IC310 and M87 [92]. Figure 6 illustrates expected observations of these two local radio sources with various future VHE observatories during high states and an outburst. With sufficiently long integrations (observations with CTA and LHAASO being particularly promising), the spectra could be measured up to several tens of TeV and the FIR-EBL constrained almost up to 100 μm.

**Figure 6. Left panel**: **Top**: The photon–photon absorption correction (*exp*[*τγγ*]) for the source IC 310 at *z* = 0.0189, based on the EBL model by [82]. **Bottom**: The blue data-points and continuous line were taken during an outburst phase, the red data and continuous line during a prolonged high-state. The 50-h 5*σ* and 100 h 2*σ* sensitivity limits for CTA, and the HAWC 5 years limits are shown. The blue dotted line and the red dashed one indicate the SWGO and LHAASO 5 year 5*σ* limits, respectively. The 50-h limit for the forthcoming ASTRI mini-array is shown in green. **Right panel**: Photon–photon absorption for the source M 87 at 18.5 Mpc. The observed (open red and continuous line) and absorption-corrected (black line) spectral data are reported. Same as in the left panel. (Figure taken from [92]).

### **5. Discussion**

An important cautionary note is in order. Analyses based on the photon–photon interaction suffer from a degeneracy between the source gamma ray spectra and the EBL spectral intensity. For example, any attempt at constraining the EBL intensity should include some prior knowledge and assumptions about the extrapolation of the source spectra to the highest energies, where the pair-production cutoffs show up. Therefore, these analyses offer significant model constraints and consistency checks rather than precise measurements and parameter estimations.

In this Section we discuss some of these investigations, splitting the discussion into two parts: one considering constraints on astrophysics and cosmology, the other concerning themes relevant to fundamental physics. The former will assume Standard Model physical prescriptions for the photon–photon interaction, while the latter will instead adopt standard assumptions for astrophysics and look for consistencies, or inconsistencies, that would require modifications to the Standard Model.

### *5.1. Some (Resolved?) Controversy*

Let us first of all briefly mention a controversy that has originated from analyses of the cosmological gamma ray horizon and the pair-production process. Using IACT spectral data published during the last decade, some groups have found indications that EBL model corrections for pair-production opacity over-predict the observed gamma ray attenuation [96–101]. This would manifest itself as a spectral hardening, after EBL absorption correction, at photon energies corresponding to high optical depths.

Thanks to the joint efforts from space (Fermi) and the IACTs on the ground looking at blazars over a range of redshifts, the analysis was performed by comparing the gamma ray spectral slopes at HE and at VHE and searching for a spectral hardening with photon energy. Horns [101] in particular reports some indication of anomalous cosmic transparency by plotting the spectral slopes *α* as a function of the *γ*–*γ* optical depth *τγγ*: he finds that while *α* naturally steepens for small values of the optical depth up to *τγγ* ∼ 1, for *τγγ* > 1.5 it seems to show an upturn, which he attributes to anomalous transparency, considering that it would be contrived if such a hardening occurred in sources at exactly the energy where *τγγ* > 1, instead of depending on the source distance.

Various other teams have argued against such evidence [7,47,77,82,90,102]. A particularly extensive analysis is reported by Biteau & Williams [7]: based on their large VHE database of 106 gamma ray spectra, they report finding "no significant evidence for anomalies".

Of course, in the presence of such steep VHE spectra under the effect of the photon–photon interaction, all statistical and instrumental corrections become important for an appropriate interpretation. One of the most important is the statistical effect known as the Eddington bias, which applies when trying to measure a quantity for a statistical set, subject to errors, in the presence of strong gradients in the probability distribution of that quantity ([103,104] among others). This is certainly accounted for, at least to first order, in the analyses run by Cherenkov observatory teams based on fits of event distributions (or un-binned fits of event lists) fully accounting for the instrument response function (IRF), including the energy resolution. However, second-order corrections to the observational data, accounting in detail for the model spectral cutoffs, could become important, in addition to the systematic uncertainties induced by the energy scale of the instrument.
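The Eddington bias invoked above can be reproduced with a toy Monte Carlo: draw events from a steeply falling power-law spectrum, smear them with a (deliberately exaggerated) log-normal energy dispersion, and count events above a fixed energy. All numbers are illustrative choices of ours; no real IRF is used.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy spectrum dN/dE ~ E**-3 between 0.1 and 100 TeV (inverse-CDF sampling).
n, e_min, e_max = 1_000_000, 0.1, 100.0
u = rng.random(n)
true_E = e_min * (1.0 - u * (1.0 - (e_max / e_min) ** -2.0)) ** -0.5

# Exaggerated ~50% log-normal energy resolution, purely for illustration.
meas_E = true_E * np.exp(0.5 * rng.standard_normal(n))

n_true = int(np.sum(true_E > 10.0))   # events truly above 10 TeV
n_meas = int(np.sum(meas_E > 10.0))   # events reconstructed above 10 TeV
```

More events scatter upward across the threshold than downward, so the reconstructed counts above 10 TeV systematically exceed the true ones, mimicking a spectral hardening at the highest energies.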

In conclusion, it is perhaps fair to say that this controversy has recently somewhat weakened. Supporters of the anomaly seem to agree that the effect may not be so significant, and in any case it requires confirmation by future, more sensitive experiments [105]. With the currently available data set, mostly from existing IACT and Fermi observations, the analyses performed so far do not appear to have revealed inconsistencies with the present physical and astrophysical understanding significant enough to require a fundamental revision.

Certainly this is not to say that everything is clear and settled; further discussion will be reported in later subsections. In any case, forthcoming and future instrumentation will investigate regions of the parameter space that are so far uncovered.

### *5.2. The Present Understanding: Constraints on Astrophysics and Cosmology*

### 5.2.1. The History of Star-Formation

We have summarized in Section 4 a number of studies reporting constraints on the EBL from gamma ray observations. The bottom line appears to be that no major evidence has so far emerged for excess radiation components of the EBL in addition to what is contributed by known source populations. Some remaining open questions that might require further inspection concern the large claimed excess at near-IR wavelengths from space IR observatories and the factor ∼2 excess background at *λ* = 0.6 μm reported by the New Horizons spacecraft; both results are however uncertain for various reasons, especially the foreground contamination, and of limited statistical significance.

Not unexpectedly, though not entirely obviously a priori, the data on the photon–photon opacity did not require EBL levels lower than the minimal baseline set by the integrated emission of sources, as inferred from cosmological deep surveys and based on Equation (2).

Now let us reverse the argument: keeping in mind that the gamma ray data appear so far largely consistent with the mere EBL from known sources, one may ask whether there are indications from astrophysics and cosmology of processes and events implying larger background fluxes at some wavelengths. One such instance would be the source population responsible for the early metal enrichment of the primordial gas and for the re-ionization of the Universe at *z* ≥ 9, the Population III stars, which are the products of zero-metallicity star formation [53,106]. Such faint sources would be undetectable individually by astronomical telescopes, but their integrated emission might be substantial; indeed, they were mentioned in Section 3.2 as a possible origin of the putative large NIR-EBL excess.

The general question about such past excess activity can be addressed by considering the local relics of that past history, like the stellar mass and black-hole mass functions and the metal abundances observed in cosmic plasmas, all remnant products of high-z stars and AGNs.

Madau & Silk [107] dedicated a detailed analysis to the consequences of trying to explain the NIR-EBL excess. While not excluding that a small portion of that excess (say of the order of a few nW/m2/sr) might still be present, they conclude that attempting to explain the whole claimed excess faces a number of inconsistencies that make it a very unlikely possibility. These are related to the uncomfortably large amount of metals produced and, alternatively, to the excess amount of intermediate-mass black holes (a factor of 50 more mass than hosted in galactic nuclei), creating problems with the data on the X-ray background.

Similar constraints apply to populations of normal galaxies and AGNs in excess of those already categorized. A question might arise as to how far our current understanding of galaxy and AGN formation and evolution offers a consistent picture, or whether inconsistencies of any kind would call for major revisions, with impact on their EBL contributions. Once more, radiations from remote sources can be compared with the various local relics.

Madau & Dickinson [49] have performed an extensive review mapping the cosmic history of star formation and heavy-element production. Under the assumption of a universal initial stellar mass function (that proposed by Chabrier [108] in particular), the average stellar mass density in galaxies observed as a function of *z* matches well the integral of all the previous star-formation activity. The comoving rates of star formation and central black-hole accretion, all consistent with a huge amount of published observational data, follow a similar redshift evolution. The corresponding predicted rise of the mean metallicity of the Universe is also consistent with the observations of the abundance of metals in various cosmic sites and also, though somewhat marginally, with the energy requirements of the cosmological re-ionization from the cosmic "dark ages" to the present. Many published reports agree with the results of this analysis [82,109–112].

Driver et al. [109] performed an equally extensive analysis, including as many as 600,000 galaxies over the whole Hubble time, and reached similar conclusions, with the important addition that all these data not only offer a consistent picture of galaxy activity, as discussed above, but also strongly limit the fraction of stellar mass stripped or ejected by individual galaxies to no more than 13%.

All of this is entirely consistent with the EBL modelling of Sections 3.3 and 4, with little room for optical Intra-Cluster Light and Intra-Halo Light, and consistent with the data on the photon–photon opacity.

### 5.2.2. Potential Constraints on Primeval Re-Ionization Sources

Certainly, the new generation of large IACT arrays, such as CTA [113], will offer such a large sensitivity gain over current instrumentation, including the Fermi space observatory, as to extend the observational horizon at tens to hundreds of GeV up to substantial redshifts, *z* ∼ 1 to several. Concerning the previously mentioned Population III and cosmic re-ionization sources, new tests would then become feasible to detect such emissions from the pre-galactic era in the form of excess *γγ* absorption. This exploits the fact that, while the majority of the EBL photons from galaxies are produced at *z* < 1 (e.g., Figure 2) and their proper density vanishes at higher *z*, those from primeval sources strongly increase, proportionally to (1 + *z*)<sup>3</sup>, simply because of the cosmological expansion. An example is reported in Figure 7, where a modest excess EBL flux at *z* = 0 from very high-redshift sources becomes a factor of 50 in photon density already by *z* = 2, making a significant and possibly measurable contribution to the opacity for *z* > 1 blazars.

**Figure 7.** Energy-weighted proper number density of EBL photons as a function of the energy and for various redshifts. The standard EBL evolution, as in Figure 2, is compared to the densities obtained when including photons from primeval objects: an excess local background of less than a factor of 2 at 1.4 μm becomes a factor of 50 by *z* = 2 due to the (1 + *z*)<sup>3</sup> increase in the proper photon density. The color palette for the lines is the same as in Figure 4.
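
The (1 + *z*)<sup>3</sup> scaling quoted above can be checked with a few lines of arithmetic. A minimal sketch (the local excess factor of 1.85 is an illustrative value, chosen to be consistent with the "less than a factor 2" quoted in the caption):

```python
# Proper number density of a photon population with fixed comoving density
# scales as (1 + z)^3 with cosmological expansion.
def proper_density_boost(z):
    """Factor by which the proper photon density grows from z = 0 to z."""
    return (1.0 + z) ** 3

# Hypothetical local excess of primeval-source EBL photons over the
# galaxy background at 1.4 um (illustrative value below a factor of 2).
local_excess = 1.85

# By z = 2 the excess in proper photon density becomes:
excess_at_z2 = local_excess * proper_density_boost(2.0)
print(round(excess_at_z2))  # ~50, the factor quoted in the text
```

The boost (1 + 2)<sup>3</sup> = 27 turns even a modest local excess into a large contribution to the opacity seen by *z* > 1 blazars.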

Similar considerations relating the primeval re-ionization sources to EBL excesses are developed in [77,114].

### *5.3. Constraints on New Physics: Lorentz Invariance Violations and Photon to Axion-Like Particle Mixing*

Working with the high-energy photons observable at HE and VHE with Cherenkov observatories also offers invaluable tests of fundamental physics.

A major frontier of today's physics is the attempt to describe the gravitational interaction in the language of quantum mechanics, in order to achieve a coherent picture. In this context, modifications of the relativistic Lorentz transformation are expected at VHE energies by many proposed theories [115–117]. Indeed, tests of Lorentz Invariance with VHE gamma rays allow us to probe it at the highest observable energies.

A predicted effect of LIV may be an energy-dependent variation of the speed of light with respect to its standard value in vacuum, as previously mentioned. This would be a small effect (10<sup>−15</sup> in relative velocity units) even at the highest detectable photon energies, but it has nevertheless been investigated based on observations of flaring AGNs [118], GRBs [119], and other sources [120].
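
The order of magnitude quoted above can be reproduced with the usual first-order LIV dispersion estimate, Δ*v*/*c* ≈ *E*/*E*<sub>QG</sub>, taking the quantum-gravity scale at the Planck energy. A back-of-the-envelope sketch (the photon energy and source distance are illustrative choices, not values from the text):

```python
# First-order LIV estimate: fractional speed change dv/c ~ E / E_QG,
# accumulated arrival-time delay dt ~ (E / E_QG) * (D / c).
E_PLANCK_GEV = 1.22e19   # Planck energy in GeV
MPC_M = 3.086e22         # one megaparsec in metres
C = 2.998e8              # speed of light, m/s

def liv_velocity_shift(E_gev, E_qg_gev=E_PLANCK_GEV):
    """Fractional photon speed modification, linear LIV."""
    return E_gev / E_qg_gev

def liv_time_delay(E_gev, distance_mpc, E_qg_gev=E_PLANCK_GEV):
    """Arrival-time delay in seconds, ignoring redshift corrections."""
    return liv_velocity_shift(E_gev, E_qg_gev) * distance_mpc * MPC_M / C

# A 10 TeV photon from a source 100 Mpc away (illustrative numbers):
print(f"dv/c  ~ {liv_velocity_shift(1e4):.1e}")      # ~1e-15, as in the text
print(f"delay ~ {liv_time_delay(1e4, 100):.1f} s")   # a few seconds
```

A delay of seconds over a gamma-ray flare lasting minutes is why rapidly variable AGNs and GRBs are the probes of choice for this effect.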

A more readily testable potential effect concerns anomalies in the kinematics of particle collisions and scattering, particularly in pair-production processes, with consequences for the photon absorption effect. Kifune [117], among others, based on Equation (1) with *λ* = ±1 and assuming the emergence of quantum-gravity effects at the Planck energy, predicts that significant anomalies, such as spectral upturns or strong convergences, would be observable at blazar photon energies larger than ∼10 TeV. While this is at the edge of the capabilities of current-day gamma-ray observatories (among the tightest constraints on the LIV energy scale are those found in [121]), the new generation of instruments, both IACTs (CTA [122] and ASTRI [123], in particular) and water-Cherenkov arrays (LHAASO [124] in particular, in addition to the currently operating HAWC [125]), will be perfectly suited to cover this extreme spectral range, up to and above 100 TeV, with sufficient sensitivity. It should be noted, however, that the limited energy resolution of water-Cherenkov detectors, such as LHAASO and HAWC, may be a limiting factor in the analysis of the sharp cutoff expected from the rising cosmic IR background.

Another potential source of anomalous *γγ* opacity might be the existence of axions or axion-like particles (ALPs), mentioned in Section 1, which are among the candidates considered for the long-sought non-baryonic dark matter. Their expected mixing with two photons, or with a gamma ray and a virtual photon associated with environmental magnetic fields, would have potentially important observational consequences in the form of a reduced photon–photon opacity, because during the ALP phase no interaction with the EBL photons is expected to occur.

While awaiting direct laboratory detection of such extremely elusive particles, some indirect evidence could be obtained from detailed analyses of the associated anomalous photon–photon absorption effects on distant blazar spectra. This is what de Angelis, Galanti, & Roncadelli [126] and Galanti et al. [127], among others, have attempted, by comparing the spectral properties of samples of BL Lac blazars at various redshifts and VHE energies. The authors argue that, after standard EBL corrections (from [82]), the spectral indices Γ*em* of their sources show a correlation with redshift that has no physical justification, hinting at a lower average opacity, as allowed by photon–ALP mixing. While formally indicative of a few-*σ* effect calling for new physics, their result rests mostly on just a couple of objects around *z* ∼ 0.5 with very low inferred values Γ*em* ∼ 1–1.5.

Cenedese & Franceschini [128] further tested the scaling of the VHE spectral slopes of blazars (mostly high-peaked BL Lacs) between redshift *z* = 0 and 0.5, based on an improved sample of all blazars in the TeVCat catalogue [129], including VHE spectra observed at various epochs. Their results are summarized in Figure 8: the left panel reports the power-law fits to the observed VHE spectra at around 1 TeV, where the increase with redshift is caused by the stronger spectral softening due to the larger cosmic opacity. The right panel plots the distribution of spectral indices after correction for EBL absorption as in [82]: a marginal residual dependence of Γ*em* is indicated, but only at the 2*σ* significance level. It should also be cautiously considered that the observed trend, if any, might reflect a bias introduced by the Malmquist effect, which emphasizes higher-luminosity sources at larger redshifts, with possibly slightly different spectral properties.

**Figure 8. Left panel**: Values of the observed spectral indices Γ*obs* for the blazar sample analysed by [128], versus redshift. Fits with different polynomials are indicated. **Right panel**: Same as on the left panel, but after correction for EBL absorption following [82] (Γ*em*). The red continuous curve is the best-fit regression line to the data, showing a modest redshift dependence.

In conclusion, gamma-ray astronomy has an enormous potential to probe physics at extreme energies. However, at the sensitivity limits of current instrumentation, no evident discrepancy has been revealed that would call for modifications of the Standard Model. In particular, testing Lorentz Invariance requires extending IACT or water-Cherenkov observations to energies >10 TeV, which will be feasible only with very large arrays, while tests of ALP mixing will need better sensitivities to expand the observational parameter space and strengthen the statistics. Both requirements will be fulfilled by CTA, as extensively discussed in the review by Abdalla et al. [130]. Detailed simulations carried out there indicate, in particular, the low-redshift radio galaxy NGC 1275 and the two blazars MKN 501 and 1ES 0229+200 as optimally suited for ALP and LIV investigations, respectively.

### *5.4. Other Open Questions and Prospects for Astrophysics and Cosmology*

### 5.4.1. Jet Astrophysics

Gamma-ray astronomy also offers invaluable tests of the astrophysics of extreme environments. Astrophysical jets from galactic and extragalactic sources are certainly among these, while the detected highest-energy cosmic rays can hardly be regarded as anything other than their most extreme manifestation. A possible link between the two has recently been suggested by the detection of a PeV neutrino from the direction of a blazar, with concomitant flaring gamma-ray emission from the object [131]. Blazars and blazar jets are therefore candidate sources of the high-energy cosmic rays. The clear consequence would be that jets include not only leptonic particles and processes (electrons, positrons, synchrotron self-Compton, etc.), but also collimated beams of hadrons.

Hadron beams have been studied by various authors (e.g., [132–134]). Emitted protons and heavier nuclei would produce VHE photons via interactions and cascades along their trajectory, at some distance from Earth; the photon paths are then shorter than the source distance, with an overall reduced photon–photon opacity. Within this scenario, we would expect the emergence of spectral components at energies well above the TeV. However, while hadronic components in jets cannot be ruled out, the present phenomenology of blazar properties appears still overall consistent with standard leptonic processes, such as the synchrotron self-Compton model.

### 5.4.2. Cosmology

Finally, VHE photon propagation effects testable by gamma-ray astronomy have an interesting potential for analyses of cosmological interest. One of the hot topics in today's cosmology concerns the precise value of the Hubble constant, which has turned out to show inconsistent determinations based on local (*H*<sub>0</sub> > 72 km/s/Mpc) and high-redshift (*H*<sub>0</sub> < 68 km/s/Mpc) observables. This inconsistency has no explanation so far, and may even require new physics or a substantial modification of the standard ΛCDM model of cosmology [135].

Because the photon–photon optical depth is obviously dependent on the scale of the universe, hence on *H*<sub>0</sub>, observations of VHE sources at various *z* and of their spectral cutoffs can offer an entirely independent test. Preliminary attempts in this direction reported large uncertainties compared to other methods [8]. Good progress could be achieved with CTA, although I am not entirely confident that this might become really competitive with existing methodologies for measuring the cosmological parameters, due to the previously mentioned degeneracies.
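
The scaling behind this test is simple: the optical depth integrates the EBL photon density along the proper line-of-sight element, d*l* = *c* d*z* / [(1 + *z*)*H*(*z*)], so at fixed redshift and fixed EBL density it scales as 1/*H*<sub>0</sub>. A minimal sketch of this scaling (the flat-ΛCDM parameters are illustrative assumptions):

```python
# tau ~ integral of n(z) * sigma(z) * c dz / ((1 + z) * H(z)).
# At fixed z and fixed EBL density, tau therefore scales as 1 / H0.
def hubble(z, H0, Om=0.3, Ol=0.7):
    """H(z) in the same units as H0, flat LCDM (illustrative parameters)."""
    return H0 * (Om * (1 + z) ** 3 + Ol) ** 0.5

def path_weight(z_max, H0, steps=10_000):
    """Trapezoidal integral of dz / ((1 + z) H(z)) up to z_max."""
    dz = z_max / steps
    total = 0.0
    for i in range(steps):
        z0, z1 = i * dz, (i + 1) * dz
        f0 = 1.0 / ((1 + z0) * hubble(z0, H0))
        f1 = 1.0 / ((1 + z1) * hubble(z1, H0))
        total += 0.5 * (f0 + f1) * dz
    return total

# Ratio of opacity normalizations for the two discrepant H0 values:
ratio = path_weight(0.5, 68.0) / path_weight(0.5, 72.0)
print(round(ratio, 4))  # = 72/68 ~ 1.0588
```

A ∼6% shift in opacity is the signal such a test would have to resolve, which illustrates why degeneracies with the EBL density itself make the measurement challenging.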

Intergalactic magnetic fields permeating the universe on large scales could be either of primordial origin, from the earliest expansion phases, or ejected later from galaxies. Their presence and properties, unfortunately, are very difficult to investigate, e.g., via the Faraday rotation effect. Because electromagnetic cascades initiated by photon–photon absorption are influenced by intervening magnetic fields, the latter can be probed in gamma rays in various ways. One would be to look at time delays in the cascade emission, or at the presence of HE broad spectral features due to the cascade (e.g., [136]). The likely most promising technique will be to identify extended halos around distant point-like sources. Again, simulations for prospective CTA studies are extensively discussed in [130].

### **6. Conclusions**

Gamma-ray astronomy, particularly after the successful Fermi space mission and the implementation of the first IACT observatories, has become a clearly mature science. Its current main limitation rests on the VHE domain, above 100 GeV, because of the rarity of photon events: the number of such energetic photons decreases very sharply with energy (*S*<sub>*ν*</sub> ∝ *ν*<sup>−2,−3</sup> at least). Fortunately, major progress is expected from the forthcoming implementation of very large IACT (CTA) and water-Cherenkov (LHAASO) arrays, which will compensate for such extremely low arrival rates with the expansion of the photon collecting areas.

Major progress is expected in many fields from these developments. For astrophysics, fundamental topics such as the origin of the high-energy cosmic rays and the structure of astrophysical jets will largely benefit. Furthermore, the technique based on photon–photon opacity analysis also offers interesting tests and constraints in the field of observational cosmology, notably for the history of stellar formation and AGN accretion, thus uniquely bridging high-energy physics with low-energy astrophysics and cosmology.

As for fundamental physics, laboratory experiments and large particle accelerators have likely reached their current technological frontier, while the next steps forward will require a lot of effort, resources, and time. An excellent complement at a much lower price, however, is offered by gamma-ray astronomy at its VHE limits, with opportunities to test the validity of fundamental laws in regimes (e.g., close to the Planck energy) where they have never been tested, and to set the stage for higher-level theories beyond the Standard Model.

If we have to summarize our present understanding, it seems to us that current investigations of the highest-energy photons from cosmic sources have not found clear and significant evidence for deviations or a need for new physics, either in the fields of astrophysics and cosmology or in that of fundamental physics.

There is no doubt, however, that improved instrumentation, refined observational techniques, and new ideas will drive the next steps in our understanding of the universe and its fundamental laws.

**Funding:** This research received no external funding.

**Data Availability Statement:** Part of the dataset on the photon-photon opacity used in this paper can be found in: http://www.astro.unipd.it/background/, accessed on 7 May 2021.

**Acknowledgments:** I am grateful to Leinert et al. [57], De Angelis, Galanti, & Roncadelli [88], and Biteau & Williams [7], for kindly allowing reproduction of their published results. I benefited by extensive discussions with Alessandro De Angelis, Michele Doro, Mose' Mariotti, Giulia Rodighiero, and Elisa Prandini, among many others. Part of Section 5.3 comes from ongoing work with Francesco Cenedese. I am indebted to various anonymous referees for their careful reading of a previous version of the manuscript and their very useful comments and to the Journal editors for help in the manuscript editing. The University of Padua is also warmly thanked for continuous support to this research.

**Conflicts of Interest:** The author declares no conflict of interest.

### **References**


### *Review* **The Gamma-ray Window to Intergalactic Magnetism**

**Rafael Alves Batista 1,\* and Andrey Saveliev 2,3**


**Abstract:** One of the most promising ways to probe intergalactic magnetic fields (IGMFs) is through gamma rays produced in electromagnetic cascades initiated by high-energy gamma rays or cosmic rays in the intergalactic space. Because the charged component of the cascade is sensitive to magnetic fields, gamma-ray observations of distant objects such as blazars can be used to constrain IGMF properties. Ground-based and space-borne gamma-ray telescopes deliver spectral, temporal, and angular information of high-energy gamma-ray sources, which carries imprints of the intervening magnetic fields. This provides insights into the nature of the processes that led to the creation of the first magnetic fields and into the phenomena that impacted their evolution. Here we provide a detailed description of how gamma-ray observations can be used to probe cosmic magnetism. We review the current status of this topic and discuss the prospects for measuring IGMFs with the next generation of gamma-ray observatories.

**Keywords:** intergalactic magnetic fields; high-energy gamma rays; electromagnetic cascades

### **Contents**


**Citation:** Alves Batista, R.; Saveliev, A. The Gamma-ray Window to Intergalactic Magnetism. *Universe* **2021**, *7*, 223. https://doi.org/ 10.3390/universe7070223

Academic Editors: Ulisses Barres de Almeida and Michele Doro

Received: 16 May 2021 Accepted: 25 June 2021 Published: 2 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

### **1. Introduction**

The advent of imaging air Cherenkov telescopes (IACTs) enabled the study of very-high-energy (VHE; *E* ≳ 1 TeV) processes involving gamma rays with unprecedented precision. With small angular resolutions (*θ*<sub>psf</sub> ∼ 0.1°), IACTs such as the High-Energy Stereoscopic System (H.E.S.S.) [1], the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) [2,3], and the Very Energetic Radiation Imaging Telescope Array System (VERITAS) [4,5] provide a unique view of the gamma-ray universe above TeV energies. These observations are supplemented, at higher energies, by measurements with water-Cherenkov detectors such as the High Altitude Water Cherenkov Experiment (HAWC) [6], the Astrophysical Radiation with Ground-based Observatory at YangBaJing (ARGO-YBJ) [7], the Large High Altitude Air Shower Observatory (LHAASO) [8], and the Tibet Air Shower Experiment (Tibet-AS*γ*) [9]. The launch of the Fermi Large Area Telescope [10] (Fermi-LAT) in 2008 was undoubtedly one of the most important landmarks in gamma-ray astrophysics. The complementarity between Fermi-LAT and IACTs has been crucial to glimpse into extreme cosmic accelerators, and to shed light on large-scale properties of the Universe, including the topic of this review: intergalactic magnetic fields (IGMFs).

In the last few decades, active galactic nuclei (AGNs) were observed across the whole electromagnetic spectrum, from radio to gamma rays (for reviews see, e.g., Refs. [11,12]). Ever since the Whipple Telescope observed the first BL Lac-type AGN at very-high energies, Mrk 421, in 1992 [13], blazars—a sub-class of AGNs—have been extensively studied. Because their relativistic jets point approximately towards Earth, their emission can probe the Universe over vast distances. At gamma-ray energies, in particular, they can be used to probe the extragalactic background light (EBL) [14–20] and IGMFs [21–23], as well as fundamental physics [24–26]. It is thanks to combined observations of blazars by IACTs and Fermi-LAT that the first studies aiming to constrain IGMFs were possible.

The process whereby high-energy gamma rays emitted by blazars initiate electromagnetic cascades in the intergalactic space has been known for over half a century [27–31]. It follows immediately from the idea of cascades that magnetic fields can interfere with their development. Despite the numerous works on the topic (e.g., [32–37]), it was not until later that the potential of electromagnetic cascades as a method to probe IGMFs was fully realized by Plaga [38], though there have been considerations of the underlying concept even before that [39].

The seminal work of Neronov and Vovk [40] sparked an avalanche of subsequent investigations along the same lines in the following years (e.g., [41–47]), most of which derived lower bounds on the strength of IGMFs ranging from *B* ≳ 10<sup>−18</sup> G to *B* ≳ 10<sup>−16</sup> G, depending on the specific details of the analysis, in line with more recent works [47–49]. So far, only constraints on IGMFs have been derived, as opposed to actual measurements. Apart from very general considerations, the coherence length (*LB*) of IGMFs had not been constrained by any analysis until very recently, when somewhat weak bounds were obtained based on observations of the neutrino-emitting blazar TXS 0506+056: *LB* ∼ 10 kpc–300 Mpc [50].

Large-scale cosmic magnetic fields have been investigated using several techniques. For instance, X-ray and radio emission from galaxy clusters has been used to probe the magnetic field in these objects [51,52]. Clusters are connected through magnetized ridges observed in radio with instruments like the Low-Frequency Array (LOFAR) [53,54]. At even larger scales, in cosmic voids, measurements are more difficult because of the low density of these regions. This is where high-energy gamma rays from electromagnetic cascades excel: they provide tomographic information on the magnetic fields in these regions. In this case, the short-lived electron–positron pairs produced in electromagnetic cascades are sensitive to the *local* magnetic field. Given the distance scales involved in this type of study, it is probable that, on average, the pairs are formed in cosmic voids, which fill most of the volume of the Universe. Because magnetic fields in voids are virtually unaffected by structure formation, they provide a direct window into the early Universe and the magnetogenesis process (see, e.g., [55–59] for reviews). The absence of such fields would indicate that seed magnetic fields originated in astrophysical objects, and were subsequently amplified through dynamo processes until they reached present-day levels of ∼1 μG in galaxies [52,57].

Another way to constrain cosmic magnetic fields (or to explain certain observations) provides only upper bounds. If IGMFs were generated in the early Universe, in which case they are called primordial magnetic fields (PMFs), they have an impact on several aspects of cosmology. First of all, they represent an additional constituent of the total energy of the Universe and, as such, affect its evolution, resulting in manifold imprints on the CMB (see [60] and references therein). In fact, they may even be able to reduce the tension between the values of the Hubble constant obtained, on the one hand, from type Ia supernova observations and, on the other, from Planck measurements of the CMB [61]. Furthermore, there are claims [62] that, depending on their strength, PMFs created at the electroweak phase transition (EWPT) may prevent electroweak baryogenesis. Conversely, it has been shown recently that Inflation-generated *helical* magnetic fields could create the necessary baryon asymmetry in the first place [63]. Additionally, strong magnetic fields affect the neutron–proton conversion rate, and therefore the rates of the weak reactions responsible for the chemical equilibrium of neutrons and protons before Big Bang nucleosynthesis (BBN), thus modifying it [55].

This article reviews some key results on cosmic magnetism obtained through gamma-ray measurements in the last three decades. First, we present a brief overview of intergalactic magnetic fields, their origin, evolution, and properties, in Section 2. Then, in Section 3, we introduce some gamma-ray sources that have been used for IGMF studies and provide more details on how high-energy gamma rays propagate in intergalactic space and how they can be used to probe IGMFs, followed by some experimental constraints, in Section 4. Finally, in Section 5, we reflect upon the status and the main challenges of this particular field, and discuss the prospects for finally measuring IGMFs with gamma-ray telescopes.

### **2. Intergalactic Magnetic Fields**

Magnetic fields are present on all scales, ranging from small objects like planets to clusters of galaxies and beyond. Galaxies have fields of *B* ∼ 1 μG [52,57,64,65], which drive the magnetization of the circumgalactic medium via winds [66]. Active galaxies can eject jets of magnetized material into galaxy clusters [67] and even into cosmic voids [68]. Clusters of galaxies are connected to each other through filaments, whose fields are *B* ∼ 0.1–10 nG [52,57,65]. They compose the cosmic web, whose magnetic properties are poorly known. This is, to a large extent, due to the scarcity of observational data, owing to the intrinsic difficulties in measuring magnetic fields at scales larger than clusters of galaxies. For this reason, numerical simulations play an important role in providing the full picture of how magnetic fields are distributed in the cosmic web. For further details, the reader is referred to, e.g., Ref. [65].

A natural question that arises is how magnetic fields in galaxies reached the μG level we observe today. One possible explanation is that astrophysical dynamos can amplify seed magnetic fields by many orders of magnitude. In this context, these seed fields are required to have strengths of at least 10<sup>−22</sup> to 10<sup>−15</sup> G [69,70], the actual value depending on the particular model. However, if the seeds are strong enough (*B* ≳ 10<sup>−11</sup> G), one does not need to invoke dynamos. In this case, adiabatic compression [55] (potentially together with some stretching and shearing of flows [71]) is sufficient.
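
The amplification required of a dynamo can be put in perspective with a one-line estimate: raising a seed field to the observed ∼1 μG under exponential growth, *B*(*t*) = *B*<sub>seed</sub> e<sup>*t*/*τ*</sup>, takes ln(*B*<sub>target</sub>/*B*<sub>seed</sub>) e-folds. A minimal sketch (the ∼0.3 Gyr e-folding time is a hypothetical, galaxy-dependent parameter, not a value from the text):

```python
import math

# Kinematic dynamo growth: B(t) = B_seed * exp(t / tau), so the number of
# e-folds needed is ln(B_target / B_seed).
def efolds_needed(B_seed_G, B_target_G):
    """Number of dynamo e-folds to amplify B_seed up to B_target."""
    return math.log(B_target_G / B_seed_G)

# Seed range quoted in the text vs. the observed galactic field ~1 uG:
for seed in (1e-22, 1e-15):
    n = efolds_needed(seed, 1e-6)
    print(f"seed {seed:.0e} G -> {n:.1f} e-folds")

# With a hypothetical e-folding time of ~0.3 Gyr, the weakest allowed seed
# (~37 e-folds) needs ~11 Gyr: of the order of a galaxy's lifetime, which
# is why the minimal seed strength is model-dependent.
```

This is why even extremely weak seeds remain viable: the required growth is only logarithmic in the amplification factor.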

Given the distance scales involved in typical IGMF studies using particle probes, only the large-scale distribution of magnetic fields is relevant. In this case, clusters of galaxies fill only ∼10<sup>−3</sup> of the Universe's volume (the so-called volume filling factor), such that their magnetic field is virtually negligible when studying how particles propagate over large distances and the effects of magnetic fields upon them. Filaments are believed to have filling factors of ∼10<sup>−3</sup>–10<sup>−1</sup>, whereas cosmic voids provide the most important contribution, with a filling factor ≳10<sup>−1</sup>. Therefore, magnetic fields in the voids are, to first order, the dominant component that determines how particles propagate over cosmological baselines.

The origin of the seed magnetic fields is one of the main open questions in astrophysics today. In Section 2.2, we briefly mention some of the main mechanisms for magnetogenesis, focusing on providing estimates of the relevant observables (field strength, coherence length, and helicity) that can be probed with high-energy gamma-ray observations. Before that, in Section 2.1, we provide some important definitions and the mathematical framework required for understanding cosmic magnetism. Finally, in Section 2.3, we present techniques other than gamma-ray astrophysics used to constrain IGMFs and the results obtained from them.

### *2.1. Statistical Observables*

Stochastic magnetic fields can be characterized by a number of observables corresponding to different statistical averages. The first one is the average magnetic-field strength (*B*). When considering this quantity, it is a common misconception to speak of the *mean* of *B*, because at cosmological scales one expects ⟨*Bi*⟩ = 0 for each individual component *i*, in particular for a Gaussian distribution, which is the typical first-order assumption. The relevant quantity, in this context, is the *root mean square* of the magnetic field, defined by

$$B^2 \equiv B_{\rm rms}^2 = \frac{1}{V} \int_V \mathbf{B}^2(\mathbf{r}) \, \mathrm{d}^3 r \,, \tag{1}$$

where *V* is the considered volume. Another related quantity, the magnetic helicity *HB*, is given by

$$H_B = \int_V \mathbf{A}(\mathbf{r}) \cdot \mathbf{B}(\mathbf{r}) \, \mathrm{d}^3 r \,, \tag{2}$$

where **A**(**r**) is the vector potential, i.e., **B** = ∇ × **A**. Originally, *HB* had been defined for a vanishing normal magnetic field component everywhere at the boundary of *V*, even though it is possible to drop this condition in a more general case [72]. As the name suggests, *HB* is directly related to the topology of the magnetic field, more precisely to whether the magnetic field on average is left- or right-handed. It should be noted that magnetic helicity is a well-defined quantity as it is gauge-invariant if the aforementioned requirement of a vanishing normal magnetic field at the border is fulfilled. Furthermore, it is conserved in ideal MHD and plays an important role for the time evolution of the (energy content of the) magnetic field in general (cf. Section 2.2.1).
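The definitions above can be checked on a simple closed-form example. A minimal sketch (the Beltrami field below is an illustrative choice, not taken from the text): for **A** = (sin *kz*, cos *kz*, 0) one has **B** = ∇ × **A** = *k***A**, so **A** · **B** = *k* everywhere and the field is maximally helical.

```python
import math

# Beltrami test field: A = (sin kz, cos kz, 0), for which B = curl A = k A.
# Integrating over one period in z (unit volume) checks Eqs. (1) and (2):
#   <B_x>   = 0   (each component averages out, as argued in the text)
#   B_rms   = k   (Eq. (1), since B^2 = k^2 everywhere)
#   H_B / V = k   (Eq. (2), since A . B = k everywhere)
k = 2 * math.pi
n = 100_000
dz = 1.0 / n

mean_Bx = rms2 = helicity = 0.0
for i in range(n):
    z = (i + 0.5) * dz
    Ax, Ay = math.sin(k * z), math.cos(k * z)
    Bx, By = k * Ax, k * Ay          # B = k A for this field
    mean_Bx += Bx * dz
    rms2 += (Bx * Bx + By * By) * dz
    helicity += (Ax * Bx + Ay * By) * dz

print(f"<B_x>   = {mean_Bx:.6f}")          # ~0
print(f"B_rms   = {math.sqrt(rms2):.4f}")  # ~k = 6.2832
print(f"H_B / V = {helicity:.4f}")         # ~k: maximally helical
```

Note that ⟨*Bx*⟩ vanishes while *B*rms does not, illustrating why the root mean square, not the mean, is the meaningful field amplitude.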

To define the other quantities, we need to introduce the Fourier transform, which for a given (magnetic) field **B**(**r**) is given by

$$\tilde{\mathbf{B}}(\mathbf{k}) = \frac{1}{(2\pi)^{3/2}} \int_V \mathbf{B}(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}} \,\mathrm{d}^3 r \,, \tag{3}$$

and represents the mode for the wave vector **k**. We can then determine the statistical connection between any two modes, represented by wave vectors **k** and **k**′, by calculating the corresponding ensemble average (denoted as ⟨. . .⟩), given by

$$\left\langle \tilde{B}_a(\mathbf{k})\, \tilde{B}_b(\mathbf{k}') \right\rangle = (2\pi)^3 \delta^{(3)}(\mathbf{k} - \mathbf{k}')\, \mathcal{P}_{ab}(\mathbf{k}) \,. \tag{4}$$

Assuming that the magnetic field is homogeneous and isotropic, the general form of P*ab* is

$$\mathcal{P}_{ab}(\mathbf{k}) \propto \left(\delta_{ab} - \frac{k_a k_b}{k^2}\right) M_k + \frac{i}{c_H}\, \varepsilon_{abc} k_c\, \mathcal{H}_k \,, \tag{5}$$

where *δab* and *εabc* are the Kronecker delta and the Levi–Civita symbol, respectively, *Mk* is the spectral magnetic energy, H*k* is the spectral magnetic helicity density, and *cH* is a numerical constant which depends on the convention used. It is important to mention here that there is a fundamental relation between *Mk* and H*k*: one can show (see, for example, [73]) that for a given *k* the value of H*k* is limited by the corresponding value of *Mk*,

$$|\mathcal{H}_k| \le \frac{|c_H|}{k} M_k \,, \tag{6}$$

such that the right hand side of Equation (6) is also called maximal (spectral) helicity, and the actual spectral helicity density may be expressed as a fraction *fH* of it, with −1 ≤ *fH* ≤ 1.

In general, one assumes

$$M_k \propto k^{\alpha_B - 1} \,, \tag{7}$$

which means that the spectrum is given by a power law whose spectral index *αB* defines its type. It is assumed that at small scales (i.e., for large values of *k*), IGMFs have a Kolmogorov (*αB* = −2/3, see [74,75]) or an Iroshnikov/Kraichnan (*αB* = −1/2, see [76,77]) spectrum at the present time. Both values of the spectral index were derived from dimensional considerations, the latter under the assumption of a strong mean magnetic field. Still, because the numerical values of these two spectral indices are very close to each other, it has so far not been possible to distinguish between them experimentally [78,79]. For large scales (i.e., small *k*) one expects a Batchelor spectrum (*αB* = 5), as predicted using general causality arguments in [80] and confirmed by semi-analytical simulations in [81]. Other works also considered a white-noise spectrum (*αB* = 3) [82,83]. On the other hand, IGMFs produced during Inflation (cf. Section 2.2.1) are expected to be scale-invariant, which corresponds to *αB* = 0 (see, for example, [84]). Note that there are different ways to define the spectral index, such that the numerical values in other publications might differ from those used here while describing the same kind of spectrum.

The last essential characteristic statistical observable considered here is the correlation length (*LB*) which is given by

$$L_B = \frac{2\pi \int k^{-1} M_k \,\mathrm{d}k}{\int M_k \,\mathrm{d}k} \,. \tag{8}$$

In a simplified way, *LB* can be understood as the average size of the eddies of the magnetic field. Again, it should be noted that several different ways to define the correlation length are found in the literature, such that small differences (for example, by a factor of a few) are possible. This is discussed, e.g., in [85], where the power-law case relevant here is also addressed in more detail.
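
Equation (8) can be evaluated directly for the power-law spectra of Equation (7). A minimal sketch (the cutoff wavenumbers are illustrative assumptions): for a blue Batchelor spectrum the integrals are dominated by the largest *k*, pinning *LB* near the smallest scales, while a scale-invariant spectrum is dominated by the largest scales.

```python
import math

# Correlation length of Eq. (8) for a pure power-law spectrum
# M_k ~ k^(alpha_B - 1) between k_min and k_max (illustrative cutoffs).
def correlation_length(alpha_B, k_min, k_max, steps=100_000):
    """Midpoint-rule evaluation of Eq. (8) in units of 1/k."""
    dk = (k_max - k_min) / steps
    num = den = 0.0
    for i in range(steps):
        k = k_min + (i + 0.5) * dk
        M = k ** (alpha_B - 1)
        num += (M / k) * dk
        den += M * dk
    return 2 * math.pi * num / den

# Blue (Batchelor, alpha_B = 5) vs scale-invariant (alpha_B = 0) spectra,
# with k in inverse megaparsecs so that L_B comes out in Mpc:
for a in (5, 0):
    L = correlation_length(a, 0.01, 10.0)
    print(f"alpha_B = {a}: L_B ~ {L:.3f} Mpc")
```

For *αB* = 5 the result reduces to 2*π* · (5/4)/*k*max in the limit *k*max ≫ *k*min, whereas for *αB* = 0 it approaches 2*π*/(*k*min ln(*k*max/*k*min)), orders of magnitude larger: the same rms field strength can thus correspond to very different coherence lengths.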

### *2.2. Origin*

While the origin of IGMFs is still unknown, two classes of magnetogenesis scenarios are present in the literature [55,58,64]. In cosmological scenarios, strong seed magnetic fields were created in the early Universe and later decayed to their present state. In astrophysical scenarios, on the other hand, weak seed magnetic fields emerged due to local effects (for example, a battery process) in astrophysical objects, and were subsequently amplified by dynamo mechanisms. In the following, we present possible mechanisms for both of these classes of scenarios. Note that this separation is made here purely for reasons of clarity and comprehensibility. In reality, the situation may be more complex, as the particular mechanism may be the result of (yet) unknown physics, or the actual origin of IGMFs may turn out to be a combination of multiple processes, astrophysical and/or cosmological.

### 2.2.1. Cosmological Scenarios

The seed magnetic fields of cosmological scenarios, i.e., the PMFs, are thought to be created by some major cosmological effect, such that they permeate the whole Universe. Without any claim to completeness (more details may be found in [58,59,86]), we list some of these possibilities below.

**Inflation.** Magnetic fields may have been produced during inflation (for a review on Inflation in general, see, e.g., [87]). However, if Maxwellian conformal invariance is preserved, these fields are predicted to be exceedingly weak (*B* ≲ 10<sup>−50</sup> G at the epoch of galaxy formation [86]), being negligible for all practical purposes [88]. Models for inflationary magnetogenesis that are of astrophysical relevance must generate much stronger fields. Because conformally invariant fields are not produced in an expanding conformally-flat spacetime, one has to introduce a coupling of the electromagnetic field with the inflaton, and/or an additional coupling which breaks the conformal or gauge invariance, mainly of the form *R<sub>μναβ</sub>F<sup>μν</sup>F<sup>αβ</sup>* or *R<sub>μν</sub>A<sup>μ</sup>A<sup>ν</sup>*, respectively [58] (where *F<sub>αβ</sub>* is the electromagnetic field tensor, *R<sub>μν</sub>* is the Ricci tensor and *R<sub>μναβ</sub>* is the Riemann curvature tensor), even though other terms are also possible [88,89]. After the seminal publications in the field [88,90], follow-up works (see, for example, [91–99]) further explored the idea or investigated more exotic scenarios. Due to the large parameter space, the resulting magnetic-field strength estimates, even in the simplest models, range from 10<sup>−65</sup> to 10<sup>−9</sup> G [92].

**Post-Inflationary.** It is also possible that magnetic fields emerged between Inflation and the EWPT, for example during or before reheating. The general idea is that the coupling between the electromagnetic and a scalar field breaks the Maxwellian conformal invariance. In particular, the scalar field in question may be an oscillating inflaton, which decays into radiation and reheats the Universe [100], resulting in IGMFs with *B* ≲ 10<sup>−15</sup> G on ∼Mpc scales. In another scenario [101], Majorana neutrino decays result in lepton asymmetries, and ultimately in baryon asymmetries via anomalous lepton/baryon-number-violating processes. This then may produce relic hypercharge magnetic fields which are converted to electromagnetic fields during the EWPT, giving ∼10<sup>−18</sup> G field strength with *LB* ∼ 10 pc today. More recently, the idea of a Weibel instability emerging and subsequently amplifying a possible inflationary magnetic field during this era has been considered [102].

**Electroweak Phase Transition.** Within the SM, the EWPT is assumed to be rather smooth [103], such that in order to realize a first-order transition, mechanisms beyond the Standard Model (BSM) have to be considered [104]. The basic idea of magnetogenesis during the EWPT was first laid out by [105]. Due to the restrictions on possible values of the vacuum expectation value of the Higgs field, Φ, which breaks the electroweak symmetry, and the fact that it varies with the position in space, we have *∂<sub>μ</sub>*Φ ≠ 0, such that the electromagnetic field strength does not necessarily compensate effects arising from the Higgs field. Hence, we expect a non-vanishing magnetic field after the phase transition. Note, however, that the magnetic field depends on gradients of Φ. Other possible scenarios may be found in [106], with the general conclusion that magnetic fields of up to ∼10<sup>−11</sup> G on scales of ∼10 kpc are possible [59].

**Quantum Chromodynamics Phase Transition.** In a similar fashion to the EWPT, it should be mentioned here that within the Standard Model of particle physics (SM) the quantum chromodynamics phase transition (QCDPT) is considered to be of the second-order or crossover type [107–110], such that, for it to be of first order, a SM extension has to be invoked [111]. Several works [112–114] discuss magnetogenesis due to the growth of bubbles of the hadronic phase and, subsequently, charge separation, which ultimately leads to the creation of electric currents and consequently of magnetic fields with an estimated field strength of the order of ∼10<sup>−16</sup> G on ∼kpc scales [113].

It is usually assumed that immediately after magnetogenesis most of the magnetic-field energy is concentrated on a characteristic length called the integral scale. The basic idea, as described in [115], is that throughout the evolution of the Universe up to the present day, the magnetic energy decays starting at small scales, such that the integral scale increases until it reaches the coherence length of IGMFs today. Throughout the years, there has been a large number of simulations, both numerical and (semi-)analytic, which modelled this time evolution for different magnetogenesis scenarios [81,83,115–120].

As a final remark it should be pointed out that, due to the fact that magnetic helicity is (nearly) conserved, it plays an important role in the time-evolution of magnetic fields, in particular by causing the so-called inverse cascade of energy, i.e., the transfer of magnetic energy from small to large scale fluctuations [73,115,121–123]. These inverse cascades, however, do not seem to be exclusive to helical fields, as shown in recent simulations [124].

It is quite possible that PMFs were actually helical. One of the first works along these lines was [125], suggesting the creation of a left-handed PMF due to a change of the Chern–Simons number. Other possible mechanisms include extra dimensions [126], the coupling to the cosmic axion field [127] or an axion-like coupling [128], the Riemann tensor [129], a spectator field [84], or an inherently helical coupling [130] during Inflation in the first place. Recently, the possibility of helicity generation via a chiral cosmological medium around the EWPT has also been considered [131]; however, the authors found the effect to be suppressed due to the value of the baryon-to-entropy ratio.

### 2.2.2. Astrophysical Scenarios

A number of possible mechanisms also exist for the astrophysical scenario. They all have in common that magnetic fields are created locally by some astrophysical process. Some of them are concisely described below.

**Biermann Battery.** It is manifestly difficult to create magnetic fields from scratch due to the fact that in classical MHD, if **B**(**r**) = **0** at some instant in time, then this is true for all later times. A way to evade this limitation is through the Biermann battery mechanism [132,133], for which the basic idea is that the misalignment of temperature and density gradients induces an electric field which ultimately results in the generation of a magnetic field. Prior to Reionization, this process produces exceedingly weak fields in the intergalactic space (*B* ≲ 10<sup>−24</sup> G) [134]. In protogalaxies, these fields can reach *B* ∼ 10<sup>−22</sup>–10<sup>−17</sup> G [135–137]. For other astrophysical and cosmological settings, see also [56,57,138–140].

**Galactic Outflows.** One obvious candidate to produce IGMFs are the galaxies themselves, as they eject matter and energy into the intergalactic space. Most authors assume that this can be driven by stars, in particular magnetized winds, or cosmic rays [66,141–144]. However, other possibilities exist as well, including the magnetization of voids by giant radio lobes or bubbles from AGNs, even though energetics requirements generally do not allow for such a substantial effect over the age of the Universe [68,145–147].

**Cosmic-Ray Return Currents.** In addition to the outflow scenario discussed above, cosmic rays escaping from a galaxy create a charge imbalance resulting in electric fields and, subsequently, return currents. Ultimately, these return currents can produce magnetic fields on scales which are sufficiently large to provide the seed for IGMFs [148,149].

**Photoionization during the Reionization Era.** During Reionization, high-energy photons are able to escape from objects like population III stars, protogalaxies, and quasars into the (then) neutral intergalactic medium (IGM). This causes photoionization, which ultimately leads to the generation of radial currents (and electric fields), inducing magnetic fields with strengths *B* ∼ 10<sup>−25</sup>–10<sup>−20</sup> G on scales between ∼1 kpc and 10 Mpc [150–152]. Remarkably, this process can generate magnetic fields on global scales despite being astrophysically initiated. This seeding scheme agrees with results of large-scale cosmological MHD simulations by Garaldi et al. [153], although such seeds could, in principle, be subdominant with respect to seeds produced through the Biermann battery.

**Primordial Vorticity.** In a seminal paper by Harrison [154], it was suggested that due to relativistic effects electromagnetic fields are coupled to vorticity <sup>2</sup>, such that rotating protogalaxies could create primordial vorticity, generating magnetic fields in the radiation-dominated era. However, vorticity is predicted to decay rather fast in the early Universe, such that more advanced theories based on the same idea, but with vorticity appearing at later stages or using higher-order effects, had to be introduced [155–158].

Several of the mechanisms listed above require a dynamo mechanism in order to amplify the magnetic field strength to the observed present-day values. Especially the small-scale dynamo has attracted major interest in this context (see [65,73,159,160] for some recent results), even though simulating it numerically poses a challenge due to the size of the scales which have to be resolved.

### *2.3. General Constraints*

In this review we focus on constraints on IGMFs from gamma-ray observations. However, since IGMFs can interact through various electromagnetic phenomena throughout the Universe, there are other ways to derive bounds on them. In this section we present the general ideas to do so, based on Ref. [161] and including some more recent developments.

First, there is a generic lower and upper limit on the coherence length (*LB*) [161]. The latter is given by the size of the observed Universe, i.e., the Hubble radius. On the other hand, the IGMF decays due to magnetic diffusion, such that the lower limit on *LB* is given by the length scale equivalent to the corresponding decay time, i.e., *LB* ≳ 2 × 10<sup>11</sup> m [55].

As for the magnetic-field strength (*B*), measurements of the Zeeman splitting of H I lines can be used to set upper bounds on this quantity. This can be done either for our own galaxy or for the radiation from distant quasars [162–164], both consistently giving a result of the order of ∼μG. In the latter case, any stronger IGMFs along the line of sight to the object would have a measurable impact on the observations, thus giving a robust upper limit for the IGMF strength.

Another constraint on IGMFs is derived from Faraday rotation measurements of polarized radio emission from quasars and other extragalactic sources. Faraday rotation describes the (wavelength-dependent) rotation of the polarization plane of polarized electromagnetic radiation when it traverses a magnetized medium. Therefore, the value of the relevant observable, the so-called rotation measure (RM), may be subdivided into contributions from the host galaxy, the IGM, and the Milky Way. With a rigorous statistical analysis of RM data, one can then identify the impact of the IGMF, and hence derive limits on the IGMF strength which in general depend on *LB*. There are many studies on the topic [54,165–171], all of which give upper limits ranging from nG to a few μG. This is also confirmed by other methods, like the interpretation of radio observations as the result of shock acceleration in galaxy clusters [172,173]. In this context, fast radio bursts (FRBs) can play an important role [174], delivering both rotation and dispersion measures. As a consequence, magnetic fields along the line of sight can be better inferred, because the use of these two observables reduces the reliance on models of the electron density distribution [175]. RMs can be related to the magnetogenesis model, as shown in Ref. [176].
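To illustrate the rotation-measure technique, the sketch below implements the standard line-of-sight integral RM ≈ 0.812 ∫ *n<sub>e</sub>* *B*<sub>∥</sub> d*l* (in rad m<sup>−2</sup>, with *n<sub>e</sub>* in cm<sup>−3</sup>, *B*<sub>∥</sub> in μG, and d*l* in pc). This is a textbook convention rather than a formula from the works cited here, and the toy numbers are purely illustrative.

```python
import numpy as np

def rotation_measure(n_e, B_par, path_pc, n=10_000):
    """Standard rotation-measure integral RM = 0.812 * int n_e B_par dl,
    with n_e in cm^-3, B_par in microgauss, dl in pc; result in rad m^-2.
    n_e and B_par are callables of the line-of-sight position l (in pc)."""
    l = np.linspace(0.0, path_pc, n)
    f = 0.812 * n_e(l) * B_par(l)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(l)))

# Toy example (illustrative values only): a uniform 1 Mpc path through the
# IGM with n_e = 1e-7 cm^-3 and a 1 nG (= 1e-3 uG) line-of-sight field.
rm = rotation_measure(lambda l: 1e-7 + 0.0 * l,
                      lambda l: 1e-3 + 0.0 * l,
                      path_pc=1.0e6)
# At wavelength lambda, the polarization angle then rotates by RM * lambda^2.
```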

In addition, an important set of limits can be derived from cosmology by considering the cosmological scenarios for magnetogenesis (cf. Section 2.2.1). An indirect, theoretical approach is to consider a given mechanism of magnetic-field generation, derive the corresponding limits on the initial magnetic-field strength and correlation length, and then calculate their time evolution via freely-decaying MHD up to the present day. A detailed description is given in [115], while in Figure 1 we present the region which contains most of these constraints, following [58]. In general, one can state that these limits bound the field strength from above and the coherence length from below, the latter due to the fact that in cosmological scenarios IGMFs are generated at small scales (see above).

From an observational point of view, most of the limits on IGMFs from cosmology are derived using the cosmic microwave background (CMB), as there is a large range of effects through which magnetic fields can impact the background radiation. The most basic idea, developed already in [177], is to assume a homogeneous field throughout the Universe and then to derive the temperature anisotropies expected from that. Comparing this dataset to data from COBE [178] or, more recently, from Planck [179], gives an upper limit of around 4 nG (marked in Figure 1 as "CMB anisotropies"). Since then the upper limit has been dramatically improved by using CMB observations in combination with such effects as spectral distortions (in Figure 1 we present a limit stemming from this phenomenon based on [180], denoted "CMB spectrum"), temperature anisotropies, polarization, non-Gaussianity and Reionization (for a concise review see [60]). The best upper limit so far, *B* ∼ 10–50 pG [60], was derived by considering the change of the Recombination process itself via density fluctuations due to the presence of PMFs. In addition, CMB observations are also interesting because they may also be used to derive constraints on magnetic helicity [181–183].

**Figure 1.** Schematic overview of the main constraints on IGMFs, as discussed in Section 2.3. The lower and upper bounds on *LB* come from the decay of magnetic fields due to magnetic diffusion and the Hubble radius, respectively [161]. The upper bounds are due to Zeeman splitting and Faraday rotation observations of extragalactic objects [161]. The 'early magnetic dissipation' bound indicates the region of the parameter space excluded by freely decaying MHD in the early Universe [58,115]. Other limits from cosmology come from CMB observations (spectrum [58,180] and anisotropies [178]); the currently strongest limit [60], labelled 'JS19', is shown for the case of a scale-invariant spectrum (*α<sup>B</sup>* = 0) which leads to the most conservative bounds.

Finally, ultra-high-energy cosmic rays (UHECRs), i.e., nuclei with energies above 10<sup>18</sup> eV, may be used to constrain IGMFs [161,184–188]. The general principle used here is that, since UHECRs are charged, they are deflected. Hence, once their sources are identified, the corresponding deflection angle can be measured, providing a direct measure of the magnetic-field strength orthogonal to the line of sight. Ref. [189] used the observed excess of UHECRs with energies ∼10<sup>20</sup> eV in the direction of Centaurus A to constrain the local extragalactic magnetic field, obtaining *B* ≲ 10<sup>−8</sup> G. This local constraint evidently serves as an upper limit for IGMFs. Ref. [190] used the anisotropy reported by the Pierre Auger Observatory to associate UHECR detections with extragalactic objects and to derive upper limits of *B* ∼ 10<sup>−9</sup> G for *LB* < 100 Mpc and ∼10<sup>−10</sup> G for *LB* > 100 Mpc. More recently, Ref. [191] found that for *B* > 6 × 10<sup>−10</sup> G the Auger anisotropy measurements are in good

agreement with the local density of star-forming galaxies. On the other hand, if the local density is treated as a model parameter, the authors found a conservative upper limit of *B L<sub>B</sub>*<sup>1/2</sup> < 24 nG Mpc<sup>1/2</sup>. In principle, UHECRs may also be used to constrain the helicity of IGMFs, as argued in [192] and demonstrated numerically in [193].

Note, however, that with the direct UHECR observations available today, it is rather difficult to derive IGMF constraints, as their sources would have to be known (see also Sections 3.1 and 4.5 for an indirect gamma-ray–based approach on how to use UHECRs for deriving IGMF constraints). Moreover, the statistics of events at the highest energies (*E* ≳ 4 × 10<sup>19</sup> eV) are fairly limited, while the composition (and thus the charge) of UHECRs is only known statistically, and not on an event-by-event basis [194], posing severe challenges for any attempt to constrain IGMFs with UHECRs. Finally, the distribution of magnetic fields in the cosmic web is more complex than that in cosmic voids, and much more uncertain [65]. Numerical studies of UHECR propagation in magnetic fields lead to very discrepant results, such that the prospects for UHECR astronomy (and thus IGMF constraints using UHECRs) are far from clear (cf. [195–198]).
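For orientation, the magnitude of UHECR deflections can be gauged with a commonly quoted small-angle estimate for propagation through a turbulent field of coherence length *LB*. The normalization (here 0.8°) varies by factors of a few between references, and this sketch is not the analysis method of the works cited above.

```python
def uhecr_deflection_deg(E_EeV, Z=1, B_nG=1.0, D_Mpc=10.0, L_B_Mpc=1.0):
    """Rough rms deflection (in degrees) of a UHECR of charge Z and energy E
    (in EeV) after traversing a distance D through a turbulent field of
    strength B and coherence length L_B:
        theta ~ 0.8 deg * Z * (E / 100 EeV)^-1 * (D / 10 Mpc)^0.5
                        * (L_B / 1 Mpc)^0.5 * (B / 1 nG).
    Order-of-magnitude only; the prefactor is convention-dependent."""
    return (0.8 * Z * (100.0 / E_EeV)
            * (D_Mpc / 10.0) ** 0.5 * (L_B_Mpc / 1.0) ** 0.5 * B_nG)

# A 100 EeV proton crossing 10 Mpc of a 1 nG field is deflected by roughly
# a degree, which is why nG-level fields still permit UHECR astronomy.
theta = uhecr_deflection_deg(E_EeV=100.0)
```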

### **3. Electromagnetic Cascades**

In this section we lay out the theoretical foundations for understanding how electromagnetic cascades develop in intergalactic space. We start off, in Section 3.1, by describing some classes of astrophysical objects that can emit particles that initiate the electromagnetic cascades. We describe two scenarios, depending on the type of particle that initiates the cascade which ultimately leads to the observed gamma-ray signal. In the first, the cascades are triggered by high-energy gamma rays (or electrons), whereas in the second, they are initiated by ultra-high-energy cosmic rays. After describing how electromagnetic cascades originate, we proceed to Section 3.2, where we give a detailed account of how they develop, how they interact with the photon fields that pervade the Universe, and how IGMFs can affect them. In Section 3.3 we present approximate analytical descriptions of the cascade process, which can also be treated in more detail with numerical codes, as described in Section 3.6 and illustrated in Section 3.7. In Section 3.4 we weigh in on the debate surrounding the role played by plasma instabilities in the development of cascades. Other potentially relevant propagation effects are concisely mentioned in Section 3.5.

### *3.1. Origin*

The most common sources of high-energy gamma rays used in IGMF studies are blazars. These objects are a sub-class of active galactic nuclei (AGNs) whose relativistic jets point approximately towards Earth [199]. Their spectral energy distribution is characterized by a low-energy hump corresponding to synchrotron emission by relativistic electrons [11,12]. There is also a second notable hump which, in the case of high- and extreme-synchrotron-peaked objects of interest for IGMF studies, peaks at ∼TeV energies [12]. These objects are excellent cosmological probes, as the very-high-energy emission assures the production of a substantial electronic component in the cascade, which can be used to probe IGMFs and the EBL [200].

An object widely considered in gamma-ray astronomy to constrain IGMFs is the extreme blazar 1ES 0229+200. It was used, for instance, in Refs. [40–42,47,48,201]. In fact, there is a population of extreme blazars like 1ES 0229+200 with hard spectra that have been commonly used for IGMF studies, given the weakness of the ∼GeV contribution with respect to the TeV band [200]. Besides 1ES 0229+200, there are other objects that are also employed for this purpose, such as: 1ES 0347-121 [202], 1ES 0414+009 [203], 1ES 1101-232 [204], 1ES 1218+304 [205], 1ES 1312-423 [206], 1RXS J101015.9-311909 [207], H 1426+428 [208], H 2356-309 [209], Mrk 421 [13], Mrk 501 [210], PG 1553+113 [211], PKS 0548-322 [212], PKS 2155-304 [213], RGB J0152+017 [214], RGB J0710+591 [215], and VER J0521+211 [216]. Note that, while typical IGMF studies are done for blazars that can be observed both at ∼GeV and TeV energies, this is not a strict requirement, and magnetic-field properties can be inferred solely from the cascade signal.

In general, AGNs are active over time scales of *T* ∼ 10<sup>6</sup>–10<sup>8</sup> years [217], which makes it hard to use temporal information for constraining IGMFs, since they are, for all practical purposes, quasi-steady sources. However, some objects, such as PKS 2155-304 [213], Mrk 421 [13,218] and Mrk 501 [210,219,220], display short-time variability [221]. This information can, in principle, be used together with light curves in other wavelengths in the context of multimessenger campaigns to improve the constraints on IGMFs via time delays (see Section 4). Interestingly, for blazars that are slightly misaligned with respect to the line of sight, the GeV gamma rays stemming from the TeV emission could still be observed today over angular scales of ∼1° even if the objects are no longer high-energy emitters [222].

Another class of objects that can potentially be used to probe IGMFs are gamma-ray bursts (GRBs). They emit highly collimated relativistic jets of high-energy radiation within a short time. GRBs are the most luminous events known, reaching isotropic-equivalent luminosities of ∼10<sup>54</sup> erg s<sup>−1</sup> (see, e.g., [223–225] for reviews). Only recently were GRBs observed at very-high energies, with the detection of a bright flash from GRB 190114C [226], which was used for IGMF studies [227,228].

GRBs are interesting cosmological probes because they can be used exactly in the same manner as blazars, while in general providing more accurate temporal information. In this case, the high- and very-high-energy components depend strongly on the properties of intervening IGMFs [229–231]. Moreover, if their HE light curve were known, in principle it would be possible to reconstruct a possible TeV emission even in the absence of VHE measurements, based only on the cascade signal at ∼GeV energies, up to high redshifts [232–234]. Note that this argument only holds if the TeV light curve is known from theoretical models, which is not the case [223,225], or if there are well-defined relations between the GeV and TeV light curves.

The shape of the intrinsic spectrum of the sources of interest for this work, whether a blazar or a GRB, is not precisely known. In general, it is assumed to be a power law of the form

$$\frac{\text{dN}}{\text{dE}} \propto E^{-a} f\_{\text{cut}}(E) \, , \tag{9}$$

where *f*cut(*E*) denotes a function that suppresses the spectrum above a given energy *E*max, which depends on the mechanism responsible for particle acceleration (and consequently for gamma-ray emission). This function is typically an exponential, log-parabola, or similar [235–239]. Interestingly, the value of *E*max that could be inferred with observations depends on the opacity of the Universe to gamma-ray propagation, i.e., the distribution of photon fields such as the EBL, as well as on the properties of the intervening IGMFs [240].
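A minimal numerical rendering of Equation (9), assuming an exponential cutoff for *f*cut (one of the options mentioned above); the parameter values below are placeholders rather than fits to any source:

```python
import numpy as np

def intrinsic_spectrum(E_TeV, a=1.8, E_max_TeV=10.0):
    """Unnormalized dN/dE of Eq. (9) with an exponential cutoff,
    dN/dE ~ E^-a * exp(-E / E_max); a and E_max are free parameters
    to be constrained by observations."""
    E = np.asarray(E_TeV, dtype=float)
    return E ** (-a) * np.exp(-E / E_max_TeV)

# The cutoff leaves the spectrum nearly a pure power law well below E_max
# and suppresses it exponentially above:
ratio_low = intrinsic_spectrum(0.1) / (0.1 ** -1.8)      # ~1, little suppression
ratio_high = intrinsic_spectrum(100.0) / (100.0 ** -1.8) # ~e^-10, strong suppression
```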

Cosmic-ray-induced electromagnetic cascades in the intergalactic medium may lead to observational signatures that resemble those initiated by gamma rays. These cascades are evidently affected by intervening IGMFs, as discussed in, e.g., Refs. [241–246]. Therefore, gamma rays from cosmic rays can, in principle, also be used to probe IGMFs. In the case of GRBs, this was suggested by the authors of Ref. [247]. Similarly, blazars are prominent contenders to emit UHECRs that can induce electromagnetic cascades in the IGM [248,249]. For highly collimated jets, it could be even possible to distinguish this hadronic scenario from the standard picture wherein gamma rays from the sources induce the cascades [250].

The cosmic rays relevant for this type of analysis are UHECRs, since they can produce electromagnetic cascades during intergalactic propagation, via photonuclear or hadronuclear interactions. In fact, this type of scenario has been suggested to explain observations of some blazars, as they lead to better agreement with the measurements [246,248,251–255].

One process that creates electrons and positrons that trigger cascades is Bethe–Heitler pair production: <sup>*A*</sup><sub>*Z*</sub>X + *γ*<sub>bg</sub> → <sup>*A*</sup><sub>*Z*</sub>X + *e*<sup>−</sup> + *e*<sup>+</sup>, wherein <sup>*A*</sup><sub>*Z*</sub>X denotes an arbitrary cosmic-ray nucleus *X* of atomic mass *A* with *Z* protons interacting with a background photon (*γ*<sub>bg</sub>).

Nuclear interactions also produce electrons and photons, starting with the photodisintegration of cosmic-ray nuclei (e.g., <sup>*A*</sup><sub>*Z*</sub>X + *γ*<sub>bg</sub> → <sup>*A*−1</sup><sub>*Z*−1</sub>X + *p*, <sup>*A*</sup><sub>*Z*</sub>X + *γ*<sub>bg</sub> → <sup>*A*−1</sup><sub>*Z*</sub>X + *n*), possibly producing unstable nuclei (<sup>*A*</sup><sub>*Z*</sub>X<sup>∗</sup>) which decay as <sup>*A*</sup><sub>*Z*</sub>X<sup>∗</sup> → <sup>*A*</sup><sub>*Z*</sub>X + *γ*.

The most important hadronic channel for the generation of cascade-inducing particles (electrons and photons) is photopion production. For a cosmic-ray proton, *p* + *γ*<sub>bg</sub> → Δ<sup>+</sup> → *p* + *π*<sup>0</sup> and *p* + *γ*<sub>bg</sub> → Δ<sup>+</sup> → *n* + *π*<sup>+</sup>. The decay of the neutral pion produces photons (*π*<sup>0</sup> → *γ* + *γ*) and the decay of charged pions <sup>3</sup> leads to the generation of leptons (*π*<sup>+</sup> → *μ*<sup>+</sup> + *ν<sub>μ</sub>*), including muons, whose decays produce electrons (*μ*<sup>+</sup> → *ν<sub>e</sub>* + *ν̄<sub>μ</sub>* + *e*<sup>+</sup>). Note that the cascades stemming from the by-products of pion decays also occur for an arbitrary nucleus <sup>*A*</sup><sub>*Z*</sub>X. In this case, the production rate depends on the number of each nucleonic species (see, e.g., Ref. [256] for further details).

While it is, in principle, possible to constrain IGMFs with UHECR-produced gamma rays, this is not straightforward. Firstly, the sources of UHECRs are not known. Secondly, they are deflected by intervening galactic and extragalactic magnetic fields, potentially spoiling any correlation between the source direction and the gamma rays. For more details on the cosmic-ray–gamma-ray connection, the reader is referred to some reviews on the topics: [194,257].

### *3.2. Theory of Propagation*

The particle physics aspects relevant for the propagation of high-energy gamma rays are well known. At energies *E* ≳ 400 GeV, high-energy gamma rays interact with background photon fields predominantly at infrared frequencies, generating electron–positron pairs: *γ* + *γ*<sub>bg</sub> → *e*<sup>+</sup> + *e*<sup>−</sup>. The mean free path for this process is typically of the order of tens to hundreds of Mpc. These pairs up-scatter photons from (mostly) the CMB to high energies via inverse Compton scattering (*e*<sup>±</sup> + *γ*<sub>bg</sub> → *e*<sup>±</sup> + *γ*). These new photons, in turn, can either travel straight to Earth or, if their energy is above the threshold for pair production, restart this process, leading to an electromagnetic cascade in the intergalactic medium.
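The energetics of this cycle can be sketched with the standard Thomson-regime estimate *E<sub>γ</sub>* ≈ (4/3)*γ*<sup>2</sup>*ε* for the mean up-scattered photon energy, a textbook relation not derived in this review; the mean CMB photon energy of ≈2.7 *k*<sub>B</sub>*T* is likewise a standard value.

```python
M_E_EV = 510_998.95      # electron rest energy, in eV
EPS_CMB_EV = 6.34e-4     # mean CMB photon energy at z = 0 (~2.7 k_B T), in eV

def ic_photon_energy_eV(E_e_eV):
    """Mean inverse-Compton up-scattered photon energy in the Thomson
    regime, E_gamma ~ (4/3) * gamma^2 * eps_CMB (textbook estimate)."""
    gamma = E_e_eV / M_E_EV
    return 4.0 / 3.0 * gamma ** 2 * EPS_CMB_EV

# Pairs of ~TeV energy thus feed the cascade at a few GeV, consistent with
# the ~GeV cascade signal discussed in the text.
E_cascade = ic_photon_energy_eV(1.0e12)  # a few times 1e9 eV
```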

The picture outlined in the previous paragraph is theoretically simple, but there are uncertainties that complicate the modelling of electromagnetic cascades in the IGM. The most important one is the distribution of the EBL, which is not precisely known. At extremely high gamma-ray energies (*E* ≳ 10<sup>17</sup> eV), the contribution of the cosmic radio background (CRB) starts to become relevant. A comparison of EBL and CRB models, as well as the density of CMB photons, is illustrated in Figure 2.

In general, the inverse of the mean free path *λ* for a particle of energy *E* and mass *m* interacting with isotropically-distributed photons of differential number density <sup>4</sup> d*n*(*ε*, *z*)/d*ε* is

$$
\lambda^{-1}(E, z) = \frac{1}{8E^2} \int\_0^\infty \frac{1}{\varepsilon^2} \frac{\mathrm{d}n(\varepsilon, z)}{\mathrm{d}\varepsilon} \int\_{s\_{\rm min}}^{s\_{\rm max}} \mathcal{F}(s) \, \mathrm{d}s \, \mathrm{d}\varepsilon,\tag{10}
$$

where *z* is the redshift (see below), *ε* refers to the energy of the background photon, and F depends on the process of interest, with kinematic limits *s*<sub>min</sub> and *s*<sub>max</sub>. For pair production, F = *sσ*<sub>PP</sub>(*s*), with *s*<sub>min</sub> = 4*m*<sub>*e*</sub><sup>2</sup>*c*<sup>4</sup> and *s*<sub>max</sub> = 4*Eε*. For inverse Compton scattering, F = *σ*<sub>IC</sub>(*s* − *m*<sub>*e*</sub><sup>2</sup>*c*<sup>4</sup>)/*β*, wherein *β* denotes the speed of the electrons, in units of the speed of light. The kinematic limits, in this case, are *s*<sub>min</sub> = *m*<sub>*e*</sub><sup>2</sup>*c*<sup>4</sup> and *s*<sub>max</sub> = *m*<sub>*e*</sub><sup>2</sup>*c*<sup>4</sup> + 2*Eε*(1 + *β*). Here, *σ*<sub>PP</sub> and *σ*<sub>IC</sub> denote, respectively, the cross sections for pair production and for inverse Compton scattering. Note that the minimum and maximum energies are, in principle, unbounded, i.e., *ε*<sub>min</sub> → 0 and *ε*<sub>max</sub> → ∞, but in practice the photon-field densities quickly vanish outside a given energy range. In the case of the EBL, for example, for purposes of calculation, *ε*<sub>min</sub> ∼ 10<sup>−4</sup> eV and *ε*<sub>max</sub> ∼ 10 eV (see Figure 2).

**Figure 2.** Compilation of the density of background photons (*n*(*ε*)) with different energies (*ε*) at *z* = 0. The curves correspond to different backgrounds, radio (dashed lines), microwave (dotted), and infrared/optical (solid). Different colors represent different EBL and CRB models: Franceschini et al. [258], Finke et al. [259], Domínguez et al. [260], Gilmore et al. [261], the upper (UL) and lower (LL) limits by Stecker et al. [262], Protheroe and Biermann [263], Nițu et al. [264], and measurements by ARCADE-2 (Fixsen et al. [265]). The frequencies (*ν*) corresponding to the photon energies are shown at the top.

Following Ref. [161], we can approximate Equation (10) as

$$
\lambda\_{\rm PP} \simeq 40 \frac{\kappa}{(1 + z\_{\rm PP})^2} \left(\frac{E\_{\gamma}}{20 \,\text{TeV}}\right)^{-1} \,\text{Mpc} \tag{11}
$$

for pair production, where *κ* is a model-specific parameter of the order of *κ* ∼ 1, and as

$$
\lambda\_{\rm IC} \simeq 32 \frac{1}{(1+z\_{\rm PP})^4} \left(\frac{E\_{\rm c}}{10 \,\rm TeV}\right)^{-1} \,\rm kpc \,\,\tag{12}
$$

for inverse Compton scattering.

Typically, gamma rays with ∼TeV energies produce pairs after travelling distances larger than ∼100 Mpc. For inverse Compton scattering, the typical distance TeV electrons travel before they undergo interactions is ∼30 kpc. In Figure 3 we show the inverse of the mean free path for these processes, as obtained from Equation (10).
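The approximations in Equations (11) and (12) translate directly into code; this is only a sketch, with the pivot energies and the parameter *κ* taken from the equations above.

```python
def lambda_pp_Mpc(E_gamma_TeV, z=0.0, kappa=1.0):
    """Approximate pair-production mean free path of Eq. (11), in Mpc:
    lambda_PP ~ 40 * kappa / (1 + z)^2 * (E_gamma / 20 TeV)^-1."""
    return 40.0 * kappa / (1.0 + z) ** 2 * (20.0 / E_gamma_TeV)

def lambda_ic_kpc(E_e_TeV, z=0.0):
    """Approximate inverse-Compton mean free path of Eq. (12), in kpc:
    lambda_IC ~ 32 / (1 + z)^4 * (E_e / 10 TeV)^-1."""
    return 32.0 / (1.0 + z) ** 4 * (10.0 / E_e_TeV)

# A 20 TeV photon travels ~40 Mpc before producing a pair, while a 10 TeV
# electron up-scatters CMB photons within ~32 kpc: the electron step of the
# cascade is tiny compared to the photon step.
```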

Higher-order counterparts of pair production and inverse Compton scattering are important for the propagation of gamma rays of *E* ≳ 10<sup>15</sup> eV. In particular, for scenarios wherein UHECRs induce electromagnetic cascades (see Section 3.1), they are an essential ingredient to understand gamma-ray production. The higher-order equivalent of Breit–Wheeler pair production is double pair production [266,267] (*γ* + *γ*<sub>bg</sub> → *e*<sup>+</sup> + *e*<sup>−</sup> + *e*<sup>+</sup> + *e*<sup>−</sup>). This process has been extensively studied in various astrophysical contexts, including the propagation of high-energy photons [268,269]. Inverse Compton scattering can also occur as a second-order process called triplet pair production (*e*<sup>±</sup> + *γ*<sub>bg</sub> → *e*<sup>±</sup> + *e*<sup>+</sup> + *e*<sup>−</sup>). Its role in the propagation of high-energy photons has long been recognized [270–273]. This process starts to become important at *E* ≳ 10<sup>17</sup> eV, for cosmological distances. The (inverse) mean free paths for double and triplet pair production are also shown in Figure 3.

The cosmological propagation of any particle is subject to adiabatic energy losses due to the expansion of the Universe. The change in redshift (d*z*) over a small propagated distance (d*x*) can be written as d*z* = (*H*(*z*)/*c*) d*x*, where *H*(*z*) is the Hubble parameter, which in a flat Lambda cold dark matter (ΛCDM) universe is given by

$$H(z) = H_0 \sqrt{\Omega_\Lambda + \Omega_{\rm m} (1+z)^3} \, , \tag{13}$$

with *H*<sub>0</sub> denoting the Hubble "constant", i.e., the value of the Hubble parameter at the present time. Here, the parameters Ω<sub>m</sub> and Ω<sub>Λ</sub> represent the fractions of the total energy density of the Universe corresponding to matter and dark energy, respectively. According to recent measurements, *H*<sub>0</sub> ≈ 67.37 km s<sup>−1</sup> Mpc<sup>−1</sup>, Ω<sub>m</sub> ≈ 0.3147, and Ω<sub>Λ</sub> ≈ 0.6853 [274].
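
Equation (13), together with the quoted parameter values, makes the redshift-distance conversion straightforward to evaluate; below is a minimal sketch (constant and function names are ours, for illustration).

```python
import math

# Equation (13) with the measured values quoted in the text.
H0 = 67.37           # Hubble constant, km s^-1 Mpc^-1
OMEGA_M = 0.3147     # matter density fraction
OMEGA_LAMBDA = 0.6853  # dark-energy density fraction
C_KM_S = 299792.458  # speed of light, km/s

def hubble(z):
    """Hubble parameter H(z) in km s^-1 Mpc^-1 for a flat LambdaCDM universe."""
    return H0 * math.sqrt(OMEGA_LAMBDA + OMEGA_M * (1.0 + z) ** 3)

def dz_over_dx(z):
    """Redshift increment per Mpc of propagated distance, dz/dx = H(z)/c."""
    return hubble(z) / C_KM_S

print(hubble(0.14))      # ~72 km/s/Mpc at the redshift of a nearby blazar
print(dz_over_dx(0.0))   # ~2.2e-4 per Mpc at z = 0
```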

Generally, in the presence of a magnetic field, the charged component of the electromagnetic cascade (the electrons and positrons) loses energy through synchrotron emission. However, synchrotron losses are small for intergalactic gamma-ray propagation, since *B* ≲ 10<sup>−9</sup> G (see Figure 1).

**Figure 3.** Each panel shows the energy-dependent inverse mean free path at *z* = 0 for the processes relevant for the cosmological propagation of gamma rays with energies *E* ≳ 1 GeV. Different photon backgrounds are considered. Solid lines correspond to interactions with the EBL, dashed lines are for the CRB, and the dotted line refers to the CMB. The EBL [259–262] and CRB [263,264] models used are represented by different colors.

The last theoretical ingredient missing for understanding how electromagnetic cascades propagate is the interaction of its charged component with magnetic fields. The equation of motion for a particle of charge *q* with velocity **v** in a magnetic field **B** can be written as

$$\frac{d\mathbf{p}}{dt} = q\mathbf{v} \times \mathbf{B} \,, \tag{14}$$

where **p** is the particle momentum. As a consequence of this equation, the electrons and positrons will deflect away from each other. This deflection consists of a circular/helical movement (around the magnetic-field lines), characterized by the Larmor radius *r*L, given by

$$
r_{\rm L} = \frac{p}{qB} \,, \tag{15}
$$

where *p* is the absolute value of the particle momentum.
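
To see why the resulting deflections are small, Equation (15) can be evaluated for typical cascade parameters. The snippet below is an illustrative order-of-magnitude estimate in SI units, with our own names.

```python
# Equation (15) for an ultrarelativistic electron (p ~ E/c), in SI units.
E_CHARGE = 1.602176634e-19   # elementary charge, C
C_LIGHT = 2.99792458e8       # speed of light, m/s
MPC = 3.0857e22              # metres per megaparsec

def larmor_radius_mpc(E_tev, B_gauss):
    """Larmor radius r_L = p/(qB) ~ E/(e c B), returned in Mpc."""
    E_joule = E_tev * 1e12 * E_CHARGE
    B_tesla = B_gauss * 1e-4
    return E_joule / (E_CHARGE * C_LIGHT * B_tesla) / MPC

# A 10 TeV electron in a 1e-15 G field gyrates on ~10 Mpc scales, far larger
# than its ~30 kpc inverse-Compton mean free path: hence small deflections.
print(larmor_radius_mpc(10.0, 1e-15))   # roughly 11 Mpc
```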

We can now draw a general picture of how gamma rays can be used to constrain IGMFs. Consider an object located at a distance *D* from Earth, corresponding to a redshift *z*src, emitting a jet of high-energy gamma rays with an opening angle Θjet, as sketched in Figure 4. Let Θlos denote the angle between the jet axis and the line of sight, i.e., the angle of misalignment. The primary gamma rays are generated at a redshift *z*src and produce pairs at *z*PP, travelling for a distance dictated by the mean free path for pair production (*λ*PP) for the energy and redshift of interest (see Equations (10) and (11)). The pairs produced are deflected by intervening IGMFs, forming an angle *δ* with the direction of the parent gamma ray. The distance the electrons travel is typically of the order of the mean free path for inverse Compton scattering (*λ*IC; see Equations (10) and (12)). The up-scattered gamma rays can restart the cascade depending on their energy, such that the cascade would have multiple generations of particles. In Figure 4 only one generation is shown. Finally, the secondary gamma rays are detected at Earth forming an angle *θ*obs with respect to the line of sight, i.e., the line connecting the observer and the object. With this scheme in mind, we can estimate the relevant gamma-ray observables, namely the spectrum, arrival directions, and light curves, either analytically (see Section 3.3) or numerically (see Section 3.6).

The secondary photons resulting from the electrons deflected in the presence of IGMFs will be delayed compared to primary gamma rays emitted at the same time. Depending on the distance to the source, the duration of the emission, and the properties of the IGMF, some of the secondary gamma rays from the cascade, produced from electrons via inverse Compton scattering, will not be able to arrive at Earth within one Hubble time, leading to an energy-dependent decrease in the flux. Therefore, the three main gamma-ray observables relevant for IGMF studies are:

- the energy spectrum, which is suppressed at low energies by the magnetically-induced broadening and delays;
- the arrival directions, i.e., the extended halo forming around the source;
- the light curves, which carry the imprint of the time delays.

Naturally, quite often combinations of these strategies are employed, as will be presented in more detail in Section 4.

**Figure 4.** Schematic drawing of the development of an electromagnetic cascade. A source (yellow star) with a jet of opening angle Θjet, tilted with respect to the line of sight by an angle Θlos, emits high-energy gamma rays (dark green line) forming an angle *θ*emi with this line. After interacting, a gamma ray produces an electron–positron pair (blue and red arrows) that, in the presence of IGMFs, is deflected by an angle *δ* with respect to the direction of the original gamma ray. These pairs can then up-scatter background photons to high energies (light green line), which are detected at an angle *θ*obs.

### *3.3. Analytical Description of Propagation and Observables*

Neronov and Semikoz [161] presented a pedagogical model describing how gamma-ray telescopes can be used to probe IGMFs. This model is a suitable approximation for the energy range of interest, between GeV and tens of TeV. Making use of some simplifying assumptions, they derived analytical expressions for the expected signatures of specific combinations of magnetic-field strength (*B*) and coherence length (*LB*). It is beyond the scope of this review to derive the formulae, but it is certainly worth transcribing the main results and some of the steps required to obtain them.

One can distinguish two regimes of propagation for the charged component of the electromagnetic cascades. They are determined by an interplay between the characteristic scale of inverse Compton scattering (*λ*IC) and the coherence length of the magnetic field (*LB*). For *λ*IC ≪ *LB*, the propagation is quasi-rectilinear (ballistic), whereas for *λ*IC ≫ *LB*, the electrons diffuse before they produce the secondary photons via IC scattering. In the former case, the electrons can be seen as effectively moving in a homogeneous magnetic field, such that in the small-angle approximation *δ* ≃ *λ*IC/*r*L, wherein *r*L is given by Equation (15). In the latter case, we have *δ* ≃ √(*λ*IC*LB*)/*r*L. Together with Equations (12) and (15) we then obtain the estimate

$$\delta \simeq \begin{cases} 0.03^{\circ} (1 + z_{\rm PP})^{-\frac{1}{2}} \left( \frac{E_{\rm e}}{10\,{\rm TeV}} \right)^{-\frac{3}{2}} \left( \frac{B}{10^{-15}\,{\rm G}} \right) \left( \frac{L_B}{1\,{\rm kpc}} \right)^{\frac{1}{2}} & L_B \ll \lambda_{\rm IC}, \\ 0.003^{\circ} (1 + z_{\rm PP})^{-2} \left( \frac{E_{\rm e}}{10\,{\rm TeV}} \right)^{-2} \left( \frac{B}{10^{-15}\,{\rm G}} \right) & L_B \gg \lambda_{\rm IC}, \end{cases} \tag{16}$$

where *E*e is the electron energy at redshift *z*PP. Note that a more detailed investigation [275] shows that the deflection angle also weakly depends on the spectral index *αB* of the magnetic field (see Section 2.1) for *LB* ≪ *λ*IC.

For distant sources, the pairs are produced closer to the source than to Earth (*λ*PP ≪ *D*). If *δ* ≪ 1, then we can adopt the approximation *z* ≡ *z*src ≃ *z*PP, which allows us to derive an analytic expression for *θ*obs:

$$\theta_{\rm obs} \simeq \begin{cases} 0.07^{\circ}(1+z)^{-\frac{1}{2}} \left(\frac{\tau_\theta}{10}\right)^{-1} \left(\frac{E_{\gamma}}{0.1\,{\rm TeV}}\right)^{-\frac{3}{4}} \left(\frac{B}{10^{-14}\,{\rm G}}\right) \left(\frac{L_B}{1\,{\rm kpc}}\right)^{\frac{1}{2}} & L_B \ll \lambda_{\rm IC},\\ 0.5^{\circ}(1+z)^{-2} \left(\frac{\tau_\theta}{10}\right)^{-1} \left(\frac{E_{\gamma}}{0.1\,{\rm TeV}}\right)^{-1} \left(\frac{B}{10^{-14}\,{\rm G}}\right) & L_B \gg \lambda_{\rm IC}, \end{cases} \tag{17}$$

where *τθ* is the ratio between the angular diameter distance from the observer to the source and the mean free path for pair production, *λ*PP. Morphologically, this corresponds to a "halo" of secondary photons around the point-like source. Note that, while the morphology of the arrival directions does resemble a halo in the axi-symmetric case, this is not always the case. Depending on the geometry of the jet (Θlos > 0◦; see Figure 4) and properties of the intervening magnetic fields (e.g., helical fields), more complex shapes arise. We continue to use the term 'halo' nonetheless.
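
Equation (17) lends itself to a quick numerical evaluation. The helper below is an illustrative sketch with our own parameter names; it reproduces the quoted normalizations of the two regimes.

```python
def theta_obs_deg(E_gamma_tev, B_gauss, L_B_kpc, z=0.0, tau_theta=10.0,
                  small_LB=True):
    """Halo angular size from Equation (17), in degrees."""
    b = B_gauss / 1e-14
    e = E_gamma_tev / 0.1
    t = tau_theta / 10.0
    if small_LB:   # L_B << lambda_IC
        return (0.07 * (1 + z) ** -0.5 * t ** -1 * e ** -0.75
                * b * (L_B_kpc / 1.0) ** 0.5)
    else:          # L_B >> lambda_IC
        return 0.5 * (1 + z) ** -2 * t ** -1 * e ** -1 * b

# For B = 1e-14 G and L_B = 1 kpc, a 100 GeV cascade photon arrives
# ~0.07 deg off-source, comparable to Fermi-LAT's angular resolution.
print(theta_obs_deg(0.1, 1e-14, 1.0))   # 0.07
```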

An interesting and somewhat more accurate approach to estimate the size of such haloes was presented in Ref. [276], in which the moments of the halo distribution are calculated from diffusion-cascade equations. This method is applicable whenever the distribution of gamma rays emitted by the source is isotropic or the jet opening angle (Θjet) is sufficiently large.

Another important quantity when determining IGMFs from electromagnetic cascades is the time delay Δ*tB*, defined as the difference between the following two quantities: the cumulative propagation time of the "reprocessed" gamma rays resulting from the cascades (see Figure 4), consisting of the lifetime *t*PP of the primary gamma ray until it results in pair production and of the duration *t*sec of the cascade from the secondary gamma rays; and the light-travel time (*t*prim) of primary gamma rays. Therefore, one can write the equation

$$
\Delta t_B = (t_{\rm PP} + t_{\rm sec}) - t_{\rm prim} \,. \tag{18}
$$

For the standard consideration of IGMFs we have *z*PP ≃ *z*src = *z* ≪ 1 and *δ* ≪ 1, such that Equation (18) becomes

$$
\Delta t_B \simeq \begin{cases}
7 \times 10^5\,{\rm s}\,(1 - \tau_\theta^{-1}) (1 + z)^{-5} \kappa \left(\frac{E_\gamma}{0.1\,{\rm TeV}}\right)^{-\frac{5}{2}} \left(\frac{B}{10^{-18}\,{\rm G}}\right)^2 & L_B \ll \lambda_{\rm IC}, \\
1 \times 10^4\,{\rm s}\,(1 - \tau_\theta^{-1}) (1 + z)^{-2} \kappa \left(\frac{E_\gamma}{0.1\,{\rm TeV}}\right)^{-2} \left(\frac{B}{10^{-18}\,{\rm G}}\right)^2 \left(\frac{L_B}{1\,{\rm kpc}}\right) & L_B \gg \lambda_{\rm IC}.
\end{cases} \tag{19}
$$
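
Equation (19) can likewise be evaluated directly; below is a sketch with illustrative parameter names.

```python
def time_delay_s(E_gamma_tev, B_gauss, L_B_kpc=1.0, z=0.0,
                 tau_theta=10.0, kappa=1.0, small_LB=True):
    """Magnetically induced time delay from Equation (19), in seconds."""
    geom = 1.0 - 1.0 / tau_theta
    b2 = (B_gauss / 1e-18) ** 2
    e = E_gamma_tev / 0.1
    if small_LB:   # L_B << lambda_IC
        return 7e5 * geom * (1 + z) ** -5 * kappa * e ** -2.5 * b2
    else:          # L_B >> lambda_IC
        return 1e4 * geom * (1 + z) ** -2 * kappa * e ** -2 * b2 * (L_B_kpc / 1.0)

# Even a 1e-18 G field delays 100 GeV cascade photons appreciably:
print(time_delay_s(0.1, 1e-18))   # roughly 6.3e5 s, about a week
```

The quadratic dependence on *B* means the delays quickly exceed realistic observation windows as the field strength grows.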

The last observable we describe here concerns the probing of magnetic helicity of IGMFs using gamma rays and was first suggested in [277]. Since then, it has been further extended and investigated in a significant number of publications [278–287]. There it was shown that the helical part of the magnetic field spectrum (see Equation (5)) has a direct impact on the morphology of the halo around the gamma-ray source. In particular, when the magnetic field is helical, the halo becomes "twisted", i.e., instead of an (elongated) circular or oval halo, as one would expect from considering the simple analytic formulae derived above, the result is a spiral-like pattern (see Figure 8).

This twisted pattern or, more specifically, its handedness, can be measured by the quantity *Q* introduced in [277] (and summarized in [59]) as

$$Q(\mathbf{\hat{n}}_1, \mathbf{\hat{n}}_2, \mathbf{\hat{x}}_{\rm los}) = (\mathbf{\hat{n}}_1 \times \mathbf{\hat{n}}_2) \cdot \mathbf{\hat{x}}_{\rm los} \,, \tag{20}$$

where **n̂**<sub>1</sub> and **n̂**<sub>2</sub> are the unit vectors of the arrival directions of two particles with the respective energies *E*1 and *E*2 (with *E*1 < *E*2), and **x̂**los is the unit vector along the line of sight from the observer to the source. Using this, one can calculate the so-called *Q*-statistics, given by

$$
\langle \overline{Q}(\theta_{\rm obs}^{\rm max}) \rangle = \langle Q(\mathbf{\hat{n}}_1, \mathbf{\hat{n}}_2, \mathbf{\hat{x}}_{\rm los}) \rangle_{\theta_{\rm obs} \leq \theta_{\rm obs}^{\rm max}} \,, \tag{21}
$$

i.e., the average over all photons with angles *θ*obs up to a value *θ*obs<sup>max</sup>.

If the direction of the line of sight is not known, the arrival direction **n̂**<sub>3</sub> of a third particle with an energy *E*3 (with *E*3 > *E*2 > *E*1) may be considered instead of **x̂**los. In fact, by generally considering such triplets of particles from any direction in the sky, one can calculate the generalized *Q*-quantity (and, subsequently, the corresponding statistics) as [279]

$$Q(E_1, E_2, E_3, \theta^{\rm max}) = \frac{1}{N_1 N_2 N_3} \sum_{\mathbf{\hat{n}}_3} \sum_{\angle(\mathbf{\hat{n}}_1, \mathbf{\hat{n}}_2)} Q(\mathbf{\hat{n}}_1, \mathbf{\hat{n}}_2, \mathbf{\hat{n}}_3) \,, \tag{22}$$

where, for every particle with arrival direction **n̂**<sub>3</sub> (and given energy *E*3), the second summation is carried out over all particles with the given energies *E*1 and *E*2 (with *E*3 > *E*2 > *E*1) and arrival directions **n̂**<sub>1</sub> and **n̂**<sub>2</sub>, respectively, which lie inside "patches" of angular size *θ*max around **n̂**<sub>3</sub>. Finally, the values *N*1, *N*2, and *N*3 in Equation (22) are the corresponding total numbers of particles for each of the three energies.
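
Equation (20) is a simple scalar triple product. The pure-Python sketch below uses toy arrival directions (approximately unit vectors, not real data) to illustrate how swapping the lower- and higher-energy photon flips the sign of *Q*, which is precisely the handedness information the statistics exploit.

```python
# Handedness estimator Q = (n1 x n2) . x_los of Equation (20),
# for direction vectors given as 3-tuples.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def Q(n1, n2, x_los):
    return dot(cross(n1, n2), x_los)

# Two toy arrival directions slightly offset from a source along +z:
x_los = (0.0, 0.0, 1.0)
n1 = (0.1, 0.0, 0.995)   # lower-energy photon, E1 < E2
n2 = (0.0, 0.1, 0.995)   # higher-energy photon
print(Q(n1, n2, x_los) > 0, Q(n2, n1, x_los) < 0)   # True True
```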

The final step in connecting the *Q*-statistics (and therefore the handedness of particle arrival directions) with the handedness of the magnetic field (and therefore its helicity) is to consider the case *θ*max → *π*/2. As shown in [279], the *Q*-statistics in this limit is proportional to the helical part of the spectrum H*k*, as defined in Equation (5).

An alternative to the *Q*-statistics, introduced in [283], is the *S*-statistics which, for a single source, can be used to quantify the spiral shape of the halo.

### *3.4. Plasma Instabilities*

The physics of electromagnetic cascades described above is well understood, but it neglects the back-reaction of the intergalactic medium on the cascades. This is a common assumption adopted in most IGMF studies, but if it turns out to be a poor approximation, plasma effects may become dominant. It was suggested [288] that the electrons in the cascade interact with the IGM and generate plasma instabilities, thereby losing their energy and heating the IGM. Due to the extreme parameters of the interacting components (for example, a factor of up to 10<sup>24</sup> between the density of the electron beam and that of the background plasma [289]), it is practically impossible to calculate exactly the impact of the instabilities on the development of the cascade. Nevertheless, one can rely on approximations and/or extrapolations.

The IGM parameters relevant for plasma instabilities are its temperature, which is typically *T*IGM ∼ 10<sup>4</sup> K [288], and its density, which in cosmic voids is *n*IGM ∼ 0.1 m<sup>−3</sup> [290]. Another important parameter is the density of the gamma-ray beam, which is related to its luminosity.

As mentioned above, there is no general agreement on whether plasma instabilities are important for the propagation of electromagnetic cascades. Even if one accepts this assumption, it is not clear which kind of instability could be dominant. In fact, the modulation [289,291–293], oblique [288,294,295], kinetic [296], and longitudinal [297] instabilities, as well as non-linear Landau damping [298], have been considered in the literature. On the other hand, Ref. [299] found that, even if they are present, the effect of plasma instabilities is too small to cause a significant impact on observations. A comparison of the energy-loss lengths for different types of instabilities is shown in Figure 5.

Several authors subsequently published results of simulations of gamma-ray propagation including possible plasma-instability effects and compared them to actual observations [300–302]. The results show that, while the instabilities can indeed lead to appreciable deviations from the paradigmatic picture of cascade development, they may not be sufficient to render gamma-ray constraints on IGMFs completely ineffective. In this case, all that would be required is a more detailed modelling of the electromagnetic cascades, which, understandably, would be more susceptible to uncertainties due to the inclusion of an additional and poorly-understood effect.

**Figure 5.** Cooling rates due to plasma instabilities computed at *z* = 0, according to different models [288,291,293,294,298]. This example is for a typical scenario with an IGM density of *n*IGM = 10<sup>−1</sup> m<sup>−3</sup> and temperature of 10<sup>4</sup> K, for a blazar beam with luminosity *L* = 10<sup>38</sup> J/s. We also present the inverse mean free path for inverse Compton scattering in the CMB for comparison.

There is also another window of opportunity to evade plasma instabilities, even if they severely disrupt electromagnetic cascades. The growth rates of plasma instabilities are often estimated using simplifying assumptions, such as a continuous and constant stream of particles. However, if the object in question emits gamma rays in flares, the temporal structure of the resulting charged beam should be considered. In particular, if the duration of the flare is short enough, the instability might not have enough time to fully develop, consequently having no significant impact on the electrons.

### *3.5. Other Propagation Phenomena*

So far we have discussed how electromagnetic cascades propagate in the Universe in light of a standard picture entirely contained within the framework of quantum electrodynamics (Section 3.2). We also briefly discussed how plasma instabilities could quench electromagnetic cascades propagating in the IGM (Section 3.4). In this subsection, we briefly describe how other physical phenomena could affect the development of electromagnetic cascades and, consequently, observations of high-energy gamma-ray sources, with direct implications for IGMF studies.

One phenomenon that can interfere with the propagation of gamma rays and consequently compromise IGMF constraints is gravitational lensing. Massive objects can significantly deform the space-time surrounding them, altering the path along which particles travel. As a result, in the context of gamma rays, gravitational lenses can significantly deform the morphology of haloes and, in the case of flaring objects, increase the time delays due to this gravitationally-induced contribution. The first source for which gravitational lensing has been observed at gamma-ray energies (up to 30 GeV) was PKS 1830-211 [303]. Since then, the phenomenon has been detected for this and other gamma-ray–emitting objects [304–307]. Ref. [308] investigated how macrolenses could compromise estimates of the optical depth for pair production, concluding that this effect would not lead to any measurable changes in this observable. This result is corroborated by the more detailed study of [309].

Other potentially important phenomena arise in the context of BSM models. The most widely studied BSM processes that could interfere with the gamma-ray–IGMF framework we present here involve Lorentz invariance violation (LIV) and interactions with axion-like particles (ALPs). Because of the potentially important role played by these phenomena in determining the gamma-ray signatures of sources used for IGMF constraints, we briefly touch upon these issues.

Lorentz invariance violation is a possible consequence of various BSM approaches, especially in the context of quantum gravity. The standard approach, from the field theory side, is to create a minimal SM extension, in particular by introducing additional terms to the SM Lagrangian, resulting in an effective field theory with LIV [310].

In terms of dynamics, the main effect on the propagation of particles is the modification of the dispersion relation, given by

$$E_{\rm LIV}^2 = E^2 + \eta \frac{p^{n+2}}{M_{\rm Pl}^n} \, , \tag{23}$$

where *E*LIV and *E* are the particle energies with and without LIV, respectively, *η* is a dimensionless parameter measuring the strength of the LIV, and *M*Pl is the Planck mass. This, on the one hand, changes the threshold of a given reaction and, as a consequence, also the corresponding propagation length, as it modifies the limits of the integral in Equation (10). In addition, new reactions, which are not possible without LIV, may then be kinematically allowed, such as, to name the ones most relevant in the context of this review, spontaneous photon decay into pairs/photons, the vacuum Cherenkov effect for electrons and charged UHECRs, as well as spontaneous photodisintegration of multi-nucleon nuclei. For an overview of the modifications of the processes in electromagnetic cascades and in UHECR propagation, see [311–313] and [311,314,315], respectively. All this may result in a significant modification of particle propagation and, therefore, impact the corresponding observations. In particular, [316] showed that LIV might dramatically increase the interaction length of pair production for energies above 100 TeV and therefore suppress the cascade development. On the other hand, LIV might also imply that the photon speed is energy-dependent, thus resulting in energy-dependent time delays [313].
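
To get a feeling for the size of the effect, Equation (23) can be evaluated under the simplifying assumptions *p* ≈ *E* (natural units), *η* = 1, and *n* = 1. This is an order-of-magnitude sketch, not a threshold calculation.

```python
import math

M_PL_EV = 1.22e28   # Planck mass in eV

def e_liv_ev(E_ev, eta=1.0, n=1):
    """Modified dispersion relation of Equation (23), assuming p ~ E."""
    return math.sqrt(E_ev ** 2 + eta * E_ev ** (n + 2) / M_PL_EV ** n)

# For a 100 TeV photon the relative correction is ~E/(2 M_Pl) ~ 4e-12;
# tiny, yet enough to shift interaction thresholds appreciably.
E = 1e17   # 100 TeV in eV
print((e_liv_ev(E) - E) / E)
```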

Axion-like particles, or ALPs, appear in extensions of the SM. They are pseudo Nambu-Goldstone bosons associated with a broken *U*(1) symmetry. They were originally introduced by Peccei and Quinn [317,318] as a solution to the strong CP problem. ALPs couple to standard-model particles via the Lagrangian [319]

$$\mathcal{L}_{a\gamma} = g_{a\gamma}\, a\, \mathbf{E}_{\gamma} \cdot \mathbf{B}_{\rm ext} \,, \tag{24}$$

where **E***γ* denotes the electric field of the photon itself, and **B**ext represents an external magnetic field (in the context of this review, IGMFs). The coupling constant *gaγ* determines how strongly photons, in our case gamma rays, interact with the ALP field. Over a distance *x*, the probability of a photon converting into an ALP (and vice-versa) is

$$P_{a\leftrightarrow\gamma}(x) = \sin^2\left(\frac{1}{2}\arctan\left(\frac{2\Delta_{a\gamma}}{\Delta_a - \Delta_\gamma}\right)\right)\sin^2\left(\frac{x}{2}\sqrt{(\Delta_\parallel - \Delta_a)^2 + 4\Delta_{a\gamma}^2}\right). \tag{25}$$

Here the Δ terms refer to the solution of the equations of motion derived from the Lagrangian (Equation (24)). They describe: the coupling (Δ*aγ* = *gaγBT*) between a photon of energy *E* and the ALP field for an external magnetic field *BT* transverse to the direction of propagation of the photon; the kinetic term (Δ*a* = *m*<sub>a</sub><sup>2</sup>/2*E*) for an ALP of mass *ma*; and the two polarization states of the photon (Δ<sub>∥</sub> and Δ<sub>⊥</sub>), which, in our case, encompass the contribution of the IGM plasma (Δpl = −*ω*<sub>pl</sub><sup>2</sup>/2*E*) for a plasma frequency *ω*pl, and the QED vacuum polarization (ΔQED ∝ *B*<sub>T</sub><sup>2</sup>), which depends on the polarization direction (ΔQED,⊥ = 7ΔQED/2 and ΔQED,∥ = 2ΔQED). For more details, the reader is referred to, for example, Ref. [319].

Two regimes of propagation can be identified [320], depending on whether the gamma-ray energy is larger or smaller than the critical energy (*Ec*), given by:

$$E_c = \frac{\left|m_a^2 - \omega_{\rm pl}^2\right|}{4\Delta_{a\gamma}} \approx 2.5 \left( \frac{\left| m_a^2 - \omega_{\rm pl}^2 \right|}{10^{-20}\,{\rm eV}^2} \right) \left( \frac{10^{-9}\,{\rm G}}{B_T} \right) \left( \frac{10^{-11}\,{\rm GeV}^{-1}}{g_{a\gamma}} \right) {\rm GeV} \,. \tag{26}$$

The limit *E* ≫ *Ec* corresponds to the so-called strong-mixing regime. In this case, the probability of conversion (see Equation (25)) does not depend on the energy. If *E* ≪ *Ec*, the energy dependence becomes salient, leading to an effective low-energy cut-off.
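
The scaling of Equation (26) is easy to evaluate. The helper below (with illustrative names) reproduces the quoted normalization and shows how a weaker field raises the critical energy.

```python
def e_crit_gev(delta_m2_ev2, B_T_gauss, g_agamma_gev):
    """Critical energy via the quoted scaling of Equation (26), in GeV.

    delta_m2_ev2:  |m_a^2 - omega_pl^2| in eV^2
    B_T_gauss:     transverse magnetic field in G
    g_agamma_gev:  ALP-photon coupling in GeV^-1
    """
    return (2.5 * (delta_m2_ev2 / 1e-20)
                * (1e-9 / B_T_gauss)
                * (1e-11 / g_agamma_gev))

# Reference values reproduce the 2.5 GeV normalization; an IGMF-like
# field of 1e-12 G pushes the strong-mixing regime to TeV energies.
print(e_crit_gev(1e-20, 1e-9, 1e-11))    # 2.5
print(e_crit_gev(1e-20, 1e-12, 1e-11))   # ~2.5e3 GeV
```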

The propagation of gamma rays will be affected by ALPs in multiple ways. Firstly, the magnetic fields in the sources will contribute to the total ALP-photon mixing. Secondly, once the gamma rays are injected into the intergalactic space, they may initiate electromagnetic cascades as described in Section 3.2. Upon entering the Galaxy, ALP-photon mixing may also occur due to the Galactic magnetic field. The oscillation probability from Equation (25) will then be a combination of the probabilities in each of these environments, as discussed in, e.g., Refs. [321–324].

In the case of gamma rays propagating over cosmological distances, the oscillation probability from Equation (25) implies deviations from the expected transparency of the Universe, since gamma rays will be able to travel longer without undergoing pair production. A number of works investigated the possibility that this "pair production anomaly" could be related to ALPs (e.g., [323,325–328]).

The effects of IGMFs on gamma-ray–ALP interconversion have been studied using several methods, ranging from semi-analytical approaches to more detailed simulations. While the first studies on the topic assumed relatively simple magnetic-field configurations, later studies [329,330] improved the treatment by including turbulent fields (see Section 2.1 for details). Investigations considering the actual distribution of magnetic fields in the magnetized cosmic web have also been performed [331].

While ALPs are an important ingredient that could play a leading role in the intergalactic propagation of gamma rays in a magnetized Universe, they have not been observed, and only constraints exist. Some limits were derived using gamma-ray observations [327,332–334], but much of the parameter space is excluded by observations at wavelengths other than gamma rays. Interestingly, some works have obtained combined limits on IGMFs and ALPs [335]. For reviews on the status of the field, see, e.g., Refs. [336–338].

### *3.6. Propagation Codes*

The propagation of electromagnetic cascades in the intergalactic medium has often been treated by employing approximations that enable (semi-)analytical solutions (e.g., [48,339,340]). In the last decade, Monte Carlo methods have been used to treat this problem [44,283,341–344]. Many codes are now publicly available.

Elmag [341,344] is a Fortran code that tracks the development of electromagnetic cascades. In the first two versions of the code [341], the effects of magnetic fields on the charged cascade component, namely time delays and deflections, were taken into account using the small-angle approximation. Therefore, these versions were limited to low magnetic-field strengths. The newest version, Elmag 3.01 [344], adds a Lorentz-force solver that enables three-dimensional simulations assuming turbulent magnetic fields generated according to Equations (5) and (7), following Refs. [345,346], as well as custom grids.

CRPropa [342,347,348] is a well-known code for ultra-high-energy cosmic-ray propagation, written in C++ and with Python bindings (since version 3). The original CRPropa [347] and CRPropa 2 [348] made use of the numerical methods from Ref. [339], namely transport equations, to treat the development of electromagnetic cascades. The newest version includes a full treatment of electromagnetic cascades [285,342,349]. A variety of magnetic-field configurations are available, and the code is flexible enough to handle customizations and arbitrary magnetic-field grids. Moreover, it can generate turbulent magnetic fields on the grid or using grid-less methods [350], with improvements from [351]. Earlier releases of CRPropa 3 supported the propagation of gamma rays with energies ≳10<sup>17</sup> eV through the Monte Carlo EleCa code [352]. Due to computational limitations, EleCa is restricted to the highest energies, but in CRPropa a hybrid approach using the transport-equation treatment of [339] was available. Recent developments enable a full Monte Carlo treatment of photons from ultra-high down to GeV energies, which is useful for exploring UHECR-induced cascade scenarios (see Section 3.1).

A Fortran code for cascade propagation was developed by Fitoussi et al. [343]. It does not rely on any approximations, performing the full three-dimensional propagation of the cascades. In this code, the magnetic field is composed of cells with randomly oriented strengths. A semi-analytical treatment of the cascade development in Mathematica is implemented in *γ*-Cascade [353].

Plasma instabilities are often neglected in simulations, or treated within a dedicated MHD computational framework. In [302], grplinst was presented. It is a module for the CRPropa code that implements plasma effects on the electrons as an additional energy-loss term of the form

$$-\frac{{\rm d}E_{\rm e}}{{\rm d}x}(E_{\rm e}, x, z) = \frac{E_{\rm e}}{c\,\tau(E_{\rm e}, x, z)} \,, \tag{27}$$

where *x* is the length of the trajectory described by an electron (or positron) of energy *Ee*, *z* is the redshift, and *τ* is the electron energy-loss time due to the plasma instability. Within this simplified treatment, the time scale on which the instability grows (T) is taken to be the electron cooling time (*τ*). Therefore, Equation (27) overestimates the losses, since in reality T ≤ *τ*. More recently, the same parametrizations of grplinst [302] were implemented in Elmag 3.02.
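
For a constant loss time *τ*, Equation (27) integrates to exponential cooling, *E*(*x*) = *E*0 exp(−*x*/*cτ*). The forward-Euler sketch below, with an arbitrary assumed value of *τ*, recovers this behaviour.

```python
C = 2.99792458e8   # speed of light, m/s

def cool(E0, tau_s, x_m, steps=100000):
    """Integrate dE/dx = -E/(c*tau) (Equation (27)) with forward Euler."""
    dx = x_m / steps
    E = E0
    for _ in range(steps):
        E -= dx * E / (C * tau_s)   # energy lost over the step dx
    return E

E0 = 1.0        # initial electron energy, arbitrary units
tau = 1e13      # assumed (hypothetical) constant loss time, s
x = C * tau     # propagate over one cooling length
print(cool(E0, tau, x))   # close to 1/e ~ 0.368
```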

### *3.7. Examples*

To illustrate the effects of IGMFs, we present the results of Monte Carlo simulations of the development of electromagnetic cascades in the IGM. To this end, we use the CRPropa code [342], but similar results could have been derived with, e.g., Elmag [341,344] or the code presented in Ref. [343].

We first select the archetypical blazar 1ES 0229+200 [201], used in several IGMF studies (cf. Section 3.1). This object is located at a distance corresponding to *z* ≃ 0.14. We fix the coherence length to *LB* = 1 Mpc to illustrate the formation of haloes around the source. This is shown in Figure 6. Note that these plots are shown in the coordinate system of the simulation, as observed from Earth, but they can be immediately converted to another coordinate system, such as galactic or equatorial.

It is evident from Figure 6 that a significant fraction of the flux is not contained within a finite-sized containment radius centred at the source. This causes spectral changes with respect to the point-like source flux, as shown in Figure 7.

If magnetic fields are maximally helical, then the halo shape shown in Figure 6 changes considerably. In fact, the changes can be so drastic that the morphology of the arrival direction pattern is no longer a standard axi-symmetric halo. For a source pointing straight at Earth (Θlos = 0◦), we expect a spiral-like pattern, as shown in Figure 8 for a hypothetical source at *z* = 0.08. In this case, the handedness of the halo reflects the sign of the helicity: left-handed for *HB* > 0, and right-handed for *HB* < 0.

The arrival directions of gamma rays can be quantified through the calculation of the *Q*-factors (see Equation (22)). The effects of the helicity of IGMFs are more pronounced for large coherence lengths, hence the choice of *LB* = 250 Mpc in the example of Figure 8. The smaller the ratio between the source distance and the coherence length, the more diluted the signal is, which would be reflected in the *Q*-factors (see [283,287]).

**Figure 6.** Simulated pair haloes around the blazar 1ES 0229+200, for the magnetic-field strengths indicated in the figures. The intrinsic source spectrum is a power law with *α* = 1.5 and *E*max = 5 TeV, following Ref. [45]. The coherence length is assumed to be *LB* = 1 Mpc in this example. All gamma rays with *E* ≳ 1 GeV are considered in this plot.

**Figure 7.** This figure illustrates the expected point-like flux from 1ES 0229+200 obtained with Monte Carlo simulations. The lines correspond to different magnetic-field strengths, indicated in the legend. The data points represent measurements by Fermi-LAT [45] and H.E.S.S. [201]. The source parameters are the same as in Figure 6.

**Figure 8.** This figure illustrates the expected arrival directions of gamma rays considering arbitrary realizations of a helical turbulent magnetic field of strength *B* = 10<sup>−15</sup> G with a Batchelor spectrum (*αB* = 5) and coherence length *LB* = 200 Mpc. The left panel corresponds to a realization with maximally positive helicity (*HB* = +1), whereas the right one corresponds to another realization with negative helicity (*HB* = −1). The source, assumed to be located at *z* = 0.08, emits gamma rays with a spectrum *E*<sup>−1.5</sup> and an exponential cutoff at *E*max = 100 TeV (see Equation (9)). Its jet has an opening angle Θjet = 5◦ and it points directly at Earth (Θlos = 0◦; see Figure 4). The color scale indicates the normalized spectrum-weighted number of detected events in the angular bin.

### **4. Results**

Fluxes of distant objects, like the ones used in IGMF studies, are normally computed for a point-like source. The magnetically-induced broadening of the electromagnetic cascade will naturally affect the measured point-like flux, especially at lower energies (typically *E* ≲ 10 GeV), since these are predominantly secondary gamma rays if the intrinsic spectrum extends beyond ∼TeV energies. In this case, the larger the angular broadening caused by IGMFs, the more pronounced the suppression of the gamma-ray flux from a point-like source, since a fraction of the events will leak outside the point spread function (PSF) of the detector.
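As a toy illustration of this leakage (not the simulated profiles of Figures 6 and 7), one can model the halo as a symmetric two-dimensional Gaussian of angular width *θ*ext and ask what fraction of the cascade flux falls inside the instrument's PSF radius. The Gaussian assumption and the function name are ours, chosen purely to make the geometry concrete:

```python
import math

def psf_contained_fraction(theta_psf_deg, theta_ext_deg):
    """Fraction of the cascade flux reconstructed as point-like, for a
    toy halo modelled as a symmetric 2-D Gaussian of width theta_ext
    (an illustrative assumption).  Integrating the Gaussian over a
    disc of radius theta_psf gives 1 - exp(-theta_psf^2 / (2 theta_ext^2)).
    """
    if theta_ext_deg == 0.0:
        return 1.0  # no magnetic broadening: fully contained
    return 1.0 - math.exp(-(theta_psf_deg / theta_ext_deg) ** 2 / 2.0)
```

With *θ*psf = 0.1◦, for example, a halo broadened to *θ*ext = 0.5◦ would retain only about 2% of the flux inside the PSF, illustrating how stronger broadening translates into stronger suppression of the point-like flux.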

Gamma-ray sources are observed during a given time window. If IGMFs are such that the incurred time delays exceed this window of observation, then the measured flux will be affected. The suppression will, in general, be stronger as the energy decreases, because lower-energy secondary photons are produced by lower-energy electrons, whose smaller Larmor radii (see Equation (15)) imply larger deflections and hence longer delays. The relevance of this effect depends on the interplay between the duration of the emission, which depends on the type of object (see Section 3.1), the time window of observation, and the magnetically-induced time delay.

A number of studies have attempted to constrain IGMFs with gamma rays using different methods. One possible way to classify these studies is by the number of sources used: in Section 4.1 we describe the results of analyses of individual gamma-ray sources, and in Section 4.2 those of multiple stacked sources. In general, the results concern the magnetic-field strength. However, there have also been attempts to constrain the coherence length and helicity of IGMFs with gamma rays, which we discuss in Sections 4.3 and 4.4, respectively, followed in Section 4.5 by results for IGMFs considering cascades induced by UHECRs. Finally, in Section 4.6 we discuss the prospects for IGMF measurements with gamma-ray observatories.

### *4.1. Analyses of Individual Sources*

The first constraints on IGMFs using gamma rays were derived by Neronov and Vovk [40], using observations by Fermi-LAT and IACTs of the blazars 1ES 0229+200, 1ES 0347+121, and 1ES 1101-232. The results suggest that *B* ≳ 10<sup>−16.5</sup> G for *LB* ≳ 1 Mpc, and *BL<sub>B</sub>*<sup>1/2</sup> ≳ 10<sup>−16.5</sup> G Mpc<sup>1/2</sup> for *LB* ≲ 1 Mpc, as shown in Figure 9. This dependence of the lower limit of IGMFs on the coherence length, *LB*, follows from the simplified approach commonly used (see Section 3.3), and is adopted in most of the works to which we refer below, with a few exceptions. Most importantly, this work was the first to firmly exclude the case *B* = 0. An earlier investigation [354] of the blazar 1ES 1101-232 concluded that an exceedingly hard intrinsic spectrum for this object would be required to account for the observations, unless the EBL were more intense and the IGMF stronger (*B* ≳ 10<sup>−15</sup> G). Ref. [355] argued along the same lines when interpreting observations of the blazar H1426+428.
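The two regimes of such limits can be summarized in a single scaling function: flat above a critical coherence length, and stiffening as *LB*<sup>−1/2</sup> below it, since the constrained combination there is *B LB*<sup>1/2</sup>. The short sketch below encodes this behaviour with the numbers quoted above for Ref. [40]; the function name and parametrization are ours.

```python
def b_lower_limit_gauss(L_B_mpc, B0_gauss=10**-16.5, L_crit_mpc=1.0):
    """Lower limit on B as a function of coherence length L_B.

    For L_B >= L_crit the limit is flat (B >= B0).  Below L_crit the
    constraint applies to B * L_B^{1/2}, so the limit on B alone
    scales as L_B^{-1/2}.
    """
    if L_B_mpc >= L_crit_mpc:
        return B0_gauss
    return B0_gauss * (L_crit_mpc / L_B_mpc) ** 0.5
```

For instance, a coherence length 100 times smaller than the critical value tightens the lower limit on *B* by a factor of 10.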

Following Neronov and Vovk's [40] influential work, much attention has been given to this topic: new objects were used in the analyses and other observables were introduced. For instance, the MAGIC Collaboration obtained compatible results via the non-observation of pair haloes around Mrk 421 and Mrk 501 [21]. In [41], the authors analyzed gamma-ray observations of 1ES 0229+200, excluding *B* ≲ 10<sup>−15.5</sup> G. A more comprehensive study included additional sources (1ES 0347+121, 1ES 1101-232, RGB J0152+017, and PKS 0548-322) and showed that, if the emission by these objects is stable over a time scale T ∼ 10<sup>7</sup> yr, then, in general, *B* ≳ 10<sup>−15</sup> G [42].

A thorough analysis of Fermi-LAT and IACT observations of five blazars was performed in [48], excluding *B* ≲ 10<sup>−19</sup> G (for *LB* ≳ 1 Mpc) at a 5*σ* level, as indicated in Figure 9. These results are robust with respect to the choice of EBL model, the variability of the source (T), and the jet opening angle (Θjet). Moreover, the authors performed additional checks on the energy range of the Fermi-LAT data used in the analysis, demonstrating that the results are the same regardless of whether the dataset contains gamma rays with energies starting from 100 MeV or from 1 GeV. The significance of these results decreases slightly for other EBL models (Refs. [258,356]). Note that a more detailed treatment of the cascade interactions would increase the flux at lower energies, so these estimates are actually conservative.

The H.E.S.S. Collaboration [22] combined its own observations with those of Fermi-LAT. The absence of a detectable halo around PKS 2155-304 excludes 10<sup>−15.5</sup> ≲ *B*/G ≲ 10<sup>−14.5</sup> at a 99% C.L., assuming *LB* ≳ 1 Mpc. Constraints in a similar range (10<sup>−15.5</sup> ≲ *B*/G ≲ 10<sup>−14.5</sup>) were obtained by the VERITAS Collaboration [23], at a 90% C.L., from 1ES 1218+304. The constraints by both H.E.S.S. and VERITAS are shown in Figure 9. In principle, because the coherence length was assumed to be *LB* = 1 Mpc in both works, one could be tempted to extrapolate the conclusions to *LB* > 1 Mpc. However, because the objects used to derive the constraints (1ES 1218+304 and PKS 2155-304) show some intrinsic variability [357–359], care should be taken when extrapolating the bounds to larger values of the coherence length.

The Fermi-LAT Collaboration [47] compiled a catalogue of sources suitable for constraining IGMFs and performed a detailed analysis of this sample of blazars. They found no evidence of extended emission, either around individual objects or in the stacked analysis. This way, they could constrain the allowed values of the strength of IGMFs: *B* ≳ 10<sup>−16.5</sup> G for *LB* ≳ 10 kpc. This result is conservative, assuming T ∼ 10 yr. If this condition is relaxed, the bounds are even stronger: *B* ≳ 10<sup>−14</sup> G and *B* ≳ 10<sup>−12.5</sup> G, for T ∼ 10<sup>4</sup> yr and T ∼ 10<sup>7</sup> yr, respectively, as shown in Figure 9. While these limits were derived for a jet with half-opening angle Θjet ≤ 10◦ (see Figure 4), no misalignment was considered (Θlos = 0◦). These results were derived for a combination of sources which includes, among others, 1ES 0229+200 and 1ES 1218+304. Because there are indications that these sources could be variable [358,360], removing them from the analysis yields more conservative limits.

**Figure 9.** Compilation of some constraints found in the literature. Colored regions represent *excluded* regions of the parameter space, whereas non-filled regions bounded by a line indicate *allowed* regions. The regions shown in green are exclusions by Neronov and Vovk [40], Dermer et al. [340], and Finke et al. [48]. The purple regions are bounds derived by Fermi-LAT and Biteau [47], for different source activity times (T). The region labelled 'conservative' excludes from the analysis the blazars 1ES 0229+200 and 1ES 1218+304 (see text for details). Constraints by VERITAS [23] and H.E.S.S. [22] are shown as pinkish rectangles. The red rectangle corresponds to the 95% C.L. allowed region according to Essey et al. [252]. The orange lines demarcate the best-fit regions (68% C.L.) of the parameter space according to Alves Batista and Saveliev [50] for the EBL models labelled D11 [260] and the lower-limit S16l model [262]. Note that the regions plotted refer exclusively to the regions of the parameter space reported in the corresponding references, without extrapolations to higher/lower values of the coherence length. The grey region shows the combined excluded regions from Figure 1, obtained via other methods.

The time scale over which a given source emits gamma rays (T) influences the bounds one can derive. For instance, Ref. [43] analyzed VERITAS and Fermi-LAT data. The lower bounds they obtained for *LB* ≳ 1 Mpc were *B* ≳ 3 × 10<sup>−18</sup> G and *B* ≳ 2 × 10<sup>−16</sup> G, for source activity periods T ∼ 3 yr and T → ∞, respectively. The former is evidently more conservative, as it encompasses exclusively the period for which there are observations of the object (RGB J0710+591). Similar considerations about T were discussed in [340], which obtained *B* ≳ 10<sup>−18</sup> G for *LB* ≳ 1 Mpc, for 1ES 0229+200 (see Figure 9). These results are compatible, in order of magnitude, with the lower limits of Ref. [361], which are *B* ≳ 10<sup>−18</sup>–10<sup>−16</sup> G.

The flaring object Mrk 501 drew much interest for IGMF constraints, due to its variability (see, e.g., [21,23,47,362–367]). The prospects for detecting pair echoes from this object were studied in [368]. With an analysis of observations of its 2009 flare by MAGIC, VERITAS, and Fermi-LAT, the authors of [365] argued that its spectrum and time profile could be explained by IGMFs with *B* ∼ 10<sup>−17</sup>–10<sup>−16</sup> G (for *LB* ≳ 1 Mpc). Ref. [366] studied the pair echoes from this same blazar, concluding that *B* ≳ 10<sup>−20</sup> G, assuming *LB* ≳ 1 kpc, at a 90% confidence level. Using similar methods, Ref. [369] analyzed data from ARGO-YBJ and Fermi-LAT for Mrk 421, excluding *B* ≲ 10<sup>−20.5</sup> G for *LB* ≳ 1 kpc, at a 4*σ* level. The results of the latter analysis are particularly interesting because they make no assumptions about the intrinsic spectrum of the source during periods when it is not observed.

A more elaborate treatment of the cascade development was adopted in [44], in which Monte Carlo simulations were used to derive bounds on IGMFs for a sample of three blazars (1ES 0229+200, RGB J0710+591, and 1ES 1218+304). The limits obtained depend on the strategy adopted for the analysis: *B* ≳ 10<sup>−15</sup> G considering the absence of haloes, as observed by Fermi-LAT, and *B* ≳ 10<sup>−17</sup> G considering time delays.

An important factor that significantly affects IGMF estimates is the model of the EBL. Ref. [45] studied this dependence for the archetypical extreme blazar 1ES 0229+200. They found *B* ≳ 10<sup>−17</sup> G, a limit that can increase by nearly two orders of magnitude depending on the EBL model. In fact, the EBL is one of the main intrinsic uncertainties hindering the exclusion of the scenario *B* = 0, as argued in [370]. In this analysis, among the seven blazars considered, only one led to *B* > 0 irrespective of the choice of EBL model. However, the uncertainties in the intrinsic source spectrum and EBL model might be unrealistic, as noted in Refs. [58,59].

The analysis by Dolag et al. [46] is interesting because it employed magnetic fields obtained from cosmological magnetohydrodynamical simulations from [195]. These simulations are constrained, i.e., they roughly reproduce the observed distribution of large-scale structures up to hundreds of Mpc. At larger distances, this cosmological volume was replicated up to the distance of 1ES 0229+200. The authors showed that more than ∼60% of the Universe along the line of sight of this object has magnetic fields with strength *B* ≳ 10<sup>−16</sup> G. Interestingly, this analysis also showed that haloes can be used to probe the maximal energy of the gamma rays emitted by a source, *E*max (see Equation (9)). In fact, there is a considerable correlation between the values of *E*max that could be inferred from fits in the presence and in the absence of IGMFs [240].

Ref. [371] employed a Monte Carlo code to model the development of electromagnetic cascades initiated by GRBs. However, because GRBs had not been detected at TeV energies until recently, when MAGIC observed GRB 190114C [226], the authors extrapolated the GeV flux of GRB 130427A measured by Fermi-LAT up to TeV energies. If the extrapolation is correct, the lower limit obtained is *B* ∼ 10<sup>−17.5</sup> G (for *LB* ≳ 1 Mpc).

With the first observation of VHE emission from a GRB, it became possible to effectively constrain IGMFs with gamma-ray observations. By combining Fermi-LAT and MAGIC data, Ref. [228] obtained a lower limit of *B* ≳ 10<sup>−19.5</sup> G for *LB* ≳ 100 kpc. Ref. [227] performed a similar analysis for GRB 190114C using Monte Carlo simulations. They concluded that Fermi-LAT is not sensitive enough to detect the cascade signal from this GRB on time scales of one month. The discrepancy between these two works is due to a combination of factors. Firstly, the former employed a simpler semi-analytical method, whereas the latter performed detailed Monte Carlo simulations using the three-dimensional version of the Elmag code. Moreover, the authors of [227] reconstructed the intrinsic spectrum following [254], while Ref. [228] assumed a fixed spectral index *α* = 2 (for a power-law distribution ∝ *E*<sup>−*α*</sup>). Yet another difference is the treatment of the time information of the photons: while [227] accounted only for photons detected more than 62 s after the burst, [228] adopted 6 s. This issue is far from simple, as it requires knowledge of the inner workings of gamma-ray bursts. For further details, the interested reader is referred to the works on GRB 190114C by the MAGIC Collaboration [226,372].

### *4.2. Stacked and Diffuse Analyses*

It is rather difficult to observe magnetically-induced haloes around individual sources, as they are normally not bright enough to be detected [37,373–375]. Hence, more sensitive techniques are needed. Analyses of stacked samples of blazars can be useful for this purpose, since the signal-to-background ratio increases, easing the identification of any excess over the detector's PSF.

The authors of [376] performed a stacked analysis of 170 AGNs using 11 months of Fermi-LAT data. They claimed to have found an excess over the PSF of the detector at 0.5–0.8◦, at a 3.5*σ* level. Haloes of these sizes would be caused by *B* ∼ 10<sup>−15</sup> G (see Equation (17)). Nevertheless, it was later shown that these results could be attributed to instrumental effects associated with the different treatment of photons measured in different parts of the detector [377] (see also Ref. [378]). This effect was not included in the PSF used in Ref. [376], thus leading to an incorrect estimate of the strength of IGMFs.

The stacked analysis of Ref. [379] considered 24 selected high-synchrotron-peaked BL Lacs at *z* < 0.5. Using Fermi-LAT data, the authors found indications of extended emission, consistent with *B* ∼ 10<sup>−17</sup>–10<sup>−15</sup> G. However, an updated analysis using 12 objects of the same population found no compelling evidence for extended emission, with only a modest 2*σ* significance for *B* ∼ 10<sup>−15</sup> G [380].

Another stacked analysis, of a sample of 394 AGNs, 158 of which present flaring activity, was performed in Ref. [381]. Interestingly, the method employed considers temporal information of the sources by comparing the fluxes during quiescent states and during flaring periods. No evidence for pair haloes was found. The recent analysis by Fermi-LAT [47] corroborates this result, finding no indication of extended emission in the stacked source samples of high-synchrotron-peaked BL Lacs.

Using the method introduced in [382], Ref. [375] identified misaligned blazars in a catalogue of radio-loud AGNs and searched for pair haloes around the stacked sample of these objects. They showed that a magnetic field with *BL<sub>B</sub>*<sup>1/2</sup> ≳ 10<sup>−15</sup> G Mpc<sup>1/2</sup> would lead to specific halo anisotropy patterns that are not observed, thus providing an upper limit on the strength of IGMFs. Note, however, that the assumptions about the intrinsic properties of the considered sources are subject to uncertainties. Considering the lower limits derived in the works discussed above, the parameter space remaining available for IGMFs would be tiny or, considering the stronger constraints from the recent results by Fermi-LAT [47], nonexistent. The authors of [375] then conclude that, if there is indeed no room for IGMFs that can explain the observations, some other process might be at play that quenches the electromagnetic cascades. They claim that this could be due to, for instance, plasma instabilities (see Section 3.4). More recently, the same group claimed to have found convincing evidence for the non-existence of pair haloes. Using the same method, they exclude *B* ∼ 10<sup>−16</sup>–10<sup>−15</sup> G with *LB* > 100 Mpc at a 3.9*σ* level, and *B* ∼ 10<sup>−17</sup>–10<sup>−14</sup> G at 2*σ* [49] (see Figure 9).

An interesting idea to constrain IGMFs is to study their possible imprints on the diffuse gamma-ray background (DGRB) [383,384], even though the validity of the assumptions used in these analyses is unclear. The presence of IGMFs may suppress the measured lower-energy diffuse gamma rays. Interestingly, the authors of Ref. [383] claim that the observations by Fermi-LAT already disfavor the scenario with null IGMF. This agrees with [384], who found that the contribution of cascade gamma rays from blazars to the DGRB changes significantly in the presence of IGMFs with *B* ≳ 10<sup>−12</sup> G.

In the context of diffuse searches, it is important to keep in mind that, in addition to the uncertainties in the EBL models (see Figure 2), there are fluctuations correlated with the processes that produce the EBL photons. This leads to inhomogeneities in the EBL distribution that can affect the propagation of electromagnetic cascades. However, as shown in [385], this effect is small (≲1%), so it should have little impact on IGMF measurements using diffuse gamma-ray observations.

### *4.3. Bounds on the Coherence Length*

A method to measure the coherence length was suggested in Ref. [386]. In this case, the slope of the light curve of secondary gamma rays would provide an upper limit on *LB*. More specifically, the time dependence of the flux would be ∝ 1/√Δ*t<sub>B</sub>* for coherence lengths much larger than the mean free path for inverse Compton scattering (*LB* ≫ *λ*IC), and approximately constant if *LB* ≪ *λ*IC. Similarly, the angular profile of the haloes can also retain information about the coherence length: for *LB* ≫ *λ*IC, the surface brightness profile is roughly uniform, whereas for *LB* ≪ *λ*IC it decays as the angular distance to the centre of the source increases.

With the first multimessenger observation of high-energy neutrinos from the flaring blazar TXS 0506+056 in coincidence with electromagnetic radiation [387,388], Ref. [50] used the cascade signal, delayed with respect to the neutrino emission, to constrain IGMFs. The derived limits depend on the EBL model, such that the hypothesis of null IGMFs could only be rejected for two of the four models tested (Domínguez et al. [260] and the lower-limit model by Stecker et al. [262]). Interestingly, while the bounds are not robust, this work derived, *within the investigated parameter space*, limits on the coherence length of IGMFs for the first time: 30 kpc ≲ *LB* ≲ 300 Mpc, at a 90% C.L., as shown in Figure 9. Naturally, the significance of this result depends on the reliability of the neutrino–gamma-ray correlation and on the assumptions made, namely that the IGMF has a Kolmogorov power spectrum, and that the intrinsic spectrum of TXS 0506+056 during both the flaring and quiescent periods can be described by a power law with an exponential cut-off.

### *4.4. Constraints on the Magnetic Helicity*

There has been growing interest in probing the helicity of IGMFs, given its importance for understanding magnetogenesis. All-sky analyses of Fermi-LAT data employing the parity-odd correlators described in Section 3 found indications of IGMFs with *B* ∼ 10<sup>−14</sup> G at *LB* ∼ 10 Mpc and an overall negative (left-handed) helicity [278,280]. More recently, a re-analysis of a larger data set showed this result to be a fluctuation, stemming from a miscalculation of the statistical significance that neglected the look-elsewhere effect [287]. The same publication also claims that it is currently challenging to detect helicity, both in the fluxes of individual sources and in the diffuse gamma-ray background. In addition, the authors of [286] found no handedness when applying the *Q*-statistics to Fermi-LAT data, although they were unable to state definitively whether there is actually no handedness present or whether the *Q*-statistics is not sensitive enough to measure it.

The signatures of helical IGMFs on the shape of haloes are unique, with significant deviations from axial symmetry, as illustrated in Figure 8. Moreover, the sign of the helicity directly correlates with the handedness of the morphology of the arrival directions of gamma rays. In Ref. [281], the authors employed a semi-analytical method to show that spiral-like patterns are the natural shape of the arriving gamma rays for helical fields. Nevertheless, within this simple framework, IGMFs were assumed to be homogeneous, which is not realistic unless the coherence lengths involved are exceedingly large, comparable to the distance of the source. In the more realistic case of turbulent magnetic fields with coherence lengths possibly shorter than the distance from Earth to the gamma-ray source in question, the spiral pattern could vanish, being diluted into something closer to a typical axisymmetric halo. This was, indeed, observed in a more detailed study using three-dimensional Monte Carlo simulations [283]. This work, however, does support the measurability of the helicity of IGMFs for *LB* ≳ 50 Mpc, for sources at redshifts *z* ≲ 0.10.

### *4.5. Constraints from UHECR-Produced Gamma Rays*

In Section 3.1 the model proposed in Refs. [248,251] was presented. In this scenario, the flux of some extreme blazars could be attributed to cosmic-ray interactions along the line of sight. Essey et al. [252] constrained IGMFs considering that the observed gamma rays are a combination of those emitted by the blazars and those stemming from CR interactions. In this case, the combined limits from all three blazars analyzed favor 10<sup>−17</sup> ≲ *B*/G ≲ 10<sup>−14.5</sup>, at 95% C.L. This result is robust with respect to the choice of EBL model, and is also shown in Figure 9. Other authors have performed similar investigations (e.g., [255,389–392]).

Interestingly, within the UHECR-cascade framework, photons with *E* ≳ 10 TeV could be detected even if the sources are very distant (*z* ≳ 0.1). Nevertheless, for IGMFs with *B* ≳ 10<sup>−14</sup> G, significant deviations from a point-like flux would be expected due to magnetically-induced deflections, compromising any constraints that one could derive in the context of this hadronic model [252].

For blazars, the investigations of the role of line-of-sight interactions in gamma-ray measurements [248,249] were also shown to lead to time variabilities characteristic of specific magnetic-field properties, of the order of years for *B* = 10<sup>−15</sup> G [389]. Nevertheless, the variabilities cannot be too short, since even for weak IGMFs of the order of 10<sup>−18</sup> G, cascade photons with *E* = 10 GeV would be magnetically delayed by ∼10 yr [390]. In the purely leptonic scenario, this time scale is shorter by a factor of ten [250].

A detailed account of the effect of magnetic fields on both the electromagnetic cascades and their progenitor UHECRs was presented in [255]. Using three-dimensional simulations of the magnetized cosmic web from [195] and detailed numerical methods, the authors found that the cascade broadening could be detected with next-generation gamma-ray telescopes, and possibly with some of those in operation today.

Note that the propagation of cosmic rays is not trivial. There are many uncertainties involved (see, e.g., [393,394] for a discussion), which might compromise predictions for the production of gamma rays. Moreover, depending on the location of the blazar in the cosmic web, local magnetic fields (e.g., in filaments) might significantly deflect cosmic rays away from the line of sight [195–198] (see the discussion at the end of Section 2).

### *4.6. Prospects for Measurements of IGMFs*

From the discussion so far, a general picture of IGMFs emerges, wherein gamma rays play a fundamental role in excluding part of the parameter space shown in Figure 1, as summarized in Figure 9. It is important to bear in mind that many factors could compromise the derived limits shown in the latter figure, including uncertainties in the intrinsic source spectrum and its possible variability, our knowledge of the EBL, the distribution of magnetic fields in the Universe, and the contribution of a putative hadronic component to the cascade. Moreover, plasma instabilities may quench electromagnetic cascades, although this effect may be minor. The central question that arises is, therefore, whether the next generation of gamma-ray telescopes, ground- or space-based (or a combination of both), will be able to *unambiguously* detect IGMFs. In this subsection we briefly revisit the theory that can be directly connected to the experiments. In particular, we highlight the requirements for next-generation detectors to be sensitive enough to detect IGMFs.

In general, the detection of haloes depends on two factors. First, the size of the extended emission should be such that it is fully contained within the field of view (FoV) of the detector, of size *θ*fov. Second, this extension must exceed the angular resolution of the detector (*θ*psf). In other words, the signal can be observed if the PSF and FoV of the instruments satisfy *θ*psf < *θ*obs < *θ*fov.
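The chain of estimates behind these conditions can be made concrete with a short numerical sketch. The normalizations below are standard *z* ≈ 0 order-of-magnitude values (inverse-Compton cooling length ≈ 0.37 Mpc at 1 TeV; Larmor radius ≈ 1.1 Gpc at 1 TeV in a 10<sup>−18</sup> G field); the pair-production mean free path and source distance are left as free inputs, and the result is meaningful only while the deflection stays small. This is a transparent stand-in for, not a reproduction of, the review's Equation (17), and all function names are ours:

```python
import math

def halo_extension_deg(B_gauss, L_B_mpc, E_gamma_gev,
                       lambda_gg_mpc, D_s_mpc):
    """Order-of-magnitude angular size of a pair halo from the
    small-deflection geometry (valid only while delta << 1 rad).

    B_gauss        IGMF strength [G]
    L_B_mpc        coherence length [Mpc]
    E_gamma_gev    observed cascade photon energy [GeV]
    lambda_gg_mpc  pair-production mean free path of the primary [Mpc]
    D_s_mpc        distance to the source [Mpc]
    """
    # Electron energy whose IC-upscattered CMB photons emerge at
    # E_gamma:  E_gamma ~ 3.2 GeV x (E_e / 1 TeV)^2
    E_e_tev = math.sqrt(E_gamma_gev / 3.2)
    D_e = 0.37 / E_e_tev                        # IC cooling length [Mpc]
    R_L = 1.1e3 * E_e_tev / (B_gauss / 1e-18)   # Larmor radius [Mpc]
    if L_B_mpc >= D_e:                          # coherent deflection
        delta = D_e / R_L
    else:                                       # random-walk deflection
        delta = math.sqrt(D_e * L_B_mpc) / R_L
    return math.degrees(delta * lambda_gg_mpc / D_s_mpc)

def halo_observable(theta_obs_deg, theta_psf_deg, theta_fov_deg):
    """Detectability window quoted in the text: theta_psf < theta_obs < theta_fov."""
    return theta_psf_deg < theta_obs_deg < theta_fov_deg
```

For illustrative inputs *B* = 10<sup>−16</sup> G, *LB* = 1 Mpc, 10 GeV cascade photons, *λ*γγ = 100 Mpc and a source at 600 Mpc, this yields *θ*obs ∼ 0.1◦, which would fall inside the observability window of an instrument with *θ*psf ≈ 0.05◦ and a FoV of a few degrees.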

Current-generation IACTs (VERITAS, MAGIC, H.E.S.S.) can resolve scales of ∼0.08◦, with a typical FoV of 6◦. The upcoming Cherenkov Telescope Array [395] will reach angular resolutions as high as ∼0.02◦ with a field of view of 20◦, improving the possible constraints on IGMFs [396–399]. For instance, 50 hours of observations of the blazar 1ES 0229+200 could be used to probe magnetic-field strengths of *B* ∼ 10<sup>−13.5</sup> G at a 5*σ* level, for *LB* ≳ 1 Mpc [400]. In particular, with angular resolutions of 0.13◦ at *E* ≈ 100 GeV, a combination of CTA and Fermi-LAT observations could also be used to probe magnetic helicity [283], although it is not clear whether any measurable signal could be extracted [287].

Figure 10 shows the region of the parameter space that can be probed with the halo strategy. Note that, while the PSF must necessarily be smaller than the size of the halo for any extended emission to be identified, the condition *θ*fov > *θ*obs is not a strict requirement. Nevertheless, if the halo is not entirely contained within the field of view of the instrument, it becomes difficult (though not impossible) to reconstruct the image, due to uncertainties stemming from the reconstruction procedure and from the motion of the telescope required to scan the region surrounding the source. Typically, IACTs have their best angular resolution near the centre of the FoV, degrading radially outwards.

**Figure 10.** This figure shows the typical size of the extended emission (*θ*obs) for different combinations of *B* and *LB*, for gamma rays of energy 10 GeV (*left* panel) and 100 GeV (*right* panel). This example was calculated using the approximation given by Equation (17) [161]. The source is assumed to be located at a distance corresponding to redshift *z* = 0.10.

Figure 10 was obtained using simplifying assumptions, in particular Equation (17), derived in [161]. If these estimates were improved using detailed Monte Carlo simulations and instrument response functions were accounted for, then the picture could change slightly. Nevertheless, a recent work by the CTA Consortium [400] using simulations obtained with the CRPropa code is in qualitative agreement with Figure 10.

More generally, IGMF constraints based on gamma-ray observations employing the halo strategy depend on the point-source sensitivity of the instruments, shown in Figure 11. A simple comparison with the simulations of Figure 7 demonstrates that instruments like CTA will be able to probe IGMFs stronger than ∼10<sup>−14</sup> G, as shown in Ref. [400]. The sensitivities shown in Figure 11 are a useful guide for a first assessment of the instrumental capabilities in IGMF studies through simple comparisons with theoretical expectations (Figure 7). Nevertheless, there are multiple conceivable ways to probe IGMFs with halo searches. The simplest one is the direct search for extended emission, as discussed in the preceding paragraphs, but one could also employ methods involving, for example, fitting the halo profile and comparing it with the background. This would lead to differences in the sensitivity curves, as discussed in detail in Ref. [396] for the case of CTA.

It is worth stressing that facilities operating at slightly higher or lower energies can play an important role in this type of study, despite seldom being considered in this context. They can be used to constrain putative PeV gamma rays as well as cascade photons in the GeV band. Current and upcoming facilities operating at higher energies, like LHAASO [8] and the planned Southern Wide-field Gamma-ray Observatory (SWGO) [401], formerly known as the Southern Gamma-ray Survey Observatory (SGSO), can help in the precise determination of the intrinsic spectrum of the sources and consequently lead to better models. Observatories such as the planned e-ASTROGAM [402] and the All-sky Medium Energy Gamma-ray Observatory (AMEGO) [403] can detect secondary (cascade) photons in the MeV–GeV band, thus providing additional insights into IGMFs. For the extreme blazars with hard spectra, in particular (see the discussion in Section 3.1), this will ultimately reduce the uncertainties when constraining IGMFs. Interestingly, observations around GeV energies may also probe spectral features expected from some plasma instability models (e.g., [302]; see also Section 3.4).

**Figure 11.** Comparison of the point-source sensitivities of various gamma-ray observatories. The Fermi-LAT band encompasses sources at various positions in the sky, for the P8R3\_SOURCE\_V2 instrument response function [404]. The sensitivities for the IACTs, namely VERITAS [405], MAGIC [3], H.E.S.S. [406], and CTA [395], are given for 50 h of observations. For SWGO [401] and LHAASO [8], the curves shown are for 5 years and 1 year, respectively. One year is also the observation time used to derive the sensitivity for AMEGO [403]. For HAWC [407], the curve corresponds to 507 days (approximately 3000 hours) of observations. The thick black lines correspond to the simulations from Figure 7. Note that the instrument response functions of each detector are *not* folded into the simulations; the corresponding sensitivities are shown here just for the sake of comparison.

All currently operating instruments can resolve short-duration events from sources at distances closer than *z* ∼ 1, probing magnetic fields with strengths *B* ∼ 10<sup>−17</sup> G for *LB* ∼ 1 Mpc; note that the exact value of *B* that can be probed depends on the distance to the source. For stronger magnetic fields, however, it becomes difficult to detect time delays if they are larger than a few years or a decade. In fact, according to Equation (18), the expected time delay for 10 GeV gamma rays assuming *B* ∼ 10<sup>−17</sup> G and *LB* ∼ 100 kpc would already be Δ*tB* ∼ 100 yr, posing obstacles for measurement within a reasonable time window of a few decades.
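The magnitude of such delays can be illustrated with a back-of-the-envelope geometric estimate (this is *not* Equation (18), which is not reproduced here): a secondary photon produced at a distance λ from the source and deflected by a small angle δ travels an extra path of order λδ²/2, arriving late by Δt ≈ λδ²/(2c). The distances and deflection angles below are hypothetical inputs, chosen only to show the quadratic growth of the delay with δ:

```python
# Illustrative geometric-delay sketch; inputs are hypothetical, not from the text.
C_CM_S = 2.998e10   # speed of light [cm/s]
MPC_CM = 3.086e24   # 1 Mpc in cm
YR_S = 3.156e7      # 1 year in seconds

def geometric_delay_yr(lam_mpc: float, delta_rad: float) -> float:
    """Arrival delay (years) for a photon deflected by angle `delta_rad`
    at a distance `lam_mpc` (Mpc) from the source: dt = lam * delta^2 / (2c)."""
    return lam_mpc * MPC_CM * delta_rad**2 / (2.0 * C_CM_S) / YR_S

# The delay grows quadratically with the deflection angle, so stronger IGMFs
# (larger deflections) quickly push dt beyond an observable window of decades.
for delta in (1e-5, 1e-4, 1e-3):
    print(f"delta = {delta:.0e} rad -> dt ~ {geometric_delay_yr(100.0, delta):.2g} yr")
```

For a (hypothetical) production distance of 100 Mpc, δ = 10⁻⁴ rad gives a delay of order a year, while δ = 10⁻³ rad already gives of order a century, mirroring the observability problem described above.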

Figure 12 shows the region of the parameter space that can be probed with the time-delay strategy. It is clearly favorable for probing the region of the parameter space corresponding to weaker magnetic fields (compared to Figure 10). This particular example is for a source at redshift *z* = 0.42, the same as GRB 190114C [226].

In a recent work [367], the prospects for measuring strong IGMFs (*B* ∼ 10<sup>−12</sup> G) were analyzed using the constrained cosmological simulations of the cosmic web from Ref. [408], based on gamma-ray observations of both Mrk 421 and Mrk 501. The authors argue that, at least for the latter object, IGMFs with *B* ∼ 10<sup>−12</sup>–10<sup>−11</sup> G and *LB* ≳ 10 kpc could be measured in the energy range between 1 and 10 TeV via halo searches. Such strong IGMFs could, in principle, be invoked to resolve the Hubble tension [61].

**Figure 12.** This figure shows the typical size of the extended emission (*θ*<sub>obs</sub>) for different combinations of *B* and *LB*, for gamma rays of energy 10 GeV (*left* panel) and 100 GeV (*right* panel). This example was calculated using the approximation given by Equation (18) [161]. The source is assumed to be located at a distance corresponding to redshift *z* = 0.42.

### **5. Outlook**

Following in the footsteps of pioneering ground-based gamma-ray detectors, in particular IACTs like the Whipple Observatory and HEGRA, currently operating facilities such as H.E.S.S., VERITAS, and MAGIC have made outstanding progress in studying the VHE universe. Complemented by space-borne detectors like Fermi-LAT and AGILE at energies below ∼100 GeV, and by ground-based particle detectors such as HAWC, Tibet-AS*γ*, and ARGO-YBJ at higher energies (∼100 TeV), we have in recent years made significant progress towards understanding the Universe at high energies. At the dawn of the multimessenger era, the discovery potential of ground-based gamma-ray facilities can be maximized by working with other observatories across the whole electromagnetic spectrum, as well as with partners measuring cosmic rays, neutrinos, and gravitational waves. In this context, joint studies through multimessenger networks such as the Astrophysical Multimessenger Observatory Network (AMON) [409] are extremely useful to orchestrate campaigns of follow-up observations. It is through the coordinated efforts of many of these facilities that we can pave new roads to fully exploit the potential of gamma rays as probes of cosmology and fundamental physics (e.g., Lorentz invariance violation, axion-like particles; see Section 3.5). Within this landscape, we identify a unique opportunity for measuring IGMFs using gamma rays as well as other messengers.

Many challenges lie ahead in the coming decade. Firstly, it is possible that next-generation IACTs like CTA will still not be sensitive enough to enable measurements of magnetically-induced haloes. This limitation certainly extends to other ground-based instruments, given the lower angular resolution of water-Cherenkov detectors compared to IACTs. Secondly, there are theoretical issues that need to be addressed, including the issue of plasma instabilities (see Section 3.4). Moreover, future studies should start relying on more detailed magnetic-field models, capturing also the magnetization of the cosmic web wherein the gamma-ray sources used as "lighthouses" to probe the cosmos are embedded. We are entering an era of precision measurements and, therefore, also require more accurate tools to model the three-dimensional propagation of electromagnetic cascades if we wish to exploit the data as much as possible. Finally, there is room for novel methods to be devised to measure IGMFs, involving, among other messengers, gamma rays.

New insights into cosmic magnetism will be obtained with the Square Kilometre Array (SKA) [410,411]. Through measurements of the Faraday rotation of polarized extragalactic sources (e.g., FRBs, GRBs, quasars), SKA will deliver a tomographic map of extragalactic magnetic fields, disentangling part of the IGMF component and offering clues on the structure and evolution of IGMFs [412].

Figures 1 and 9 neatly summarize the space of parameters for IGMFs allowed by measurements. However, the landscape of cosmic magnetism is more complex than this simple two-dimensional parameter space. Besides the magnetic-field strength (*B*) and the coherence length (*LB*), IGMFs may be helical, such that a third dimension (*HB*) should be added to this plot. Moreover, the magnetic power spectrum (*αB*) can also play a role in the development of electromagnetic cascades, adding a fourth dimension. It is manifestly difficult to scan over all these parameters (*B*, *LB*, *HB*, and *αB*) simultaneously. Still, these caveats should be borne in mind when constraining IGMFs, since there might be degeneracies. In this context, observations of gamma-ray sources can play an important role, given their ability to probe all these parameters. Nevertheless, besides more sensitive gamma-ray observatories, theoretical efforts in this direction are needed.

With the promising prospects for measuring IGMFs using next-generation gamma-ray observatories, we can invert the reasoning presented in Section 2.3: from the measurements, assuming IGMFs have a cosmological origin, we could constrain certain aspects of cosmology by inferring the specific parameters that characterize them. In fact, all the IGMF parameters mentioned in the previous paragraph may, in principle, be used for this purpose. With the measurement of *B*, one directly obtains the overall energy content of IGMFs. Like any other form of energy permeating the Universe, this would have an immediate impact on its global evolution, such that it could be necessary to consider this contribution as an addition to the standard ΛCDM model. Moreover, measurements of both the spectral index (*αB*) and the coherence length (*LB*) could be used to constrain the major processes from which IGMFs may originate, such as inflation, the QCD phase transition (QCDPT), and the electroweak phase transition (EWPT). In the case of a phase transition, in particular, these measurements could allow us to infer its order. Finally, the measurement of a non-zero magnetic helicity (*HB*) would strongly hint at a general CP violation in the Universe, with clear implications for various aspects of particle cosmology.

In summary, it is fair to say that gamma rays represent a unique observational window into cosmic magnetism. With the advances in gamma-ray astronomy, we could already capitalize on this window of opportunity to better understand IGMFs and to start constraining the *B*–*LB* parameter space. In the coming decades, the next generation of instruments might improve our understanding of cosmic magnetism more than ever, probing magnetism at cosmological scales and providing us with a glimpse into the mechanisms whereby magnetic fields originated.

**Author Contributions:** Both authors contributed equally to the writing. Conceptualization was done by R.A.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** R.A.B. is currently funded by the Radboud Excellence Initiative. The work of A.S. is supported by the Russian Science Foundation under grant no. 19-71-10018.

**Acknowledgments:** We are grateful to Karsten Jedamzik, Michael Kachelrieß, and Tanmay Vachaspati for valuable comments and suggestions which helped us to improve the quality of this review.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this manuscript:


### **References**


### *Review* **Astrophysical Neutrinos and Blazars**

**Paolo Giommi 1,2,3,\*,† and Paolo Padovani 4,5,\*,†**


**Abstract:** We review and discuss recent results on the search for correlations between astrophysical neutrinos and *γ*-ray-detected sources, with many extragalactic studies reporting potential associations with different types of blazars. We investigate possible dependencies on blazar sub-classes by using the largest catalogues and all the multi-frequency data available. Through the study of similarities and differences in these sources we conclude that blazars come in two distinct flavours: LBLs and IHBLs (low-energy-peaked and intermediate-high-energy-peaked objects). These are distinguished by widely different properties such as the overall spectral energy distribution shape, jet speed, cosmological evolution, broad-band spectral variability, and optical polarisation properties. Although blazars of all types have been proposed as neutrino sources, evidence is accumulating in favour of IHBLs being the counterparts of astrophysical neutrinos. If this is indeed the case, we argue that the peculiar observational properties of IHBLs may be indirectly related to proton acceleration to very high energies.

**Keywords:** high-energy gamma-ray astrophysics; relativistic astrophysics; astroparticle physics

### **1. Introduction**

The discovery of ultra-high energy cosmic rays (UHECRs) established the existence of powerful cosmic accelerators capable of reaching energies millions of times larger than those that can be achieved by the best accelerators on Earth (see ref. [1] and references therein). The interaction of very high-energy cosmic rays (CRs) with matter or radiation in astrophysical contexts results in the production of neutral and charged mesons, which then decay into *γ*-rays, high-energy neutrinos and energetic particles, which lose energy in a variety of ways. Neutrinos, *γ*-rays (of energy ≲ 1 TeV, if not absorbed inside the emitting region or in the host galaxy), and the electromagnetic radiation emitted by secondary particles are the 'messengers' that can reach the Earth undeflected from cosmological distances, providing information about the Universe at extreme energies.

A flux of high-energy neutrinos likely generated in these environments has been detected by the IceCube Observatory (https://icecube.wisc.edu, accessed on 11 December 2021) at the South Pole with energies extending beyond 1 PeV and an energy flux comparable to that observed in the *γ*-ray band [2–4]. This facility, however, cannot provide an accurate estimate of the arrival directions of the incoming neutrinos and is not sensitive enough to ensure firm detections of point sources with multiple events. Nevertheless, the absence of any significant anisotropy in the arrival direction of the neutrinos points to a flux that is mostly of extragalactic origin [5]. A number of astrophysical source types have been suggested to be responsible for the observed signal. In particular blazars, long-known as

**Citation:** Giommi, P.; Padovani, P. Astrophysical Neutrinos and Blazars. *Universe* **2021**, *7*, 492. https:// doi.org/10.3390/universe7120492

Academic Editor: Fridolin Weber

Received: 12 November 2021 Accepted: 9 December 2021 Published: 13 December 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

efficient cosmic accelerators, have been proposed as likely sources of high-energy neutrinos (Refs. [6–8] and references therein).

In this early phase of multi-messenger astronomy, where the available instrumentation is limited in precision and sensitivity, unambiguous associations would greatly benefit from the detection of high-energy neutrinos together with enhanced activity in some parts of the electromagnetic spectrum of the astrophysical counterpart [9,10]. This condition, however, requires that hadronic-related electromagnetic flares are strong enough to outshine the non-thermal radiation generated via different mechanisms, e.g., from accelerated electrons radiating in magnetic fields in the so-called leptonic scenarios [11–13]. One such association in space and time was described in ref. [14], which reported a large flare in the *γ*-ray and other parts of the spectrum from the blazar PKS 1424−418 in correspondence with the detection of a ∼2 PeV IceCube neutrino. The positional uncertainty of this event (15.9◦, 50% radius), however, is so large that the a posteriori probability of a chance coincidence was estimated to be about 5%, too large for an unambiguous association.

So far only one astronomical object has been associated with a high-energy astrophysical neutrino with a significance larger than 3*σ*. This is the blazar TXS 0506+056, also known as 5BZB J0509+0541, located in the relatively small uncertainty region of the neutrino IceCube-170922A, detected when the source was undergoing a *γ*-ray flare. This result was further strengthened by the discovery of an excess of several lower energy neutrinos from the same direction in a subsequent analysis of IceCube archival data [15,16]. This so-called "neutrino flare" occurred when TXS 0506+056 was in a period of low *γ*-ray activity, suggesting that the relationship between astrophysical neutrinos and *γ*-ray emission is not straightforward.

### **2. Blazars**

Blazars are the most powerful sources of persistent non-thermal radiation in the Universe [17]. They are a special and rare type of Active Galactic Nuclei (AGN, [18]) with unique characteristics such as the emission of highly variable radiation over the entire electromagnetic spectrum. The most peculiar aspect about blazars is that this radiation originates within a relativistic jet that moves away from the central supermassive black hole and is oriented in the direction of the Earth. This is the very special condition that is responsible for the distinctive features of these sources such as superluminal motion and fast variability (see refs. [17,19,20]).

A few thousand blazars are known. Although these sources have been mostly discovered in the radio, X-ray or *γ*-ray bands, they are conventionally sub-classified depending on their optical spectrum as Flat-Spectrum Radio Quasars (FSRQs) and BL Lacertae objects (or BL Lacs), with FSRQs showing broad emission lines like radio-quiet Quasi-Stellar Objects (QSOs), and BL Lacs being featureless or at most displaying very weak emission lines [20]. All blazars emit across the entire electromagnetic spectrum with an energy distribution that, in *ν*f(*ν*) vs. *ν* space, displays two broad humps, one attributed to synchrotron radiation that rises from radio frequencies and peaks between the far IR and the X-ray band, and a second, more energetic one due to inverse Compton or other mechanisms, that peaks in the low or high-energy *γ*-ray band [21]. See ref. [22] for examples of different types of spectral energy distributions (SEDs) including large amounts of multi-frequency data. The peak energy and the relative intensity of the two humps span a wide range of values and have been used to classify blazars depending on the distribution of their power output. On the basis of the rest-frame frequency of their low-energy hump (*ν*<sup>S</sup><sub>peak</sub>), blazars are classified into low- (LBL/LSP: *ν*<sup>S</sup><sub>peak</sub> < 10<sup>14</sup> Hz), intermediate- (IBL/ISP: 10<sup>14</sup> Hz < *ν*<sup>S</sup><sub>peak</sub> < 10<sup>15</sup> Hz), and high-energy (HBL/HSP: *ν*<sup>S</sup><sub>peak</sub> > 10<sup>15</sup> Hz) peaked sources, respectively [23,24]. Extreme blazars are defined by *ν*<sup>S</sup><sub>peak</sub> > 2.4 × 10<sup>17</sup> Hz (see ref. [25] for a recent review). In this paper we use the nomenclature LBL, IBL, and HBL as in ref. [22], which is an extension of the codification originally defined in ref. [23] applied to all blazar types, FSRQs and BL Lacs.
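The sub-classification by synchrotron peak frequency can be summarised in a short sketch (the thresholds are those quoted in the text; the function name, the input convention, and the treatment of extreme blazars as a separate label are our own choices, not from the paper):

```python
# Hypothetical helper encoding the nu_peak^S thresholds quoted in the text.
def classify_blazar(nu_peak_hz: float) -> str:
    """Classify a blazar by its rest-frame synchrotron peak frequency (Hz)."""
    if nu_peak_hz > 2.4e17:
        return "extreme"   # extreme blazars: nu_peak^S > 2.4e17 Hz
    if nu_peak_hz > 1e15:
        return "HBL"       # high-energy peaked (HBL/HSP)
    if nu_peak_hz > 1e14:
        return "IBL"       # intermediate (IBL/ISP)
    return "LBL"           # low-energy peaked (LBL/LSP)

print(classify_blazar(5e13))   # LBL
print(classify_blazar(3e14))   # IBL
print(classify_blazar(1e16))   # HBL
print(classify_blazar(1e18))   # extreme
```

Note that extreme blazars are, by definition, a subset of HBLs; the sketch simply reports the most specific label.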

Although extremely rare among optical sources, blazars are by far the most common type of objects detected by *γ*-ray telescopes outside the Galactic plane [26–28]. Given that neutrino production is inevitably associated with the generation of *γ*-rays, this peculiarity makes blazars natural candidate neutrino sources. From an observational perspective the situation is however complex since *γ*-rays can also be generated in purely leptonic contexts (e.g., synchrotron self-Compton or external inverse Compton [12,29]), and *γ*-rays associated to neutrino production could be completely or largely absorbed before they leave the emission region.

### **3. Astrophysical Neutrinos and Blazars**

Blazars have long been considered as probable sources of astrophysical neutrinos [30,31], but statistical searches for neutrino emitters have become possible only recently, as sufficient data has been accumulated, mostly from IceCube observations. Investigations of this type have been conducted largely by comparing neutrino arrival directions with lists of bright radio or *γ*-ray sources or with catalogues of well-known blazars and other types of AGN. A search for point sources in the IceCube 10-year data collection, which includes events with energy typically ≳ 1 TeV, found an excess of neutrinos at the 2.9*σ* level from the direction of the local (*z* = 0.004) Seyfert 2 galaxy NGC 1068 and a 3.3*σ* excess in the northern sky due to significant *p*-values in the directions of NGC 1068 and three blazars: TXS 0506+056, PKS 1424+240, and GB6 J1542+6129 [4]. A similar work based on the database of the ANTARES neutrino observatory did not lead to the firm detection of point-like sources, with the most prominent excesses found in the directions of the radio galaxy 3C 403 and the blazar MG3 J225517+2409 [32].

In the radio domain, recent papers considered a number of data sets, including a complete sample of 8 GHz sources selected from the very long baseline interferometry (VLBI)-based Radio Fundamental Catalogue (RFC), and the list of blazars monitored at various radio frequencies by the MOJAVE project and at the Owens Valley, the Metsähovi, and RATAN-600 Radio Observatories [33–36]. Possible correlations between radio sources and IceCube neutrinos have been reported in ref. [33], where it is concluded that four radio-bright blazars, namely 3C 279, NRAO 530, PKS 1741−038, and PKS 2145+067, all LBLs, are highly probable high-energy (E > 200 TeV) neutrino emitters. However 3C 279, the brightest and the only one of these blazars that is included in the list of objects used for the point-source search in the 10-year IceCube data set, is not listed among the objects considered as likely neutrino emitters [4]. In addition, a new analysis based on the same list of 3411 bright radio sources and the updated sample of 10 years of IceCube data [37] did not confirm the results of ref. [34]. A recent investigation aimed at the detection of flaring neutrino emission in the 10-year IceCube data set [38] reported a time-dependent neutrino excess in the northern hemisphere at the level of 3*σ* associated with four sources: M87, a giant radio galaxy, TXS 0506+056 and GB6 J1542+6129, two IBL blazars, and NGC 1068. As noted in ref. [39], M87, which has *γ*-ray properties similar to those of HBL objects, is also the possible counterpart of the IceCube-141126A track-like event.

The most robust statistical evidence so far for an association between astrophysical neutrinos and blazars remains the case of TXS 0506+056 [9,15,16]. This IBL/HBL-type blazar is located well within the ∼0.5◦ (90% containment) error region of the ∼0.3 PeV event IceCube-170922A, which was detected in correspondence with enhanced *γ*-ray activity from the source. This already significant result was further strengthened by the detection of a 3.5*σ* excess from the same direction found in a subsequent search of IceCube archival data [15,16]. The corresponding 13 ± 5 neutrinos were detected between September 2014 and March 2015, when TXS 0506+056 was not undergoing strong *γ*-ray activity, implying that the relationship between astrophysical neutrinos and *γ*-ray emission is likely complex. A similar, although less striking, event is the case of 3HSP J095507.9+355101, a blazar of the HBL type that is located in the error region of the highly energetic neutrino IceCube-200107A and was detected during a strong X-ray flare [40].

To find similar cases in existing neutrino data, and using all the available multi-frequency information, Giommi et al. carried out a systematic, detailed study of 70 public IceCube high-energy neutrino tracks that are well reconstructed and away from the Galactic plane [39].

This effort led to a 3.2*σ* (post-trial) correlation excess with *γ*-ray-detected IBLs and HBLs, while no excess was found for LBL blazars. This result, together with previous findings, consistently points to growing evidence for a connection between some IceCube neutrinos and IBL and HBL blazars. Moreover, several of the 47 IBLs and HBLs listed in Table 5 of ref. [39] are expected to be new neutrino sources waiting to be identified. Further progress requires optical spectra, which are needed to measure the redshift, and hence the luminosity of the source, determine the properties of the spectral lines, and possibly estimate the mass of the central black hole, *M*BH; this is the purpose of "The Spectra of IceCube Neutrino (SIN) candidate sources" project.

In this framework Paiano et al. [41] presented the spectroscopy of a large fraction of the objects selected in ref. [39], which, together with results taken from the literature, covered ∼80% of that sample. This was the first publication of the SIN project, whose aim is threefold: (1) to determine the nature of these sources; (2) to model their SEDs using all available multi-wavelength data and subsequently the expected neutrino emission from each blazar; (3) to determine the likelihood of a connection between the neutrino and the blazar using a physical model for the blazar multi-messenger emissions, as done, for example, in refs. [42,43]. In the second SIN paper [44], the sources studied in ref. [41] have been characterised to determine their real nature, and also to see if these sources are any different from the rest of the blazar population. Of particular relevance here are the so-called "masquerading" BL Lacs. Padovani et al. [45] showed, in fact, that TXS 0506+056, the first plausible non-stellar neutrino source, is, despite appearances, *not* a blazar of the BL Lac type but instead a masquerading BL Lac. This class was introduced in refs. [46,47] (see also ref. [48]) and includes sources which are in reality FSRQs whose emission lines are washed out by a very bright, Doppler-boosted jet continuum, unlike "real" BL Lacs, which are *intrinsically* weak-lined objects. This is relevant because "real" BL Lacs and FSRQs belong to different physical classes, i.e., objects *without* and *with* high-excitation emission lines in their optical spectra, referred to as low-excitation (LEGs) and high-excitation galaxies (HEGs) [18]. TXS 0506+056, being a HEG, therefore benefits from several radiation fields external to the jet (i.e., the accretion disc, photons reprocessed in the broad-line region or from the dusty torus), which, by providing more targets for the protons, might enhance neutrino production as compared to LEGs.
This makes masquerading BL Lacs particularly attractive from the point of view of high-energy neutrinos. Padovani et al. [44] have found that the sample considered in [39] has a fraction > 25% and possibly as high as 80% of masquerading sources, which is tantalizing.

### **4. Blazars of Different Types**

In the previous section we have shown that experimental evidence is accumulating in favour of blazars being likely neutrino counterparts. Motivated by the possibility that some specific blazar sub-classes may play a role in the emission of astrophysical neutrinos, in this section we use the best available samples and multi-frequency data to review similarities and differences among blazar sub-types.

By definition, LBLs, IBLs and HBLs only differ by the value of *ν<sup>S</sup>* peak in their SEDs. This spread could simply reflect the maximum energy at which particles are accelerated or could be due to other physical processes that determine the shape of the SED.

### *4.1. Blazar Samples*

At present the largest available lists of confirmed blazars are the following:


iii. The 4LAC-DR2 sample of *γ*-ray-selected AGN [51,52], which includes over 3500 blazars of all types, many of which also included in the BZCAT and 3HSP catalogues.

### A Sample of IBL Blazars

No well-defined flux-limited sample of IBL blazars currently exists. The only large list available is the subset of *Fermi* 4FGL-DR2 *γ*-ray sources classified as ISPs in the 4LAC-DR2 paper [51,52]. The SED classification of 4LAC-DR2 is, however, not always reliable, especially for IBL sources since at IR/optical frequencies, where IBLs peak, there are emission components not related to the jet, such as the host galaxy or the blue bump, that can sometimes lead to large inaccuracies in the estimation of *ν*<sup>S</sup><sub>peak</sub>. To assemble a large and accurately determined sample of *γ*-ray-detected IBL blazars we have built the SED of a large fraction of *Fermi* 4FGL-DR2 blazars through the Open Universe VOU-Blazar tool V1.94 [53], which uses 71 multi-frequency catalogues and spectral databases. In particular, we have considered the following samples: (a) all sources classified as ISPs in the 4LAC-DR2 catalogue; (b) blazars classified as HSPs in 4LAC-DR2 that are not included in the 3HSP sample [50]; (c) all sources classified as LSPs in 4FGL-DR2 with *ν*<sup>S</sup><sub>peak</sub> > 10<sup>13</sup> Hz. A careful visual inspection of the SED of each candidate allowed us to remove the components that can be related to non-jet emission. A fit to the average multi-frequency non-thermal components led to a sample including 482 sources with *ν*<sup>S</sup><sub>peak</sub> between 10<sup>14</sup> and 10<sup>15</sup> Hz, and radio flux densities ranging from a few mJy to over 6 Jy at 1.4 GHz. Particular care was also taken in the verification of the redshift of each object, as in many cases this parameter was estimated by automatic software that was run on a large number of optical spectra. In some cases warning flags generated by the code have not been taken into account, and wrong or inaccurate redshifts have been included in on-line sites and reported in the literature. Our verification was done by visually inspecting published optical spectra, or online SDSS-DR16 spectra, when available. This was useful and necessary since a number of sources with reported medium-high redshift values in fact showed featureless optical spectra.

Note that this sample of IBL blazars will evolve somewhat in the future depending on the availability of additional multi-frequency data. The table including the details of each IBL blazar in the sample will be made available via the Open Universe platform (https://openuniverse.asi.it, accessed on 11 December 2021). By construction, this is a *γ*-ray flux-limited sample. However, considering the observed range of radio to *γ*-ray flux ratios for IBLs and the sensitivity of the 4FGL-DR2 catalogue, we expect it to include nearly all IBL blazars with radio flux densities ≳ 150–200 mJy.

### *4.2. LBLs vs. IBLs vs. HBLs*

To investigate possible intrinsic differences between LBL, IBL, and HBL blazars we must use samples that are reasonably complete above a common flux limit in the same energy band. Since all blazars in the available lists are radio sources with similar radio spectra, we use a conservative radio flux density limit of 200 mJy, which is sufficiently large to ensure that no objects above this limit are missed in the catalogues listed above.

The IBL and HBL samples can be defined in a straightforward way, since they simply consist of the subset of sources with radio flux density ≥ 200 mJy. For the sample of LBL sources we have taken all objects in the BZCAT that are above this flux density limit and are not included in the 3HSP catalogue or in our IBL sample. To ensure uniformity in the radio data, and to avoid the complications of the Galactic plane, we consider the part of the sky covered by the NRAO VLA Sky Survey (NVSS: [54]) (Dec ≥ −40◦) with Galactic latitudes |b| > 10◦, corresponding to an area of 28,400 square degrees of sky. These selection criteria resulted in HBL, IBL and LBL subsamples that include 38, 114, and 1370 sources, respectively. The total blazar number density N(>200 mJy) therefore is (38 + 114 + 1370)/28,400 = 0.054 deg<sup>−2</sup>, a value that is close to that derived from the LogN-LogS of the DXRBS survey [55] for all blazar types. We can therefore assume that these lists are complete at a level that is certainly sufficient for the purpose of finding relevant differences in the three sub-samples.
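The surface-density arithmetic can be checked directly from the numbers quoted above:

```python
# Reproducing the surface-density arithmetic from the text.
n_hbl, n_ibl, n_lbl = 38, 114, 1370   # sources with radio flux density >= 200 mJy
area_deg2 = 28_400                    # NVSS area with Dec >= -40 deg, |b| > 10 deg

density = (n_hbl + n_ibl + n_lbl) / area_deg2
print(f"N(>200 mJy) ~ {density:.3f} deg^-2")   # ~ 0.054 deg^-2
```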

The relative space density of radio flux-limited (*f*radio > 200 mJy) LBLs, IBLs and HBLs is shown on the left side of Figure 1. A large gap between LBLs and the other blazar types is clearly present. LBLs are over a factor of 10 more abundant than IBLs, which are in turn only about a factor of two more abundant than HBLs. In this plot LBL sources are all confined to the Log(*ν*<sup>S</sup><sub>peak</sub>) interval 12–13. Even though some blazars with Log(*ν*<sup>S</sup><sub>peak</sub>) between 13 and 14 likely exist, from our experience objects possibly located in this frequency interval are rarely observed, and their *ν*<sup>S</sup><sub>peak</sub> is very difficult to estimate because the flux at these energies is often contaminated by components that are unrelated to the jet, like the dusty torus or emission from the host galaxy. The low number of sources in this frequency bin marks a clear discontinuity between LBLs and the other types of blazars.

**Figure 1.** The surface density (**left side**) and redshift distributions (**right side**) of LBL, IBL and HBL blazars with radio flux density larger than 200 mJy.

Another striking discontinuity is evident in the redshift distributions shown on the right side of Figure 1, with the redshifts of both IBLs and HBLs being mostly confined to low values (z ≲ 1.0 and peaking below z = 0.5), while those of LBLs reach values larger than 3, with a mean value of 1.44 and a dispersion of 0.8. The large number of IBLs in the first redshift bin is due to the still relatively large number of these sources with no redshift estimate (∼30%) due to featureless or unavailable optical spectra, which are therefore assigned the value of 0. The number of IBL sources with zero redshift will most likely decrease significantly if photometric redshift estimates based on host-galaxy signatures in the optical/IR parts of the SED are carried out, as in the case of the 3HSP sample [50]. This sharp discrepancy in the redshift distributions reflects the well-known different cosmological evolution between LBL (mostly FSRQs) and HBL blazars. The cosmological evolution of LBLs is strong and similar to that of optically or X-ray selected AGN, which are largely radio-quiet QSOs [55,56]. HBLs instead are known to show negative or no evolution [55,57,58]. No conclusive explanation for this low level of evolution in HBLs has been found yet. Since the redshift distribution of IBLs is similar to that of HBLs (Figure 1, right side), it is reasonable to assume that the evolution of IBL blazars is similar or identical to that of HBLs.

LBLs and IBLs/HBLs also exhibit sharp differences from the point of view of SED variability. Large changes of *ν*<sup>S</sup><sub>peak</sub>, up to factors of 100 or more, are commonly observed in HBL blazars (see Figure 4 of ref. [22]) but not in LBLs. Most of the flux variability associated with *ν*<sup>S</sup><sub>peak</sub> shifts in HBLs occurs in the X-ray band and is not accompanied by equivalent simultaneous changes in the UV or at lower energies. Examples of this behaviour are illustrated in Figure 2, where the data corresponding to high and low states appear in red and green, respectively, and all the available data are shown in blue. IBL blazars also show large *ν*<sup>S</sup><sub>peak</sub> changes, which cause them to occasionally become HBLs during high states. One such example is shown on the left side of Figure 3 for the case of BL Lac, which shifted *ν*<sup>S</sup><sub>peak</sub> from typical values around 10<sup>14</sup> Hz to ∼10<sup>16</sup> Hz during a large flare [59,60]. Several other IBL/HBL transitional sources are known; some of these, e.g., OJ 287, TXS 0506+056, PKS 2005−489, PG 1553+113, are among the blazars that have been frequently observed by Swift, and their SEDs can be found in ref. [22] and online at https://openuniverse.asi.it/blazars/swift/ (accessed on 11 December 2021).

**Figure 2.** The SED of the HBL Mrk 501 (**left**) and the IBL PKS 1424+240 (**right**), illustrating that most of the variability in these objects is in the X-ray band and translates into large shifts of the *ν*<sup>S</sup><sub>peak</sub> energy. Red and green colours represent Swift UVOT and XRT simultaneous data during high and low luminosity states, respectively.

**Figure 3.** The SED of the IBL source BL Lac (**left**), which shifted *ν*<sup>S</sup><sub>peak</sub> by a large amount during a strong flare, and the LBL blazar 3C 454.3 (**right**), which shows large variability of similar amplitude from the far IR to the X-ray band, and no large *ν*<sup>S</sup><sub>peak</sub> shift. Red and green colours represent Swift UVOT and XRT simultaneous data during high and low luminosity states, respectively.

In LBL blazars *ν*<sup>S</sup><sub>peak</sub> remains fairly stable between ∼10<sup>12</sup> and ∼10<sup>13</sup> Hz even during large flares, since flux variability in these sources seems to occur almost achromatically, with similar amplitude at all frequencies from the far IR to the UV and X-rays [21,22]. One example of such behaviour is shown on the right side of Figure 3 for the case of 3C 454.3. Other examples of LBL blazars with well-populated SEDs that follow the same behaviour are 3C 279, PKS 0235+164 and CTA 102 [22] (https://openuniverse.asi.it/blazars/swift/, accessed on 11 December 2021). Blazars also show differences in jet kinematics. LBLs exhibit a wide distribution of apparent jet speeds, with values reaching 30–40c, while jet speeds are typically 5c and 10c for HBLs and IBLs, respectively [61]. An upcoming paper (Padovani et al., 2022, in preparation) concludes, based on VLBI data, that the two neutrino candidates TXS 0506+056 and PKS 1424+240 have overall parsec-scale properties similar to those of HBLs and different from those of LBLs.

LBL and HBL blazars also differ in their optical polarisation properties, as demonstrated in ref. [62], where the authors studied a sample of X-ray selected BL Lacs (mostly HBLs) and concluded that these objects show a lower level of optical polarisation (with a maximum of ∼10%) and a lower duty cycle than radio-selected BL Lacs, which are mostly LBLs whose optical flux is dominated by the jet rather than by the blue bump and broad-line emission, and which are instead characterised by polarisation levels of 30–40% and a high duty cycle [63].

Based on the empirical evidence described above, which reveals strong similarities between the properties of IBLs and HBLs, and large differences from LBLs, we suggest that blazars come in only two main flavours: LBLs, and IBLs plus HBLs combined, which we propose to collectively call IHBLs. For practical purposes, blazars can be assigned to one of the two sub-classes on the basis of a single *ν*<sup>S</sup><sub>peak</sub> value, with LBLs being those with *ν*<sup>S</sup><sub>peak</sub> < 10<sup>13.5</sup> Hz and IHBLs those with *ν*<sup>S</sup><sub>peak</sub> > 10<sup>13.5</sup> Hz.
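The two-class assignment proposed above reduces to a single threshold test on *ν*<sup>S</sup><sub>peak</sub>. A minimal sketch in Python (function and constant names are hypothetical, chosen only for illustration):

```python
# Minimal sketch of the proposed two-class scheme: a single dividing
# value of the synchrotron peak frequency nu_peak^S separates LBLs
# from IHBLs (IBLs plus HBLs combined). Names are illustrative only.

LBL_IHBL_DIVIDE_HZ = 10**13.5  # ~3.2e13 Hz, the proposed boundary

def classify_blazar(nu_peak_hz: float) -> str:
    """Return 'LBL' or 'IHBL' given nu_peak^S in Hz."""
    return "LBL" if nu_peak_hz < LBL_IHBL_DIVIDE_HZ else "IHBL"
```

For example, a typical FSRQ with *ν*<sup>S</sup><sub>peak</sub> ∼ 10<sup>13</sup> Hz would be classified as an LBL, while a source with *ν*<sup>S</sup><sub>peak</sub> ∼ 10<sup>16</sup> Hz would fall on the IHBL side.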

We stress that LBLs and IHBLs are not simply sub-classes of blazars bureaucratically defined by the value of an observational parameter, but reflect deeply different intrinsic properties, such as cosmological evolution, demographic characteristics, broad-band spectral variability, optical polarisation, and possibly the physical conditions in the emitting regions, which in IHBLs might host efficient proton acceleration. Since the cosmological evolution of LBLs is similar to that of optically and X-ray selected radio-quiet QSOs, the presence of LBL relativistic jets, at least to first approximation, occurs in QSOs independently of cosmic epoch. The same does not happen in IHBL jets, which instead follow a different evolution. If proton acceleration occurs only (or mostly) in IHBL sources, it could be that the conditions that enable proton acceleration drive the evolution of these sources.

### *4.3. Transient Blazars and Neutrino Astronomy*

The *γ*-ray source 4FGL J1544.3−0649 is a remarkable blazar that remained unnoticed at high energies until May 2017, when it showed transient-like behaviour, brightening to such a level as to be detected by *Fermi*-LAT and the MAXI X-ray sky monitor. The source remained bright for a few months, exhibiting an SED typical of bright HBL blazars [64]. This discovery suggests the existence of a population of still undiscovered objects that can occasionally flare and become strong X-ray and *γ*-ray sources. If X-ray flares are indeed associated with proton acceleration and neutrino emission, as suggested in ref. [65], these sources could play a significant role in neutrino astronomy. Considering that the expected neutrino flux from 4FGL J1544.3−0649 is the fifth largest in the list of 66 bright blazars considered in ref. [10], blazars that are currently uncatalogued and below detectability in the X-ray and *γ*-ray bands could even be a major population of neutrino sources. In this case the identification of the counterparts of many neutrino events would be a very difficult task.

The importance of this possible contribution clearly depends on the abundance of this population of blazars, how often large transient events occur, and how long sources remain in a bright phase. A reliable estimate of the space density and duty cycle of these elusive objects would only be possible with all-sky X-ray or *γ*-ray monitors significantly more sensitive than those currently in operation. Sensitive all-sky monitors would also easily discover possible high-energy flares from objects within the large localisation errors of neutrino events, providing "smoking-gun" evidence for a neutrino–blazar association.

### **5. Summary and Discussion**

Neutrino astronomy is still in a nascent phase, a status characterised by consolidated results on the detection of neutrinos of astrophysical origin, but also by the lack of indisputable evidence about the nature of their electromagnetic counterparts. Some results in this field are sound, such as the absence of anisotropy in the arrival directions of high-energy IceCube neutrinos, which implies a dominant population of extragalactic sources, although a minor Galactic component cannot be excluded. Considering specific associations with known cosmic electromagnetic sources, we have shown that a growing number of papers report possible associations of astrophysical neutrinos with AGN, particularly blazars of different types. Substantial theoretical work has also demonstrated that neutrino emission mechanisms consistent with the existing data can be accommodated in physical situations that are typical of blazars [6–8,42,43,66].

In the following we comment on the nature of the proposed associations with significance larger than 2*σ* for the different types of blazars.

As reported in the previous sections, correlations of radio-bright AGN with astrophysical neutrinos have been sought extensively [14,33–36]. Recently, possible statistical associations with specific LBL sources (3C 279, NRAO 530, PKS 1741−038, and PKS 2145+067) were reported in ref. [33]. These results, however, were not confirmed by similar works [36,37]. Samples of bright radio sources tend to select LBL objects, since IHBL blazars are largely underrepresented in these data sets. For example, only 103 out of the 3411 sources (∼3%) of the complete RFC sub-sample with *f*<sub>r,8GHz</sub> > 150 mJy used in refs. [34,37] are IHBL blazars. A similar proportion is also evident in the surface density comparison plotted on the left part of Figure 1, which refers to blazars with radio flux density larger than 200 mJy at 1.4 GHz. Despite the large surface density of LBL blazars and the multiple statistical searches based on radio-bright sources, no stable significant association with sources of this type has been found. The matching of astrophysical neutrino samples with the positions of known *γ*-ray-detected blazars, regardless of their radio flux, instead resulted in frequent possible associations with blazars of the IHBL type. Numerous examples of this increasingly reported outcome exist. The following is a list of the most important cases.


If the several hints linking IHBLs to astrophysical neutrinos genuinely reflect a particular physical situation in these sources, it could be that the characteristics that distinguish them (e.g., high *ν*<sup>S</sup><sub>peak</sub>, no cosmological evolution, slow jets, strongly chromatic variability, low level of optical polarisation) are simply the observational consequences of proton acceleration to very high energies. The highly variable UV or X-ray radiation that defines IHBLs could then be the direct result of proton synchrotron emission, as proposed in refs. [10,65], or of radiation produced by the secondary particles generated in photo-hadronic interactions, which lose energy on timescales that depend on the specific physical conditions of the emitting region or on its evolution after flares. In this view the existence of IHBL blazars would be strictly connected to proton acceleration in AGN. The still poorly understood low level of cosmological evolution of IHBLs compared to other blazars might also be connected with the conditions that lead to proton acceleration.

**Figure 4.** The SED of the three blazars reported as possible neutrino sources in [4]. They are all of the IBL type, with remarkably similar flux and overall shape, except in the X-ray band, where small differences in *ν*<sup>S</sup><sub>peak</sub>, large variability and non-simultaneous observations cause a large scatter.

An origin of high-energy neutrinos in a non-evolving, low-density population of sources, such as IHBLs, easily meets the constraint that UHECR sources must not overproduce the observed *γ*-ray background [71]. Intermittent proton acceleration occurring in persistent IHBLs, or in transient blazars like 4FGL J1544.3−0649, would further ease the constraint on the *γ*-ray background.

Neutrino multi-messenger astronomy is an exciting new field in rapid evolution, where new observational data, possibly of improved localisation accuracy, need to be accumulated to confirm or disprove the hypothesis that IHBL blazars are the preferential sites of proton acceleration. Forthcoming facilities such as the KM3NeT (https://www.km3net.org/, accessed on 11 December 2021) and Baikal-GVD (https://baikalgvd.jinr.ru/, accessed on 11 December 2021) underwater neutrino telescopes, the Pacific Ocean Neutrino Experiment (P-ONE) [72], and the future, more sensitive IceCube-Gen2 (https://icecube.wisc.edu/science/beyond/, accessed on 11 December 2021) instrument, ideally operating in parallel with a new generation of sensitive all-sky X-ray and *γ*-ray detectors, will bring neutrino astronomy into its next phase, settling the question of the role of the different types of AGN in terms of their neutrino and UHECR emission.

**Author Contributions:** Both authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The multi-frequency data of all SEDs presented in this work are available through the tools and web pages of the Open Universe platform, e.g., the VOU-BLazars application [53] and the interactive table at https://openuniverse.asi.it/blazars/swift/ (accessed on 11 December 2021).

**Acknowledgments:** We thank Bia Boccardi for a careful reading of the paper. We acknowledge the use of data, analysis tools and services from the Open Universe platform, the National Extra-galactic Database (NED), and the bibliographic services of the Astrophysics Data System (ADS).

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


### *Review* **Gamma-ray Bursts at the Highest Energies**

**Lara Nava 1,2**

<sup>1</sup> INAF-Osservatorio Astronomico di Brera, I-23807 Merate, Italy; lara.nava@inaf.it

<sup>2</sup> INFN-Sezione di Trieste, I-34127 Trieste, Italy

**Abstract:** Emission from Gamma-ray bursts is thought to be powered mainly by synchrotron radiation from energetic electrons. The same electrons might scatter these synchrotron seed photons to higher (>10 GeV) energies, building a distinct spectral component (synchrotron self-Compton, SSC). This process is expected to take place, but its relevance (e.g., the ratio between the SSC and synchrotron emitted power) is difficult to predict on the basis of current knowledge of the physical conditions at GRB emission sites. Very high-energy radiation in GRBs can also be produced by other mechanisms, such as synchrotron emission itself (if PeV electrons are produced at the source), inverse Compton scattering on external seed photons, and hadronic processes. Recently, after years of effort, very high-energy radiation has finally been detected from at least four confirmed long GRBs by the Cherenkov telescopes H.E.S.S. and MAGIC. In all four cases, the emission was recorded during the afterglow phase, well after the end of the prompt emission. In this work, I give an overview, accessible also to non-experts in the field, of the recent detections, their theoretical implications, and future challenges, with a special focus on why very high-energy observations are relevant to our understanding of Gamma-ray bursts and which long-standing questions can finally be answered with the help of these observations.

**Keywords:** Gamma-ray bursts; non-thermal emission; radiative processes; relativistic astrophysics; very-high energy Gamma-rays

### **1. Introduction on GRBs**

Gamma-ray bursts (GRBs) are transient sources of radiation associated with extragalactic catastrophic events. Following the accretion of a massive disk onto a newly born compact object (most likely a black hole), material is ejected in the form of two opposite jets reaching ultra-relativistic velocities, with a typical bulk Lorentz factor Γ ≳ 100. The initial emission phase (called prompt emission) is caused by the conversion of a fraction of the jet energy (which is either in kinetic or magnetic form) into radiation, and is thought to occur at a distance *R* ∼ 10<sup>13</sup>–10<sup>15</sup> cm from the central engine. The prompt emission is detected as a bright, impulsive emission of soft Gamma-rays (∼10 keV–10 MeV), with variability of the order of 0.01–1 s, total duration between ∼0.1 s and ∼10<sup>3</sup> s, and isotropic equivalent luminosities typically in the range *L*<sub>iso</sub> ∼ 10<sup>50</sup>–10<sup>53</sup> erg/s. Two different sub-classes of GRBs have been clearly identified: those related to the merger of a binary system of two neutron stars or a neutron star and a black hole (producing GRBs with prompt emission duration shorter than ∼2 s) and those triggered by the core collapse of a massive star (in this case, the prompt emission typically lasts more than 2 s). At a distance of *R* > 10<sup>15</sup>–10<sup>16</sup> cm, the interaction of the jet with the external medium becomes relevant and is responsible for the gradual decrease in the bulk Lorentz factor and the production of the afterglow radiation, a broadband (radio-to-GeV) emission that becomes fainter with time and is detectable for days or even weeks/months.

The year 2019 marked the beginning of a new epoch in the study of GRBs. The MAGIC and H.E.S.S. collaborations reported for the first time the firm detection of very high-energy (VHE, >100 GeV) radiation from two long GRBs, captured during their afterglow phase [1,2]. These detections came after several years of efforts, where major upgrades in

**Citation:** Nava, L. Gamma-ray Bursts at the Highest Energies. *Universe* **2021**, *7*, 503. https:// doi.org/10.3390/universe7120503

Academic Editors: Ulisses Barres de Almeida, Michele Doro and Binbin Zhang

Received: 22 September 2021 Accepted: 16 November 2021 Published: 17 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

the telescope sensitivities, energy thresholds, and slewing speed did not initially yield the expected results (for more details on Cherenkov telescopes, their capabilities, and past observations of GRBs, see Section 2). While the community was discussing the implications of the lack of detections by the current generation of Cherenkov telescopes and placing its hopes in the next generation [3], the bright VHE emission recorded from GRB 190114C answered in one shot many questions, at least about the presence and phenomenology of VHE emission in GRBs. This detection revealed that VHE radiation up to at least 1 TeV can be efficiently produced (i.e., with a luminosity similar to the X-ray luminosity) in GRB environments for at least one hour after the end of the prompt (a timescale that places its origin in the afterglow phase) and is detectable also at relatively large redshifts (*z* = 0.42). At this redshift, the absorption of VHE photons via pair production on photons of the extragalactic background light (EBL) already produces a significant flux attenuation (at least two orders of magnitude at 1 TeV, according to current EBL models [4–6]; for a recent review, see [7], this Special Issue).
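The quoted attenuation translates directly into an EBL optical depth: the observed flux is suppressed by a factor e<sup>−τ</sup>, so a suppression of two orders of magnitude corresponds to τ = ln 100 ≈ 4.6. A short illustrative computation (the function name is mine, not from any EBL code):

```python
import math

def ebl_flux_ratio(tau: float) -> float:
    """Ratio of observed to intrinsic flux for EBL optical depth tau."""
    return math.exp(-tau)

# Optical depth needed for the factor-100 suppression quoted in the
# text for 1 TeV photons at z = 0.42:
tau_needed = math.log(100.0)  # ~4.6
```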

Currently, firm detections of GRBs at VHE by imaging atmospheric Cherenkov telescopes (IACTs) amount to four (see Section 3), and they all refer to long GRBs, on timescales much longer than their prompt emission duration. TeV radiation in GRBs is expected to be produced as a result of inverse Compton scattering. In particular, since GRBs are compact sources of synchrotron radiation, the same electrons producing the synchrotron photons might efficiently scatter them to higher (>10 GeV) energies, building a distinct spectral component called synchrotron self-Compton (SSC). Although an SSC origin for the ∼TeV radiation detected so far from GRBs is widely accepted, at this stage other possibilities cannot be discarded. In particular, the discussion is currently open on the possibility of explaining multi-wavelength observations with a single synchrotron component extending up to TeV energies. This scenario requires a revision of the shock acceleration model adopted to explain particle acceleration in these relativistic shocks: while synchrotron photons from electrons accelerated by the Fermi mechanism reach a maximum of ∼50 MeV (in the plasma comoving frame), the detection of TeV photons (observer frame) at late times (hours) can be explained in the synchrotron scenario only by modifying the assumptions that lead to a maximum photon energy of ∼50 MeV, e.g., by invoking a different mechanism or specific conditions (such as a decaying magnetic field) able to exceed this limit by at least three orders of magnitude.

Other open questions (besides the origin of this radiation and the energy of the emitting particles, which are discussed in Section 6) concern the conditions (e.g., jet energy, magnetic field strength, size of the region, bulk Lorentz factor, and properties of the ambient medium) leading to efficient production of VHE emission, how typical these are in GRB environments, how well we can trust EBL models to infer the intrinsic properties of GRB VHE spectra, whether short GRBs can also have a relevant VHE component, and whether VHE photons can also be produced during the prompt phase, as the result of internal dissipation processes (Section 7).

There have been several attempts to predict VHE radiation from GRBs from different mechanisms [8–13]. In general, the poor knowledge of the conditions in the region where the afterglow radiation is produced prevents researchers from making firm predictions on the luminosity of the associated VHE emission. Even more difficult is the prediction of VHE radiation associated with the keV–MeV prompt radiation, since the origin of the latter is not yet understood.

In the absence of detections with Cherenkov telescopes, studies of GRBs at high energies have relied on observations from the Large Area Telescope (LAT), a pair-conversion telescope onboard the *Fermi* satellite (launched in 2008) covering the energy band from 20 MeV to 300 GeV. The LAT detected only a fraction of the GRBs discovered at lower energies by the other telescope onboard *Fermi*, the Gamma-ray Burst Monitor (GBM, 8 keV–40 MeV). This percentage amounts to only about 12% (after taking into account the different fields of view of the two instruments). The study of radiation in the range 100 MeV–100 GeV (hereafter high-energy (HE)) detected by the LAT has improved our understanding of GRB radiation, especially of the afterglow phase, adding valuable information on the high-energy part of the afterglow synchrotron spectrum [14–19] and hinting at the presence of a distinct component at higher energies [20]. For the prompt emission, instead, no clear conclusions can be drawn, although it seems fair to state that a strong GeV component associated with the keV–MeV prompt emission has not been found by the LAT [21]. A brief summary of (V)HE observations of GRBs is given in Section 2. For a more complete review of HE emission from GRBs before the VHE era, see [22].

In this work, I first summarise the main properties of the GRBs detected at VHE (Section 3) and the attempts to detect emission from short GRBs (Section 4) and emission simultaneous with the prompt (Section 5). In the second part, I discuss the interpretation of the origin of VHE radiation (Section 6) and then focus on the importance of studying VHE emission from GRBs (Section 7). In particular, I propose a reflection on the status of our comprehension of the GRB phenomenon at the dawn of the VHE GRB era. I select a few controversial topics in GRB physics and outline how the study of VHE radiation can produce a breakthrough in their comprehension.

### **2. The Hunt for GRBs at High and Very High Energies**

### *2.1. Scientific Motivation*

Both in prompt and afterglow studies, the reasons to extend observations towards the highest energies are manifold, and are related on the one hand to a better characterisation of the high-energy part of the prompt/afterglow spectra, and on the other hand to the search for an additional emission component in the GeV–TeV energy range.

Prompt emission has been detected from thousands of GRBs, and it is typically observed in the energy range between ∼10 keV and a few MeV. At energies above the spectral peak (∼0.1–0.3 MeV), the prompt emission spectrum is difficult to characterise, due to the low signal-to-noise ratio typically available at these energies, except for very bright GRBs. In these cases, the high-energy part of the spectrum is often well described by a single power-law (*dN*/*dE* ∝ *E*<sup>*β*</sup> [23–25]). The photon index *β* might encode information about the energy spectrum of the emitting particles. For power-law (PL) spectra of the accelerated particles (*dN*/*dγ* ∝ *γ*<sup>−*p*</sup>) and synchrotron cooling, the high-energy end of the synchrotron spectrum is expected to behave as a PL with index *β* = −*p*/2 − 1. For typical photon indices *β* ∼ −2.2, an injected particle spectrum with index *p* ∼ 2.4 is inferred. The extension of prompt spectral studies to higher energies has been possible since 2008 for those GRBs observed by the LAT. The inclusion of LAT data in the study of prompt emission spectra decreases the median value of *β* from ∼−2.2 (GBM-only fits) to ∼−2.5 [25]. These softer values might point to the presence of intrinsic spectral breaks and/or cutoffs, and show the importance of systematically extending the energy range of prompt emission spectral studies. A spectral cutoff at the high-energy end of the prompt spectrum is expected to occur, caused either by internal pair-production opacity or by the high-energy cutoff in the energy spectrum of the emitting particles, which marks the maximum energy up to which particles have been accelerated. Such cutoffs have been constrained only in a few cases, mostly thanks to the inclusion of HE observations by the LAT [26,27]. If interpreted as caused by internal opacity, the position of the cutoff/break energy can be related to the size and bulk Lorentz factor of the source, the latter typically in the range 100–400.
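The inference of the particle index from the photon index is a one-line inversion of *β* = −*p*/2 − 1, namely *p* = −2(*β* + 1). A quick sketch (illustrative code, not from the cited analyses):

```python
def injection_index(beta: float) -> float:
    """Invert beta = -p/2 - 1 (cooled synchrotron spectrum) for
    the injected particle power-law index p."""
    return -2.0 * (beta + 1.0)

# beta ~ -2.2 (GBM-only fits) gives p ~ 2.4; the softer beta ~ -2.5
# found when LAT data are included would imply p ~ 3.0 if read as an
# unbroken synchrotron power law.
```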

In the afterglow phase, a high-energy cutoff in the synchrotron spectrum is expected to occur as a result of the maximum energy of the accelerated particles. In the basic model for relativistic shock acceleration, the energy gain of particles proceeds at a maximum rate given by Bohm diffusion and is limited by synchrotron cooling. In this scenario, the maximum synchrotron photon energy in the observer frame is estimated to be located at *E*<sup>syn</sup><sub>max</sub> ≈ 10 GeV Γ<sub>2</sub>/(1 + *z*) [28] (where Γ<sub>2</sub> ≡ Γ/10<sup>2</sup>). It can then reach several tens of GeV in the early phases (seconds) of the afterglow, and decreases towards hundreds of MeV at later times (days/weeks), when the beaming decreases as a result of jet deceleration. Any deviation from this prediction carries information either on a smaller efficiency of the relativistic shocks [29] or on the need for a more efficient mechanism and/or a deviation from standard assumptions [28,30,31]. Therefore, constraints on the location of the high-energy cutoff in prompt and/or afterglow spectra carry a wealth of information, e.g., about the bulk Lorentz factor, particle acceleration efficiency, and size of the emitting region.
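To put numbers on the synchrotron limit, *E*<sup>syn</sup><sub>max</sub> ≈ 10 GeV × (Γ/100)/(1 + *z*) can be evaluated for representative values of Γ and *z* (the specific values below are my own illustrative choices):

```python
def e_syn_max_gev(gamma_bulk: float, z: float) -> float:
    """Observer-frame maximum synchrotron photon energy in GeV,
    E_max ~ 10 GeV * (Gamma/100) / (1 + z), from Bohm-limited
    acceleration balanced against synchrotron cooling."""
    return 10.0 * (gamma_bulk / 100.0) / (1.0 + z)

# Illustrative: an early-afterglow Gamma ~ 400 at z = 0.42 gives
# ~28 GeV (several tens of GeV); after deceleration to Gamma ~ 10
# the limit falls below 1 GeV.
```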

Besides constraining the high-energy part of the prompt and afterglow synchrotron spectra, (V)HE observations can also reveal the presence of a distinct spectral component. The detection of such a component is of great importance for understanding the conditions in the region where the radiation is produced, allowing one to discriminate among different models and providing additional constraints on model parameters. Moreover, if GRBs are GeV–TeV emitters, they could be used as powerful tools (complementary to other sources) for Lorentz invariance violation (LIV) studies [32,33] and for EBL studies [34].

Since the emission from GRBs is characterised by two well-distinguished phases with different origins, physical properties, region sizes, and locations, observational challenges and open questions differ between prompt and afterglow emission and should be treated separately. Observations at (V)HE can be of paramount importance to answer some of the most important questions in both fields. In particular, the identification of prompt emission with synchrotron radiation is still highly debated, and the discussion would benefit from observations at TeV energies. Regarding the afterglow, the origin is more certain, but the physical conditions of the emitting source (which are strictly related to the environment, the jet properties, and to particle acceleration and magnetic field amplification) are largely unknown.

In the next sections, I give a brief overview of past searches for GeV and TeV radiation in the prompt and afterglow phases of GRBs by space telescopes, such as EGRET and the LAT, and by ground-based instruments, such as extensive air shower (EAS) detectors and IACTs. For a more complete review, see [22].

### *2.2. EGRET Detections of GRBs*

The first major advances in the study of GRBs were made possible by the Compton Gamma-ray Observatory (CGRO, in orbit from 1991 to 2000), and in particular by the instruments Burst and Transient Source Experiment (BATSE, 30 keV–2 MeV) and the Energetic Gamma Ray Experiment Telescope (EGRET, 20 MeV–30 GeV). While BATSE detected prompt emission from almost three thousand GRBs, EGRET detected only five of them with its spark chambers. The GRB photon with the highest energy ever recorded by EGRET is an 18 GeV photon detected from GRB 940217 more than an hour after the burst onset, well after the end of the prompt emission. Combining simultaneous BATSE and EGRET data, one burst, GRB 941017, proved of particular interest in the search for a high-energy component. The joint spectral analysis revealed the presence of a rising high-energy component in the spectrum, extending up to 200 MeV (Figure 1). This high-energy component appeared ∼10–20 s after the beginning of the burst and displayed a roughly constant flux and a hard spectral slope (*F*<sub>*ν*</sub> ∝ *ν*<sup>0</sup>) up to 200 s. At these late times, the high-energy (10–200 MeV) tail contained at least 3 times more energy than the 30 keV–2 MeV prompt *γ*-ray component [35].

Even though EGRET detected only a few events, it was already evident that high-energy emission from GRBs shows a diversity of behaviours in both its temporal and spectral properties: photons have been detected simultaneously with the prompt emission but also on much longer timescales, and they are sometimes consistent with being the high-energy part of the prompt spectrum, while in at least one case there is evidence for a separate spectral component. This suggests that high-energy radiation can be produced as a result of both internal and external dissipation, explaining the different timescales of GeV detections. Moreover, in both prompt and afterglow radiation, the emission can be the extension of the emission commonly detected at lower energies, or it can have a distinct origin. All these different cases produce a wealth of different temporal/spectral behaviours at GeV energies, as later confirmed by observations with the LAT.

**Figure 1.** Gamma-ray spectra of GRB 941017. Five time intervals are shown in five separate panels, from the beginning of the prompt phase to more than 200 s (panel **a**: −18–14 s; **b**: 14–47 s; **c**: 47–80 s; **d**: 80–113 s; **e**: 113–211 s). Crosses and filled circles show BATSE and EGRET data, respectively. The model fit (solid curve) is composed of a superposition of a smoothly broken PL (Band model) and a PL function. Two distinct spectral components are clearly present, except in the first time interval. From [35].

### *2.3. LAT Observations*

The *Swift* satellite, launched in 2004, carries onboard the Burst Alert Telescope (BAT, 15 keV–150 keV), the X-ray Telescope (XRT, 0.3 keV–10 keV) and the Ultraviolet–Optical Telescope (UVOT). Although it undoubtedly marked a giant leap for the study of GRBs, its instruments are sensitive to frequencies only up to hard X-rays, and we had to wait until 2008 for progress in the study of GRBs at higher energies. The launch of the *Fermi* satellite, carrying onboard the LAT telescope, allowed for the first time the systematic study of GRBs between 40 MeV and 100 GeV. Counting only GRBs with emission above 100 MeV, about 16 GRBs per year are detected by both the GBM and the LAT. LAT photons are usually detected during the prompt emission but starting with a small delay (of the order of seconds) [21]. In about 60% of the cases, emission in the LAT energy range continues also after the end of the prompt. Although it is reasonable to assume that such long-lasting HE radiation is connected with the afterglow emission (e.g., produced as a result of the jet expansion into the external medium), a contribution from internal dissipation at early times cannot be excluded and is even necessary in some cases to explain variability detected in the LAT energy range [36].

The main results concerning the prompt phase can be summarised as follows. The LAT did not systematically identify the position of the high-energy cutoff in the prompt spectrum. The contamination from HE afterglow radiation, already present during the prompt phase, makes it difficult to identify this feature. There are two clear cases showing an evident high-energy cutoff, located at energies of ∼20–60 MeV and ∼80–150 MeV, providing an estimate of bulk Lorentz factors in the Γ∼100–400 range [27]. Not surprisingly, in both cases the afterglow emission starts after the prompt phase has ended, giving the opportunity in these two bursts to study the prompt spectrum at high energies with no contamination from the HE afterglow component. More indirect evidence for the presence of a high-energy cutoff in prompt spectra comes from the analysis of LAT non-detections, i.e., GRBs observed by the LAT but for which only upper limits could be placed [26].

If, on the one hand, the high-energy data do not in general show a lack of flux during the prompt emission (indicative of a cutoff), on the other hand a strong excess over the extrapolation of the Band function fit is also excluded in most cases. In a few cases, a clear extra component has been identified, sometimes extending also to the lower end of the GBM energy range. However, this component rises with a delay as compared to the keV–MeV prompt emission and might be explained by contamination from the afterglow emission. Summarising, the LAT data have not provided clear evidence for the existence of an SSC component (or any other component) of internal origin dominating the GeV energy range in the prompt emission phase of GRBs.

GeV radiation detected on timescales much longer than the prompt phase is commonly interpreted as synchrotron emission from electrons accelerated by the external forward shock, i.e., in this interpretation, GeV data would lie on the high-energy part of the synchrotron spectrum [19]. Synchrotron photons are expected to be produced below a maximum energy corresponding to the maximum energy of the accelerated electrons. The latter can be estimated by equating the acceleration and cooling timescales. Assuming a Bohm acceleration rate and synchrotron cooling, the maximum electron energy implies a limit for the energy of the observed synchrotron photons of *E*<sup>max</sup><sub>syn</sub> ≃ 10 GeV Γ<sub>2</sub>/(1 + *z*), where Γ<sub>2</sub> ≡ Γ/100. This limit is time-dependent, as the bulk Lorentz factor decreases in time. Photons with energies in excess of 1–10 GeV at times larger than 10<sup>2</sup>–10<sup>3</sup> s can hardly be explained as synchrotron radiation. The LAT has observed several photons from different GRBs that are in excess of this limit (e.g., [37]). The highest photon energy recorded by the LAT in association with GRBs is ∼100 GeV in the rest frame [37]. As of 2019, the LAT had detected more than 160 GRBs with photons above 100 MeV, of which ∼29 with VHE (≳10 GeV) photons [21]. This is probably the most important evidence for the presence of an additional spectral component dominating the afterglow emission at energies >10–100 GeV, even though it is not definitive proof. Alternative models for particle acceleration might explain how electrons can attain larger energies and then emit synchrotron photons above the standard limit [30], explaining >10 GeV LAT photons. Unambiguous spectral evidence for the presence of an SSC component associated with the GRB afterglow has never been found, although a strong hint of the need for an extra PL component in the LAT energy range is present in the spectrum of GRB 130427A (Figure 2), which has been successfully interpreted as SSC emission [20,38]. If an additional emitting component is indeed present, it should be revealed by observations at higher (TeV) energies.
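As a rough illustration of the numbers involved, the synchrotron limit just discussed, *E*<sup>max</sup><sub>syn</sub> ∼ 10 GeV × (Γ/100)/(1 + *z*), can be evaluated for typical afterglow parameters. This is only a sketch of the scaling quoted in the text; the function name and the chosen values of Γ and *z* are illustrative, not taken from a specific burst.

```python
def e_syn_max_gev(gamma_bulk, z):
    """Observed maximum synchrotron photon energy in GeV,
    E_syn^max ~ 10 GeV * (Gamma/100) / (1 + z),
    assuming Bohm acceleration balanced by synchrotron cooling."""
    return 10.0 * (gamma_bulk / 100.0) / (1.0 + z)

# Early afterglow: a bulk Lorentz factor of ~300 at z = 1
print(e_syn_max_gev(300, 1.0))  # 15.0 GeV

# Late times: Gamma has decreased to ~30, so the limit drops to 1.5 GeV,
# and a >10 GeV photon detected then is hard to explain as synchrotron.
print(e_syn_max_gev(30, 1.0))   # 1.5 GeV
```

The time dependence of the limit follows entirely from the decrease of Γ during the deceleration of the blast wave.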

### *2.4. Early Attempts of TeV Detections with Ground-Based Telescopes*

The final proof of the presence of an additional emission component at high energies is expected to come from Cherenkov telescopes, sensitive in the GeV–TeV energy range. The main advantages of Cherenkov instruments are (i) their large effective area (several orders of magnitude larger than that of space-based instruments such as the LAT), which compensates for the smaller photon flux at VHE, and (ii) their extension to higher energies.

In the BATSE era, attempts to perform GRB follow-up at different wavelengths were limited by slew times and large uncertainties in the GRB source position. To overcome these difficulties, extensive air shower (EAS) detectors such as Milagro and ARGO appeared well suited to perform a search for TeV GRBs, since they benefit from a large field of view and a high duty cycle.

**Figure 2.** GRB 130427A: fit of the LAT spectrum at two different times. A modelling with a synchrotron component (dashed line) and an SSC component (dot-dashed line) was proposed to explain the spectral hardening at *E* > 5 GeV. From [38].

Milagro began operation in 1999. A prototype detector, Milagrito, operated from February 1997 to May 1998, during which it observed 54 GRBs detected by BATSE. The analysis of Milagrito observations during the prompt emission of each burst showed, for one of them, GRB 970417a, an excess above background at a statistical significance of 3*σ* [39], which was not high enough to be conclusive. The VHE *γ*-ray fluence inferred from this result is at least an order of magnitude greater than the sub-MeV fluence. If true, this detection would imply that, at least for some GRBs, we might have missed most of the emitted energy, since it was radiated in the VHE range. Observations at these energies might then be crucial for a correct estimate of the energetics involved in the GRB phenomenon. Unfortunately, no similar TeV signals were found in observations of bursts with the more sensitive Milagro.

### *2.5. VERITAS, MAGIC and H.E.S.S. Observations*

The cosmological nature of GRBs implies that VHE emission from these sources suffers strong EBL attenuation. At a typical GRB redshift *z*∼2, the attenuation is already about one order of magnitude at *E* ∼ 100 GeV. To limit the impact of EBL attenuation on the capability of ground-based instruments to detect GRBs, the instrument's low-energy threshold should be pushed as far below 100 GeV as possible. Moreover, the short duration of the prompt emission (<10<sup>3</sup> s, typically 20–30 s) and the fast fading nature of the afterglow demand fast repointing. All these requirements have been achieved and continuously improved by IACTs over the last 10–15 years. As a downside, IACTs have relatively narrow fields of view (a few degrees), can operate only during the night, and reach their best performance on dark, clear nights, resulting in a low duty cycle. For many years, observations of GRBs with IACTs led only to upper limits, on time scales ranging from a few tens of seconds to days.
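The effect of EBL absorption on an observed spectrum can be sketched as a simple exponential suppression, *F*<sub>obs</sub>(*E*) = *F*<sub>int</sub>(*E*) e<sup>−*τ*(*E*,*z*)</sup>. The optical depth used below is a single illustrative number matching the factor-of-ten attenuation quoted above for *z* ∼ 2 at *E* ∼ 100 GeV, not a value taken from any specific EBL model:

```python
import math

def observed_flux(intrinsic_flux, tau):
    """EBL-attenuated flux: F_obs = F_int * exp(-tau)."""
    return intrinsic_flux * math.exp(-tau)

# A factor-of-10 attenuation corresponds to tau = ln(10) ~ 2.3,
# roughly what is quoted for E ~ 100 GeV at z ~ 2.
tau = math.log(10.0)
print(observed_flux(1.0, tau))  # ~0.1 of the intrinsic flux
```

Since *τ* grows steeply with both energy and redshift, lowering the analysis threshold by even a few tens of GeV can change the attenuation by large factors.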

Currently, GRB follow-up observations are regularly carried out with the latest generation of IACTs including the Major Atmospheric Imaging Cherenkov Telescope (MAGIC), the High Energy Stereoscopic System (H.E.S.S.), and the Very Energetic Radiation Imaging Telescope Array System (VERITAS).

As of 2018, VERITAS (*E* > 100 GeV, [40]) has observed more than 150 GRB locations [41]. No evidence for emission was reported. Stringent upper limits could be placed in two cases: GRB 150323A [41], for which constraints on the density of the external medium could be derived from the lack of an SSC component, and GRB 130427A [42], for which VERITAS observations (started 20 h after the burst) could place constraints on the presence of a cutoff of the SSC component (e.g., caused by Klein–Nishina effects, see Figure 3).

**Figure 3.** Joint LAT-VERITAS SED of GRB 130427A. The VERITAS upper limits at the observing time 71 ks to 75 ks were calculated assuming an SSC model with breaks at 100, 140, and 180 GeV (solid, dot-dashed, and dashed lines, respectively). The gray shaded region shows the 1 *σ* range of PL models compatible with the LAT data after temporal extrapolation from 10 ks to 70 ks. From [42].

H.E.S.S. followed up 49 GRBs between 2008 and 2018 (excluding 19 follow-ups performed in bad weather conditions or affected by data-taking issues). The derived upper limits are of the order of 10<sup>−11</sup>–10<sup>−12</sup> cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> [43].

MAGIC has followed up more than 130 GRBs starting from 2005 [44,45]. After the second telescope was added in 2009, GRB observations have been carried out in stereoscopic mode. Excluding cases in which proper data could not be taken due to hardware problems or weather conditions, 105 GRBs were observed from July 2004 to February 2019. Of these, 40 have a measured redshift, but only 8 and 3 had *z* < 1 and *z* < 0.5, respectively [1]. Observations started less than 30 min after the burst for 66 events and less than 60 s for 14 events.

In light of the recent detections (see Section 3), which show the presence of VHE emission in both energetic and sub-energetic GRBs, the paucity of detections and the difficulties faced by IACTs until a few years ago must be ascribed to the combination of several factors: the relatively high energy threshold (∼100–200 GeV) before recent upgrades, a low duty cycle (∼10%), and the combination of the narrow IACT field of view (a few degrees) with the poor localization capabilities of the most prolific space detectors (e.g., BATSE and the GBM) and with the time needed to repoint and start follow-up observations.

### **3. Long GRBs: Detections of TeV Afterglow Radiation**

In January 2019, the MAGIC collaboration circulated, through the GCN [46] and ATel [47] channels, the news of the firm detection (significance above 20*σ*) of photons with energies in excess of 300 GeV from GRB 190114C. A few months later, the H.E.S.S. collaboration released the analysis of a GRB observed in 2018, displaying a VHE excess with a significance of ∼5*σ*. These detections, published in *Nature* at the end of 2019 [1,2], marked the beginning of the VHE era in GRBs. Since then, two additional GRBs have been firmly (>5*σ*) detected by IACTs.

In this section, I describe the properties of these four GRBs and their VHE emission, also including in the list an additional GRB with a detection at ∼3.5*σ*. All these detections refer to long GRBs, detected at VHE during their afterglow phases. Their main properties are summarized in Table 1.

A hint (∼3*σ*) of VHE excess recorded by MAGIC from the short GRB 160821B is presented in Section 4. Observations performed during the prompt phase will be discussed in Section 5. A discussion and interpretation of the VHE emission is the topic of Section 6.

**Table 1.** List of the GRBs detected by IACTs. The columns refer to the GRB name, redshift, duration (*T*<sub>90</sub>) and isotropic-equivalent emitted energy *E*<sub>γ,iso</sub> of the prompt emission, XRT luminosity (*L*<sub>X,11h</sub>) integrated between 2–10 keV (rest frame) at 11 h after *T*<sub>0</sub>, the energy range of detected TeV photons, the initial and final times of the VHE detection, and the name of the IACT that detected the VHE radiation. For all GRBs, the time of the VHE detection refers to the BAT trigger, except for 190829A, for which the GBM trigger was adopted. For the GRBs listed in the first four rows, the significance of the VHE detection was >5*σ*, while for the last GRB in the table the significance was 3.5*σ*. *<sup>a</sup>* 10–1000 keV; *<sup>b</sup>* 50–300 keV; *<sup>c</sup>* 1–10<sup>4</sup> keV.


### *3.1. GRB 180720B*

Detected by H.E.S.S., this GRB is located at redshift *z* = 0.653 and triggered both the BAT and the GBM. The prompt duration measured by the GBM is *T*<sub>90</sub> = 48.9 ± 0.4 s, but from the BAT and XRT lightcurves it is evident that the bursting phase continues up to at least 130 s (Figure 4). The energy emitted during the prompt phase in the energy band 50–300 keV is *E*<sub>γ,iso</sub> = (6.0 ± 0.1) × 10<sup>53</sup> erg, larger than the average value for long GRBs with measured redshift (∼10<sup>53</sup> erg). LAT observations are available from the time of the initial trigger up to 700 s after. LAT-detected photons extend in energy above 1 GeV, with a 5 GeV photon that arrived 142 s after the GBM trigger. The XRT lightcurve shows an initial variability, possibly related to the prompt component, a plateau phase extending up to ∼300 s, and then a PL decay. In soft X-rays, during the afterglow emission, this is one of the brightest GRBs ever detected.

H.E.S.S. observed the GRB for two hours, starting 10.1 h after the trigger (Figure 4), and detected an excess in the sub-TeV range (0.1–0.44 TeV) with a significance of 5.3*σ* [2]. Given the weak signal from the source, the H.E.S.S. spectrum could not be tightly constrained, and a fit with an EBL-attenuated PL model returns a value of the photon index *γ*<sub>VHE</sub> with large uncertainties: *γ*<sub>VHE</sub> = 1.6 ± 1.2 (*stat*.) ± 0.4 (*syst*.). No temporal analysis could be performed to check for flux variation in time. The lack of simultaneous XRT or LAT observations at the time of the H.E.S.S. observations prevents us from building the spectral energy distribution (SED). The interpolation of the XRT flux to the time of the H.E.S.S. detection reveals that the energy flux emitted in the 0.1–0.44 TeV range was only ∼2 times smaller than the energy flux in the 0.3–10 keV range (Figure 4).

From this detection we learn that VHE radiation can be efficiently produced several hours after the end of the prompt phase, with a luminosity that, at the time of the H.E.S.S. detection, is similar to the luminosity in X-rays.

**Figure 4.** Temporal evolution of the emission from GRB 180720B. Upper panel: lightcurve detected by the GBM (green), LAT (blue), and XRT (grey). The early 0.3–10 keV lightcurve is derived by extrapolating the BAT spectra (15 keV–150 keV) to the XRT band (0.3–10 keV). The optical (*r*-band) lightcurve is also shown (purple). The 0.1–0.44 TeV flux inferred from H.E.S.S. observations is shown with a red circle. Observations at 18 days did not show any evidence for a VHE excess from the source (red arrow, upper limit at 95% confidence level). Bottom panel: photon indices of the LAT, *Swift*, and H.E.S.S. spectra (error bars at 1*σ*). From [2].

### *3.2. GRB 190114C*

Located at redshift *z* = 0.424, this long GRB triggered both BAT and GBM, as well as AGILE [48] (with both Super-AGILE and AGILE-MCAL), KONUS-Wind [49], INTEGRAL [50], and Insight-HXMT [51]. The prompt duration was about 116 s as measured by the GBM and about 360 s as measured by the BAT. However, from both the GBM and BAT lightcurves, it is evident that the bursting phase ends at ∼25 s. The X-ray emission after this time fades following a PL function (Figure 5, left). The energy emitted during the prompt phase was *E*<sub>γ,iso</sub> = (2.5 ± 0.1) × 10<sup>53</sup> erg [52]. The LAT observations are available up to 180 s, when the burst left the LAT FoV. The burst reentered the LAT FoV at 8600 s, and significant emission was still observed by the LAT (Figure 5, left plot, red circles). Photons with energies in excess of 1 GeV have been detected, with a 21 GeV photon that arrived 21 s after the GBM trigger. The XRT lightcurve is well described by a PL with decay index −1.36 ± 0.02, from 68 s to ∼10<sup>6</sup> s. Similarly to GRB 180720B, the X-ray afterglow of GRB 190114C is one of the brightest ever detected.

MAGIC observed the GRB field starting ∼60 s after the BAT trigger and detected a signal up to 2400 s, in the energy range 0.2–1 TeV, with a detection significance of >50*σ* [1]. The strong signal from the source allowed a time-resolved analysis to be performed, dividing the whole time interval into six bins, and the temporal behaviour of the VHE flux to be studied. The lightcurve in the 0.2–1 TeV range decays in time as a PL with index −1.51 ± 0.04. The spectrum, after correcting for EBL absorption, has a photon index <−2, possibly evolving in time to softer values (from *γ*<sub>VHE</sub> = −2.17<sup>+0.34</sup><sub>−0.36</sub> at 62–90 s to *γ*<sub>VHE</sub> = −2.80<sup>+0.48</sup><sub>−0.54</sub> at 635–2400 s). Simultaneous XRT and LAT observations are available from 68 s to 180 s and allowed two SEDs, from 0.1 keV to 1 TeV, to be built (blue and yellow data points in Figure 5). The XRT flux (integrated between 1–10 keV) at the time of the MAGIC observations is a factor ∼2 larger than the 0.3–1 TeV flux, which is in turn very similar to the 0.1–1 GeV flux.

These observations revealed for the first time the temporal behaviour of the TeV emission in GRB afterglows and allowed the first GRB SEDs extending up to 1 TeV to be built.
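Decay indices such as the −1.51 ± 0.04 quoted above for the 0.2–1 TeV lightcurve are obtained by fitting a power law *F*(*t*) ∝ *t*<sup>α</sup> to binned fluxes. A minimal sketch of such a fit, on synthetic data generated with an invented normalization and noise level (only the slope and the time range loosely mimic the published values), is:

```python
import numpy as np

# Fit F(t) = F0 * t**alpha by linear regression in log-log space.
# Six time bins spanning 62-2400 s, loosely mimicking the MAGIC binning;
# the normalization and 3% noise are invented for illustration.
rng = np.random.default_rng(42)
t = np.geomspace(62.0, 2400.0, 6)                        # bin centres (s)
flux = 1e-8 * t**-1.51 * rng.normal(1.0, 0.03, t.size)   # synthetic fluxes

alpha, log_f0 = np.polyfit(np.log10(t), np.log10(flux), 1)
print(f"decay index ~ {alpha:.2f}")  # recovers a value near -1.51
```

Fitting in log-log space turns the power law into a straight line, so the decay index is simply the slope of the regression.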

**Figure 5.** Observations of GRB 190114C, detected at VHE by the MAGIC telescopes. **Left**: lightcurves at different wavelengths, from radio to *γ*-rays, versus time since the BAT trigger time. The MAGIC lightcurve (in the energy range 0.3–1 TeV, green circles) is compared with lightcurves at lower frequencies. The vertical dashed line marks approximately the end of the prompt emission phase, identified with the end of the last flaring episode. For the data points, vertical bars show the 1*σ* errors on the flux. **Right**: multi-band spectra in the time interval 68–2400 s. Five time intervals were considered: 68–110 s (blue), 110–180 s (yellow), 180–360 s (red), 360–625 s (green), and 625–2400 s (purple). MAGIC data points have been corrected for attenuation caused by the EBL. Data from other instruments (XRT, BAT, GBM, and LAT) are shown when available (i.e., for the first two time intervals). For each time interval, the LAT contour regions are restricted to the energy range where photons were detected. MAGIC and LAT contour regions were drawn from the 1*σ* error of their best-fit power-law functions. For *Swift* data, the regions show the 90% confidence contours of the joint XRT–BAT fit, obtained by fitting a smoothly broken power law to the data. Filled regions are used for the first time interval (68–110 s, blue). Both figures are from [53].

### *3.3. GRB 190829A*

This very nearby (*z* = 0.078) GRB, detected by H.E.S.S., provides a unique possibility to probe VHE emission with minor effects from EBL attenuation. The prompt duration was *T*<sub>90</sub>∼63 s (50–300 keV), and the energy emitted in the 10–1000 keV energy range was *E*<sub>γ,iso</sub>∼2 × 10<sup>50</sup> erg, falling in the low-energy tail of the distribution for long GRBs. The GRB also triggered the BAT, with a delay of 51 s relative to the GBM. At the time of the GBM trigger, the GRB was in the LAT FoV and remained visible until ∼1100 s. No emission was detected by the LAT, and only upper limits on the 0.1–1 GeV flux could be estimated. XRT observations began 158 s after the GBM trigger and showed a flare/peak between 1000 and 3000 s. After this time, the XRT lightcurve followed a PL decay and was observed up to ∼10<sup>6</sup> s.

H.E.S.S. detected emission in the TeV range (0.1–3.3 TeV) on three consecutive nights, from 4.3 to 55.9 h after the GRB, with a detection significance of 21.7*σ* in the first night [31]. The H.E.S.S. lightcurve, extracted in the 0.2–4 TeV energy range, is well described by a PL model with index −1.09 ± 0.05 (Figure 6). The intrinsic spectrum could be studied during the first two nights. A fit with an EBL-attenuated PL returns intrinsic PL indices −2.06 ± 0.10 (*stat*.) ± 0.26 (*syst*.) and −1.86 ± 0.26 (*stat*.) ± 0.17 (*syst*.), respectively. Simultaneous XRT observations were available during the first two nights and allowed an XRT–H.E.S.S. SED to be built. LAT upper limits were also available during the first night but did not help in constraining the SED shape. The comparison with the XRT flux at the time of the H.E.S.S. detections reveals that the luminosity emitted in the 0.2–4 TeV energy range was about three to four times smaller than the luminosity in the 0.3–10 keV range.

The detection of this GRB by H.E.S.S. revealed for the first time that GRBs can produce VHE radiation on long timescales (days) and up to large energies (∼3 TeV). Additionally, in contrast with previous detections, the isotropic energy of the prompt emission (*E*<sub>γ,iso</sub>∼2 × 10<sup>50</sup> erg) was quite low, suggesting that moderately low-luminosity GRBs can also efficiently produce VHE radiation.

**Figure 6.** GRB 190829A detected by H.E.S.S. Panel (**A**): X-ray (XRT, blue closed squares) and VHE *γ*-ray (H.E.S.S., red circles) energy flux lightcurves. Upper limits from the LAT are also shown (grey arrows). The dashed blue line is the PL fit to the XRT temporal decay obtained by considering only the XRT data that were simultaneous with the H.E.S.S. observations (open squares). Panel (**B**): the corresponding intrinsic photon indices. The H.E.S.S. intrinsic spectral index (red line) is the mean value of 2.07 ± 0.09 determined over all three nights of observation. Panel (**C**) shows the energy flux evolution of the prompt emission observed by the BAT. All error bars correspond to 1*σ* uncertainty, and the LAT upper limits are at the 95% confidence level.

### *3.4. GRB 201015A*

An excess at VHE was reported by MAGIC, with a significance of >3*σ* [54]. This GRB, located at redshift *z* = 0.42, was detected and localized by the BAT and had a duration of *T*<sub>90</sub>∼10 s. There was no onboard trigger by the GBM, but the GRB was identified by the GBM targeted search. The isotropic-equivalent prompt energy inferred from the analysis of GBM data was *E*<sub>γ,iso</sub>∼10<sup>50</sup> erg. The XRT started observations only 3214 s after the BAT trigger, due to observing constraints. Observations extend to almost one day and show that the X-ray lightcurve follows a PL decay with index −1.49 (Figure 7). Late observations by Chandra and the XRT (between 8 and 21 days) show that the X-ray flux was higher (by a factor of 20–100) than the extrapolation of the PL behaviour followed at earlier times.

MAGIC observations started 33 s after the BAT trigger and lasted about 4 h. A preliminary analysis of the MAGIC data [55] shows evidence for emission above 140 GeV with a significance of ∼3.5*σ*.

In terms of intrinsic properties, this GRB was comparable to GRB 190829A, having a similar *E*<sub>γ,iso</sub> and a similar X-ray luminosity in the afterglow phase (Table 1 and Figure 7, bottom panel). Because it is located at a larger distance, the overall flux was much reduced (Figure 7, top panel) and the EBL attenuation in the VHE range more severe. MAGIC observations were performed under good observational conditions, resulting in a low energy threshold (∼140 GeV), which is crucial to increase the chances of detection. A proper comparison with GRB 190829A will be possible after the final MAGIC data analysis is published.

**Figure 7.** X-ray lightcurves of the long GRBs detected at VHE by MAGIC or H.E.S.S. Different colours are used for different GRBs (see legend). For each GRB, the darker colour refers to the XRT, and the lighter colour to the BAT. **Top**: flux as a function of observer-frame time since the trigger time (XRT: 0.3–10 keV observer frame, unabsorbed; BAT: 15–50 keV). **Bottom**: luminosity as a function of rest-frame time (XRT: 2–10 keV rest frame, unabsorbed; BAT: 15–50 keV).

### *3.5. GRB 201216C*

This long GRB, which triggered both the GBM and the BAT, was detected by MAGIC even though it is located at the relatively large redshift *z* = 1.1. The prompt duration was about 30 s in the 50–300 keV range and about 48 s in the 15–350 keV range. The energy emitted during the prompt phase was *E*<sub>γ,iso</sub>∼5 × 10<sup>53</sup> erg (10–1000 keV). LAT observations were available from 3500 s to 5500 s after the trigger. No significant emission was detected by the LAT in this time interval. Due to observing constraints, the XRT began observations about 3000 s after the trigger (Figure 7).

MAGIC observed the GRB for about 2.2 h, starting 56 s after the BAT trigger. An excess of counts, dominated by events at ∼100 GeV in the first 20 min of observations, was detected with a significance of about 6*σ* [56].

GRB 201216C has a prompt energy and an afterglow X-ray luminosity very similar to those of GRB 180720B and GRB 190114C (Figure 7, bottom panel). The higher redshift resulted in a more severe attenuation of the TeV flux by EBL absorption. In spite of this, the detection is quite clear, probably favoured by the very good observing conditions and low energy threshold (∼100 GeV).

### *3.6. Comparison between Eγ*,iso*, z and X-ray Lightcurves of VHE GRBs*

Figures 7 and 8 present a comparison between the X-ray lightcurves, the emitted prompt energy *E*<sub>γ,iso</sub>, and the redshifts of the five long GRBs detected by IACTs. In particular, the X-ray (prompt and afterglow) lightcurves (fluxes and luminosities) are shown in Figure 7, while their distributions in the *E*<sub>γ,iso</sub>–*z* and *L*<sub>X,11h</sub>–*E*<sub>γ,iso</sub> planes are shown in Figure 8. In the latter figure, the VHE GRBs are shown, for comparison, together with a sample of *Swift* long GRBs with a high level of redshift completeness [57,58]. Given its low detection significance in the preliminary analysis (∼3.5*σ*), GRB 201015A is marked in gray in all plots.

**Figure 8.** GRBs detected at VHE compared to GRBs from the complete sample BAT6. **Top**: isotropic-equivalent energy *E*<sub>γ,iso</sub> emitted in the prompt phase versus redshift. **Bottom**: X-ray afterglow luminosity (2–10 keV, rest frame) at 11 h (rest frame) versus *E*<sub>γ,iso</sub>.

Three GRBs (180720B, 190114C, and 201216C) are very similar in terms of intrinsic properties, such as *E*<sub>γ,iso</sub> and the luminosity of the X-ray lightcurves. The higher redshift of GRB 201216C explains its slightly lower flux (Figure 7, top panel, purple colour) and the faintness of its VHE detection, caused by the larger distance and more severe EBL attenuation (here we are assuming that X-ray luminosity is a proxy for VHE luminosity). Their X-ray lightcurves (0.3–10 keV, observer frame) have similar behaviours and almost overlap, with typical fluxes around 5 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup> and luminosities around 1–4 × 10<sup>46</sup> erg s<sup>−1</sup> (both numbers estimated at 11 h, observer/rest frame for fluxes/luminosities).

GRB 190829A has a much smaller *E*<sub>γ,iso</sub> than the three GRBs mentioned above (about 10<sup>3</sup> times smaller) and a fainter X-ray luminosity (by more than two orders of magnitude). Its redshift is also very different (*z* = 0.078), resulting in an X-ray flux very similar to those of the brighter but more distant GRBs (Figure 7, top panel, blue colour). The X-ray flux might then act as a proxy for the probability of detecting VHE emission with IACTs. However, GRB 201015A represents an exception. Having a small *E*<sub>γ,iso</sub> and intrinsic X-ray luminosity but a redshift very similar to that of GRB 190114C, its X-ray flux was also very low compared to all the other GRBs. This might explain the faintness of the VHE excess detected by MAGIC, which, if truly connected with the source, was made possible by the very good observing conditions, low energy threshold, and short delay.

In general, we have learned that the intrinsic properties of GRBs able to produce detectable VHE emission span at least two orders of magnitude in X-ray afterglow luminosity and three in *E*<sub>γ,iso</sub>. The redshifts lie in the interval 0.078–1.1 and imply a variation of three orders of magnitude in the observed X-ray flux.
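The orders-of-magnitude spread in observed flux for sources of comparable luminosity follows directly from the inverse-square scaling with luminosity distance, *F* = *L*/(4π*d*<sub>L</sub><sup>2</sup>). A sketch of this scaling, assuming a flat ΛCDM cosmology with *H*<sub>0</sub> = 70 km s<sup>−1</sup> Mpc<sup>−1</sup> and Ω<sub>m</sub> = 0.3 (cosmological parameters chosen for illustration; k-corrections ignored):

```python
import numpy as np

C_KM_S = 299792.458       # speed of light (km/s)
H0, OMEGA_M = 70.0, 0.3   # illustrative flat-LambdaCDM parameters

def lum_dist_mpc(z, n=20000):
    """Luminosity distance in Mpc for flat LambdaCDM (trapezoidal integration)."""
    zz = np.linspace(0.0, z, n)
    ez = np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + (1.0 - OMEGA_M))
    dz = zz[1] - zz[0]
    comoving = (C_KM_S / H0) * dz * np.sum((1.0 / ez[:-1] + 1.0 / ez[1:]) / 2.0)
    return (1.0 + z) * comoving

# Equal luminosities at z = 0.078 (GRB 190829A) and z = 1.1 (GRB 201216C)
# differ in observed flux by the squared ratio of luminosity distances,
# i.e. a few hundred (two to three orders of magnitude):
ratio = (lum_dist_mpc(1.1) / lum_dist_mpc(0.078)) ** 2
print(f"flux ratio ~ {ratio:.0f}")
```

This distance dimming, combined with the intrinsic luminosity spread, accounts for the roughly three orders of magnitude in observed X-ray flux quoted above.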

### **4. Short GRBs: Observations and Follow Up of GW Events**

Short GRBs are thought to originate from the coalescence of compact binary systems (either NS–BH or NS–NS), as supported by several indirect pieces of evidence (see [59] and references therein). The detection of a GW signal from the merger of an NS–NS system (GW 170817 [60]) in association with the detection of a short GRB marked the first direct proof. Programs for the follow-up of GW alerts and short GRBs at VHE are in place at all major facilities (for a review of the state-of-the-art of observations of electromagnetic counterparts to GWs, and also of high-energy neutrinos, see [61] in this Special Issue). In this section, I summarise the results of VHE observations of GW/GRB 170817A and GRB 160821B.

### *4.1. Follow-Up Observations of GRB 170817A by H.E.S.S., MAGIC, and HAWC*

The detection of GW 170817 triggered an extensive observational campaign aimed at covering a very wide range of frequencies (from radio up to VHE *γ*-rays). The proximity of the event, located at *z* = 0.0097, implies very limited attenuation by the EBL (about 10% at 1 TeV and 30% at 10 TeV), which favours VHE observations of this event and, more generally, of GW alerts. However, in this specific case, the MWL flux from the associated jet received at Earth was strongly reduced by the relatively large viewing angle (the X-ray flux at the peak of the emission reached only about 2 × 10<sup>−14</sup> erg cm<sup>−2</sup> s<sup>−1</sup>).

The H.E.S.S. observations [62] of the region of NGC 4993 were performed about 5.3 h after the GW event, before the identification of the optical transient SSS17a (interpreted as emission from the kilonova), as part of the scanning of the GW localization uncertainty region, covering an area of 31 deg<sup>2</sup>. After the identification of the optical transient, the H.E.S.S. observations focused on the region of SSS17a and continued over the following nights, covering the range 0.22–5.23 days after the GW event. Around the peak of the X-ray, optical, and radio emission (about 160 days), H.E.S.S. performed additional observations [63]. The upper limits obtained around the peak of the lower-frequency emission were at a flux level 10 times higher than the observed X-ray flux (Figure 9, top panel).

**Figure 9.** VHE observations of GRB 170817A. **Top**: the H.E.S.S. 1–10 TeV energy flux upper limits (green arrows) are compared with VLA radio data at 3 GHz (blue stars) and 6 GHz (orange circles) and with X-ray data from Chandra (red crosses). **Bottom**: multi-wavelength SED (radio, optical and X-ray) as computed between 123 and 195 d post-merger. The fit with a synchrotron model and the predicted SSC at 155 days post-merger are shown in red. The MAGIC upper limit is shown by a yellow circle. From [64].

MAGIC observed the counterpart from January to June 2018, when visibility constraints allowed it, for a total of ∼9.5 h over 10 different nights. The time of the MAGIC observations corresponds to when the radio and X-ray fluxes were at their maximum. Assuming a spectrum with photon index *α* = 2, the resulting UL calculated at *E* > 400 GeV is 3.6 × 10<sup>−12</sup> erg cm<sup>−2</sup> s<sup>−1</sup>. The MAGIC UL lies well above the predicted SSC component associated with the synchrotron emission detected at lower frequencies (Figure 9).

GRB 170817A entered the FoV of HAWC ∼8 h after the GRB/GW event. For the first transit, an upper limit of 1.7 × 10<sup>−10</sup> erg cm<sup>−2</sup> s<sup>−1</sup> in the energy range 4–100 TeV was derived [65]. Observations continued at later times, extending to 120 days after the trigger. In the time period 10–110 days after the merger, the flux upper limit was 3.37 × 10<sup>−12</sup> erg cm<sup>−2</sup> s<sup>−1</sup> in the energy range 7–170 TeV [66].

The upper limits derived by the different telescopes on the VHE emission from GRB 170817A are not particularly constraining for the models, even though some information on the strength of the magnetic field could be inferred. Theoretical implications will be discussed in Section 6.

### *4.2. Follow-up Observations of GRB 160821B by MAGIC*

GRB 160821B is a short GRB detected by the BAT (*T*<sub>90</sub>∼0.5 s) and the GBM (*T*<sub>90</sub>∼1 s). At redshift *z* = 0.162, this is one of the nearest short GRBs known. The prompt energy emitted is *E*<sub>γ,iso</sub>∼1.2 × 10<sup>49</sup> erg. Analyses of the multiwavelength observational data of its afterglow emission revealed an optical–infrared component consistent with kilonova emission [67,68]. So far, it is the best-sampled kilonova without a GW detection. No emission was detected by the LAT.

Follow-up observations with the MAGIC telescopes started automatically 24 s after the burst trigger. The first ∼1.7 h of data were strongly affected by clouds, while the remaining ∼2.2 h were taken under better weather conditions. Evidence of a *γ*-ray excess above 500 GeV was found at a significance of ∼3*σ*. If the excess is interpreted as a real signal from the source, the implied flux, once de-absorbed for EBL attenuation, is about a factor of 10 higher than the simultaneous X-ray flux. Such a large VHE flux (compared to the X-ray emission) is surprising for a short GRB and has proved challenging to explain [69,70].

Further observations of short GRBs are necessary to establish whether such a bright VHE emission component is common in short GRBs and, if present, what its origin is.

### **5. Prompt Emission: Observations by EAS Arrays**

The search for a VHE counterpart to the keV–MeV prompt emission requires that the total time needed to point at the GRB location (the sum of the delay in receiving the alert and the telescope slewing time) be shorter than the prompt duration. For short GRBs, which last <2 s, only serendipitous detections (very unlikely with IACTs) are possible. For long GRBs, the typical duration (∼20–30 s) is comparable with the shortest time delays of past IACT observations. A combination of a short time delay and a particularly long GRB is then needed to start observations with IACTs while the prompt emission is still ongoing. Since 2013, MAGIC has started observations within 100 s about 30 times, but in almost all of these cases the delay, although short, was still longer than the prompt duration. In a few cases, the GRB was observed with a delay similar to or shorter than the prompt duration, but the large redshift and/or non-optimal observing conditions prevented the derivation of useful constraints on the presence of a prompt VHE component. These numbers nevertheless show that such observations are feasible, and it might be only a matter of time before IACTs detect VHE emission during the prompt phase (or at least place strong constraints on its presence).

Another possibility for observing the prompt phase with IACTs is offered by GRBs triggered on a precursor event: if the slewing delay of the telescope is comparable to the time between the precursor and the main event, it is possible to observe GRBs with IACTs even during the brightest part of the prompt emission.

The observation of X-ray flares (present in a good fraction of GRBs) is a task well within the reach of IACTs. X-ray flares are detected over much longer timescales, 10²–10⁴ s, making simultaneous observations of X-ray flares in the GeV–TeV band highly feasible. In GRB 180720B, high-energy (LAT) observations are available during the flaring activity. This GRB was also detected at VHE (but at much later times, ∼10 h). The spectral analysis of GBM and LAT data around 100–200 s [71] shows evidence for a distinct spectral component rising above 100 MeV. The full characterization of this component and its relation to the X-ray flare can be performed only with the help of VHE observations. In the last two GRBs detected at VHE, GRB 201015A and GRB 201216C, MAGIC started observations with a delay <60 s, but unfortunately in both cases XRT could not begin observing until thousands of seconds after the trigger time, due to observational constraints.

EAS arrays, such as LHAASO [72] and HAWC [73], might have better chances of observing GRBs during the prompt emission, given their much larger FoV (∼2 sr) and duty cycle (>90%). LHAASO observations of GRB 190829A, detected by H.E.S.S. up to 3 TeV, were available both during the prompt and the afterglow phase. The analysis shows no indication of emission, but the upper limits on VHE radiation during the prompt phase demonstrate the potential for interesting studies of prompt emission with EAS arrays. GRB 190829A occurred on the edge of the FoV of LHAASO, when a quarter of the WCDA (one of the major components of LHAASO) was operational. Data from *T*₀ − 0.5 h to *T*₀ + 2 h are available. Limits were converted to energy flux upper limits adopting an intrinsic PL spectrum *E*^−1.5 and considering EBL absorption. Two distinct upper limits were derived, at >100 GeV and in the TeV domain, and compared with a phenomenological model for the VHE emission, assumed to be a PL extending up to at least 10 TeV. In the TeV domain, the LHAASO upper limit places strong constraints on the brightness of this putative component. Even though several caveats apply, these observations, performed with a partial array under non-optimal observing conditions and compared with a simple phenomenological model, show the potential of LHAASO, and of EAS arrays in general, to place strong constraints on (or even detect) VHE emission from GRBs in their prompt phase.

### **6. Interpretation: What We Have Learned**

The recent detections of GRBs with IACTs show that, similarly to other extreme sources (such as blazars and pulsar wind nebulae), GRBs can develop physical conditions suitable for the production of VHE radiation. How this radiation is produced is a matter of debate: detections are very recent and limited in number, with half of them barely reaching the telescope detection threshold. Current observations leave open a few possibilities, which the community is currently discussing.

The two most likely scenarios are SSC emission and synchrotron radiation from ultrarelativistic electrons. Although SSC seems the most natural explanation and has not been discarded by observations, the similarity between the X-ray and TeV fluxes, spectral indices, and temporal behaviour is pushing the community to also consider a synchrotron origin, even though this requires electron energies well in excess of the maximum energy achievable in the basic version of acceleration at the external shock.

### *6.1. Double Bump or Single Component?*

A first question, to discriminate among different models, is whether or not there is evidence for a distinct spectral component, producing a double bump in the multiwavelength SED.

Soft X-ray (∼1 keV) afterglow emission is interpreted as synchrotron radiation from electrons accelerated at the forward shock, driven by the relativistic jet into the external medium. Although the origins of the flares and of the plateau phase are still under debate, the PL decay phase is consistent with model predictions. The photon index of the XRT spectra measured in different GRBs ranges between −1.5 and −2.5, with an average value of −2 [74]. This suggests that the peak of the synchrotron spectrum can lie both below and above the X-ray band, depending on the GRB conditions and the observing time. In the latter case, brighter LAT emission is expected and is indeed observed [74]. LAT photons (at least those with energies below a few GeV, which represent the bulk of the LAT-detected photons) are also consistent with synchrotron forward-shock emission [14–16,18]. Modelling of MWL afterglow observations, including LAT data, has successfully explained the observations within this model [18,19].

Current observations of VHE emission by IACTs open the question of whether this additional emission is also consistent with being the high-energy part of the synchrotron spectrum or whether it is produced by a different mechanism. From a purely observational point of view, the smoking gun to discriminate between the two possibilities comes from the study of the SED shape: while a synchrotron origin would imply a single broad component, a double bump in the X-ray-to-TeV SED would reveal the need for an additional emission component.

Simultaneous X-ray/GeV/TeV SEDs are available only for GRB 190114C and GRB 190829A, in both cases for two epochs. A double bump is visible in the first SED (between 68 and 110 s after the trigger time) of GRB 190114C. Evidence comes from the comparison between the TeV and X-ray fluxes (a single component would require a flat PL spectrum from X-rays to TeV) and especially from the LAT spectral point, which shows that the flux must decrease between the X-ray and GeV bands and then rise at higher energies in order to explain the TeV flux (see Figure 5, blue points). One may wonder whether the uncertainty on the EBL absorption (which at *z* = 0.42 can be quite relevant) affects the conclusions drawn from the de-absorbed MAGIC data. The MAGIC unabsorbed data (Figure 5) were inferred adopting the EBL model developed by [5]. At 300 GeV, an attenuation five times less severe (which would essentially mean no attenuation) would be required to make the MAGIC observations consistent with the extrapolation of the fit to the X-ray and LAT data. Although the presence of a double bump in GRB 190114C is fairly convincing, its observation in one GRB does not necessarily imply that in all TeV GRBs another emission mechanism (other than synchrotron) is at work.

In GRB 190829A, the TeV and X-ray lightcurves decay in time at similar rates and their spectra have a consistent photon index (within the errors), once they are both modelled with PL functions. The extrapolation of the X-ray spectrum to the energy range of the H.E.S.S. detection predicts a flux that is consistent with observations. Unfortunately, in this case, LAT observations do not help in constraining the shape of the SED, as they provide an upper limit that lies well above the extrapolation of the X-ray spectrum (Figure 10).


**Figure 10.** SEDs and MWL modelling of GRB 190829A at two different epochs (first and second night of H.E.S.S. observations). **Top**: XRT (black regions), LAT (green arrow, available only for the first night), and H.E.S.S. intrinsic spectrum with its uncertainty (statistical only, red regions). The shaded areas represent the 68% confidence intervals for the SSC model (light blue) and for the synchrotron model (orange). Dashed lines indicate the synchrotron component, while the dash-dotted lines show the inverse Compton components, neglecting internal *γ*-*γ* absorption. **Bottom**: modelling from radio to TeV proposed by [75]. The predicted SEDs at the times of the H.E.S.S. detections are shown with blue and red solid lines at 5 and 30 h, respectively, and are based on a synchrotron and SSC interpretation. A reverse shock component (dotted curve) dominates the radio observations. Confidence bands with 90% and 50% confidence levels are marked in lighter shades.

For GRB 180720B, a single epoch of H.E.S.S. observations is available, and simultaneous XRT and LAT data are missing. An SED cannot be built, but the interpolation of the X-ray lightcurve at the time of the H.E.S.S. detection and the extrapolation of the XRT spectrum to 400 GeV show a rough consistency with the H.E.S.S. flux. The H.E.S.S. spectrum has a photon index < 2 (1.6 ± 0.4), suggesting a rising component, inconsistent with a synchrotron interpretation, but the uncertainties are quite large. For GRB 201216C, data are not publicly available yet, and the comparison between simultaneous XRT and TeV fluxes cannot be performed at the time of writing.

Finally, for the short GRB 160821B, the MAGIC flux lies well above the extrapolation of the synchrotron spectrum. If the *γ*-ray excess is real and associated with the GRB, an additional spectral component must be invoked to explain the SED shape. Both SSC and external Compton (EC) models have been considered for this GRB, but the large flux relative to the synchrotron flux is quite challenging to reproduce in both scenarios [69,70].

In the next sections, the synchrotron and SSC processes as sources of VHE radiation are discussed, with particular emphasis on their implications for the physics of GRBs.

### 6.1.1. Synchrotron Radiation

TeV photons can be produced by the synchrotron mechanism if the energy of the electrons is large enough. To estimate the energy of electrons that would radiate synchrotron photons at VHE, we must first estimate the bulk Lorentz factor and the magnetic field. Considering that H.E.S.S. detected photons up to 1 TeV during the first two nights of observations and that the energy of the four detected GRBs is in the range 10⁵⁰–10⁵³ erg, I consider as reference values *E*ₖ = 10⁵² erg and *t* = 1 d. In the following equations, time and energy are in the rest frame, while the magnetic field is in the plasma comoving frame. For a homogeneous medium (H) with number density *n* = *n*₀:

$$
\Gamma = 6 \, E_{\rm k,52}^{1/8} \, n_0^{-1/8} \, t_{\rm d}^{-3/8} \quad (\mathrm{H}), \tag{1}
$$

where the assumption *R* ∼ *ct* Γ² was adopted.

$$
B' = 0.07 \, {\rm G} \; \epsilon_{\rm B,-3}^{1/2} \, n_0^{3/8} \, E_{\rm k,52}^{1/8} \, t_{\rm d}^{-3/8} \quad (\mathrm{H}), \tag{2}
$$

where ϵB is the fraction of shock-dissipated energy that goes into the magnetic field, and ϵB,−3 ≡ ϵB/10⁻³. The Lorentz factor of electrons emitting synchrotron photons with energy *E*ph = 1 TeV is:

$$
\gamma_{\rm e} = 10^{10} \, E_{\rm ph,1\,TeV}^{1/2} \, \epsilon_{\rm B,-3}^{-1/4} \, n_0^{-1/8} \, E_{\rm k,52}^{-1/8} \, t_{\rm d}^{3/8} \quad (\mathrm{H}), \tag{3}
$$

corresponding (for the adopted reference values) to an electron energy of ∼5 PeV. Electrons should then be accelerated beyond PeV energies.
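The numbers quoted above can be cross-checked by evaluating Equations (1)–(3) directly for the reference values (a minimal sketch; the prefactors and scalings are those of the text, with all normalizations equal to one):

```python
# Numerical check of Eqs. (1)-(3) for the homogeneous-medium (H) case, using
# the reference values of the text: E_k = 1e52 erg, n0 = 1 cm^-3, t = 1 d,
# eps_B = 1e-3 (so all normalized quantities below equal 1).
E_k52 = n0 = t_d = eps_B3 = E_ph_TeV = 1.0

Gamma = 6 * E_k52**(1 / 8) * n0**(-1 / 8) * t_d**(-3 / 8)                    # Eq. (1)
B_prime = 0.07 * eps_B3**0.5 * n0**(3 / 8) * E_k52**(1 / 8) * t_d**(-3 / 8)  # Eq. (2), G
gamma_e = 1e10 * E_ph_TeV**0.5 * eps_B3**(-1 / 4) * n0**(-1 / 8) \
          * E_k52**(-1 / 8) * t_d**(3 / 8)                                   # Eq. (3)

MEC2_EV = 0.511e6  # electron rest energy, eV
E_e_PeV = gamma_e * MEC2_EV / 1e15
print(Gamma, B_prime, gamma_e, f"{E_e_PeV:.1f} PeV")  # 5.1 PeV, i.e. "~5 PeV"
```

The electron energy γₑ mₑc² ≈ 10¹⁰ × 0.511 MeV indeed lands at ∼5 PeV, as stated above.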

A similar energy is required also in the case of expansion in a wind-like medium (W). For number density *n* = 3 × 10³⁵ *A*⋆ *R*⁻²:

$$
\Gamma = 5 \, E_{\rm k,52}^{1/4} \, A_{\star}^{-1/4} \, t_{\rm d}^{-1/4} \quad (\mathrm{W}), \tag{4}
$$

$$
B' = 0.6 \, {\rm G} \; \epsilon_{\rm B,-3}^{1/2} \, A_{\star}^{3/4} \, E_{\rm k,52}^{-1/4} \, t_{\rm d}^{-3/4} \quad (\mathrm{W}), \tag{5}
$$

$$
\gamma_{\rm e} = 5 \times 10^{9} \, E_{\rm ph,1\,TeV}^{1/2} \, \epsilon_{\rm B,-3}^{-1/4} \, A_{\star}^{-1/4} \, t_{\rm d}^{1/2} \quad (\mathrm{W}), \tag{6}
$$

corresponding to an electron energy of ∼3 PeV for the adopted reference values.

This energy exceeds the maximum electron energy inferred assuming a Bohm acceleration rate balanced by synchrotron cooling. Following [28]:

$$
\gamma_{\rm c}^{\rm max} = \sqrt{\frac{9 \, m_{\rm e}^2 c^4}{8 \, q^3 \, B'}}, \tag{7}
$$

corresponding to $\gamma_{\rm c}^{\rm max} \simeq 10^{8} \, \epsilon_{\rm B,-3}^{-1/4} \, n_0^{-3/16} \, E_{\rm k,52}^{-1/16} \, t_{\rm d}^{3/16}$ for the homogeneous medium case and to $\simeq 3 \times 10^{7} \, \epsilon_{\rm B,-3}^{-1/4} \, A_{\star}^{-3/8} \, E_{\rm k,52}^{1/8} \, t_{\rm d}^{3/8}$ in the wind-like case. In both cases, an electron energy about two orders of magnitude higher is necessary to explain TeV photons as synchrotron radiation.
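Equation (7) can be evaluated directly in Gaussian-CGS units as an order-of-magnitude check (a sketch; constants rounded to four significant figures, and the result should not be over-interpreted beyond its order of magnitude):

```python
import math

# Order-of-magnitude check of the burnoff limit, Eq. (7):
# gamma_max = sqrt(9 m_e^2 c^4 / (8 q^3 B')), in Gaussian-CGS units.
MEC2 = 8.187e-7   # electron rest energy, erg
Q = 4.803e-10     # elementary charge, esu

def gamma_max(B_prime_gauss):
    """Bohm acceleration vs. synchrotron losses (same-field assumption)."""
    return math.sqrt(9 * MEC2**2 / (8 * Q**3 * B_prime_gauss))

# Reference comoving fields from Eqs. (2) and (5)
g_H = gamma_max(0.07)   # homogeneous medium
g_W = gamma_max(0.6)    # wind-like medium
print(f"{g_H:.1e} {g_W:.1e}")  # both of order 1e7-1e8: one to two orders of
                               # magnitude below the 5e9-1e10 needed for TeV synchrotron
```

This reproduces, within factors of a few, the burnoff Lorentz factors quoted above and makes explicit the gap with respect to the γₑ required by Equations (3) and (6).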

The maximum electron energy in Equation (7) was inferred assuming that the magnetic field strength is the same in the acceleration and emission regions. If the shock-amplified magnetic field decays downstream of the shock front on a length-scale smaller than the distance travelled by the electron before losing a sizable fraction of its energy, higher Lorentz factors can be reached [18,28]. In a modified model where *B*acc > *B*em, the maximum attainable electron energy is given by Equation (7) multiplied by the factor *B*acc/*B*em (e.g., [28]).

So far, a synchrotron origin has been suggested only for the emission detected by H.E.S.S. from GRB 190829A [31]. In that study, both possible origins were investigated; the synchrotron one was found to better account for the data, while SSC was strongly disfavoured (see Figure 10, top panel). A different interpretation, proposing an SSC scenario, was put forward by [75] (see Figure 10, bottom panel), who concluded that SSC is a viable explanation for the H.E.S.S. detection. The reason why the two investigations reach different conclusions should probably be ascribed to the different regions of parameter space explored.

### 6.1.2. Synchrotron Self Compton

Astrophysical sources powered by synchrotron radiation are expected to have an inverse Compton scattering component. The same electrons that produce synchrotron photons may efficiently scatter these synchrotron seed photons to higher energies, depending on the source conditions, and in particular on the energy density of the radiation and of the magnetic field.

If the scattering proceeds in the Thomson regime, the electron energy can be estimated as:

$$
\gamma_{\rm e} \sim \sqrt{\frac{\nu^{\rm SSC}}{2\,\nu^{\rm syn}}}, \tag{8}
$$

where *ν*SSC is the frequency of the upscattered photon and *ν*syn that of the synchrotron target photon. Different electrons contribute to the production of SSC photons of a given energy: for a photon energy *E*ph = 1 TeV, the energy of the electrons that contribute the most depends on which part of the SSC spectrum falls at 1 TeV. Considering again GRB 190829A as a reference, the flat spectra detected both in X-rays and at TeV energies suggest that the peaks of the synchrotron and SSC spectra lie around 1 keV and 1 TeV, respectively. The electron Lorentz factor is then 2 × 10⁴, corresponding to an energy of about 10 GeV. The ratio between the heights of the two bumps depends on the ratio between the energy density of the radiation and that of the magnetic field. In the Thomson regime, it is independent of *γ*e and can be approximated as:

$$
Y = \frac{U_{\rm rad}}{U_{\rm B}}. \tag{9}
$$

The similarity between X-ray and TeV fluxes implies a Compton parameter *Y*∼1, where *Y* is the ratio between the power emitted in SSC and synchrotron.
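The Thomson-regime numbers quoted above follow directly from Equation (8) with a 1 keV synchrotron peak and a 1 TeV SSC peak (a minimal sketch; energies are used in place of frequencies, since only their ratio enters):

```python
import math

# Thomson-regime estimate of Eq. (8) for GRB 190829A-like peaks:
# synchrotron peak ~1 keV, SSC peak ~1 TeV (values quoted in the text).
E_syn_eV = 1e3
E_ssc_eV = 1e12

gamma_e = math.sqrt(E_ssc_eV / (2 * E_syn_eV))   # frequency ratio = energy ratio
E_e_GeV = gamma_e * 0.511e6 / 1e9                # electron energy, GeV
print(f"{gamma_e:.1e} {E_e_GeV:.0f} GeV")  # ~2e4 and ~11 GeV ("about 10 GeV")
```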

At 1 d, with the reference values assumed in the previous section, the scattering between 1 keV photons and electrons with *γ*e ∼ 2 × 10⁴ is (barely) in the Thomson regime. It is then worth estimating the required electron energy in case TeV photons are produced by scattering in the Klein–Nishina regime, where *E*ph = Γ *γ*e *m*e*c*². With the bulk Lorentz factor inferred in the previous section, for typical parameters this implies an electron energy *γ*e *m*e*c*² ∼ 200 GeV. The SSC theory has been applied to the detected GRBs for which VHE data have been published (GRB 190114C, 180720B, and 190829A) and was found to be consistent with the data.
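In the Klein–Nishina limit the upscattered photon carries essentially the whole electron energy, so the estimate reduces to a division (a sketch, assuming the Γ ≈ 6 of Equation (1) at 1 d as the relevant bulk Lorentz factor):

```python
# Klein-Nishina limit: E_ph ~ Gamma * gamma_e * m_e c^2 in the observer frame,
# so the required comoving electron energy is simply E_ph / Gamma.
E_ph_GeV = 1000.0   # 1 TeV
Gamma = 6.0         # bulk Lorentz factor at t = 1 d from Eq. (1)

E_e_GeV = E_ph_GeV / Gamma
print(f"{E_e_GeV:.0f} GeV")  # ~170 GeV, consistent with the ~200 GeV quoted
```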

The multi-wavelength emission from GRB 190114C has been successfully explained, except for the late-time optical and radio data, which are overproduced by the model. The X-ray, GeV, and MAGIC data are instead well modelled, both in their temporal and spectral behaviour. Interestingly, according to the modelling, the emission detected by LAT at 10⁴ s, after the GRB re-entered the FoV, is dominated by SSC radiation. It is very likely that in other cases, too, LAT photons detected at late times (difficult to reconcile with the synchrotron scenario) have an SSC origin. From the shape of the full LAT lightcurve and spectral evolution alone, the presence of two components cannot be inferred; only the inclusion of VHE observations makes it possible to disentangle the contributions of the two components in the LAT energy range. Previous speculations on the presence of a distinct spectral component in the GeV range were based on the difficulty of producing synchrotron photons at GeV energies at late times. All the investigations focused on inferring the afterglow parameters from the modelling of multi-wavelength observations of GRB 190114C within the SSC scenario [53,76,77] have derived similar values, which lie in the following ranges: ϵe = 0.07–0.1, ϵB = (4–8) × 10⁻⁵, density *n* = 0.3–2 cm⁻³, *p* = 2.5–2.6, and *E*ₖ = (3–8) × 10⁵³ erg, with the exception of ϵB, which in [77] was found to have a larger value, ϵB = (2–6) × 10⁻³.

For GRB 190829A, an SSC interpretation has been put forward by [75] (see the SED modelling in Figure 10, bottom). They found that, in order to obtain a good fit to the data, the fraction of electrons accelerated into the supra-thermal PL tail should be *ζ*e < 0.13. The other parameters are similar to those inferred for GRB 190114C (except for a harder value of *p*): ϵe = 0.03, ϵB = 5 × 10⁻⁵, density *n* = 0.2 cm⁻³, *p* = 2.15, and *E*ₖ = 2 × 10⁵³ erg.

GRB 180720B has been modelled as SSC radiation by [76], who again found values similar to those adopted in the modelling of the other GRBs: ϵe = 0.1, ϵB = 10⁻⁴, density *n* = 0.1 cm⁻³, *p* = 2.4, and *E*ₖ = 10⁵⁴ erg.

These applications of the SSC model to the few GRBs detected so far already reveal some interesting features. The range of ϵB values is still quite large, but values of 0.1–0.01 (considered typical in the treatment of external shocks) are excluded. Moreover, the need to introduce the parameter *ζ*e suggests that, finally, with a larger set of observables available, the degeneracy between the different parameters can be partially broken and the fraction of accelerated electrons constrained. Numerical simulations suggest a fraction between 0.01 and 0.1 (see, e.g., [29]), but given the paucity of constraints, in afterglow modelling this parameter is usually fixed to *ζ*e = 1. In all the investigations, a good modelling of the VHE GRBs has been found both in the case of a homogeneous medium and in the case of a wind-like density profile. All fits are consistent with a homogeneous medium with a density typical of the ISM (*n* = 0.2–1 cm⁻³), even though long GRBs are expected to explode in chaotic media with a radial profile *n* ∝ *R*⁻². The radial profile of the density of the environment surrounding long GRBs thus remains an open question.

### **7. Open Questions for Future Facilities**

The study of TeV emission in GRBs is still at an early stage, and the first, most pressing question on which investigations are focusing is the physical origin of the emission. VHE radiation can also be used, especially once its origin has been understood, to shed light on some of the challenging questions that remain unanswered in the physics of GRBs, in particular the origin of the prompt emission, the properties of the environment where GRBs explode, and the nature and properties of particle acceleration and magnetic field amplification. In this section, the potential of VHE radiation in shedding light on these fundamental questions is discussed.

### *7.1. Prompt Emission*

The mechanisms responsible for the production of prompt radiation have not been identified yet. The lack of a satisfying explanation touches many aspects of the prompt emission, such as the nature of the dissipation process, the location of the emitting region (i.e., its distance from the central engine), the role of thermal processes, and the nature of the non-thermal mechanism (what the emitting particles are, how they are accelerated, and what the radiative mechanism is).

The primary candidate for the radiative mechanism is synchrotron radiation from non-thermal electrons. The common understanding of the conditions at the emitting region (i.e., large magnetic field strengths *B* ∼ 10⁴–10⁵ G and electron Lorentz factors *γ*e ∼ 10²–10³) implies that electrons are in an extremely fast cooling regime [78] and produce a photon spectrum with low-energy index *α* = −1.5. The low-energy part of the observed spectrum, however, is harder than the predicted value, having *α* ∼ −1. A major alleviation of this tension came with the discovery that many GRB spectra have a spectral break in their low-energy part [79,80]: in this case, the spectral regime *α* = −0.67 is visible inside the energy range of the soft/hard X-ray instruments, and the spectral break (identified between 1 and 100 keV [79–84]) connects the segment *α*₁ = −0.67 with the segment *α*₂ = −1.5. In this configuration (sometimes named marginally fast cooling [85]), the break is identified with the cooling break frequency, the radiative efficiency is still large, and the estimates of the jet energetics are not affected. The identification of the cooling break frequency has major consequences for the inferred conditions at the emitting region [81,84]. To account for the large cooling frequencies, the required magnetic field must be much smaller than usually assumed (*B* ∼ 1–10 G, [81,84]). Such a low value of the magnetic field immediately implies a strong SSC component. HE observations of the prompt emission by LAT did not clearly identify the presence of such a component. This information, coming from the study of HE radiation, implies that, in order to avoid a huge SSC component, the emission region should be placed at large radii (*R* ≳ 10¹⁶ cm), barely consistent with variability timescales and close to, or even larger than, the radius where afterglow emission becomes important. To face these difficulties, different models have been invoked, such as the mini-jet scenario [86] and synchrotron radiation from protons ([87]; see, however, [88]).

We note that the identification of the spectral regime *α* = −0.67, and hence the consistency of the overall spectrum with the predicted shape of synchrotron radiation, represents a major improvement; it still faces, however, the difficulty of explaining the hardest GRBs (which cross the line of death, having spectra harder than *α* = −0.67) and the possible inconsistency between the relatively large width of theoretical synchrotron spectra and the narrowness of observed GRB spectra [89–91].
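The marginally fast cooling spectral shape discussed above (segments *α*₁ = −0.67 and *α*₂ = −1.5 joined at the cooling break) can be sketched as a piecewise power law; the break and peak energies below are hypothetical values chosen for illustration only, not fitted quantities:

```python
# Piecewise photon spectrum N(E) with the "marginally fast cooling" indices
# discussed in the text: N ∝ E^-0.67 below the cooling break, E^-1.5 up to
# the peak, and a steeper slope beta above it. Break/peak energies are
# illustrative placeholders.
def photon_spectrum(E_keV, E_break=30.0, E_peak=300.0, beta=-2.3):
    if E_keV < E_break:
        return (E_keV / E_break) ** -0.67
    if E_keV < E_peak:
        return (E_keV / E_break) ** -1.5
    # keep the spectrum continuous across the peak
    return (E_peak / E_break) ** -1.5 * (E_keV / E_peak) ** beta

# the photon spectrum decreases monotonically across the three segments
print(photon_spectrum(10.0) > photon_spectrum(100.0) > photon_spectrum(1000.0))  # True
```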

This discussion makes evident how the identification of the radiative mechanism producing the prompt radiation would have strong implications for the physics of the emitting region, revealing its size, distance from the central engine, magnetic field strength, and bulk Lorentz factor, with an enormous impact on our knowledge of the outflow and, ultimately, of the progenitors and jet-launching mechanisms. The presence (or absence) of an inverse Compton component, or more generally of a GeV–TeV component, would represent an additional and unique probe of the physical conditions in the region producing the prompt radiation. After decades of studies of GRB spectra, and with no immediate improvement in the capabilities of keV–MeV instruments in sight, the GeV–TeV window, explored with current and under-construction facilities, seems the most appropriate tool to advance our knowledge of the processes behind the prompt emission in the next decade.

### *7.2. Short GRBs*

Together with the detection of VHE radiation associated with the prompt emission, the detection of VHE emission from short GRBs represents the other main challenge for the present and future generations of Cherenkov telescopes.

In general, the detection of VHE *γ*-rays (and in particular of SSC radiation) from short GRBs is more challenging, because they are less energetic than long GRBs and generally explode in less dense environments. On the other hand, this implies that detections or tight upper limits on *γ*-ray emission would provide constraints on the environment surrounding the progenitor [92] and hence on the nature and evolution of the progenitor itself. The importance of observing GW counterparts at VHE relies also on the fact that observations of particularly nearby GRBs are only moderately affected by EBL absorption. This simplifies the study of the spectral shape, allowing the identification of spectral features (such as cutoffs and spectral breaks) directly related to the physics of the processes at work in the source. It is evident how the study of GW-GRB-associated signals can shed light on a number of aspects relevant to the study of emission from GRBs, in particular *γ*-ray radiation, but also on the study of the progenitors.

The chances of detecting VHE emission from short GRBs might increase if additional mechanisms able to produce bright VHE emission are at work. A good fraction of short GRBs show late-time activity, such as extended hard X-ray emission continuing for ∼100–300 s, X-ray flares observed even ∼10⁴–10⁵ s after the prompt emission, and plateau emission (extending for ∼10⁴ s). These temporally extended components are commonly interpreted assuming that the central engine activity lasts much longer than the prompt emission, which could be explained by a magnetar or by black hole accretion. The prolonged emission is attributed to internal dissipation. Several studies have investigated whether enhanced (V)HE *γ*-ray emission can be produced by prolonged engine activity in short GRBs.

Late photons related to the extended and/or plateau emission can be upscattered to the VHE band by high-energy electrons accelerated at the external forward shock, via external inverse Compton (EIC) emission [93]. In this scenario, *γ*-ray emission can be useful to reveal the nature of the compact remnant, such as a BH with a remnant disk or a long-lived pulsar. Whether the merger remnant is in general a BH or an NS is an open question. This model found an application in GRB 160821B, a short GRB at *z* = 0.16 with an associated kilonova emission [67,68], for which a signal at ∼TeV energies with 3*σ* significance was found by MAGIC [69].

An additional source of seed photons that can be upscattered to higher energies, producing *γ*-ray emission, might be provided by cocoon photons [94]. If the prolonged jet dissipates kinetic energy inside the cocoon radius, non-thermal electrons will be present in the dissipation region. The jet–ejecta interaction also produces copious thermal photons, leading to high-energy *γ*-ray counterparts to GWs, which can be used to probe the prolonged jets and the long-lasting activity of the central engine.

### *7.3. Circumburst Medium*

The detection of VHE emission associated with the forward external shock can be fundamental in revealing the properties of the external medium surrounding the GRB progenitor.

In a typical case, the available afterglow observations (covering the X-ray band, with sparser coverage in the optical band and, in only a small fraction of cases, the radio band) are not sufficient to constrain the properties of the environment. Short GRBs are expected to explode in tenuous and homogeneous clean media, typical of the outskirts of galaxies. Long GRBs are instead expected to explode preferentially in very chaotic media, with density profiles shaped by the wind of the stellar progenitor in the last phases of its life. This diversity should be reflected in a diversity of the afterglow radiation. The properties of the external medium (in particular its radial profile, homogeneity, density, and magnetic field) shape the radiative output, since they affect the fireball dynamics (i.e., the deceleration time *t*dec ∝ *n*₀^−1/(3−*s*) and the Lorentz factor Γ(*t*) ∝ *n*₀^−1/(8−2*s*)) but also the number of emitting particles and the magnetic field (*B* ∝ *n*(*R*)^1/2). Efforts to model afterglow observations of long GRBs, which are usually more complete and better sampled than those of short GRBs, often fail to discriminate between a constant-density and a wind-shaped medium, as both scenarios can account for the observations. The degeneracy among the many unknown parameters of the afterglow model can be significantly reduced with the inclusion of complementary information obtained at VHE. Studies of the progenitors, of the conditions giving rise to a GRB explosion, of the environments, and of stellar evolution might largely benefit from a characterization of the circumburst environment by means of its broadband afterglow emission.
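The dependence of the dynamics on the density-profile slope can be made concrete with a short sketch (assuming the usual convention *n*(*R*) ∝ *R*^−*s*, with *s* = 0 for a homogeneous medium and *s* = 2 for a wind):

```python
from fractions import Fraction

# Density-profile dependence of the fireball dynamics quoted in the text,
# for n(R) ∝ R^-s: t_dec ∝ n0^(-1/(3-s)) and Gamma(t) ∝ n0^(-1/(8-2s)).
def density_exponents(s):
    return Fraction(-1, 3 - s), Fraction(-1, 8 - 2 * s)

for s, label in ((0, "homogeneous"), (2, "wind-like")):
    e_tdec, e_gamma = density_exponents(s)
    print(f"s={s} ({label}): t_dec ∝ n0^({e_tdec}), Gamma ∝ n0^({e_gamma})")
```

For *s* = 0 this gives *t*dec ∝ *n*₀^−1/3 and Γ ∝ *n*₀^−1/8; for *s* = 2, *t*dec ∝ *n*₀^−1 and Γ ∝ *n*₀^−1/4, illustrating why the two environments imprint different afterglow behaviours.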

### *7.4. Particle Acceleration at Ultra-Relativistic Shocks*

The outcome of particle acceleration at the external shock is described by adopting several free parameters, such as the fraction *ξ*<sub>e</sub> of electrons accelerated into the supra-thermal tail, the fractions *ε*<sub>e</sub> and *ε*<sub>B</sub> of dissipated energy converted into non-thermal energy of the electrons and into magnetic energy, respectively, and the maximum Lorentz factor *γ*<sub>e,max</sub>. Modelling of afterglow observations and theoretical/numerical predictions both suggest a typical value of *ε*<sub>e</sub> ∼ 0.1. The fraction *ξ*<sub>e</sub> is still highly uncertain and usually fixed to 1 in the modelling, while simulations point to a fraction between 0.01 and 0.1 [29]. The parameter *ε*<sub>B</sub> inferred from observations spans a wide range of values, from 10<sup>−7</sup> to 0.1. Finally, *γ*<sub>max</sub> has never been inferred from observations and (if needed) is introduced in the modelling assuming Equation (7).

The inclusion of HE data from LAT has provided constraints on *ε*<sub>B</sub>, suggesting a value much smaller than the near-equipartition one usually assumed (*ε*<sub>B</sub> ∼ 0.01–0.1). On the other hand, LAT data did not help in identifying the location of the synchrotron cutoff, due to the limited signal in the high-energy part of the LAT band (a simple power-law model is usually a good fit to the LAT spectra) and to the possible presence of a VHE component already dominating at energies >1–10 GeV. The extension of the energy range provided by VHE instruments makes it possible to model the VHE component and its contribution to the LAT energy range while disentangling the two spectral components, possibly inferring the cutoff of the synchrotron one. If, on the other hand, VHE data are consistent with being the continuation of the synchrotron emission, the maximum energy of the particles exceeds the limit currently assumed, pointing to the need for a more efficient mechanism able to accelerate electrons up to multi-PeV energies. VHE detections can complete the picture and allow a better interpretation of the SED.

### **8. Concluding Remarks and Future Prospects**

At the dawn of the VHE era in GRBs, the few detections currently available are proving the enormous potential of this energy band in shedding light on several aspects of GRB physics and are giving a major boost to the field.

For a full exploitation of TeV data, an understanding of the origin of this emission first needs to be achieved. The sources of uncertainty come from the difficulty of obtaining simultaneous X-ray, GeV, and TeV observations to build a proper SED, and from the relatively large redshift of the detected GRBs: for three out of four detected GRBs the redshift is between 0.42 and 1.1, where EBL attenuation is severe and increases the uncertainties on the intrinsic spectrum at VHE. The detection of nearby events (*z* < 0.1) is therefore fundamental, but their rate is much smaller compared to the bulk of the population.

Even though the current generation of IACTs has allowed the discovery of VHE radiation from GRBs, the understanding and full exploitation of observations in this energy band are expected to come with the next generation of Cherenkov telescopes. The Cherenkov Telescope Array (CTA) will improve on several technical aspects that are fundamental to increasing the chances of a GRB detection. CTA will have an appreciably lower energy threshold (30 GeV) and cover the entire sky, featuring a sensitivity considerably better than existing instruments and rapid slewing capabilities (180 degrees of azimuthal rotation in 20 s, comparable to MAGIC). Moreover, CTA will be able to measure the spectra and variability of GRBs at multi-GeV energies with unprecedented photon statistics. Preliminary estimates of the expected GRB detection rate amount to a few events per year, depending mostly on the energy threshold and on the delay time [95].

The detection of GRB 190829A up to ∼3.3 TeV opens the interesting possibility of detecting GRBs also with small-sized telescopes (SSTs). The planned ASTRI Mini-Array, composed of nine dual-mirror imaging atmospheric Cherenkov telescopes at the Teide Observatory site, will play a crucial role in the study of the new TeV component by further extending the explored range to energies greater than a few TeV. The target sources are GRBs at particularly low redshift: their detection (even though they will not constitute the bulk of the population) would be very important for studying the properties of GRBs at the highest energies, with minor attenuation by the EBL (and thus smaller uncertainties related to our limited knowledge of the EBL at higher redshifts). Figure 11 shows preliminary results on the detectability of GRBs with the ASTRI Mini-Array. In particular, GRB 190114C was taken as a template, used at its original redshift (*z* = 0.42), and moved to closer redshifts (*z* = 0.25 and *z* = 0.078, the latter being the redshift of the H.E.S.S.-detected GRB 190829A). The lightcurves at 1 TeV in the three cases are compared to the ASTRI Mini-Array sensitivity in the left-hand panel, and the spectra accumulated from 200 s to 800 s are shown in the right-hand panel.

For a full exploitation of current and future GRB observations at VHE, the role of other facilities is fundamental. IACTs require external alerts and localizations with errors < 1°, which are currently assured by BAT and, to a lesser extent, by GBM triggers; in the near future, however, no additional space telescopes will be available. The role and importance of synergies with other facilities is not limited to providing external alerts. The nature of VHE emission can be unveiled only by means of multi-wavelength observations. In particular, GeV observations simultaneous with the VHE detection were of paramount importance for GRB 190114C, showing the presence of a dip in the SED and thus supporting the existence of two distinct components.

At present, it is still unclear whether VHE emission can be accommodated within the standard scenario and whether its physics can be satisfactorily captured by a simple one-zone SSC model. Future observations by current facilities, and in a few years by the CTA, will be able to tell us if a modification (or even a radical change) in our description of afterglow radiation is needed, e.g., in terms of the nature of the radiating particles, the physics of the particle acceleration process, or even the need for developing more accurate two-zone models.

**Figure 11. Left**: lightcurve of GRB 190114C at 1 TeV (dotted purple curve), also moved to redshift *z* = 0.25 (dashed yellow) and *z* = 0.078 (dot-dashed green). The sensitivity of the ASTRI Mini-Array at 1 TeV is shown with a black solid line. **Right**: simulation of the spectrum of GRB 190114C integrated between 200 and 800 s as detected by the ASTRI Mini-Array (purple). Simulations of the same spectrum moved to closer redshifts are also shown. From [96].

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Notes**

<sup>1</sup> https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermilgrb.html accessed on 22 September 2021.

### **References**


96. Stamerra, A.; Saturni, F.G.; Green, J.G.; Nava, L.; Lucarelli, F.; Antonelli, L.A. TeV Transients with the ASTRI Mini-Array: A case study with GRB 190114C. In Proceedings of the 37th International Cosmic Ray Conference—PoS(ICRC2021), Berlin, Germany, 12–23 July 2021; Volume 395, p. 890. [CrossRef]

### *Review* **The Hunt for Pevatrons: The Case of Supernova Remnants**

**Pierre Cristofari**

Observatoire de Paris, LUTH, 5 Place Jules Jansen, 92195 Meudon, France; pierre.cristofari@obspm.fr

**Abstract:** The search for Galactic pevatrons is now a well-identified key science project of all instruments operating in the very-high-energy domain. Indeed, in this energy range, the detection of gamma rays clearly indicates that efficient particle acceleration is taking place, and observations can thus help identify which astrophysical sources can energize particles up to the ∼PeV range, thus being *pevatrons*. In the search for the origin of Galactic cosmic rays (CRs), the PeV range is an important milestone, since the sources of Galactic CRs are expected to accelerate PeV particles. This is how the central scientific goal of 'solving the mystery of the origin of CRs' has often been distorted into 'finding (a) pevatron(s)'. Since supernova remnants (SNRs) are often cited as the most likely candidates for the origin of CRs, 'finding (a) pevatron(s)' has often become 'confirming that SNRs are pevatrons'. Intriguingly, the first detection(s) of pevatron(s) were not associated with SNRs. Moreover, all clearly detected SNRs have so far revealed themselves not to be pevatrons, and the detection of VHE gamma rays from regions not associated with SNRs reminds us that other astrophysical sites might well be pevatrons. This short review aims at highlighting a few important results in the search for Galactic pevatrons.

**Keywords:** pevatrons; Galactic cosmic rays; gamma rays

**Citation:** Cristofari, P. The Hunt for Pevatrons: The Case of Supernova Remnants. *Universe* **2021**, *7*, 324. https://doi.org/10.3390/ universe7090324

Academic Editors: Ulisses Barres de Almeida and Michele Doro

Received: 26 July 2021 Accepted: 26 August 2021 Published: 31 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

### **1. Introduction**

### *1.1. The Hunt for the Sources of Galactic Cosmic Rays*

The term *Pevatron*, simply referring to an object capable of accelerating particles up to the PeV (= 10<sup>15</sup> eV) range, is now widely used in the scientific programs of all major instruments investigating particle acceleration in astrophysical sources. This passionate search for pevatrons is not a stubborn quest for the most powerful astrophysical particle accelerators per se, but is undoubtedly motivated by the hunt for the origin of Galactic cosmic rays (CRs).

CRs are relativistic charged particles, typically ∼92% protons, ∼6% helium, ∼1% electrons and ∼1% heavier nuclei, that fill the whole disk of the Milky Way. Although decades of experiments and theoretical studies have helped to accumulate valuable knowledge, the fundamental question of their origin remains unanswered [1].

It is widely accepted that accelerators located inside our Galaxy must produce the bulk of CRs detected at the Earth. One strong argument for a Galactic origin comes from the detection of gamma rays: observations of the Galaxy have revealed that the Galactic disk is gamma-ray bright, a clear indication of the interactions of CRs with interstellar medium (ISM) material and the Galactic magnetic field in the disk, and that the gamma rays roughly scale with the amount of material in the disk. Moreover, observations of the Large Magellanic Cloud (LMC), located in the vicinity of our Galaxy at ∼50 kpc, have shown that the gamma-ray signal is weaker than expected given the mass of the target material in the LMC [2,3]. This reduced gamma-ray signal from the LMC can be seen as evidence that CRs are not homogeneously distributed in the Universe, and originate within our Galaxy. Additionally, a simple reasoning on the Larmor radius *r*<sub>L</sub> of CRs, compared to the typical size *H* of the Galactic halo, imposes that CRs of energy *E* remain confined within the Galaxy if *r*<sub>L</sub>(*E*) ≲ *H*. For typical magnetic fields of the interstellar medium (ISM) of the order of ∼μG and a halo size of a few ∼kpc (although both of these quantities remain poorly constrained), this simple estimate indicates that CR protons of energy *E* ≲ 10<sup>16</sup>–10<sup>17</sup> eV are expected to remain confined in the Galaxy. Therefore, we have indications that (1) CRs originate inside our Galaxy; (2) CRs are confined inside our Galaxy up to at least 10<sup>16</sup>–10<sup>17</sup> eV. These CRs of Galactic origin are simply called Galactic CRs.
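The confinement argument can be checked on the back of an envelope: for a relativistic proton, *r*<sub>L</sub> = *E*/(*eB*) ≈ 1.08 pc × (*E*/PeV)/(*B*/μG). A minimal sketch (the halo half-height and field value below are illustrative assumptions, not values from the text):

```python
# Larmor-radius confinement check for CR protons.
# r_L = E/(eB) for an ultra-relativistic proton, i.e. ~1.08 pc per PeV per μG.

def larmor_radius_pc(E_PeV, B_microgauss):
    """Larmor radius in parsec for a proton of energy E (PeV) in a field B (μG)."""
    return 1.08 * E_PeV / B_microgauss

halo_kpc = 3.0   # assumed halo half-height of a few kpc (poorly constrained)
B_ism = 1.0      # assumed ISM magnetic field of ~1 μG (poorly constrained)

for E_PeV in (1.0, 100.0):  # 10^15 eV and 10^17 eV
    rL_kpc = larmor_radius_pc(E_PeV, B_ism) / 1000.0
    print(f"E = {E_PeV:6.1f} PeV: r_L = {rL_kpc:.4f} kpc, "
          f"confined: {rL_kpc < halo_kpc}")
```

Even at 10<sup>17</sup> eV the Larmor radius (∼0.1 kpc) stays well below a few-kpc halo, consistent with the confinement estimate above.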

Moreover, the local CR spectrum famously exhibits a power law ∝*E*<sup>−2.7</sup> up to an energy of ∼3 PeV, where the slope becomes ∝*E*<sup>−3</sup>, a feature usually referred to as the *knee* [4]. Let us mention here that some collaborations have also found that the *knee* of the light component (protons + He) might be located at a lower energy, around ∼0.7 PeV [5,6], which could mean that the proton knee is below 1 PeV.

The remarkable, almost featureless power law up to the knee (although, to be precise, several instruments have in fact revealed some deviations from the power law [7]) is seen as an indication that the sources of Galactic CRs must accelerate particles up to the knee, and therefore be *pevatrons*. Hence the wild hunt for Galactic pevatrons, with the sometimes oversimplified argument that "finding the pevatrons" would mean "unveiling the origin of CRs up to the knee", or "unveiling the origin of Galactic CRs". Let us insist here on the fact that, for the source(s) of Galactic CRs, being (a) pevatron(s) is a necessary but not a sufficient condition. It is, for instance, now clear that the Crab Nebula is an astrophysical site where the acceleration of PeV particles (at least electrons) is taking place, yet it is not currently accepted as a typical source of the bulk of Galactic CRs (protons).

### *1.2. 100 TeV Gamma Rays*

The best way to probe particle acceleration up to the PeV range is (so far) through gamma-ray observations in the very-high-energy range (VHE, 100 GeV to 100 TeV) and the ultrahigh-energy range (UHE, above 100 TeV). Indeed, radiation in the VHE and UHE ranges directly testifies to the abundant presence of nonthermal particles. Essentially three mechanisms can produce gamma rays in this range. First, a hadronic mechanism: the creation and decay of neutral pions through the interactions of accelerated hadrons with nuclei of the ISM. Second, a leptonic mechanism: the inverse Compton scattering (ICS) of accelerated electrons on soft photons (CMB, optical, infrared). Third, another leptonic mechanism: the bremsstrahlung emission of accelerated electrons, which can also lead to the production of gamma rays in this range [8,9].

In the ∼TeV domain, it is not easy to differentiate between these potential origins, i.e., to know whether parent hadrons or leptons are responsible for the production of the gamma rays. The interpretation of the origin of observed TeV gamma rays is often subject to fervent discussions, and in many cases the gamma-ray emission can be explained by one mechanism or another; see, e.g., the numerous discussions on many well-studied astrophysical objects, for which the GeV and TeV gamma-ray emission can often be accounted for by a leptonic ICS origin, a hadronic origin, or a mixture of both. The case of the supernova remnant (SNR) RX J1713.7−3946 is a stereotypical example of the copious discussions on the interpretation of the gamma-ray emission [10–22].

It has often been claimed that above ∼50 TeV the interpretation of detected gamma rays becomes less ambiguous, since the inverse Compton scattering of nonthermal electrons becomes dramatically inefficient at producing gamma rays, due to the Klein–Nishina suppression: at these energies, gamma rays are thus expected to testify that the acceleration of hadrons is taking place. Moreover, VHE gamma rays of hadronic origin mimic the parent proton distribution, with a typical gamma-ray energy scaling as *E*<sub>*γ*</sub> ≈ *E*<sub>*p*</sub>/10. Therefore, this has often been condensed into "the detection of ∼100 TeV gamma rays is a direct indication of the acceleration of ∼PeV CR protons".
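The *E*<sub>*γ*</sub> ≈ *E*<sub>*p*</sub>/10 rule of thumb can be written as a one-line helper (a sketch, not code from the paper):

```python
# Map an observed hadronic gamma-ray energy back to the parent proton energy,
# using the order-of-magnitude rule E_p ≈ 10 × E_gamma.

def parent_proton_energy_PeV(E_gamma_TeV):
    """Proton energy (PeV) implied by a pion-decay gamma ray of energy E_gamma (TeV)."""
    return 10.0 * E_gamma_TeV / 1000.0  # factor 10, then TeV -> PeV

print(parent_proton_energy_PeV(100.0))  # -> 1.0 : a 100 TeV photon points to a ~1 PeV proton
```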

Still, this affirmation needs to be tempered. The distinction between the hadronic and leptonic origin of gamma rays is far from straightforward, even in the ∼100 TeV range. Although the Klein–Nishina suppression considerably reduces the production of gamma rays from ICS in this range, leading to an exponential suppression above ∼50 TeV, gamma rays can still be produced by electrons through ICS. A conspicuous amount of VHE electrons can thus produce a gamma-ray signal in the 100 TeV region. Such a signal can be especially important in the context of instruments operating with an ever-increasing sensitivity in the ∼50 TeV range, hence capable of detecting gamma-ray photons even from steep or exponentially suppressed spectra [23–25]. Moreover, the synchrotron emission of accelerated electrons can also efficiently produce gamma rays.

A prime example of gamma-ray emission interpreted as the result of accelerated electrons is the Crab Nebula, the brightest source of the TeV sky and thus one of the best studied objects [26–28]. More generally, several theoretical works have illustrated that electrons accelerated above ∼100 TeV could lead, through inverse Compton scattering or bremsstrahlung, to gamma rays in the ∼100 TeV range (see e.g., [9] and references therein), and can produce hard gamma-ray spectra extending up to ∼100 TeV, making it especially strenuous to differentiate between hadronic and leptonic acceleration mechanisms [29]. Let us additionally mention that the detection of neutrinos would be an effective and elegant way to discriminate between the hadronic and leptonic mechanisms, since neutrinos can only be produced through hadronic interactions [30].

The detection and study of electron accelerators is of course of great importance, but in the search for the origin of CRs it is not enough. First, because CRs are mostly protons, and we are thus looking for proton sources: an efficient electron accelerator is not necessarily an efficient proton accelerator. The case of SNRs is instructive, since in spite of extensive studies the content of electrons and protons accelerated at SNRs is still poorly understood. Second, because unlike protons, electrons suffer important losses while transported in the Galaxy. Therefore, the problem of the origin of CR electrons is more *local* than that of CR protons, and somewhat disconnected [31–34].

So far, on the observational side, all major observatories relying on Imaging Atmospheric Cherenkov Telescopes (IACTs), such as VERITAS [35], MAGIC [36], or H.E.S.S. [37], and predecessors such as HEGRA [38], have been probing the VHE domain up to a few tens of TeV. The search for and identification of pevatrons has therefore been a remarkably challenging task, yet a successful one, since H.E.S.S. was able to claim the detection of the first pevatron in the Galactic center region [39]. Other observatories using different techniques, relying on shower front detectors (or "air shower detectors"), such as HAWC [40], Tibet AS*γ* [41] and LHAASO [28], operating up to and above the ∼100 TeV gamma-ray domain, have also reported the detection of several Galactic pevatrons. All these observations indicate that most pevatron candidates seem not to be associated with SNRs.

In this short review, we especially discuss the case of SNRs, since they have long been at the center of the debate on the origin of Galactic CRs, and the question of whether they can accelerate PeV particles is still open. We then briefly discuss other pevatron candidates and important open questions for the coming years. For a pedagogical introduction to the general physics of CRs, we refer the reader to the reference writings of the field [1,42,43]. For updated reviews discussing recent results and advances on the problem of the origin of CRs, we refer the reader to Blasi [44] and Gabici et al. [45].

### **2. Supernova Remnants**

In the search for the origin of Galactic CRs, supernova remnants (SNRs), the spherical shock waves expanding in the ISM after the explosion of massive stars, have over the years become the most famous candidates. Several reviews detail the reasons for the success of the SNR paradigm [46–48]. In short, the SNR hypothesis is supported by at least the following strong arguments:


power law in momentum ∝*p*<sup>−*α*</sup>, with *α* ∼ 4 for a strong shock (which corresponds, at high energy, to *E*<sup>−*α*+2</sup>), somewhat compatible with the local CR spectrum measured at the Earth.

3. Magnetic field amplification: several mechanisms have been shown to be able to amplify the magnetic field at SNR shock fronts, thereby helping to reach the PeV range. Indeed, the Hillas criterion, where the Larmor radius of particles *r*<sub>L</sub> is equated to the typical size of the accelerator (the SNR radius *r*<sub>sh</sub>), gives that the maximum energy of particles is typically at most:

$$E\_{\rm max} \approx \left(\frac{r\_{\rm sh}}{\rm pc}\right) \left(\frac{u\_{\rm sh}}{1000 \,\rm km/s}\right) \left(\frac{B}{\mu \rm G}\right) \rm TeV \tag{1}$$

with *u*<sub>sh</sub> the SNR shock velocity and *B* the magnetic field. Therefore, in order to attain the PeV range for a typical SNR of a few pc expanding at a few 1000 km/s, values of at least ∼10<sup>2</sup> μG are needed. More precisely, there are now established theoretical results on the growth of instabilities at collisionless shocks, coupled to observational X-ray measurements at SNR shocks, that have reported magnetic field values of several hundreds of μG, i.e., several orders of magnitude above the values of the ISM [53].

4. Gamma rays: the detection of GeV to multi-TeV gamma rays from SNRs, and especially the capacity of current IACTs to resolve the shells corresponding to the spherical expanding shock waves, is an undeniable indication that efficient particle acceleration is taking place.
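The Hillas estimate of Equation (1) is easy to evaluate numerically. The sketch below (illustrative values, not fits from the text) contrasts an unamplified ISM field with an amplified one:

```python
# Equation (1): E_max ≈ (r_sh/pc) × (u_sh / 1000 km/s) × (B/μG) TeV.

def hillas_emax_TeV(r_sh_pc, u_sh_kms, B_microgauss):
    """Hillas-criterion maximum energy, Equation (1), in TeV."""
    return r_sh_pc * (u_sh_kms / 1000.0) * B_microgauss

# A few-pc SNR at a few thousand km/s in an unamplified ~3 μG ISM field:
print(hillas_emax_TeV(3.0, 3000.0, 3.0))    # 27.0 TeV — far from the PeV range
# The same shock with a field amplified to ~100 μG:
print(hillas_emax_TeV(3.0, 3000.0, 100.0))  # 900.0 TeV — approaching a PeV
```

This is precisely why magnetic field amplification (point 3 above) is a pillar of the SNR-pevatron argument: without it, Equation (1) tops out at tens of TeV.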

In spite of these compelling arguments, the SNR hypothesis faces several difficulties, discussed in detail in several reviews [45,54]. Amongst them:


protons. So far, observations of all SNR shells in the VHE domain have revealed cutoffs indicating that PeV CRs are not efficiently produced.

This last point is crucial since, of all the aspects mentioned above, the identification of an SNR pevatron is often seen as the conclusive proof that SNRs are the sources of Galactic CRs. Hence, this has stimulated the investigation of alternative scenarios for the origin of CRs. At least two comments are in order here. First, stricto sensu, current IACTs are not sensitive enough to detect gamma rays in the ∼100 TeV range, but they have still been able to measure the gamma-ray spectrum in the TeV and tens-of-TeV region, revealing exponential suppressions that indicate that the flux in the 100 TeV range should be at least drastically suppressed. Second, several instruments, HAWC, Tibet AS*γ* and LHAASO [28,68,69], have recently reported the detection of ∼100 TeV gamma rays from a region in which the SNR G106.3+2.7 is located. By design, the improved sensitivity in the 100 TeV range of these observatories comes with a poorer angular resolution than that of typical IACTs. It is therefore difficult for these instruments to spatially resolve the source(s) of the 100 TeV emission and specify which astrophysical object(s) is (are) responsible for the acceleration of particles up to the PeV range [68]. In addition, the associated SNR, G106.3+2.7, is middle-aged (∼10 kyr), which makes it a priori a relatively poor pevatron candidate [70]. The roles of the different objects located in this complex region have thus yet to be clarified.

Two possible explanations for the fact that all known TeV shells seem not to be pevatrons are: either SNRs are pevatrons only for a short period of time, a priori in the early stages of their evolution, and all studied SNRs are already too old to accelerate PeV particles; or/and only a small fraction of SNRs are pevatrons [71,72]. In both cases, this could explain the limited chances of identifying an active SNR pevatron, even in future Galactic surveys.

### *2.1. Acceleration up to the PeV Range*

One of the reasons why the best studied SNRs are not pevatrons might be that they are too old to still be pevatrons. Indeed, as an SNR blast wave expands in the ISM, it slows down, and the maximum energy of accelerated particles is thus expected to decrease. Naively, the maximum energy can be estimated by equating the diffusion length of particles *l*<sub>d</sub> to a fraction *χ* of the shock radius *r*<sub>sh</sub>:

$$l\_{\rm d} = \frac{D(E)}{u\_{\rm sh}(t)} \approx \chi r\_{\rm sh}(t) \tag{2}$$

Assuming a Bohm-like diffusion coefficient for relativistic CR particles, *D*(*E*) = (1/3) *r*<sub>L</sub>(*E*)*c*, with *B* ∝ *t*<sup>−*β*</sup>, *r*<sub>sh</sub> ∝ *t*<sup>*α*</sup>, and *u*<sub>sh</sub> ∝ *t*<sup>*α*−1</sup>, this leads to:

$$E\_{\text{max}}(t) \propto B(t)r\_{\text{sh}}(t)u\_{\text{sh}}(t) \propto t^{2\alpha - 1 - \beta} \tag{3}$$

thus *E*<sub>max</sub> decreases in time provided that *β* > 2*α* − 1. For a typical SNR expanding in a uniform ISM, *α* = 4/7 [73,74], so *E*<sub>max</sub> decreases if *β* ≥ 1/7. For an SNR expanding in a wind with density ∝*r*<sup>−2</sup>, *α* = 7/8 and the condition becomes *β* ≥ 3/4.
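The condition of Equation (3) can be checked with a few lines (a sketch; the function simply evaluates the exponent 2*α* − 1 − *β* of *E*<sub>max</sub>(*t*)):

```python
# Equation (3): E_max ∝ B(t) r_sh(t) u_sh(t) ∝ t^(2α − 1 − β).
# E_max decreases with time exactly when this exponent is negative, i.e. β > 2α − 1.

def emax_time_index(alpha, beta):
    """Power-law index of E_max(t), given r_sh ∝ t^α and B ∝ t^(−β)."""
    return 2.0 * alpha - 1.0 - beta

# Uniform ISM (α = 4/7): the threshold is β = 1/7
print(emax_time_index(4/7, 1/7))   # ≈ 0 (marginal case, up to float rounding)
# Wind profile (α = 7/8): the threshold is β = 3/4
print(emax_time_index(7/8, 3/4))   # ≈ 0 (marginal case)
# Any steeper decay of B makes E_max decrease, e.g. β = 1 in a uniform ISM:
print(emax_time_index(4/7, 1.0))   # negative → E_max decreases with time
```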

The nature of the mechanism driving the amplification of the magnetic field needed to reach the PeV range is still a matter of debate. Several mechanisms have been shown to theoretically lead to the growth of instabilities. Let us for example mention (see also the short review [75]):


instabilities): the production of PeV CRs by this mechanism is however expected to be rather limited [80,81].


This so-called nonresonant Bell mechanism is especially important in the case of SNRs, because it can lead to magnetic field amplification with values significantly larger than the pre-existing magnetic field *B*<sub>0</sub>. Moreover, the growth rate of the instabilities is typically faster than in all previously mentioned mechanisms, and this is therefore thought to be the mechanism governing magnetic field amplification at SNRs. A pedagogical discussion of the growth of resonant and nonresonant streaming instabilities can be found in [44]. The growth of the magnetic field from the nonresonant streaming instability is exponential until saturation is reached. The exact details of when saturation is reached are still a matter of discussion, but the typical amplified magnetic field *δB* can be estimated by equating the Larmor radius of particles to the typical plasma spatial displacement induced by the growth of instabilities:

$$
\delta B(t) \approx \sqrt{4\pi \frac{\xi\_{\rm CR}\, \rho(t)\, v\_{\rm sh}^3(t)}{\Lambda c}} \tag{4}
$$

where *ξ*<sub>CR</sub> is the CR efficiency (i.e., the fraction of the shock ram pressure converted into CRs at the shock through DSA), *ρ* the density upstream of the shock front, and Λ = ln(*p*<sub>max</sub>/*mc*), the slope of accelerated particles being assumed ∝*p*<sup>−4</sup>. Equation (4) leads to typical values:

$$
\delta B(t) \approx 2 \left( \frac{\xi\_{\rm CR}}{0.1} \right)^{1/2} \left( \frac{v\_{\rm sh}(t)}{1000 \text{ km/s}} \right)^{3/2} \left( \frac{n}{1 \text{ cm}^{-3}} \right)^{1/2} \mu \text{G} \tag{5}
$$

Thus, for shock velocities of a few thousand km/s and sufficiently high densities, typical for instance of the shock launched by a core-collapse supernova exploding in the dense wind of a late-sequence massive star, values of a few hundred μG can be reached.
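Equation (5) is straightforward to evaluate. The sketch below (illustrative parameter values) contrasts a moderate shock in a tenuous ISM with a fast shock in a dense wind:

```python
# Equation (5): δB ≈ 2 (ξ_CR/0.1)^(1/2) (v_sh / 1000 km/s)^(3/2) (n / 1 cm^-3)^(1/2) μG.

def delta_B_microgauss(xi_cr, v_sh_kms, n_cm3):
    """Amplified field from the nonresonant streaming instability, Equation (5), in μG."""
    return 2.0 * (xi_cr / 0.1) ** 0.5 * (v_sh_kms / 1000.0) ** 1.5 * n_cm3 ** 0.5

# Moderate shock in a tenuous ~1 cm^-3 ISM: only a few μG
print(delta_B_microgauss(0.1, 1000.0, 1.0))    # 2.0 μG
# Fast shock in a dense progenitor wind (~10^3 cm^-3): several hundred μG
print(delta_B_microgauss(0.1, 5000.0, 1e3))    # ≈ 707 μG
```

The second case reproduces the "few hundred μG" quoted in the text, and feeds directly into the Hillas-type estimate of the maximum energy.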

The growth of nonresonant instabilities drives the amplification of the magnetic field and thus allows higher values of the maximum energy of accelerated particles to be reached. Refs. [85,86] discussed that this maximum energy can be estimated by considering that the growth of instabilities saturates when the amplification reaches a number N of *e*-foldings. For the maximum growth rate *γ*<sub>max</sub>, corresponding to wave numbers *k*, this condition reads *γ*<sub>max</sub>*t* ≈ N. The value of N is not well constrained, but arguments in favor of 3 ≤ N ≤ 10 have been made [85]. The corresponding maximum energy reads:

$$E\_{\rm max}(t) \approx \frac{1}{\mathcal{N}}\, e \sqrt{4\pi\rho(t)}\, \frac{\xi\_{\rm CR}\, v\_{\rm sh}^3(t)\, t}{c\Lambda} \tag{6}$$

Writing Equation (6) with explicit numerical values:

$$E\_{\text{max}}(t) \approx 1 \left(\frac{\xi\_{\rm CR}}{0.1}\right) \left(\frac{v\_{\text{sh}}(t)}{5000 \,\text{km/s}}\right)^3 \left(\frac{t}{100 \,\text{year}}\right) \left(\frac{n}{10 \,\text{cm}^{-3}}\right)^{1/2} \text{PeV} \tag{7}$$

where N = 5 is assumed. This shows that, in order to reach the PeV range at times of the order of a century, relatively high shock velocities and densities are required. However, a careful calculation including the precise quantities associated to typical SNRs shows that in most of the evolution of typical SNRs—in most of the free expansion phase, and in the adiabatic Sedov–Taylor phase—the PeV range cannot be attained. Indeed, in SNRs from thermonuclear supernovae, high shock velocities ∼5000 km/s can be observed, but usually in association with an ISM density of the order of ∼1 cm<sup>−3</sup>. On the other hand, for SNRs from core-collapse SNe (CCSNe), which correspond to the bulk of SNe, the higher densities provided by the dense wind of the late-sequence progenitor star can be as high as ∼10<sup>3</sup>–10<sup>4</sup> cm<sup>−3</sup> (decreasing as ∝*r*<sup>−2</sup>), but the shock velocity is thereby substantially decreased, below ∼500 km/s. It therefore seems that only SNRs from peculiar SNe, sufficiently energetic and/or exploding in dense winds, are capable of efficiently producing PeV particles [87]. Moreover, the duration of the pevatron phase of these SNRs appears to be rather limited. By estimating the total content of protons produced by typical SNRs and by peculiar pevatron SNe, it has been possible to use the local CR proton spectrum to constrain the rate of Galactic SNR pevatrons to a fraction ∼3–4% of the Galactic SN rate, i.e., *ν* ≈ 0.1/century [72]. Moreover, since these peculiar SNRs are typically pevatrons on timescales of one century, this corresponds to one SNR pevatron active for of the order of ∼1 century every few ∼10,000 years, drastically limiting the chances of catching an active SNR pevatron. In other words, it could be that SNRs are indeed producing the bulk of CRs, but that the short duration of the pevatron phase prevents us from seeing one in activity.
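The scaling of Equation (7) (with N = 5 absorbed in the normalization) can be evaluated directly; the parameter choices below are illustrative:

```python
# Equation (7): E_max ≈ (ξ_CR/0.1) (v_sh/5000 km/s)^3 (t/100 yr) (n/10 cm^-3)^(1/2) PeV,
# with N = 5 absorbed in the normalization.

def emax_bell_PeV(xi_cr, v_sh_kms, t_yr, n_cm3):
    """Bell-saturation maximum energy, Equation (7), in PeV."""
    return (xi_cr / 0.1) * (v_sh_kms / 5000.0) ** 3 * (t_yr / 100.0) * (n_cm3 / 10.0) ** 0.5

# Fast, century-old shock in a dense wind (n ~ 10 cm^-3): reaches the PeV range
print(emax_bell_PeV(0.1, 5000.0, 100.0, 10.0))  # 1.0 PeV
# Same shock in a ~1 cm^-3 ISM (the thermonuclear-SNR case): falls short of the knee
print(emax_bell_PeV(0.1, 5000.0, 100.0, 1.0))   # ≈ 0.32 PeV
```

The cubic dependence on *v*<sub>sh</sub> is what confines the pevatron phase to the earliest, fastest stages: halving the shock speed reduces *E*<sub>max</sub> by a factor of eight.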

### *2.2. Supernovae*

Discussions of magnetic field amplification, e.g., through the growth of nonresonant streaming instabilities, lead to the idea that the maximum energy of accelerated particles scales in time as ∝*t* *v*<sub>sh</sub><sup>3</sup>(*t*) *ρ*<sup>1/2</sup>(*t*). For CCSNe, in which the velocity in the early stage (free expansion phase) is typically *v*<sub>sh</sub> ∝ *t*<sup>−1/8</sup> in a wind of profile *n* ∝ *r*<sup>−2</sup>, this corresponds to *E*<sub>max</sub> ∝ *t*<sup>−1/4</sup>. In the case of a thermonuclear SN expanding in a uniform ISM, *v*<sub>sh</sub> ∝ *t*<sup>−3/7</sup> in the free expansion phase, thus *E*<sub>max</sub> ∝ *t*<sup>−2/7</sup>. This indicates that the maximum energy is reached at the earliest times, and supports the idea that if some SNRs are pevatrons, they most likely are pevatrons when they are youngest. Since the duration of the potential pevatron phase is virtually unknown, some authors have investigated the possibility that PeV acceleration is especially efficient during the few days/months/years after the explosion of SNe [88–99]. Such efficient particle acceleration could lead to considerable emission of gamma rays from the GeV to the multi-TeV range, a priori mostly through hadronic interactions because of the high density of the circumstellar environment in which CCSNe explode. Of course, in our Galaxy, the problem of the low number of events remains, with an inferred SN rate of typically ∼3/century, but it has been argued that the detection in the gamma-ray domain of close-by extragalactic SNe could help to study acceleration up to the PeV range in the first days after the explosion (in other words, in extremely young SNRs).
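The exponents quoted above follow directly from *E*<sub>max</sub> ∝ *t* *v*<sub>sh</sub><sup>3</sup> *ρ*<sup>1/2</sup>; a minimal sketch evaluating the resulting time index:

```python
# E_max ∝ t · v_sh(t)^3 · ρ(t)^(1/2): with v_sh ∝ t^a and ρ ∝ t^b,
# the time index of E_max is 1 + 3a + b/2.

def emax_index(v_index, rho_index):
    """Time index of E_max given v_sh ∝ t^v_index and ρ ∝ t^rho_index."""
    return 1.0 + 3.0 * v_index + 0.5 * rho_index

# CCSN in a wind: v_sh ∝ t^(-1/8); ρ ∝ r^-2 with r ∝ t^(7/8), so ρ ∝ t^(-7/4)
print(emax_index(-1/8, -2 * 7/8))  # -0.25, i.e. E_max ∝ t^(-1/4)
# Thermonuclear SN in a uniform ISM: v_sh ∝ t^(-3/7), ρ constant
print(emax_index(-3/7, 0.0))       # ≈ -0.286, i.e. E_max ∝ t^(-2/7)
```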

The possibility for instruments to detect such SNe is essentially limited by a physical process that can degrade the gamma-ray signal: two-photon interactions, in which a gamma ray interacts with a low-energy photon from the SN photosphere to produce an electron–positron pair, can attenuate the gamma-ray flux by several orders of magnitude. In the case of the luminous supernova SN 1993J, it has been shown that a detection of gamma rays from energetic extragalactic SNe within a few Mpc is possible with next-generation instruments [100].
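The relevant target photons follow from the pair-production threshold, *E*<sub>γ</sub> *ε* (1 − cos *θ*) ≥ 2(*m*<sub>e</sub>*c*<sup>2</sup>)<sup>2</sup>. A small sketch (function name ours, for illustration) showing that TeV gamma rays are absorbed by the eV-range (optical/IR) photons that a SN photosphere emits copiously:

```python
M_E_C2_EV = 0.511e6  # electron rest energy m_e c^2, in eV

def target_threshold_ev(e_gamma_ev, head_on=True):
    """Minimum target-photon energy for gamma-gamma pair production,
    eps_th = 2 (m_e c^2)^2 / [E_gamma (1 - cos theta)].
    A head-on collision (cos theta = -1) gives the lowest threshold."""
    one_minus_cos = 2.0 if head_on else 1.0
    return 2.0 * M_E_C2_EV**2 / (e_gamma_ev * one_minus_cos)

# A 1 TeV gamma ray can pair-produce on photons above ~0.26 eV (near-IR):
eps_th = target_threshold_ev(1e12)
```

This is why the attenuation is most severe in the first days, while the photosphere is hottest and brightest.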

### *2.3. Archeology of Pevatrons*

Whether for SNe, SNRs, or other astrophysical objects, since the duration of the pevatron phase is expected to be limited (months, years, or a few centuries), and given the low rate of these objects, we might be in a situation where the number of active Galactic proton pevatrons is low. See, e.g., the discussion above on SNR pevatrons from energetic SNe, with an expected rate of 0.1/century, each typically active for a duration of the order of ∼1 century. It is therefore possible that SNRs do produce the bulk of Galactic CRs, but that no proton SNR pevatron is currently active in the Galaxy. In such a situation, we need alternative ways to probe the past activity of pevatrons, i.e., to attempt an archaeology of pevatrons<sup>1</sup>. The signature of an extinct pevatron could for instance be found through deeper studies of SNR shocks (e.g., extrapolating the shock dynamics backwards), by probing the molecular clouds (MCs) surrounding pevatron candidates, for instance through gamma-ray observations of MCs in their vicinity [101–103], or through the study of the radiation of secondary particles produced around pevatrons [104].

### **3. The Detection of Galactic Pevatrons with Gamma Rays**

### *3.1. The Crab Nebula*

The Crab Nebula is probably one of the best studied objects in the Universe: its emission has been monitored over several decades and across the entire accessible electromagnetic spectrum [105]. Extensive observations of the Crab Nebula have revealed variability in the gamma-ray domain [26,106,107]. These *flares*, typically lasting from several hours to a few days, have often been interpreted as the result of efficient acceleration of electrons up to the PeV range, emitting gamma rays through the synchrotron mechanism. Discussions of the acceleration mechanisms usually invoke magnetic reconnection around the Crab pulsar and shock acceleration at the pulsar-wind termination shock [108–115]. Recently, the LHAASO instrument reported the detection of gamma rays above ∼1 PeV, additional confirmation that an electron pevatron is located within the Crab Nebula [28,116]. This means that, unexpectedly, the first detection of a pevatron corresponded to an electron pevatron, a reminder that finding a pevatron does not immediately solve all the problems of CR physics.

Currently, the mechanisms involved in electron acceleration at the Crab Nebula are still not well understood, which makes the case of the Crab Nebula even more remarkable. Several mechanisms have been proposed, for instance: (a) DSA at the termination shock formed between the pulsar relativistic wind and the circum-pulsar medium; (b) driven magnetic reconnection of the alternating field lines compressed at the shock; (c) resonant cyclotron absorption in ion-doped plasma, in which electrons and positrons absorb energy emitted by ions in the plasma (see [117] for a short and pedagogical discussion). Theoretical works, through magnetohydrodynamic (MHD) and particle-in-cell (PIC) simulations, are helping to advance our understanding of particle acceleration around pulsars.

### *3.2. The First Galactic Pevatron Detected through VHE Gamma Rays*

While multiple VHE observations of SNR shells with IACTs were revealing that none of the known SNRs seems to be a pevatron, observations with H.E.S.S. showed that gamma rays up to ∼100 TeV seem to originate from the Galactic center region around Sagittarius A\*, in a source called HESS J1745-290, thereby indicating the presence of a pevatron in the Galactic center. No SNR seems to be associated with this emission, making this first catch of a pevatron even more startling than expected. As previously mentioned, the 100 TeV range is not directly accessible to H.E.S.S., but the observations revealed a hard spectrum with little suppression up to 10–20 TeV, strongly suggesting that 100 TeV gamma rays are produced [39]. This result has been confirmed by other instruments such as VERITAS [118]. Various interpretations of the gamma-ray emission of this pevatron candidate have been proposed, including supernovae and clusters of massive stars [94,119], millisecond pulsars [120–122], and particle acceleration by the supermassive black hole [123,124].

### *3.3. A Population of Galactic Pevatrons*

In recent years, refined observations with various techniques, such as IACTs (e.g., H.E.S.S. [125], VERITAS [35], MAGIC [126]; see [127–129] for reviews) and shower-front detectors (e.g., HAWC [130], Tibet AS*γ* [41], LHAASO [23]), together with systematic surveys of the sky, have helped discover several Galactic pevatron candidates. The HAWC observatory has for instance reported the detection of at least ten sources with emission above ∼50 TeV of likely Galactic origin [40,68]. The case of J2227+610 is especially exciting as a new potential Galactic pevatron [68], since it has also been detected with Tibet AS*γ* [69] and is associated with the SNR G106+2.7. Arguments in favor of the acceleration of PeV particles in the pulsar wind nebula VER J2227+608, based on gamma-ray observations with Fermi-LAT, had already been made [131], which makes this case of special interest, although the situation remains quite unclear due to the complexity of the region [132,133].

Finally, with the detection by LHAASO of gamma rays of energy ∼100 TeV from a dozen sources that are likely located in the Galaxy, another important milestone has been reached: the first detection of a population of pevatrons [28]. So far, the spectral shape of most of these sources is still not well established, which makes it difficult to determine whether their origin is leptonic or hadronic. Moreover, source association remains a difficult task given the typical angular resolution of the order of ∼0.3° at 100 TeV, but these detections are nonetheless an important step in the hunt for Galactic pevatrons. In Table 1 we give a quick overview of the known Galactic pevatron population as of May 2021, although this list is likely to grow soon thanks to ongoing surveys.



### **4. All Kinds of Pevatrons: Pulsars, Massive Stars, Stellar Clusters & Superbubbles**

*4.1. Pulsars and Surroundings*

The Crab Nebula, as mentioned above, is a clearly accepted example of a pevatron, where the gamma-ray flares reveal the acceleration of ultrarelativistic electrons, injected by the pulsar and energized to at least the PeV range. Although it is possible that protons are also accelerated, the Crab is a clear example of an electron pevatron. It illustrates the potential of pulsars, and of systems hosting pulsars, to be efficient particle accelerators. The detection of several halos of TeV gamma rays, often referred to as *TeV halos*, is connected to this issue of particle acceleration around pulsars [116,138,139].

### *4.2. Massive Stars, OB Associations and Stellar Clusters*

Aside from the fact that most SNRs seem not to be pevatrons, other issues remain: for example, the spectra of particles accelerated at SNR shocks, the spectra of particles injected into the ISM, and the electron-to-proton ratio of the accelerated and injected particles. These tensions have motivated the search for potential alternative accelerators: massive stars, whose dense winds lead to the formation of a strong shock at the wind termination, were for instance proposed as Galactic CR factories [140–142]. Moreover, as massive stars tend to form in clusters, it has been proposed that the shock created by the collective effect of the winds of individual stars could accelerate protons above the PeV range and contribute to the production of Galactic CRs [143,144]. For a review on particle acceleration in star-forming regions we refer the reader to Bykov et al. [145].

In addition to these theoretical arguments, the detection of gamma rays from star clusters, such as Westerlund 1 and Westerlund 2 detected by H.E.S.S. [146,147], has been a direct confirmation that efficient particle acceleration is taking place. Moreover, spatially resolved observations of stellar clusters have revealed that the distribution of particles from massive stars seems to scale as the inverse of the distance from the sources, an indication of steady production and injection of particles into the ISM. This supports the idea that these objects could be prominent sources of CRs [148–150]. Furthermore, the detection by the new LHAASO instrument of a population of at least 12 sources emitting gamma rays above ∼100 TeV testifies to the presence of at least 12 pevatrons of Galactic origin, as reported in Table 1. Although it is still difficult to draw firm conclusions on their association with known objects, most of these sources seem not to be associated with SNR shocks. On the other hand, some sources, for example J2032+4102, could potentially be associated with massive stars. Precise measurements of the differential gamma-ray spectra of these sources are needed to understand whether the emission is due to the acceleration of protons or electrons above the PeV range, and deeper studies of the spatial origin of this emission will help identify which astrophysical objects are responsible for the production of these particles.

### *4.3. Superbubbles*

In the same vein as star clusters, superbubbles, the giant cavities of hot and turbulent plasma inflated by repeated neighboring SN explosions, have also been proposed as potential pevatrons. Superbubbles have especially been mentioned in the context of the search for the origin of CRs because they could potentially account for unexplained trends in the CR composition. Indeed, measurements of the CR spectrum indicate that the abundance of volatile elements (e.g., N, Ne, Ar) seems to correlate with atomic mass, a correlation that is not clearly found for refractory elements (e.g., Mg, Si, Fe) (see [54] for a review on the topic). It has been shown that a mixture of material of primordial solar composition with material enriched by stellar outflows or ejecta could explain the observed trends [151]: such a mixed ISM can typically be found in superbubbles. Moreover, superbubbles might help overcome another identified problem: the abundance of Be and B. In metal-poor halo stars, Be and B have been found to increase linearly with metallicity, rather than quadratically as expected in a standard scenario in which CRs are accelerated in a non-enriched ISM [152]. Superbubbles might help explain this linear dependence. This makes them interesting candidates in the CR origin problem, and, as for any good CR factory candidate, the question of particle acceleration up to the PeV range is central [153–157]. The detection of gamma rays from the Cygnus X superbubble [158] supports the need for further investigations with next-generation instruments optimized in the 100 TeV range. Finally, the detection of high-energy neutrinos from the Cygnus Cocoon region, in association with gamma rays, is a direct indication that efficient acceleration of hadrons is taking place, and has been claimed to be clear evidence that a proton pevatron is located in the Cygnus region [159].

### *4.4. Other Candidates*

In the list of potential proton pevatrons, and thus possible contributors to the CR proton spectrum, we must also mention other astrophysical objects, such as low-luminosity black-hole binaries and their magnetically arrested disks [160], or direct acceleration of protons in pulsar magnetospheres [161]. Although we only briefly mention these recently proposed scenarios here, they clearly deserve full attention and must be kept in mind in the hunt for pevatrons.

### *4.5. Superpevatrons*

The improved performance of instruments has made possible the detection of gamma rays of energies even above 100 TeV. For instance, HAWC has reported the detection of gamma rays around ∼200 TeV from J1825-134 [68], Tibet AS*γ* the detection of gamma rays of ∼100 TeV potentially associated with the region around SNR G106+2.7 or PSR J2229+6114, as previously discussed [69], and the LHAASO collaboration has reported the detection of gamma rays of ∼1 PeV around the source J2032+4102 and from the Crab Nebula [116]. Should these gamma rays be due to hadronic interactions, this would indicate the acceleration of ∼10 PeV CR hadrons, and thus point to Galactic superpevatrons. These superpevatrons might play a role in shaping the CR spectrum above the *knee*, and open many questions on the origin of CRs above the knee and on the transition between CRs of Galactic and extragalactic origin.

### **5. Gamma Rays: Limitations and Hopes for the Future**

Gamma-ray instruments have now clearly demonstrated their potential in the search for pevatrons. Nevertheless, several key aspects of particle acceleration up to the PeV range have not yet been understood through observations in the gamma-ray range: the astrophysical site(s), the physical mechanism(s), and the duration of the pevatron phase(s). In order to move forward on these issues, it is essential to improve the spatial, temporal, and spectral resolution. This statement is quite general in astronomy, since improving performance is always a strong motivation behind new generations of instruments at any wavelength; in the case of gamma-ray instruments and the search for pevatrons, it is now essential in the VHE and UHE domains. It is worth mentioning the typical orders of magnitude that matter in the hunt for pevatrons. First, given how the sensitivity of gamma-ray instruments scales with observing time, sources usually have to be targeted for at least several tens of minutes, and often for ∼tens of hours, which naturally limits the amount of observing time available per source. In turn, this often prevents repeated observations of a target of interest on monthly timescales, and thus makes it challenging to probe the time dependence of the gamma-ray emission. Second, the angular resolution, even in the most favorable case of next-generation IACTs (such as CTA) [24], is at best of the order of a few arcmin at ∼100 TeV and ∼10 arcmin at 100 GeV, which limits the possibility of probing extended emission even around Galactic accelerators. This is problematic, for instance, for addressing the question of the escape of particles from their accelerators, or for studying complex regions around pulsars (e.g., TeV halos) [162].
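The first point can be made quantitative: in the background-limited regime the significance grows as √*t*, so the required observing time scales as the inverse square of the source flux. A minimal sketch (the normalisation constants are purely illustrative, not real instrument specifications):

```python
def required_time_hours(flux_crab_units, sigma_target=5.0,
                        t_1crab_5sigma=0.05):
    """Observing time needed for a detection in the background-limited
    regime, where significance S ∝ flux · sqrt(t), hence t ∝ (S/flux)^2.
    Normalised to a hypothetical instrument reaching 5 sigma on a 1-Crab
    source in 0.05 h (illustrative numbers only)."""
    return t_1crab_5sigma * (sigma_target / 5.0)**2 / flux_crab_units**2

# A source 100x fainter needs 10,000x more time: minutes become weeks.
t_faint = required_time_hours(0.01)   # 1% Crab
t_bright = required_time_hours(1.0)   # 1 Crab
```

This quadratic penalty is what makes repeated, monthly-cadence monitoring of faint pevatron candidates so expensive in observing time.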

Finally, the spectral performance. Again, even in the favorable case of IACTs, obtaining a differential spectrum from a point source requires a significant amount of observing time (typically several hours), and the resulting spectrum is usually composed of ∼10–20 bins per decade at ∼1 TeV, with error bars increasing at high energy as the flux decreases. At least two comments are in order. In the context of the search for pevatrons through gamma-ray observations, it is now clear that the shape of the spectrum above 50 TeV is crucial (and especially above 100 TeV), if only to discriminate between a leptonic and a hadronic origin. We have seen that, thanks to the remarkable performance of air shower instruments (such as LHAASO), integrated spectra can reveal the presence of pevatrons, but the characterization of their spectra is also essential: spectral index, existence of an exponential suppression, gradual change of slope. These details imprinted in the gamma-ray spectrum can unveil precious information on the parent particle content. For instance,

a sharp exponential cutoff in the 50 TeV range is expected in the case of parent electrons, due to the Klein–Nishina suppression, whereas a more gentle steepening could signal the presence of parent protons, or of synchrotron emission from electrons, as for example in the Crab gamma-ray spectrum and its gradual steepening over three energy decades [116]. Moreover, differences in the initial spectra of parent protons and electrons could also imprint the gamma-ray spectra. For example, at SNR shocks, it has been shown that the high-energy suppression for protons is often well approximated by an exponential ∝ exp(−*p*/*p*max), whereas for electrons, if the diffusion is Bohm-like and the maximum energy is loss-limited, the spectrum is expected to be suppressed as ∝ exp(−(*p*/*p*<sup>e</sup>max)<sup>2</sup>) [163,164].
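To see how quickly these two cutoff shapes diverge, one can evaluate the generic suppression factor exp[−(*E*/*E*cut)<sup>*β*</sup>] for *β* = 1 (exponential, proton-like) and *β* = 2 (super-exponential, loss-limited electrons). A minimal sketch (function name ours):

```python
import math

def suppression(e, e_cut, beta):
    """High-energy suppression factor exp[-(E/E_cut)^beta]:
    beta = 1 -> simple exponential cutoff (proton-like),
    beta = 2 -> super-exponential cutoff (loss-limited electrons
                under Bohm-like diffusion)."""
    return math.exp(-((e / e_cut) ** beta))

# At E = 3 E_cut the two shapes already differ by exp(6) ~ 400,
# i.e., well beyond typical flux uncertainties -- if the data reach there.
ratio = suppression(3.0, 1.0, 1) / suppression(3.0, 1.0, 2)
```

The catch, of course, is that the discriminating power lies precisely in the cutoff region, where the photon statistics are poorest.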

Furthermore, it has been shown that the shape of the suppression at VHE can also vary between different accelerators, even considering only the acceleration of protons. This is for instance the case for stellar clusters. As detailed in Morlino et al. [144], who proposed a semianalytical description of the acceleration of particles at the wind termination shock of stellar clusters, the geometry and the diffusion regime, both inside the stellar cluster wind and outside the termination shock, can shape the suppression at the highest energies. In principle, the study of the gamma-ray spectral index at the highest energies could therefore offer precious indications on the physical parameters of accelerators of interest, although such a study will of course remain extremely challenging even for next-generation IACTs.

Eventually, let us observe that gamma-ray astronomers have acquired the habit of fitting differential spectra with power laws, broken power laws, exponentially suppressed power laws, or other usual simple functions. This approach is physically reasonable, effective, and useful, but it can sometimes become decorrelated from any underlying assumption about the physical mechanism involved. Moreover, in many cases, the quality of the data after high-level analysis is still limited and allows for some freedom in the fitting. One can then wonder whether a given function providing a decent fit to a set of data is really more motivated than another, and it has to be kept in mind that fitting with the usual functions might be artificial and lead to a loss of information. As mentioned above in the case of stellar clusters, at VHE, deviations from a clean exponential cutoff could signal different physical effects, and it is especially important to keep this point in mind for future investigations of gamma-ray spectra in the ∼50–100 TeV range. Indeed, a wide variety of spectra could be revealed in this range, from steep to exponentially suppressed, or modified exponentials, including gradual changes of index or even bumps and features, all holding valuable information.

In addition to the gamma-ray domain, it is essential to mention observations at other wavelengths useful in the pevatron search, down to the radio domain, where the synchrotron emission of accelerated electrons or of secondary particles can help probe the processes at stake [71,104,165,166]. Multimessenger observations (especially of neutrinos) are also encouraging [167–169] (see e.g., [30,170,171] for reviews).

### **6. Open Questions and the Hunt in the Coming Years**

The recent detection of several Galactic pevatrons with various instruments operating in the gamma-ray domain is gripping and might help us close in on the origin of Galactic CRs. Although it has often been claimed that the detection of 100 TeV gamma rays would be unequivocal proof of the acceleration of PeV protons, and would thus almost straightforwardly uncover the sources of Galactic CRs, it is now clear that things are a bit more complicated. The detection of Galactic pevatrons has perhaps brought more questions than answers. Among these questions:


With the detection of several Galactic pevatrons, whose astrophysical nature and spectral properties have yet to be understood, it is now clear that finding ∼100 TeV gamma rays, and thus catching a pevatron, is not enough to solve the problem of the origin of Galactic CRs. Sources of Galactic CRs must necessarily be pevatrons, but this condition is not sufficient. In order to win the title of "main source(s) of Galactic CRs", an astrophysical site will not only have to be a pevatron, but will also have to resolve the different issues currently faced by SNRs, such as: the spectrum of particles injected into the ISM, the chemical composition of CRs (e.g., the <sup>22</sup>Ne/<sup>20</sup>Ne ratio), the fundamental physical processes driving the acceleration of particles and the amplification of the magnetic field, etc. Moreover, the role of the different detected pevatrons in the production of CRs, and their contribution to the LIS, will have to be understood. Fundamentally, one also has to be prepared for the idea that not just one class (or "type") of source produces the bulk of Galactic CRs, but that the local CR spectrum might in fact be the result of a complex intertwining of many sources, e.g., acceleration and injection at very high energy from some sources for a limited amount of time, reacceleration at other sources, etc., that together produce the measured local CR spectrum. The study of pevatrons will undoubtedly help to better understand the fundamental physical processes at stake (growth of instabilities, escape from sources, content injected into the ISM), but will likely not be sufficient to answer all the questions opened by CR physics. Complementary studies of the propagation of CRs in the Galaxy and of the CR composition are vital to provide a complete understanding of the local CR spectrum.

In the coming years, multiwavelength and multimessenger observations will also be essential in order to progress on the question of the origin of CRs. The gamma-ray range itself is expected to play a central role in probing the content of accelerated particles and studying the astrophysical objects involved. The instruments already operating in the VHE range, such as H.E.S.S., VERITAS, MAGIC, LHAASO, and HAWC, and the next-generation instruments, such as CTA [24] and SWGO [25], with constantly improving energy and angular resolution, are needed to better understand the gamma-ray spectra in the ∼100 TeV range, the gamma-ray-emitting regions, and the distribution of particles around Galactic pevatrons.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** PC warmly thanks F. Aharonian, P. Blasi, S. Gabici, J. Holder, A. Marcowith, P. Martin, G. Morlino, E. Peretti, M. Renaud, H. Sol, and the entire LUTH PHE team for stimulating discussions.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Note**

<sup>1</sup> A term borrowed from J. Holder.

### **References**


### *Review* **Probing Quantum Gravity with Imaging Atmospheric Cherenkov Telescopes**

**Tomislav Terzić <sup>1,</sup>\*, Daniel Kerszberg <sup>2</sup> and Jelena Strišković <sup>3</sup>**


**Abstract:** High energy photons from astrophysical sources are unique probes for some predictions of candidate theories of Quantum Gravity (QG). In particular, imaging atmospheric Cherenkov telescopes (IACTs) are instruments optimised for astronomical observations in the energy range spanning from a few tens of GeV to ∼100 TeV, which makes them excellent instruments to search for effects of QG. In this article, we will review QG effects which can be tested with IACTs, most notably the Lorentz invariance violation (LIV) and its consequences. It is often represented and modelled with a photon dispersion relation modified by introducing energy-dependent terms. We will describe the analysis methods employed in the different studies, allowing for careful discussion and comparison of the results obtained with IACTs over more than two decades. Loosely following the historical development of the field, we will observe how the analysis methods were refined and improved over time, and analyse why some studies were more sensitive than others. Finally, we will discuss the future of the field, presenting ideas for improving the analysis sensitivity and directions in which the research could develop.

**Keywords:** very-high-energy gamma-ray astrophysics; relativistic astrophysics; astroparticle physics; imaging atmospheric Cherenkov telescopes; Quantum Gravity; Lorentz invariance violation; time of flight; modified photon interactions



**Citation:** Terzić, T.; Kerszberg, D.; Strišković, J. Probing Quantum Gravity with Imaging Atmospheric Cherenkov Telescopes. *Universe* **2021**, *7*, 345. https://doi.org/10.3390/universe7090345

Academic Editor: Ezio Caroli

Received: 18 June 2021 Accepted: 31 August 2021 Published: 14 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).


### **1. Introduction and Motivation**

The general theory of relativity is a beautiful and elegant theory, which connects the local matter and energy content to the curvature of spacetime, thus giving a classical description of gravity. It has been heavily tested and scrutinised ever since Albert Einstein proposed it in 1915 [1]. Nevertheless, it breaks down in extreme circumstances such as singularities within black holes, or the early universe. Therefore, it is expected that there exists a more fundamental quantum theory of gravity, which can handle these extreme situations. Furthermore, a quantum theory of gravity would be a giant leap towards unification of all fundamental forces.

Theoretical endeavours in formulating the theory of QG have explored different avenues (see, e.g., [2–7]). However, despite significant efforts, a complete and consistent description of gravity on a quantum level remains unknown. In addition, many of the QG models include departures from the Lorentz symmetry (see, e.g., [8–11]). Performing measurements in the expected realm of QG would strongly hint in which direction theoretical research should proceed. Unfortunately, the expected domain of QG is the Planck scale<sup>1</sup>. Even if QG emerges at energies several orders of magnitude below *E*Pl, it is still vastly above the highest energies accessible in contemporary human-built accelerators. When technology falls short, we turn to nature's own accelerators: active galactic nuclei, gamma-ray bursts, supernova remnants, pulsars, etc. The most energetic particles detected to date, a cosmic ray at ∼3.2 × 10<sup>11</sup> GeV [12,13], a neutrino at ∼6.3 × 10<sup>6</sup> GeV [14], and a gamma ray at ∼1.4 × 10<sup>6</sup> GeV [15], while reaching energies higher than those achievable in Earth-based accelerators, are still more than a little shy of *E*Pl. So what allows us to hope that we will measure an effect of QG?

### *1.1. A Proposal to Probe Quantum Gravity*

In 1997, the distance of a Gamma-ray burst (GRB) <sup>2</sup> was measured for the first time. Indeed, following the detection of GRB 970508 [16], an optical counterpart was observed [17], which allowed its redshift to be constrained to 0.835 ≤ *z* ≤ 2.3 [18]. A strong flux of gamma rays from a quickly varying source detected at a cosmological distance incited Amelino-Camelia et al. [19] to suggest that the signal from GRBs could be used to probe the structure of spacetime. The proposal was based on the idea of spacetime as a dynamical medium, which experiences quantum fluctuations due to QG effects. While the scale of the fluctuations was expected to be comparable to the Planck units, the propagation of photons of substantially smaller energies could still be affected by them. Probing the fluctuations would result in an energy-dependent propagation speed, similar to what visible light experiences when propagating through a medium such as water or air.

### *1.2. Modified Photon Dispersion Relation*

This behaviour can be modelled by modifying the standard photon dispersion relation in the following way:

$$E^2 = p^2 c^2 \left[ 1 + \sum_{n=1}^{\infty} S_n \left( \frac{E}{E_{\text{QG},n}} \right)^n \right],\tag{1}$$

where *E* and *p* are respectively the energy and momentum of the photon, *c* is the Lorentz invariant speed of light, and the *E*QG,*n* are the energy scales at which effects of QG become significant. We will start discussing the values of *E*QG,*n* shortly; for now let us just acknowledge that *E*/*E*QG,*n* ≪ 1 even for the most energetic gamma rays. The modifying terms in the dispersion relation contribute less and less with increasing *n*. Therefore, usually, only the first two leading terms (*n* = 1 or *n* = 2) of the series are considered and independently tested for<sup>3</sup>. They are often referred to as linear and quadratic energy-dependent contributions, respectively. Letting *E*QG,*n* → ∞ for all *n* leads to the well-known Lorentz invariant photon dispersion relation. The parameters *Sn* can take values ±1, and their role will become apparent immediately. From Equation (1) the energy-dependent photon group velocity can be easily derived as:

$$v_{\gamma} = \frac{\partial E}{\partial p} \simeq c \left[ 1 + \sum_{n=1}^{\infty} S_n \frac{n+1}{2} \left( \frac{E}{E_{\text{QG},n}} \right)^n \right]. \tag{2}$$

Considering each modifying term independently, one can see that for *Sn* = +1, the velocity becomes greater than *c*, while for *Sn* = −1 it becomes smaller than *c*. These two behaviours are known as superluminal and subluminal, respectively.

Once a modification is introduced in the dispersion relation, various effects (other than changes in the photon speed) are conceivable, such as the modification of the electromagnetic interaction. But whatever the effects of modifying the dispersion relation may be, they are minuscule because of the ratio *E*/*E*QG,*n*. The good news, however, is that the effects are cumulative. This is extremely important because gamma rays from some astrophysical sources take billions of years to reach Earth, allowing for these potential effects to accumulate, thus giving hope that we might be able to measure their consequences from Earth.
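As an order-of-magnitude illustration of this accumulation, the arrival-time delay of a subluminal (*Sn* = −1) photon relative to a low-energy one follows from Equation (2) as Δ*t* ≈ (*n* + 1)/2 · (*E*/*E*QG,*n*)<sup>*n*</sup> · *D*/*c*. The sketch below uses the simplest flat-space estimate, ignoring cosmological expansion (a proper treatment integrates over redshift, which changes the result by an O(1) factor); the function name is ours:

```python
E_PLANCK_GEV = 1.22e19   # Planck energy in GeV
GPC_IN_M = 3.086e25      # 1 gigaparsec in metres
C = 2.998e8              # speed of light, m/s

def liv_delay_s(e_gev, d_gpc, e_qg_gev=E_PLANCK_GEV, n=1):
    """Flat-space LIV time delay Delta t ~ (n+1)/2 * (E/E_QG)^n * D/c
    for a subluminal photon of energy e_gev travelling d_gpc gigaparsecs."""
    return (n + 1) / 2 * (e_gev / e_qg_gev) ** n * d_gpc * GPC_IN_M / C

# A 1 TeV photon from ~1 Gpc, with E_QG at the Planck scale (n = 1),
# lags a low-energy photon by only a handful of seconds.
delay = liv_delay_s(1e3, 1.0)
```

A delay of seconds is tiny after a billion-year journey, but it is comparable to the variability timescale of GRB light curves, which is precisely what makes the time-of-flight test feasible.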

Additionally, the effects of modifying the dispersion relation are more pronounced for higher energy photons. Thus, searching for them with IACTs, which are instruments optimised for astronomical observations in the very high energy (VHE) gamma-ray band (100 GeV < *E* < 100 TeV), is a sensible thing to do. Given their large collection area and good sensitivity, IACTs are excellent instruments for testing effects of QG on gamma rays, and will be the main focus of this review. At lower energies, satellite-borne detectors such as the *Fermi*-Large Area Telescope (LAT) benefit from observations of more distant sources, but suffer from their smaller effective area (see Sections 2.13 and 4 for a brief comparison with the IACT results). On the other hand, at higher energies, water Cherenkov detectors such as the High Altitude Water Cherenkov (HAWC) observatory or the Large High Altitude Air Shower Observatory (LHAASO) have the advantage of observing in a higher energy range than IACTs. However, due to the rapid decrease of the flux of gamma rays at these energies

they are handicapped by smaller statistics, which makes them less sensitive to fast flux variations (see Sections 3.3 and 4 for a brief comparison with the IACT results).

Before taking a dive into the methods and results of probing QG with IACTs, let us acknowledge that the modified photon dispersion relation is the usual starting point of experimental tests of QG on gamma rays. While some QG models indeed do not preserve Lorentz symmetry, it is important to note that Equation (1) is not a direct consequence of any particular QG model. Given that there is no fully formulated theory of QG, it would be overambitious to expect exact predictions. Rather, the modified dispersion relation can be regarded as a simple way of parameterizing and modelling phenomena not predicted by the current physical theories and laws. It, therefore, enables us to experimentally search for effects of those phenomena.

That being said, there are two main ways of modifying the dispersion relation that are usually considered. LIV, the main focus of this review, implies the existence of a preferred inertial frame of reference, which breaks the Lorentz symmetry [21]. However, there are also ways of modifying the photon dispersion relation while preserving the Lorentz symmetry. One example is the so-called Doubly Special Relativity (DSR) [24,25]. In this model, the symmetry is deformed, rather than broken, and there is no preferred inertial frame of reference. Moreover, in order to keep the conservation laws covariant with respect to the deformed symmetries of DSR, the conservation laws themselves need to be modified. This fundamental difference between LIV and DSR becomes important when different possible effects of QG are discussed. In particular, the kinematics, and possibly the dynamics, of electromagnetic interactions in the LIV framework will differ from the Lorentz invariant ones. In the DSR framework, on the other hand, the descriptions of interactions will be the same as (or only slightly different from) the Lorentz invariant descriptions. DSR is a promising avenue of research that has recently been gaining attention and traction. However, no results explicitly addressing DSR effects have been published by IACTs thus far. Therefore, in the rest of the text we will refer to all effects as LIV, regardless of their true origin. We will, however, keep using *E*QG,*<sup>n</sup>* to denote the energy scales at which the effects become relevant. The details of these models and their differences are beyond the scope of this work. The interested reader is referred to a review paper by the COST Action 18108<sup>4</sup> (in preparation) and references therein.

In this paper, we will focus on searches for signatures of LIV in measurements with IACTs. We will discuss various effects of modifying the photon dispersion relation and their respective probes, adopting a chronological course. However, it is our intention (instead of simply recalling the most important studies performed) to analyse the evolution of the field, with a particular focus on the development of the analysis methods. Hopefully, this approach will inspire the authors and the readers alike to formulate new ideas on how to search for the effects of QG, and pave the path for future research. Historically, the first effect to be tested was the energy dependence of the photon group velocity, so results of different measurements of the photon time of flight will be covered first, in Section 2. As stated above, LIV can affect the kinematics and dynamics of electromagnetic interactions. This other important class of effects will be discussed in Section 3. We might as well break the suspense and state right away that no effects of QG have been detected so far. Nevertheless, strong constraints have been set on the minimum value of the LIV energy scale. These are usually expressed as lower limits at the 95% confidence level. The results of different effects, obtained from various experiments and analysis methods will be mutually compared and their differences discussed in Section 4. Finally, we turn towards the future in Section 5, to discuss opportunities for development and progress of this field of research.

### **2. Testing Energy-Dependent Photon Group Velocity**

Assuming energy-dependent propagation speeds, two photons of energies *E*<sub>2</sub> > *E*<sub>1</sub> emitted from a source at the same time will have different, energy-dependent times of flight *t*′<sub>2</sub> and *t*′<sub>1</sub>, respectively, finally reaching Earth with an energy-dependent time delay [26]:

$$
\Delta t' = t\_2' - t\_1' = t \frac{\Delta v\_\gamma}{c} \simeq -S\_n \frac{n+1}{2} \frac{E\_2^n - E\_1^n}{E\_{\rm QG,n}^n} D\_n(z\_{\rm s}), \tag{3}
$$

where *t* is the time needed for a photon travelling with speed *c* to reach the Earth5. The time delay is proportional to a source distance parameter:

$$D\_n(z\_{\rm s}) = \frac{1}{H\_0} \int\_0^{z\_{\rm s}} \frac{(1+z)^n}{\sqrt{\Omega\_{\rm m}(1+z)^3 + \Omega\_{\Lambda}}}\, dz, \tag{4}$$

where *z*<sub>s</sub> is the source redshift, and *H*<sub>0</sub>, Ω<sub>m</sub>, and Ω<sub>Λ</sub> are cosmological parameters: the Hubble constant, the matter density parameter, and the dark-energy density parameter, respectively<sup>6</sup>. The time delay expression was derived from comoving trajectories of particles, starting from their modified dispersion relations. More general and alternative expressions can be obtained by modifying the general relativistic dispersion relation, as was done in [27], or by assuming that spacetime translations are modified alongside the dispersion relation [28].
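
As a rough numerical illustration of Equations (3) and (4), the sketch below evaluates the expected delay for a nearby blazar; the cosmological parameter values and the helper names `distance_parameter` and `time_delay` are illustrative assumptions, not part of any published analysis.

```python
import math

# Assumed cosmological parameters (illustrative values)
H0 = 70.0 * 1.0e3 / 3.086e22   # Hubble constant in s^-1 (70 km/s/Mpc)
OMEGA_M, OMEGA_L = 0.3, 0.7

def distance_parameter(z_s, n, steps=10_000):
    """Numerically integrate Eq. (4): D_n(z_s), in seconds (midpoint rule)."""
    dz = z_s / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * dz
        total += (1.0 + z) ** n / math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)
    return total * dz / H0

def time_delay(e1_gev, e2_gev, z_s, e_qg_gev, n=1, s_n=-1):
    """Eq. (3): arrival-time delay in seconds between photons with E2 > E1."""
    return (-s_n * (n + 1) / 2.0
            * (e2_gev ** n - e1_gev ** n) / e_qg_gev ** n
            * distance_parameter(z_s, n))

# Delay between a 0.25 TeV and a 10 TeV photon from z = 0.034 (Mrk 501-like)
# for a linear, subluminal (S_1 = -1) term at roughly the Planck energy
dt = time_delay(250.0, 10_000.0, 0.034, 1.22e19, n=1, s_n=-1)
print(f"Delta t ~ {dt:.1f} s")   # of order ten seconds
```

Even at the Planck scale the accumulated delay over such a distance is of order seconds, which is why minute-scale flux variability makes these measurements feasible at all.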

### *2.1. The First Test with an Imaging Atmospheric Cherenkov Telescope*

Soon after it was proposed that GRBs could be used to search for effects of LIV, the first test using data from IACTs was performed. In fact, researchers observing with the Whipple telescope<sup>7</sup> already had a suitable data set available, although the source was not a GRB, but the very first active galactic nucleus (AGN) ever detected in the VHE gamma-ray band, Markarian 421 (Mrk 421, redshift *z* = 0.031) [30]. On 15 May 1996, Whipple observed the most rapid flare from Mrk 421 up to that time, with a flux doubling time of less than 15 minutes and including photons of energies up to several TeV [31]. This groundbreaking study used a rather rudimentary analysis: the data set was split into two energy bands (*E* < 1 TeV and *E* > 2 TeV) [32]. In each energy band, the events were further subdivided into time bins of 280 s. The distribution of arrival times of photons with energies *E* > 2 TeV was compared to the distribution of arrival times of events below 1 TeV. The authors used the likelihood-ratio test<sup>8</sup> to compare the contents of time bins in the two energy ranges. In this study, no distinction was made between the subluminal and the superluminal behaviour. No delay in either direction was detected at the 95% confidence level. Combined with the distance to Mrk 421, this result was translated into a lower limit on the LIV energy scale *E*QG,1 > 4 × 10<sup>16</sup> GeV. Only the linear contribution was considered in this first study.

### *2.2. Fastest Variability in Blazars*

The observation of Mrk 421 with the Whipple telescope drew attention to flaring blazars as possible probes of LIV. In the summer of 2005, the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes<sup>9</sup> observed two flares from the AGN Markarian 501 (Mrk 501, redshift *z* = 0.034) [35]. The data analysis revealed that the flux doubled in only 2 min, which remains to this day the fastest flux variability ever observed from a blazar in the VHE gamma-ray band. With the highest energies reaching ∼10 TeV and the flux varying by an order of magnitude, it was a chance not to be missed. Moreover, there was an indication of a 4 ± 1 min delay between the peaks in the light curves in the lowest (0.15–0.25 TeV) and the highest (1.2–10 TeV) energy bins on the 9th of July. A search for an energy-dependent photon time of flight in the Mrk 501 flare of that night was performed employing two distinct statistical analysis methods, which we will now describe.

**Energy cost function (ECF)** method utilises the fact that a signal pulse propagating through a dispersive medium will be diluted, and its power (total energy per unit time), consequently, decreased (see, e.g., Section 7.9 in [36]). In the case of the Mrk 501 flare [37], the data sample was chosen by selecting the most active part of the flare, i.e., the time interval in which the temporal distribution of events differs the most from a uniform distribution. We will mark the beginning and the end of this time interval as *t*<sub>1</sub> and *t*<sub>2</sub>, respectively. The power of the signal was calculated as the sum of the energies of all the photons within the interval divided by the duration of the interval. Had the photons experienced any energy-dependent time delay, the power would have been smaller than without dispersion. One can then search for the maximal possible power by applying dispersion in the opposite direction, assuming different values of the LIV energy scale. Specifically, in order to compute a new signal power, a new arrival time *t*′<sub>i</sub> was calculated for each photon in the sample for a particular value of *E*QG,*n*:

$$t\_i' = t\_i + \eta\_n E\_i^n \tag{5}$$

where *t*<sub>i</sub> and *E*<sub>i</sub> are, respectively, the measured arrival time and the measured energy of the *i*-th photon. Given the large values of *E*QG,*n*, various parameters are often introduced to facilitate numerical computations. Here the parameter *η<sub>n</sub>* is defined from Equation (3) as:

$$\eta\_n = -S\_n \frac{n+1}{2} \frac{1}{E\_{\text{QG},n}^n} D\_n(z\_{\text{s}}). \tag{6}$$

This parameter is introduced for computational reasons because *η<sub>n</sub>* is O(1). Usually expressed in units of s/GeV (for *n* = 1) or s/GeV<sup>2</sup> (for *n* = 2), *η<sub>n</sub>* indicates by how much a photon is delayed, compared to a photon propagating at *c*, for every GeV<sup>*n*</sup> of its energy. Limits on *E*QG,*<sup>n</sup>* are then derived by inverting Equation (6).
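
The inversion of Equation (6) is a one-line computation; the sketch below shows it for the linear and quadratic cases, with an assumed distance parameter and an assumed upper limit on |*η<sub>n</sub>*| (the numbers are illustrative, not published values).

```python
# Converting an upper limit on |eta_n| (in s/GeV^n) into a lower limit on
# E_QG,n by inverting Eq. (6): |eta_n| = (n+1)/2 * D_n / E_QG,n^n.
# D1 and the eta values below are assumptions chosen for illustration.

D1 = 1.5e16   # assumed D_1(z_s) in seconds for a nearby blazar
D2 = 1.5e16   # assumed D_2(z_s), of the same order

def e_qg_limit(eta_abs, d_n, n):
    """Lower limit on E_QG,n in GeV from an upper limit |eta_n|."""
    return ((n + 1) / 2.0 * d_n / eta_abs) ** (1.0 / n)

print(e_qg_limit(1.0e-3, D1, 1))  # linear term: 1.5e19 GeV
print(e_qg_limit(1.0e-3, D2, 2))  # quadratic term: a few times 1e9 GeV
```

Note the strong asymmetry between the orders: because of the *n*-th root, the same dispersion sensitivity translates into a far smaller quadratic energy scale.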

The arrival time recalculation described in Equation (5) was performed for each individual gamma ray, and only photons whose recalculated arrival times fell in the [*t*<sub>1</sub>, *t*<sub>2</sub>] time interval were retained. In this way, an alternative sample of photons was constituted, and its total energy calculated. The procedure was repeated for different values of *η<sub>n</sub>* (i.e., different values of *E*QG,*n*). The ECF was defined as the total energy as a function of *η<sub>n</sub>*. The value of *η<sub>n</sub>* which maximises the ECF would recover the maximal signal power. In other words, it would correspond to a measurement of the dispersion which the gamma rays experienced because of the LIV effects, assuming no other effects play a significant role. The sensitivity of the method and the confidence interval for the parameter *η<sub>n</sub>* were estimated using Monte Carlo simulations of the observed signal. One thousand simulated data sets were generated, and the ECF method was applied to each of them. The most probable value of *η<sub>n</sub>* and its confidence interval were estimated from the distribution of *η<sub>n</sub>*, which were then translated into a lower limit on the LIV energy scale at the 95% confidence level. This particular analysis yielded *E*QG,1 > 2.1 × 10<sup>17</sup> GeV, which was more constraining than the Whipple result on the Mrk 421 flare from 1996 [32] by an order of magnitude. In addition, for the first time the quadratic contribution was constrained, setting *E*QG,2 > 2.6 × 10<sup>10</sup> GeV. Unlike the approach used for the Mrk 421 data analysis, the ECF allowed for testing of superluminal, as well as subluminal, behaviours. Nevertheless, only the subluminal behaviour was investigated.
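
The logic of the ECF scan can be sketched on a toy data set: photons are emitted in a Gaussian pulse, a known linear dispersion is injected, and the scan recovers it as the trial value that maximises the energy falling back inside the most active interval. Everything below (pulse shape, energy distribution, sign convention of the de-dispersion shift) is an assumption made for illustration, not the published MAGIC analysis.

```python
import random

random.seed(42)

# Toy ECF demonstration: emit photons in a Gaussian pulse (sigma = 1 s),
# inject a linear (n = 1) dispersion of ETA_TRUE seconds per TeV, then scan
# trial values that undo the delay and look for the maximum recovered energy.
ETA_TRUE = 0.5                                   # injected dispersion, s/TeV
photons = []
for _ in range(2000):
    e = random.uniform(0.15, 10.0)               # energy in TeV (flat, for simplicity)
    t_emit = random.gauss(0.0, 1.0)              # emission time within the pulse
    photons.append((t_emit + ETA_TRUE * e, e))   # (arrival time, energy)

T1, T2 = -1.0, 1.0      # the "most active" interval [t1, t2] of the pulse

def ecf(eta_trial):
    """Total energy of photons whose de-dispersed times fall in [T1, T2]."""
    return sum(e for t, e in photons if T1 <= t - eta_trial * e <= T2)

# Scan eta on a grid from 0 to 1 s/TeV in steps of 0.01
best_eta = max((k / 100.0 for k in range(0, 101)), key=ecf)
print(best_eta)   # close to ETA_TRUE = 0.5
```

With a few thousand events the maximum of this curve sits close to the injected value; in the real analysis the confidence interval around it is then calibrated with Monte Carlo simulations, as described above.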

There are several methods based on the idea of removing the dispersion from the data: the Sharpness maximisation method (SMM) [38,39], the dispersion cancellation (DisCan) [40], and the minimal dispersion (MD) [41]; the main difference between these approaches is the way the sharpness of the light curve is quantified. We will investigate in more detail a variation of the DisCan method in Section 2.8 and the SMM in Section 2.13.

### *2.3. Introducing the Maximum Likelihood Method*

Originally, the **maximum likelihood (ML)** method was proposed for the analysis of the Mrk 501 data set described in the previous section. However, as we shall soon see, it became the standard analysis method used in searches for an energy-dependent gamma-ray group velocity in IACT data. Therefore, we will dedicate a separate section to its description. Introducing the method in [42], Martínez & Errando argued that the analysis methods used should be unbinned (unlike the one previously employed in the case of Mrk 421 [32]), in order to fully exploit the information carried by a relatively small gamma-ray sample. The ECF method, used in [37], was indeed unbinned; however, it depended upon identifying and isolating the flares from the rest of the light curve. While this particular Mrk 501 light curve from 2005 had a relatively simple structure [35], it was already recognised that the ECF method would not be suited for the analysis of complex light curves, or segments of flares. Therefore, the unbinned ML method soon became a standard approach in searches for energy-dependent time delays, with every new study incorporating additional features and improvements. Here we will depart from the historical course, and describe the ML method in its present form.

In order to search for LIV, the ML method makes use of a profile likelihood ratio test:

$$\lambda\_p(\eta\_n \mid \mathcal{D}) = \frac{\mathcal{L}(\eta\_n; \hat{\hat{\nu}} \mid \mathcal{D})}{\mathcal{L}(\hat{\eta}\_n; \hat{\nu} \mid \mathcal{D})},\tag{7}$$

where $\eta_n$ is the LIV parameter of order *n* of interest, $\nu$ represents the nuisance parameters, $\hat{\eta}_n$ and $\hat{\nu}$ are the values that jointly maximise the likelihood $\mathcal{L}$, $\hat{\hat{\nu}}$ maximises the likelihood $\mathcal{L}$ for a given $\eta_n$, and $\mathcal{D}$ represents the observed data on which the analysis is performed. According to Wilks' theorem [43], the distribution of $-2 \ln \lambda_p(\eta_n|\mathcal{D})$ follows a $\chi^2$ distribution with 1 degree of freedom for the true value of $\eta_n$, i.e., the one we are looking for. The 95% confidence level one-sided upper limits are therefore derived by solving the following equation:

$$-2\ln \lambda\_p(\eta^{\text{UL}}|\mathcal{D}) = 2.71,\tag{8}$$

while the 95% confidence level two-sided limits are obtained using:

$$-2\ln \lambda\_p(\eta^{\text{UL}}|\mathcal{D}) = 3.84.\tag{9}$$

In cases where the conditions for Wilks' theorem are not fulfilled, one can calibrate the intervals using Monte Carlo simulated samples of the null hypothesis. The critical value for any particular case can then be derived from the quantiles of the distribution obtained from these simulations. For instance, the 95% two-sided confidence interval is delimited by the lower and upper 2.5% quantiles.
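
Deriving a limit from Equation (8) amounts to solving $-2\ln\lambda_p(\eta) = 2.71$ numerically on the rising branch of the likelihood scan. The sketch below does this by bisection on a toy parabolic profile curve; the shape of `minus_2_ln_lambda` and the values of `ETA_HAT` and `SIGMA` are stand-in assumptions for a real profile-likelihood scan.

```python
# Toy profile-likelihood curve: -2 ln lambda_p modelled as a parabola in eta,
# with an assumed best-fit value ETA_HAT and curvature scale SIGMA.
ETA_HAT, SIGMA = 0.2, 1.0

def minus_2_ln_lambda(eta):
    return ((eta - ETA_HAT) / SIGMA) ** 2

def upper_limit(target=2.71, tol=1e-9):
    """Bisect -2 ln lambda(eta) = target on the branch above ETA_HAT (Eq. 8)."""
    lo, hi = ETA_HAT, ETA_HAT + 100.0 * SIGMA
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if minus_2_ln_lambda(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(upper_limit())   # ETA_HAT + sqrt(2.71) * SIGMA, about 1.846
```

For this Gaussian-like toy the answer is of course available in closed form; bisection is shown because a real scan is evaluated numerically and has no closed form.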

The likelihood function L, for an observed number of events *N*ON, can be written as:

$$\mathcal{L}(\eta\_n) = \prod\_{i=1}^{N\_{\rm ON}} \left( p\_i^{(\mathbf{s})} \frac{f^{(\mathbf{s})}(E\_i, t\_i)}{\int\_{E\_{\text{min}}}^{E\_{\text{max}}} dE \int\_{t\_{\text{min}}}^{t\_{\text{max}}} f^{(\mathbf{s})}(E, t) dt} + p\_i^{(\mathbf{b})} \frac{f^{(\mathbf{b})}(E\_i, t\_i)}{\int\_{E\_{\text{min}}}^{E\_{\text{max}}} dE \int\_{t\_{\text{min}}}^{t\_{\text{max}}} f^{(\mathbf{b})}(E, t) dt} \right), \tag{10}$$

where *f* <sup>(s)</sup>(*E*, *t*) represents the probability distribution function (PDF) for observing a gamma ray of reconstructed energy *E* at the moment *t*, while *f* <sup>(b)</sup>(*E*, *t*) is the PDF for observing a background event of reconstructed energy *E* at the moment *t*. The energies *E*<sub>i</sub> are bounded by *E*<sub>min</sub> and *E*<sub>max</sub>, respectively the minimum and maximum energy considered in the analysis, expressed in reconstructed (i.e., measured) energy, which in turn usually depend on the instrument and observation conditions. Similarly, the times *t*<sub>i</sub> are bounded by *t*<sub>min</sub> and *t*<sub>max</sub>. These four quantities are used to compute the normalisation factors of both the signal and the background part of the likelihood function. Additionally, in standard IACT analyses, the so-called ON region in the field of view contains both signal and background events. Therefore, $p_i^{(\rm s)}$ and $p_i^{(\rm b)}$ are the probabilities for the event *i* to belong to the signal or the background, respectively. The PDF for observing a gamma ray of reconstructed energy *E* at the moment *t*

$$f^{(\mathbf{s})}(E,t) = \int\_0^\infty F(t + \eta\_n E^n) \, \Phi\_{\rm obs}(E) \, G(E, E\_{\text{true}}) \, A\_{\text{eff}}(E\_{\text{true}}, t) \, dE\_{\text{true}}, \tag{11}$$

contains all available information about the emitted signal at the source, the gamma-ray propagation effects, and the detection process. Namely,

• function *F*(*t*) is the observed light curve. Here, by taking *F*(*t* + *ηnEn*), it is "corrected" for the potential time delay induced by the LIV effects. In this way, assuming that individual events suffered an energy-dependent time delay, and that no other dispersion effects were present, one obtains a source-intrinsic light curve, often referred to as a light curve template. In practice, there are different ways of obtaining *F*(*t*).


The PDF for observing a background event of reconstructed energy *E*<sub>i</sub> at the moment *t*<sub>i</sub> has fundamentally the same form as the PDF for signal events, see Equation (11). However, the origin of background events is generally not known, so the times of flight of individual events (whether affected by LIV or not) cannot be determined. Therefore, both the temporal and energy distributions of events are taken as observed on Earth. Concretely, when Equation (11) is used for background events, *η<sub>n</sub>* = 0, and *F*(*t*) and Φ<sub>obs</sub>(*E*) are the background light curve and spectrum as measured on Earth. The final pieces of the puzzle are the probabilities for each event to be part of the signal or background, $p_i^{(\rm s)}$ and $p_i^{(\rm b)}$. In IACTs, the signal is estimated from a region around the source position in the field of view, usually referred to as the ON region. However, besides the signal, the ON region contains an irreducible contribution from the background. The background is estimated from the so-called OFF region, a region in the field of view which contains no sources of gamma rays, and is observed under the same conditions as the ON region. Usually, the probabilities for each event to be a part of the signal or background are calculated as follows:

$$p\_i^{(\mathbf{s})} = \frac{N\_{\rm ON} - \alpha N\_{\rm OFF}}{N\_{\rm ON}}, \qquad p\_i^{(\mathbf{b})} = \frac{\alpha N\_{\rm OFF}}{N\_{\rm ON}} \tag{12}$$

where *N*ON is the total number of events in the ON region, *N*OFF the total number of events in the OFF region, and *α* is the ratio of effective exposure times in the two: *α* = *t*ON/*t*OFF<sup>10</sup>. A legitimate objection to the ML method is that it relies on our knowledge of source-intrinsic processes, which is limited at best. In that sense, the ECF and similar methods such as the SMM (see Section 2.13) have the advantage of not depending on our knowledge of source-intrinsic effects. However, it is quite imaginable that there are source-intrinsic dispersive processes which could mimic effects of LIV<sup>11</sup>. These would not depend on the source redshift, and could be "filtered out" by considering sources at different redshifts. Nevertheless, combining the results of different analyses might prove tricky for the ECF and related methods. The likelihood function, on the other hand, should tackle that task with relative ease, as we will discuss in Section 5.2.
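
Equation (12) is straightforward to evaluate; the short sketch below does so for an illustrative set of ON/OFF counts (the numbers are assumptions, not taken from any observation).

```python
# Evaluating Eq. (12): the probability for an ON-region event to belong to
# the signal or the background, given N_ON, N_OFF, and alpha = t_ON / t_OFF.

def event_probabilities(n_on, n_off, alpha):
    """Return (p_signal, p_background) shared by every ON-region event."""
    p_b = alpha * n_off / n_on          # expected background fraction
    return 1.0 - p_b, p_b               # p_s + p_b = 1 by construction

# Illustrative observation: 1000 ON events, 600 OFF events, alpha = 0.5
p_s, p_b = event_probabilities(n_on=1000, n_off=600, alpha=0.5)
print(p_s, p_b)   # 0.7 0.3
```

Note that the same pair (*p*<sup>(s)</sup>, *p*<sup>(b)</sup>) is assigned to every event in the ON region; the per-event discrimination then comes entirely from the energy and time PDFs in Equation (10).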

Another possible source of systematic effects is secondary gamma rays, which can be produced through one of the following processes: (i) hadrons accelerated within a source interact with the surrounding electromagnetic fields to produce neutral pions, which decay into gamma rays; (ii) gamma rays emitted from the source interact with background photon fields to produce electron-positron pairs, which can create secondary gamma rays either through annihilation, or by inverse-Compton scattering of lower-energy photons. Secondary gamma rays could create a false signal, especially in analysis methods based on individual events, such as time of flight studies. However, since secondary gamma rays are not produced within the observed source, their origin is not necessarily on the line of sight. A significant rate of secondary gamma rays would manifest as an extended emission, a so-called halo, around an otherwise point-like source. Indeed, several studies searching for gamma-ray halos have been performed, but have shown no evidence thereof (see, e.g., [46–48], see also [49] and references therein). Though an occasional secondary gamma ray might be mistakenly treated as signal, the effect should be minor, and it would diminish with an increasing size of a data sample.

### *2.4. Results from the Maximum Likelihood Method on the Mrk 501 Flare from 2005*

In this first application of the ML method, the light curve of the Mrk 501 flare from 2005 was modelled with a Gaussian superimposed on top of a constant baseline emission from the source. A background contribution was not considered because of its negligible contribution in such a strong flare. The results showed *η*<sub>1</sub> (see Equation (6)) departing from zero by slightly more than 2*σ*, implying an energy-dependent time delay [37]. *η*<sub>2</sub>, on the other hand, was consistent with zero. Again, only the subluminal behaviour was tested for. The corresponding LIV energy scales were $E_{\rm QG,1} = 0.30^{+0.24}_{-0.10} \times 10^{18}$ GeV and $E_{\rm QG,2} = 0.57^{+0.75}_{-0.19} \times 10^{11}$ GeV for the linear and quadratic contributions, respectively [42]. Martínez & Errando refrained from interpreting the results and focused on the description of the method. The nonzero time delay was instead discussed in [37]. The possibility of a bias in the ML analysis was investigated on a set of simulated Monte Carlo samples. An independent researcher simulated data sets with injected energy-dependent time delays. These data sets were blindly analysed using the ML method, which correctly reconstructed the injected delay values. It was concluded that the effect was real, although the statistical significance was too low to claim a discovery. Finally, it was concluded that the results obtained with the ECF and the ML methods were mutually consistent. Furthermore, some investigations of emission models suggest that the energy-dependent time delay could be a consequence of source-intrinsic spectral variability in time, occurring either because of the acceleration of particles or the absorption of gamma rays [50,51].
In summary, the study on the Mrk 501 flare data from 2005 not only significantly tightened the constraints on the LIV energy scale compared to the pioneering study of Whipple [32], but also motivated the introduction of a novel analysis method. Moreover, this data set was the first one to be studied with two fundamentally different analysis approaches, allowing their results to be compared and cross-checked.

### *2.5. Sensitivity to the Lorentz Invariance Violation Effects*

After taking a look at the first searches for possible signatures of LIV effects in IACT data, this is a good place to analyse what properties a signal should have in order to be considered a good probe of such effects. As we have already discussed at the beginning of Section 2, more energetic photons will be more strongly affected by LIV. Therefore, sources with spectra extending to higher energies, and with a comparatively larger population of higher energy photons (colloquially called "harder spectra", as opposed to "softer spectra"), are more favourable. Furthermore, the farther the source, the more the effect will accumulate. However, a large distance carries the caveat that VHE gamma rays are partially absorbed on the EBL (see the description of the ML method on page 7 et seq., and Section 3.1), which softens the spectra and depletes data samples of the most energetic photons. The time delay between two photons of different energies will also be more pronounced for smaller values of *E*QG,*n*; consequently, the smaller the time delay that can be resolved, the stronger the constraint on the LIV energy scale. However, it is entirely possible that there are emission time delays present within sources, which could mimic or conceal LIV-induced arrival time delays. Considering our limited knowledge of emission mechanisms, the emission times cannot be precisely modelled. Instead, the emission time has to be constrained based on the flux variability timescale. Emission is more probable during periods of higher flux. However, high flux on its own is not enough. If the flux is constant, or changing monotonically, an application of a spectral dispersion will not change the shape of the light curve. A variable light curve, on the other hand, will be smeared due to spectral dispersion. The effect will be more pronounced for stronger dispersion, and more detectable for faster changing flux. By inverting Equation (3), one can make a crude estimate of how the LIV energy scale depends on the highest energies of detected photons (*E*max), the light curve variability timescale (*t*var), and the redshift of the source (*z*s). These dependencies are summarized in Table 1. Note that the power of the dependence on the redshift was numerically computed for *z*<sub>s</sub> up to ∼10, which is much further than what current IACTs can probe.

**Table 1.** Dependence of *E*QG,*<sup>n</sup>* on the characteristics of the source and the sample. *E*max is the highest photon energy in the sample, *t*var is the shortest variability timescale in the light curve, and *z*s is the redshift of the source.


There is another parameter, not present in Equation (3), whose importance becomes apparent through the data analysis: the size of the sample. Its influence on *E*QG,*<sup>n</sup>* depends on the analysis method, and is difficult to estimate in the way it was done for the other parameters in Table 1. The general rule, though, is simple: the more the better. More specific estimates will be discussed for particular cases.

Based on this simple analysis, three types of sources are considered to be suitable for testing of LIV on gamma rays:

**Pulsars** can have rotation periods as short as a few milliseconds, although the ones detected with IACTs so far have periods of at least a few tens of milliseconds. The only four pulsars that have been detected with IACTs to date are the Crab pulsar [52], the Vela pulsar [53], the Geminga pulsar [54], and PSR B1706-44 [55]. Their pulsation is highly regular, which makes it predictable and allows stacking of the signal from different periods, thus increasing the detected statistics. On the other hand, all four of these pulsars are located in the Milky Way, and this relative proximity significantly impairs the sensitivity of LIV tests performed on pulsar data.

**Gamma-ray bursts** are powerful transient cosmic explosions, usually associated with collapses of massive stars into black holes (long GRBs), or mergers of neutron stars (short GRBs). Their light curves are variable on timescales of a second. Unlike pulsars, GRBs are completely unpredictable. Satellite-borne detectors with a large field of view, such as the Gamma-ray Burst Monitor (GBM) [56] and the LAT [57] onboard the *Fermi* satellite<sup>12</sup>, on average detect one GRB almost every day [58]. However, IACTs, with their rather small fields of view (of the order of a few degrees), rely on alerts from satellite-borne detectors to trigger observations. Furthermore, because of their large distances, VHE gamma rays are strongly absorbed on the EBL. For these reasons, GRBs are elusive and notoriously difficult to detect with IACTs, with only four detected to date [59–62]. However, due to their short variability timescales, combined with large distances, once a GRB is detected, the signal becomes a valuable asset for probing QG (see Section 2.12).

**Active galactic nuclei** are persistent sources at distances comparable to GRBs. During their flaring states, they emit signals abundant in VHE gamma rays, with flux variability timescales on the order of minutes. Although unpredictable, flares usually last longer than GRBs. In addition, they emit stronger fluxes, with the most energetic photons reaching higher energies. All of this makes flares from AGN easier to detect with IACTs compared to GRBs.

### *2.6. Lorentz Invariance Violation Study on the Most Variable Blazar Flare*

While the Mrk 501 flare observed by MAGIC showed the fastest changing gamma-ray flux in blazars, it had a rather simple structure. Almost exactly one year later, on 28 July 2006, while the LIV data analysis on the Mrk 501 sample was still ongoing, another promising flare occurred. This time it was the High Energy Stereoscopic System (H.E.S.S.)<sup>13</sup> that observed a flare from the blazar PKS 2155-304 [64]. During an ∼85 min observation, a flare with a quite complex structure was detected, variable on a timescale of ∼200 s, with several local minima and maxima, and with a signal to background ratio above 300. At the same time, no significant changes of the spectrum were found. The highest flux reached more than 15 Crab units (C.U.)<sup>14</sup> above 200 GeV, and a total of more than eleven thousand gamma rays were detected, reaching the highest energies of ∼4 TeV. Moreover, PKS 2155-304 is located at a redshift of *z* = 0.116, more than three times larger than that of Mrk 421 and Mrk 501.

Several studies of energy-dependent time delay were performed using this signal. The first one, published soon after the flare was observed, used two different statistical methods, both estimating the time lag between light curves in different energy ranges [65].

**Modified cross correlation function (MCCF)** was originally developed for timescale analysis of spectral lags, and it enables searches for time lags shorter than the temporal resolution of the light curves [66]. In this case, the data were split into two energy bins: 200–800 GeV and >800 GeV, and the MCCF was used to estimate the time lag between the light curves in these two energy ranges. The analysis resulted in the most stringent constraint on the linear contribution up to that time, *E*QG,1 > 7.2 × 10<sup>17</sup> GeV; a limit more than two times stronger than the one set by MAGIC on the Mrk 501 data. The lower limit on the quadratic contribution, on the other hand, was set at *E*QG,2 > 1.4 × 10<sup>9</sup> GeV; more than 40 times lower than the one from Mrk 501 by MAGIC using the ML method, and almost 20 times lower than the one set with the ECF.

**Continuous wavelet transform (CWT)** method relies on identifying extrema in two energy bands and measuring their relative time delay. In this case, the chosen energy ranges were 210–250 GeV and >600 GeV, and two pairs of extrema were identified. Only the constraint on the linear term was set, at *E*QG,1 > 5.2 × 10<sup>17</sup> GeV, thus confirming the constraint obtained using the MCCF method.

Relying on the rule of thumb laid out in Table 1, it was expected that the larger distance and faster flux variability of PKS 2155-304, compared to Mrk 501, would make this study more sensitive to the linear modifying term of the dispersion relation. The influence of these two variables on the quadratic contribution is somewhat smaller, because of the exponents, allowing a stronger influence of the highest gamma-ray energies. Nevertheless, it seems unlikely that a factor of 2.5 difference in the highest energies alone would result in a factor of forty difference between the limits on the quadratic contribution. It is more likely that the MCCF and CWT methods do not fully exploit the potential of the PKS 2155-304 data sample.

When proposing the ML method in [42], the authors were already aware of the PKS 2155-304 flare, and decided to test their method on that signal as well. Since they did not have access to the actual data set, and the method relied on individual events, they generated Monte Carlo simulated data sets, based on the published information on the PKS 2155-304 flare. It was estimated that the application of the ML method on the PKS 2155-304 flare sample would be more than six times more sensitive to *E*<sub>QG,*n*</sub> compared to the Mrk 501 case. Moreover, the authors analysed where the sixfold improvement came from, and reached similar conclusions as we have just discussed: (i) the higher redshift contributed a factor of three, (ii) the larger sample of PKS 2155-304, albeit with the highest energies lower than in the Mrk 501 sample, added another factor of two, and (iii) the more complex light curve shape was responsible for an additional factor. However, the authors also noted that it was in fact the fastest single change of flux, i.e., the fastest rise time or fall time in the entire light curve, which dominated the sensitivity.

Following the study by Martínez & Errando [42], the H.E.S.S. Collaboration performed another search for effects of LIV in the PKS 2155-304 flare data, this time fully adopting the ML method [67]. For this occasion, a dedicated H.E.S.S. data analysis was performed, focusing on the initial 4000 s of the observation, during which both the flux and its variability were the highest. Upon applying some additional cuts on the data, only 3526 events remained (out of more than 11,000 in the original data set) in the 0.25–4.0 TeV energy range. This resulted in a strong background suppression, and a very good fit of the light curve and the spectrum. Based on optimisation using Monte Carlo simulations, the data were finally separated in two energy bins: 0.25–0.28 TeV and 0.3–4.0 TeV. The lower bin was used to create the light curve template. The data were fitted with a sum of a constant baseline emission and five consecutive asymmetric Gaussian curves. The events from the higher energy bin were used to calculate the likelihood. The results were *E*<sub>QG,1</sub> > 2.1 × 10<sup>18</sup> GeV and *E*<sub>QG,2</sub> > 6.4 × 10<sup>10</sup> GeV for the linear and quadratic terms, respectively, both significantly more constraining than the ones obtained in the previous analysis by H.E.S.S. using the MCCF, demonstrating the superiority of the ML method in a concrete case. Furthermore, both results were in line with the assessments by Martínez & Errando, and, finally, both were the most constraining lower limits on the LIV energy scale up to that time. Discussing their results, the authors reached similar conclusions as Martínez & Errando in their work. In particular, the higher sensitivity was due to the high flux variability and large data sample, while the lower maximal energies somewhat impaired the sensitivity. Furthermore, the uncertainty on the estimated parameter depended mostly on the width of the individual flux peaks, which was in agreement with the conclusion by Martínez & Errando that the sensitivity is dominated by the fastest single change of flux. A final important point was that the estimated parameter uncertainty only mildly depended on the number of events used to calculate the likelihood, meaning that robust results are obtainable even with small data sets.

### *2.7. Extending to Higher Redshifts*

On 26 and 27 April 2012, the H.E.S.S. telescopes observed a flare from the blazar PG 1553+113 [68]. The flux was three times higher than the archival measurements, with an indication of intra-night variability. Interestingly, the redshift of the source had been only loosely constrained prior to this study. In order to estimate the redshift more precisely, the authors devised a method based on Bayesian statistics, which relies on accounting for the absorption of VHE gamma rays on the EBL. This enabled them to estimate the redshift to be *z* = 0.49 ± 0.04. Though the flux showed only a hint of intra-night variability, the relatively large redshift encouraged the authors to perform a search for an energy-dependent time delay. Observations from the second day were used for that purpose. Unlike the flare from PKS 2155-304 (see Section 2.6), the signal to background ratio in this case was only 2. Due to this high background contamination, a PDF for the background had to be introduced into the likelihood function for the first time. Events from the energy range 300–789 GeV, the upper edge corresponding to the last significant bin, were used. The sample was separated into a lower energy bin used to create the light curve template, and a higher bin used for the ML calculation. The delimiter between these two bins was set at 400 GeV, approximately corresponding to the median of the sample. The results, *E*<sub>QG,1</sub> > 4.1 × 10<sup>17</sup> GeV, *E*<sub>QG,2</sub> > 2.1 × 10<sup>10</sup> GeV for the subluminal scenario, and *E*<sub>QG,1</sub> > 2.8 × 10<sup>17</sup> GeV, *E*<sub>QG,2</sub> > 1.7 × 10<sup>10</sup> GeV for the superluminal scenario, did not further constrain the LIV energy scale, but confirmed the already existing limits on the quadratic term. The bounds on the linear term were an order of magnitude below the ones set by H.E.S.S. on the PKS 2155-304 flare [67].
The authors did not discuss the reasons for the lower sensitivity; however, referring again to our rule of thumb (Table 1), it seems safe to conclude that this study benefited from the high redshift of the source, while paying the price for the lower gamma-ray energies detected, the modest sample size, and a marginal flux variability.

### *2.8. Exploring Lower Time Variability with the Crab Pulsar Observations by VERITAS*

The idea of using pulsar emission to search for LIV was first applied to Crab pulsar observations by EGRET [69]. The Crab pulsar (PSR J0534+2200) is located at the center of the Crab nebula, at 2.0 ± 0.5 kpc [70] from Earth, and has a period of rotation of ∼33 ms [71]. In 2011, the Very Energetic Radiation Imaging Telescope Array System (VERITAS) reported the observation of gamma-ray emission from the Crab pulsar above 100 GeV [72]. Its phaseogram, i.e., its emission as a function of the pulsar rotational phase *φ*, distinctly shows a main pulse (referred to as P1) and an inter-pulse (referred to as P2) at a phase *φ* ∼ 0.4 from P1. For the LIV analysis [73,74], the authors made use of the peak comparison (PC) method. This method can be used to look for an average phase delay Δ*φ* between photons from two different energy bands with mean energies *E*<sub>1</sub> and *E*<sub>2</sub> for the lower and the higher energy band, respectively:

$$
\Delta\phi\_n = -S \frac{d\_{\rm Crab}}{c P\_{\rm Crab}} \frac{n+1}{2} \frac{E\_2^n - E\_1^n}{E\_{\rm QG,n}^n} \tag{13}
$$

where *d*<sub>Crab</sub> is the distance to the Crab pulsar, *P*<sub>Crab</sub> its period, and *c* the Lorentz invariant in vacuo speed of light. Note that the phase *φ* is a practical quantity when describing pulsar behavior; nevertheless, since Δ*t* = Δ*φ* *P*<sub>Crab</sub>, one immediately recovers Equation (3) from Equation (13) under the assumption that *D<sub>n</sub>*(*z*<sub>s</sub>) ≈ *d*, which is true for sources as nearby as pulsars. The authors used this method to compare the mean fitted pulse position obtained with VERITAS above 120 GeV to the one obtained with *Fermi*-LAT above 100 MeV [75]. The peak positions agreed within statistical uncertainties, therefore a 95% confidence upper limit on their timing difference of 100 μs could be derived. This limit was then converted into limits on *E*<sub>QG,*n*</sub> by reversing Equation (13):

$$E\_{\rm QG,n} > \left( -S \frac{d\_{\rm Crab}}{cP\_{\rm Crab}} \frac{n+1}{2} \frac{E\_2^n - E\_1^n}{\Delta \phi\_n} \right)^{\frac{1}{n}},\tag{14}$$

yielding *E*<sub>QG,1</sub> > 3.0 × 10<sup>17</sup> GeV and *E*<sub>QG,2</sub> > 7.0 × 10<sup>9</sup> GeV in the subluminal scenario (*S* = −1). Note that in Equations (13) and (14), the distance parameter *D<sub>n</sub>*(*z*<sub>s</sub>) from Equation (4) was replaced by the more standard distance *d*, as pulsars are sources within the Milky Way, hence their distance is not properly described by the redshift. This means that the last column of Table 1 is different in the case of pulsars; indeed, *E*<sub>QG,*n*</sub> will be proportional to *d*<sup>1/*n*</sup>.
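
Equation (14) is straightforward to evaluate numerically. In this sketch the distance, period, and timing limit are the Crab-like numbers quoted above, while the band mean energies are placeholder values of our own, so the printed bound only illustrates the order of magnitude of the calculation:

```python
# Worked sketch of Equation (14): converting an upper limit on the pulse-phase
# delay into a lower limit on E_QG,n. The mean energies below are illustrative
# placeholders, not the exact numbers of the VERITAS/Fermi-LAT analysis.
KPC_M = 3.0857e19          # metres per kiloparsec
C = 2.9979e8               # speed of light, m/s

def e_qg_limit(n, d_kpc, period_s, e_low_gev, e_high_gev, dphi, S=-1):
    """Lower limit on E_QG,n (GeV) from a phase-delay limit dphi."""
    d = d_kpc * KPC_M
    val = -S * (d / (C * period_s)) * (n + 1) / 2.0 \
          * (e_high_gev**n - e_low_gev**n) / dphi
    return val ** (1.0 / n)

# Crab-like inputs: d = 2 kpc, P = 33 ms, a 100 us timing limit
# (Delta_phi = Delta_t / P), and placeholder band mean energies.
dphi = 100e-6 / 33e-3
print("E_QG,1 > %.2e GeV" % e_qg_limit(1, 2.0, 33e-3, 0.1, 400.0, dphi))
```

The result lands in the 10<sup>17</sup>–10<sup>18</sup> GeV range, consistent with the scaling of the published limit, though the exact value depends on the assumed mean energies.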

A variation of the DisCan method was also used in this work [74]. It was first introduced in 2008 [40] and, as its name suggests, consists in looking for the LIV parameter that best cancels out any time dispersion in the data. As such, this method is a variation of the ECF with a different cost function. The variation consisted in the use of the *Z*<sup>2</sup><sub>*m*</sub> test [76] as a test statistic (with *m* = 20 resulting from Monte Carlo optimization for this particular case) applied to the phased data to look for the potential LIV effect. This *Z*<sup>2</sup><sub>20</sub> DisCan method yields a best value of *η*<sub>1</sub> = −0.49 μs/GeV. The calibration of the method using 1000 Monte Carlo simulations allowed the authors to establish that this value was only 1.4 *σ* away from the null hypothesis and therefore compatible with it. The 95% confidence level limits on *η*<sub>1</sub> reached −1.2 μs/GeV and 1.1 μs/GeV for the lower and upper limits, respectively. These results were then translated into the following limits: *E*<sub>QG,1</sub> > 1.9 × 10<sup>17</sup> GeV and *E*<sub>QG,1</sub> > 1.7 × 10<sup>17</sup> GeV for the subluminal and the superluminal scenario, respectively.
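
For reference, the *Z*<sup>2</sup><sub>*m*</sub> statistic (a standard pulsation test) can be written in a few lines; in a DisCan scan it would be re-evaluated for every trial dispersion after correcting the event phases. A sketch with toy phases of our own:

```python
import numpy as np

def z_m_squared(phases, m=20):
    """Z^2_m periodicity test statistic on pulsar phases in [0, 1):
    sums the Fourier power of the first m harmonics; large values
    indicate a sharply pulsed profile."""
    phases = np.asarray(phases)
    n = len(phases)
    k = np.arange(1, m + 1)[:, None]          # harmonics 1..m
    ang = 2.0 * np.pi * k * phases[None, :]
    return (2.0 / n) * np.sum(np.cos(ang).sum(axis=1) ** 2
                              + np.sin(ang).sum(axis=1) ** 2)

rng = np.random.default_rng(1)
flat = rng.uniform(0, 1, 2000)                 # no pulsation
pulsed = np.concatenate([rng.normal(0.4, 0.01, 500) % 1.0,
                         rng.uniform(0, 1, 1500)])
print(z_m_squared(flat, 20) < z_m_squared(pulsed, 20))  # → True
```

DisCan then selects the trial *η*<sub>1</sub> that maximizes this statistic, i.e., the dispersion correction that makes the phaseogram sharpest.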

### *2.9. Applying the Maximum Likelihood Method to the Crab Pulsar with MAGIC*

The Crab pulsar was also observed and detected by MAGIC. The LIV analysis performed on the Crab pulsar [77] focused on the events from the P2 pulse, as they reach higher energies, which increases the sensitivity to a LIV effect. For this analysis, the authors used ∼326 h of excellent quality data. This dataset was analysed with two different methods. Three energy bands (mean energies ∼75 GeV, ∼465 GeV, and ∼770 GeV) were defined for the analysis, but the analysis focused primarily on the two highest. The reason for this choice is that the emission mechanism is likely to be different between the lowest and the highest energies of the pulse. Therefore, the comparison focused on the two high energy bands, which are more likely to arise from the same mechanism, so that intrinsic differences do not affect the search for a LIV effect. The first method used was the PC, already introduced in Section 2.8, which yielded the following limits on the LIV energy scale: *E*<sub>QG,1</sub> > 1.1 × 10<sup>17</sup> GeV and *E*<sub>QG,2</sub> > 1.4 × 10<sup>10</sup> GeV for the subluminal scenario, and *E*<sub>QG,1</sub> > 1.1 × 10<sup>17</sup> GeV and *E*<sub>QG,2</sub> > 1.5 × 10<sup>10</sup> GeV for the superluminal scenario. The second method used by the authors was the ML method, here used for the first time to analyse data from a pulsar. The likelihood approach follows what we introduced in Section 2.3, adapted to the study of events described by their phase *φ* instead of their absolute time *t*. In addition, the likelihood included terms to describe nuisance parameters, among which the parameters used to fit the pulse profile and the background events. The former were used to evaluate systematic uncertainties in the analysis, while the latter was particularly important in the case of a pulsar located in a nebula, itself an important and steady source of gamma rays. An extended investigation of the possible origins of systematic uncertainties in this work was performed, including the uncertainty on the absolute energy and flux scale, the possible contribution from events outside the pulse region, and the relatively large uncertainty on the estimation of the distance to the Crab pulsar. In total, the authors estimated the systematic uncertainties on *E*<sub>QG,1</sub> to be less than 42% and on *E*<sub>QG,2</sub> to be less than 36%. The obtained limits reached *E*<sub>QG,1</sub> > 5.5 × 10<sup>17</sup> GeV and *E*<sub>QG,2</sub> > 5.9 × 10<sup>10</sup> GeV, including systematic uncertainties, in the subluminal scenario, and *E*<sub>QG,1</sub> > 4.5 × 10<sup>17</sup> GeV and *E*<sub>QG,2</sub> > 5.3 × 10<sup>10</sup> GeV, including systematic uncertainties, in the superluminal scenario. The ML method thus provided limits a factor 4–5 more stringent than the limits obtained with the PC method.

### *2.10. A New Lorentz Invariance Violation Study on the Vela Pulsar*

The Vela pulsar (PSR J0835-4510), located at 0.29<sup>+0.08</sup><sub>−0.05</sub> kpc [78] from Earth, was observed by H.E.S.S. from March 2013 to April 2015 [53]. In order to reach an energy threshold as low as possible, the analysis only used events recorded by the large 28 m telescope at the centre of the array. The LIV analysis [79] made use of 24 h of good quality data from 2013 to 2014. In this period, the telescope recorded about 10,000 pulsed events above ∼20 GeV. The energy range considered for the analysis was 20 to 100 GeV, yielding ∼9300 excess events associated to the pulsar, for a signal to noise ratio of ∼0.025. The authors used the same ML method as described in Section 2.9. The ON phase region was defined as the interval [0.5, 0.6]. The signal template was obtained from the fitting of the low energy (20–45 GeV) events from the ON phase region by an asymmetrical Lorentzian function (for the signal) plus a constant (for the background). This constant is determined from the fitting of events from the OFF phase region, chosen as [0.7, 1]. The authors used dedicated toy Monte Carlo simulations to calibrate their analysis, by simulating mock data reproducing Vela's sample characteristics and injecting different simulated phase delays, similar to what was presented in Section 2.4. The method exhibits an almost unbiased reconstruction of the LIV induced delay. Therefore, the distribution of the reconstructed delay, when no LIV effect was injected, was used to evaluate the statistical uncertainty of the measurements as well as the systematic uncertainty. Applied to the 45–100 GeV range, the ML analysis provided a measurement of the delay *φ* = (−2.0 ± 5.0<sub>stat</sub> ± 3.0<sub>sys</sub>) × 10<sup>−2</sup> TeV<sup>−1</sup>, compatible with no delay.
The results were, therefore, converted to 95% confidence level lower limits on the linear term *E*<sub>QG,1</sub>, yielding *E*<sub>QG,1</sub> > 4.0 × 10<sup>15</sup> GeV and *E*<sub>QG,1</sub> > 3.7 × 10<sup>15</sup> GeV in the subluminal and superluminal cases, respectively.

### *2.11. First Parallel Study of Energy-Dependent Photon Group Velocity and Gamma-ray Absorption on the Same Data Sample*

As discussed in Section 2.2, Mrk 501 was already observed and studied in the search for LIV after a flare detected by MAGIC in 2005. In 2014, another flare was detected during a monitoring campaign of the First G-APD Cherenkov Telescope (FACT) [80]. The alert of this flare triggered observations by the full array of five telescopes of H.E.S.S. on the night of 23–24 June 2014. Observations were performed at high zenith angle (63° to 65°), leading to a high energy threshold of ∼1 TeV. The LIV analysis of this flare [81] was done using the ML method presented in Section 2.3. The only noticeable difference was the use of the variable *η<sub>n</sub>*, defined in Equation (6), as the main likelihood parameter, and the explicit mention of the normalization factor depending on *η<sub>n</sub>*. The sample of events was divided between the 733 events between 1.3 TeV and 3.25 TeV, which were used to compute the template, and the 662 events above 3.25 TeV, which were used to compute the likelihood and the best values of *η<sub>n</sub>*. It is important to note that in this specific analysis, given the high energy threshold of the observations, the low energy template included a potential LIV effect. In practice, while the delay in the template is usually taken as null (*D* = *η<sub>n</sub>E<sup>n</sup>*), here they modelled it as *D* = *η<sub>n</sub>*(*E<sup>n</sup>* − *E*<sub>T</sub><sup>*n*</sup>), where *E*<sub>T</sub> is the mean energy of the events in the energy range of the template. As the low energy events were used to build the template, only the 662 high energy events were used in the likelihood in the search for a LIV effect in this dataset. The best fitted values of *η<sub>n</sub>* were compatible with the Lorentz invariant scenario. 1000 Monte Carlo simulations with no LIV effect were used to derive calibrated intervals from which uncertainties were derived. Finally, limits on the energy scale of LIV were set to *E*<sub>QG,1</sub> > 3.6 × 10<sup>17</sup> GeV (*E*<sub>QG,1</sub> > 2.6 × 10<sup>17</sup> GeV) and *E*<sub>QG,2</sub> > 8.5 × 10<sup>10</sup> GeV (*E*<sub>QG,2</sub> > 7.3 × 10<sup>10</sup> GeV) for the subluminal (superluminal) scenario. These limits include systematic uncertainties, the main one being the determination of the template. Note that, to date, this dataset is the only one that has been used to perform both a time of flight study, as described in this section, and a universe transparency study, later described in Section 3.1.3.

### *2.12. First Lorentz Invariance Violation Study on a Gamma-ray Burst Observed with Imaging Atmospheric Cherenkov Telescopes*

More than two decades after the proposal by Amelino-Camelia et al., an opportunity presented itself to test LIV on a signal from a GRB observed with IACTs. The MAGIC Collaboration announced the discovery of a GRB with IACTs for the first time ever [82]. A signal from GRB 190114C was detected at energies above 1 TeV [59]. The analysis of this signal for the purpose of testing LIV started immediately. The ML method was applied (in fact, Equations (10) and (11) were adopted from the LIV study on GRB 190114C [83]). The most troublesome issue in the analysis was the formulation of the light curve template. The MAGIC observations started 62 s after the burst, almost completely missing the prompt phase, and detecting gamma rays almost exclusively from the afterglow phase of the GRB. The signal in the TeV energy band was observable until ∼40 min after the burst [84]; however, it was estimated that only the first 20 min (the duration of a single observation run) would be relevant for the test of LIV. After that, the signal rate became comparable to the background rate, meaning that it would not have considerably improved the sensitivity of the analysis, while at the same time the systematic effects would have increased. The MAGIC data analysis revealed that during the first 20 min of observation about 700 gamma rays were detected with energies in the range of 0.3–2 TeV. The intrinsic spectral distribution of events was well fitted with a power law [84]. More interestingly, the light curve also demonstrated a monotonic, power law decay of the flux. A monotonic change of flux is no more useful in searches for a spectral dispersion than no change of flux at all would be: a spectral dispersion, applied to a monotonic temporal distribution, would change the rate of change, but not the functional shape of the distribution. Thus, any effect of a spectral dispersion would be undetectable.
Therefore, in order to perform the LIV test, the authors used the light curve model obtained from theoretical inference, based on the observations performed with the MAGIC telescopes and with other facilities observing in lower energy ranges [84]. This template was dubbed *theoretical* by the authors of the LIV study. All ∼700 events were used to calculate the likelihood. Before estimating the values and the confidence intervals of the LIV parameters, the sensitivity of the method was estimated. This was done by creating 1000 mock data sets and using them to calculate the likelihood. Each mock data set was produced from the original data set by shuffling the arrival times of the detected events and then randomly selecting events from this reshuffled data set. In this way, the generated mock data sets consisted of the same number of events as the original data set, with the same energy and temporal distributions. Furthermore, bootstrapping, the procedure of randomly selecting events from the existing (reshuffled) data set, allowed both the energy and temporal distributions to vary in line with their statistical uncertainties. In this way, these uncertainties were propagated to the final result. Reshuffling, on the other hand, had the role of removing any correlation between the energy and the arrival time, if present in the first place. Therefore, if there was any energy-dependent time delay present in the original data set, it would have been washed out by the reshuffling. After calculating the likelihood for each of the mock data sets, a distribution of the results was made, revealing a bias in the method, for which the final results, obtained on the real data set, were corrected. The same mock data sets were used to calibrate the confidence interval, as described in Section 2.3.
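
The reshuffle-plus-bootstrap construction described above can be sketched as follows (the toy data and function names are ours, not the MAGIC analysis code):

```python
import numpy as np

def mock_dataset(times, energies, rng):
    """Build one mock data set as described in the text: shuffle arrival
    times to break any energy-time correlation, then bootstrap (sample
    with replacement) so both distributions vary within statistics."""
    times = np.asarray(times)
    energies = np.asarray(energies)
    shuffled = rng.permutation(times)              # decouple E from t
    idx = rng.integers(0, len(times), len(times))  # bootstrap indices
    return shuffled[idx], energies[idx]

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1200, 700))         # ~700 events, GRB-like window
e = rng.pareto(2.0, 700) + 0.3                 # toy spectrum in TeV
mt, me = mock_dataset(t, e, rng)
print(len(mt) == len(t), abs(np.corrcoef(mt, me)[0, 1]) < 0.15)
```

Repeating this 1000 times and running the likelihood fit on each mock set yields the null distribution used for both the bias correction and the confidence-interval calibration.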

Upon correcting for the bias and estimating the confidence interval, the resulting lower bounds on the LIV energy scale were as follows: *E*<sub>QG,1</sub> > 5.8 × 10<sup>18</sup> GeV (*E*<sub>QG,1</sub> > 5.5 × 10<sup>18</sup> GeV) and *E*<sub>QG,2</sub> > 6.3 × 10<sup>10</sup> GeV (*E*<sub>QG,2</sub> > 5.6 × 10<sup>10</sup> GeV) for the subluminal (superluminal) scenario.

As already mentioned, the light curve model adopted from [84] was constructed based on observations at lower energies and on theoretical considerations. In the model, the power law decay observed with the MAGIC telescopes is preceded by a rather sharp peak. The peak occurred before the MAGIC observation window, so it was neither confirmed nor disproved. Even before obtaining the final results of the LIV test, there was a genuine concern that such a fast change of the flux was introducing artificially high sensitivity to the LIV effects. As a sanity check, the LIV analysis was performed on another light curve template. This template, dubbed "minimal", was a step function, with zero value before the burst, and a constant value afterwards. Translated to the signal PDF (Equation (11)), it means that there is zero probability of a gamma ray being emitted before the burst, and equal probability of emitting any gamma ray at any time after the burst. This very simple function is clearly not the correct description of the intrinsic light curve. Nevertheless, it avoids sharp peaks not confirmed by observations, consequently, in a sense, minimizing the influence of the light curve template on the sensitivity to the LIV effects. This light curve template will cause the likelihood profile to be minimal and flat for small and negative values of the LIV parameters, thus preventing the estimation of the bias, and only allowing setting constraints on the subluminal scenario. The results obtained using this minimal model (*E*<sub>QG,1</sub> > 2.8 × 10<sup>18</sup> GeV and *E*<sub>QG,2</sub> > 7.3 × 10<sup>10</sup> GeV) are compatible with the ones obtained using the theoretical model, meaning that the usage of the theoretical model did not introduce unreasonably high sensitivity into the analysis.

The bounds on the LIV energy scale obtained in this study were comparable to the most constraining lower limits available at that time. However, more than confirming the constraints resulting from other studies, the importance of this work lay particularly in the fact that it was the first one ever performed on a signal from a GRB observed with IACTs. Especially in the upcoming era of the Cherenkov Telescope Array (CTA), which carries the promise of observing a few GRBs each year, with significantly larger data samples for every GRB [86], the test of LIV on GRB 190114C presents an important stepping stone for the future of LIV research.

### *2.13. Lorentz Invariance Violation on Fermi-LAT Gamma-ray Bursts*

In previous sections we laid out the analysis methods and results of different studies performed on IACT data, searching for signatures of energy dependence in the photon velocity. The results of all these studies are usually compared to the results of a benchmark work by Vasileiou et al. [38], where the authors collected four GRBs observed with the *Fermi*-LAT instrument. Vasileiou et al. analysed the *Fermi*-LAT data from four bright GRBs with well determined redshifts: GRB 080916C (*z* = 4.35 ± 0.15 [87]), GRB 090510 (*z* = 0.903 ± 0.003 [88]), GRB 090902B (*z* = 1.822 [89]), and GRB 090926A (*z* = 2.1071 ± 0.0001 [90]). All of these are much farther away than any source used for LIV tests with IACTs. In addition, and unlike the case of GRB 190114C observed with the MAGIC telescopes, a quickly variable prompt GRB phase was observed in all four cases. A LIV test was performed on each of these sources individually, and three different analysis methods were used on each source.

The **PairView (PV)** method was developed for the purposes of this study. It calculates the energy-dependent differences in the arrival times for each pair (*i*, *j*) of photons in the sample:

$$d\_{i,j} \equiv \frac{t\_i - t\_j}{E\_i^n - E\_j^n}.\tag{15}$$

The distribution of *d*<sub>i,j</sub> will be peaked at *η<sub>n</sub>*, defined in Equation (6), giving the value of the LIV parameter.
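
A compact sketch of PairView under these definitions (the peak of the *d*<sub>i,j</sub> distribution is located here with a coarse histogram, whereas the published method uses a kernel density estimate; the toy event sample and injected lag are our own):

```python
import numpy as np

def pairview(times, energies, n=1):
    """PairView: pairwise estimates d_ij = (t_i - t_j)/(E_i^n - E_j^n);
    the peak of their distribution estimates eta_n."""
    t = np.asarray(times, float)
    e = np.asarray(energies, float) ** n
    i, j = np.triu_indices(len(t), k=1)        # all pairs i < j
    d = (t[i] - t[j]) / (e[i] - e[j])
    # Locate the distribution's peak with a coarse histogram (a kernel
    # density estimate would be used in a real analysis).
    hist, edges = np.histogram(d, bins=201, range=(-5, 5))
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])

rng = np.random.default_rng(2)
e = rng.uniform(0.3, 2.0, 400)                 # energies, TeV
t = rng.normal(0, 0.2, 400) + 1.2 * e          # injected lag: 1.2 s/TeV
est = pairview(t, e)
print(est)
```

The estimate lands close to the injected 1.2 s/TeV; pairs with nearly equal energies produce heavy tails in *d*<sub>i,j</sub>, which is why the peak, not the mean, is used.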

**Sharpness maximisation method (SMM)** [38,39] is analogous to the ECF method, which was previously applied to the MAGIC sample of Mrk 501 and explained in Section 2.2. It employs the aforementioned fact that applying a spectral dispersion to a data set decreases the sharpness of the light curve. While the ECF method maximizes the power in the selected time interval, the SMM measures the sharpness of the light curve, e.g.,

$$S(\eta\_n) = \sum\_{i=1}^{N-\rho} \log \left( \frac{\rho}{t\_{i+\rho}' - t\_i'} \right),\tag{16}$$

after applying an opposite dispersion, as described in Equation (5). *ρ* is a fixed parameter ensuring that events which are very close together do not dominate the sum through the denominator. The intrinsic light curve is expected to be the sharpest one. Therefore, the *η<sub>n</sub>* for which the light curve is the sharpest is taken as the measure of the spectral dispersion present in the data sample.
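
Equation (16) translates almost directly into code. The sketch below injects a known dispersion into a toy event list and recovers it by scanning trial values of *η*<sub>1</sub> (all sample parameters are illustrative):

```python
import numpy as np

def sharpness(times, energies, eta, n=1, rho=3):
    """Equation (16): sharpness of the light curve after removing a trial
    dispersion eta (in s / GeV^n) from each event's arrival time."""
    t = np.sort(np.asarray(times) - eta * np.asarray(energies) ** n)
    return np.sum(np.log(rho / (t[rho:] - t[:-rho])))

rng = np.random.default_rng(3)
e = rng.uniform(10, 100, 300)                  # energies, GeV
t0 = rng.normal(0, 1.0, 300)                   # intrinsically sharp pulse
t = t0 + 0.05 * e                              # injected dispersion, 0.05 s/GeV
trials = np.linspace(0, 0.1, 101)
best = trials[np.argmax([sharpness(t, e, x) for x in trials])]
print(best)
```

The scan recovers a value close to the injected 0.05 s/GeV: removing the correct dispersion restores the intrinsically sharp pulse, maximizing Equation (16).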

The third and final method used was the ML, which we already described in Section 2.3. Final limits on the LIV energy scale were obtained for each source individually, by taking the average of the results obtained from the three different methods and accounting for systematic effects (see Table 5 in [38]). The most constraining lower limits resulted from GRB 090510: *E*<sub>QG,1</sub> > 2.2 × 10<sup>19</sup> GeV, *E*<sub>QG,2</sub> > 4.0 × 10<sup>10</sup> GeV for the subluminal, and *E*<sub>QG,1</sub> > 3.9 × 10<sup>19</sup> GeV, *E*<sub>QG,2</sub> > 3.0 × 10<sup>10</sup> GeV for the superluminal scenario. The staggering lower limits on the linear term, surpassing the Planck energy, are the reason why every other LIV study is compared to this one. Interestingly, GRB 090510 was the one with the smallest redshift in the sample, and the only one with *z* < 1. However, it is also the only one in the sample classified as a short GRB, while the other three were long GRBs, with emission spread over a somewhat longer time. Moreover, the highest energies in all four data sets were detected from GRB 090510. While the authors at first considered combining the results from all four sources into one single bound on the LIV energy scale, they gave up on the idea because the result from GRB 090510 was so much more constraining than those from the other three GRBs that a combination would not have significantly increased the lower limit.

It should be noted that the *Fermi*-LAT detector is sensitive in the energy range of 20 MeV–300 GeV. The higher energy part of this band partially overlaps with the IACTs' sensitivity range (∼30 GeV to a few tens of TeV). However, in the overlapping energy region, the *Fermi*-LAT sensitivity deteriorates with increasing energy, while the opposite is true for IACTs. Lower energy gamma rays, detectable with *Fermi*-LAT, are not absorbed by the EBL, thus enabling *Fermi*-LAT to detect sources at significantly higher redshifts than IACTs, increasing the sensitivity to LIV effects. On the other hand, *Fermi*-LAT reaches significantly lower energies than IACTs, which limits its sensitivity to LIV effects. These characteristics will be important when we compare different results in Section 4.

### **3. Modified Photon Interactions**

Very soon after the modified photon dispersion relation was introduced, it was realised that it can have consequences on the kinematics and dynamics of particle interactions (see, e.g., [92–94]). In some quantum electrodynamics processes, modifications of the dispersion relation may change the reaction energy threshold. On the other hand, some processes forbidden by the energy-momentum conservation law in the Lorentz invariant scenario may become allowed if the Lorentz symmetry is broken. In this chapter, we will look more closely into several of these phenomena.

### *3.1. Testing Lorentz Invariance Violation with Universe Transparency*

The universe is filled with low energy photon fields, such as the extragalactic background light (EBL), the cosmic microwave background (CMB) and the radio background (RB). Gamma rays traversing cosmological distances scatter off those photons, creating electron-positron pairs. Consequently, their flux, observed from Earth, is attenuated [95–98]. The EBL is responsible for the attenuation of gamma rays in the 10–10<sup>5</sup> GeV range, which roughly corresponds to the observable energy range of current IACTs. Unfortunately, direct EBL measurements are obstructed by bright foreground emissions, mainly zodiacal light [99], which makes it hard to determine its precise spectrum (for more information about the EBL, photon-photon interactions and the opacity of the universe to gamma rays we refer the reader to [100] and references therein). To tackle this problem, different phenomenological approaches predicting the overall EBL spectrum have been followed. Remarkably, EBL models obtained through different methodologies, such as Franceschini et al. [101], Domínguez et al. [102] and Gilmore et al. [103], are in good agreement. These models were tested on VHE data from sets of AGN observed by current IACTs [104–106]. Those tests were done presuming Lorentz invariance.

The gamma-ray spectrum observed from Earth is usually written as a convolution of the source intrinsic spectrum and the EBL attenuation effect:

$$\Phi\_{\rm obs}(E) = \Phi\_{\rm int}(E(1+z\_{\sf s})) \times e^{-\tau(E, z\_{\sf s})},\tag{17}$$

where *E* is the observed gamma-ray energy and *z*<sub>s</sub> is the redshift of the observed source. *τ*(*E*, *z*<sub>s</sub>) is the optical depth, dependent on the two aforementioned parameters, and is given by:

$$\tau(E, z\_s) = \int\_0^{z\_s} \frac{dl}{dz} dz \int\_{-1}^1 \frac{1 - \cos \theta'}{2} d\cos \theta' \int\_{\epsilon\_{\rm th}'}^\infty \sigma\_{\gamma \gamma}(E', \epsilon', \theta') \, n\left(\epsilon', z\right) d\epsilon'.\tag{18}$$

In this expression,

• *ε*′<sub>th</sub> is the threshold energy of the target photon for pair creation:

$$
\epsilon\_{\rm th}' = \frac{2m\_{\rm e}^2 c^4}{E'(1 - \cos \theta')}.\tag{19}
$$

The threshold energy, and its changes due to modifications of the special relativity kinematics, will play a vital role in constraining *E*<sub>QG</sub>.

• The factor *dl*/*dz* accounts for the distance traveled by the gamma ray, assuming a flat ΛCDM cosmology:

$$\frac{dl}{dz} = \frac{c}{H\_0(1+z)\sqrt{\Omega\_{\rm m}(1+z)^3 + \Omega\_\Lambda}}\tag{20}$$

Beyond the gamma-ray horizon (*τ*(*E*, *z*<sub>s</sub>) = 1) the universe becomes progressively opaque to VHE gamma rays (for further reading on this topic in connection with IACTs we suggest [108] and references therein). For a source at redshift 0.034, which is the redshift of Mrk 501, the gamma-ray horizon is around 10 TeV [101]. When doing calculations, one must be careful to take into account the cosmic expansion and notice that measurements are affected by a factor (1 + *z*). Namely, *E* and *ε* change along the line of sight inversely proportionally to (1 + *z*); for example, *ε* = *ε*′/(1 + *z*).
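
As a numerical illustration of Equation (20), integrating *dl*/*dz* out to *z* = 0.034 gives a path length of roughly 140 Mpc; the cosmological parameters below (*H*<sub>0</sub> = 70 km/s/Mpc, Ω<sub>m</sub> = 0.3, Ω<sub>Λ</sub> = 0.7) are standard reference values, not those of any specific EBL model. At the horizon, *τ* = 1, the suppression factor of Equation (17) is e<sup>−1</sup> ≈ 0.37:

```python
import numpy as np

C_KM_S = 2.9979e5
H0 = 70.0            # km/s/Mpc (illustrative reference value)
OM, OL = 0.3, 0.7    # flat LambdaCDM density parameters

def dl_dz(z):
    """Line element of Equation (20), in Mpc per unit redshift."""
    return C_KM_S / (H0 * (1 + z) * np.sqrt(OM * (1 + z) ** 3 + OL))

def path_length(z_s, steps=10000):
    """Riemann-sum integration of dl/dz from 0 to z_s, in Mpc."""
    z = np.linspace(0, z_s, steps)
    return float(np.sum(dl_dz(z)) * (z[1] - z[0]))

# Path length to Mrk 501 (z = 0.034) and the flux suppression at tau = 1.
print("%.0f Mpc" % path_length(0.034), "%.2f" % np.exp(-1.0))
```

The ∼140 Mpc result matches the commonly quoted distance to Mrk 501, a useful cross-check of the integration.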

### 3.1.1. Influence of Lorentz Invariance Violation on Universe Transparency

The detection of gamma-ray emission up to ∼22 TeV from Mrk 501 in 1997 by the High Energy Gamma Ray Astronomy (HEGRA) experiment [109] hinted that the universe is more transparent to VHE gamma rays than expected. One possible solution to this newly arisen problem was the aforementioned modification of the photon dispersion relation. Added terms in the photon dispersion relation can cause a change in the energy threshold for pair creation, consequently leading to changes in the gamma-ray absorption. In this scenario, the new energy reaction threshold is [111]:

$$\varepsilon\_{\rm th}^{\prime} = \frac{2m\_{\rm e}^{2}c^{4}}{E^{\prime}(1-\cos\theta^{\prime})} - \frac{S}{2(1-\cos\theta^{\prime})} \left(\frac{E^{\prime}}{E\_{\rm QG,n}}\right)^{n}E^{\prime} \tag{21}$$

Changes in the energy reaction threshold are depicted in Figure 1 for a head-on collision and *z* = 0. As defined in Section 1.2, *S* = +1 for superluminal, and *S* = −1 for subluminal behaviour. This modified energy reaction threshold has been derived under two assumptions: (i) the standard energy-momentum conservation law is maintained in the LIV scenario, and (ii) LIV affects only the photon dispersion relation, while electrons remain unaffected. Indeed, the effects of LIV on electrons were strongly constrained by independent studies. Therefore, the majority of studies considering the electromagnetic interaction rely on the assumption that only the photon dispersion relation is modified by LIV.
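The behaviour of the modified threshold in Equation (21) can be checked with a few lines of Python. The function name, the grid range, the choice *E*QG,1 = *E*Pl, and the analytic expression for the position of the subluminal minimum (obtained by differentiating Equation (21) for *n* = 1 and a head-on collision) are our own illustrative additions:

```python
import numpy as np

ME2 = (0.511e-3) ** 2          # (m_e c^2)^2 [GeV^2]
E_PL = 1.22e19                 # Planck energy [GeV]

def eps_th(E, S=0, EQG=E_PL, n=1, cos_theta=-1.0):
    """Reaction threshold of Eq. (21) [GeV]; S = 0 reproduces Eq. (19)."""
    k = 1.0 - cos_theta
    liv = 0.0 if S == 0 else S / (2.0 * k) * (E / EQG) ** n * E
    return 2.0 * ME2 / (E * k) - liv

E = np.logspace(2, 7, 2000)        # 100 GeV .. 10 PeV
th_sub = eps_th(E, S=-1)           # subluminal: threshold has a global minimum
i = int(np.argmin(th_sub))
# Analytic minimum for n = 1, head-on: d(eps_th)/dE = 0  =>  E* = (2 m^2 EQG)^(1/3)
E_star_analytic = (2.0 * ME2 * E_PL) ** (1.0 / 3.0)
print(f"threshold minimum at E = {E[i]:.3g} GeV, eps_th = {th_sub[i]:.3g} GeV")
```

For *E*QG,1 = *E*Pl the subluminal threshold minimum comes out near 20 TeV, with *ϵ*′<sub>th</sub> of a few tens of meV, i.e., in the infrared part of the EBL.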

Modifications in the energy reaction threshold could lead to changes in the observed spectra of a distant source, depending on the LIV scale [92,97,113–116]. In the superluminal case (*S* = +1), modifications in the photon dispersion relation will lower the energy reaction threshold for electron-positron pair creation. In that case, gamma rays would be absorbed by lower energy photon fields than in the Lorentz invariant case. For example, a 50 TeV gamma ray in the Lorentz invariant scenario does not have enough energy to reach the reaction threshold with CMB photons. However, in a LIV superluminal scenario, for sufficiently low values of *E*QG, the reaction threshold will be reached (see Figure 1). This would lead to additional depletion of the most energetic photons, resulting in a steeper observed spectrum. Still, no way has so far been found to unambiguously disentangle the effects caused by the lowering of the photon-photon energy threshold due to LIV from the effects arising due to Lorentz invariant EBL attenuation, or from intrinsic properties of the source such as a spectral cut off. This is arguably the main reason why all experimentally set limits on *E*QG using the universe transparency to gamma rays were derived for the subluminal behaviour only. In the subluminal scenario (*S* = −1), modifications of the photon dispersion relation will lead to an increase of the energy reaction threshold, resulting in a reduced opacity of the universe to VHE gamma rays. Moreover, the reaction threshold as a function of the gamma-ray energy will have a global minimum [115], as can be seen in Figure 1. Note that there is no equivalent minimum in the Lorentz invariant nor in the LIV superluminal scenario, since *ϵ*′<sub>th</sub>, as defined in Equation (19), is a monotonic function of the gamma-ray energy.
Contrary to the Lorentz invariant and LIV superluminal scenarios, in which the reaction is allowed for all gamma rays with energies above the reaction energy threshold, in the LIV subluminal scenario pair creation is kinematically forbidden for gamma-ray energies higher than the upper reaction energy threshold. The existence of the global minimum implies that the energy domain of EBL photons acting as targets for the absorption of VHE gamma rays would be reduced, regardless of the gamma-ray energy. Consequently, a certain number of gamma rays would evade absorption and thus reach the Earth. This particularly holds for gamma rays with energies above the position of the energy threshold minimum.

**Figure 1.** (**Left**) Energy of the background photons at threshold (*ϵ*′<sub>th</sub>) for the pair-production reaction as a function of the gamma-ray energy (*E*). The black dashed line represents the Lorentz invariant scenario, while solid and dotted lines represent the LIV subluminal and superluminal scenarios, respectively. Five different values of *E*QG,1 were considered. (**Right**) Spectral energy distributions of the CMB and the two constituents of the EBL (cosmic optical background and cosmic infrared background), produced using the EBL model by Domínguez et al. [102].

The aforementioned Breit–Wheeler cross section, as a function of the gamma-ray energy, is shown in Figure 2. In the Lorentz invariant scenario, it is represented with a black dashed line. Once the reaction energy threshold is reached, the cross section rises quickly. At a gamma-ray energy roughly twice the threshold energy [117], the Lorentz invariant cross section reaches its maximal value of 1.70 × 10<sup>−25</sup> cm<sup>2</sup> [118]. Afterwards, as the gamma-ray energy increases, the cross section drops and asymptotically approaches zero. In the superluminal scenario, the energy reaction threshold is lower than in the Lorentz invariant scenario. Moreover, a lower *E*QG,1 results in a lower reaction threshold. The cross section shape remains the same as in the Lorentz invariant scenario, although it becomes narrower as *E*QG,1 decreases and reaches its maximum at lower gamma-ray energies. A somewhat more interesting development of the Breit–Wheeler cross section occurs in the subluminal LIV scenario. There are three distinct cases: (i) As we saw in Figure 1, for *E*QG,1 low enough, the reaction energy threshold will never be reached. Consequently, the cross section will be zero for all gamma-ray energies (red full line in the bottom panel of Figure 2). (ii) For higher values of *E*QG,1, the horizontal line will be crossed twice. Hence, there will be a lower and an upper reaction energy threshold, and the reaction will be possible for gamma-ray energies between these thresholds. For relatively low *E*QG,1, this interval will be narrow, and the cross section will never reach its maximum possible value of 1.70 × 10<sup>−25</sup> cm<sup>2</sup> (green and violet full lines in the bottom panel of Figure 2). (iii) For even higher values of *E*QG,1, the gamma-ray energy interval between the reaction energy thresholds will be wide enough for the cross section to reach its maximum possible value.
Moreover, the cross section will start to decrease with increasing gamma-ray energy, roughly following the shape of the Lorentz invariant cross section. However, as the gamma-ray energy rises, the cross section reaches a local minimum, and starts increasing to reach its maximum possible value once again, just below the upper reaction threshold. Once the threshold is reached, the cross section is cut off (light and dark blue full lines in the bottom panel of Figure 2). If one continued to increase the value of *E*QG,1, the second peak would become sharper and move towards higher energies, and the intermediate part of the cross section would more closely follow the Lorentz invariant cross section. In addition to these three cases, there are borderline cases. Between cases (i) and (ii), for *E*QG,1 precisely such that the horizontal line in Figure 1 is a tangent to the energy threshold line, the reaction energy threshold will be reached at precisely one gamma-ray energy, and the cross section will be a vertical line at that energy, and zero elsewhere. Between cases (ii) and (iii), the cross section would reach its maximum possible value, and monotonically drop to zero.

**Figure 2.** The Breit–Wheeler cross section as a function of the gamma-ray energy for a background photon energy of 11 meV and a head-on collision. The same five values of *E*QG,1 as in Figure 1 were considered. (**Top**) The black dashed line represents the Lorentz invariant scenario, while dotted lines represent the LIV superluminal scenario. (**Bottom**) The black dashed line represents the Lorentz invariant scenario, while full lines represent the LIV subluminal scenario.

After seeing how modifications of the photon dispersion relation influence the reaction energy threshold and the cross section, it is now time to see how the optical depth changes due to those modifications. Since all experimental limits on *E*QG based on the universe transparency were set only for the subluminal scenario, we will focus exclusively on it. In Figure 3 we depict a hypothetical gamma-ray absorption for a source at redshift *z*<sub>s</sub> = 0.03 and gamma-ray energies up to 100 TeV, assuming different values of *E*QG,1. In the top panel of Figure 3 the gamma-ray horizon is denoted with a maroon line. As previously mentioned, beyond the gamma-ray horizon the universe becomes increasingly opaque to VHE gamma rays, and thus the probability of their detection diminishes. In the Lorentz invariant scenario, once the gamma-ray horizon is reached, the optical depth only increases. On the other hand, in the subluminal LIV scenario, the optical depth has a global maximum, located at an energy that depends on the value of *E*QG, after which it decreases again. At some point it drops below the gamma-ray horizon, allowing gamma rays to evade absorption, which would lead to a recovery of the photon flux. The higher the *E*QG scale, the higher the gamma-ray energy at which the recovery would occur. The absorption coefficient (*e*<sup>−*τ*</sup>), as a function of the gamma-ray energy (*E*), is depicted in the bottom panel of Figure 3 and shows how the survival probability of the photons behaves in this scenario.

**Figure 3.** (**Top**) Optical depth (*τ*) as a function of the gamma-ray energy (*E*) for a hypothetical source at *z*<sub>s</sub> = 0.03. The black dashed line represents the Lorentz invariant scenario, while solid lines represent the LIV subluminal scenario. Five different values of *E*QG,1 were considered. The gamma-ray horizon is denoted with the maroon line. (**Bottom**) The absorption coefficient (*e*<sup>−*τ*</sup>) as a function of the gamma-ray energy (*E*).

It should be noted that there are phenomena other than LIV which could leave imprints in the spectra of observed sources. Most notable are axion-like particles, into which VHE gamma rays can oscillate in the presence of an external magnetic field. Nevertheless, the imprints which axion-like particles and LIV would potentially leave can be mutually distinguished. Namely, axion-like particle imprints should be independent of the source redshift, while at the same time dependent on the magnetic field. The opposite holds for LIV. For now, these two effects are being investigated separately, even in studies investigating both of them (see, e.g., [97]). For more information about axion-like particles and their searches with IACTs, we refer the interested reader to [119,120].

### 3.1.2. Testing Lorentz Invariance Violation on Universe Transparency

The first experimental test of LIV on the EBL absorption of gamma rays using data from IACTs was performed by Biteau & Williams [97]. The authors derived a simplified expression for the optical depth using analytical methods. Furthermore, they constructed the EBL spectrum using 86 published gamma-ray spectra of 30 blazars with well-established redshifts. A total of ∼270,000 gamma rays constituted this sample. The EBL model was described through eight free parameters, denoted with *A*. In the LIV scenario, *E*QG was added as one additional free parameter to the analysis, using the optical depth dependence on LIV described in the previous section. Furthermore, the EBL parameters were allowed to vary with *E*QG. In order to investigate the possible effects of LIV, Biteau & Williams compared the spectra of the aforementioned blazars under the assumption of Lorentz invariance on the one hand, and under the assumption of LIV on the other hand. The effect of LIV was quantified using a test statistic (TS), related to the likelihood through L = exp(−*TS*/2) and defined as follows:

$$TS = \chi^2(E\_{\rm QG}, A\_{\rm QG}) - \chi^2(\infty, A\_\infty). \tag{22}$$

The term *χ*<sup>2</sup>(*E*QG, *A*QG) represented the best fit in the LIV scenario. The *χ*<sup>2</sup>(∞, *A*∞) represented the best fit in the Lorentz invariant scenario since, as mentioned in Section 1.2, letting *E*QG,1 → ∞ leads to the Lorentz invariant photon dispersion relation.

Biteau & Williams adopted the formalism of [115], which presumes that LIV affects both photons and leptons equally. Under that assumption, the last term in the energy threshold expression (Equation (21)) gains another factor of (1 − 2<sup>−*n*</sup>). One of the 86 spectra selected for this study was the spectrum of the historical Mrk 501 flare detected by HEGRA in 1997 [109]. When performing the analysis on the originally published Mrk 501 spectrum, Biteau & Williams found *E*QG,1 ≈ *E*Pl at the 4*σ* level. However, when a newly derived spectrum of the same data set (from [121]) was used, the significance decreased to 2.4*σ*. The re-analysed spectrum had better energy resolution, and therefore excluded the initially obtained highest energy data point. This example demonstrates how a single spectrum, and the most energetic photons in it, can greatly influence the final result. The first experimentally set 95% confidence level lower limit, using the universe transparency to VHE gamma rays, was found to be *E*QG,1 > 9.5 × 10<sup>18</sup> GeV. This value changed to *E*QG,1 > 8.6 × 10<sup>18</sup> GeV when a 10% systematic uncertainty on the energy scale (typical for IACTs) was accounted for.

### 3.1.3. The Most Constraining Limits Based on Single Source Analysis

Seventeen years after the historical flare from Mrk 501, which triggered discussions about the influence of LIV on the universe's transparency to VHE gamma rays, another bright flare brought Mrk 501 into the spotlight once again [122]. This time it was used to constrain *E*QG by the H.E.S.S. Collaboration using two independent channels [81]. The test of energy-dependent photon group velocity was described in Section 2.11, while the spectral analysis will be discussed here. Only the possible subluminal behaviour, for linear and quadratic contributions, was investigated. The pair-production cross section was calculated according to [123]. In this approach, the modified expression for the square of the center of mass energy *s* can be written as:

$$s = 2E'\epsilon^{\prime}(1 - \cos\theta^{\prime}) + S\left(\frac{E^{\prime}}{E\_{\text{QG},n}}\right)^{n}E^{\prime 2} \tag{23}$$

The dependence of the cross section on *s* is considered to be the same in the LIV and Lorentz invariant scenarios. The cross section as a function of the gamma-ray energy, for a background photon energy of 11 meV and a head-on collision, is depicted in Figure 2.
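Combining the standard Breit–Wheeler cross section with the modified *s* of Equation (23) reproduces the lower and upper reaction thresholds of the subluminal scenario numerically. The value *E*QG,1 = 10<sup>21</sup> GeV below is an arbitrary illustrative choice; in this toy setup, for *E*QG,1 = *E*Pl and an 11 meV target, *s* never reaches 4*m*<sub>e</sub><sup>2</sup>*c*<sup>4</sup> and the reaction is forbidden at all energies (case (i) above):

```python
import numpy as np

SIGMA_T = 6.6524e-25          # Thomson cross section [cm^2]
ME2 = (0.511e-3) ** 2         # (m_e c^2)^2 [GeV^2]

def sigma_bw(s):
    """Breit-Wheeler cross section as a function of s [GeV^2] -> [cm^2]."""
    s = np.asarray(s, dtype=float)
    b = np.sqrt(np.clip(1.0 - 4.0 * ME2 / s, 0.0, None))
    out = np.zeros_like(s)
    ok = s > 4.0 * ME2
    bo = b[ok]
    out[ok] = (3.0 / 16.0) * SIGMA_T * (1 - bo**2) * (
        (3 - bo**4) * np.log((1 + bo) / (1 - bo)) - 2 * bo * (2 - bo**2))
    return out

def s_liv(E, eps, S=-1, EQG=1e21, n=1, cos_theta=-1.0):
    """Square of the centre-of-mass energy, Eq. (23) [GeV^2]."""
    return 2.0 * E * eps * (1 - cos_theta) + S * (E / EQG) ** n * E**2

eps = 11e-12                      # 11 meV background photon, in GeV
E = np.logspace(4, 6, 4000)       # 10 TeV .. 1 PeV
sig = sigma_bw(s_liv(E, eps))     # subluminal, illustrative EQG = 1e21 GeV
allowed = E[sig > 0]
print(f"reaction allowed for {allowed.min():.3g} GeV < E < {allowed.max():.3g} GeV")
```

The lower threshold sits close to the Lorentz invariant one, while the upper threshold, beyond which the cross section is cut off, appears only in the subluminal scenario.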

To quantify the possible effects of LIV on the spectrum of this bright flare, the authors defined a TS similar to Equation (22):

$$\text{TS} = \chi^2(E\_{\text{QG}}) - \chi^2(E\_{\text{QG}} \to \infty). \tag{24}$$

However, unlike the TS defined in Equation (22), this TS did not contain free-varying EBL parameters. The EBL model of Franceschini et al. [101] was used and the optical depth was calculated in the standard way, as described in Equation (18). The *χ*<sup>2</sup> values in Equation (24) were obtained by varying *E*QG logarithmically and fitting the measured spectrum of the flare with an assumed intrinsic spectrum convolved with the EBL attenuation effect. The intrinsic spectrum was assumed to be a simple power law. From the TS profiles, lower limits on *E*QG at 95% confidence level were set to *E*QG,1 > 2.6 × 10<sup>19</sup> GeV and *E*QG,2 > 7.8 × 10<sup>11</sup> GeV for the linear and quadratic contributions, respectively.
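The logic of Equation (24), profiling the fit over *E*QG and comparing it with the *E*QG → ∞ fit, can be sketched end to end on mock data. Everything in this sketch is a toy stand-in of ours: the optical depth, its LIV "recovery" above the threshold-minimum energy, the noise-free mock spectrum, the assumed 10% errors, and the TS = 3.84 crossing; none of it reproduces the actual H.E.S.S. EBL calculation or likelihood:

```python
import numpy as np

ME2 = (0.511e-3) ** 2                 # (m_e c^2)^2 [GeV^2]
E = np.logspace(np.log10(2e2), np.log10(5e4), 25)   # 0.2 .. 50 TeV bins [GeV]
SIGMA_LOG = 0.10                      # assumed 10% flux errors (log space)

def tau_li(E):
    """Toy Lorentz-invariant optical depth: tau = 1 at 10 TeV (z ~ 0.03)."""
    return E / 1e4

def tau_liv(E, EQG):
    """Toy subluminal optical depth: absorption switched off above the
    threshold-minimum energy E_rec ~ (2 m_e^2 c^4 EQG)^(1/3)."""
    E_rec = (2.0 * ME2 * EQG) ** (1.0 / 3.0)
    return tau_li(E) / (1.0 + (E / E_rec) ** 4)

# Mock observation: intrinsic power law attenuated in the LI scenario
J_obs = 1e-11 * (E / 1e3) ** -2.2 * np.exp(-tau_li(E))

def chi2(EQG=np.inf):
    """Deabsorb assuming EQG, fit a power law, return the chi^2 of Eq. (24)."""
    t = tau_li(E) if np.isinf(EQG) else tau_liv(E, EQG)
    y = np.log(J_obs) + t                   # log of the de-absorbed spectrum
    coef = np.polyfit(np.log(E), y, 1)      # power law = line in log-log space
    resid = y - np.polyval(coef, np.log(E))
    return float(np.sum((resid / SIGMA_LOG) ** 2))

EQG_grid = np.logspace(18, 22, 200)
TS = np.array([chi2(q) - chi2() for q in EQG_grid])
limit = EQG_grid[TS > 3.84].max()           # ~95% CL crossing (chi^2, 1 dof)
print(f"toy 95% CL lower limit: E_QG,1 > {limit:.2g} GeV")
```

Since the mock data were generated in the Lorentz invariant scenario, the TS grows as *E*QG decreases, and the lower limit is read off where the profile crosses the chosen threshold.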

A peculiarity of this work lies in the fact that the same data set was used to put constraints on *E*QG via both the energy-dependent time delay and the universe transparency effects, which we will discuss in more detail in Section 4.

### 3.1.4. On How the Most Constraining Limits Were Obtained

At the time of writing this review, the best lower limits on *E*QG obtained by testing the universe transparency were those of Lang et al. [124]. The starting data sample consisted of 111 spectra from 38 different sources available in the online catalogue TeVCat, of which only a subset was used for setting constraints on LIV. In order to select only the sources relevant for probing LIV, the authors defined the attenuation *a*(*E*, *z*) as the ratio between the measured *J*meas(*E*) and the intrinsic *J*int(*E*, *z*) spectra of each source:

$$a(E, z) = e^{-\tau(E, z)} = \frac{J\_{\rm meas}(E)}{J\_{\rm int}(E, z)}\tag{25}$$

Lang et al. calculated the ratio between the attenuation assuming LIV and the attenuation in the Lorentz invariant scenario, at the maximum energy (*E*max) measured in a given spectrum. Only 18 spectra from 6 different sources, for which the *a*<sub>LIV</sub>/*a*<sub>LI</sub> ratio differed by at least 10% and which could thus be used to further constrain *E*QG, were selected for further analysis.

In general, the intrinsic spectrum is obtained via the process of so-called deabsorption, which consists in inverting Equation (17). In previous studies, deabsorption was done under the assumption of Lorentz invariance, not taking LIV into account in this step of the analysis. In order to rectify this, Lang et al. used an energy interval which they call the fiducial region. It is defined as the energy range starting at the lowest measured spectral point. The highest spectral point is the last one at which the fluxes assuming LIV and assuming Lorentz invariance are indistinguishable, considering the measurement uncertainties. Therefore, only measured spectral points from bins that satisfy the following condition were used to determine the intrinsic spectrum:

$$\frac{a\_{\rm LIV}}{a\_{\rm LI}} \le \frac{J\_{\rm meas}(E) + \rho \sigma(J\_{\rm meas}(E))}{J\_{\rm meas}(E)}\tag{26}$$

Throughout their work, the authors assumed *ρ* = 1, implying the tightest energy interval and hence leading to the most conservative limits on *E*QG. Every intrinsic spectrum (*J*int) was modeled as a power law with an exponential cut off. For each selected spectrum, the energy spectrum on Earth (*J*cal) was computed for multiple *E*QG,*n* values using

$$J\_{\rm cal} = a\_{\rm LIV} \times J\_{\rm int} \tag{27}$$

Subsequently, all computed spectra were compared with the complete measured spectra *J*meas using a log-likelihood method. Finally, the authors combined the likelihood results from all the sources to achieve the best possible sensitivity.
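The fiducial-region selection of Equation (26) and the forward-folding of Equation (27) can be sketched as follows. The spectrum, optical depths, and 15% errors are invented toy inputs, and a plain power-law fit stands in for the power law with exponential cut off used by Lang et al.:

```python
import numpy as np

RHO = 1.0   # Lang et al. adopt rho = 1 (tightest fiducial region)

def fiducial_mask(J_meas, sigma_J, a_liv, a_li):
    """Bins satisfying Eq. (26): LIV- and LI-deabsorbed fluxes agree
    within the measurement uncertainty."""
    return a_liv / a_li <= (J_meas + RHO * sigma_J) / J_meas

# Toy spectrum: 10 log-spaced bins, 0.5 .. 30 TeV, 15% errors
E = np.logspace(np.log10(5e2), np.log10(3e4), 10)       # [GeV]
tau_li = E / 1e4                                        # toy LI optical depth
tau_liv = tau_li / (1.0 + (E / 1.5e4) ** 4)             # toy subluminal recovery
a_li, a_liv = np.exp(-tau_li), np.exp(-tau_liv)
J_meas = 1e-11 * (E / 1e3) ** -2.5 * a_li               # attenuated power law
sigma_J = 0.15 * J_meas

mask = fiducial_mask(J_meas, sigma_J, a_liv, a_li)
# Fit the intrinsic spectrum on the fiducial region only (plain power law here)
coef = np.polyfit(np.log(E[mask]), np.log(J_meas[mask] / a_li[mask]), 1)
J_int = np.exp(np.polyval(coef, np.log(E)))
J_cal = a_liv * J_int                                   # Eq. (27)
print(f"{int(mask.sum())} of {mask.size} bins lie in the fiducial region")
```

`J_cal` is then compared with the complete measured spectrum, which in the actual analysis is done with a log-likelihood rather than the simple fit above.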

In their work, Lang et al. report 2*σ* confidence level lower limits on *E*QG obtained with three different EBL models, using the same procedure. The most conservative limits were derived using the EBL model by Domínguez et al. [102] and were set to *E*QG,1 > 6.85 × 10<sup>19</sup> GeV and *E*QG,2 > 1.56 × 10<sup>12</sup> GeV.

### *3.2. Constraints on Violation of Lorentz Invariance from Atmospheric Showers Initiated by Multi-TeV Photons*

The imaging technique of Cherenkov telescopes relies on recording flashes of Cherenkov light produced in the atmosphere by the ultrarelativistic particles constituting extensive air showers. When a VHE gamma ray enters the Earth's atmosphere, it is absorbed in the Coulomb field of an atomic nucleus in the air, creating an electron–positron pair. Each created particle carries approximately one half of the primary gamma ray's energy. The leptons emit additional gamma rays through *bremsstrahlung*, each of which again undergoes pair production. In this way, an electromagnetic cascade is created. This pair creation is, fundamentally, the same process as the gamma-gamma interaction in which VHE gamma rays get absorbed by the EBL. Therefore, if the gamma-gamma interaction were affected by modifying the photon dispersion relation, it would also influence the development of particle showers in the atmosphere. This interesting notion was proposed by Rubtsov, Satunin & Sibiryakov in [125] and tested on data from HEGRA and H.E.S.S. in [126].

The shower development is governed by the Bethe–Heitler (B-H) process. In particular, the depth of the first interaction in the atmosphere is exponentially distributed, with a mean value inversely proportional to the cross section [127]

$$
\sigma\_{\rm BH} = \frac{28Z^2 \alpha^3 \hbar^2 c^2}{9m\_{\rm e}^2 c^4} \left( \log \frac{183}{Z^{1/3}} - \frac{1}{42} \right). \tag{28}
$$

Here *Z* is the atomic number of the nucleus, *α* is the fine structure constant, and *m*<sub>e</sub> is the electron mass. In the LIV scenario, the cross section will not change significantly for superluminal photons, unless the threshold for photon decay is reached. However, in that case photon decay will be the dominant process, making the LIV influence on the B-H process negligible. On the other hand, if photons are subluminal, the B-H cross section becomes strongly suppressed, with the suppression factor

$$\frac{\sigma\_{\rm BH}^{\rm LIV}}{\sigma\_{\rm BH}} \simeq \frac{12 m\_{\rm e}^2 c^4 E\_{\rm QG,2}^2}{7 E\_{\gamma}^4} \log \frac{E\_{\gamma}^4}{2 m\_{\rm e}^2 c^4 E\_{\rm QG,2}^2} \tag{29}$$

As a consequence, the shower development in the LIV scenario will be impeded. The first interaction will occur deeper in the atmosphere, and the effect will be more pronounced for higher gamma-ray energies. This will lead to showers reaching their maximal sizes deeper in the atmosphere as well. The height of the shower maximum is an important parameter in IACT data analysis (see, e.g., [128]). Depending on the experimental setup and the details of the data analysis, changes in the B-H cross section might lead to the showers induced by the most energetic gamma rays being misreconstructed and excluded from further analysis. Ultimately, this will result in an apparent cut off at the high end of the spectrum.
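Equations (28) and (29) are straightforward to evaluate. The sketch below does so for nitrogen (*Z* = 7) and an illustrative 50 TeV photon with *E*QG,2 = 10<sup>11</sup> GeV; both choices are ours, not those of [126]:

```python
import numpy as np

ALPHA = 1 / 137.036            # fine structure constant
HBARC = 1.97327e-14            # hbar * c [GeV cm]
ME2C4 = (0.511e-3) ** 2        # (m_e c^2)^2 [GeV^2]

def sigma_bh(Z):
    """Bethe-Heitler pair-production cross section of Eq. (28) [cm^2]."""
    return (28 * Z**2 * ALPHA**3 * HBARC**2 / (9 * ME2C4)
            * (np.log(183 / Z ** (1 / 3)) - 1 / 42))

def bh_suppression(E_gamma, EQG2):
    """Subluminal LIV suppression factor of Eq. (29), valid when << 1.
    Energies in GeV."""
    r = ME2C4 * EQG2**2
    return 12 * r / (7 * E_gamma**4) * np.log(E_gamma**4 / (2 * r))

print(f"sigma_BH(N) = {sigma_bh(7):.2e} cm^2")
print(f"suppression at 50 TeV, EQG2 = 1e11 GeV: {bh_suppression(5e4, 1e11):.1e}")
```

For these toy inputs the suppression factor comes out at the 10<sup>−3</sup> level, i.e., the first interaction point would be pushed much deeper into the atmosphere.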

Rubtsov, Satunin & Sibiryakov applied this method to two independent measurements of the Crab nebula spectrum. The first one was obtained by the HEGRA Collaboration, based on 385 h of observations performed between 1997 and 2002 [129]. The highest energy bin in the spectrum was centered at 75 TeV. The analysis method used to compare the spectra in the Lorentz invariant versus LIV scenarios, and to determine limits on the LIV energy scale, was based on a ML method similar to the one used in [81]. They set a limit on the LIV energy scale of *E*QG,2 > 2.1 × 10<sup>11</sup> GeV. The second sample was the Crab nebula spectrum measured by the H.E.S.S. Collaboration, based on 4.4 h of observations during the flaring episode in March 2013 [130]. In this case, the spectrum was determined up to ∼40 TeV. Because of the smaller data set and lower energies reached, the result was less constraining: *E*QG,2 > 1.3 × 10<sup>11</sup> GeV.

Note that this effect would be opposite to the one caused by modified absorption of gamma rays on the EBL (described in Section 3.1). The constraints on the LIV energy scale set in the work by Rubtsov, Satunin & Sibiryakov were lower than the ones obtained based on the universe transparency to gamma rays. However, the constraints based on the universe transparency were obtained assuming that shower development, and consequently measurement with IACTs, is not modified by LIV. The work by Rubtsov, Satunin & Sibiryakov tested and validated that assumption.

### *3.3. Constraints on Lorentz Invariance Violation Based on Photon Stability*

A modifying term in the photon dispersion relation can be treated as the mass term in the (unmodified) dispersion relation of a massive particle. Assigning a mass to the photon in the superluminal scenario renders it unstable and prone to decay. A superluminal photon of energy *E*γ can:

• decay into an electron–positron pair

$$
\gamma \longrightarrow e^+ + e^-,
$$

This reaction remains kinematically forbidden as long as [131]

$$E\_{\rm QG,n} > E\_{\gamma} \left( \frac{E\_{\gamma}^2}{4m\_e^2 c^4} - 1 \right)^{1/n},\tag{30}$$

where *m*e is the electron mass.

• split into multiple photons

$$\gamma \longrightarrow N\gamma,$$

with the dominant channel being splitting into three photons [132].

While photon splitting has no reaction threshold, and is kinematically allowed for every superluminal photon, its rate (see [132]) is significantly smaller than the photon decay rate [131,133].
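Equation (30) turns any observed photon energy directly into a lower limit on *E*QG,*n*. The sketch below applies it to the 1.4 PeV LHAASO event; this is only the bare kinematic bound, without the statistical and systematic treatment of the published analyses, so it agrees with the published limits in order of magnitude only:

```python
ME2C4 = (0.511e-3) ** 2      # (m_e c^2)^2 [GeV^2]

def eqg_limit(E_gamma, n=1):
    """Lower limit on E_QG,n implied by the survival of a photon of
    energy E_gamma [GeV] against decay, from Eq. (30)."""
    return E_gamma * (E_gamma**2 / (4 * ME2C4) - 1) ** (1 / n)

E_obs = 1.4e6    # the 1.4 PeV LHAASO event, in GeV
print(f"E_QG,1 > {eqg_limit(E_obs, 1):.2g} GeV")
print(f"E_QG,2 > {eqg_limit(E_obs, 2):.2g} GeV")
```

The steep *E*<sub>γ</sub><sup>3</sup> scaling of the linear bound is why even a single PeV photon pushes *E*QG,1 far beyond the Planck energy.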

Both processes have a similar effect on the observed spectra of astronomical sources, i.e., spectra will be attenuated at higher energies. Since there is no reaction threshold for photon splitting, its attenuation happens gradually. The photon decay rate, on the other hand, quickly increases with the gamma-ray energy once the reaction threshold is reached. Consequently, it manifests as a cut off in the spectrum. The effects are similar to spectral attenuation due to gamma-ray absorption on the EBL. Therefore, in order to test for photon decay or photon splitting, one needs to exclude the possibility of EBL absorption. An obvious choice of a VHE gamma-ray source for these studies is the Crab nebula. Its spectrum reaches energies well above 100 TeV [134,135], while at the same time, because of its small distance from the Earth, EBL absorption is virtually negligible. These effects were independently tested on Crab nebula spectral measurements in several studies (see, e.g., [131,132,136]); however, the results setting substantially stronger constraints came not from any IACT experiment, but from the HAWC Collaboration [133]. The photon decay was used to constrain both the linear and quadratic terms, obtaining *E*QG,1 > 2.2 × 10<sup>22</sup> GeV and *E*QG,2 > 0.8 × 10<sup>14</sup> GeV, respectively. The photon splitting was used only for the quadratic term, resulting in a much stronger constraint than the photon decay, *E*QG,2 > 1.0 × 10<sup>15</sup> GeV. Mere moments before concluding this review, a very exciting result was published by LHAASO, announcing a detection of gamma rays with energies up to 1.4 PeV from 12 sources [15]. Based on these measurements, the LHAASO Collaboration performed a study similar to that of HAWC. They searched for a cut off in the spectra of the two sources with the highest energies, LHAASO J0534+2202 (the Crab nebula) and LHAASO J2032+4102 (the source with which the 1.4 PeV event was associated) [138]. Due to the significantly higher energies of the spectral measurements by LHAASO, the resulting constraints on the LIV energy scale were also higher. Specifically, their most constraining limits were based on the analysis of the LHAASO J2032+4102 spectrum. After including the systematic uncertainties, the limits were set to *E*QG,1 > 1.2 × 10<sup>24</sup> GeV and *E*QG,2 > 1.1 × 10<sup>15</sup> GeV when photon decay was considered. As in the HAWC analysis, only the quadratic term was constrained based on photon splitting, resulting in *E*QG,2 > 2.0 × 10<sup>16</sup> GeV.

This review focuses on studies performed with IACTs. Nevertheless, we included these results from other types of observatories for comparison purposes. In addition, similar studies could be performed with IACTs. With the prospect of the CTA being commissioned in the next few years, and a recent result from the MAGIC Collaboration measuring the Crab nebula spectrum up to 100 TeV [139], the feasibility of a similar study with IACTs does not seem so far-fetched.

### **4. Summary and Discussion**

In this section we will review and compare the results presented in the previous sections. The results are summarised and listed chronologically in Table 2. A quick glance already reveals that the constraints, both on the linear and quadratic terms, are the strongest in the case of the photon stability measurements by LHAASO [138], surpassing the Planck energy by almost five orders of magnitude in the case of the linear term. Apparently, that is the effect most sensitive to LIV. However, as already stated in Section 3.3, photon decay and photon splitting are processes only allowed in the superluminal scenario, and cannot be used to constrain *E*QG in the subluminal scenario. Nevertheless, it was important to constrain this LIV effect: possible photon decay and photon splitting compete with the modified absorption of gamma rays on the EBL. Without a confirmation of photon stability up to these substantially high values of *E*QG, it would be virtually impossible to resolve and independently search for LIV effects of modified universe transparency to gamma rays. As it happens, the second most constraining bounds were set precisely on measurements of the universe transparency to gamma rays, i.e., the gamma-gamma interaction. The most notable result was obtained by Lang et al. [124] through a simultaneous analysis of the spectra of several AGNs. As already argued, combining different sources should wash out the dependence on the properties of any given source and, therefore, help to lift the degeneracy between LIV and source intrinsic effects. The main characteristics of a desirable source for this method are a large source distance and a high maximal spectral energy measurement. Combining various sources in a single study enables the most pronounced characteristic of each source to be fully exploited. However, the method heavily relies on EBL modeling and on the assumptions made about the intrinsic source spectra. Uncertainties of EBL models, as well as discrepancies between the different models, are the predominant source of systematic effects. Lang et al. considered this uncertainty by presenting the results of their analysis using three different EBL models. In this review, we presented the most conservative result.



A second look at Table 2 clearly shows that analyses based on photon interactions result in stronger constraints on *E*QG than the ones based on photon time of flight. A study by the H.E.S.S. Collaboration was the only one so far in which both tests were performed on the same data set [81]. Granted, the two analyses were performed independently, the time of flight test ignoring modified gamma-ray absorption and vice versa. Nevertheless, it allowed a more direct comparison of the two effects of LIV. The constraints based on the absorption of gamma rays on the EBL were more stringent by one order of magnitude on the quadratic term, and by two orders of magnitude on the linear term, compared to the constraints based on the energy-dependent time delay. Apart from the source distance and the detected energy, when it comes to time delay studies another important property comes into play: fast variability of the flux is crucial in constraining the emission times of individual photons (see Table 1). It should be noted that, while changes in flux do not strongly interfere with analyses based on EBL absorption, it is extremely important that the spectrum remains constant. A change in the spectrum would introduce additional uncertainties and hence lower the analysis sensitivity. Therefore, one may argue that a faster flux variability is needed to increase the sensitivity of time of flight tests. However, even the most sensitive of the time of flight analyses ([38] for the linear, and [81] for the quadratic term) are below the sensitivities of analyses based on photon interactions, suggesting that (assuming the same *E*QG) LIV affects interactions more strongly than it does the photon group velocity. So, is there a point in testing the energy-dependent photon group velocity, when much stronger bounds on *E*QG were set through tests of modified photon interactions? As we pointed out in Section 1.2, the theory of QG has not been formulated yet.
Consequently, we do not know what the effects of QG are. It is quite possible that the photon group velocity is affected by LIV, while interactions remain unaffected, or the other way around. Another possibility is that both interactions and propagation are affected, but on different scales, effectively introducing separate and different values of *E*QG. It should be remembered that the modified dispersion relation (Equation (1)) is merely a mathematical model facilitating experimental tests of LIV, and, in the most general case, different values of *E*QG are applicable in different cases. Yet another imaginable scenario is that both subluminal and superluminal behaviours occur as different effects of QG. In that case, they might start to manifest at different energy scales, resulting in an even more complex expression for the photon group velocity (Equation (2)).

Considering the time delay studies alone, we observe a gradual increase of the lower limits on *E*QG. Of course, a study declaring a stricter limit (or a detection, for that matter) is more likely to be published; however, we would like to argue that this improvement is the result of several circumstances: (i) improvements in detector performance allowed observations of sources at larger redshifts with better sampling, (ii) longer operations increased the probability of observing transient phenomena such as flaring AGNs and GRBs, as well as the statistics accumulated on observations of pulsars, and (iii) analysis techniques, the ML in particular, have been refined over time, allowing for higher analysis sensitivity. A notable exception is the result obtained on the observation of GRB 090510 with *Fermi*-LAT. Published in 2013, the study by Vasileiou et al. [38] still holds the record for the most stringent bound on *E*QG,1. Let us analyse where this sensitivity came from. As conveyed in Table 1, the sensitivity of time of flight analyses increases with the energy of the gamma rays within the sample (*E*max) and the redshift of the source (*z*s), and decreases with the timescale of flux variations (*t*var). GRB 090510 satisfies the second and third criteria well. The data set used in the analysis was taken from a time interval of less than 5 s (see Figure 1 in [38]), significantly shorter than any other sample covered in this review. At the same time, the redshift of ∼0.9 is more than two times larger than that of the second furthest source considered in Table 2. Apparently, the downside of the relatively low highest energy in the sample (31 GeV) is more than compensated by these two advantageous characteristics. In the case of the quadratic term, the relationship between *E*max, *t*var, and *z*s is somewhat different.
*E*QG is still inversely proportional to the variability timescale; however, *t*var now enters through a square root, decreasing its influence on the analysis sensitivity. Furthermore, *E*QG depends on the source redshift as *z*s<sup>∼2/3</sup> (in contrast to *z*s<sup>∼1</sup> in the case of the linear term). Given that all sources are at redshifts smaller than 1, the influence of redshift is weaker on the quadratic term. The weaker influences of variability and redshift make more room for the influence of *E*max. Now, the tables have turned. The bound on *E*QG,1 set using the MAGIC observation of GRB 190114C [83] was only about a factor 4 (7) below the limit from GRB 090510 for the subluminal (superluminal) behaviour. In the quadratic term, the influence of the highest gamma-ray energy is more pronounced than in the linear term. So, the constraint on *E*QG,2 became more stringent because of the 2 TeV photons in the GRB 190114C sample. Actually, regarding the quadratic case, the strongest constraint came from the H.E.S.S. analysis of the AGN Mrk 501, due to ∼20 TeV photons in the sample [81].
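The interplay of energy and redshift discussed above can be sketched numerically. The snippet below follows the commonly adopted Jacob & Piran parametrisation of the subluminal delay; the cosmological parameters are illustrative round values, not necessarily those adopted in the cited papers.

```python
import numpy as np

# Sketch of the approximate LIV-induced time delay in the commonly used
# Jacob & Piran parametrisation (subluminal case). Cosmological parameters
# below are illustrative round values.

H0 = 2.2e-18               # Hubble constant [1/s] (~68 km/s/Mpc)
OM, OL = 0.31, 0.69        # matter and dark-energy density parameters
E_PLANCK = 1.22e19         # Planck energy [GeV]

def liv_delay(E_GeV, z, E_QG_GeV=E_PLANCK, n=1):
    """Arrival delay [s] of a photon of energy E_GeV from redshift z,
    relative to a low-energy photon, for a linear (n=1) or quadratic
    (n=2) modification at scale E_QG_GeV."""
    zp = np.linspace(0.0, z, 1001)
    f = (1.0 + zp) ** n / np.sqrt(OM * (1.0 + zp) ** 3 + OL)
    kappa = float(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(zp)))  # trapezoid
    return (1 + n) / (2.0 * H0) * (E_GeV / E_QG_GeV) ** n * kappa

# A 31 GeV photon from z ~ 0.9 (the GRB 090510 case) with E_QG at the
# Planck scale would lag by roughly a second:
print(f"{liv_delay(31.0, 0.9):.2f} s")
```

A delay of this order is detectable only because the relevant flux variations in GRB 090510 were themselves shorter than a few seconds, which illustrates why *t*var enters the sensitivity so directly.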

The emission time from pulsars is excellently constrained, with variability timescales down to *t*var∼10 ms. Furthermore, gamma rays from pulses are detected up to ∼7 TeV. Unfortunately, their relative proximity renders LIV analyses on pulsars comparatively less sensitive, in particular in the linear scenario. On the other hand, in the quadratic scenario, their fast variability gives them a huge advantage, making them competitive sources in the search for LIV effects. Moreover, contrary to flaring AGNs or GRBs, new observations of pulsars such as the Crab do not depend on luck, but can be carefully planned. This continuous accumulation of new data can lead to a predictable increase of statistics and therefore improved sensitivity of the analysis. This is particularly interesting as the current ML analyses are still mainly limited by background fluctuations and systematics, such as the pulse shape and its energy evolution in the case of the Crab pulsar. In [77], the authors stated that a total data set of ∼2000 h on the Crab pulsar is within reach for the MAGIC collaboration alone, given the regular observations performed for calibration purposes. This data set could be further enlarged by including the data accumulated on the Crab pulsar by H.E.S.S. and VERITAS (see also Section 5.2), with the potential of addressing some of the main limitations of the current analyses mentioned above, and thus exploring QG scales beyond the current best limits, in particular in the quadratic scenario. Lastly, only pulsars with periods of a few tens of milliseconds have been detected with IACTs so far. Detection of VHE gamma rays from pulsars with periods down to a few milliseconds would allow constraining the emission time more strongly. This would lead to an improvement of the current limits on *E*QG by an order of magnitude, further exploiting the potential of these sources in the search for LIV.

### **5. An Eye on the Future**

In the previous sections, we discussed the evolution of the search for LIV effects with IACTs. The studies we reported on have set strong constraints on the LIV energy scale and significantly restricted the parameter space. More importantly, in those works, diverse ideas were proposed, various analysis methods were developed, and different effects investigated. Ever since the first LIV study with an IACT, numerous improvements have been brought to the field, both technical and analytical, and there is no reason to expect this trend to stop. The most important question at this point is where we go from here. What can we do to accelerate the research and contribute to the understanding of QG? In this section, we try to outline some ideas on what the next steps in that regard could be. Hopefully, this will motivate the reader to perform some of the proposed research. We certainly intend to take our part in this endeavour.

### *5.1. Refinement of the Analysis Technique*

In Section 2.3, we presented the state-of-the-art ML method for testing the energy-dependent photon group velocity. Here, we will discuss some possibilities for improving the method and increasing the analysis sensitivity.

Firstly, let us return to the definition of the likelihood function (Equation (10)) and in particular how individual events are selected and weighted. Note that the probability for each event to be a part of the signal, as defined in Equation (12), is the same for every event.

The same is true for the probability of being a part of the background. Moreover, in order to reduce the background, the IACT data analysis involves applying cuts on the parameters describing event properties. This approach inevitably leads to cutting out some of the signal events as well. As we have seen, most of the LIV analysis methods rely on individual events. Therefore, cutting out signal events reduces the analysis sensitivity. However, in a recent paper, D'Amico et al. [140] proposed an alternative IACT data analysis method, which considers all events in the ON region without applying any cuts. Instead, PDFs for the signal and the background are calculated based on the parameters of individual events. This method could, therefore, be used to calculate *p*<sub>*i*</sub><sup>(s)</sup> and *p*<sub>*i*</sub><sup>(b)</sup> as PDFs, without discarding any events, whether of signal or background origin.

Secondly, the real strength of the ML method lies in its modularity, meaning that the components of the likelihood function listed in Section 2.3 can be refined, and additional terms describing nuisance parameters can be added without limitations. Furthermore, likelihoods from individual targets can easily be combined in a joint likelihood (see Section 5.2). For example, in case the intrinsic spectrum changes with time, the function Φint(*E*) can be generalised to Φint(*E*, *t*). Additionally, by taking the product *F*(*t* + *ηnEn*)Φint(*E*) in Equation (11), we assumed that the emission time *t* of individual photons does not depend on their energy. Indeed, in the LIV studies performed so far, there was no strong evidence of changes of spectral shape during AGN flaring episodes, nor in GRBs, in the VHE gamma-ray range. Nevertheless, waiving this simplification might increase the sensitivity of the ML analyses. Present-day state-of-the-art models used to describe emission from astrophysical sources are nowhere near refined and accurate enough to predict the exact emission time of each particular photon. Hopefully, future emission models will be precise enough to allow creating emission light curve templates simultaneously depending on emission time and energy. For example, instead of having independent temporal and spectral distributions as *F*(*t* + *ηnEn*)Φint(*E*), we could take a two-dimensional distribution *F*2D(*t* + *ηnEn*, *E*) to account for potential source-intrinsic delays. Therefore, further progress in the modelling of the sources' emission mechanisms, such as initiated in [45], will certainly play an important role in future LIV studies.
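The modular signal model can be sketched numerically. The snippet below is a toy illustration, not the collaborations' implementation: `F` is a hypothetical Gaussian light-curve template, the observation window and all parameter values are made up, and background and instrument-response terms of Equation (11) are omitted.

```python
import numpy as np

# Toy sketch of the ML signal model F(t + eta*E**n): the arrival-time PDF
# of a photon of energy E is the emission template evaluated at the
# LIV-shifted time, normalised over the observation window.
# Template, window, and parameter values are illustrative only.

def F(t):
    """Hypothetical Gaussian light-curve template (peak at t = 100 s)."""
    return np.exp(-0.5 * ((t - 100.0) / 20.0) ** 2)

def signal_time_pdf(t, E, eta, n=1):
    """PDF of observed arrival times t [s] for photons of energies E [TeV],
    for a LIV coefficient eta [s/TeV^n]; normalised event by event."""
    window = np.linspace(0.0, 300.0, 3001)
    shift = eta * np.atleast_1d(E) ** n
    vals = F(window[None, :] + shift[:, None])        # (N_events, N_grid)
    norms = np.sum((vals[:, 1:] + vals[:, :-1]) / 2.0 * np.diff(window), axis=1)
    return F(np.atleast_1d(t) + shift) / norms

def neg_log_likelihood(eta, times, energies, n=1):
    """Signal-only negative log-likelihood; background terms omitted."""
    return float(-np.sum(np.log(signal_time_pdf(times, energies, eta, n))))
```

Scanning `eta` and minimising `neg_log_likelihood` recovers an injected delay in a toy data set; in the real analyses the likelihood additionally carries background, spectral, and instrument-response terms.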

### *5.2. Combining Data from Different Sources and Instruments*

We have already seen in the example of gamma-ray absorption (see Sections 3.1.2–3.1.4) how a combination of several sources in a single study improved the sensitivity of the analyses. An equivalent approach is still to be fully applied when testing the photon group velocity. Indeed, a combination of sources observed at a wide range of redshifts is the key to disentangling intrinsic source effects from a real LIV effect. A source-intrinsic energy-dependent photon emission time can mimic an effect of LIV, leading to a misinterpretation as an energy-dependent time of flight. Alternatively, intrinsic effects could have the same magnitude in the opposite direction from the LIV effect, cancelling it and preventing a detection; a scenario known as a conspiracy of nature. Nonetheless, a LIV effect, if it exists, should be present in all observational data, and directly depend on the distance of a source. A combination of sources at different redshifts would therefore mitigate the contribution of source-intrinsic effects. Furthermore, emission from different types of sources is a result of different physical processes, and subject to different emission dynamics, so combining different types of sources could limit the contribution of source-intrinsic effects even further. These two factors, combining data from different types of sources and from observations at different redshifts, are instrumental in the development of a significantly more robust LIV analysis.
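The disentangling argument can be illustrated with a toy fit: if each source yields a measured per-energy delay, the redshift-dependent LIV component can be separated from a (roughly common) intrinsic component by regressing against the LIV distance factor. All names and numbers below are hypothetical.

```python
import numpy as np

# Toy decomposition of measured delays into a LIV part, scaling with a
# redshift-dependent distance factor kappa(z), and a common intrinsic
# delay b: tau_i = eta * kappa_i + b. Hypothetical values throughout.

def fit_liv_vs_intrinsic(kappa, tau):
    """Least-squares fit of tau = eta*kappa + b; returns (eta, b)."""
    A = np.vstack([kappa, np.ones_like(kappa)]).T
    (eta, b), *_ = np.linalg.lstsq(A, tau, rcond=None)
    return float(eta), float(b)

# Five toy sources: true eta = 2.0 s, common intrinsic delay b = 0.5 s
kappa = np.array([0.1, 0.3, 0.5, 0.8, 1.0])
tau = 2.0 * kappa + 0.5
print(fit_liv_vs_intrinsic(kappa, tau))   # ~(2.0, 0.5)
```

A single source cannot separate the two terms; it is the spread in *κ*(*z*), i.e., in redshift, that breaks the degeneracy, which is the point made above.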

The likelihood function can be relatively easily extended to consider additional data sets from the same or other sources. Note that a joint likelihood can be constructed from the product of individual likelihoods, each of them described by Equation (10). This single analysis of multiple data sets (sharing the same LIV parameter *ηn*) is possible as long as each data set is accompanied by its own set of functions (light curve template, spectral distribution, acceptance, energy resolution, etc.) describing its particularities and observation conditions.
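Schematically, the joint fit amounts to summing per-data-set negative log-likelihoods that share a single *ηn*. In the sketch below, each callable stands in for a fully specified per-set likelihood; the parabolas are placeholders, not real data.

```python
# Hedged sketch: a joint likelihood over several data sets sharing the same
# LIV parameter eta is the product of the individual likelihoods --
# equivalently, the sum of their negative log-likelihoods. Each data set
# would carry its own template, spectrum, and instrument response.

def joint_neg_log_likelihood(eta, datasets):
    """datasets: list of per-set negative log-likelihood callables."""
    return sum(nll(eta) for nll in datasets)

# Minimal usage with toy parabolic per-set likelihoods (illustrative only):
toy_sets = [lambda eta: (eta - 1.0) ** 2, lambda eta: 2.0 * (eta - 3.0) ** 2]
best = min((joint_neg_log_likelihood(e / 100.0, toy_sets), e / 100.0)
           for e in range(-500, 501))
print(best[1])   # the combined minimum lies between the individual minima
```

Note how the more constraining (steeper) data set pulls the combined estimate towards its own minimum, exactly as a more sensitive observation would in a real joint analysis.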

Recently, an inter-experiment collaboration has been formed. The so-called LIV Consortium assembles researchers from all three currently operating IACT experiments (MAGIC, H.E.S.S., VERITAS) with the goal of combining data from different sources. In addition, the LIV Consortium is working on unifying observational data from the different IACT facilities. This will immediately give access to a notably larger pool of sources than what has been used so far in LIV studies with IACTs. Furthermore, combining different instruments in a single analysis, with a particular consideration of individual instrument response functions, is expected to decrease systematic uncertainties. Finally, such a combination effort provides the necessary environment to harmonise the details of the analysis. As we have previously seen, analysis techniques can slightly differ from one experiment to the other. Combining individual best practices will further contribute to the research efficiency. Therefore, a combination of observations from all three currently operating IACTs will provide a major improvement in the constraints on the LIV energy scale, not so much in the face value of the limits, but, more importantly, in the robustness of the results. The work done by the LIV Consortium can also be regarded as a preparatory activity for the CTA era, which we discuss below. Preliminary results have been presented at conferences [141], while the final results are expected soon.

Another improvement in the direction of combining instruments would be to extend the data samples to lower and higher energies, e.g., with *Fermi*-LAT for the MeV–GeV energy range and HAWC or LHAASO for energies up to the PeV range. These experiments provide information complementary to that of IACTs. It thus makes sense to combine the observations from these experiments to further increase the sensitivity of tests of LIV on gamma rays. The possible benefits are quite tantalising, although such an endeavour would be far from trivial, especially when it comes to the treatment of different instrumental effects.

However, the near future holds the prospect of the CTA, which will be an order of magnitude more sensitive than any of the existing Cherenkov telescopes [91]. Combined with a large number of telescopes, located in both hemispheres, the CTA will cover larger portions of the sky and observe more sources. This is particularly noteworthy for transient events, such as GRBs and AGN flares, and less bright sources, such as pulsars, all of which are essential for LIV studies. So far, only one LIV study was performed on a GRB observed with an IACT, and only a total of four GRBs have been observed with IACTs until now. The CTA is expected to improve on these statistics. In Section 2.13, we discussed how Vasileiou et al. considered four GRBs in their study. GRB 090510, although located at the lowest redshift, yielded constraints on *E*QG stronger by a factor of 2–20 in the linear and 3–15 in the quadratic scenarios, compared to the results from the other three GRBs [38]. This was achieved due to a combination of the highest energy in the sample and the fastest variability. In Section 4, we compared the results obtained on GRB 090510 to the ones from GRB 190114C. The former was a short GRB, with more than double the redshift of the latter. The signal from GRB 190114C was detected at two orders of magnitude higher energies; however, the MAGIC telescopes detected mostly the afterglow phase. With the CTA, we hope to detect the prompt emission as well, which is expected to be a more variable phase of GRBs. Furthermore, the CTA will extend the range of accessible energies both to lower (down to 20 GeV) and higher bands (up to 300 TeV). The highest energies detectable with the CTA will be almost an order of magnitude higher than the ones accessible with the currently operating IACTs. On the other hand, extending the observation window to lower energies will grant access to gamma rays not absorbed by the EBL, enabling observations of sources at higher redshifts.
Referring again to Table 1, a higher redshift of a source improves the sensitivity to the LIV energy scale proportionally to the redshift in the linear term, and somewhat less in the quadratic contribution. Alas, these two improvements cannot be combined: gamma rays with the highest energies, emitted from the highest redshifts, will be absorbed by the EBL before reaching Earth, so only one of these advantages will be accessible at a time. Nevertheless, there is an improvement coming from the wide energy range itself. Whichever effect of LIV is tested for (time of flight, universe transparency, etc.), the flux at the highest energies is compared to what is assumed to be the intrinsic emission. The latter is estimated from observations in the lower energy bands. LIV, if real, affects the gamma rays in the lower energy band as well as the most energetic events. Granted, the effect is smaller in the low energy band, but still present. Assuming that low energy photons are unaffected by LIV induces a bias and, ultimately, decreases the sensitivity of analyses. The bias will be smaller for a wider gap between the two energy bands used. Observations in the lower energy band can be obtained either using the same instrument, as in the case of, e.g., PKS 2155-304 (see [67] and Section 2.6), or from other instruments combined with theoretical inferences, as in the case of GRB 190114C (see [83] and Section 2.12). Ideally, observations in both energy bands would be performed with the same instrument covering a wide range of observable energies; the former aspect would reduce possible systematic effects, while the latter would decrease the potential LIV-induced bias. The CTA, with its range of accessible energies spanning more than four orders of magnitude, will answer this need. While it is difficult to predict the light curves and spectra that the CTA will observe from GRBs, we can draw some conclusions by extrapolating the case of GRB 090510 to higher energies.
Assuming that gamma rays emitted during GRB prompt phases can reach energies as high as a few hundred GeV to a few TeV, the sensitivity to LIV effects would increase by one to two orders of magnitude. Even if we have to settle for lower redshifts, thus somewhat reducing the sensitivity (see Table 1; e.g., for *z* = 0.3 the loss of sensitivity is at most a factor of ∼3 compared to *z* = 0.9, the redshift of GRB 090510), the gain would still be substantial. A similar reasoning can be applied to studies based on spectral analysis. By lowering the detection energy threshold, the intrinsic spectra will be more precisely determined at the lower energy end, leading to better spectral fitting, and thus decreasing the uncertainties on the LIV energy scale. The CTA Consortium has published a projection of the CTA's capabilities for probing fundamental physics, including LIV [120]. The study was limited to the universe transparency to gamma rays. The authors estimated that the CTA will be able to probe a LIV energy scale a factor of two to three higher than the most sensitive studies published so far, whether based on a single source or combining several sources.

### *5.3. Additional and Alternative Lorentz Invariance Violation Effects and Related Phenomena*

Apart from the numerous interesting studies described in the previous sections, there is still a plethora of LIV-induced effects and related phenomena which have not been studied with IACT data. We will briefly mention those here.

Firstly, the energy-dependent arrival time delay between two simultaneously emitted photons from the same source is calculated from Equations (3) and (4). These were derived assuming comoving trajectories of the photons and their respective energy-dependent velocities. As mentioned in Section 2, one could start the derivation from a modified general relativistic dispersion relation [27], or modify spacetime translations with the photon dispersion relation [28], and obtain different results for the photon time of flight. It would be interesting to investigate the differences between these models on IACT data.

Secondly, all the tests of energy-dependent photon group velocity performed on IACT data were based on Equation (2), which is deterministic in the sense that it assumes that all photons of the same energy will propagate with the same group velocity. However, there are models which propose fluctuations of the photon group velocity as a consequence of fluctuations of the spacetime foam [142]. This phenomenon, often referred to as stochastic LIV, models the photon group velocity as [143]:

$$
v\_{\gamma}(E) = c + \delta v\_{\gamma}(E),\tag{31}$$

where the velocity modification *δvγ*(*E*) is randomly distributed according to a normal distribution with zero mean, and a standard deviation given by:

$$
\sigma\_n(E) = c \frac{1+n}{2} \left(\frac{E}{E\_{\text{QG},n}}\right)^n. \tag{32}
$$

The stochastic LIV was tested on the *Fermi*-LAT observation of GRB 090510 [143]. Only the linear term was constrained, to *E*QG,1 > 3.4 × 10<sup>19</sup> GeV. So far, no similar study has been performed on IACT observations. As pointed out by Bolmont in [144], distant pulsars with sharp pulsation peaks would be excellent probes of stochastic LIV.
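A Monte Carlo sketch of Equations (31) and (32) makes the observable signature explicit: each photon's velocity is smeared by a normal deviate of width *σn*(*E*), so the arrival-time distribution of an intrinsically sharp pulse broadens. The distance and energy below are illustrative, and cosmological propagation is ignored.

```python
import numpy as np

# Toy Monte Carlo of stochastic LIV (Equations (31)-(32)): velocities are
# drawn from N(0, sigma_n(E)) around c, broadening the arrival times.
# E_QG is set to the order of the Fermi-LAT limit [143]; the distance and
# energy in the usage line are illustrative, and cosmology is ignored.

C = 299792458.0            # speed of light [m/s]
E_QG = 3.4e19              # GeV

def sigma_n(E_GeV, n=1):
    """Standard deviation of the velocity modification (Equation (32))."""
    return C * (1 + n) / 2.0 * (E_GeV / E_QG) ** n

def arrival_time_spread(E_GeV, distance_m, n=1, n_photons=100_000, seed=0):
    """RMS arrival-time spread [s] of monoenergetic photons."""
    rng = np.random.default_rng(seed)
    dv = rng.normal(0.0, sigma_n(E_GeV, n), n_photons)
    # First-order delay relative to speed-c propagation; computing
    # distance/(C + dv) directly would be lost to double-precision rounding.
    dt = -distance_m * dv / C ** 2
    return float(np.std(dt))

# 30 GeV photons over ~1 Gpc accumulate an RMS spread of ~0.1 s:
print(arrival_time_spread(30.0, 3.1e25))
```

A spread of this order would smear a ∼10 ms pulsation peak beyond recognition, which is why sharp, distant pulsars are such promising probes of this scenario.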

A substantial portion of this work was dedicated to the effects LIV has on photon interactions. An important process in astrophysical sources of gamma rays is (inverse) Compton scattering. Abdalla & Böttcher analysed a possible influence of LIV on Compton scattering in [98] and concluded that LIV signatures were expected to be important only for incoming gamma-ray energies above ∼1 PeV. While in light of the recent results from LHAASO (see Section 3.3) this effect might draw some attention, Abdalla & Böttcher also concluded that, even in the superluminal LIV scenario, the Klein–Nishina cross section would still be strongly dominated by the Thomson cross section, while in the subluminal scenario, the Klein–Nishina cross section would be even more strongly suppressed. The overall conclusion was that a LIV-modified Compton scattering was not likely to be relevant in realistic astrophysical situations.

We would also like to point out that all investigations of the universe transparency for gamma rays were performed on photons of energies up to 100 TeV. Remembering again the LHAASO detection of a photon of ∼1 PeV, we are strongly encouraged to extend our tests to higher gamma-ray energies. As we have demonstrated in Section 3.1, and in particular in Figure 1, this will extend the photon target field to the CMB, which is substantially denser than any of the EBL components. Moreover, in Figure 2, we have shown how the shape of the Breit–Wheeler cross section deforms in the LIV subluminal scenario. These deformations were not significant for gamma rays up to ∼100 TeV, but might play an important role at higher gamma-ray energies.

As previously mentioned, a study done by the H.E.S.S. Collaboration [81] is the only one in which time of flight and universe transparency tests were performed on the same data set (see Sections 2.11 and 3.1.3). However, these two effects were tested independently of each other. In fact, there has never been a study that combined two different LIV effects. We have argued in Section 4 why strong limits based on one LIV effect do not necessarily constrain other effects. For example, limits on *E*QG based on the universe transparency do not apply to the photon group velocity; excluding one effect does not automatically exclude all LIV effects. Nonetheless, considering VHE gamma rays from astrophysical sources, it seems natural to wonder what the net observational result would be if several LIV effects were present. Namely, in Equation (11), the term *F*(*E*) takes into account propagation effects, such as EBL absorption, but under the assumption that photon interactions are not affected by LIV. A combined-effect study would presume that both the photon group velocity and photon interactions are affected by LIV. Different effects could manifest on the same, or on different, energy scales. Since there is no fully formulated theory of QG, and we do not know what the effects of QG are, this test is well justified. The implementation of such a test could be realised by modifying terms in the signal PDF in Equation (11). This might present a significant challenge, especially a computational one when maximising the likelihood function, in particular if an independent value of *E*QG is associated to each considered LIV effect.

Finally, there are possible effects of LIV which have not been mentioned earlier, such as vacuum birefringence, vacuum dispersion, and vacuum anisotropy [145]. Vacuum birefringence implies that the polarisation vector of a linearly polarised photon will rotate depending on the energy of the photon. As a consequence, a linearly polarised signal from an astrophysical source will be depolarised by the time it reaches Earth. Therefore, measurements of the polarisation degree of a signal would constrain the vacuum birefringence effect (see, e.g., refs. [146,147] for studies performed on optical observations, and refs. [148–150] for studies performed on hard X-ray – soft gamma-ray observations). These results are substantially more constraining than the tests based on modified photon kinematics. The ratio of sensitivities of the birefringence tests and the time of flight tests, performed on the same data sample, is proportional to the energy of the photons in the sample, as discussed by Kostelecký & Mewes in [151]. As they put it descriptively, in order to achieve a sensitivity in the time of flight tests comparable to the birefringence tests, one would need to achieve a timing resolution on the order of the inverse frequency of the photons. Even if this were instrumentally feasible (which it is not), the measurement precision would be spoiled by the inability to constrain the photon emission times. Unfortunately, the IACT detection technique does not allow for measurements of gamma-ray polarisation. Recently, a novel satellite-based gamma-ray detector was proposed, which would be capable of measuring the polarisation [152] of gamma rays in a lower energy band. Unfortunately, the proposal was not accepted, but there is hope it will be selected for some future space mission, or that a similar concept such as AMEGO<sup>28</sup> will be realised. X-ray polarimetry will certainly gain with the soon-to-be-launched IXPE<sup>29</sup>.

Given that IACTs cannot measure the polarisation of gamma rays, a broad discussion of birefringence would be out of the scope of this work. Nevertheless, considering the strong constraints on LIV based on birefringence tests, we should note that these do not render time of flight tests useless. Indeed, already from theoretical considerations it is clear that in some cases an energy-dependent photon group velocity does not imply vacuum birefringence [151] (see also [144] for a critical discussion). Therefore, strong limits on vacuum birefringence do not necessarily constrain the energy dependence of the photon group velocity. Furthermore, while IACTs are not suited for tests of photon polarisation, one should keep in mind that, as remarked in [74], vacuum anisotropy could be constrained by performing LIV tests on sources in different directions. Hence, we should not be satisfied with setting stronger and stronger constraints on the LIV energy scale only; we should also strive towards building rich statistics of sources located in different directions, as well as at different redshifts, as we have already argued. Again, the CTA is expected to contribute considerably in this respect.

### **6. Conclusions**

Most investigators agree that there is a fundamental, quantum description of gravity. Though not formulated yet, the theory of QG is expected to resolve what happens in extreme gravitational potentials, such as the singularities within black holes predicted by the general theory of relativity, or in the early universe, but also to push us in the direction of formulating the next unification theory describing all interactions. The expected realm of QG is the Planck scale, far above the reach of any physical laboratory present today, or any experiment envisaged in the near future. Nevertheless, some investigators believe even VHE gamma rays from astrophysical sources would feel minuscule effects of QG. These effects, consequences of the so-called LIV, would manifest as the photon group velocity deviating from the Lorentz invariant speed of light *c*, or as modified photon interactions. Given the cosmological distances gamma rays cover to reach Earth, there is hope that these effects would accumulate sufficiently to be detected with gamma-ray detectors.

In this review, we presented all experimental tests of LIV performed with IACTs. We followed the historical development of the field; however, in order to avoid confusion, we first covered tests of the energy-dependent photon group velocity, and then tests of modified photon interactions. A strictly chronological overview of the results was given in Table 2. Twenty-four years have passed since the first proposal that gamma rays emitted from astrophysical objects could be used to search for effects of QG. In fact, the proposal singled out GRBs as the gamma-ray sources to be used for these tests. Only two years later, the first study, using the IACT Whipple, was published on data from the blazar Mrk 421. Meanwhile, twenty more years had to pass before we were able to test LIV on GRB 190114C observed by MAGIC. So far, no significant violation of the Lorentz symmetry has been detected. However, in the past 22 years since the first experimental result was published, quite strict bounds were set on the level of LIV. In particular, considering the linear term in the modified photon dispersion relation (Equation (1)), the lower limit on *E*QG,1 has surpassed the Planck energy. This is especially true when photon interactions are considered. While the expected energy level of QG is indeed the Planck scale, there is no strictly defined interval for *E*QG, so the tests will continue. When it comes to the quadratic term in Equation (1), there is still quite some parameter space of *E*QG,2 to be investigated, both considering the photon group velocity and photon interactions; in Section 4, we discussed why it is important to test for all possible effects, as well as for both subluminal and superluminal scenarios.

We hope to have demonstrated that IACTs played an important role in the search for LIV and in setting strong constraints on its energy scale. More importantly, we tried to argue that IACTs still have much to say in this field, and that observations with these instruments will ultimately lead to either detecting LIV, or confirming that the universe remains Lorentz invariant up to trans-Planckian energies. In Section 5, we presented our vision of the future development of this field. It goes without saying that we have great expectations of future facilities, in particular the CTA, which promises the detection of several GRBs each year. However, new instruments will not suffice on their own; additional improvements and development of analysis techniques will be necessary. In addition, there are hypothesised phenomena (e.g., stochastic LIV) which have not been tested for yet. Obviously, we have quite some work cut out for us. Whether we detect some effect of LIV, or it turns out that the Lorentz symmetry is perfectly preserved, one thing is for sure: exciting times are ahead.

**Author Contributions:** All authors contributed equally to the writing. All authors have read and agreed to the published version of the manuscript.

**Funding:** T.T. and J.S. acknowledge funding from the University of Rijeka, project number 13.12.1.3.02. T.T. also acknowledges funding from the Croatian Science Foundation (HrZZ), project number IP-2016-06-9782. D.K. acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 754510, from the ERDF under the Spanish Ministerio de Ciencia e Innovación (MICINN), grant PID2019-107847RB-C41, and from the CERCA program of the Generalitat de Catalunya.

**Acknowledgments:** We would like to thank G. Bonnoli, C. Levy, M. Martínez, H. Martínez-Huerta, D. Paneque, and F. Tavecchio for fruitful discussions, and verification of historical facts. The authors acknowledge networking support by the COST Action CA18108.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following acronyms and abbreviations are used in this manuscript:

**AGN** active galactic nucleus

**B-H** Bethe–Heitler

**CMB** cosmic microwave background

**CTA** Cherenkov Telescope Array

**C.U.** Crab units

**CWT** Continuous wavelet transform

**DisCan** dispersion cancellation

**DSR** Doubly Special Relativity

**EBL** extragalactic background light

**ECF** Energy cost function

**FACT** First G-APD Cherenkov Telescope

**GRB** Gamma-ray burst

**GBM** Gamma-ray Burst Monitor

**HAWC** High Altitude Water Cherenkov

**HEGRA** High Energy Gamma Ray Astronomy

**H.E.S.S.** High Energy Stereoscopic System

**IACT** imaging atmospheric Cherenkov telescope

**LAT** Large Area Telescope

**LHAASO** Large High Altitude Air Shower Observatory

**LIV** Lorentz invariance violation

**MAGIC** Major Atmospheric Gamma Imaging Cherenkov

**MCCF** Modified cross correlation function

**MD** minimal dispersion

**ML** maximum likelihood

**PC** peak comparison

**PDF** probability distribution function

**PV** PairView

**RB** radio background

**QG** Quantum Gravity

**SMM** Sharpness maximisation method

**TS** test statistic

**VERITAS** Very Energetic Radiation Imaging Telescope Array System

**VHE** very high energy (100 GeV < *E* < 100 TeV)

### **Notes**


### **References**


### *Review* **Axion-like Particle Searches with IACTs**

### **Ivana Batković 1,2,\*, Alessandro De Angelis 1,2,3, Michele Doro 1,2 and Marina Manganaro <sup>4</sup>**


**Abstract:** The growing interest in axion-like particles (ALPs) stems from the fact that they provide successful theoretical explanations of physics phenomena, from the anomaly of the *CP*-symmetry conservation in strong interactions to the observation of an unexpectedly large TeV photon flux from astrophysical sources, at distances where the strong absorption by the intergalactic medium should make the signal very dim. In this latter condition, which is the focus of this review, a possible explanation is that TeV photons convert to ALPs in the presence of strong and/or extended magnetic fields, such as those in the core of galaxy clusters or around compact objects, or even those in the intergalactic space. This mixing affects the observed *γ*-ray spectrum of distant sources, either by signal recovery or the production of irregularities in the spectrum, called 'wiggles', according to the specific microscopic realization of the ALP and the ambient magnetic field at the source, and in the Milky Way, where ALPs may be converted back to *γ* rays. ALPs are also proposed as candidate particles for the Dark Matter. Imaging Atmospheric Cherenkov telescopes (IACTs) have the potential to detect the imprint of ALPs in the TeV spectrum from several classes of sources. In this contribution, we present the ALP case and review the past decade of searches for ALPs with this class of instruments.

**Keywords:** axion-like particles; gamma-rays; IACTs

### **1. Axion and Axion-Like-Particles**

The presentation of a new fundamental particle called the 'axion' traces back to the late 1970s, when Peccei and Quinn [1] introduced it as a possible solution to the otherwise-unexplained absence of *CP*-symmetry violation in strong interactions. The term 'axion' was first used by Weinberg [2], who, together with Wilczek [3], classified it as a "light, long-lived, pseudoscalar boson". Since then, axions have been the subject of intense scrutiny, from both theory and observation; however, half a century later, they remain one of the most compelling solutions to this so-called strong *CP* problem.

Although nothing in the theory forbids it, and it is therefore expected, a violation of the Charge × Parity (*CP*) symmetry in Quantum Chromo-Dynamics (*QCD*) has never been experimentally observed. The term of the Lagrangian corresponding to *CP* violation can be written as

$$\mathcal{L}\_{\theta\_{QCD}} = \theta\_{QCD} \frac{g^2}{32\pi^2} G^{a}\_{\mu\nu} \tilde{G}^{\mu\nu}\_a \, , \tag{1}$$

where *θQCD* is a phase parameter of *QCD*, *G* is the gluon field strength tensor, *a* indicates summation over the *SU*(3) color indices, and *g* is the *QCD* coupling constant. In the absence of *CP* violation, *θQCD* = 0.

For example, the electric dipole moment of the neutron, *d<sub>n</sub>*, which depends on the *θQCD* angle and is, therefore, sensitive to the *CP*-violating term, is experimentally bound [4] to |*d<sub>n</sub>*| ≤ 1 × 10<sup>−26</sup> *e* cm, which translates into *θQCD* < 10<sup>−10</sup>, revealing a *fine-tuning* problem.

**Citation:** Batković, I.; De Angelis, A.; Doro, M.; Manganaro, M. Axion-like Particle Searches with IACTs. *Universe* **2021**, *7*, 185. https://doi.org/10.3390/universe7060185

Academic Editor: Tina Kahniashvili

Received: 6 April 2021 Accepted: 3 June 2021 Published: 5 June 2021 Corrected: 27 January 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The Peccei–Quinn (PQ) mechanism solves the Strong *CP* Problem by introducing a new global symmetry, known as *U*(1)*PQ* symmetry, which makes the *CP*-violating term (Equation (1)) in the *QCD* Lagrangian negligible. Axions are, therefore, pseudo-Nambu– Goldstone bosons associated with the breaking of the *U*(1)*PQ* symmetry [2,3].

In the PQ formalism, the axion is a particle of mass *ma* and decay constant *fa*, related to the decay amplitude, i.e., to the coupling. In the original model of the axion proposed by Peccei and Quinn [1], Weinberg [2] and Wilczek [3], the axion decay constant *fa* is of the order of the electroweak scale (∼246 GeV), and the mass of the axion *ma* is inversely proportional to it. Its mass was, therefore, expected to be rather large, i.e., of the order of 100 keV, following the general relation

$$m\_a \simeq 6 \times 10^{-6} \text{ eV} \left( \frac{10^{12} \text{ GeV}}{f\_a} \right). \tag{2}$$

Using experimental limits based on stellar evolution and rare particle decays, this first model was ruled out. Soon after, two new models, abbreviated as *KSVZ* [5,6] and *DFSZ* [7,8], emerged. They had in common the fact that the energy scale of the symmetry breaking was instead proposed to be large, i.e., close to the "Grand Unification scale", with an energy of 10<sup>15</sup> GeV. This translated into a very light axion, with mass *ma* ∼ 10<sup>−9</sup> eV. These axions would be very weakly coupled, hence the name currently used to dub them: "invisible axions". Given their mass and coupling, these axions have eluded several experiments to date. Furthermore, a similar but strictly massless pseudoscalar Goldstone particle was also considered, and named the arion [9,10].
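As a quick numerical illustration of Equation (2) (a sketch with the benchmark normalization quoted above; the helper function name is ours), the electroweak-scale and GUT-scale choices of *fa* give masses differing by about thirteen orders of magnitude:

```python
def axion_mass_eV(f_a_GeV):
    """Benchmark axion mass in eV from Equation (2): m_a ~ 6e-6 eV * (1e12 GeV / f_a)."""
    return 6e-6 * (1e12 / f_a_GeV)

# Original PQWW model: f_a at the electroweak scale (~246 GeV) -> heavy axion, O(10 keV)
print(axion_mass_eV(246.0))   # ~2.4e4 eV
# Invisible-axion models: f_a near the Grand Unification scale -> very light axion
print(axion_mass_eV(1e15))    # 6e-9 eV
```

This makes explicit why the original model predicted an axion heavy enough to be excluded, while the *KSVZ*/*DFSZ* axion falls into the "invisible" regime.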

At present, after many unsuccessful searches for axions (see Figure 1 for a collection of limits), the axion model was extended to a wider group of particles, called Axion-Like Particles (ALPs), in which the decay constant is no longer coupled with the axion mass, in contrast with the original axion (Equation (2)) [11]. ALPs are also often found in SM extensions, motivated by string theory.

**Figure 1.** ALP parameter space with current constraints (last update: July 2020). The collected limits, references and plots are available in the GitHub repository: https://cajohare.github.io/AxionLimits/ (accessed on 3 June 2020).

A real "treasure" for the experimental detectability of ALPs is the term representing the axion coupling to photons through the two-photon vertex, shown in Figure 2. This term reads

$$\mathcal{L}\_{a\gamma\gamma} = -\frac{g\_{a\gamma\gamma}}{4} F\_{\mu\nu} \tilde{F}^{\mu\nu} a = g\_{a\gamma\gamma}\, \mathbf{E} \cdot \mathbf{B}\, a,\tag{3}$$


where *gaγγ* is the photon-ALP coupling, *Fμν* is the strength tensor of the electromagnetic field, *F*˜*μν* its dual, *a* is the axion field with mass *ma*, *E* is the electric field of a beam photon, and *B* is the external<sup>1</sup> magnetic field. This effect, known as photon-ALP conversion, occurs in magnetic fields and is the basis of many experiments in the search for ALPs.

**Figure 2.** Feynman diagram of photon-axion coupling vertex.

More recently, axions have also been proposed as viable Dark Matter (DM) particle candidates. The reason relies upon their small mass, combined with a possibly large decay constant *fa* ∼ 10<sup>12</sup> GeV. Since they are connected to spontaneous symmetry breaking, they could have been produced in the early Universe via the "misalignment" mechanism. As such, they could represent a substantial fraction of the DM. Arias et al. [11] report that, in order to explain the current amount of DM with ALPs, the axion coupling, which depends on the axion mass, has to be

$$g\_{a\gamma\gamma} < 10^{-12} \left[\frac{m\_a}{1\,\mathrm{neV}}\right]^{1/2} \mathrm{GeV}^{-1}.\tag{4}$$

Together with hidden photons, axions are viable candidates for DM, and are collectively named Very Weakly Interacting Slim Particles (WISPs).

### *Experimental Searches for ALPs*

A wide class of axion searches is performed with special helioscopes, i.e., instruments pointing at the Sun, such as the well-known CERN Axion Solar Telescope (CAST) [12]. Axion helioscopes search for axions produced in the interior of the Sun by the conversion of plasma photons in the Coulomb field of charged particles, the so-called Primakoff process. By creating a strong magnetic field in the instrument and placing an X-ray detector at the far end, these detectors aim to reveal the reconversion of axions into X-ray photons [13]. CAST uses a dipole magnet with a strength of ≈9 T and a length *L* = 9.26 m. The latest constraint on the axion-photon coupling obtained with CAST [12] is *gaγγ* < 6.6 × 10<sup>−10</sup> GeV<sup>−1</sup>. Progress in this detection technique is expected from the new-generation axion helioscope, the International Axion Observatory (IAXO) [14]. Additional constraints on solar axions can be obtained using the Mössbauer [15] and axioelectric effects [16], among others.

Alternative methods are pursued in so-called 'light-shining-through-a-wall' (LSW) experiments, in which photons from a strong laser beam are searched for beyond a wall that can be crossed by ALPs but not by photons, as done with the Optical Search for QED Vacuum Birefringence (OSQAR) [17] at CERN. There are also experiments based on the expected axion-induced birefringence of the vacuum, such as ALPS [18]. As proposed by Sikivie [13], another observable phenomenon could be the conversion of axions to photons in a resonant cavity. This study laid the theoretical ground for modern experiments such as the Axion Dark Matter eXperiment (ADMX) [19] and the QUest for AXions (QUAX) experiment [20]. The QUAX experiment [20] uses a classical haloscope [21] and, unlike ADMX, exploits the axion interaction with the fermionic spin. For this purpose, QUAX uses a ferromagnetic haloscope, and it has set a limit on the axion–electron coupling for DM axions with masses 42.4 μeV < *ma* < 43.1 μeV [22]. The mentioned experiments, along with helioscopes, are currently the only ones capable of accessing the parameter space corresponding to QCD axions.

Astrophysical searches for axions and ALPs exploit the cosmic magnetic fields and ample photon fluxes present in the cosmos. Clusters of galaxies, for instance, have magnetic fields at their cores that are orders of magnitude larger than in the average intracluster medium [23–27]. Magnetic fields in active galactic nuclei or pulsars could also be considered as a possible "medium" for the conversion of photons into axions or ALPs. In the following section, we focus on astrophysical searches in the gamma-ray range.

### **2. Phenomenology of the Mixing between Gamma-Rays and ALP and Propagation in the Astrophysical Environment**

The existence of axions and ALPs can be probed through their imprints on the spectra of astrophysical sources. This is due to the fact that, in the presence of magnetic fields, ALPs couple with photons. Therefore, TeV gamma rays travelling over cosmological distances can oscillate into ALPs through the interaction with magnetic fields, and/or convert into ALPs in strong magnetic fields and, as such, cross astrophysical distances until they possibly encounter another strong magnetic field, such as that of the Milky Way, in which they can convert back into observable gamma rays. All these conversion/reconversion processes are governed by a mixing probability *Pγγ*, which depends on the actual ALP mass and coupling, as well as on the characteristics of the magnetic field.

### *2.1. ALP Propagation*

In order to understand the phenomenon of conversion, it is necessary to compute the term *Pγγ*. The Lagrangian of the photon-ALP system can be written as

$$\mathcal{L} = -\frac{g\_{a\gamma\gamma}}{4} F\_{\mu\nu} \tilde{F}^{\mu\nu} a - \frac{1}{4} F\_{\mu\nu} F^{\mu\nu} + \frac{\alpha^2}{90 \, m\_e^4} \left[ (F\_{\mu\nu} F^{\mu\nu})^2 + \frac{7}{4} (F\_{\mu\nu} \tilde{F}^{\mu\nu})^2 \right] + \frac{1}{2} (\partial\_{\mu} a \, \partial^{\mu} a - m\_a^2 a^2), \tag{5}$$

where the first term is the photon–ALP coupling L*aγγ* discussed in Equation (3), the second is the standard electromagnetic term, the third is the effective Euler–Heisenberg Lagrangian L*EH* accounting for QED one-loop corrections to photon propagation in an external magnetic field [28], and the last term L*<sup>a</sup>* describes the kinetic and mass terms of the axion field. To model the propagation, we consider the motion of the ALP along the *x*<sup>3</sup> direction in a cold, ionized plasma. Generally, for polarized photons and relativistic ALPs, the equations of motion can be written as:

$$
\left(i\frac{d}{dx\_3} + E + \mathcal{M}\right)\begin{pmatrix} A\_1(x\_3) \\ A\_2(x\_3) \\ a(x\_3) \end{pmatrix} = 0,\tag{6}
$$

where M is the photon-ALP mixing matrix. *A*1(*x*3) and *A*2(*x*3) represent the photon linear polarization amplitudes along the *x*<sup>1</sup> and *x*<sup>2</sup> axes, respectively, and *a*(*x*3) is the axion field strength [29]. The solution of this equation can be expressed through the transfer function T (*x*3, 0; *E*), with the initial condition T (0, 0; *E*) = 1.

If a homogeneous magnetic field transverse to the propagation direction of the photon beam (lying along the *x*<sup>2</sup> direction) is assumed, the photon–ALP mixing matrix M simplifies to

$$\mathcal{M}\_0 = \begin{pmatrix} \Delta\_\perp & 0 & 0 \\ 0 & \Delta\_{\parallel} & \Delta\_{a\gamma} \\ 0 & \Delta\_{a\gamma} & \Delta\_a \end{pmatrix}, \tag{7}$$

where the elements in this matrix are written considering the plasma condition, the QED vacuum birefringence effect, the axion field, and the photon–ALP mixing. They can be written as

$$
\Delta\_{\perp} = \Delta\_{pl} + 2\Delta\_{QED}; \quad \Delta\_{\parallel} = \Delta\_{pl} + \frac{7}{2}\Delta\_{QED}; \quad \Delta\_{a\gamma} = \frac{1}{2}g\_{a\gamma\gamma}B\_{\perp}; \quad \Delta\_{a} = -\frac{m\_{a}^{2}}{2E},
$$

with

$$
\Delta\_{pl} = -\frac{\omega\_{pl}^{2}}{2E} \quad \text{and} \quad \Delta\_{QED} = \frac{\alpha E B\_{\perp}^{2}}{45\pi B\_{CR}^{2}}.
$$

In the above equations, *α* is the fine structure constant, *ωpl* is the plasma frequency, connected to the ambient thermal electron density, and *BCR* ∼ 4.4 × 10<sup>13</sup> G is the critical magnetic field. The term Δ*a<sup>γ</sup>* represents the photon-ALP mixing and depends on the strength of the interaction *gaγγ* and on the intensity of the transverse magnetic field *B*⊥. In general, the magnetic field *B* need not lie along the *x*<sup>2</sup> direction, but at an angle *ψ* from it. In this case, the equations of motion are solved with the transfer function T (*x*3, 0, *E*; *ψ*) = *V*(*ψ*) T (*x*3, 0, *E*) *V*†(*ψ*), where M is replaced by M = *V*(*ψ*) M<sup>0</sup> *V*†(*ψ*).
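The matrix algebra of Equation (7) and of the rotation by *ψ* can be sketched numerically as follows (a toy example: the Δ values are arbitrary dimensionless numbers, not physical ones, and only the structure of the rotation is illustrated):

```python
import numpy as np

# Toy, dimensionless stand-ins for the Delta terms of Equation (7)
d_perp, d_par, d_ag, d_a = -0.10, -0.05, 0.20, -0.30

M0 = np.array([[d_perp, 0.0,   0.0 ],
               [0.0,    d_par, d_ag],
               [0.0,    d_ag,  d_a ]])

def V(psi):
    """Rotation mixing the two photon polarization states by the field angle psi."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[ c,   s,   0.0],
                     [-s,   c,   0.0],
                     [0.0,  0.0, 1.0]])

# M = V(psi) M0 V†(psi); V is real, so the adjoint reduces to the transpose
psi = 0.7
M = V(psi) @ M0 @ V(psi).T
```

For *ψ* = 0 the rotation is the identity and M reduces to M<sup>0</sup>, as expected; for any *ψ*, M stays symmetric, since the rotation is orthogonal.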

### *2.2. Probability of ALP-Gamma Conversion*

With the transfer function, we can compute the probability of the conversion of a gamma ray to an ALP in an external magnetic field. The simplest description of the magnetic field is that of a single domain. In this case, the probability of the photon–ALP mixing can be written as [28]

$$P\_{\gamma \to a} = \left(\Delta\_{a\gamma} \, d\right)^2 \frac{\sin^2(\Delta\_{\text{osc}} \, d/2)}{\left(\Delta\_{\text{osc}} \, d/2\right)^2} = \sin^2(2\theta) \sin^2\left(\frac{\Delta\_{\text{osc}} \, d}{2}\right),\tag{8}$$

where *θ* is the rotation angle, *θ* = (1/2) arcsin(2Δ*aγ*/Δ*osc*), *d* is the size of the domain and Δ*osc* is the oscillation wave number, Δ*osc*<sup>2</sup> = (Δ*<sup>a</sup>* − Δ*pl*)<sup>2</sup> + 4Δ*aγ*<sup>2</sup>. This term is often written in terms of a critical energy *Ecrit*, defined as

$$E\_{crit} \sim 2.5\,\text{GeV} \,\frac{|m\_{a,neV}^2 - \omega\_{pl,neV}^2|}{g\_{11}B\_{\mu G}},\tag{9}$$

where *ma*,*neV* and *ωpl*,*neV* are the ALP mass and the plasma frequency in units of neV, *Bμ*<sup>G</sup> is the magnetic field in microgauss and *g*<sup>11</sup> = *gaγγ*/10<sup>−11</sup> GeV<sup>−1</sup>. The critical energy is defined such that, around and above this value, the conversion probability *Pγ*→*<sup>a</sup>* in Equation (8) becomes sizable. With *Ecrit*, the term Δ*osc* can be written as [30] Δ*osc* = 2Δ*a<sup>γ</sup>* √(1 + (*Ec*/*E*)<sup>2</sup>).
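For orientation, Equation (9) can be evaluated directly. In the sketch below (our own helper; the Perseus-like parameter values are illustrative assumptions), the critical energy lands squarely in the TeV band probed by IACTs:

```python
def e_crit_GeV(m_a_neV, omega_pl_neV, g11, B_muG):
    """Critical energy of Equation (9), in GeV."""
    return 2.5 * abs(m_a_neV**2 - omega_pl_neV**2) / (g11 * B_muG)

# Illustrative parameters: m_a = 100 neV, omega_pl ~ 1 neV,
# g_agammagamma = 1e-11 GeV^-1 (g11 = 1), B ~ 10 muG (cluster-core scale)
print(e_crit_GeV(100.0, 1.0, 1.0, 10.0) / 1e3)  # ~2.5 TeV
```

Note the scalings: a stronger coupling or field lowers *Ecrit*, while a heavier ALP raises it quadratically.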

### *2.3. Gamma-Ray Survival Probability*

We are now in a position to compute the gamma-ray survival probability, that is, the fraction of photons that did *not* convert into ALPs.

To compute it, the exact morphology of the magnetic field should be considered, since the hypothesis of a single magnetic field domain with a fixed orientation is not plausible. A common approach is to divide the path into *N* different domains. By doing this, the transfer matrix can be properly reformulated (see [31]), thus providing the total photon survival probability *Pγγ*

$$P\_{\gamma\gamma} = \frac{1}{3} \left( 1 - \exp\left( -\frac{3}{2} N P\_{\gamma \to a} \right) \right). \tag{10}$$

Rewriting Equation (8) with the previously introduced substitutions, we obtain

$$P\_{\gamma \to a} = \sin^2(2\theta) \sin^2\left[\frac{g\_{a\gamma\gamma}Bd}{2}\sqrt{1 + \left(\frac{E\_c}{E}\right)^2}\right].\tag{11}$$

As one can see from Equation (11), *Pγ*→*<sup>a</sup>* depends on the product of the domain length *d* and the magnetic field *B*. Because of this, a well-defined magnetic field model is essential to account for the oscillations that the photon-ALP mixing induces in the spectra of astrophysical objects.
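In dimensionless form, Equations (10) and (11) can be sketched as follows (our own helper functions; the inputs are the product Δ*aγ*·*d* and the ratio *Ec*/*E*, so no unit conversions are needed):

```python
import math

def p_single_domain(delta_ag_d, ec_over_e):
    """Single-domain photon->ALP conversion probability, Equation (11).

    delta_ag_d -- dimensionless product Delta_agamma * d = g B d / 2
    ec_over_e  -- ratio E_c / E of the critical to the photon energy
    """
    s = math.sqrt(1.0 + ec_over_e ** 2)   # Delta_osc / (2 Delta_agamma)
    sin2_2theta = 1.0 / s ** 2            # sin^2(2 theta) = (2 Delta_agamma / Delta_osc)^2
    return sin2_2theta * math.sin(delta_ag_d * s) ** 2

def p_gamma_gamma(n_domains, p_single):
    """Equation (10) as printed, for N magnetic-field domains."""
    return (1.0 - math.exp(-1.5 * n_domains * p_single)) / 3.0

# Well above the critical energy (E_c/E -> 0) the mixing can be maximal...
print(p_single_domain(math.pi / 2, 0.0))   # -> 1.0
# ...while far below it (E_c/E >> 1) the conversion is strongly suppressed
print(p_single_domain(math.pi / 2, 10.0))
```

The second function also makes the saturation behaviour explicit: for large *N* the exponential vanishes and the expression tends to 1/3, the many-domain limit.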

### *2.4. Astrophysical Magnetic Field and Photon Survival*

*Pγγ* depends on the strength of the axion coupling to the photon, and on the intensity and coherence scale of the magnetic field in the medium through which the photon/ALP beam propagates. While the first term is governed by the microscopic nature of the ALP, there are several magnetic field realizations in the universe. Therefore, one needs to consider all the different magnetic field environments along the path from the source to the detector: photon-ALP mixing can take place in the magnetic field at the source, in the local environment around the source, in the intergalactic magnetic field and, finally, in the Galactic magnetic field [32]. Depending on the observed source, different combinations of magnetic fields can be considered and, as reported by Sikivie [13], there are clearly several ways this problem can be approached. For example, one of the most studied cases is that of Active Galactic Nuclei (AGN) located in the cores of galaxy clusters. Here, once generated, gamma rays from the AGN would encounter the strong magnetic fields of the cluster core and have a sizeable chance of being converted into ALPs. Such ALPs could travel unimpeded across intergalactic distances, where the magnetic field is extremely weak, allowing only a moderate photon reconversion. Finally, the ALPs, upon entering the Milky Way (MW) magnetic field, could (or could not) be converted back into gamma rays.

There are, therefore, several kinds of imprint on the original gamma-ray spectrum. In the first case, if an ample fraction of photons is converted at the source into ALPs that do not later convert back in the MW, a signal depletion would be observed. In the second case, if an ample conversion happens at the source and a back-conversion then happens in the MW, one could observe a stronger signal than expected; for example, if the ALPs travelled through regions of space that are opaque to gamma rays (regions with strong particle or radiation fields). One should mention that the above signatures would be observed on top of the well-known gamma-ray extinction due to the interaction with the Extragalactic Background Light (EBL) [30,33–36], which strongly limits the observation of TeV emission above redshift *z* ∼ 1. The propagation of VHE photons is affected by pair-production processes: depending on their energy, photons interact with the EBL or with the cosmic microwave background (CMB), producing an electron–positron pair (*γγ* → *e*<sup>+</sup>*e*<sup>−</sup>). The flux attenuation caused by these processes is dominant for photon energies around *E<sup>γ</sup>* ≈ 500 GeV and *E<sup>γ</sup>* ≈ 10<sup>6</sup> GeV, respectively [33]. In this way, most photons are absorbed and evade detection: the universe becomes opaque to VHE gamma rays. The above-mentioned ALP signatures arise in the regime above the critical energy *Ecrit* of Equation (9), where the photon–ALP mixing is maximal. A third case is possible around *Ecrit*. In this regime, the oscillatory behaviour of Equation (11) would create 'wiggles' in the spectrum, tracing the probability term. Such wiggles could hardly be mistaken for features of astrophysical origin and would, therefore, constitute a clear detection. This case is extensively discussed by, e.g., Sánchez-Conde et al. [35], de Angelis et al. [36], Hooper and Serpico [37], de Angelis et al. [38].

### *2.5. A Concrete Example of the Photon Survival Probability*

As a showcase, in Figure 3 we report the *Pγγ* calculated for *ma* = 100 neV and *gaγγ* = 1 × 10<sup>−11</sup> GeV<sup>−1</sup>, assuming conversion in the Perseus galaxy cluster magnetic field and in the Galactic magnetic field. The intergalactic magnetic field is neglected in this case because its strength is restricted to *B* ∼ (0.1 − 1) nG and is still not confirmed. In order to probe photon–ALP conversions in a magnetic field of this strength, one would need to access a critical energy *Ecrit* ≳ 500 TeV or probe significantly lower ALP masses, *ma* < 10<sup>−10</sup> eV [30]. On the other hand, there are works considering the photon–ALP mixing only in the host galaxy cluster magnetic field and the intergalactic magnetic field [39], while some include all three of the mentioned magnetic field environments [29]. For the intergalactic magnetic field there are usually no well-established values, with bounds between 10<sup>−15</sup> G and 10<sup>−9</sup> G, while the strength of galaxy cluster magnetic fields is modelled assuming turbulence and using the Kolmogorov power spectrum. Regarding the magnetic field of the Milky Way, a few models are used most often [40–42]. Most of these models are based on Galactic synchrotron emission maps and extragalactic rotation measures, modelling a disk field and an extended halo field. One recent work [43] shows that, in ALP searches using observations of BL Lacertae (often shortened as BL Lac, a well-known blazar), there is a sizable jet-mixing effect, meaning that the modelling of the BL Lac jet magnetic field is needed. Changes in the parameters of the jet model can modify the photon–ALP mixing in a way that enlarges the part of the ALP parameter space available for study. It is also shown that, for sources embedded in the strong cluster magnetic fields of dense environments, this effect is not relevant, so the constraints set by Abramowski et al. [39], Ajello et al. [44] remain valid. In the future, photon–ALP mixing in blazar jets might become relevant and, with the new generation of Cherenkov telescopes (e.g., the Cherenkov Telescope Array (CTA) [45]), the detection of more blazars at higher redshift is expected. In conclusion, very detailed magnetic field models are needed to address the photon-ALP mixing accurately.

**Figure 3.** Photon survival probability for *ma* = 100 neV and *gaγγ* = 1 × 10<sup>−11</sup> GeV<sup>−1</sup>, obtained using the gammaALPs code: https://github.com/me-manu/gammaALPs (accessed on 3 June 2020).

### **3. A Decade of Results with IACTs**

*3.1. VHE γ-ray Detection and Analysis Techniques*

While there had been several early attempts to detect gamma rays from the ground starting in the 1950s [46], ground-based gamma-ray astronomy officially started with the detection, in 1989, of the Crab Nebula by the Whipple telescope, which had been operating since 1986 [47]. Ever more TeV emitters have since populated the gamma-ray sky, one of the last unexplored windows on the electromagnetic radiation from the cosmos. Whipple belongs to the class of Imaging Atmospheric Cherenkov Telescopes (IACTs). IACTs are suitable for the detection of VHE *γ* rays, highly energetic photons which can be produced in the environments of astrophysical objects such as Active Galactic Nuclei (AGNs), supernovae, binary stars, pulsars, etc., as the result of highly accelerated (TeV-PeV) cosmic rays such as electrons and protons. The sensitivity of IACTs lies in the range ∼50 GeV–50 TeV. At present, Whipple is decommissioned, and there are three major operating IACT arrays: the High Energy Stereoscopic System (H.E.S.S.) [48], the Major Atmospheric Gamma-ray Imaging Cherenkov Telescopes (MAGIC) [49] and the Very Energetic Radiation Imaging Telescope Array System (VERITAS) [50]. IACTs measure the energy and direction of *γ*-rays indirectly: when a *γ*-ray penetrates the atmosphere, it interacts with atmospheric nuclei and produces a shower of particles. Charged particles belonging to the shower travel faster than the speed of light in the atmospheric medium, consequently producing Cherenkov light. The faint Cherenkov light is collected by the mirror dishes of the telescopes and reflected into a camera positioned in front of the mirror dish. The energy threshold of IACTs is inversely proportional to the signal-to-noise ratio, so it is convenient to maximize the mirror area and the throughput of the optical system to minimize the threshold. The shape of the shower image is described by the so-called Hillas parameters [51]. The *γ*-ray flux is calculated using Monte Carlo simulations trained with OFF data, taking into account the collection area of the telescopes and the effective time of the observations. IACT observations are usually performed in the so-called wobble mode, to allow for the subtraction of the background during the observations [52]. The analysis of the data for the existing IACTs differs at the high level of the analysis chain, where different methods to correct (unfold) the energy spectrum are used by the respective collaborations. The unfolding methods can be based on different algorithms, in order to assign a true energy to the *γ*-rays and to calculate the intrinsic spectrum of a source. In particular, each IACT array possesses a different configuration, so the instrument response functions used to obtain the final spectra differ. The principles of detection for IACTs are explained in detail in Section 2.2 of [53]. At present, the collaborations are converging towards common software analysis tools, such as ctools<sup>2</sup> and gammapy<sup>3</sup>. Despite a build-up of successes from the early Crab detection, the technique became truly mature in the first decade of this century, when not only were an increasing number of targets acquired, but the results also reached a level of precision and significance never achieved before.

As an example, in Reference [54] MAGIC reports the spectrum of the Crab Nebula over three orders of magnitude in energy and four orders of magnitude in intensity, being able to detect the source in less than 1 min. Along with this ramp-up in performance, attention moved from purely astrophysical interests to more fundamental questions, such as the possibility of observing the signature of ALPs in gamma-ray spectra. The first decade of the 21st century brought interest in the imprints and modifications that the conversion of photons to ALPs and vice versa could leave on the spectra of astrophysical objects [31,35–37,55,56].

### *3.2. Astrophysical Targets for ALPs Searches with IACTs*

In the attempt to maximize the ALP signatures, it is possible to select the best observational targets. These are astrophysical emitters where ample fluxes of high-energy gamma-ray photons are produced, and where the gamma-ray radiation encounters extended regions with significantly intense magnetic fields, extending over distances much larger than their coherence length [39]. These conditions guarantee that the probability of interaction is maximal (see Equation (8)). Recently, Abdalla et al. [45] quantified the importance of the intensity of the magnetic field and of the source brightness, showing, for example, that a factor of 2.5 more intense magnetic field could result in a factor of 10 stronger constraints on the ALP coupling ([45] Figure 7). In the gamma-ray TeV sky, sources often display a flaring state, as opposed to a baseline emission state. If possible, flaring states are preferred for ALP searches. The best candidates for observation are, therefore, Active Galactic Nuclei (AGNs), where particle acceleration and subsequent gamma-ray emission take place in the region around the central supermassive black hole (SMBH). AGNs are the largest population of TeV targets. An optimal situation is an AGN located in the central core of a galaxy cluster, especially a cool-core one, in which extended and intense magnetic fields permeate the region around the central galaxies. In this condition, the magnetic field is not only more intense (tens of μG) than in intergalactic space, but also more easily quantifiable experimentally. One of the best examples is the AGN NGC 1275 at the center of the Perseus galaxy cluster, presented above. Another class of objects of interest for ALP searches is compact objects, namely pulsars and neutron stars, which are also present in binary systems. Here, the magnetic field is more localized, but significantly more intense.

We will come back to source-specific information later in the text. In order to make a prediction of the ALP–photon interaction pattern, one has to define

both the microscopic nature of the ALP (mass and coupling) and the magnetic field. For the former, one has to build a model of the interaction, as done, for example, in the aforementioned open-source gammaALPs code, and scan the available parameter space; at present, this is mostly done with grid sampling. For the magnetic field, since the knowledge is not accurate, one normally computes several random realizations of the field and marginalizes the likelihood over this nuisance parameter, as we will show later.
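The scan-and-marginalize procedure just described can be caricatured as follows. Everything in this sketch is a toy stand-in: `toy_log_like` replaces a real spectral-fit likelihood (which a full propagation code such as gammaALPs would feed), the grid values are arbitrary, and the median over field realizations is only one possible marginalization choice:

```python
import random
import statistics

def toy_log_like(m_a, g, realization_seed):
    """Hypothetical log-likelihood of one (m_a, g) point for one B-field realization.

    A real analysis would fold a photon-ALP propagation model through the
    instrument response and compare with the measured spectrum; here a smooth
    dummy surface peaked at (m_a = 100, g = 1) plus seeded noise stands in.
    """
    rng = random.Random(realization_seed * 9973 + int(m_a) * 31 + int(10 * g))
    return -((g - 1.0) ** 2 + (m_a - 100.0) ** 2 / 1e4) - 0.1 * rng.random()

grid = [(m_a, g) for m_a in (10.0, 100.0, 1000.0) for g in (0.5, 1.0, 2.0)]  # neV, g11
n_realizations = 20  # random magnetic-field realizations per grid point

results = {}
for m_a, g in grid:
    lls = [toy_log_like(m_a, g, seed) for seed in range(n_realizations)]
    # marginalize over the unknown field: here, the median across realizations
    results[(m_a, g)] = statistics.median(lls)

best = max(results, key=results.get)
print(best)  # -> (100.0, 1.0) for this toy likelihood surface
```

The design point is that the nuisance (the field realization) is integrated out per grid node before comparing nodes, so the final scan ranks only the physical ALP parameters.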

Other targets have been explored for ALP searches. In case of a supernova explosion, ALPs would be emitted via the Primakoff process and could be observed in *γ* rays after a possible re-conversion in the magnetic field of the Milky Way. Following the observation of the supernova SN1987A, constraints were set based on the non-observation of *γ* rays coincident with the neutrino observations [57,58], but these were affected by strong uncertainties. For this reason, Payez et al. [59] revisited these analyses in more detail. Additionally, neutron stars are another possible candidate for ALP searches. Considering the radiative decays of axions produced by nucleon–nucleon bremsstrahlung in neutron stars [60,61], Berenji et al. [62] set constraints on the axion mass *ma* using *Fermi*-LAT data of four neutron stars. This phenomenon was investigated in previous works with X-ray [63] and *γ*-ray data from a supernova [64].

### *3.3. Critical Energy and Parameter Space for γ-ray Studies*

Interest in ALP searches in the *γ*-ray range was first stimulated by the unexplained observation of a change in light polarization in a vacuum permeated by a magnetic field, detected by the Polarization of the Vacuum with Laser (PVLAS) experiment; Zavattini et al. [65] offered an explanation based on the existence of a light axion. The results of the PVLAS experiment were in tension with the astrophysical limits. In order to reconcile the signal obtained with PVLAS, the authors theorized an ALP with mass *ma* = 1.3 meV and coupling *gaγγ* = 3 × 10<sup>−6</sup> GeV<sup>−1</sup>. Following this interpretation, Mirizzi et al. [55] included photon–ALP conversion in the magnetic field of our galaxy and, taking the mentioned parameters into account, presented the possible distortions in the photon spectra at energies *E<sup>γ</sup>* ≥ 10 TeV. A few months later, de Angelis et al. [66] and Hooper and Serpico [37] extended this approach. Taking into account the possibility of photon–ALP conversion in and around the gamma-ray sources, which are strong astrophysical accelerators, they showed that the critical energy in Equation (9) falls directly in the gamma-ray range. The photon–ALP conversion then depends on the condition

$$g_{a\gamma\gamma}\, B\, s/2 \ge 1 \tag{12}$$

where *B* is the magnetic field component aligned with the photon polarization vector and *s* is the size of the magnetic field domain. If the photon–axion conversion happens at the source, the product *B s* in Equation (12) is directly connected to the Hillas criterion [67] for the maximum possible acceleration energy of cosmic rays; taking into account that cosmic rays with energies up to a few times 10<sup>20</sup> eV have been observed, it follows that sources with *B s* ≥ 0.3 G pc should exist [37]. Hooper and Serpico [37] showed that IACTs such as H.E.S.S., MAGIC and VERITAS could probe the mass range *ma* = (10<sup>−9</sup>–10<sup>−3</sup>) eV with sensitivities stronger than CAST, as shown in Figure 4. The best candidates for observation were identified as AGNs located in the cores of galaxy clusters. One can now compare Figure 4 with Figure 1 to see how right Hooper and Serpico [37] were in their predictions.
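As a quick numerical check of the condition in Equation (12), one can evaluate *g B s*/2 in natural units (ħ = c = 1). The unit conversions below are approximate, and the Hillas-motivated product *B s* = 0.3 G pc is taken from the discussion above:

```python
# Natural-unit conversions (hbar = c = 1); values approximate.
G_TO_GEV2 = 1.953e-20      # 1 Gauss in GeV^2
PC_TO_INV_GEV = 1.564e32   # 1 parsec in GeV^-1

def mixing_strength(g_inv_gev, bs_gauss_pc):
    """Left-hand side of Equation (12), g * B * s / 2 (dimensionless),
    for a coupling in GeV^-1 and a product B*s in Gauss * parsec."""
    return 0.5 * g_inv_gev * bs_gauss_pc * G_TO_GEV2 * PC_TO_INV_GEV

# Hillas-motivated product B*s = 0.3 G pc, coupling g = 1e-11 GeV^-1:
val = mixing_strength(1e-11, 0.3)
print(val)   # ~4.6 -> the condition g B s / 2 >= 1 is satisfied
```

This confirms that, for couplings near current limits, sources satisfying the Hillas criterion comfortably meet the mixing condition.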

**Figure 4.** ALPs parameter space available for gamma-ray observations. Reprinted from Hooper and Serpico [37].

The first works by de Angelis et al. [33,66] are based on the unexpected transparency of the universe: EBL observations at the time showed a higher transparency at high redshifts than expected [68,69]. If converted to ALPs, photons could travel through extragalactic space without interacting with EBL or CMB photons, be converted back to photons in the Galactic magnetic field, and be detected as such; photon–ALP conversion could thus reduce the opacity of the universe to VHE gamma rays, as discussed above. In order to explain the possible detection of TeV photons from a source located at *z* = 0.44, unexpected from the conventional physics of photon propagation at the time, Sánchez-Conde et al. [35] laid out a similar model, combining mixing near or in the source with mixing in intergalactic space, and stressed the importance of observations at both the lowest and highest energies in order to better constrain the intrinsic spectra of the sources and the EBL attenuation, and to explore the morphology of the considered magnetic fields. The photon flux attenuation was investigated by varying and combining the photon energy, magnetic field intensity, source redshift and ALP parameters, showing that these effects could be observed in the spectra of AGNs at the highest energies, *E<sup>γ</sup>* ≥ 1 TeV, especially if combined with the *Fermi*-LAT energy regime [35]. After MAGIC detected surprisingly rapidly varying emission from the flat spectrum radio quasar (FSRQ) PKS 1222+216 [70], Tavecchio et al. [71] performed a combined ALP study using the MAGIC and *Fermi*-LAT data. The aim of [71] was to present an emission model including the photon–ALP oscillation mechanism and explain the mentioned detection.
The results agreed with the previously introduced De Angelis, Roncadelli and Mansutti (DARMA) scenario, in which photon–ALP oscillations triggered by large-scale magnetic fields effectively reduce the EBL attenuation at energies above 100 GeV [36,38]. These results showed the possibility of explaining such emissions with photon–ALP oscillations, applicable also to the other detected FSRQs.

The challenge in detecting ALP-induced spectral features in gamma-ray spectra lies in the number of statistical and systematic fluctuations that shape the spectrum even in the absence of any ALP effect. First of all, the intrinsic spectrum is shaped by EBL absorption, as discussed above. This effect is non-negligible for targets farther than *z* ∼ 0.1, but many models based on EBL observations in the UV–infrared have been developed, so it is possible to correct the spectra for EBL absorption at different redshifts. The effects of LIV on the photon flux could also compete with the ALP conversion, but the power of a given source to constrain LIV increases with its distance, its time variability and the hardness of its energy spectrum, so not all the considered targets are also good targets for studying LIV. The energy reconstruction of IACTs is generally performed with about 10–20% precision, depending on the energy. Finally, the data are affected by a variety of systematics due to the instrument itself (e.g., telescope mirror reflectivity) as well as external factors (atmospheric optical depth); while the former can be estimated rather accurately, the latter are known less precisely. Observing irregularities in the spectrum, such as those caused by ALPs (see Figure 3), is, therefore, challenging.
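Correcting a spectrum for EBL absorption, as described above, amounts to multiplying the observed flux by exp(τ(E, z)). A minimal sketch is given below; the optical depth used is a placeholder that merely rises with energy, not a real EBL model:

```python
import numpy as np

def deabsorb(flux_obs, tau):
    """Correct an observed spectrum for EBL absorption:
    F_intrinsic = F_observed * exp(tau(E, z))."""
    return flux_obs * np.exp(tau)

# Toy optical depth rising with energy (NOT a real EBL model):
energy = np.array([0.1, 0.3, 1.0, 3.0])          # TeV
tau = 0.5 * energy                               # placeholder tau(E, z)
flux_obs = 1e-11 * energy**-2.5 * np.exp(-tau)   # absorbed power law
flux_int = deabsorb(flux_obs, tau)
print(flux_int)  # recovers the intrinsic 1e-11 * E^-2.5 power law
```

In practice the uncertainty of the adopted EBL model propagates directly into the de-absorbed spectrum, which is one of the systematics listed above.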

### *3.4. H.E.S.S. Results with PKS 2155-304*

After the first predictions of Hooper and Serpico [37], one of the first attempts to constrain ALPs with gamma rays was made by H.E.S.S., using data from the BL Lac object PKS 2155-304 [39]. In this work, a search for irregularities induced by photon–ALP mixing in the spectrum was performed, as schematically shown in Figure 5. The difficulty lies in searching for ALP-induced spectral patterns on top of a spectrum generated by the main astrophysical processes at the source; normally, these generate rather smooth and featureless spectra, such as power laws, with or without a cutoff, or log-parabolic shapes. Abramowski et al. [39] assumed a power-law function as the local spectral model, justified by the processes explaining the acceleration and radiation in extreme astrophysical sources such as BL Lacs [72]. For an estimation of the irregularities, Wouters and Brun [73] proposed a reduced-*χ*<sup>2</sup> test with the null hypothesis built without the ALP contribution (*φw*/*oALP*(*θ*)):

$$I = \frac{1}{d} \sum_{k=1}^{N} \frac{\left(\phi_{w/oALP}(\vec{\theta}) - \phi_k\right)^2}{\sigma_k^2} = \frac{\chi^2}{d},\tag{13}$$

where *d* is the number of degrees of freedom, *k* runs over the *N* bins, and *φw*/*oALP*(*θ*) is a global fit without ALPs with spectral parameters *θ*. This method relies on the accuracy of the assumed spectral shape, and is therefore subject to possible bias, but it can be used when the global fit represents a good estimate of the spectrum [74]. Expanding on this, Abramowski et al. [39] searched for irregularities avoiding a global fit, using only a spectral shape over three adjacent points of the energy spectrum (a triplet *i*):

$$\mathcal{I}^2 = \sum_{i} \frac{\left(\tilde{\phi}_i - \phi_i\right)^2}{\vec{d}_i^{\,T}\,\mathbb{C}_i\,\vec{d}_i},\tag{14}$$

where (*φ̃<sub>i</sub>* − *φ<sub>i</sub>*)<sup>2</sup> is the squared residual of the middle bin in the triplet, *φ<sub>i</sub>* the measured flux, *φ̃<sub>i</sub>* the flux in the middle bin expected from the power-law fit to the side bins, C*<sub>i</sub>* the covariance matrix for the triplet, and *d<sub>i</sub><sup>T</sup>* = (*∂φ̃<sub>i</sub>*/*∂φ<sub>i−1</sub>*, −1, *∂φ̃<sub>i</sub>*/*∂φ<sub>i+1</sub>*). Although both methods showed consistent results, Abramowski et al. [39] concluded that, owing to its independence of a global spectral model assumption, the sum of residuals over three adjacent spectral bins is preferable for this kind of analysis. This estimator is calculated for each set of ALP parameters, and 1000 spectra are simulated in order to account for the randomness of both the intergalactic magnetic field and the galaxy cluster magnetic field. The distributions of the spectral irregularity estimator for the observed spectrum and for spectra folded with photon–ALP oscillations for different ALP parameters are compared, and exclusions of the ALP parameter space were obtained at the 95% confidence level. The results (Figure 5, right) yielded constraints on the photon–ALP coupling, *gaγγ* < 2.1 × 10<sup>−11</sup> GeV<sup>−1</sup>, for ALP masses *ma* in the range (15–60) neV [39].
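The triplet method can be sketched in a few lines. The version below is deliberately simplified: it ignores the bin-to-bin correlations (the full covariance of the published analysis) and uses plain flux errors, so it illustrates the logic rather than reproduces the H.E.S.S. estimator:

```python
import numpy as np

def triplet_irregularity(energy, flux, sigma):
    """Simplified triplet estimator: for each interior bin, predict its
    flux from the exact power law through its two neighbours and sum the
    squared residuals over the flux errors. Correlations between bins
    are ignored in this sketch."""
    total = 0.0
    for i in range(1, len(energy) - 1):
        # power-law index fixed by the two side bins (exact in log-log space)
        slope = ((np.log(flux[i + 1]) - np.log(flux[i - 1]))
                 / (np.log(energy[i + 1]) - np.log(energy[i - 1])))
        pred = flux[i - 1] * (energy[i] / energy[i - 1]) ** slope
        total += (pred - flux[i]) ** 2 / sigma[i] ** 2
    return total

e = np.logspace(-1, 1, 9)    # TeV, illustrative binning
f = 1e-11 * e ** -2.5        # pure power law -> zero irregularity
print(triplet_irregularity(e, f, 0.1 * f))
```

A featureless power law yields an estimator value of essentially zero, while ALP-induced wiggles would inflate it; comparing the observed value with the simulated distributions then drives the exclusion.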

**Figure 5.** (**Left**) Schematic view of spectral irregularity quantification. Reprinted from Abramowski et al. [39]. (**Right**) Constraints on ALPs parameter space set by CAST, compared with results from the previous helioscope Sumico and DAMA experiment, as well as with PVLAS [75] and OSQAR [17] experiments, constraints set by H.E.S.S collaboration, observations of SN1987A, Solar astrophysics and Dark Matter (DM) searches. Reprinted from Anastassopoulos et al. [12].

### *3.5. Studies on Spectral Irregularities of NGC 1275*

The IACT results were complemented at lower energies by *Fermi*-LAT data. Ajello et al. [44] analyzed 6 years of NGC 1275 data, collected with *Fermi*-LAT, using the Pass 8 event analysis, and produced ALP predictions by including the photon–ALP conversion in the intracluster magnetic field and in the Galactic magnetic field of the Milky Way. A fit of the time-averaged spectrum of NGC 1275 to the ALP models was made, and a likelihood analysis was performed. In Figure 6, one can see the likelihood for one of the event types, together with the best spectral fit with and without ALPs. To evaluate the ALP hypothesis, Ajello et al. [44] exploited a likelihood ratio test statistic (*TS*). In the procedure, the time-averaged spectrum is modelled by a smooth function, and a likelihood is extracted for each reconstructed energy bin *k*, L(*μk*, *θ*|*Dk*), where *μk* is the expected number of photons in the photon–ALP conversion scenario, *θ* are the nuisance parameters of the fit, and *Dk* is the observed photon count. For each set of ALP parameters and magnetic field, the joint likelihood of all reconstructed energy bins *k* is maximized and the best-fit parameters are determined. Among the different turbulent magnetic field realizations, simulated to account for its randomness, the one corresponding to the 0.95 quantile of the likelihood distribution is chosen. The likelihood ratio test is performed as

$$TS = -2\ln\left(\frac{\mathcal{L}(\mu_{0},\hat{\hat{\theta}}\,|\,D)}{\mathcal{L}(\mu_{95},\hat{\theta}\,|\,D)}\right),\tag{15}$$

where the null hypothesis is the no-ALP scenario (including the EBL attenuation), with expected photon count *μ*<sub>0</sub> and nuisance parameters $\hat{\hat{\theta}}$, and the alternative ALP hypothesis has expected photon count *μ*<sub>95</sub> and nuisance parameters $\hat{\theta}$ [44]. Aside from the degeneracy of the photon–ALP conversion in coupling and magnetic field, and the fact that the induced irregularities do not scale linearly with the ALP parameters, the null hypothesis, in contrast to the ALP hypothesis, is independent of the realisations of the magnetic field. For this reason, the null distribution needs to be derived from Monte Carlo simulations [44]. The exclusion threshold value, above which a set of ALP parameters can be excluded at the 95% confidence level, is also calculated from Monte Carlo simulations. The result of this study was the exclusion of ALP coupling values in the range 0.5 × 10<sup>−11</sup> GeV<sup>−1</sup> ≤ *gaγγ* ≤ ×10<sup>−11</sup> GeV<sup>−1</sup> for ALP masses 0.5 neV ≤ *ma* ≤ 5 neV, and *gaγγ* ≥ 1 × 10<sup>−11</sup> GeV<sup>−1</sup> for 5 neV ≤ *ma* ≤ 10 neV [44], as seen in Figure 6.
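The likelihood ratio of Equation (15) can be sketched with binned Poisson statistics. The counts and model expectations below are toy numbers, and the nuisance parameters are assumed to have been profiled out already (which is what the hats in Equation (15) denote):

```python
import numpy as np

def poisson_logl(mu, counts):
    """Poisson log-likelihood up to a constant (the log(k!) term cancels
    in the likelihood ratio)."""
    mu = np.asarray(mu, dtype=float)
    return float(np.sum(counts * np.log(mu) - mu))

def ts_alp(mu_null, mu_alp, counts):
    """TS = -2 ln( L(no-ALP) / L(ALP) ), as in Equation (15); nuisance
    parameters are assumed already profiled into mu_null and mu_alp."""
    return -2.0 * (poisson_logl(mu_null, counts) - poisson_logl(mu_alp, counts))

counts = np.array([55, 40, 18, 30, 12])        # toy observed counts per bin
mu_null = np.array([50., 35., 25., 17., 12.])  # smooth no-ALP model
mu_alp = np.array([52., 38., 20., 26., 12.])   # model with an ALP 'wiggle'
print(ts_alp(mu_null, mu_alp, counts))         # > 0: ALP model fits better here
```

Because the ALP hypothesis depends on the random field realization while the null does not, the significance of a given *TS* value cannot be read off a standard *χ*<sup>2</sup> distribution, which is why the null distribution is built by Monte Carlo, as described above.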

**Figure 6.** (**Left**) Likelihood curves for one event type and best spectral fits with and without ALPs. Reprinted from Ajello et al. [44]. (**Right**) Projected limits on the ALPs parameter space obtained with the *Fermi*-LAT study of the NGC 1275 data, compared with the results from other experiments at the time. Reprinted from Ajello et al. [44].

### *3.6. Combined Fermi-LAT and H.E.S.S. Observations of PKS 2155-304*

Another study using the *Fermi*-LAT data of PKS 2155-304 was carried out by Zhang et al. [76]. The data were taken from *Fermi*-LAT observations in the energy range 100 MeV–500 GeV. Photon–ALP oscillations in the inter-cluster magnetic field and the Galactic magnetic field of the Milky Way were included. For different sets of couplings in the range 10<sup>−12</sup> GeV<sup>−1</sup> ≤ *gaγγ* ≤ ×10<sup>−10</sup> GeV<sup>−1</sup>, ALP masses 10<sup>−1</sup> neV ≤ *ma* ≤ 10<sup>2</sup> neV, and 800 different realizations of the inter-cluster magnetic field, a binned likelihood analysis similar to [44] was performed. The best fits with and without ALPs were compared to the observed spectrum, and the result is shown in Figure 7. A joint likelihood was calculated; parameter space regions were excluded at the 99.9% confidence level and compared with the previous results from H.E.S.S. [39] and with the *Fermi*-LAT observations of NGC 1275 [44] in Figure 7.

**Figure 7.** (**Left**) Likelihood curves for the observed spectrum of PKS 2155-304. Solid lines represent the best fits including photon–ALP oscillations and the best spectral fit without oscillations. Reprinted from Zhang et al. [76]. (**Right**) Exclusion regions derived in [76], compared with the exclusion regions from H.E.S.S. observations of PKS 2155-304 [39] and *Fermi*-LAT observations of NGC 1275 [44]. Reprinted from Zhang et al. [76].

### *3.7. H.E.S.S. Study with Galactic Sources*

More recently, H.E.S.S. data of Galactic TeV *γ*-ray sources were used to search for ALP oscillation effects [77]. Ten sources, mainly supernova remnants and pulsar wind nebulae studied by H.E.S.S., were utilized. Sources in the Galactic plane allow probing the ALP parameter space at higher ALP masses, *ma* > 10<sup>−7</sup> eV, thanks to the strength of the Galactic magnetic field, an important factor for the photon–ALP oscillation, as seen in Equation (9). The ALP model was obtained by multiplying a spectral fit without ALPs by the *Pγγ* for a given parameter set (*ma*, *gaγγ*), and including the instrument energy resolution. As above, for each set of parameters (*ma*, *gaγγ*) the ALP model was fitted to the observed spectrum, and a *χ*<sup>2</sup> value was calculated and compared to the best fit over the whole parameter space. The best parameters were deduced from the *χ*<sup>2</sup> calculation; however, since the photon–ALP conversion is degenerate in coupling and magnetic field, and the induced irregularities do not scale linearly with the ALP parameters, a threshold value for excluding ALP parameters was derived from Monte Carlo simulations. In [77], for example, the threshold value is calculated from Monte Carlo simulations and compared to the difference in *χ*<sup>2</sup> values between each set of ALP parameters and the best fit over the whole parameter space. Since a scan of the whole parameter space is not feasible [44], it is assumed that the overall shape of the probability distribution of the alternative hypothesis (with ALPs) can be approximated with the null distribution (no ALPs); it has been shown that such an approach yields conservative limits [44]. The results of Liang et al. [77] were consistent with other limits, but were uniquely sensitive towards the higher mass range. This showed that Galactic observations of TeV sources can improve and further constrain the high-mass part of the ALP parameter space.
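The Monte Carlo derivation of an exclusion threshold can be illustrated with a toy null distribution. Here the "ALP-like" extra flexibility of the alternative model is emulated crudely by letting it absorb the single largest bin residual; a real analysis refits the full ALP model to every simulated spectrum, but the thresholding logic is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_null_delta_chi2(n_sims=2000, n_bins=20):
    """Toy Monte Carlo of the no-ALP null: fluctuate a smooth spectrum
    within its errors and record the chi^2 improvement a more flexible
    (ALP-like) model could claim. The flexibility is emulated here by
    absorbing the single largest normalized residual."""
    delta = np.empty(n_sims)
    for k in range(n_sims):
        resid = rng.normal(0.0, 1.0, n_bins)       # normalized residuals
        chi2_null = np.sum(resid**2)
        chi2_alp = chi2_null - np.max(resid**2)    # best single-bin absorption
        delta[k] = chi2_null - chi2_alp
    return delta

# Exclusion threshold: 95% quantile of the null Delta-chi^2 distribution.
delta = simulate_null_delta_chi2()
threshold = np.quantile(delta, 0.95)
print(threshold)
```

An observed Δ*χ*<sup>2</sup> above this threshold would then exclude the corresponding ALP parameter set at the 95% confidence level, mirroring the procedure described in the text.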

Other studies using *Fermi*-LAT observations combined with IACT results have been carried out, using the MAGIC [78] and H.E.S.S. [79] data. In [78], both the signatures induced by photon–ALP oscillations and the step-like flux suppression at energies *E<sup>γ</sup>* > *Ecrit* in the spectrum of NGC 1275 were investigated. As noted above, the irregularity estimator in Equation (13) is a reduced *χ*<sup>2</sup>; for its general applicability in testing fits to observed data and its simplicity of calculation, the *χ*<sup>2</sup> test has been used in several works [77,78]. For each considered set of ALP parameters and each magnetic field realization, the photon survival probability is calculated and multiplied by the best fit of the time-averaged spectrum without ALP effects, and *χ*<sup>2</sup> values for each of these fits are calculated. The ALP hypothesis is tested using Δ*χ*<sup>2</sup> = *χ*<sup>2</sup>*wALP* − *χ*<sup>2</sup>*w*/*oALP*. Based on the distribution of these values for each set of ALP parameters, the exclusion region is evaluated under specific criteria and ALP parameters are excluded. Malyshev et al. [78] considered 1000 different random realizations of the cluster magnetic field (modelled as in [44]) for couplings 10<sup>−14</sup> GeV<sup>−1</sup> ≤ *gaγγ* ≤ 10<sup>−9</sup> GeV<sup>−1</sup> and ALP masses 10<sup>−2</sup> neV ≤ *ma* ≤ 10<sup>2</sup> neV. Combining with *Fermi*-LAT observations extends the available MAGIC energy range, and by searching for both spectral patterns a higher sensitivity to the photon–ALP coupling is reached, dropping down to *gaγγ* ∼ 10<sup>−12</sup> GeV<sup>−1</sup>. The result was the exclusion of a broader part of the ALP parameter space compared to the previous analysis of the *Fermi*-LAT data alone. The excluded region also included the part of the ALP parameter space in which ALPs could constitute DM. This showed the potential of combining data from different instruments to enlarge the accessible ALP parameter space and increase the sensitivity. Following previous interest in the effects ALP oscillations could have on BL Lac spectra [80,81], recent works investigated the same effects using simulations for the upcoming experiments and showed that BL Lacs could be used for future studies of ALP oscillations [82].

### *3.8. Supernova Remnants*

Expanding on their previous work [83], Xia et al. [84] performed a search for spectral irregularities in three Galactic supernova remnants, combining GeV data from *Fermi*-LAT with TeV data from IACTs (H.E.S.S., MAGIC and VERITAS). The broadband spectra were fitted with models with and without photon–ALP conversion in the Galactic magnetic field. The ALP hypothesis was tested using a *χ*<sup>2</sup> analysis, and the combined limits were again shown to be inconsistent with limits already set by CAST. The authors speculated that possible reasons for this result could be the uncertain connection between the *Fermi*-LAT spectrum and the IACT observations, which are not easily cross-calibrated in energy, and also the systematic uncertainties of the instruments, which were not taken into account [84]. This approach is likely to be revisited once CTA starts taking data.

### *3.9. Studies Obtained Comparing Data from Different Blazars*

In [79], the *Fermi*-LAT and H.E.S.S. data of two BL Lacs are analyzed, and two different EBL models are probed. The ALP model included mixing in the inter-cluster magnetic field, modeled as a Gaussian turbulent field with zero mean and variance *σB*, as in [85], and in the Galactic magnetic field [40]. The ALP hypothesis was evaluated in a similar way as in [44], using a likelihood ratio test. The results showed an improvement of the fit when ALP models are included, and set constraints on the ALP parameter space consistent with previously obtained ones.

In [86], the highest-energy spectra of AGNs studied by *Fermi*-LAT and IACTs are compared, showing that the inclusion of photon–ALP oscillation effects improves the agreement of the standard AGN model with the data. Recently, the analysis of Ajello et al. [44] was revisited by Cheng et al. [74] using a different analysis method, calculating the irregularity of the spectrum of NGC 1275. Aiming to measure the irregularity of the spectrum, an estimator needs to be chosen. Looking back at the article by H.E.S.S. [39], one could decide to use the estimator from Equation (14). A possible problem arises in the case of a large number of energy bins (∼100, as in [74]), since the ALP signatures might become wider than the bin size, making this kind of estimator insensitive to such alterations. Using energy windows instead of triplets of spectral points, and assuming a power-law model in each energy window, Cheng et al. [74] proposed an alternative version of the estimator,

$$\mathcal{I}_{alt} = \sum_{i} \sum_{j} \frac{\left(\phi_{i,j}^{pl} - \phi_{i,j}\right)^2}{\sigma_{i,j}^2},\tag{16}$$

where *i* and *j* denote the energy window and bin, respectively, *φpl* is the flux given by the power-law spectral fit in each energy window, and *φ* and *σ* are the measured flux and its uncertainty, respectively [74]. Each of the simulated ALP models was fitted assuming a baseline log-parabola. From the assumption that the observed irregularity can be explained by the photon conversion connected to a given set of ALP parameters, exclusion limits were set. This study included mixing in the intracluster magnetic field and in the Galactic magnetic field of the Milky Way. Excluded couplings are *gaγγ* > 3 × 10<sup>−12</sup> GeV<sup>−1</sup> for ALP masses *ma* ∼ 1 neV at the 95% confidence level. The results of this search show the possibility of further improving the constraints by combining NGC 1275 observations with observations of another source, PKS 2155-304 [74].
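The windowed estimator of Equation (16) can be sketched with a least-squares power-law fit per window; the window size and the spectrum below are illustrative only:

```python
import numpy as np

def windowed_irregularity(energy, flux, sigma, window=5):
    """Estimator in the spirit of Equation (16): split the spectrum into
    energy windows, fit a power law per window (least squares in log-log
    space), and sum the squared flux residuals over the errors."""
    total = 0.0
    for start in range(0, len(energy) - window + 1, window):
        sl = slice(start, start + window)
        x, y = np.log(energy[sl]), np.log(flux[sl])
        slope, norm = np.polyfit(x, y, 1)     # log-log straight line
        pred = np.exp(norm + slope * x)       # power-law prediction per bin
        total += np.sum((pred - flux[sl])**2 / sigma[sl]**2)
    return total

e = np.logspace(-1, 1, 20)   # TeV, illustrative binning
f = 1e-11 * e ** -2.0        # pure power law -> ~zero irregularity
print(windowed_irregularity(e, f, 0.1 * f))
```

Because each window spans several bins, broad ALP-induced distortions that straddle many bins still produce residuals, addressing the insensitivity of the triplet estimator mentioned above.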

### *3.10. ALP-Photon Back Conversion in the Galactic Magnetic Field*

As investigated by Long et al. [32], new observations of VHE *γ*-ray sources could lead to the detection of a flux enhancement due to ALP–photon back-conversion in the Galactic magnetic field. This enhancement is expected at energies *Ecrit* ∼ 100 TeV [32] and could be detected by the Large High-Altitude Air Shower Observatory (LHAASO) [87], CTA, and the planned Southern Wide-field Gamma-ray Observatory (SWGO) [88]. Long et al. [32] analyzed HE and VHE *γ*-ray data from three promising AGNs and extrapolated the spectra to energies *E* ∼ 100 TeV. The assumed intrinsic spectra were then folded with the *Pγγ*, assuming photon–ALP conversion in the source magnetic field and back-conversion in the magnetic field of the Milky Way. These spectra were compared to the ones obtained by including only the EBL and CMB attenuation. The results showed that, in the relevant energy range (above *E* ∼ 100 TeV), the predicted flux enhancement exceeds one order of magnitude and lies above the instrument sensitivity, which will allow constraints to be set on the ALP parameter space [32]. It is also emphasized that, to set stringent constraints, better estimates of the intrinsic spectra, magnetic fields and EBL attenuation need to be established, all of which are expected with the upcoming experiments in HE and VHE *γ*-ray astronomy.

The next generation of IACTs, CTA, is expected to reduce the uncertainties in the spectra of astrophysical sources such as active galaxies and BL Lacs and to increase the sensitivity to photon–ALP oscillations [45]. In this way, it may surpass the current constraints by broadening the part of the ALP parameter space available for *γ*-ray studies and excluding it even further. A comparable performance is expected from future laboratory experiments: the axion helioscope IAXO [14] and the Any Light Particle Search II (ALPS II) [89].

### **4. Outlook: The Cherenkov Telescope Array**

In the previous section, we discussed the current constraints set by IACTs on ALPs. As noted, several studies using H.E.S.S. and MAGIC data have been performed to date, but there is still room for improvement with the upcoming generation of experiments. In particular, CTA is expected to probe energies up to *E<sup>γ</sup>* ≈ 300 TeV, which directly improves the possibility of studying ALP manifestations. Abdalla et al. [45] simulated observations of the radio galaxy NGC 1275. The magnetic field of the Perseus cluster is modeled as a random field with Gaussian turbulence, while the Galactic magnetic field follows Jansson and Farrar [40]. A conservative value of 10 μG was set for the central magnetic-field strength, along with the other parameters listed in [45]. Using three different sets of ALP parameters with 100 different magnetic field realizations, *Pγγ* was calculated using the gammaALPs code, which solves the equations of motion of the photon–ALP system with the transfer matrix method. To account for other effects that could impact the photon flux, gammaALPs includes EBL absorption, QED dissipation, and CMB effects. Observations in both the quiescent and the flaring state are included in a ≈300 h exposure. The authors included the systematic uncertainties of the instrument, and fits are performed both with and without ALP effects. As the energy binning is of great importance for observing wiggles in the spectrum, three different sets of binning parameters are used, and fits for each of them are performed by maximizing the likelihood summed over 40 energy bins. For each set, 100 different magnetic field realizations are computed and the likelihood values corresponding to the quantile *Q* = 0.95 of the distributions are chosen. The 95% and 99% confidence intervals are obtained with Monte Carlo simulations.
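The full transfer-matrix computation is what codes such as gammaALPs perform; a much simpler analytic estimate of *Pγγ* exists in the weak-mixing limit with *N* equal random domains. The sketch below implements that textbook approximation (not the method of [45]); the prefactor is the natural-unit conversion of *g B s*/2, and all input values are illustrative:

```python
import numpy as np

def single_domain_prob(g11, b_mug, s_kpc, phase=0.0):
    """Photon-to-ALP conversion probability in ONE coherent domain in the
    weak-mixing limit: P0 = (g B s / 2)^2 (sin x / x)^2, with x the
    oscillation phase across the domain (x -> 0 well above the critical
    energy). g11 in units of 1e-11 GeV^-1, B in micro-Gauss, s in kpc."""
    GBS_HALF = 0.5 * 1e-11 * 1.953e-26 * 1.564e35  # (g B s)/2 prefactor, ~0.0153
    amp = g11 * b_mug * s_kpc * GBS_HALF
    return (amp * np.sinc(phase / np.pi)) ** 2     # np.sinc(t) = sin(pi t)/(pi t)

def many_domain_prob(p0, n_domains):
    """Total gamma->ALP probability over N equal random domains, using the
    standard incoherent average P = (1/3)(1 - exp(-3 N P0 / 2))."""
    return (1.0 - np.exp(-1.5 * n_domains * p0)) / 3.0

p0 = single_domain_prob(g11=1.0, b_mug=1.0, s_kpc=1.0)   # ~2.3e-4
p_tot = many_domain_prob(p0, n_domains=100)              # ~0.011
print(p0, p_tot)
```

The many-domain probability saturates at 1/3 (full photon–ALP equilibration), which is why strong fields extending over many coherence lengths, as in Perseus, are such favorable targets.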

The results showed that, in contrast to the quiescent state, the flaring state of the source provides a stronger exclusion of the ALP parameter space, reaching a level where ALPs could constitute the entirety of DM. A probable reason for this is the strong background cut on the quiescent data, which excludes the low-energy bins from the analysis, whereas flaring-state observations extend to lower energies. As the authors conclude, this demonstrates the great importance of observing the high-activity states of this and other sources that have yet to be studied. Changes in the magnetic field parameters are also tested, and the projected exclusion regions are presented in Figure 8.

It is important to note that the constraints on the ALP parameter space are sensitive to changes in the assumed parameter values of the model of the magnetic field of the Perseus galaxy cluster. Moreover, it is found that a finer energy binning gives stronger constraints, as the analysis becomes more sensitive to the small and fast oscillations in the spectrum caused by the photon–ALP oscillations. The projected limits obtained in [45] can be seen in Figure 8. Compared to future laboratory experiments (e.g., ALPS II [89]), CTA exclusions of the ALP parameter space will be dominated by the systematic uncertainties of the model [45]; CTA is expected to have a sensitivity similar to the planned IAXO [14] and ALPS II [89] experiments.

**Figure 8.** (**Top**) Projected CTA exclusions on the ALPs parameter space for different assumptions on the intracluster magnetic field parameters. Reprinted from Abdalla et al. [45]. (**Bottom**) Projected limits from the CTA simulations, compared to constraints on the ALPs parameter space with *Fermi* LAT and H.E.S.S. Reprinted from Abdalla et al. [45].

### **5. Summary and Conclusions**

ALPs are one of the most promising candidates to solve the strong *CP* problem, and also a viable solution to the long-standing mystery of astrophysics, the existence of DM. In this review, searches for ALPs were presented, focusing on VHE *γ*-ray astronomy and IACTs. Current constraints set with IACTs [39] are still valid, and complementary constraints are set by other experiments and instruments, such as axion helioscopes; see, e.g., [12,17–19,44]. Even though a great number of searches have been performed, the ALP parameter space still leaves room for future developments. Probably the most interesting part of the yet-unexplored ALP parameter space is the one where ALPs could explain and constitute most or all of the DM in the universe. This region is within reach of future *γ*-ray experiments, such as CTA [45] and SWGO [88], or LHAASO [32], which will be able to explore energies up to about 100 TeV and exclude ALP masses of *ma* ∼ 200 neV [45]. With these and other upcoming laboratory axion experiments, constraints on the ALP parameter space, or even a possible detection of ALPs, are increasingly anticipated.

**Author Contributions:** Conceptualization, I.B., M.D.; methodology, I.B.; validation, A.D.A.; writing—original draft preparation, I.B., M.D. and M.M.; writing—review and editing, A.D.A., M.M.; supervision, M.D. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received funding from the Italian Ministry of Education, University and Research (MIUR) through the "Dipartimenti di eccellenza" project Science of the Universe, the University of Padova for the XXXVIII PhD call cycles. M.M. acknowledges the Croatian Science Foundation (HrZZ) Project IP-2016-06-9782 and the University of Rijeka Project 13.12.1.3.02.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

### **Abbreviations**

The following abbreviations are used in this manuscript:


### **Notes**


### **References**


### *Review* **20 Years of Indian Gamma Ray Astronomy Using Imaging Cherenkov Telescopes and Road Ahead**

**Krishna Kumar Singh 1,\* and Kuldeep Kumar Yadav 1,2**


**\*** Correspondence: kksastro@barc.gov.in

**Abstract:** The field of ground-based *γ*-ray astronomy has made very significant advances over the last three decades with the extremely successful operations of several atmospheric Cherenkov telescopes worldwide. The advent of the imaging Cherenkov technique for indirect detection of cosmic *γ* rays has immensely contributed to this field with the discovery of more than 220 *γ*-ray sources in the Universe. This has greatly improved our understanding of the various astrophysical processes involved in the non-thermal emission at energies above 100 GeV. In this paper, we summarize the important results achieved by the Indian *γ*-ray astronomers from the GeV-TeV observations using imaging Cherenkov telescopes over the last two decades. We mainly emphasize the results obtained from the observations of active galactic nuclei with the *TACTIC* (TeV Atmospheric Cherenkov Telescope with Imaging Camera) telescope, which has been operational since 1997 at Mount Abu, India. We also discuss the future plans of the Indian *γ*-ray astronomy program with special focus on the scientific objectives of the recently installed 21 m diameter MACE (Major Atmospheric Cherenkov Experiment) telescope at Hanle, India.

**Keywords:** gamma ray astronomy; imaging atmospheric Cherenkov technique; TeV gamma-rays; non-thermal radiation

### **1. Introduction**

The concept of gamma-ray astronomy (GRA) was first put forward by Philip Morrison in 1958 [1]. In a seminal paper, Morrison suggested that nuclear or high-energy processes give rise to continuous and discrete spectra of *γ*-radiation over the energy range of 0.2–400 MeV in astronomical objects. According to Morrison, continuum *γ*-radiation from astronomical sources is produced by three physical processes: synchrotron radiation, bremsstrahlung or braking radiation, and the radiative decay of neutral pions, whereas discrete *γ*-ray lines can be emitted by the de-excitation of nuclei formed by radioactive decay and by electron-positron pair annihilation. These *γ*-ray photons are directly related to their origins and relatively accessible to observation. The Crab Nebula, the extragalactic radio source M87, and Cygnus A were suggested as possible sources of cosmic *γ*-radiation. In 1960, Cocconi proposed the possibility of detecting TeV *γ*-ray photons from discrete astronomical sources using an air shower telescope [2]. This motivated the Crimean group to develop a ground-based experimental technique for recording the extensive air showers initiated by TeV *γ*-ray photons, by detecting the Cherenkov radiation they produce in the Earth's atmosphere [3]. Using this experiment, the Crimean group observed the Crab Nebula, radio galaxies, and supernova remnants.

The first telescope for ground-based GRA using the atmospheric Cherenkov technique was constructed by G. G. Fazio in 1968. The Whipple telescope, with its 10 m optical reflector, reported the first tentative detection of cosmic *γ* rays with energy above 250 GeV in 1972 [4]. Meanwhile, the first detection of cosmic *γ*-radiation with the space-based telescope Explorer XI had been demonstrated in 1965 [5]. By this period, it was thus realized that cosmic *γ*-ray photons

**Citation:** Singh, K.K.; Yadav, K.K. 20 Years of Indian Gamma Ray Astronomy Using Imaging Cherenkov Telescopes and Road Ahead. *Universe* **2021**, *7*, 96. https://doi.org/10.3390/ universe7040096

Academic Editors: Ulisses Barres de Almeida and Michele Doro

Received: 9 February 2021 Accepted: 6 April 2021 Published: 10 April 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

can be either observed directly by space-based detectors onboard satellites/balloons in the stratosphere or indirectly from the ground by detecting the extensive air showers produced by the *γ*-ray photons. Space-based GRA was developed to cover the MeV-GeV energy range, whereas ground-based GRA explores the GeV-TeV energy band. In 1989, the atmospheric Cherenkov imaging technique was applied to discriminate between images of cosmic *γ*-ray- and proton-induced showers based on their shape and orientation at the focal plane of the Whipple telescope. This technique offered the first detection of *γ* rays with energy above 700 GeV from the Crab Nebula at a very high statistical significance level [6]. With their excellent capability of rejecting the cosmic-ray hadronic background, a large field of view, and very good energy resolution, imaging atmospheric Cherenkov telescopes (IACTs) laid a strong foundation for present and future ground-based GRA in the GeV-TeV energy range.

In India, ground-based experimental investigations for detecting celestial TeV *γ* rays were initiated in 1969 by setting up an atmospheric Cherenkov telescope with two searchlight mirrors mounted on an orienting platform at Ooty (altitude ∼ 2300 m) in south India, more or less concurrently with the corresponding efforts made worldwide. Over the years, the number of mirrors was increased and the detection methods were improved. For 16 years, an array of tracking Cherenkov telescopes was operated in and around Ooty for observing TeV *γ*-ray emission from radio pulsars. In order to take advantage of better sky conditions, the experimental set-up was shifted from Ooty to Pachmarhi (altitude ∼ 1075 m) in central India in 1986. Another Indian group started observing the night sky in 1972 by setting up an atmospheric fluorescence experiment at Gulmarg (altitude ∼ 2700 m) in north India, using two bare-faced photomultiplier tubes separated by 1 m. Later on, an experimental set-up consisting of six equatorially mounted parabolic searchlight mirrors, similar to the Pachmarhi facility, was also installed at Gulmarg in 1984 [7]. This was the first generation of indigenously developed multi-mirror atmospheric Cherenkov telescopes commissioned at Gulmarg for dedicated GRA in the TeV energy range. Surprisingly, the mirrors for the Gulmarg Cherenkov telescope were found in the Mumbai junk market. These parabolic mirrors were searchlight reflectors of Second World War vintage, and evidently not of great optical quality. A special design feature of this telescope was that its two identical sections could be deployed either for observing two different sources concurrently, or for observing the same source with one section viewing on-source and the other engaged in simultaneous off-source monitoring. The latter mode was suitable for observations of *γ*-ray sources with episodic emissions. This facility was operated during 1984–1989.
A multi-station wide-angle photomultiplier-tubes-based experiment was also set up simultaneously at Gulmarg and Srinagar (altitude ∼ 1500 m) to search for simultaneous arrival of short timescale *γ*-ray and optical emissions from the explosive evaporation of primordial black holes [8–10]. This experiment employed three closely placed, vertically oriented, large-area photomultiplier tubes at a baseline of 30 km between Gulmarg and Srinagar. The most promising result from this experiment was based on a time-series analysis of atmospheric Cherenkov pulses recorded in the direction of Cygnus X-3.

Equipped with over twenty years of experience in experimental techniques and a good understanding of the theoretical concepts in GRA, researchers associated with the Gulmarg observatory moved to Mount Abu (altitude ∼ 1300 m) in the western region of India in 1992 to set up a new GRA facility with higher detection sensitivity. The first IACT in Asia, TACTIC (TeV Atmospheric Cherenkov Telescope with Imaging Camera), was installed at Mount Abu in 1997 [11]. In order to reduce the energy threshold to a few tens of GeV, the Pachmarhi group shifted their activity to a high-altitude site with a low night-sky background at Hanle (altitude ∼ 4200 m) in the Himalayas. A seven-element wavefront-sampling telescope, HAGAR (High Altitude GAmma Ray telescope), was commissioned at Hanle in 2008 [12]. Another IACT, namely MACE (Major Atmospheric Cherenkov Experiment), has recently been installed at Hanle by a collaboration of Indian gamma-ray astronomers called the Himalayan Gamma Ray Observatory (HiGRO) [13].

Significant advances in instrumentation and technology have led to the development of several state-of-the-art IACTs around the globe in the last 30 years, and very exciting results have been produced by them in the *γ*-ray energy range above ∼100 GeV [14–20]. The motivation behind GRA using IACTs is to explore the most violent and energetic physical processes, addressing questions related to the origin of the GeV-TeV cosmic *γ* rays and to the radiative and acceleration processes at work under extreme physical conditions in the non-thermal Universe. The field of GRA is also expected to make an impact on cosmology and astroparticle physics through the propagation effects of *γ* rays over cosmological distances. Unveiling the nature of dark matter candidates such as weakly interacting massive particles and axion-like particles, and comprehending the astrophysical acceleration processes and the lepto-hadronic origin of high-energy *γ* rays, are the major challenges for present and future instruments in ground-based GRA research. In this contribution, we present the scientific achievements of the Indian GRA program using IACTs over the last twenty years and the future roadmap for the next decade. The structure of the paper is as follows. In Section 2, important results from the early days of the Indian GRA program are briefly described. The current status of the experimental facilities and the results obtained so far are discussed in Sections 3 and 4, respectively. Section 5 outlines the future program of Indian GRA with IACTs. Finally, we conclude in Section 6.

### **2. Early Times: Important Achievements**

Ground-based GRA was pioneered in India in 1969, soon after the discovery of a pulsar in the Crab Nebula in 1968. In the beginning, a very systematic search was made for pulsed *γ*-ray emission above the detection threshold energy of ∼10 TeV from six pulsars using the Cherenkov telescopes at Ooty [21]. Not having found any statistically significant TeV *γ*-ray emission from any source, upper limits of the order of 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> were placed on the *γ*-ray fluxes [22]. A number of isolated pulsars (Crab, Vela, PSR0355+54, Geminga) and X-ray binaries (Hercules X-1) were monitored by the experimental set-ups at Ooty and Pachmarhi, and positive signals were detected from them on several occasions. The Ooty array started observations of the Crab pulsar in 1977, and signals in the time-averaged phasograms were detected only on a few occasions. The Durham University group detected pulsed TeV *γ*-ray emission from the Crab pulsar during two transients lasting 15 min each in October 1981 [23], while the Ooty group reported that the transient duration could be on time scales of minutes or even seconds [24]. A 15 min interval was identified on 23 January 1985, during which pulsed TeV *γ* rays were detected at the 5.1*σ* confidence level from the Crab pulsar [25]. An important feature of this observation was that two independent telescopes tracking the Crab pulsar from locations at Ooty separated by 11 km detected the signal, while a third telescope adjacent to one of them but pointing towards a background region did not show any effect. This strengthened the inference of a transient pulsed TeV *γ*-ray emission, with the peak of the pulse at the position of the radio main pulse. An identical burst from the Crab pulsar was detected at a 6*σ* statistical significance level on 2 January 1989 while operating at Pachmarhi with an altogether different set-up [26]. This burst was observed by all five quasi-independent telescopes and lasted for 5 min. Evidence for variability over time scales of minutes, and possibly hours, in the TeV light curves of the Crab pulsar was also investigated [27].

The observations of the Vela pulsar were made between February and March 1979 for 17 nights by the Ooty observatory, and the detection of *γ* rays at energies above 500 GeV from the Vela pulsar was reported [28]. This observation detected two narrow peaks separated by 0.42 in phase (a characteristic feature noticed at lower energies by satellite-based observations) but did not provide absolute phase information. Data from later observations provided information on the absolute phase. The signal-to-noise ratio improved significantly when lower-energy events were preferentially selected, and the resultant TeV *γ*-ray phasogram of the Vela pulsar showed a 4*σ* peak aligned with the optical first-pulse position [29]. A weak second pulse, separated by 0.43 in phase from the main pulse, was also noticed at the 1.5*σ* level. From the excesses observed at different *γ*-ray energy thresholds (from 4.9 TeV to 10.4 TeV) over a span of five years, the integral energy spectrum was found to follow a power law with slope −(2.5 ± 0.3). The radio pulsar PSR0355+54 was observed in December 1987 for 25 h in the TeV energy range at Pachmarhi, and a steady pulsed emission signal at a phase of 0.53 with respect to the radio pulse was detected at the 4.3*σ* level above a 1.3 TeV energy threshold [30]. A large increase in the trigger rate in the direction of the X-ray binary candidate Hercules X-1 was observed with the atmospheric Cherenkov telescope array at Pachmarhi on 11 April 1986. The accidental coincidence rate did not show any increase during this burst, and the number of detected *γ*-ray events amounted to ∼54% of the cosmic-ray flux, resulting in a 42*σ* effect [31]. This was the largest TeV *γ*-ray signal from any source until that time. In a search for pulsed TeV gamma rays in the 1984–1985 archival data on the Geminga pulsar taken at Ooty, two peaks at phases 0.4 and 0.9 (as observed in COS-B satellite data), with a separation of 0.5 in phase, were seen during a few minutes of short-term activity of the source [32].

A modest start in the then-budding field of GRA at Gulmarg was made in 1972 by setting up an atmospheric scintillation experiment to search for prompt *γ*-ray emissions from supernova explosions and primordial black hole outbursts. In a supplementary mode of operation, this experiment was deployed with the atmospheric Cherenkov detection technique to investigate the cosmic-ray energy spectrum around the *knee* position and to obtain corroborative evidence for ultra-high-energy *γ*-ray emission from the binary system Cygnus X-3 during the exploratory stage between 1974 and 1984. The energy threshold of the system for showers initiated by primary cosmic rays was determined to be ∼500 TeV from the fluctuations in the night-sky background [33]. A pulse-height analysis of the Cherenkov pulses detected by the wide-angle photomultiplier-tube system at Gulmarg indicated a break near 10<sup>15</sup> eV in the cosmic-ray spectrum, which was argued to be primary in nature [33]. In the drift-scan mode (the telescope is kept stationary and the sky is scanned as the Earth rotates), the Gulmarg experimental facility detected Cherenkov pulses during 1976–1978 corresponding to an average event rate of 55 h<sup>−1</sup>. During this ground-based search for episodic cosmic events at Gulmarg, the arrival-time distribution of atmospheric Cherenkov pulses revealed a significant overabundance of cosmic-ray events with inter-event separations of <40 s [34]. Analysis of events recorded during 1976–1977 indicated the presence of a 4.5*σ*-significant phase-dependent signal from the binary system Cygnus X-3 with the characteristic modulation period of 4.8 h [35]. This was the most important result from the Gulmarg wide-angle photomultiplier-tube experiment, based on a time-series analysis of the Cherenkov pulses. Following its publication [35], the Gulmarg result on Cygnus X-3 attracted a lot of international attention.
Evidence for possible TeV *γ*-ray emission from several candidate sources was obtained during the observation period 1985–1990 with the multi-mirror atmospheric Cherenkov telescope at Gulmarg [36]. A possible discovery from these observations was the detection of a TeV *γ*-ray signal from the prototype cataclysmic variable AM Herculis [37]. Other interesting results with the Gulmarg telescope were obtained from observations of the pulsar PSR 0355+54 [38], the Crab nebula/pulsar [39], and the X-ray binary source Cassiopeia *γ*-1 [40], and from the search for a millisecond pulsar in Cygnus X-3 [41].

The real breakthrough in ground-based GRA occurred in 1989, when the Whipple Collaboration used the Hillas image parameters (proposed by A. M. Hillas in 1985 [42]) to distinguish very effectively between background hadronic showers and TeV gamma-ray showers from a point source [6]. Subsequently, the development of several IACTs for GRA in the GeV-TeV energy range was initiated throughout the world. The early leads from the Gulmarg exploratory phase and contemporary global trends in the field of GRA provided the motivation for the researchers associated with the Gulmarg observatory to set up lower-threshold-energy *γ*-ray telescopes such as TACTIC and MACE, based on the imaging atmospheric Cherenkov technique. The TACTIC telescope belongs to the first-generation IACTs such as Whipple, HEGRA, CANGAROO, SHALON, and CAT, whereas MACE can be placed among second-generation state-of-the-art telescopes such as H.E.S.S., VERITAS, and MAGIC on the world map.

### **3. TACTIC Telescope**

The TACTIC telescope was set up at Mount Abu (24.6◦ N, 72.7◦ E, 1300 m above sea level) in 1997. A comprehensive site survey was performed in 1993, and Gurushikhar in Mount Abu turned out to be the best available location for GRA research using the imaging Cherenkov technique [43]. This site offered a very significant enhancement in observation time (∼1200 h per year, spread more or less evenly throughout the year) with respect to the Gulmarg observatory. Mount Abu is a hill resort with good logistics, a mild climate, and a dust-free atmosphere, and offers ready accessibility from major Indian cities. The site is located in nearly the same longitudinal belt in which several major astronomical experiments in India were operational during that period. This fortuitous longitude clustering was greatly helpful in time-coordinated multi-band observations of candidate *γ*-ray sources. The longitude of the Mount Abu site is also advantageous for long-term monitoring of *γ*-ray sources as compared to other GRA observatories around the world.

The design of the TACTIC telescope is based on the imaging atmospheric Cherenkov technique for the indirect detection of TeV photons from cosmic sources [11]. A photograph of the observatory at Mount Abu is depicted in Figure 1. The instrument at the center of the array has been deployed as an imaging unit for TeV *γ*-ray observations since 1997. The imaging element is equipped with an altitude-azimuth-mounted light collector of ∼4.0 m diameter and ∼3.8 m focal length. The light collector employs 34 front-facing, aluminum-coated, spherical glass mirrors of 0.6 m diameter each, together providing a light collection area of ∼10 m<sup>2</sup>. When all 34 mirror facets are properly aligned on the basket, the overall light reflector corresponds to a quasi-paraboloid surface. With a focal length-to-diameter ratio of ∼1, the hybrid design of the light collector is close to the Davies-Cotton design. This was achieved by deploying the shorter-focal-length mirror facets close to the principal axis of the basket, while mirrors with longer focal lengths are placed around the periphery using longer studs on the frame structure to raise their pole position. This arrangement minimizes the off-axis effects on the overall spot size of the reflector. A maximum spot size of ∼4 arcmin can be expected in the image plane of the telescope. The imaging camera at the focal plane uses an array of photomultiplier tubes (pixels) to detect the Cherenkov light flash with a resolution of 0.31◦. Source observations using TACTIC started in early 1997 with only an 81-pixel imaging camera covering a field of view of ∼2.8◦ × 2.8◦. Within a few days of its first light in 1997, the telescope with its prototype 81-pixel camera successfully detected flaring activity from the blazar Mrk 501 during April–May 1997. This observation was nearly synchronous with those of five other gamma-ray telescopes operating around the globe [44].
The prototype camera was first upgraded to 144 pixels, and the final camera configuration of 349 pixels, with a field of view of ∼6◦ × 6◦, was attained in December 2000. Simulation studies using CORSIKA [45] suggested threshold energies of the TACTIC imaging telescope for *γ*-rays and protons of ∼1.0 TeV and ∼1.8 TeV, respectively. The sensitivity of the telescope was estimated as the detection of a 5*σ* steady signal from the standard candle Crab Nebula in 25 h. A detailed description of the TACTIC telescope design and instrumentation can be found in [46–51]. The data recorded by the TACTIC telescope are first corrected for inter-pixel gain variation and then subjected to the standard image-cleaning procedure [52]. The image-cleaning threshold levels (for boundary and core pixels) are optimized on the Crab Nebula data. The clean images are characterized by calculating their *Hillas parameters* [42], followed by the application of the standard Dynamic Supercuts procedure [53] to segregate *γ*-ray-like events from the huge cosmic-ray background. The significance of the *γ*-ray-like excess events is estimated using the maximum-likelihood ratio method proposed by Li and Ma [54].
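The Li and Ma method has a closed form (their Equation 17), which can be sketched in a few lines. The counts in the usage example below are illustrative only (they are not the actual TACTIC data), chosen so that the excess and its error are consistent with the Crab Nebula detection figures quoted later in the text:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Statistical significance of an on-source excess following
    Li & Ma (1983), Eq. 17.  n_on and n_off are the total counts in
    the on- and off-source regions; alpha is the ratio of on-source
    to off-source exposure."""
    n_tot = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * math.log((1.0 + alpha) * n_off / n_tot)
    return math.sqrt(2.0 * (term_on + term_off))

# Illustrative counts with equal on/off exposure (alpha = 1):
# an excess of 2728 - 2281 = 447 events gives roughly 6.3 sigma.
print(round(li_ma_significance(2728, 2281, 1.0), 2))  # ~6.3
```

Note that the significance is not simply excess divided by its Poisson error; the likelihood-ratio form above correctly accounts for the fluctuation of both the on- and off-source counts.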

**Figure 1.** The TeV Atmospheric Cherenkov Telescope with Imaging Camera (TACTIC) telescope array, operational since 2001 at Mount Abu, India. The single telescope at the center is used as an imaging unit for TeV *γ*-ray observations.

A major upgrade program was taken up in 2011 to improve the overall performance of the telescope. The main motivation for the hardware and software upgrade of the system was to increase its sensitivity and lower its threshold energy. This translated into a reduction of the telescope's threshold energy from 1.8 TeV to 1.4 TeV for cosmic rays and from 1.2 TeV to 0.8 TeV for *γ*-ray events [55,56]. The application of gamma/hadron separation strategies using artificial neural networks and random forest classification further enhanced the performance of the telescope after the upgrade [57,58]. The upgraded telescope has an improved sensitivity, able to detect the TeV *γ*-ray emission from the Crab Nebula at the 5*σ* statistical significance level in an observation time of 12 h, as compared to 25 h earlier. The *γ*-ray detection rate from the Crab Nebula also increased from ∼10 h<sup>−1</sup> to ∼15 h<sup>−1</sup> (defined as one Crab Unit for TACTIC). With this significant improvement in performance, the TACTIC telescope has greatly helped in monitoring potential *γ*-ray sources in the multi-TeV energy range, both during short flaring episodes and over long durations in the low state.

### **4. Important Results from TACTIC Observations**

The TACTIC telescope started regular observations of TeV *γ*-ray sources with the full 349-pixel camera in January 2001. The first observations were carried out on the Crab Nebula for 41.5 h during 19 January–23 February 2001. A statistically significant excess of *γ*-ray-like events (447 ± 71) was detected from the direction of the Crab Nebula at the 6.3*σ* significance level. Long-term monitoring of the Crab Nebula with TACTIC between 2003 and 2010 (before the upgrade) yielded (3742 ± 192) *γ*-ray-like events with a statistical significance of ∼20*σ* in ∼400 h of live observation time [56]. Results from this observation are shown in Figure 2a–c. The differential energy spectrum of the Crab Nebula using the consolidated TACTIC data was very well described by a power law of the form

$$\frac{d\phi}{dE} = f_0 \left(\frac{E}{1\,\text{TeV}}\right)^{-\Gamma} \tag{1}$$

where *f*<sub>0</sub> = (2.66 ± 0.29) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> is the normalization constant at energy E = 1 TeV and Γ = 2.56 ± 0.10 is the photon spectral index [56]. The spectrum obtained matches reasonably well with those measured by the Whipple and HEGRA groups [59,60]. The use of artificial neural network methodology for the energy reconstruction of *γ*-ray events detected by the TACTIC telescope was found to be more effective at higher energies and led to a determination of the Crab Nebula energy spectrum in the energy range 1–24 TeV [61,62]. The TACTIC telescope has mainly been deployed for observations of blazars, which represent the dominant population of TeV *γ*-ray sources in the extragalactic Universe. Blazars are radio-loud active galactic nuclei (AGN) with an elliptical host galaxy and a supermassive black hole at the center. Their broadband non-thermal emission over the entire electromagnetic spectrum, ranging from radio to TeV *γ*-rays, is produced in a relativistic plasma jet originating from the central region and oriented at a small angle to our line of sight. A detailed description of the present understanding of blazars is given in [63,64]. Important results obtained from the TACTIC observations of potential TeV *γ*-ray sources over the last 20 years are described in the following subsections.
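Because Equation (1) is a pure power law, the integral flux above any threshold energy follows analytically as F(>E<sub>0</sub>) = f<sub>0</sub> E<sub>0</sub><sup>1−Γ</sup>/(Γ − 1) (with E<sub>0</sub> in TeV and Γ > 1). As a minimal sketch, using the TACTIC Crab Nebula parameters quoted above as a worked example:

```python
def integral_flux(f0, gamma, e_min_tev):
    """Integral flux above e_min_tev for dphi/dE = f0 * (E / 1 TeV)**-gamma.
    f0 is in ph cm^-2 s^-1 TeV^-1; the result is in ph cm^-2 s^-1.
    Valid only for gamma > 1, so that the integral converges."""
    return f0 * e_min_tev ** (1.0 - gamma) / (gamma - 1.0)

# TACTIC Crab Nebula best-fit values: f0 = 2.66e-11, Gamma = 2.56
f_crab = integral_flux(2.66e-11, 2.56, 1.0)
print(f"{f_crab:.2e}")  # ~1.7e-11 ph cm^-2 s^-1 above 1 TeV
```

The same one-liner reproduces the scale of the integral-flux upper limits quoted for other sources later in this section.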

**Figure 2.** (**a**) TeV *γ*-ray detection from the Crab Nebula with the TACTIC telescope; (**b**) cumulative significance level as a function of observation time; and (**c**) differential energy spectrum of the Crab Nebula measured by TACTIC [56]. Reprinted by permission from Springer Nature Customer Service Centre GmbH: Springer Nature, Pramana, Long-term performance evaluation of the TACTIC imaging telescope using ∼400 h Crab Nebula observation during 2003–2010, A K Tickoo et al., Copyright 2014.

### *4.1. Mrk 501*

Mrk 501 is one of the brightest TeV *γ*-ray blazars in the extragalactic Universe, at redshift *z* = 0.034. The first TeV emission from this source was detected in 1995 by the Whipple telescope above 0.3 TeV [65]. During April–May 1997, a significant detection of *γ*-ray events from this active galaxy was claimed by the newly commissioned TACTIC telescope at a statistical significance of ∼13*σ* in ∼50 h of observation time. This observation with the TACTIC telescope produced evidence for a series of TeV *γ*-ray flares from Mrk 501 [44]. The TeV *γ*-ray emission from this source was further monitored by the TACTIC telescope during March–May 2005 and February–May 2006 for ∼46 h and ∼67 h, respectively [66,67]. During the 2005 observations, no significant *γ*-ray emission was detected from the source direction, and therefore an upper limit of 4.62 × 10<sup>−12</sup> ph cm<sup>−2</sup> s<sup>−1</sup> at the 3*σ* confidence level was placed on the integral flux above 1 TeV. However, during the 2006 observations, the presence of a TeV *γ*-ray signal from the source with a statistical significance of 7.5*σ* was found. The time-averaged differential energy spectrum of the source was described by a power law in the energy range 1–11 TeV with *f*<sub>0</sub> = (1.66 ± 0.52) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> and Γ = 2.80 ± 0.27. These results closely followed those obtained by the HEGRA collaboration during 1998–1999, except for the exponential cutoff feature in the spectrum at 2.61 TeV [68].

A multi-wavelength study of the TeV *γ*-ray emission from Mrk 501 was performed using TACTIC observations during April–May 2012 [69]. Analysis of the TACTIC light curve during this period indicated a relatively high gamma-ray emission state of the source between 22 and 27 May 2012. The time-averaged differential energy spectrum measured by TACTIC during the high state of Mrk 501 is shown in Figure 3. The intrinsic TeV *γ*-ray spectrum was estimated from the observed spectrum after correcting for absorption due to the extragalactic background light (EBL), as explained in [70,71]. The intrinsic emission from the source can be described by a power law with *f*<sub>0</sub> = (2.73 ± 0.51) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> and Γ = 2.19 ± 0.18 in the energy range 0.85–17 TeV. This TeV *γ*-ray emission was satisfactorily reproduced by the synchrotron self-Compton process in the framework of the widely used homogeneous single-zone leptonic model for the broadband spectral energy distribution of blazars [72]. The model predicted a tangled magnetic field of 0.12 Gauss in a spherical emission region of radius 6.1 × 10<sup>15</sup> cm. The energy distribution of relativistic electrons in the emission zone followed a smooth broken power law with indices 2.1 and 4.9 before and after the break, respectively [69,73]. These findings are broadly consistent with the results from multi-wavelength observations of the source during April–August 2013, including data from the MAGIC and VERITAS telescopes [74].
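The EBL de-absorption step mentioned above amounts to multiplying each observed flux point by e<sup>τ(E, z)</sup>, where τ is the model-dependent optical depth. A minimal sketch follows; the (E, F, τ) points in it are placeholders invented for illustration, not values from any published EBL model:

```python
import math

def deabsorb(observed_flux, tau):
    """Intrinsic (source) flux from the observed flux, given the EBL
    optical depth tau at that energy and redshift:
    F_int = F_obs * exp(tau)."""
    return observed_flux * math.exp(tau)

# Placeholder (E [TeV], F_obs, tau) triples; tau normally grows with
# energy and redshift, hardening the intrinsic spectrum relative to
# the observed one.
points = [(1.0, 3.0e-11, 0.05), (5.0, 8.0e-13, 0.30), (15.0, 2.0e-14, 1.10)]
intrinsic = [(e, deabsorb(f, tau)) for e, f, tau in points]
```

Because τ increases with energy, the correction lifts the high-energy points the most, which is why the intrinsic photon index quoted above (Γ = 2.19) is harder than the observed one.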

**Figure 3.** Observed and intrinsic differential energy spectra of Mrk 501 during high state from 22 to 27 May 2012 measured by the TACTIC telescope. Reproduced with permission from the journal, Reference [69], Copyright 2017, Elsevier.

Mrk 501 showed a major TeV *γ*-ray flaring activity on the night of 23–24 June 2014, with a flux variation characterized by a flux-doubling timescale of a few minutes [75]. Unfortunately, the TACTIC telescope could not observe Mrk 501 during this flaring episode due to bad weather conditions. However, our near-simultaneous multi-wavelength study of this giant flaring episode suggested a correlation between the TeV *γ*-ray emission and the soft X-ray emission [76]. The soft X-ray photon spectral index was observed to be anti-correlated with the integral flux, showing harder-when-brighter behavior. The nature of the X-ray and *γ*-ray emissions from Mrk 501 differs between the low and high activity states of the source [63]. Therefore, further monitoring of Mrk 501 is very important to explore the exact nature of the TeV *γ*-ray emission, and TACTIC will continue to observe this source as one of its potential targets.
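A flux-doubling timescale such as the few-minute value quoted for the June 2014 flare is typically estimated from pairs of flux measurements under the assumption of exponential variability, F(t) ∝ 2<sup>t/T</sup>. A hedged sketch of that estimate (the times and fluxes below are invented for illustration):

```python
import math

def doubling_time(t1, t2, f1, f2):
    """Flux doubling (f2 > f1) or halving (f2 < f1) timescale between
    two measurements, assuming exponential evolution F(t) = F0 * 2**(t/T)."""
    return (t2 - t1) * math.log(2.0) / math.log(f2 / f1)

# Invented example: the flux rises from 1.0 to 4.0 (arbitrary units)
# in 10 min -> two doublings, so the doubling timescale is 5 min.
print(doubling_time(0.0, 10.0, 1.0, 4.0))  # 5.0
```

The shortest doubling timescale found in a light curve constrains the size of the emission region via causality, which is why minute-scale variability is such a strong diagnostic for blazar jets.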

### *4.2. 1ES 2344+514*

The first TeV *γ*-ray emission from the blazar 1ES 2344+514 (*z* = 0.044) was discovered by the Whipple Collaboration in 1995. The TACTIC telescope monitored this source from 18 October to 9 December 2004 and from 27 October 2005 to 1 January 2006, for a total live time of ∼60 h in the zenith angle range of 27◦–45◦ [66,77]. Analysis of the data indicated the absence of a statistically significant TeV *γ*-ray signal from the source direction. Therefore, an upper limit of 3.84 × 10<sup>−12</sup> ph cm<sup>−2</sup> s<sup>−1</sup> at the 3*σ* confidence level was estimated on the integral flux above 1.5 TeV. The derived upper limit from the TACTIC observations was in agreement with the detection of very-high-energy *γ*-ray emission in the low emission state of the source during 2005–2006 with the MAGIC telescope [78].

### *4.3. H 1426+428*

H 1426+428 was discovered in X-ray observations, at redshift *z* = 0.129. The Whipple group reported the first TeV *γ*-ray detection from this blazar in 2002, at a high statistical significance level, using the data collected during 1999–2001. The TACTIC telescope observed H 1426+428 for 244 h between March 2004 and June 2007 in the continuous source-tracking mode over the zenith angle range of 18◦–45◦ [79,80]. The TeV *γ*-ray emission from this source was found to be below the TACTIC sensitivity level during these observations. In the absence of a statistically significant detection of a TeV *γ*-ray signal from the source direction, an upper limit of 1.18 × 10<sup>−12</sup> ph cm<sup>−2</sup> s<sup>−1</sup> (∼13% of the TACTIC-detected Crab Nebula integral *γ*-ray flux above 1 TeV) at the 3*σ* confidence level was placed on the integral flux from the source. The TACTIC results on H 1426+428 were consistent with those obtained with the CELESTE system during the same period [81], but in conflict with the GT-48 telescope observations during 15–25 April 2004, wherein a *γ*-ray signal at a 5.8*σ* statistical significance level was reported [82].

### *4.4. Mrk 421*

Mrk 421, at redshift *z* = 0.031, was the first blazar detected at TeV energies by a ground-based IACT, and the second TeV source after the Crab Nebula. The pioneering Whipple telescope discovered *γ*-ray emission above 0.5 TeV from Mrk 421 in 1992. The TACTIC telescope was first deployed to monitor the TeV *γ*-ray emission from this source during April–May 1997 for 26 h. No significant detection was found, and the null result was used to estimate a 3*σ* confidence level upper limit of ∼5.0 × 10<sup>−12</sup> ph cm<sup>−2</sup> s<sup>−1</sup> on the integral flux above 2 TeV [48]. This comparatively low state of the source was consistent with the time-averaged flux estimate from contemporaneous Whipple observations above 0.3 TeV. This confirmed the satisfactory performance of the TACTIC telescope during its maiden observation campaign in 1997, soon after commissioning with the 81-pixel prototype imaging camera. Equipped with the full 349-pixel camera, the telescope was deployed to observe Mrk 421 during January–April 2004. Evidence of a TeV *γ*-ray signal from the source direction with a statistical significance of 6.8*σ* in ∼79 h was found [83]. The differential energy spectrum of the detected *γ*-ray photons was derived as a power law with Γ = 2.80 ± 0.20 in the energy range 2–9 TeV. Long-term monitoring of Mrk 421 with TACTIC was undertaken between December 2005 and April 2006 for a total observation time of ∼202 h [84]. A flaring activity was detected between December 2005 and February 2006. During the flaring period, the presence of a strong *γ*-ray signal at the 12*σ* statistical significance level in 97 h was found from the source direction.
The time-averaged differential energy spectrum during the high-activity state was reasonably fitted by a power law with *f*<sub>0</sub> = (4.66 ± 0.46) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> and Γ = 3.11 ± 0.11 in the energy range 1–11 TeV [84]. Another flaring activity was detected between January and May 2008 by the TACTIC telescope during the long-term observations of the blazar Mrk 421 from December 2006 to May 2008 [85]. The time-averaged differential spectrum during the high-emission state period was again described by a power law with *f*<sub>0</sub> = (6.8 ± 1.4) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> and Γ = 3.32 ± 0.22 in the energy band 1–10 TeV. A few important results derived from the giant flaring episodes of Mrk 421 using the TACTIC telescope are briefly discussed below.
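The integral flux above a threshold energy follows directly from such a power-law fit: for dN/dE = *f*<sub>0</sub> E<sup>−Γ</sup> (E in TeV), F(>E<sub>th</sub>) = *f*<sub>0</sub> E<sub>th</sub><sup>1−Γ</sup>/(Γ − 1). A short sketch, using the best-fit values of the December 2005–February 2006 flare quoted above:

```python
def integral_flux(f0, gamma, e_th, e_max=float("inf")):
    """Integral of dN/dE = f0 * E**(-gamma) from e_th to e_max.

    f0 in ph cm^-2 s^-1 TeV^-1, energies in TeV; requires gamma > 1
    for a finite integral when e_max is infinite.
    """
    if e_max == float("inf"):
        return f0 / (gamma - 1.0) * e_th ** (1.0 - gamma)
    return f0 / (gamma - 1.0) * (e_th ** (1.0 - gamma) - e_max ** (1.0 - gamma))

# 2005-2006 high-state fit: f0 = 4.66e-11, Gamma = 3.11, threshold 1 TeV
flux_above_1tev = integral_flux(4.66e-11, 3.11, 1.0)  # ~2.2e-11 ph cm^-2 s^-1
```

The same relation, inverted, converts a counting upper limit into the integral-flux upper limits quoted for the non-detections in this section.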

### 4.4.1. February 2010 Giant Flare

In February 2010, a giant flaring activity of Mrk 421 was observed at all energies by almost all instruments worldwide [86]. The TACTIC telescope was engaged in long-term monitoring of the TeV *γ*-ray emission from the source during November 2009–May 2010, and ∼265 h of data were collected [87]. Clean data of ∼230 h revealed the presence of a TeV *γ*-ray signal with a statistical significance of 12.12*σ*. The estimated time-averaged differential energy spectrum in the energy range 1.0–16.44 TeV was fitted well by a power law with *f*<sub>0</sub> = (1.39 ± 0.24) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> and Γ = 2.31 ± 0.14 [87]. Analysis of TACTIC data from Mrk 421 during 10–23 February 2010 resulted in the detection of 737 ± 87 *γ*-ray-like events, corresponding to a statistical significance of 8.46*σ* in nearly 48 h, and data on 16 February 2010 (MJD 55243) alone yielded 172 ± 30 *γ*-ray-like events in only 4.9 h with a statistical significance of 5.92*σ* [88,89]. The epoch of 15–17 February 2010 (MJD 55242–55244) was characterized as a giant TeV flaring episode of the blazar Mrk 421, with a peak on 16 February 2010 (MJD 55243). Near-simultaneous daily light curves of the source for the period 10–23 February 2010 (MJD 55237–55250) in the TeV, MeV–GeV, X-ray, optical, and radio bands are shown in Figure 4. With the motivation of understanding the physics involved in this major flaring activity, a variability study of the light curves was performed using a temporal profile with exponential rise and decay [90]. It was observed that the variation in the one-day-averaged flux from the source during the flare is characterized by a fast rise and slow decay. In addition, the TeV *γ*-ray flux obtained from the TACTIC observations showed a strong correlation with the X-ray flux, suggesting the former to be an outcome of the synchrotron self-Compton emission process.
To model the observed X-ray and *γ*-ray light curves, kinetic equations describing the evolution of the particle distribution in the emission region were numerically solved. The injection of the particle distribution into the emission region, from the putative acceleration region, was assumed to be a time-dependent power law. The synchrotron and synchrotron self-Compton emission from the evolving particle distribution in the emission region successfully reproduced the X-ray and *γ*-ray flares. This suggested that the flaring activity of Mrk 421 could be an outcome of an efficient acceleration process associated with an increase in the underlying non-thermal particle distribution [90].
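The exponential rise-and-decay temporal profile used in the light-curve fits can be sketched as follows. This is one common parameterization (a piecewise double exponential); Reference [90] may use a different functional form, and all parameter values here are illustrative:

```python
import math

def flare_profile(t, f_base, amp, t_peak, tau_rise, tau_decay):
    """Flux at time t (days): exponential rise to a peak at t_peak,
    then an exponential decay, on top of a baseline flux f_base."""
    if t <= t_peak:
        return f_base + amp * math.exp((t - t_peak) / tau_rise)
    return f_base + amp * math.exp(-(t - t_peak) / tau_decay)

# "Fast rise, slow decay" as in the 2010 flare: tau_rise < tau_decay
light_curve = [flare_profile(t, 0.1, 1.0, 3.0, 0.5, 2.0) for t in range(8)]
```

With tau_rise < tau_decay, the fitted profile is asymmetric: the flux one day after the peak stays well above the flux one day before it, which is the behaviour reported for the one-day-averaged TeV flux.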

### 4.4.2. High Activity in March 2012

A high flux state of the blazar Mrk 421 was observed at TeV energies between 15 and 26 March 2012 by the TACTIC telescope, with the detection of 529 ± 76 *γ*-rays at a 6.9*σ* significance level in ∼39 h [91]. This translates to an event rate of ∼15 h<sup>−1</sup>. The observed differential energy spectrum was described by a power law with *f*<sub>0</sub> = (2.47 ± 0.48) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> and Γ = 2.58 ± 0.22 in the energy range 0.85–9.36 TeV. The contemporaneous spectrum of the source measured by the Large Area Telescope (LAT) onboard the *Fermi* satellite in the energy range 0.1–300 GeV was also fitted well by a power law with *f*<sub>0</sub> = (2.14 ± 0.26) × 10<sup>−5</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> and Γ = 1.81 ± 0.07. These two *γ*-ray spectra of Mrk 421 are shown in Figure 5. A prominent change in the differential energy spectral index from GeV to TeV energies was observed. This spectral break in the *γ*-ray spectrum might be attributed to the Klein–Nishina effect on the inverse Compton scattering of synchrotron photons in the TeV energy range [91].

**Figure 4.** Multi-wavelength light curves of the blazar Mrk 421 during 10–23 February 2010 from various ground- and space-based observations. The dotted magenta curves correspond to the best-fit temporal profile with an exponential rise and decay, while the green dashed curves represent the flux evolution in a single-zone synchrotron and synchrotron self-Compton model with time-dependent injection. For more details about the temporal profile and model curves, the reader is referred to [90]. Reproduced with permission from Reference [90]; Copyright 2017, Elsevier.

**Figure 5.** *γ*-ray spectra of Mrk 421 measured by the *Fermi*-LAT and TACTIC in the 0.1–300 GeV and 0.85–9.36 TeV energy bands, respectively, during 15–26 March 2012. Reproduced with permission from Reference [91]; Copyright 2017, Elsevier.

### 4.4.3. One-Day TeV Flare in December 2014

A sudden enhancement in the TeV *γ*-ray emission from Mrk 421 was detected by the TACTIC telescope on the night of 28 December 2014 [92]. The TACTIC data of 28 December 2014 alone resulted in the detection of 86 ± 17 *γ*-ray-like events from Mrk 421 with a statistical significance of 5.17*σ* in an observation time of ∼2.2 h above an energy threshold of 0.85 TeV. The high flux of TeV photons (higher than three Crab Units) enabled a study of the very high-energy *γ*-ray emission from the source on shorter timescales. A minimum-variability timescale of ∼0.72 days was estimated for the TeV *γ*-ray emission during this flaring episode of the blazar Mrk 421. The integral flux above 0.85 TeV measured with TACTIC on the night of 28 December 2014 was estimated to be (3.68 ± 0.64) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> in 2.2 h. This was consistent with the integral flux of (2.91 ± 0.38) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> above 2 TeV from near-simultaneous observations with the HAWC observatory for ∼6 h on 29 December 2014, confirming the sudden increase in the TeV *γ*-ray activity of Mrk 421 detected by the TACTIC telescope [92]. The near-simultaneous multi-wavelength observations in the one-day broadband spectral energy distribution of the source, shown in Figure 6, were satisfactorily reproduced by a simple one-zone leptonic synchrotron and synchrotron self-Compton model. The model parameters estimated from the fitting of the spectral energy distribution were in good agreement with the values reported in the literature for Mrk 421.
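A minimum-variability (flux doubling/halving) timescale such as the ∼0.72 days quoted above is typically estimated from pairs of flux measurements as t<sub>var</sub> = Δt · ln 2 / |ln(F<sub>2</sub>/F<sub>1</sub>)|. A sketch with hypothetical flux values (not the actual TACTIC measurements, whose exact method may differ):

```python
import math

def doubling_timescale(f1, f2, dt_days):
    """Flux doubling/halving timescale between two measurements
    f1 and f2 separated by dt_days: t_var = dt * ln2 / |ln(f2/f1)|."""
    return dt_days * math.log(2.0) / abs(math.log(f2 / f1))

# Hypothetical: flux rises from 1.5 to 3.8 (arbitrary units) in one day
t_var = doubling_timescale(1.5, 3.8, 1.0)  # fraction of a day
```

The shortest such timescale over all measurement pairs in the light curve is taken as the minimum-variability timescale, which in turn constrains the size of the emission region via light-travel-time arguments.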

Subsequent long-term monitoring of the source with TACTIC during January–February 2015, for an effective observation time of ∼44 h, resulted in the detection of 311 ± 57 *γ*-ray photons at a statistical significance of 5.6*σ* [93]. This indicated a low-activity state of the source after the sudden flaring episode of 28 December 2014. The time-averaged intrinsic differential energy spectrum in the low-emission state was described by a power law with *f*<sub>0</sub> = (1.13 ± 0.39) × 10<sup>−11</sup> ph cm<sup>−2</sup> s<sup>−1</sup> TeV<sup>−1</sup> and Γ = 2.34 ± 0.39 in the energy range 0.85–15 TeV [93].

**Figure 6.** Broadband spectral energy distribution of the blazar Mrk 421 observed on 28 December 2014, modeled in the framework of a single-zone homogeneous synchrotron and synchrotron self-Compton (SSC) model. The multi-wavelength (MWL) data comprise near-simultaneous observations from SPOL, Swift, *Fermi*-LAT, and TACTIC. Details of the data set shown in the Figure can be found in [92]. Reproduced with permission from Reference [92]; Copyright 2018, Elsevier.

### *4.5. Mrk 421 Flare in January 2018*

The TACTIC telescope detected enhanced *γ*-ray emission from Mrk 421 on the night of 17 January 2018 [94]. Preliminary analysis of the data collected over 5.6 h indicated that the average flux during this night was around 5 times the Crab Nebula flux above 0.85 TeV. The source was also observed on an hourly basis, with detection of TeV photons at a significance level of 5*σ* and a peak flux of ∼7.7 times the Crab Nebula flux during the night of 17 January 2018. This was the highest gamma-ray flux recorded by the TACTIC telescope from Mrk 421. This blazar is being regularly monitored by TACTIC as one of the potential targets at TeV energies with frequent flaring episodes.

### *4.6. 1ES 1218+304*

The blazar 1ES 1218+304 was discovered as an X-ray source at redshift *z* = 0.182. The source was predicted to be a promising candidate for TeV *γ*-ray emission based on the position of the synchrotron peak at X-ray energies in its broadband spectral energy distribution. With the motivation of detecting TeV gamma-rays, 1ES 1218+304 was monitored by the TACTIC telescope during March–April 2013 for a total observation time of approximately 40 h [95]. No evidence for TeV *γ*-ray emission from the source was found, and therefore a 99% confidence level upper limit on the integral flux above 1 TeV was estimated as 3.41 × 10<sup>−12</sup> ph cm<sup>−2</sup> s<sup>−1</sup> (∼23% of the Crab Nebula flux), assuming a power law differential energy spectrum with Γ = 3.0, as previously observed by the MAGIC and VERITAS telescopes. A recent long-term multi-wavelength study of 1ES 1218+304 suggested that the source was in a steady state, with no significant change in its emission activity over a period of 10 years, and that the optical/UV fluxes were dominated by the host galaxy emission [96]. The stellar emission from the host galaxy was modeled using the PEGASE code. Owing to its hard X-ray and TeV *γ*-ray spectra, 1ES 1218+304 is an important source for probing the particle acceleration mechanisms that can produce hard power law distributions in astrophysical jets.

### *4.7. B2 0806+35*

B2 0806+35 is a hard-spectrum *γ*-ray blazar at redshift *z* = 0.083. The first *γ*-ray emission from this source was reported by the *Fermi*-LAT in 2010, with an integral flux of (6.7 ± 2.5) × 10<sup>−10</sup> ph cm<sup>−2</sup> s<sup>−1</sup> in the energy range 1–100 GeV. The *γ*-ray spectra of such sources are of special importance in cosmological studies due to their intrinsically hard nature. The TACTIC telescope observed B2 0806+35 during December 2015–February 2016 for an effective observation time of ∼68 h [97]. Only 79 ± 45 *γ*-like events, with a statistical significance of 1.74*σ*, were detected by the TACTIC telescope in the direction of B2 0806+35. Therefore, an upper limit of 4.8 × 10<sup>−12</sup> erg cm<sup>−2</sup> s<sup>−1</sup> at the 3*σ* confidence level was estimated on the integral flux above 0.85 TeV. A long-term multi-wavelength study of the source showed that the *γ*-ray spectrum measured by the *Fermi*-LAT is a power law with a spectral index of 1.74 ± 0.15 in the energy range 0.1–300 GeV [97]. Modeling the observed UV bump in the broadband spectral energy distribution with a blackbody spectrum indicated a definite trend: the inner radius of the accretion disc was increasing with time, whereas the disc temperature was decreasing. This suggests that the optically thick and geometrically thin accretion disc was receding from the central black hole [97]. More observations with higher-sensitivity instruments are required to understand the nature of the emission from this source.

### *4.8. IC 310*

IC 310 is an active galaxy at redshift *z* = 0.0189 located in the Perseus cluster. Radio observations of this galaxy indicate a one-sided core–jet structure with blazar-like characteristics. The GeV–TeV emission from this source exhibits substantial flux variability. With the motivation of probing the TeV *γ*-ray emission from the jet of such a peculiar active galaxy, the TACTIC telescope was used for long-term monitoring of IC 310 from December 2012 to January 2015 for an effective observation time of ∼95 h [98]. Only 102 ± 81 *γ*-ray-like events, with a statistical significance of 1.26*σ*, were detected from the source direction. Due to the absence of a statistically significant detection in the low-activity state, the 3*σ* upper limit on the integral flux above 0.85 TeV was estimated as 4.99 × 10<sup>−12</sup> ph cm<sup>−2</sup> s<sup>−1</sup>. This was found to be compatible with the MAGIC telescope results during 2012–2013, when the source was in a low state after a rapid TeV flare on the night of 12–13 November 2012 [99]. The extrapolated flux above 0.85 TeV, calculated using the low-activity-state energy spectrum derived from the MAGIC observations, was 2.86 × 10<sup>−13</sup> ph cm<sup>−2</sup> s<sup>−1</sup>.

### *4.9. NGC 1275*

The radio galaxy NGC 1275 is the brightest member of the Perseus cluster, located in its central region at redshift *z* = 0.0179. The TACTIC telescope has been deployed to monitor the Perseus galaxy cluster in search of TeV *γ*-ray emission since 2012. After the long-term observation of IC 310, it was an obvious choice to monitor the brightest radio galaxy, NGC 1275, which is just ∼0.6◦ away from IC 310 in the same galaxy cluster. The TeV observations of NGC 1275 with TACTIC were performed between December 2016 and February 2017 for a live time of ∼37 h in the zenith angle range of 20◦–45◦ [100]. In the absence of a statistically significant detection of TeV *γ*-rays from the source direction, the 3*σ* upper limit on the integral flux above 0.85 TeV was estimated to be 2.85 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup>. In comparison with other very high-energy observations, the upper limit estimated with the TACTIC telescope was consistent with the low-state energy spectrum of the source reported by the MAGIC collaboration [101]. NGC 1275 is among the few radio galaxies from which TeV *γ*-ray emission has been detected by instruments of better sensitivity. Therefore, this source remains a potential target for the upcoming high-sensitivity MACE telescope.

### **5. Future Roadmap: MACE Telescope**

In order to make more effective and significant contributions to the field of GRA and its future science goals, a state-of-the-art IACT, MACE (Major Atmospheric Cherenkov Experiment), with improved point source sensitivity and lower threshold energy, has recently been installed at Hanle (32.8◦ N, 78.9◦ E) by the group of Indian gamma-ray astronomers [13]. The altitude of the astronomical site at Hanle (4270 m), in the Himalayan range of North India, is the highest for any existing IACT in the world. This high-altitude Himalayan desert offers an annual average of more than 260 uniformly distributed dark nights, leading to an excellent duty cycle of the telescope for *γ*-ray observations. The MACE telescope installed at the Hanle site is shown in Figure 7. With an altitude–azimuth mount, the telescope deploys a parabolic light collector of 21 m diameter and 25 m focal length. The light collector comprises 356 mirror panels, each of ∼1 m × 1 m size. Each panel consists of four indigenously developed, diamond-turned, spherical metallic honeycomb mirror facets of ∼0.5 m × 0.5 m size. This provides a single reflecting surface of area ∼346 m<sup>2</sup> with a uniform reflectivity of more than 85%. The mirror facets have graded focal lengths varying from ∼25 m at the centre of the light collector to 26.5 m at its periphery. This arrangement of mirrors ensures the minimum on-axis spot size at the focal plane of the telescope. The imaging camera at the focal plane deploys a modular structure with 68 Camera Integrated Modules, each having 16 photomultiplier tubes, commonly referred to as pixels. The linear diameter of each photomultiplier tube is 38 mm. All the photomultiplier tubes in the camera are fitted with hexagonal compound parabolic concentrators to cover the entire surface of the camera. The entry apertures of these light concentrators have an angular size of 0.125◦. The camera provides a total optical field of view of ∼4.36◦ × 4.03◦.
Out of the total of 1088 pixels, the innermost 576 pixels (36 modules) will be used for event trigger generation, with a field of view of ∼2.62◦ × 3.02◦, based on a predefined trigger criterion. A trigger configuration of four close-cluster nearest-neighbour pixels is implemented in the MACE hardware. Each 16-channel module has its signal processing electronics built into it. An analogue switched-capacitor array (DRS-4), operating at 10<sup>9</sup> samples per second, is used for continuous digitization of the signals from the photomultiplier tubes.

**Figure 7.** The 21 m diameter MACE *γ*-ray telescope at Hanle site (4270 m above sea level).

### *5.1. Expected Performance of MACE*

A simulation study of the trigger performance using the *CORSIKA* package [102] suggests that the *γ*-ray trigger energy threshold of the MACE telescope is ∼20 GeV in the low zenith angle range of 0◦–40◦ and increases to ∼173 GeV at a large zenith angle of 60◦ [103]. As expected for any IACT, the integral trigger rates for MACE are dominated by protons, which contribute nearly 80% of the total. In the zenith angle range of 0◦–40◦, the integral rate is estimated to be ∼650 Hz, and it decreases sharply to ∼305 Hz at a 60◦ zenith angle. The 50 h integral sensitivity of the telescope is estimated to be ∼2.7% of the Crab Unit at the analysis energy threshold of ∼38 GeV at a 5◦ zenith angle [104]. This was estimated by carrying out the *γ*–hadron segregation using the Random Forest method. A comparison of the integral sensitivity of the MACE telescope with that of MAGIC-I is shown in Figure 8. It is evident from Figure 8 that, compared with the MAGIC-I telescope, the MACE telescope has a lower analysis energy threshold (as expected on account of its higher altitude). Furthermore, the MACE telescope is expected to be more sensitive than the MAGIC-I telescope up to ∼150 GeV. The estimated energy and angular resolutions of the MACE telescope as a function of energy are reported in Figure 9. The telescope is expected to have an energy resolution of ∼40% in the energy range of 30–47 GeV, improving to ∼20% in the energy bin 1.8–3 TeV [105]. Furthermore, the angular resolution of the MACE telescope is estimated to be ∼0.21◦ in the energy range of 30–47 GeV, improving to ∼0.06◦ in the energy range of 1.8–3 TeV [105]. The overall expected performance of MACE is similar to that of MAGIC-I, although MACE has a lower analysis energy threshold of ∼30 GeV. The MACE telescope is expected to see its first light in March–April 2021.

**Figure 8.** Integral flux sensitivity of the MACE telescope and its comparison with that of MAGIC-I [105].

**Figure 9.** Energy and angular resolution of the MACE *γ*-ray telescope as a function of energy [105].

### *5.2. MACE on the World Map*

On the world map, the MACE telescope fills the longitudinal gap between the major IACTs (H.E.S.S., MAGIC, and VERITAS) operating around the globe. The altitude of the MACE site (4270 m) makes it the highest-altitude IACT in the world. Given the construction of the CTA (Cherenkov Telescope Array) observatory (https://www.cta-observatory.org (accessed on 8 February 2021)) in both the northern and southern hemispheres, the MACE telescope (21 m diameter) also has the distinction of being the second largest IACT in the northern hemisphere, after the Large Size Telescope (23 m diameter), and the third largest in the world, after H.E.S.S.-II (28 m diameter). Therefore, along with other existing IACTs, MACE will be very useful for exploring the GeV–TeV *γ*-ray sky, in particular for continuous monitoring campaigns of flaring sources. The MACE telescope, having an excellent energy overlap with space-based *γ*-ray detectors like the *Fermi*-LAT [106], will be extremely important for exploring several outstanding problems in GRA over the next decade. The *Fermi*-LAT fourth source catalog (4FGL) has reported more than 5000 *γ*-ray sources, including 3130 blazars and 239 pulsars, above 4*σ* significance [107]. Although the *Fermi*-LAT has detection capability beyond 500 GeV, its sensitivity above 100 GeV is not sufficient to detect weak *γ*-ray emission from cosmic sources. Therefore, there is a strong possibility of detecting *γ*-rays from more sources with IACTs like MACE, which have better sensitivity in the tens-of-GeV energy regime, and of exploring the exact nature of the unidentified sources reported in the 4FGL catalog. The lower energy threshold of MACE is expected to help in the observation of the deep *γ*-ray Universe beyond redshift *z* > 2 and in the monitoring of astrophysical transients like gamma-ray bursts. Furthermore, MACE, with an analysis energy threshold of ∼30 GeV, will play an important role in the study of pulsars, which are assumed to have a separate GeV–TeV emission component.
Apart from exploring the Universe at GeV-TeV energies, MACE has the capability to address a range of open problems in observational cosmology such as constraining the density of EBL photons and strength of intergalactic magnetic field, probing the nature of dark matter candidates like weakly interacting massive particles of masses above 1 TeV, the existence of axion-like particles beyond the standard model of particle physics, and so on.

### **6. Conclusions**

Ground-based GRA in India has been pursued for the last five decades. The activity started with low-sensitivity instruments at different locations in India. The journey of Indian GRA using IACTs over the last two decades has been very satisfying on the whole, marked by encouraging and impressive advances on both the scientific and technological fronts. The progression from the 81-pixel prototype camera of the TACTIC telescope to the 1088-pixel camera developed for the MACE telescope (the second largest IACT in the northern hemisphere, at the highest altitude) indicates the advances made in technology and resources in a very short span of time. At present, the TACTIC telescope is among the few ground-based telescopes in the world being used for observations of TeV *γ*-ray emission from different astrophysical sources. The telescope provides a unique opportunity to monitor potential *γ*-ray sources in the multi-TeV energy regime during flaring and low-activity states. Despite its relatively limited sensitivity, the TACTIC telescope has provided very important information during the flaring episodes of blazars like Mrk 421 and Mrk 501 in the TeV energy range. The energy spectra of TeV *γ*-ray photons derived from TACTIC observations are very useful for understanding the relativistic acceleration of charged particles in astrophysical environments. Upper limits obtained from non-detections of TeV *γ*-ray emission from different sources after long monitoring with the TACTIC telescope are important for constraining the source emission models for the production of *γ*-ray photons in the TeV regime. The upper limit reported for a source observed with the TACTIC telescope can also be used to constrain the variability properties if the source has been previously detected or is detected at a later time during a flaring state.
Such observations are also important for combining the information from all past observations with various telescopes to make predictions for future instruments. The MACE telescope, with its lower energy threshold and higher sensitivity, is expected to provide path-breaking results in GRA over the next decade.

**Author Contributions:** Conceptualization, K.K.S. and K.K.Y.; Writing–original draft preparation, K.K.S.; Supervision, K.K.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** The authors thank the anonymous reviewers for their critical comments and suggestions. We are grateful to all the former and present colleagues of the Astrophysical Sciences Division at Bhabha Atomic Research Centre, Mumbai, for their contributions to this article. Special thanks to our colleague Chinmay Borwankar for sharing the MACE simulation results presented in this article.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


### *Article* **INTEGRAL View of TeV Sources: A Legacy for the CTA Project**

**Angela Malizia 1,\*,†, Mariateresa Fiocchi 2,†, Lorenzo Natalucci 2,†, Vito Sguera 1,†, John B. Stephen 1,†, Loredana Bassani 1,†, Angela Bazzano 2,†, Pietro Ubertini 2,†, Elena Pian <sup>1</sup> and Antony J. Bird <sup>3</sup>**


**Abstract:** Investigations carried out over the last two decades with novel and more sensitive instrumentation have dramatically improved our knowledge of the most violent physical processes taking place in galactic and extra-galactic black holes, neutron stars, supernova remnants/pulsar wind nebulae, and other regions of the Universe where relativistic acceleration processes are in place. In particular, simultaneous and/or combined observations with *γ*-ray satellites and ground-based high-energy telescopes have clarified the scenario of the mechanisms responsible for high-energy photon emission by leptonic and hadronic accelerated particles in the presence of magnetic fields. Specifically, the European Space Agency INTEGRAL soft *γ*-ray observatory has detected more than 1000 sources in the soft *γ*-ray band, providing accurate positions, light curves, and time-resolved spectral data for them. Space observations with Fermi-LAT and ground-based observations with H.E.S.S., MAGIC, VERITAS, and other telescopes sensitive in the GeV–TeV domain have, at the same time, provided evidence that a substantial fraction of the detected cosmic sources emit in the keV to TeV band via synchrotron/inverse Compton processes, in particular stellar galactic black-hole systems as well as distant black holes. In this work, employing a spatial cross-correlation technique, we compare the INTEGRAL/IBIS and TeV all-sky data in search of secure or likely associations. Although this analysis is based on a subset of the INTEGRAL all-sky observations (1000 orbits), we find that there is a significant correlation: 39 objects (∼20% of the VHE *γ*-ray catalogue) show emission in both the soft *γ*-ray and TeV wavebands. The full INTEGRAL database, now comprising almost 19 years of public data, will represent an important legacy for the Cherenkov Telescope Array (CTA) and other large ground-based projects.

**Keywords:** keV-TeV cosmic sources; INTEGRAL legacy data base; relativistic astrophysics

### **1. Introduction**

In recent years, our knowledge of the most violent phenomena in the Universe has progressed impressively thanks to the advent of new *γ*-ray detectors, both on the ground and in orbit. At the two extremes of this observational energy window, we have now discovered more than a thousand sources in the soft *γ*-ray band (20–100 keV) and more than 200 in the TeV band. At low energies, operating telescopes include INTEGRAL/IBIS, Swift/BAT, and NuSTAR; the first provides the most extensive and deepest view of the Galactic plane, where many TeV sources are located, while the other two are the most efficient in mapping and studying the extra-galactic sky.

**Citation:** Malizia, A.; Fiocchi, M.; Natalucci, L.; Sguera, V.; Stephen, J.B.; Bassani, L.; Bazzano, A.; Ubertini, P.; Pian, E.; Bird, A.J. Integral View of TeV Sources: A Legacy for the CTA Project. *Universe* **2021**, *7*, 135. https://doi.org/10.3390/ universe7050135

Academic Editors: Ulisses Barres de Almeida and Michele Doro

Received: 4 March 2021 Accepted: 30 April 2021 Published: 7 May 2021

INTEGRAL<sup>1</sup> [1], the most relevant mission for the present work, was designed to perform observations in the hard X-ray/soft *γ*-ray energy range (15 keV–10 MeV) over a 30 × 30 square degree field of view (FoV) with two main instruments, IBIS [2] and SPI [3], optimised for high angular resolution and high spectral resolution, respectively. Simultaneous monitoring in the soft X-ray (3–35 keV) and optical (V-band, 550 nm) wavebands is carried out with Jem-X and the OMC [4,5].

The INTEGRAL Instrument Consortium comprised 26 companies located across Europe, together with NASA; the satellite was launched on a PROTON rocket on 17 October 2002 and, after more than 19 years in orbit, all of its instruments are still fully working. The mission has recently been extended to the end of 2022, and a further extension to 2025 is expected. For the purpose of the present study, the most important instrument is the imager IBIS which, by combining all available observations, can now reach ∼0.2–0.5 mCrab sensitivity (depending on the sky region observed), combined with good location accuracy (∼arcmin), a large field of view (80 square degrees fully coded; 850 square degrees at full width at zero response), as well as good timing (∼120 μs accuracy) and spectroscopic capabilities (spectral resolution of ∼8% at 100 keV) [2].

At the other extreme of the spectrum, the current generation of Imaging Atmospheric Cherenkov Telescopes (IACTs) includes MAGIC and VERITAS in the northern hemisphere and H.E.S.S. in the southern hemisphere. These instruments allow imaging, photometry, and spectroscopy of sources of high-energy radiation with good sensitivity (about 10 mCrab in 50 h of observation time), combined with good angular (a few arcmin) and energy (ΔE/E ∼ 10–20%) resolution. They typically work in an energy range spanning from 50–100 GeV to about 100 TeV, and they have a field of view of 3–5 degrees [6]. For comparison, CTA will be about a factor of 10 more sensitive than any of these instruments; it will cover, with a single observatory, three to four orders of magnitude in energy, from a few tens of GeV to a few hundred TeV. It will also reach an angular resolution at the arcmin level, a factor of a few better than current instruments. These characteristics, combined with high temporal resolution (on sub-minute time scales, which are currently not accessible), great pointing flexibility, and global sky coverage, will allow a major leap in the future of GeV/TeV astronomy (for more information, see Actis et al. [7], Cherenkov Telescope Array Consortium [8]).

Connecting the properties of sources that are seen at both these extremes is very important, as it allows us to discriminate between various emission scenarios and, in turn, fully understand their nature.

Combining INTEGRAL data with TeV information and MeV/GeV archival measurements is arguably the most appropriate approach, as it could provide an unprecedented waveband coverage of more than nine orders of magnitude and could possibly allow us to discriminate between a leptonic or hadronic origin of the highest-energy gamma-rays. The observation of galactic objects in the TeV energy band can greatly contribute to the identification of the sources of cosmic rays. Based on energetic considerations, the best candidates are supernova remnants (SNRs), which are able to accelerate particles to very high energies; this will hopefully be confirmed by the observation of gamma-rays of unambiguous hadronic origin, as well as by the detection of neutrinos. CTA will provide a detailed view of the acceleration sites, as well as information on the propagation of these high-energy particles in individual sources. Therefore, the INTEGRAL archive can play an important role in the analysis of the candidate accelerators.

In the case of extra-galactic astrophysics, CTA studies of blazars can lead to firmly establishing the origin and the production mechanisms of TeV photons from relativistic jets. In this case too, the INTEGRAL database can be useful, as it provides essential information in terms of light curves and spectra, particularly for high-energy-peaked objects, where it gives a significant contribution to building their spectral energy distributions (SEDs).

<sup>1</sup> More information on the satellite and specific instruments, as well as on the INTEGRAL Science Operations Centre (ISOC, at the European Space Astronomy Centre), can be found on the ESA webpage dedicated to the mission (https://www.cosmos.esa.int/web/integral/home, accessed on 3 May 2021).

Finally, CTA is expected to discover many new sources, whose identification and study can benefit from INTEGRAL's current and future survey data, in which unidentified and/or poorly known objects are continuously being followed up in the X-rays and at optical/infrared wavelengths, not only for classification purposes but also for multiwaveband characterisation.

Here, we provide an overview of the soft *γ*-ray counterparts of TeV sources, as observed by INTEGRAL, the instrument that is providing the deepest survey of the Galactic plane and centre (12.5 Msec along the Plane and up to 52 Msec in the Galactic centre at the end of the current AO, December 2021), where most TeV sources have been discovered so far. Figure 1 illustrates the potential of INTEGRAL to provide soft *γ*-ray information for TeV sources: it shows an image of the INTEGRAL Galactic Plane Survey with some TeV source positions superimposed. A cross correlation between the TeV on-line catalogue (http://tevcat.uchicago.edu/, accessed on 3 May 2021) and our latest IBIS survey [9] indicates that around 15–20% of the very high energy (VHE) sources have a counterpart in the soft *γ*-ray domain; this fraction includes objects of various types, such as X-ray binaries, pulsars/pulsar wind nebulae, and blazars, as well as some still-unidentified objects. INTEGRAL images, light curves, and spectra for all these sources represent a useful source of information for current VHE observations and a strong legacy for future projects, such as CTA.

**Figure 1.** INTEGRAL image of a Galactic Plane region showing IBIS soft *γ*-ray sources and positions of some TeV sources.

### **2. Cross Correlation Analysis and Soft** *γ***-ray/TeV Associations**

Employing a spatial cross correlation technique that has been successfully used to identify X-ray counterparts of high energy sources (e.g., Stephen et al. [10,11]), we compare the INTEGRAL/IBIS (15 keV–10 MeV) [2] and TeV all-sky data in search of secure or likely associations.

For the INTEGRAL database, we use the 1000 orbit catalogue [9], which lists 939 sources detected in the 17–100 keV energy band above a 4.5*σ* significance threshold; the catalogue is extracted from the mosaic of all observations performed by the IBIS instrument up to orbit 1000, i.e., up to the end of 2010, and therefore only gives a glimpse of the associations available now, after 10 more years of measurements. We concentrate on the first eight years of data because of changes in the IBIS telescope performance over its lifetime (it has now been almost 19 years in orbit); although these changes are small and expected, they imply corrections to some data analysis tools, which are currently in progress. In the future, the revised analysis software should allow observations taken many years apart to be summed together without compromising the correctness of the process.

For the TeV database, we used the TeV on-line catalogue, considering only default and new associations, which provides a list of 229 objects (as of the end of 2020). We note that interesting sources, like Cygnus X-1, which is listed among the candidate TeV objects, can be studied in detail at both low and high *γ*-ray energies by exploring the wealth of imaging, spectral, and timing data available in the INTEGRAL archive. Figure 2 (left side) shows the distribution of sources in the two catalogues, where the deep coverage along the Galactic plane is evident in both.

The cross correlation technique consists of simply calculating the number of TeV sources for which at least one INTEGRAL counterpart is found within a specified angular distance, out to a distance where all of the TeV sources have at least one soft *γ*-ray counterpart. Because the positional errors are comparable for all sources in either catalogue, the positional uncertainty was not used in the initial cross correlation; it comes into play in the more detailed analyses once the list of possible associations has been formed. To have a control group, we then created a list of *'anti-TeV sources'*, mirrored in Galactic longitude and latitude, and applied the same correlation algorithm. Figure 2 (right side) shows the results of this process. The solid curve shows that a strong correlation exists out to about 330 arcseconds, where only 2–3 associations are expected by chance coincidence, the remaining ones being likely true associations.
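The counting step described above can be sketched as follows; the catalogue positions and names here are hypothetical toy inputs (the real analysis uses the TeVCat and IBIS 1000-orbit catalogue positions), and only the 330 arcsec matching radius is taken from the text:

```python
import math

def ang_sep(l1, b1, l2, b2):
    """Great-circle separation in degrees between two (lon, lat) points."""
    l1, b1, l2, b2 = map(math.radians, (l1, b1, l2, b2))
    s = (math.sin((b2 - b1) / 2) ** 2
         + math.cos(b1) * math.cos(b2) * math.sin((l2 - l1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(s)))

def n_matched(tev, ibis, radius_deg):
    """Number of TeV sources with at least one counterpart within radius_deg."""
    return sum(
        any(ang_sep(l, b, li, bi) <= radius_deg for li, bi in ibis)
        for l, b in tev
    )

def mirrored(tev):
    """Control sample: mirror positions in Galactic longitude and latitude."""
    return [((-l) % 360.0, -b) for l, b in tev]

# Toy catalogues in Galactic coordinates (degrees).
tev  = [(10.0, 0.1), (20.0, -0.2), (200.0, 30.0)]
ibis = [(10.02, 0.12), (20.05, -0.18), (150.0, 5.0)]

r = 330.0 / 3600.0  # 330 arcsec expressed in degrees
excess = n_matched(tev, ibis, r) - n_matched(mirrored(tev), ibis, r)
print(excess)  # real matches minus chance matches
```

The curve in Figure 2 corresponds to evaluating `n_matched` for a grid of radii rather than a single one, with the mirrored sample giving the dashed chance-coincidence curve.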

**Figure 2.** (**Left**) The distribution of TeV (**upper**) and INTEGRAL sources (**lower**) showing the galactic plane clustering. (**Upper right**) The number of TeV-INTEGRAL spatial correlations (solid curve) and the same for a 'fake' set of TeV sources with positions mirrored in latitude and longitude (dashed). (**Lower right**) The difference between the two curves showing the number of 'excess' correlations.

We found 37 possible associations, which reduce to 33 after we exclude the galactic centre/ridge regions, where associations are difficult to verify, and a few clearly false matches. In the top section of Table 1, we report, for each of these 33 likely associations, their names, coordinates, and classes, as reported in the TeV and INTEGRAL catalogues, respectively, ordered by the distance between the TeV and soft *γ*-ray excesses. A comparison of data from both catalogues shows that the majority of associations are straightforward, but there are also a few cases that deserve to be studied in more detail, plus some sources (see, for example, the Crab Pulsar versus its nebula) for which it is sometimes difficult to discriminate between emission regions seen at different wavebands. In Table 1 we also highlight those sources that are marked as extended in the TeV catalogue.

At distances greater than 330 arcsec, chance correlations become increasingly important, although some true correlations may still be present above this threshold. A number of these extra associations are worth further analysis; they are listed as extra matches in the second part of Table 1 and are briefly discussed in the following. To enlarge the database, we also cross correlated the TeV catalogue with all sources flagged as having been seen by ISGRI in the INTEGRAL Reference Source Catalogue (http://www.isdc.unige.ch/integral/science/catalogue, accessed on 3 May 2021). This gives us three extra active galactic nuclei (AGN), which have been added to the list of sources at the bottom of Table 1.

From our analysis, we find a total of 39 TeV sources having a soft *γ*-ray counterpart in IBIS catalogues, which suggests that around 20% of VHE objects have INTEGRAL coverage and useful data over the 20–100 keV band. This implies that the INTEGRAL legacy (to date, almost 19 years of observations as compared to the eight years of data used for the 1000 orbit catalogue) will be extremely important for any current or future TeV observations. These 39 associations cover all types of VHE objects from galactic to extra-galactic, from binaries, SNR, pulsar/pulsar wind nebulae systems to unidentified objects and AGN of various classifications. Some examples of these associations and the wealth of information that INTEGRAL can provide are described and discussed in the following sections.

**Table 1.** Association of TeV sources with INTEGRAL/IBIS.


† Coordinates are J2000; positional uncertainties for the TeV coordinates can be found at http://tevcat.uchicago.edu/, accessed on 3 May 2021, while those for the IBIS coordinates are listed in Bird et al. [9] and Mereminskiy et al. [12]; () denotes an unclassified source.

### **3. INTEGRAL/TeV Associations**

### *3.1. Supernovae, Pulsars, and Pulsar Wind Nebulae*

Supernova Remnants (SNR) and Pulsar Wind Nebulae (PWN) are historically known to be sites of Cosmic Ray (CR) acceleration [13]. In addition, the detection of non-thermal emission from keV to TeV energies has provided further evidence that charged leptons or hadrons can be accelerated up to TeV energies, possibly via diffusive shock acceleration processes. In the last two decades, data collected with instruments in space (e.g., AGILE, Fermi, INTEGRAL) and on the ground (e.g., H.E.S.S., MAGIC, VERITAS), providing imaging and spectral capability, have permitted new insights into the acceleration mechanisms responsible for the production of the high energy photons. In spite of the good sky coverage and spectral resolution, leptonic vs. hadronic models are still under debate. A further complication in understanding the high energy production mechanisms of the sources detected in the GeV–TeV range is the presence (or not) of a pulsar generated at the time of the SN collapse, which is generally offset from the SNR centre due to proper motion. The pulsar, with its strong magnetic field and fast spin, often generates additional component(s) in the SNR spectral emission.

### 3.1.1. SNR View at Soft *γ*-ray and TeV Energies

It is well known that the expanding shells of supernova remnants are sites where cosmic rays are accelerated up to PeV energies. This process manifests itself in intense non-thermal radiation spanning many decades in photon energy, from the radio to the TeV wavebands. INTEGRAL/IBIS offers the opportunity to cover the relatively unexplored soft *γ*-ray window in at least some bright objects. Three SNR-shell associations are found by the cross correlation analysis: Tycho, Cas-A, and RX J1713.7−3946, the last showing extended emission in the form of a ring-like configuration. The study of the shape of the broad band spectrum, as well as of the morphology of these three objects, measured by INTEGRAL in conjunction with other observatories, provides clues for our understanding of the details of cosmic ray acceleration and the radiation mechanisms at work in the expanding shells of these remnants.

Figure 3 compares the INTEGRAL/ISGRI spectra of the Tycho and Cas-A SNRs in the 20–150 keV band, summing the data from all observations performed during the first 1500 satellite orbits. For Tycho (black points in Figure 3), a simple power law with photon index Γ = 2.2 ± 0.5 and a 20–150 keV flux of 1.2 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup> fits the high energy spectrum reasonably well (errors in Figure 3 and elsewhere include statistical and systematic uncertainties). However, in the case of Cas-A (red points in Figure 3), a steeper power law with Γ = 3.0 ± 0.1 (20–150 keV flux of 4.9 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup>) is insufficient to fit the data, mainly due to the presence of a significant excess around 70–90 keV, which we attribute to the Titanium-44 (<sup>44</sup>Ti) decay lines located at 68 and 78 keV. Indeed, the addition of a single Gaussian line is required by the data at a significance level greater than 99.99%, but with the data used in the present work we are not able to fully characterise the <sup>44</sup>Ti line complex, due to the poor energy resolution of the average IBIS spectra.
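The model comparison described above (power law versus power law plus a Gaussian line) can be illustrated with a toy fit; the synthetic spectrum, noise level, and starting parameters below are hypothetical stand-ins, not the actual IBIS data or response:

```python
import numpy as np
from scipy.optimize import curve_fit

# Energy grid (keV) spanning the 20-150 keV band discussed in the text.
E = np.geomspace(20.0, 150.0, 40)

def power_law(E, K, gamma):
    """Photon flux dN/dE = K * (E / 100 keV)^-gamma."""
    return K * (E / 100.0) ** (-gamma)

def pl_plus_line(E, K, gamma, A, E0, sigma):
    """Power law plus one Gaussian line (e.g. a blended 44Ti complex)."""
    return power_law(E, K, gamma) + A * np.exp(-0.5 * ((E - E0) / sigma) ** 2)

# Synthetic Cas-A-like spectrum: steep continuum plus an excess near 75 keV.
rng = np.random.default_rng(0)
truth = pl_plus_line(E, 1.0, 3.0, 0.3, 75.0, 6.0)
data = truth * (1 + 0.02 * rng.standard_normal(E.size))

popt_pl, _ = curve_fit(power_law, E, data, p0=[1.0, 2.5])
popt_full, _ = curve_fit(pl_plus_line, E, data, p0=[1.0, 2.5, 0.1, 75.0, 5.0])

# Sum of squared residuals: the line component should improve the fit.
chi2_pl = np.sum((data - power_law(E, *popt_pl)) ** 2)
chi2_full = np.sum((data - pl_plus_line(E, *popt_full)) ** 2)
print(popt_full[1])  # recovered photon index of the continuum
```

A real analysis would weight the residuals by the measurement errors and convert the fit improvement into a significance (the >99.99% quoted in the text) via an F-test or likelihood ratio.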

A more detailed analysis of the Tycho spectrum over a broader (3–100 keV) energy band was presented by Wang and Li [14], who found a two-component model fit comprising thermal bremsstrahlung with kT∼0.8 keV plus a power law with Γ ∼ 3, i.e., not much different from our fit when considering the limited energy band used here. Based on the diffusive shock acceleration theory, this non-thermal emission, together with radio measurements, implies that in this remnant protons are accelerated up to hundreds of TeV.

The broad-band spectrum of Cas-A (thermal bremsstrahlung with kT∼0.8 keV plus a power law with Γ ∼ 3.2) is similar to our result for the high energy spectrum; in this case, Renaud et al. [15], using INTEGRAL data, were also able to resolve the two <sup>44</sup>Ti decay lines at 68 and 78 keV and measure a <sup>44</sup>Ti yield from the SNR explosion of 10<sup>−4</sup> solar masses. Furthermore, the continuum emission seems to extend beyond 100 keV. Cas-A has been observed with the MAGIC telescope [16]; its spectrum, in conjunction with the Fermi-LAT one, shows a clear turn-off (4.6*σ*) at the highest energies in the 60 MeV to 10 TeV band, which can be described with an exponential cut-off at E<sub>c</sub> = 3.5 (+1.6/−1.0)<sub>stat</sub> (+0.8/−0.9)<sub>sys</sub> TeV. This is expected in the diffusive shock acceleration theory, which predicts a slowly decreasing spectral shape for the synchrotron radiation. This result implies that, even if all the TeV emission were of hadronic origin, Cas-A could not be a PeVatron at its present age [16].
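The exponential cut-off shape quoted above corresponds to dN/dE ∝ E<sup>−Γ</sup> exp(−E/E<sub>c</sub>); a minimal sketch (normalisation and photon index are hypothetical, only E<sub>c</sub> = 3.5 TeV is from the text) showing how the cut-off suppresses the flux relative to a pure power law:

```python
import math

def cutoff_power_law(E, K, gamma, E_c):
    """dN/dE = K * E^-gamma * exp(-E/E_c); E and E_c in TeV, K arbitrary."""
    return K * E ** (-gamma) * math.exp(-E / E_c)

E_c = 3.5  # best-fit cut-off energy for Cas-A (TeV)
for E in (0.1, 1.0, 3.5, 10.0):
    # Dividing by the pure power law isolates the exp(-E/E_c) suppression.
    ratio = cutoff_power_law(E, 1.0, 2.0, E_c) / (1.0 * E ** -2.0)
    print(f"E = {E:5.1f} TeV  suppression = {ratio:.3f}")
```

Well below E<sub>c</sub> the ratio is close to 1 (the spectrum looks like a plain power law), while at and above E<sub>c</sub> the flux drops steeply, which is what the MAGIC plus Fermi-LAT data constrain.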

**Figure 3.** INTEGRAL/IBIS unfolded spectra and data-to-model ratio of Tycho (black) and Cas-A (red) over the 20–150 keV energy range. A simple power law fits the Tycho data well but is not sufficient for the Cas-A data, where an excess around 70–90 keV is clearly detected (see text).

The supernova remnant RX J1713.7−3946 provides, on the other hand, a nice example of the imaging capability of INTEGRAL in the case of extended objects, since IBIS has, for the first time, resolved its spatial structure in soft *γ*-rays [17]. Figure 4a,b shows the colour-coded image of RX J1713.7−3946 obtained in the 17–60 keV energy band: a clear ring-like structure with ∼24 arcmin radius is evident. Superimposed in green are the surface brightness contours in the soft (0.1–2.4 keV) X-ray band mapped with ROSAT (top panel, Pfeffermann and Aschenbach [18]) and in TeV *γ*-rays obtained with the H.E.S.S. telescope (bottom panel, Aharonian et al. [19]). The similarity of the images in soft X-rays, soft *γ*-rays, and at TeV energies is striking, especially when considering the nine orders of magnitude coverage in photon energy. The X-ray emission of RX J1713.7−3946 is most likely dominated by synchrotron radiation of electrons in the shell regions [20], accelerated up to multi-TeV energies at the supernova shock [21].

More recently, Kuznetsova et al. [22] presented a more detailed analysis of the shell morphology as seen by IBIS, further highlighting the presence of two extended hard X-ray sources spatially consistent with the northwest and southwest rims of RX J1713.7−3946, i.e., the brightest parts of the SNR in Figure 4. Interestingly, Sano et al. [23] found a good correlation between the X-ray intensity map and the presence of major CO/HI clumps with masses greater than 50 solar masses interacting with the shock waves in the SNR. Magneto-hydrodynamic numerical simulations show that the interaction between the shock waves and the clumps generates turbulence that enhances the magnetic field and synchrotron X-rays at the shocked surface of the clumps, and the enhanced turbulence and/or magnetic field re-accelerates electrons to higher energies, providing TeV emission. Thus, RX J1713.7−3946 represents a unique laboratory for the study of a core-collapse SNR that emits bright TeV *γ*-rays and synchrotron X/soft *γ*-rays caused by cosmic rays, in addition to interactions with interstellar gas clouds.

**Figure 4.** Colour-coded image of RX J1713.7−3946 in the 17–60 keV energy band obtained with INTEGRAL/IBIS, with the surface brightness contours superimposed in green: in the soft (0.1–2.4 keV) X-ray band mapped with ROSAT (top panel) and in TeV gamma-rays obtained with the H.E.S.S. telescope (bottom panel). The coordinates are in RA/Dec. INTEGRAL picture of the month for April 2008 (https://www.cosmos.esa.int/web/integral/pom-archive, accessed on 3 May 2021) and Krivonos et al. [17].

### 3.1.2. Soft *γ*-ray Pulsars/PWN with TeV Emission

Among the several pulsar/PWN systems emitting at high energies, only four pulsars are reported in the TeV catalogue, and two of them, Vela and the Crab, are also clearly detected by INTEGRAL. Both have been extensively studied at soft *γ*-ray energies, and their broad band properties, including the VHE emission, are discussed in the review by Kuiper and Hermsen [24]. Regarding PWN, approximately 11 such systems are detected in common by INTEGRAL and TeV telescopes (see Table 1), the uncertainty being due to a few sources where the IBIS/TeV association is not straightforward. For example, the PWN associated with the Vela pulsar (named Vela X) has not been picked up by the cross correlation analysis, probably due to the complex morphology of this system at very high energies.

Indeed, an extended emission of about 0.8 degrees radius, located south of the Vela pulsar, has been observed with H.E.S.S. [25]. From observations with INTEGRAL/IBIS, Mattana et al. [26] claimed the detection of a spatially extended emission above 18 keV, after the subtraction of the main radiation from the pulsar. The morphology of this emission appears less extended than the source observed at TeV energies and is consistent with the size of the X-ray cocoon observed at lower energies (see, e.g., Mangano et al. [27], Slane et al. [28]). This suggests that INTEGRAL has actually observed both the pulsar and emission associated with the Vela-X PWN.

Other examples of complex systems are those of IGR J14193−6048/Kookaburra and HESS J1616−508/PSR J1617−5055. In the first case, two close-by, distinct TeV sources have been observed by H.E.S.S. in the Kookaburra complex (HESS J1420−607 and HESS J1418−609 [29]; see left panel of Figure 5). INTEGRAL detects a source, named IGR J14193−6048, in between these two TeV objects, although its position is shifted towards HESS J1420−607 (Figure 5, right panel). The analysis of the low energy X-ray data from Chandra suggests that the most likely identification for the INTEGRAL source is PSR J1420−6048, the pulsar that powers HESS J1420−607, although its location is barely inside the 99% IBIS error circle [30]. A soft *γ*-ray source associated with PSR J1420−6048 is also reported in the BAT 105-month catalogue [31], providing further evidence that this system (the pulsar plus its PWN) is detected in the INTEGRAL energy range. The spatial coincidence observed between HESS J1420−607 and IGR J14193−6048 (right part of Figure 5) further indicates that INTEGRAL also detected this TeV object.

**Figure 5.** (**Left panel**): HESS image of the Kookaburra complex showing two distinct sources coincident with a PWN surrounding the pulsar PSR J1420−6048 and the "Rabbit" nebula (from Aharonian et al. [29]). (**Right panel**): IBIS image of the complex in soft gamma-rays (20–100 keV). The extent of the two HESS sources (green circles) is shown together with the position of PSR J1420−6048 (yellow X) and the IBIS 99% error circle for IGR J14193−6048.

A mirror situation occurs in the case of HESS J1616−508/PSR J1617−5055: the pulsar and its PWN are certainly the counterpart of the INTEGRAL source, but not necessarily that of the object emitting at TeV energies. Despite being identified with a PWN in the TeV catalogue, HESS J1616−508 is not firmly associated with any known counterpart at other wavelengths and can, therefore, be considered still unidentified. PSR J1617−5055 is one of the possible counterparts, and its association with the TeV object has been extensively discussed by Landi et al. [32]; it is also the only pulsar in the region energetic enough to power a relic PWN, despite being offset from the centre of the TeV source by ∼9 arcmin. Since there is no strong and convincing evidence linking other sources in the field (SNR, star cluster, etc.) to HESS J1616−508 [33,34], we retain this association for further consideration.

In summary, we find at least 11 PWN systems in common between the INTEGRAL survey catalogue and the TeV surveys; interestingly, half of them are located in the Scutum arm region of our Galaxy. In Table 2, we collect for each of these 11 associations some relevant parameters at both soft and very high gamma-ray energies, including the system distance, the PWN offset from the pulsar, whether emission from either component (pulsar/PWN) is present, and the photon index and luminosity in both the 20–100 keV and 0.1–10 TeV energy bands. The systems are relatively nearby (within 10 kpc), with Vela being the closest.

Systems with pulsar/PWN offsets are also observed, namely AX J1838.0−0655 and PSR J1617−5055. From Table 2, it is clear that, while at soft *γ*-ray energies the emission is most likely due to the contribution of both the pulsar and its nebula, at TeV energies the nebula dominates in most objects, except for the Crab and Vela, where pulsed emission is clearly observed and independently reported. We note, however, that the PWN is a significant component of the 20–100 keV emission in approximately half of the systems; its contribution, typically in the range 20–30% of the total soft *γ*-ray luminosity, can reach values up to 50% in some sources [35].

A comparison between the photon indices obtained from INTEGRAL/IBIS and TeV observations indicates that PWN spectra steepen going from the soft *γ*-ray to the TeV waveband. The emission spectra of PWN can be broadly characterised by a double-peak structure: a low-energy peak produced by synchrotron radiation of electrons and a high energy hump due to Inverse-Compton (IC) up-scattering of soft-photon fields by plasma particles. As is evident in some PWN spectral energy distributions [36], the X-ray/soft *γ*-ray spectra mark the final part of the first peak, while the TeV emission defines the end part of the second peak; therefore, combining INTEGRAL and TeV data allows us to properly model the source SED and estimate fundamental physical parameters. Finally, we note that, for our PWN systems, the ratio between the TeV and soft *γ*-ray luminosities (under the assumption that the latter is 70% due to the pulsar and 30% due to the PWN) lies typically in the range 0.1–2.

**Table 2.** IBIS/TeV PWN parameters.


"y" indicates that a component (PSR or PWN) is present; y in parentheses refers to the less prominent component. L*γ*, L*TeV* are the 20–100 keV and 0.1–10 TeV luminosities in units of 10<sup>33</sup> erg s<sup>−1</sup>. References: (1) McBride et al. [37]; (2) Kuiper and Hermsen [24]; (3) de Rosa et al. [35]; (4) Forot et al. [38]; (5) Mattana et al. [39]; (6) HESS Collaboration et al. [40]; (7) Hare et al. [33]; (8) TeV catalogue.

In Table 3, we collect data related to the pulsars that power the wind nebulae seen by both INTEGRAL and TeV telescopes: in particular, we list their age, spin period, spin-down luminosity, and photon index as measured in the X-ray band. From this Table, it is evident that the characteristic age of this pulsar sample ranges from 0.7 kyr (PSR J1846−0258) up to about 43 kyr (IGR J18490−0000), thus representing a very young population, well below 50 kyr. They are all fast rotators with spin periods between 33.5 and 324 ms, six of them spinning below 100 ms. Moreover, the population is very energetic, with spin-down powers above 5 × 10<sup>36</sup> erg s<sup>−1</sup> (the lowest value is measured for the pulsar associated with the AX J1838.0−0655 system).

Assuming again that the soft *γ*-ray emission is only 70% due to the pulsar, the remaining part being related to the PWN, we obtain that the pulsar soft *γ*-ray luminosity is typically in the range 0.1–1% of the spin-down luminosity. We also note that the photon index measured from the pulsar in X-rays is typically harder than that seen by INTEGRAL (ΔΓ ∼ 0.5), a further indication of the contribution of the PWN to the total soft gamma-ray emission, since the observed steepening can be explained as synchrotron cooling away from the pulsar and in the outskirts of the PWN.
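The efficiency estimate above amounts to a simple ratio; a worked example with hypothetical but representative input values (not a specific entry of Table 3):

```python
# Hypothetical system: log10 spin-down power of 37.0 (erg/s) and a total
# 20-100 keV luminosity of 2e34 erg/s, 70% of which is attributed to the
# pulsar, as assumed in the text.
edot = 10 ** 37.0              # spin-down luminosity (erg/s)
l_soft_total = 2.0e34          # total soft gamma-ray luminosity (erg/s)
l_pulsar = 0.7 * l_soft_total  # pulsar share under the 70/30 split
efficiency = l_pulsar / edot   # fraction of the spin-down power radiated
print(f"{100 * efficiency:.2f}%")
```

The result falls inside the 0.1–1% range quoted in the text for this pulsar sample.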

Finally, future long (>Ms) observations of these objects with INTEGRAL, thanks to its spectral, imaging, and timing capabilities, will be very useful to disentangle the contribution from the different parts of these systems, allowing for the detection of various spectral components and enabling spectral breaks to be identified: see, for example, the case of MSH 15−52 [38].

**Table 3.** Characteristics of the Pulsars associated with IBIS/TeV PWN.

| System Name | Age (kyr) | Period (ms) | log Edot (erg/s) | X-ray Index |
|---|---|---|---|---|
| PSR J1846−0258 | 0.73 (1) | 324 (3) | 36.91 (1) | 1.2 (3) |
| IGR J18135−1751 | 5.6 (1) | 44.7 (3) | 37.75 (1) | 1.3 (3) |
| SNR 021.5−00.9 | 4.85 (1) | 61.8 (4) | 37.53 (1) | 1.47 (5) |
| PSR B1509−58 | 1.56 (1) | 150 (3) | 37.23 (1) | curved (3) |
| PSR J1930+1852 | 2.89 (1) | 136 (3) | 37.08 (1) | 1.21 (3) |
| IGR J18490−0000 | 42.90 (1) | 38.5 (3) | 36.99 (1) | 1.37 (3) |
| AX J1838.0−0655 | 22.7 (1) | 70.5 (3) | 36.74 (1) | 1.1 (3) |
| IGR J14193−6048 | 13.00 (1) | 68 (3) | 37.00 (1) | 0.5 (3) |
| PSR J1617−5055 | 8.15 (2) | 69 (3) | 37.20 (1) | 1.42 (3) |
| Crab | 1.24 (2) | 33.5 (3) | 38.67 (2) | curved (3) |
| Vela Pulsar | 11.3 (1) | 89 (3) | 36.84 (1) | 1.1 (3) |

References: (1) HESS Collaboration et al. [40], but see the ATNF Pulsar Catalogue (http://www.atnf.csiro.au/people/pulsar/psrcat/, accessed on 3 May 2021) references for detailed information; (2) Mattana et al. [39]; (3) Kuiper and Hermsen [24]; (4) Camilo et al. [41]; (5) Matheson and Safi-Harb [42].

### *3.2. Binaries from keV to TeV*

Gamma-ray binaries are systems that emit the dominant part of their non-thermal emission in the *γ*-ray domain and at VHE. They are still a rare class of objects, consisting of a stellar mass compact object (either a non-accreting neutron star or an accreting black hole) and characterised by broad-band emission ranging from radio to VHE. Contrary to the X/soft *γ*-ray regimes, where about 300 sources have been listed [9], only very few cases, about 11 (online TeV catalogue), have been firmly identified at VHE, and not all of them have been clearly identified in the soft *γ*-ray band. Indeed, from our cross correlation only four sources satisfy our association criteria, namely PSR B1259−63, LS 5039, Eta Carinae, and LS I +61 303.

The mechanisms for the production of TeV photons can have different physical origins. In the microquasar scenario, non-thermal particle acceleration processes occur in the jet of an accreting compact object [43,44], while, in the pulsar binary scenario, the particle acceleration is the result of the shock between the stellar and the pulsar winds [45]. In both models, the observed non-thermal emission can be derived from a hadronic or leptonic primary population. In a leptonic scenario, the TeV emission is the result of inverse-Compton scattering of electrons accelerated in the jet or in the shock region, for microquasars or pulsar binaries, respectively.

Multiple observations have been performed from radio to soft *γ*-rays up to VHE for this small class of *γ*-ray binaries, but the lack of simultaneous long term monitoring over a very broad band, including X-rays and *γ*-rays, prevents us from understanding the nature of the compact object (firmly known only for PSR B1259−63) and the physical processes responsible for the particle acceleration. In the following, we briefly review each object found by the cross correlation analysis. We analysed the average INTEGRAL/IBIS spectra of the three most significantly detected binaries by summing the data of all the observations performed during the first 1500 INTEGRAL orbits. Figure 6 shows the spectra and residuals with respect to a simple power-law model, comparing the IBIS spectra of the three binaries in the 20–200 keV energy range: LS 5039 in red, PSR B1259−63 in black, and LS I +61 303 in green. For LS 5039, a broken power law is needed to fit the high energy data well. Fit results are reported in Table 4, showing that the photon index spans from ∼0.8 for LS 5039 to 1.7 for PSR B1259−63. The steeper power law corresponds to the lower flux (∼2.1 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup>) and the harder power law to the higher flux (∼7.8 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup>), in agreement with the behaviour of the class.
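The broken power-law model used for LS 5039 can be sketched as follows; the index and break values here are hypothetical illustrations (the actual fitted values are those reported in Table 4):

```python
import numpy as np

def broken_power_law(E, K, gamma1, gamma2, E_b):
    """Photon spectrum with index gamma1 below the break energy E_b and
    gamma2 above it, kept continuous at E = E_b."""
    return np.where(E < E_b,
                    K * (E / E_b) ** (-gamma1),
                    K * (E / E_b) ** (-gamma2))

# Illustrative shape: a hard index below a ~50 keV break, steeper above.
E = np.array([20.0, 50.0, 100.0, 200.0])  # energies in keV
flux = broken_power_law(E, 1.0, 0.8, 1.7, 50.0)
```

Pinning the normalisation at the break energy keeps the two segments continuous, which is the usual convention for this model in X-ray spectral fitting.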

**Figure 6.** The top panel shows the IBIS/INTEGRAL spectra, with simple power-law model fits, of PSR B1259−63 (red), LS 5039 (green), and LS I +61 303 (black). The bottom panel shows the residuals in terms of sigmas; colours are the same as in the top panel.

**Table 4.** Results of the fits to the INTEGRAL/IBIS and SWIFT/XRT average spectra. Fluxes are in the 20–200 keV and 2–10 keV energy bands for INTEGRAL/IBIS and SWIFT/XRT, respectively. The SWIFT/XRT spectral fit included a galactic column density.


<sup>1</sup> BKN means a broken power-law and PL a simple power law model.

### 3.2.1. LS 5039

LS 5039 is a variable and periodic *γ*-ray source [46–51], with the shortest orbital period among the *γ*-ray binaries (3.9 days). At first, it was considered to be a microquasar [51]; then the source showed evidence of X-ray pulsations with a period of 9 s, suggesting that LS 5039 hosts a pulsar, although no radio pulsations have been reported to date [52,53].

Each of the models proposed to reproduce the observed SED of LS 5039, involving a microquasar [54] or a non-accreting pulsar [55], fails to describe some of its features. The recent model proposed by Molina and Bosch-Ramon [56] reproduces the observed X-ray and VHE *γ*-ray emission well, but fails to properly account for the *γ*-ray modulation. The non-thermal emission mechanism is still unknown; multiple models [57–60] have been proposed to fit the LS 5039 SED, but all fail to reproduce some of the spectral features, so more complex models appear to be needed to describe the observed emission of this source. The INTEGRAL/IBIS observations show an orbital variability in the soft *γ*-ray band (25–200 keV) in phase with the modulation detected at TeV energies by H.E.S.S. [61], suggesting that the emission could originate from the same region, possibly from particles produced in the same acceleration process. During the phase when the source is detected, it has a flux of ∼3.5 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup> and the spectrum (exposure of 3 Ms) is well fitted with a power law with Γ ∼ 2.0.

The INTEGRAL/IBIS spectrum of LS 5039 is fitted with a power law with Γ ∼ 1.3 and F<sub>20–200 keV</sub> ∼ 7 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup>. The residuals plotted in Figure 6 show a break at ∼50 keV; for this reason, we used a broken power-law model to better fit these data, with the results reported in Table 4. We built the SED over a broad energy range and plotted it in Figure 7. From low to high energy, data are from the 2MASS survey [62], WISE [63], the SWIFT/BAT catalogues [64,65], the EGRET catalogue [66], the AGILE catalogue [67], the Third Fermi catalogue [68], and HAWC [46]. The SWIFT/XRT spectrum was extracted from an observation started on 2011-09-30T20:37:08 UTC, in photon counting mode, within a 20 pixel radius circle. Table 4 reports the SWIFT/XRT fit results.

The proposed SED model does not favour any particular scenario. A SED constructed using simultaneously obtained data from radio to VHE, including INTEGRAL and CTA data, will be crucial in discriminating the physical process generating the non-thermal photons at high energies. Figure 7 shows the sensitivity of the southern and northern sites of the CTA observatory (from https://www.cta-observatory.org/science/cta-performance/, accessed on 3 May 2021). CTA's unprecedented sensitivity between 20 GeV and 300 TeV will allow LS 5039 to be studied in more detail at VHE and, more generally, will allow us to probe such sources like never before, with detailed studies of the jet-medium interaction.

**Figure 7.** The spectral energy distribution of LS 5039 using non-simultaneous data, together with the sensitivity expected from the southern (black line) and northern (orange line) CTA observatories **for 50 h** (from https://www.cta-observatory.org/science/cta-performance/, accessed on 3 May 2021). From low to high energy, the data used are from the: 2MASS survey (light blue points) [62], WISE (purple points) [63], EGRET catalog (black points) [66], AGILE catalog (green points) [67], Third Fermi catalog (light blue points) [68], HAWC (orange points) [46], and H.E.S.S. (green points) [69].

### 3.2.2. PSR B1259−63

PSR B1259−63 is a *γ*-ray binary system composed of a rapidly rotating pulsar with a spin period of 48 ms and a bright O9.5Ve stellar companion of *M*<sub>∗</sub> ≈ 30 *M*<sub>⊙</sub>, at a distance of 1.5 kpc [70]. The binary system has a period *P*<sub>orb</sub> ∼ 3.4 years in an eccentric orbit (*e* = 0.87) with an orbital inclination angle *i*<sub>orb</sub> ≈ 25 degrees [71–73]. This source displays broad-band emission extending from radio wavelengths up to VHE *γ*-rays. In the radio domain, PSR B1259−63 shows a pulsed component detected near periastron [71] and a transient unpulsed component far from periastron [74]. In the *γ*-ray band the source was periodically detected by Fermi-LAT [75–77] and by the H.E.S.S. telescopes at the periastron passage [78–81]. H.E.S.S. observations conducted from 2014 to 2017 and Fermi-LAT observations from 2010 to 2017 have been analysed by the HESS Collaboration et al. [82]. These authors reported a periodic flux variability without a clear detection of super-orbital modulation, most probably caused by the changing environmental conditions near the periastron passage. Multiple flares of PSR B1259−63 have been detected before and after periastron by the Fermi-LAT instrument. The spectra obtained in the two energy bands display a similar slope, although a common physical origin is ruled out.

The INTEGRAL satellite observed this source at the 2004 periastron passage. The IBIS/ISGRI spectrum is well fitted with a power law of photon index Γ = 1.3 ± 0.5 and an average luminosity of (8.1 ± 1.6) × 10<sup>33</sup> erg s<sup>−1</sup> in the 20–80 keV energy band [83]. Several physical models have been proposed to explain the multi-wavelength emission. The favoured model considers electrons accelerated by the shock between the pulsar and stellar winds [84–86]. Yi and Cheng [87] considered a more complex model, in which the GeV emission is due to inverse Compton scattering of soft photons by the pulsar wind. The soft photons come from an accretion disk before periastron, while after periastron the density of the disk is not high enough and accretion is prevented by the pulsar wind shock. However, none of the proposed models explain all of the features of the GeV emission.

### 3.2.3. Eta Carinae

The colliding wind binary system Eta Carinae comprises two very massive stars and is characterised by occasional very energetic eruptions. The collision of the strong stellar winds generates shocks that are also expected to produce cosmic rays. Non-thermal emission at X-ray and *γ*-ray energies is produced by accelerated particles colliding with photons or with ambient matter. In recent years, the system has been observed with INTEGRAL [88,89], Suzaku [90], XMM-Newton and NuSTAR [90]. At *γ*-ray energies, detections have been reported with AGILE [91] and Fermi [92], while the H.E.S.S. telescope detected emission up to 400 GeV [93,94]. Although the analysis indicated a point-like morphology, the picture was not sufficiently clear; later observations did not indicate variability associated with lower energy measurements [94], and no flare similar to the one reported with AGILE has been revealed. The most recent NuSTAR observation identifies Eta Carinae as the source of the high energy emission, supported by the flux variability with the binary orbital period reported previously [95]. Additionally, the high energy emission connects to the soft *γ*-ray and GeV spectrum with a power law slope of Γ ∼ 1.65. This high energy emission does not seem fully consistent with the larger fluxes reported with Suzaku and INTEGRAL (see also [96]). Simultaneous measurements at different orbital phases are needed to further constrain the nature of the source emission.

### 3.2.4. LS I +61 303

The *γ*-ray binary LS I +61 303, located at a distance of ∼2 kpc [97], consists of a compact object and a B0 Ve star in an eccentric orbit (*e* ≈ 0.7) [98] with orbital period *P*<sub>orb</sub> ∼ 26.5 days [99], confirmed by the long INTEGRAL/IBIS monitoring in hard X-rays performed from 2002 to 2008 [43]. This source also exhibits a periodic superorbital modulation with a period of ∼4.5 years, from the radio [100] to the X-ray [101] and GeV [102] bands.

The system, first reported at VHE by MAGIC [103], is generally bright at TeV energies around apastron passage, with flux levels between 1% and 25% of the Crab flux above 100 GeV [104–108]. VERITAS observed this source around apastron in 2014, when bright TeV flares were seen and flux levels peaked above 30% of the Crab Nebula flux in less than one day [109]. The short timescale and the properties of the flares, in conjunction with the emission observed at 10 TeV during the flare, favour a micro-quasar scenario, although a jet produced in a pulsar binary cannot be ruled out. For these exceptionally bright TeV flares, Paredes-Fortuny et al. [110] present a pulsar wind shock scenario with an inhomogeneous stellar wind in which the stellar disc is disrupted, thereby increasing the acceleration efficiency on a short timescale.

A young pulsar was at first suggested to be responsible for the observed radio emission [45], but no pulsations or spin period were ever detected, while the presence of long quasi-periodic oscillations in radio and X-rays [111] supports a microquasar scenario. The observation of correlated X-ray and TeV emission and non-correlated GeV-TeV emission implies that there are two distinct populations of accelerated particles producing the GeV and TeV photons [112]. Using non-simultaneous radio, X-ray, and *γ*-ray observations, Massi et al. [113] study the emission along a single orbit. They report a two-peak profile, in line with the accretion theory predicting two accretion-ejection events along the orbit. They also show that the positions of the radio and *γ*-ray peaks are coincident with X-ray dips, as expected for radio and *γ*-rays, again supporting an accretion-ejection model. As for other *γ*-ray binaries, the nature of the compact object is unconfirmed and the physical SED models are not well understood. Because LS I +61 303 is a highly variable source, future simultaneous observations, including INTEGRAL and CTA, will be essential for the study of the physical processes at VHE.

### *3.3. Soft γ-ray to TeV Radiation in INTEGRAL AGN*

Turning to the extragalactic sky, we have found 17 AGN detected both at TeV and at soft *γ*-ray energies by INTEGRAL/IBIS: 11 are BL Lac objects (nine high and two intermediate frequency peaked objects, or HBL and IBL respectively, as described in Table 1)<sup>2</sup>, three are Flat Spectrum Radio Quasars or FSRQ (PKS 1510−089, 4C +21.35 and 3C 279), and two are FR I radio galaxies (Cen A and NGC 1275). One further source, IGR J20569+4940, was until now generically classified as a blazar, of unknown class and redshift [114]. This source has only recently been announced as a TeV emitter [115] after observations with VERITAS, and it is also associated with a Fermi source (2FGL J2056.7+4939, [116,117]); it has a bright radio counterpart (NVSS 205642+494005) with a flat 2.7–10 GHz spectrum [118], as typically observed in radio-loud AGNs. It appears to be variable both in radio [119] and in X-rays [120]. Recently, using a Keck/LRIS spectrum, Clavel et al. [121] classified this source as a highly absorbed BL Lac object on the basis of a red and featureless continuum; the optical extinction is consistent with the high column density found in X-rays, which can be ascribed to both Galactic and intrinsic absorption.

In Figure 8, we assemble high energy data on this source, combining a NuSTAR observation performed in November 2015 with the average INTEGRAL and Fermi/LAT spectra and the reported TeV data: as can be seen in the figure, the source SED probably peaks in the GeV region. This SED resembles that of IGR J19443+2117 = HESS J1943+213 (see also Table 5), also in our list of associations and one of the most extreme high peaked blazars shining through the Galactic plane. Therefore, we conclude that IGR J20569+4940 is another example of a high synchrotron peaked BL Lac, as originally proposed by Fan et al. [122]. This brings the number of HBL in our list to 10, making this class the most numerous among TeV/INTEGRAL extra-galactic associations.

As recently suggested by Foffano et al. [123], the class of HBL may not be homogeneous, but may instead consist of two main sub-classes: one containing classical objects with very high energy *γ*-ray spectra peaking just below 1 TeV, and the other comprising hard-TeV sources peaking above 10 TeV. The former are probably characterised in their SED by a high synchrotron peak above 10<sup>17</sup> Hz, but with an inverse Compton hump peaking in the near-TeV *γ*-ray band. These sources show moderate to high flux variability and, in some cases, display flaring activity. Conversely, the latter show a less variable flux, but an inverse Compton peak energy exceeding the 10 TeV threshold and a synchrotron peak near or exceeding 100 keV. In Table 5, we have collected SED data as well as IBIS/TeV spectral indices (errors include statistical and systematic uncertainties) for our small sample of high energy peaked BL Lacs to check their TeV sub-classification.

<sup>2</sup> The two IBL are BL Lacertae and S5 0716+714


**Figure 8.** Spectral Energy Distribution of the blazar candidate IGR J20569+4940 **(see text)**.


**Table 5.** INTEGRAL/IBIS HBL.

‡ The soft *γ*-ray and TeV slopes S are related to the power-law photon index Γ in the same energy band by S = 2.0 − Γ; **References**: (1) Chang et al. [124]; (2) Foffano et al. [123]; (3) this work; (4) Archer et al. [125]; (5) Aleksić et al. [126]; (6) Hayashida et al. [127]; (7) Acciari et al. [128]; (8) Benbow and VERITAS Collaboration [115].
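The relation between photon index and SED slope in the footnote follows directly from the definition of the SED: for dN/dE ∝ E<sup>−Γ</sup>, the quantity E<sup>2</sup> dN/dE scales as E<sup>2−Γ</sup>. A one-line sketch (generic, not tied to any particular source in Table 5):

```python
def sed_slope(photon_index):
    """SED slope S for a power law dN/dE ~ E**(-Gamma):
    E**2 * E**(-Gamma) = E**(2 - Gamma), hence S = 2.0 - Gamma."""
    return 2.0 - photon_index

# A photon index Gamma < 2 gives a rising SED (S > 0), Gamma > 2 a falling one,
# so the SED peak lies where the local photon index crosses 2.
```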

When the extra objects discussed in Foffano et al. [123] are plotted for comparison, all of our sources belong to the subclass of typical HBL, with the only exception of 1H 1426+428, as is evident in Table 5 and Figure 9. This source is unique, as its synchrotron peak is located above 100 keV even when the source is in a low flux state [129], making it a very rare case of a hard-TeV BL Lac, as only four more such cases are known to date. It is worth noting (see Figure 9, right panel) that in the Compton peak versus soft *γ*-ray slope diagram the discrimination between typical and hard-TeV objects is more pronounced, which makes this diagram the more useful one for highlighting the most extreme high energy peaked BL Lac objects.

Only three FSRQ are detected by both IBIS and the TeV telescopes. In these cases, INTEGRAL spectra most likely probe the ascending side of the inverse Compton part of the SED, contributing to establish its peak, while the very high energy *γ*-ray emission defines the final descending part of the same component.

**Figure 9.** (**Left**): Synchrotron peak versus TeV slope; (**Right**): Compton peak versus soft *γ*-ray slope. Filled circles and the star are the sources reported in Table 5, i.e., HBL objects detected by INTEGRAL. Eight extra sources discussed in Foffano et al. [123] (open squares) are reported for comparison; their slopes have been evaluated using BAT 158-month spectra.

Typically, very high-energy (>100 GeV) emission is observed from this type of blazar either during flares or during high activity states, although persistent emission during low states has also been detected. Historical TeV activity states can be found in MAGIC Collaboration et al. [130] and references therein for PKS 1510−089, in Albert et al. [131] and Emery et al. [132] for 3C 279, and in Aleksić et al. [133] and Holder [134] for 4C +21.35. INTEGRAL/IBIS, by covering the soft *γ*-ray band of the SED, therefore provides useful information for understanding emission models and offers the possibility of comparing variability in these two wavebands, where the emission may have the same origin.

The TeV emission in all blazar-type objects is believed to originate on rather small spatial scales (≤1 pc), in the fast parts of the jet viewed almost pole-on, where strong relativistic Doppler amplification favours their high energy detection. However, in the radio galaxy Cen A, extended TeV emission has recently been reported [135], thanks to the unique proximity of the source. The estimated projected distance from the core is roughly a few kpc, although the real distance may be larger due to projection effects [136]. This observational result provides further evidence for the acceleration of ultra-relativistic electrons in radio jets and confirms their major role in the TeV emission of misaligned AGN. INTEGRAL/IBIS is not able to resolve different components within the same source, even in the case of Cen A, which is favourably located nearby: the instrument's angular resolution of 12 arcmin is not sufficient to observe extended emission at the level of a few arcminutes. However, indirect evidence for jet emission could come from broad band spectral analysis if the data cover a sufficiently wide energy range and are of good statistical quality. The best analysis of the Cen A soft *γ*-ray spectrum using all INTEGRAL instruments was performed by Beckmann et al. [137]: they gave evidence for a curvature in the spectrum with an exponential cut-off at around 400 keV, likely associated with thermal Comptonisation in the hot corona around the nucleus. Extending this Comptonisation model to the GeV range shows that it cannot explain the high-energy emission, implying the existence of an extra component of non-thermal origin, probably related to the jet. At the moment, the relative weight of these two components (thermal versus non-thermal) within the soft *γ*-ray waveband is still unknown, but further INTEGRAL observations of the source combined with variability studies of its soft *γ*-ray emission can help in solving this issue.

### *3.4. INTEGRAL Counterparts of Unidentified TeV Sources*

The online TeV catalogue lists ∼230 TeV sources (as of January 2021), and the nature of the majority of them has been firmly established through multi-wavelength observations. Interestingly, the fraction of unidentified TeV sources, with no firmly established counterparts at other wavebands, is about 25%. They lie essentially on the Galactic plane (only four unidentified sources, out of a total of 54, are far from the plane), although this could be due to an observational bias. In fact, to date, a large portion of the entire Galactic plane has been covered by observations of the current IACTs, although with non-uniform exposure. In particular, the best surveyed regions, in terms of exposure and sensitivity, are those in the inner Galactic plane at longitudes approximately from l = 250° to l = 65°, thanks to the Galactic plane surveys performed by the IACT array H.E.S.S. [138]. In light of the above, it can be inferred that the great majority of unidentified TeV sources are Galactic objects within the Milky Way. To date, the most common TeV sources firmly identified with Galactic objects include pulsar wind nebulae, supernova remnants, and high mass X-ray binaries (HMXB).

The identification of unknown TeV sources is crucial for obtaining a deeper insight into the nature of the source population at TeV energies. In addition, it is most probably among unidentified objects that peculiar sources, or even a new class of sources, could emerge. However, one of the main difficulties in the identification process is the often large error box of TeV sources; hence, positional correlation with known objects is usually not enough to firmly identify the nature of the TeV source. For this reason, a multiwavelength approach is needed to understand their nature. Searches for X-ray counterparts, especially above 20 keV, are particularly useful in finding a positionally correlated best candidate counterpart with parameters that might be expected to produce *γ*-rays at GeV-TeV energies. Furthermore, information from the soft *γ*-ray band is very useful in characterising these sources in terms of spectral shape, flux, absorption properties, and variability. In the following sub-sections, we highlight our findings from positional correlations between some unidentified TeV sources and soft *γ*-ray objects detected by INTEGRAL above 20 keV.

### 3.4.1. HESS J1841−055

HESS J1841−055 is an unidentified source discovered at TeV energies by H.E.S.S. in 2007 during the Galactic plane survey [139]. Remarkably, its emission was observed to be highly extended. This characteristic has been confirmed by the more recent H.E.S.S. results published in 2018 from the latest Galactic plane survey [138]. The source has been studied above 1 TeV by other experiments such as ARGO-YBJ [140] and HAWC [141]. According to the Fermi-LAT catalogue of Galactic extended sources [142], in this region there are two extended sources above 10 GeV whose emission overlaps with the extension at TeV energies. Recently, the MAGIC collaboration [143] reported a deep study of HESS J1841−055 using MAGIC and Fermi-LAT data. They fully confirmed the diffuse and significantly extended emission from the source, with an estimated size (∼0.4 degrees) similar to that previously measured by H.E.S.S. In addition, there are several bright and highly significant hot spots in the diffuse TeV emission that strongly hint at the presence of multiple TeV sources in the region. This indicates that several point-like sources probably contribute to the extended high energy emission from HESS J1841−055. In particular, MAGIC Collaboration et al. [143] concluded that such emission is best interpreted by scenarios involving PWNe and SNRs.

The source sky region has been further investigated at much lower energies to search for the best candidate counterparts. Although no firm counterpart has been identified, several likely associations have been proposed in the literature. To date, the most promising associations are those involving the pulsar PSR J1838−0537 and the SNR G26.6−0.1, based on spatial and energetic considerations. In addition, the point-like source AX J1841.0−0536 is an interesting candidate counterpart, as reported by Sguera et al. [144], making use of INTEGRAL data in the 3–10 keV and 20–100 keV energy bands. In fact, AX J1841.0−0536 is the only soft *γ*-ray source detected by INTEGRAL above 20 keV inside the error region of HESS J1841−055. As suggested by Sguera et al. [144], AX J1841.0−0536 could be responsible for at least a fraction of the entire TeV emission from the extended source HESS J1841−055. This proposed association, based on a striking spatial match, is also supported from an energetic standpoint by a theoretical scenario in which AX J1841.0−0536 is a weakly magnetised pulsar which, owing to accretion from the super-giant companion donor star, undergoes sporadic changes to transient Atoll-states (e.g., [145]) where a magnetic tower can produce transient jets and, as a consequence, high-energy emission. The proposed association is interesting because AX J1841.0−0536 might eventually be the prototype of a new class of Galactic TeV emitters. In fact, it is firmly identified as a member of the newly discovered class of transient HMXBs named Supergiant Fast X-ray Transients (SFXTs, [146,147]). It is noteworthy that the possible association between the SFXT AX J1841.0−0536 and HESS J1841−055 might not be a unique and rare case: other interesting associations between SFXTs and unidentified high energy sources have been proposed in the literature [148,149].

Instruments with much better angular resolution than the current generation of very high energy telescopes are required to disentangle the proposed counterparts associated with HESS J1841−055 at TeV energies. This source is naturally an interesting target for further studies with the forthcoming CTA, thanks to its excellent angular resolution (expected to be better than two arcminutes at energies above several TeV) and source localisation accuracy (with a target of 10 arcseconds).

### 3.4.2. HESS J1844−030

Most of the TeV sources detected by H.E.S.S. during its survey of the Galactic plane are extended; point-like TeV sources, i.e., those with a spatial extension on the scale of the few-arcminute resolution of H.E.S.S., are the exception. HESS J1844−030 is an unidentified point-like TeV source, as reported in the latest release of the H.E.S.S. Galactic plane survey [138]. It is spatially associated with the radio source G29.37+0.1 and the candidate X-ray pulsar wind nebula (PWN) G29.4+0.1. Castelletti et al. [150] performed a multi-wavelength study of the source region, showing that G29.37+0.1 is a radio source with a complex morphology that could be due to the superposition of two different and unrelated objects: a background extragalactic source (likely a radio galaxy) and a foreground Galactic source (likely an SNR). Because of the particularly complex morphology of G29.37+0.1, the authors were not able to disentangle the origin of the high energy nature of HESS J1844−030. Recently, Petriella [151], using radio and X-ray data, presented strong evidence that G29.4+0.1 is a PWN powered by a point-like X-ray source embedded in it. The author proposed that HESS J1844−030 is the high energy counterpart of G29.4+0.1, supporting this association with a leptonic mechanism to explain the observed TeV emission.

The sky region of HESS J1844−030 has been covered by INTEGRAL observations with an exposure of 5 Ms, according to the latest published catalogue of Bird et al. [9]. Figure 10 shows the corresponding IBIS/ISGRI deep significance mosaic map of the source sky region (20–40 keV). As can be seen, only one persistent soft *γ*-ray source has been significantly detected by INTEGRAL (7*σ* level, 20–40 keV) that is spatially associated with the positional uncertainty region of HESS J1844−030. As such, it is the best candidate counterpart in the soft *γ*-ray band. The measured 20–40 keV flux is 0.6 mCrab, or 4.5 × 10<sup>−12</sup> erg cm<sup>−2</sup> s<sup>−1</sup>. The source detected by INTEGRAL has been named AX J1844.7−0305 [9], since the latter is the closest catalogued X-ray object. A further in-depth investigation of the INTEGRAL data on AX J1844.7−0305 is currently under way to definitively confirm the proposed physical association with HESS J1844−030. The information collected in the soft *γ*-ray band could be useful for characterising HESS J1844−030 in terms of variability, absorption properties, and spectral shape, and hence shed more light on its nature.
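The mCrab-to-physical-flux conversion quoted above can be checked with a small sketch. The 20–40 keV Crab reference flux below (∼7.5 × 10<sup>−9</sup> erg cm<sup>−2</sup> s<sup>−1</sup>) is simply the value implied by the two numbers in the text, treated here as an assumption rather than the official IBIS calibration:

```python
# Assumed 20-40 keV Crab flux implied by the text (0.6 mCrab = 4.5e-12 cgs);
# illustrative only, not an official instrument calibration value.
CRAB_FLUX_20_40 = 7.5e-9  # erg cm^-2 s^-1

def mcrab_to_cgs(mcrab, crab_flux=CRAB_FLUX_20_40):
    """Convert a flux in milliCrab to erg cm^-2 s^-1 for a given Crab reference."""
    return mcrab * 1e-3 * crab_flux
```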

**Figure 10.** INTEGRAL/IBIS mosaic significance image (20–40 keV, 5 Ms exposure time, Galactic coordinates) of HESS J1844−030 sky region. The white circle (90% confidence radius of 3.7 arcminutes) represents the positional uncertainty region of the source AX J1844.7−0305 detected by INTEGRAL (7*σ* level). The green circle (95% confidence radius of 2.4 arcminutes) represents the positional uncertainty region of the point-like TeV source HESS J1844−030. The other very bright source that is detected by INTEGRAL in the image is the PWN around PSR J1846−0258.

### 3.4.3. HESS J1808−204

HESS J1808−204 is an unidentified TeV source characterised by steady and extended emission. Based on a striking spatial correlation, potential counterparts to the high energy emission are the massive stellar cluster Cl\* 1806−20, the luminous hypergiant star LBV 1806−20, and the soft *γ*-ray repeater SGR 1806−20 [138]. Although not firmly established yet, all such associations are very interesting, because they offer intriguing potential for high energy particle acceleration in peculiar sources. In fact, Cl\* 1806−20 is a massive stellar cluster hosting numerous energetic stars, such as Wolf-Rayet stars and OB super-giants, and, in particular, the rare luminous blue variable hyper-giant LBV 1806−20, which generates a very powerful wind and is among the most massive and luminous stars known in our Galaxy. Interestingly, SGR 1806−20 is also a member of the cluster Cl\* 1806−20. It is a strongly magnetised neutron star belonging to the class of magnetars, whose soft/hard X-ray emission is powered by the decay of their huge magnetic field. In particular, SGR 1806−20 is known to be an active source of short and energetic soft *γ*-ray flares, and is especially famous for its December 2004 giant flare with an energy release of several 10<sup>46</sup> erg, one of the strongest outbursts ever recorded at soft *γ*-rays from any known soft gamma-ray repeater [152]. From an energetic stand-point, in principle, the TeV energy flux of HESS J1808−204 could be produced by the energetic members of the massive star cluster Cl\* 1806−20 through particle acceleration resulting from the stellar wind interaction over parsec scales, according to the cluster size and stellar density [138,153]. In particular, the member LBV 1806−20 could dominate much of the stellar wind energy and, therefore, drive most of the particle acceleration.
An intriguing possibility is that SGR 1806−20 could contribute an additional component of emission to HESS J1808−204 through inverse Compton scattering from a PWN powered by the rapid decay of the magnetic field, similarly to the case of HESS J1713−381 and the magnetar CXOU J171405.7−381031 [154]. A major uncertainty of such a leptonic scenario is that X-ray observations have, so far, failed to identify a PWN in the region.

We note that SGR 1806−20 is the only persistent soft *γ*-ray source detected by INTEGRAL inside the error region of HESS J1808−204 in the broad 20–100 keV energy range, based on a total of about 8 Ms of exposure time according to the latest INTEGRAL/IBIS catalogue [9]. The measured fluxes are 2 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup> (20–40 keV) and 3.2 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup> (40–100 keV). The source is characterised by a soft *γ*-ray tail extending significantly up to at least 150 keV [155]. The emission has a power law spectrum with a photon index in the range 1.5–1.9 and a 20–100 keV flux of 5 × 10<sup>−11</sup> erg cm<sup>−2</sup> s<sup>−1</sup> (see the spectrum in Mereghetti et al. [155]), corresponding to a luminosity of 1.3 × 10<sup>36</sup> erg s<sup>−1</sup> at a distance of 15 kpc. INTEGRAL observations of magnetars provided the first ever detection of persistent emission in the 20–200 keV energy range and opened an important new diagnostic for studying the physics of magnetars.
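The quoted luminosity follows from the isotropic relation L = 4πd²F. As a back-of-the-envelope check with the numbers in the text (a sketch only, not the authors' analysis pipeline):

```python
import math

KPC_IN_CM = 3.086e21  # 1 kiloparsec in centimetres

def isotropic_luminosity(flux_cgs, distance_kpc):
    """Isotropic luminosity L = 4 * pi * d**2 * F, in erg s^-1,
    for a flux in erg cm^-2 s^-1 and a distance in kpc."""
    d_cm = distance_kpc * KPC_IN_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs

# SGR 1806-20: F(20-100 keV) = 5e-11 erg cm^-2 s^-1 at 15 kpc
L = isotropic_luminosity(5e-11, 15.0)  # ~1.3e36 erg s^-1, as quoted in the text
```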

### **4. Summary**

We have shown that, as expected, there is a significant correlation between the recent INTEGRAL/IBIS 1000 orbit catalogue and the online TeV source list. By analysing this correlation, we have further shown that 39 objects (∼20% of the VHE *γ*-ray catalogue) have emission in both the soft *γ*-ray and TeV wavebands, providing an indication of the usefulness of combining information at these frequencies. The objects discussed belong to various classes, both Galactic and extra-galactic, as well as unclassified sources. In the Galactic realm, compact objects, like binary systems and pulsars, and extended objects, like SNRs and PWNe, are reported and discussed in detail. In the extra-galactic case, AGN of various classes have been found and investigated, namely the two types of blazars (BL Lac and FSRQ) as well as radio galaxies. Finally, the identification of TeV objects still lacking a definite counterpart can benefit from information at soft *γ*-ray frequencies. Overall, by discussing the individual object classes, we have further emphasised the importance of adding soft *γ*-ray information to the broad knowledge of TeV sources. With this in mind, the INTEGRAL legacy (19 years of measurement collection with spectral, timing, and imaging information) will provide one of the most extensive databases to be exploited, particularly on the Galactic plane, where many TeV sources are located. These data are made available to the astrophysical community through various channels: the online archive service at the ISDC (https://www.isdc.unige.ch/integral/heavens, accessed on 3 May 2021), specific ongoing projects such as the Galactic plane survey, which makes data publicly available almost in real time (http://gps.iaps.inaf.it/, accessed on 3 May 2021), and direct contact with the teams in charge of the INTEGRAL surveys (any of the authors of this paper for INTEGRAL 1000 orbit catalogue products).

At the moment, INTEGRAL is operating in nominal mode with all of the instruments working well. The mission is approved for operations until the end of 2022, with a possible further extension until 2025. If approved, this will give an opportunity to perform high sensitivity simultaneous investigations in the keV to TeV energy range through the middle of the decade. As such, it provides an observational opportunity for currently operational VHE telescopes, while building, at the same time, a strong legacy for future projects such as the CTA.

**Author Contributions:** The authors of the present paper are part of the IBIS survey team. As such, they all contributed to building the images, mosaics, light curves and spectra of all sources included in the IBIS catalogues constructed over the years. They are also deeply involved in the search for counterparts of the unidentified sources detected by INTEGRAL/IBIS and SWIFT/BAT, and are preparing a new survey that will be a legacy for future observations with the CTA project. Conceptualization, all authors, building on previous work done together on the same subject; methodology, all authors, following several discussions of the analysis method; software for cross-correlation analysis and results, J.B.S.; formal analysis, investigation and writing of the original draft, all authors, with separate responsibility for individual sections (Section 1: P.U., L.B., A.J.B.; Section 2: J.B.S.; Section 3.1: L.B., A.M., L.N., E.P.; Section 3.2: A.B., M.F.; Section 3.3: A.M., E.P.; Section 3.4: V.S.; summary: A.J.B.); data curation, all authors within their specific expertise (see above); writing—review and editing, A.M., J.B.S., P.U.; visualization, A.M., J.B.S., M.F., V.S.; supervision, A.M.; funding acquisition, A.M., L.N. All authors have read and agreed to the published version of the manuscript.

**Funding:** The research by the Italian co-authors was funded by various agreements between the Italian Space Agency (ASI) and INAF over the past 20 years, the latest of which is 2019–35-HH.0. A.J.B. acknowledges partial support for this research from the European Union Horizon 2020 Programme under the AHEAD project (grant agreement n. 654215).

**Data Availability Statement:** All data reported in the paper are publicly available on the catalogues linked throughout the text.

**Acknowledgments:** The authors would like to acknowledge the contribution made in the field of INTEGRAL/TeV associations, particularly PWN, by their friend and colleague A.J. Dean, who has played over the years an inspiring role for many scientists.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this manuscript:


### **References**


### *Review* **EAS Arrays at High Altitudes Start the Era of UHE** *γ***-ray Astronomy**

**Zhen Cao 1,2,3**


**Abstract:** The evolution of extensive air shower detection as a technique for *γ*-ray astronomical instrumentation over the last three decades is reviewed. The first discoveries of galactic PeVatrons by the Large High Altitude Air Shower Observatory demonstrate the importance of this technique in ultra-high energy *γ*-ray astronomy. Utilizing this technique, the origins of high energy cosmic rays may be discovered in the near future.

**Keywords:** PeVatron; Crab Nebula; angular resolution; energy spectral distribution; *γ*-ray astronomy

### **1. Introduction**

As of today, the Large High Altitude Air Shower Observatory (LHAASO) has detected a dozen PeVatrons. A few of them emit very energetic photons around and even above 1 PeV. This discovery opens the window to Ultra High Energy (UHE, *E*<sub>γ</sub> > 0.1 PeV) *γ*-ray observation. The traditional cosmic ray (CR) detection method of observing extensive air showers (EAS) on the ground has evolved into a successful *γ*-ray detection technique over three decades of remarkable development. The successful detection of PeVatrons in the UHE band indicates that both the sensitivity and the angular resolution of EAS detection have reached the high standards of *γ*-ray astronomical studies set by other very successful experiments in the Very High Energy (VHE, *E*<sub>γ</sub> > 0.1 TeV) and High Energy (HE, *E*<sub>γ</sub> > 0.1 GeV) bands. The long-standing mystery of the origins of galactic CRs may be uncovered in this new era of UHE *γ*-ray observation utilizing EAS detection at high altitudes. As this technology may soon lead to further important discoveries, this paper reviews the history of EAS detection technology in *γ*-ray astronomy, outlining its major achievements, advantages and disadvantages.

### **2. The Pioneers of EAS Arrays for** *γ***-ray Astronomy**

In the 1980s, at the onset of VHE *γ*-ray astronomy, the goal was to develop a technique sensitive enough to detect *γ*-rays from a single source. However, it was unclear whether such a source even existed or, more quantitatively, what the flux level of the *γ*-rays to be detected might be. It was the job of these pathfinders to create or improve the detection techniques available at the time. EAS detection had already been a viable and accepted technique for CRs at energies well above 100 TeV: a sampling rate of the shower front sufficient to measure the shower direction and energy with the desired accuracy was economically achievable. However, this technique suffered from well-known weaknesses. One was that the threshold energy was high at sea level, where most arrays were located at that time. The other was that distinguishing between CR and *γ*-ray events, and thereby suppressing the CR background, was too expensive to be affordable. The EAS technique was therefore not an obvious choice unless a very strong motivation existed. The claim of *γ*-ray detection at the PeV energy level [1] in 1983 provided the motivation required to spur a wave of experiments using EAS arrays.

**Citation:** Cao, Z. EAS Arrays at High Altitudes Start the Era of UHE *γ*-ray Astronomy. *Universe* **2021**, *7*, 339. https://doi.org/10.3390/ universe7090339

Academic Editors: Ulisses Barres de Almeida and Michele Doro

Received: 10 August 2021 Accepted: 5 September 2021 Published: 9 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The working energies of a typical EAS array of shower particle counters, with a typical spacing of about 15 m, would be ideal for the detection of PeV events. Two experiments, CASA-MIA [2] in Utah and AS*γ* [3] in Tibet, set out to overcome the shortcomings of the EAS detection method.

CASA-MIA [2], as schematically shown in Figure 1, was built to achieve a threshold energy around 0.1 PeV by selecting a site at 1600 m above sea level (a.s.l.). Detectors spaced 15 m apart over 1/4 km<sup>2</sup> attempted to observe the theorized very low flux of *γ*-ray photons. The array achieved an angular resolution of 1.2° (hereafter in this paper, the angular resolution is defined as the opening angle of a cone in which 68% of the events from a point source are contained). In addition to the EAS particle counter array, a muon detector array with an active area of 2560 m<sup>2</sup> was built to measure the *μ*-content of EAS events. The *μ*-content is much smaller in a *γ*-ray induced EAS than in one induced by an ordinary CR particle; therefore, around 90% of the background CR events were rejected using this muon-poor cut.

**Figure 1.** Schematic of layouts of the CASA-MIA array and AS*γ* in phases before 2002.

AS*γ* [3], schematically shown in Figure 1, is a smaller array than CASA-MIA, situated at 4300 m a.s.l. The array used smaller counters of 0.5 m<sup>2</sup> with the identical spacing of 15 m. Due to the high altitude, the threshold energy was below 10 TeV, and a similar angular resolution of 1° was achieved. Without muon detectors in the array, the detection of *γ*-rays from a point-like source was based solely on statistics. With sufficient exposure to the source within the angular range defined by the Point Spread Function (PSF), the significance ∼*N*<sub>γ</sub>/√*N*<sub>CR</sub> eventually becomes acceptable, e.g., >5*σ*. Here *N*<sub>γ</sub> is the number of photons from the source and *N*<sub>CR</sub> is the number of background CR events, estimated using a window of the same size as the PSF in a slightly different direction from the source. Since both grow linearly with exposure time, the significance grows as the square root of the exposure and eventually becomes statistically acceptable. Since the AS*γ* experiment has had many major upgrades since 1993, the original array of that time is referred to as Tibet-I. The difference between the two experiments, CASA-MIA and Tibet-I, was quite obvious.
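The statistics-only detection strategy described above can be sketched numerically. The rates below are hypothetical, chosen purely to illustrate the scaling; the point is only that the significance grows as the square root of the exposure:

```python
import math

def significance(rate_gamma, rate_cr, t):
    """Simple signal-to-noise estimate S = N_gamma / sqrt(N_CR).

    rate_gamma: photon rate from the source inside the PSF window (events/h)
    rate_cr:    CR background rate inside a same-size window (events/h)
    t:          exposure time (h)
    """
    n_gamma = rate_gamma * t
    n_cr = rate_cr * t
    return n_gamma / math.sqrt(n_cr)

# Hypothetical rates: 0.5 photons/h against 400 CR events/h in the PSF window.
s_100 = significance(0.5, 400.0, 100.0)
s_400 = significance(0.5, 400.0, 400.0)

# Both counts grow linearly with t, so S grows as sqrt(t):
# quadrupling the exposure doubles the significance.
print(round(s_400 / s_100, 3))  # → 2.0
```

With such rates a 5*σ* detection simply requires waiting long enough, which is why exposure time was the only handle available to an array without muon detectors.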

### **3. Altitudes of the EAS Array**

A few years after the discovery of the first VHE *γ*-ray source by the Whipple experiment [4], namely the Crab Nebula, nowadays referred to as the 'standard candle' of the northern sky, the CASA-MIA collaboration published upper limits on the *γ*-ray flux above 100 TeV for Cyg X-3, Her X-1 [5] and the Crab Nebula [6]. By 1996, the angular resolution of the array had been improved to 0.5° above 100 TeV. The experiment had reached its limits, and the operation was terminated.

Remarkably, the AS*γ* experiment was also able to set a limit on the *γ*-ray flux of the Crab Nebula in 1992 [3], at a much lower energy of around 10 TeV. Without discrimination between *γ*-ray and proton induced showers, the small array (about 1/25 the area of CASA-MIA) of smaller scintillator counters (1/2 the size of the counters in the CASA array) set a limit closer to the flux extrapolated from the Whipple experiment's measurements. This achievement highlighted the huge advantage of a high-altitude site. However, it was still difficult to reach a sensitivity of 1 Crab Unit (CU), i.e., the photon flux from the Crab Nebula. The experiment was upgraded to Tibet-III by 2002. To take further advantage of the site's high altitude, the spacing between counters was reduced to 7.5 m. This pushed the threshold energy lower while keeping the same angular resolution of 1°. Lowering the threshold energy improves the signal-to-noise ratio *N*<sub>γ</sub>/√*N*<sub>CR</sub> as long as the spectra of the *γ*-ray sources are not too much harder than the CR spectrum. For the Crab Nebula, the spectral index is ∼2.6, close enough to the CR's ∼2.7.
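The benefit of a lower threshold can be made quantitative. For a power law d*N*/d*E* ∝ *E*<sup>−p</sup>, the integral counts above a threshold *E*<sub>th</sub> scale as *E*<sub>th</sub><sup>1−p</sup>, so with the indices quoted above the signal-to-noise ratio scales as *E*<sub>th</sub><sup>−0.75</sup>. A minimal sketch (flux normalizations are arbitrary, so only the ratio is meaningful):

```python
# Integral counts above threshold for a power law dN/dE ∝ E^-p scale as
# N(>E_th) ∝ E_th^(1-p).  The ratio N_γ/√N_CR then scales as
# E_th^(1-p_γ) / E_th^((1-p_CR)/2).
p_gamma, p_cr = 2.6, 2.7   # Crab-like photon index vs. CR index

def sn_scaling(e_th):
    """Relative signal-to-noise at threshold e_th (arbitrary units)."""
    n_gamma = e_th ** (1.0 - p_gamma)      # ∝ E^-1.6
    n_cr = e_th ** (1.0 - p_cr)            # ∝ E^-1.7
    return n_gamma / n_cr ** 0.5           # ∝ E^-0.75

# Lowering the threshold from 10 TeV to 3 TeV:
gain = sn_scaling(3.0) / sn_scaling(10.0)
print(round(gain, 2))  # ≈ (10/3)**0.75 ≈ 2.47
```

A factor of ∼3 in threshold thus buys roughly a factor of 2.5 in sensitivity, as long as the source spectrum stays softer than, or comparable to, the CR spectrum.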

Achieving a sensitivity of 1 CU was certainly a landmark for the first generation of VHE *γ*-ray EAS detectors. During the same period, two other techniques were also developed, bringing more diversity to the EAS detection community: the water Cherenkov detector (WCD) array of the MILAGRO experiment and the Resistive Plate Chamber (RPC) array of the ARGO-YBJ experiment.

### **4. Background Suppression via** *γ***/CR Separation**

At Fenton Hill, New Mexico, US, the MILAGRO experiment [7] reused a 60 m × 80 m × 8 m water pool, part of a Los Alamos geothermal experiment, at an altitude of 2600 m a.s.l. A 3 m × 3 m grid of 8-inch photomultiplier tubes (PMTs) was deployed at a depth of 1.5 m beneath the surface to measure EAS secondary particles, mainly *γ* photons, electrons and positrons (*e*<sup>−</sup>, *e*<sup>+</sup>), via the Cherenkov light produced in water by the sub-showers those particles initiate. In CR induced showers, a certain number of muons are produced, which easily penetrate the entire depth of the pool. Muons have a wider lateral distribution than the electromagnetic (EM) particles in a shower, and at distances far from the shower core they produce larger Cherenkov light signals than EM particles do, owing to their longer trajectories in water. In the MILAGRO experiment, a so-called muon layer of PMTs, also in a 3 m × 3 m grid, was deployed 7 m beneath the water surface to detect those muon signals. The MILAGRO experiment was a successful application of the EAS detection technique in terms of the angular resolution of the shower direction, 0.4°, and the *γ*/CR discrimination capability. These are the two key parameters for enhancing the sensitivity to *γ*-ray sources: improving either reduces the denominator, √*N*<sub>CR</sub>, in the signal-to-noise ratio. It was remarkable that the 4800 m<sup>2</sup> detector achieved a sensitivity of 1 CU even at the altitude of 2600 m a.s.l., where EASs with energies around 10 TeV have severely decayed in the atmosphere. In this regard, the water Cherenkov technique performed better than scintillators because all EAS particles were detected, including secondary *γ*-rays, which cannot be detected by thin scintillator plates. Because photons outnumber electrons and positrons in a shower by about a factor of 10, their efficient detection in water enables acceptable measurements of shower fronts even at late stages of shower development. At the same time, MILAGRO demonstrated that the muon content of an EAS is a powerful tool for *γ*/CR separation, even if the exact number of muons is not precisely measured.

As a pioneering WCD experiment, MILAGRO also provided many valuable lessons at the technical level. For instance, avoiding optical cross-talk between photosensors proved important for improving the angular resolution. Another very useful lesson was that, for the purpose of *γ*/CR separation, the muon content can be measured sufficiently well without the bottom layer of PMTs. The size of the pond is crucial as well: it has to be large enough that the entire shower front is contained. At relatively low altitudes, shower fluctuations eventually spoil the shower energy measurement. The last two points, however, are general lessons, not specific to the WCD technique.

Most of these improvements were incorporated in the better designed HAWC experiment [8], located at Sierra Negra, Mexico (18°59′41″ N, 97°18′30″ W) at an altitude of 4100 m a.s.l., with an effective size of 22,500 m<sup>2</sup>. A sensitivity of 0.1 CU for *γ*-ray detection was achieved, with an angular resolution of ∼0.2° at a few TeV and even close to 0.1° around 10 TeV for well selected events. Thanks to its large size and high altitude, HAWC also achieved a good shower energy resolution of better than 20%. Many discoveries have been made since it was built in 2015, including the pulsar halos associated with Geminga and Monogem [9].

The AS*γ* experiment deployed muon detectors over a large area of 3600 m<sup>2</sup> about 23 years after its phase-I operation. The *γ*/CR discrimination power was significantly improved, with the muon content well measured for showers falling inside the surface array. This boosted the sensitivity of the array to ∼0.1 CU at a higher energy than HAWC, ∼20 TeV, and enabled the detection of the most energetic photon known as of 2019, at ∼450 TeV from the Crab Nebula [10].

### **5. Low and High Threshold Energy and Field of View of** *γ***-ray Detection**

By 2006, the ARGO-YBJ experiment had deployed 5600 m<sup>2</sup> of RPCs in a warehouse at Yangbajing, China, at 4300 m a.s.l. (30°06′38″ N, 90°31′50″ E). With an active coverage of 93%, most secondary charged particles in an EAS event below a few TeV were recorded, as shown in Figure 2. At such an altitude, this allowed the detection of showers at energies as low as 300 GeV. The shower arrival directions were measured with a resolution of 1.0° around 1 TeV. A sensitivity of 0.7 CU was achieved without any *γ*/CR discrimination capability.

**Figure 2.** ARGO-YBJ detectors and an example of a recorded event. In the left panel, the inside view of the ARGO-YBJ hall with RPCs deployed on the floor. Each RPC has a size of 3.3 m<sup>2</sup> and includes 10 units, each of which counts up to 8 particles and times their arrival with a resolution of ∼1 ns. In the right panel, an event recorded by the central array. The vertical axis is the arrival time of the particles in each unit, whose location is shown in the x-y plane of the array.

With this low energy threshold, the median EAS energy for the lowest bin of the number of hits measured in a shower (i.e., 20–39) reached 300 GeV. ARGO-YBJ opened a window for monitoring transient phenomena within a range of 150 Mpc using a round-the-clock, wide field-of-view (FoV) detector. Flares of Mrk 421 and Mrk 501 were studied in detail. A hardening of their Inverse Compton (IC) spectra during the flares was clearly observed [11], with a strong correlation to similar features of the synchrotron spectra observed by X-ray telescopes. Unfortunately, no Gamma Ray Burst (GRB) was detected before operation stopped in 2013, as the threshold was not sufficiently low.

The wide FoV revealed interesting features of galactic *γ*-ray sources, including the identification of the Cygnus Cocoon in the TeV band [12]. A couple of 'point sources' were found to be spatially extended, with integrated *γ*-ray fluxes over the extended regions higher than those of the 'point sources' observed by imaging air Cherenkov telescope (IACT) instruments; J1908+0621 was a typical example [13]. The matter distribution around such sources, e.g., molecular clouds, should be considered when investigating these extended sources. The clouds could be targets hit by the primary CRs accelerated in PeVatrons, emitting very energetic *γ*-rays through *π*<sup>0</sup> decays. These extended systems may carry extra information about the *γ*-ray radiation mechanism: since electrons typically do not reach distant clouds due to their short cooling distance, such systems may provide further constraints on the origin of the *γ*-rays.

Many lessons have been learned about utilizing the EAS technique for *γ*-ray detection. There is a clear low-energy limit of around 100 GeV at altitudes between 4000 m and 5000 m a.s.l. Shower fluctuations limit the angular resolution. The lack of a *γ*-ray specific signature distinguishing its EAS from CR showers limits the sensitivity of *γ*-ray detection. In EAS detection, secondary *γ*-rays are an important component that should not be ignored: the WCD technique, which fully uses all *γ*-rays and *e*<sup>+</sup>, *e*<sup>−</sup> in the shower front, achieves a better angular resolution at the same energy than the RPC technique. Moreover, the outstandingly large Cherenkov light signals generated by muons on top of the shower front can be used to differentiate CR showers from *γ*-ray showers. However, CR showers below several hundred GeV do not have sufficient muon content for this indicator to work. To carry out precise spectral and morphological measurements of *γ*-ray sources, experiments should move into the high energy domain to maximize the advantages of the EAS technique. In the low energy domain, i.e., below several hundred GeV, EAS techniques can exploit their wide FoV and 24-h operation to monitor all kinds of transient phenomena. Such observations carry more weight in the era of multi-messenger astronomy.

### **6. LHAASO: The New Generation of the** *γ***-ray Experiment**

The Large High Altitude Air Shower Observatory (LHAASO) [14,15] is a complex of EAS detector arrays located at Mt. Haizi, China (29°21′27.6″ N, 100°08′19.6″ E) at 4410 m a.s.l. The site is at the edge of the Qingzang plateau near Daocheng, Sichuan Province. LHAASO consists of a uniformly distributed EAS array covering an area of 1.3 km<sup>2</sup> (KM2A), with 5216 scintillator counters (ED, 1 m<sup>2</sup> active area) and 1188 muon detectors (MD, a WCD of 36 m<sup>2</sup> and 1.2 m depth, buried under 2.5 m of soil). EDs are spaced 15 m apart and MDs 30 m apart. At the center of the array, the WCD Array (WCDA) covers 78,000 m<sup>2</sup>. A Wide FoV air Cherenkov/fluorescence Telescope Array (WFCTA) of 18 telescopes is also deployed. With the exception of the WFCTA, which is designed for charged CR detection and the knees of the spectra of individual CR species, the main deployment of LHAASO detectors is for *γ*-ray astronomical observation. The layout of the entire LHAASO array is illustrated in Figure 3.

The WCDA at the center of the LHAASO array is designed for surveying VHE *γ*-ray sources in the northern hemisphere using the WCD technique. A total of 3120 detector units, 5 m × 5 m each and optically separated by black curtains, are equipped with a pair of PMTs, one large and one small, at 4 m beneath the water surface, to measure the arrival time and the amount of Cherenkov light generated by EAS particles in the shower front, respectively. Such a layout enables an angular resolution of 0.5° at the most favorable energy of 2 TeV. 99.5% of CR showers can be rejected as background using the outstanding signals generated by muons at a certain distance from the shower cores. A sensitivity of 1% CU is achieved for the all-sky survey. In 2/3 of the units, the larger PMTs have 20-inch photocathodes, so that the WCDA still has a sensitivity of 0.5 CU at 70 GeV. This makes the WCDA an efficient monitor for transient phenomena, such as GRBs, over 1/7 of the northern sky at any moment. A dynamic range from 1 to 3000 photoelectrons (PE) per unit allows a good measurement of the spectral energy distribution (SED) of a *γ*-ray source from 0.5 to 10 TeV.

**Figure 3.** Schematic of the LHAASO layout. Red dots represent the 5216 EDs, distributed in the inner circular array of 1 km<sup>2</sup> with a spacing of 15 m and in the outer ring with a spacing of 30 m. The sub-array in the ring is for vetoing showers whose cores fall outside the 1 km<sup>2</sup> array. Blue filled circles represent the 1188 MDs in the 1 km<sup>2</sup> array with a spacing of 30 m. Light blue rectangles at the center of the site, shown enlarged, represent the WCDA in three ponds. The enlarged pond has 900 units, divided into 25 clusters and covering an area of 150 m × 150 m. Black rectangles near the WCDA represent the 18 telescopes of WFCTA. In the enlarged part, the detailed locations and orientations of the telescopes are indicated as well.

KM2A covers the higher energy range from 10 TeV to a few PeV, also with a sensitivity of 1% CU, at the favorable energy of around 50 TeV for *γ*-ray detection. This means that a source with a flux as faint as 2 × 10<sup>−14</sup> TeV cm<sup>−2</sup> s<sup>−1</sup> above 100 TeV can be detected in one year with a significance of 5*σ*. Such a capability is mainly due to two features of KM2A. First, the active area of the MDs reaches 4% of the total area of the entire 1 km<sup>2</sup> array. This enables a CR background rejection of 10<sup>−4</sup> at 100 TeV and 10<sup>−5</sup> above 600 TeV, meaning an essentially 'background free' detection of *γ*-ray sources above a few hundred TeV, in other words of 'PeVatrons'. Secondly, an angular resolution of 0.3° for the shower arrival direction is achieved using the array of EDs with a spacing of 15 m and a clock synchronization accuracy of 0.5 ns [16] for all 5216 EDs. Further contributing to the 0.3° resolution, a 5 mm lead plate was installed on top of each scintillator in the EDs to convert EAS secondary *γ*-rays into pairs of *e*<sup>+</sup> and *e*<sup>−</sup>.
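The impact of deep muon-based background rejection on detection significance can be illustrated with Poisson counting statistics. The event numbers below are hypothetical, not KM2A values; the formula is the widely used Asimov approximation for the median discovery significance:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance of s signal events over b background
    events (Asimov approximation for a Poisson counting experiment)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical illustration: 1e6 CR showers per year land in the PSF window
# at UHE, and the muon-poor cut keeps only a fraction `rejection` of them.
cr_in_window = 1.0e6
photons = 30  # assumed UHE photons from the source per year
for rejection in (1e-2, 1e-4, 1e-5):
    b = cr_in_window * rejection
    print(f"rejection {rejection:g}: {asimov_significance(photons, b):.1f} sigma")
# Deeper rejection turns the same 30 photons from an invisible excess
# into a clear discovery.
```

Under these assumed numbers, the same photon count yields well below 1*σ* at 10<sup>−2</sup> rejection but exceeds 5*σ* at 10<sup>−5</sup>, which is the sense in which sub-10<sup>−4</sup> rejection makes UHE detection 'background free'.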

LHAASO is, therefore, a facility dedicated to the search for PeVatrons via UHE *γ*-rays with unprecedented sensitivity. This is no doubt the most promising step toward unveiling the origins of UHE CRs. Before the entire array was deployed in July 2021, half of KM2A started operation at the beginning of 2020, followed by another quarter of the array merged into the data stream by the end of the same year. By March 2021, 530 UHE photons had been collected from 12 sources with almost no CR background, revealing PeVatrons in our galaxy unambiguously. The significance threshold was set at 7*σ* for the PeVatron search. Except for a very few newly discovered PeVatrons, most of them were already well known and found in the VHE *γ*-ray catalog. Our galaxy is actually full of PeVatrons, almost evenly distributed in the disk. Most of the UHE photon spectra show no clear 'cut-off' feature. This indicates that the PeVatrons may be accelerating particles to even higher energies than the maximal photon energies reported in Ref. [17]. In fact, two super-PeV photons have been detected, from the Crab at 1.1 PeV and from the Cygnus region at 1.4 PeV [17,18], indicating the existence of super-PeVatrons in our galaxy. The super-PeVatrons, certainly not limited to those two, may be responsible for the flux of CRs above the knees. These first discoveries mark the onset of UHE *γ*-ray astronomy. In the following years, more PeVatrons will be unveiled and their detailed radiation features will be measured, bringing us closer to the goal of finding the origins of CRs.

As one of the PeV *γ*-ray sources, the Crab Nebula has been measured in detail with LHAASO over 3.5 orders of magnitude in energy, from 0.5 TeV to 1.1 PeV. The WCDA measured the SED in the energy range from 0.5 to 12 TeV and KM2A from 12 TeV upwards. The two components of the LHAASO detector thus provide independent, overlapping measurements of the *γ*-ray flux of the 'standard candle' in the same energy bin at 12 TeV, providing a cross-check within a single collaboration. The agreement of the SED measurement with other experiments below 100 TeV affirms the maturity of the techniques [18]. Adding the measurements in the highest energy band to the multiwavelength analysis, many interesting features of the Crab Nebula are unveiled. For such a well-known point source, the LHAASO data support the well-established radiation mechanism at least up to 100 TeV, i.e., a strong pulsar wind of *e*<sup>+</sup>*e*<sup>−</sup> forms the nebula, with termination shocks that further accelerate *e*<sup>+</sup>*e*<sup>−</sup> to very high energies. It is found that a simple one-zone model is able to reproduce the very complex SED over 10 decades of energy. In this model, the same population of *e*<sup>+</sup> and *e*<sup>−</sup> is assumed to produce synchrotron radiation in a magnetic field of 112 μG and, simultaneously, inverse Compton scattering off soft photon fields, e.g., CMB photons. The agreement between the data and this simple model is impressive. However, some deviations of the LHAASO data from the model are observed above 50 TeV, at a level of 4*σ* [18]. This leaves the origin of the highest energy photons unclear: they could come from electrons or possibly from protons. Either possibility implies important outcomes, which should be clarified within 2 or 3 years. According to what has been learned from the observations using 3/4 of the LHAASO array, 1 or 2 photons at energies around 1 PeV are expected to be detected per year with the full-scale array. More importantly, the SED above 50 TeV will be measured much more precisely while we wait for the anticipated PeV photons.
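The one-zone picture can be illustrated with a back-of-the-envelope estimate (a sketch using the standard critical-energy formula, not the actual model fit of Ref. [18]): in a 112 μG field, PeV electrons radiate MeV-band synchrotron photons, while their inverse Compton scattering off CMB photons, deep in the Klein-Nishina regime, emerges near the electron energy itself, i.e., around a PeV.

```python
# Classical synchrotron critical energy for an electron of energy E_e in a
# magnetic field B, in CGS units.  B = 112 μG is taken from the text; the
# electron energy of 1 PeV is an illustrative choice.
HBAR = 1.0546e-27      # erg s
E_CHARGE = 4.8032e-10  # esu
M_E_C = 2.7309e-17     # electron mass times c, g cm/s
ERG_PER_EV = 1.6022e-12

def synchrotron_critical_energy_eV(e_e_eV, b_gauss):
    """E_c = (3/2) γ² ħ e B / (m_e c)."""
    gamma = e_e_eV / 511e3  # Lorentz factor
    e_c_erg = 1.5 * gamma**2 * HBAR * E_CHARGE * b_gauss / M_E_C
    return e_c_erg / ERG_PER_EV

# A 1 PeV electron in the nebular field of 112 μG:
e_syn = synchrotron_critical_energy_eV(1e15, 112e-6)
print(f"{e_syn / 1e6:.1f} MeV")  # → 7.4 MeV: MeV-band synchrotron photons
```

The same electrons that explain the MeV synchrotron bump are thus the natural candidates for the UHE inverse Compton photons, which is why a deviation above 50 TeV is so interesting: it may signal a second, possibly hadronic, component.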

### **7. Summary**

In this paper, the development of EAS particle detection arrays as *γ*-ray astronomical instruments over the past 30 years has been summarized. The performance of the instruments can be summarized in a plot of the integral sensitivities of their *γ*-ray detection as a function of *γ*-ray energy, as shown in Figure 4. Three generations of detectors using various techniques are clearly visible in the figure, according to the improvement of their sensitivity from 1 CU for the 1st generation to 0.01 CU for the 3rd generation, represented by LHAASO. It is quite clear that the WCD technique and scintillator detectors are very useful, especially in taking full advantage of high altitude sites. High altitude is indeed essential; however, since photons at energies above a few PeV are absorbed even within our galaxy, there is no need to go higher than about 4500 m a.s.l., where PeV showers may already reach the ground before developing to their maxima. After 30 years of exploration, EAS techniques have finally proven essential in the suitable energy range above a few tens of TeV, now that PeVatrons are suddenly being found everywhere in our Milky Way. Soon, we will witness a wave of discoveries concerning the acceleration and radiation mechanisms behind the PeVatrons, or even super-PeVatrons. The very preliminary investigation of the Crab Nebula using LHAASO, as a first attempt, has already shown many previously unknown, fascinating features of this very familiar celestial body. With a couple more years of data collection, the production of photons above 1 PeV may be clarified, perhaps with surprising discoveries. These are just the first observations in the UHE domain of *γ*-ray astronomy. Many more investigations of PeVatrons, discovered and to be discovered, are anticipated. Without doubt, another milestone has been passed on the way to finding the origins of high energy CRs.

**Figure 4.** Sensitivities of VHE and UHE *γ*-ray astronomical instruments as functions of *γ*-ray energy, E. The Crab Nebula SED in a log-parabola functional form (gray short-dashed lines) has been adopted from Ref. [15], and the power law extension has been normalized at 1 TeV to match the log-parabola. The ground based EAS experiments, AS*γ*+MD [19], HAWC [20], ARGO-YBJ [21] and MILAGRO [22], are represented by colored solid lines. The IACT experiments, CTA [23], VERITAS [24], HESS [25] and MAGIC [26], are represented by colored dotted lines. The 10-year sensitivity of FERMI-LAT [27] is represented by the gray solid line. The LHAASO sensitivity, represented by the pink solid line, is from Ref. [15]; the extension to low energies down to 50 GeV (dashed line) is newly estimated based on the upgraded deployment using 20-inch PMTs.

**Funding:** This research work was funded by the Chengdu Management Committee of Tianfu New Area. It was also supported by the National Key R&D Program of China under grants 2018YFA0404201, 2018YFA0404202 and 2018YFA0404203.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No data were used in this review paper.

**Acknowledgments:** The author would like to thank colleagues and friends in the various experiments in which he participated, namely AS*γ*, CASA-MIA, ARGO-YBJ and LHAASO, for all their support. The author also appreciates the proofreading and the efforts to improve the English presentation of the manuscript by Andrew J. Cao.

**Conflicts of Interest:** The author declares no conflict of interest.

### **References**


MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Universe* Editorial Office E-mail: universe@mdpi.com www.mdpi.com/journal/universe
