Article

Muographic Image Upsampling with Machine Learning for Built Infrastructure Applications

1 School of Physics & Astronomy, University of Glasgow, Kelvin Building, University Avenue, Glasgow G12 8QQ, UK
2 Lynkeos Technology Ltd., University of Glasgow, No. 11 The Square, Glasgow G12 8QQ, UK
* Author to whom correspondence should be addressed.
Particles 2025, 8(1), 33; https://doi.org/10.3390/particles8010033
Submission received: 30 January 2025 / Revised: 21 February 2025 / Accepted: 14 March 2025 / Published: 18 March 2025

Abstract:
The civil engineering industry faces a critical need for innovative non-destructive evaluation methods, particularly for ageing critical infrastructure, such as bridges, where current techniques fall short. Muography, a non-invasive imaging technique, constructs three-dimensional density maps by detecting the interactions of naturally occurring cosmic-ray muons within the scanned volume. Cosmic-ray muons offer both deep penetration capabilities due to their high momenta and inherent safety due to their natural source. However, the technology’s reliance on this natural source results in a constrained muon flux, leading to prolonged acquisition times, noisy reconstructions, and challenges in image interpretation. To address these limitations, we developed a two-model deep learning approach. First, we employed a conditional Wasserstein Generative Adversarial Network with Gradient Penalty (cWGAN-GP) to perform predictive upsampling of undersampled muography images. Using the Structural Similarity Index Measure (SSIM), 1-day sampled images were able to match the perceptual quality of a 21-day image, while the Peak Signal-to-Noise Ratio (PSNR) indicated a noise level equivalent to 31 days’ worth of sampling. A second cWGAN-GP model, trained for semantic segmentation, was developed to quantitatively assess the upsampling model’s impact on each of the features within the concrete samples. This model was able to achieve segmentation of rebar grids and tendon ducts embedded in the concrete, with respective Dice–Sørensen accuracy coefficients of 0.8174 and 0.8663. This model also revealed an unexpected capability to mitigate—and in some cases entirely remove—z-plane smearing artifacts caused by muography’s inherent inverse imaging problem. Both models were trained on a comprehensive dataset generated through Geant4 Monte Carlo simulations designed to reflect realistic civil infrastructure scenarios. Our results demonstrate significant improvements in both acquisition speed and image quality, marking a substantial step toward making muography more practical for reinforced concrete infrastructure monitoring applications.

1. Introduction

It has been widely established that the growing quantity of reinforced-concrete-based infrastructure nearing the end of its original intended service life poses a significant challenge. Current reflection-based near-surface detection techniques—Ground-Penetrating Radar (GPR) and ultrasonic echo measurements—are limited in scenarios with high concrete thickness and crowded imaging volumes. While X-ray planar tomography can provide high-resolution imaging at depth, its use is heavily restricted by radiation protection regulations, meaning that it goes largely unused. Consequently, there are no established Non-Destructive Testing (NDT) technologies capable of inspecting deep into concrete structures to determine the placement of steel rebar reinforcement grids and tendon ducts, as well as identifying defects, such as honeycombing, tendon duct strand corrosion, and air voiding.
Muon scattering tomography, an NDT technique which utilises naturally occurring high-energy (GeV) cosmic-ray muons as its imaging source, has shown promise in addressing this gap. Successfully deployed in applications such as nuclear waste characterization [1], cargo scanning [2], and industrial pipe inspection [3], muon scattering techniques have been more recently explored as a tool for civil engineering applications, including the inspection of bridges and other built infrastructure such as historical buildings [4]. A comparative study by Niederleithinger et al. [5] evaluated GPR, ultrasound, X-rays, and muography through the imaging of a 600 kg reference block of reinforced concrete, demonstrating that muography could identify the same features as benchmark X-rays while outperforming the resolution of GPR and ultrasound. However, challenges remain: muography requires days to weeks to produce an accurate tomograph, owing to the low muon flux (∼1 muon cm⁻² min⁻¹), compared to hours for GPR and ultrasound. Additionally, detector acceptance and limited vertical resolution caused by the inverse imaging problem also result in noisy images and shadowing artefacts from objects above and below the image plane.
While computationally complex statistical methods [6] and pattern recognition techniques, including density-based clustering [7] and support vector machines [8], have been proposed to enhance muographic imaging, they often fail to address the core issues of long imaging times, smearing effects, and noisy outputs. Adjacent fields, such as medical imaging, have overcome some of these issues by using Image-to-Image (I2I) machine learning for image reconstruction and enhancement for Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET) [9,10]. While I2I applications are used to reduce imaging times and doses in hospital imaging, machine learning is also deployed as a detection tool using segmentation to aid medical professionals in the detection of cancers, organs, and polyps [11].
We can apply similar machine learning techniques to our muography images; however, unlike medical imaging, there is a lack of sufficient real-world data, which are required for training models. To address this, the Monte Carlo modelling software Geant4 (version 11.3.0) [12,13,14], widely used in muography for proof-of-concept and feasibility studies, was employed. Geant4 allows for the simulation of the detectors, concrete blocks, and muons, bypassing the need for costly experimental setups and their associated lengthy integration times. Using Geant4 along with the Ecomug cosmic muon event generator [15], a realistic dataset of high- and low-sampled muographic images of concrete interiors was generated for model training. Preliminary results demonstrate that a conditional Wasserstein Generative Adversarial Network with Gradient Penalty (cWGAN-GP) model can effectively reduce noise and upsample features such as rebar grids and tendon ducts embedded in concrete. To analyse the perceptual improvements from upsampling, a second model was trained using the same architecture for a semantic segmentation task to enable pixelwise object detection, thus providing a quantified breakdown of the enhancements on each concrete feature as a result of the upsampling process. The models significantly reduced noise, smoothed image edges, and mitigated shadow artefacts caused by low vertical resolution. This work highlights the potential of machine learning in the post-processing of muographic images, addressing key challenges in the field. By enhancing the image quality and reducing imaging artifacts, these advancements make muography a more viable and attractive technology for investment and industry adoption.

1.1. Muographic Imaging for Built Infrastructure

Muon scattering tomography and muon absorption radiography—collectively referred to as muography—are next-generation imaging techniques that leverage naturally occurring, high-energy cosmic-ray muons. These offer a novel NDT solution that overcomes the poor depth resolution and safety concerns of existing methods. Absorption measurements utilise the attenuation of cosmic-ray muons as they pass through matter, with lower detection rates indicating higher density regions of the imaged volume. This technique only requires a single tracker (consisting of a minimum of two detection planes) adjacent to or behind the target volume. Therefore, it is suited to a wide variety of tasks, having been demonstrated on built infrastructure—such as railway tunnels [16], dams [17], and reactor buildings [18]—as well as larger structures such as volcanoes [19] and pyramids [20].
Scattering tomography (hereafter referred to as muography) examines the multiple Coulomb scattering of muons as they interact with atomic nuclei while traversing matter. This technique requires a minimum of four detector planes arranged as two pairs on either side of the Volume of Interest (VoI), which limits its practical applications, yet it remains preferred over absorption methods due to its superior spatial resolution and detail. By analyzing the variance of the muon scattering angles, the radiation length of the material along the muon’s path can be estimated. The true distribution of muon multiple scattering angles follows a Cauchy–Lorentz profile, but a Gaussian approximation can effectively describe 98% of the distribution, leading to the Rossi formula [21] for the standard deviation of muon multiple scattering angles:
$$\sigma_\theta \approx \frac{15\ \mathrm{MeV}}{\beta c p}\sqrt{\frac{L}{X_0}},\tag{1}$$
where $\beta c$ is the muon velocity (with $\beta \approx 1$), $L$ is the length of matter traversed by the muon, and $X_0$ is the radiation length of the material. The measurement of the momentum $p$ (in MeV/c) of a detected muon depends on the muography detector, with cost-effective detectors electing to use the average muon momentum (3–4 GeV/c) rather than measuring it. Simplifying Equation (1) for a non-momentum measurement, we find that the variance of the muon scattering angles is inversely proportional to the radiation length along the muon’s path:
$$\sigma_\theta^2 \propto \frac{L}{X_0}.\tag{2}$$
Positional detector hits can be used to calculate ingoing and outgoing muon vectors as they traverse the VoI, allowing the scattering angle to be determined. Using the above relationships, material composition can then be inferred from the scattering angles.
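For illustration, the sketch below evaluates Equation (1) for a muon of average momentum crossing the 200 mm thick concrete blocks used later in this study; the radiation length assumed for standard concrete (X₀ ≈ 11.5 cm) is a representative textbook value, not one taken from the simulations.

```python
import math

def rossi_sigma_theta(p_mev_c, length_cm, x0_cm, beta=1.0):
    """Standard deviation of the multiple-scattering angle (radians),
    from the Rossi formula (Equation (1))."""
    return (15.0 / (beta * p_mev_c)) * math.sqrt(length_cm / x0_cm)

# A 3 GeV/c muon crossing 200 mm of concrete (assumed X0 ~ 11.5 cm):
sigma = rossi_sigma_theta(p_mev_c=3000.0, length_cm=20.0, x0_cm=11.5)
print(f"sigma_theta ~ {sigma * 1e3:.1f} mrad")  # ~6.6 mrad
```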
Muography is inherently an inverse imaging problem: due to multiple scattering, it is impossible to infer the exact path taken by a muon through the VoI. Various reconstruction algorithms have been developed over the years—detailed reviews of which can be found in [22,23]—however, the Point of Closest Approach (PoCA) algorithm [24] remains the most widely used for its simplicity and computational efficiency. Like most reconstruction algorithms, the PoCA assumes one scattering event per muon, calculated as the angle between the incoming and outgoing muon vectors, with the scattering point placed halfway along the vector describing the distance of closest approach between these vectors. Due to the low muon flux and these algorithms’ neglect of multiple scattering, accurate tomographs require many muon events. Therefore, integration times on the order of days to weeks are required to gather enough muon events to statistically resolve objects embedded within thick structures such as concrete.
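A minimal sketch of the PoCA reconstruction described above is given below, assuming each track is supplied as a fitted point and direction vector from the detector hits; the function and variable names are illustrative rather than those of any published implementation.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of Closest Approach between incoming and outgoing muon
    tracks, each given as (point, direction). Returns the PoCA position
    (midpoint of the shortest segment between the two lines) and the
    scattering angle in radians."""
    d_in = d_in / np.linalg.norm(d_in)
    d_out = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b
    if denom < 1e-12:                       # near-parallel tracks: no scattering
        return 0.5 * (p_in + p_out), 0.0
    s = (b * e - c * d) / denom             # parameter along incoming line
    t = (a * e - b * d) / denom             # parameter along outgoing line
    closest_in = p_in + s * d_in
    closest_out = p_out + t * d_out
    theta = np.arccos(np.clip(d_in @ d_out, -1.0, 1.0))
    return 0.5 * (closest_in + closest_out), theta
```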

1.2. Convolutional Machine Learning Techniques for Image Processing

Supervised machine learning has emerged as a powerful data-driven tool capable of constructing models that are trained to generate outputs $\hat{y}$, which approximate ground truths defined as $y = f(x)$. Machine learning is often only applied when the problem is too complex for conventional methods to solve, as it is able to identify obscure patterns and structures inherent to the dataset.
Model training consists of two key steps: the forward pass and the backward pass. During the forward pass, inputs $x$ are passed through the parameters of the model to produce predictions $\hat{y}$. The accuracy of these predictions—measured by their similarity to the ground truth $y$—is quantified by a differentiable loss function $\mathcal{L}(y, \hat{y})$, which evaluates the error between the predicted and actual outputs. This is followed by the backward pass, where the gradient of the loss function with respect to each model parameter is calculated. These gradients are then used by an optimization function to update model parameters accordingly. This process is repeated thousands of times during training, with model parameters being iteratively adjusted to minimise the global loss and achieve $\hat{y} \approx y$.
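The forward and backward passes map directly onto a few lines of a modern deep learning framework; the sketch below shows one training epoch in PyTorch (the framework choice is our assumption, as the paper does not name its software stack).

```python
import torch

def train_epoch(model, loader, loss_fn, optimiser):
    """One epoch of supervised training."""
    model.train()
    for x, y in loader:              # paired inputs and ground truths
        y_hat = model(x)             # forward pass: predictions
        loss = loss_fn(y_hat, y)     # differentiable loss L(y, y_hat)
        optimiser.zero_grad()
        loss.backward()              # backward pass: parameter gradients
        optimiser.step()             # optimiser updates the parameters
```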
A trained model can, at best, only generalise to the dataset on which it was trained; therefore, it is crucial that the dataset accurately reflects the inputs that the fully trained model is expected to encounter. A well-trained model relies on a dataset that is representative of unseen data: diverse, expansive, and sufficiently large. Too small or narrow a dataset leads to underfitting, where the model fails to capture necessary features, resulting in poor performance on new data. Conversely, overfitting is often caused by an overly complex model which learns the specifics of the training dataset well, thus failing to generalise to unseen data. However, we can penalise overfit models so that they are more generalised—called regularisation—using common techniques such as non-linear activation functions, batch normalization, and dropout. Model performance is assessed on an unseen dataset (the validation dataset) for continued assessment of model generalization over the course of training, which is used to assess and guide hyperparameter tuning. Once training is complete, the fully trained model is assessed on the test dataset—independent of the training and validation sets—to compare the performance of different models and architectures.
There are two primary mechanisms that can be used within image processing: convolution and attention. Convolutions have long been a cornerstone of image processing, even before the rise of modern machine learning, enabling the extraction of localised features from image data. These methods rely on predefined kernels to generate feature maps from input images. For instance, Sobel filters [25] are commonly used to detect vertical and horizontal edges in images. Convolutional Neural Networks (CNNs) build upon this concept by using learnable kernels consisting of weight parameters which are tuned during training to extract intricate features and patterns from the data. CNNs typically consist of cascading convolutional layers, each containing multiple filters which act on the feature maps produced by the previous layers. This iterative process enables CNNs to learn increasingly abstract and detailed features, making them highly effective for a wide range of image processing tasks, including denoising, segmentation, classification, object detection, super-resolution, data fusion, and image synthesis. However, CNNs have a fundamental limitation: their small receptive field. The information accessible to any one component is restricted to the spatial extent of the square convolutional kernel, which is typically sized between 3 × 3 and 7 × 7 pixels. Consequently, CNNs can only process localised areas of the input image at a time. In contrast, attention mechanisms can capture a global context, enabling the model to account for long-range dependencies across an image. Despite their advantages, attention mechanisms are more complex than CNNs and, given their relative recency, are less tested and explored in the imaging domain. For this initial study on the application of machine learning to muography, the focus will remain on well-established CNN architectures due to their proven reliability and effectiveness in the image processing domain.
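As a concrete example of the fixed-kernel convolutions that CNNs generalise, the snippet below applies a Sobel kernel to an image; in a CNN, the kernel weights would instead be learned during training. This is a sketch in PyTorch, which is our assumed framework.

```python
import torch
import torch.nn.functional as F

# Fixed (non-learnable) Sobel kernel for vertical-edge detection;
# a CNN replaces such hand-crafted kernels with learned weights.
sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)

image = torch.randn(1, 1, 500, 500)          # a single-channel 500 x 500 image
edges = F.conv2d(image, sobel_x, padding=1)  # feature map of vertical edges
```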

The Conditional Generative Adversarial Network (cGAN)

An encoder–decoder network is a widely used CNN methodology. The encoder aims to learn the features that should be extracted from the input image, while the decoder aims to build the feature maps back up into an output image, enhancing relevant features and reducing redundant information. However, this is a lossy process: information deemed redundant by the network is discarded, resulting in a loss of finer detail.
In order to address the loss of detail in encoder–decoder convolutional networks, a 2015 paper by Ronneberger et al. [26] proposed a new architecture called a U-Net. They solved the problem of information loss by introducing skip connections between the encoder and decoder layers, cropping feature maps from the encoder and concatenating them to correspondingly sized feature maps in the decoder. This allows minor details to be preserved, producing a higher quality output image. This architecture is widely used in image-to-image translation tasks [27] due to its simplicity when compared to alternative designs.
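A minimal single-level U-Net sketch illustrates the skip connection: encoder features are concatenated onto the upsampled decoder features so that fine detail bypasses the lossy bottleneck. This is a simplified illustration only; the generator used in this work has five encoder and decoder blocks (see Section 2.2).

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net sketch: one encoder level, a bottleneck, and one
    decoder level, with a skip connection concatenating encoder features
    into the decoder."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(ch, ch, 2, stride=2)
        self.dec = nn.Conv2d(2 * ch, 1, 3, padding=1)  # 2*ch: skip concatenated

    def forward(self, x):
        e = self.enc(x)                             # encoder feature map
        m = self.mid(self.down(e))                  # bottleneck
        u = self.up(m)                              # upsample to encoder size
        return self.dec(torch.cat([u, e], dim=1))   # skip connection
```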
A conditional Generative Adversarial Network (cGAN) consists of a generator that produces outputs and a discriminator that moderates these outputs during training, yielding more accurate results than a standalone U-Net generator. Whereas cGANs are conditioned on an input, GANs by contrast are unsupervised generative models that aim to create data from a source of random noise. A standard GAN comprises a generator, which transforms the noise into an output, and a discriminator, which evaluates whether that output is generated data or part of the real dataset. The generator’s goal is to produce data that are indistinguishable from the real data, while the discriminator strives to correctly identify whether the data are real or fake. This dynamic creates an adversarial process, with the generator attempting to ‘fool’ the discriminator and the discriminator learning to better distinguish real from fake data.
The back-and-forth continues during training until a Nash equilibrium is reached—a state in game theory where neither player (the generator or the discriminator) can improve their position without affecting the other. In this context, Nash equilibrium is achieved when the generator produces outputs that are realistic enough that the discriminator cannot reliably differentiate between real and fake data and is forced to guess. Despite their powerful capabilities, GANs are notoriously difficult to train, often suffering from issues such as mode collapse (where the generator produces limited variability) or imbalance between the generator and discriminator (where one dominates the training process). To mitigate these challenges, techniques like the Wasserstein GAN and gradient penalty are frequently employed, improving training stability and convergence.

2. Materials and Methods

Training a supervised machine learning model to upsample muography images of concrete blocks requires a large dataset of paired images. Therefore, we must first create a dataset that contains different sizes, orientations, and types of rebar grids and tendon ducts, as well as a variety of different equivalent sampling times. Spherical air voids were also added to the geometry design in order to assess detection ability—as a precursor to a complementary study of defects—due to their simplicity and small size.
The design of the Geant4 simulations to produce a large and varied dataset of 2D muography images from 700 unique concrete block designs is outlined below in Section 2.1. Each concrete block was simulated using 100 days’ worth of muons, resulting in a dataset of 70,000 images of 500 × 500 pixels, each with 100 different versions corresponding to equivalent sampling times of 1 day to 100 days. The total image dataset therefore contained seven million images, requiring over a trillion muon events to be simulated. Simulating a maximum equivalent sampling time of 100 days was arbitrary but intentionally overestimated for this initial study to ensure sufficient sampling for a clear ground truth; by comparison, the concrete block in Niederleithinger et al. [5] was imaged for only 50 days. These data were used to train the cWGAN-GP models (outlined below in Section 2.2) using the 100-day image as the ground truth, with the input randomly selected from one of the equivalent sampling times of 1–99 days to ensure generalization across sampling times.
One challenging task was to assess the perceptual quality of the output images. While there is a wide variety of metrics that can be used to assess the global image quality of outputs, such as the Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR), these metrics do not give a detailed breakdown of how the model is affecting the perception of features, such as rebar grids and tendon ducts. If we want to know how the upsampling model is affecting the perception of features within the concrete, we must find a way of separating features from one another. A solution to this problem is to train a second model to perform semantic segmentation, a machine learning technique used to classify pixels within an image and assign them to a specific class. This means that each pixel in the muographic image is assigned to either concrete, rebar, tendon duct, air void, or ‘unknown’. ‘Unknown’ objects were added as the fourth and final object type so that the segmentation model could assign a class to miscellaneous objects it deemed not to fall into any of the other classes. The shape, size, and materials used for ‘unknown’ objects are less constrained; hence, a variety of shapes and densities were used to define them. To train a model to produce segmentation maps, a set of images detailing the ‘true’ object placement of features is required. Due to the use of Geant4 Monte Carlo simulations, ‘true’ geometry information is available to generate the segmentation ground truths. For simplicity, all components of the tendon ducts were assigned to one class.
Assessing the performance of segmentation models is much less ambiguous than that of an I2I task like denoising or upsampling, as each class can be split up into multiple binary masks from which to calculate metrics. One of the most popular metrics for the assessment of segmentation models is the Dice–Sørensen coefficient [28,29] (hereafter referred to as Dice), which is used to gauge the similarity between two sets of data:
$$\text{Dice} = \frac{2TP}{2TP + FP + FN},\tag{3}$$
where $TP$ is the number of true positive pixels, $FP$ the number of false positive pixels, and $FN$ the number of false negative pixels in a given binary image.
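Equation (3) translates directly into a per-class computation once a multi-class label map is split into binary masks, as sketched below (the small epsilon guarding against empty classes is our addition).

```python
import numpy as np

def dice(pred_mask, true_mask, eps=1e-8):
    """Dice-Sorensen coefficient for one binary class mask (Equation (3))."""
    tp = np.logical_and(pred_mask, true_mask).sum()
    fp = np.logical_and(pred_mask, ~true_mask).sum()
    fn = np.logical_and(~pred_mask, true_mask).sum()
    return 2 * tp / (2 * tp + fp + fn + eps)

def per_class_dice(pred_labels, true_labels, class_ids):
    """Split a multi-class label image into one binary mask per class ID
    and score each class separately."""
    return {c: dice(pred_labels == c, true_labels == c) for c in class_ids}
```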

2.1. Simulating a Dataset

The detector system used in the simulations was that of the Muon Imaging System (MIS) in place at the University of Glasgow, which is a scintillating-fibre tracking system [30]. It consists of four detection modules, each consisting of two orthogonal planes of 1024 polystyrene-based plastic scintillating fibres with a 2 mm pitch and polymethylmethacrylate optical cladding, coupled to Hamamatsu H12700A multi-anode Photomultiplier Tubes (PMTs). When a muon passes through a fibre, it deposits energy, causing photoluminescence which is detected by the PMTs. A muon track was recorded when coincident muon hits were detected in all four detector planes. The MIS has an imaging area of 1066 mm × 1066 mm and was set up with a vertical spacing of 530 mm between the upstream and downstream modules, between which the concrete block samples of 1000 mm × 1000 mm × 200 mm were placed. Within this block, the four different object types were placed: rebar grids, tendon ducts, air voids, and ‘unknowns’. The subsections immediately below outline the parameters which constrained individual object design and placement by a concrete interior randomisation algorithm. Object parameters were first generated—through sampling from uniform distributions—and the objects then placed in the order rebar–duct–void–unknown, where objects were only placed if they fit within the boundaries of the concrete block and did not overlap other objects. In order to preserve the uniform distribution of the number of objects per volume, overlapping objects had their parameters re-sampled until they were successfully placed or the number of placement attempts exceeded 1000.
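The placement logic described above amounts to rejection sampling; a schematic version follows, where `sample_object`, `fits`, and `overlaps` are hypothetical helpers standing in for the geometry-specific parameter draws and collision checks.

```python
def place_objects(sample_object, fits, overlaps, n_objects, max_attempts=1000):
    """Rejection-sampling placement sketch: draw object parameters from
    uniform distributions, keep an object only if it fits inside the block
    and does not overlap anything already placed; re-sample otherwise."""
    placed = []
    for _ in range(n_objects):
        for _ in range(max_attempts):
            candidate = sample_object()  # parameters ~ Uniform(...)
            if fits(candidate) and not any(overlaps(candidate, o) for o in placed):
                placed.append(candidate)
                break                    # success: move on to the next object
    return placed
```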
This setup was modelled in Geant4, where Ecomug was used as the cosmic-ray event generator. A generating plane was placed directly above the detector, from which only muons were generated at randomised positions. The event generator was configured with its default settings, where the angular distribution was defined with a zenith angle $\theta$ spanning from 0 to $\pi/2$ and an azimuthal angle $\phi$ ranging from 0 to $2\pi$. The charge ratio of positive to negative muons was set to 1.28, and the generated muons followed a momentum distribution ranging from 10 MeV/c to 1 TeV/c.
A total of 700 concrete samples were each imaged with 100 days’ worth of muons, with the resultant vector pairs processed using the PoCA to calculate scattering angles. The resultant volume of scattering angles was subsequently voxelised into 2 mm cubes. Image slices of the X–Y plane for each voxelised geometry were saved at intervals of one day of sampling. This resulted in the creation of 70,000 unique images, each with 100 different versions reflecting each cumulative day of sampling.
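Schematically, the voxelisation step bins the PoCA scattering points into 2 mm cubes; the sketch below stores the mean squared scattering angle per voxel. The paper does not specify the exact per-voxel statistic, so this choice, motivated by Equation (2), is an assumption.

```python
import numpy as np

def voxelise(points, angles, extent=((0, 1000), (0, 1000), (0, 200)), voxel=2.0):
    """Bin PoCA points (N x 3 array, mm) into cubic voxels and store the
    mean squared scattering angle per voxel; each z-layer is one X-Y image."""
    bins = [np.arange(lo, hi + voxel, voxel) for lo, hi in extent]
    sum_sq, _ = np.histogramdd(points, bins=bins, weights=angles ** 2)
    counts, _ = np.histogramdd(points, bins=bins)
    mean_sq = np.where(counts > 0, sum_sq / np.maximum(counts, 1), 0.0)
    return mean_sq  # shape (500, 500, 100); mean_sq[:, :, k] is one slice
```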

2.1.1. Carbon-Steel Rebar Grids

The cylindrical steel rods that make up rebar grids were defined in Geant4 to have a uniform density of 7.84 g·cm⁻³, with a typical chemical composition within the limits of British Standard grade 500B BS 4449. The diameters of these rods were kept the same for all rods within the same grid and were chosen from the standard rebar sizing diameters of 8, 10, 12, 16, 20, and 25 mm. The square grid spacing between rods was also randomised (100 mm, 150 mm, 200 mm, or 250 mm), with a random number of rods—between 2 and 12—in the X or Y direction. Between one and four different rebar grids were positioned parallel to the X–Y plane for each concrete block configuration. The minimum of a single grid here ensured that each configuration had at least one feature.

2.1.2. Tendon Ducts

Tendon ducts consisted of large cylindrical objects that were placed such that they spanned the 1000 mm volume in either the X–Z or Y–Z planes. They contained three subcomponents: casing, grout, and steel strands. The casing material was randomly chosen between a 0.5 mm thick galvanised carbon-steel casing (7.97 g·cm⁻³), 3 mm thick High Density Polyethylene (HDPE) with a density of 0.94 g·cm⁻³, or 3 mm thick High Density Polypropylene (HDPP) with a density of 0.90 g·cm⁻³. The casing diameter was sampled from 50, 60, 70, 80, 90, and 100 mm, a value which also determined the number of high tensile steel strands (density of 7.85 g·cm⁻³) contained within the duct. The grout which filled the space inside the casing had a bulk density of 1.30 g·cm⁻³, using British Standard Grade C40/50 BS 8500-1.

2.1.3. Air Voids

Air pockets caused by improper concrete pouring are a defect that can reduce the structural integrity of reinforced concrete [31]. Due to the simplicity of this defect, spherical air voids were placed in the concrete in order to gauge muography’s resolving capacity ahead of a future study. Each volume contained 0–3 spherical air voids, with diameters ranging from 10 to 100 mm.

2.1.4. ‘Unknown’ Objects

Between zero and two ‘unknowns’—objects that would not be found in reinforced concrete—were added to each volume as a control, providing variety of shape and density for model generalization. These consisted of boxes, cylinders, and spheres which did not exceed a 3D bounding box with edges limited between 35 mm and 75 mm. The materials of these objects were randomly selected from water, aluminium, iron, lead, and uranium, and the objects were randomly orientated upon placement in the volume.

2.2. Pix2pix with WGAN-GP

The pix2pix model is a conditional Generative Adversarial Network (cGAN) that utilises a U-Net for its generator and a PatchGAN for its discriminator [32]. In the base pix2pix cGAN model, the generator minimises a combined loss:
$$\mathcal{L}_G = \mathcal{L}_{\text{adv}} + \lambda_{\text{pixel}} \cdot \mathcal{L}_{\text{pixel}},\tag{4}$$
where $\mathcal{L}_{\text{pixel}}$ is the pixelwise loss function of the generator, $\lambda_{\text{pixel}}$ is the weighting of the pixelwise loss, and $\mathcal{L}_{\text{adv}}$ is the adversarial loss based on binary classification in the discriminator (real/fake). This binary classification provides the generator with little information on how close the distribution of generated images is to the true distribution. To better quantify this difference, we can use the Wasserstein distance, also known as the Earth mover’s distance [33]. This measures the performance of the discriminator, increasing the smoothness of gradients and, importantly, providing informative feedback for training the generator. Unlike binary classification, the Wasserstein distance is computed using the Kantorovich–Rubinstein duality, which restricts the discriminator to be a 1-Lipschitz function. This constraint ensures that the discriminator’s gradients are stable and meaningful, which is important for the generator to learn effectively.
The gradient penalty is the preferred solution for satisfying the Lipschitz constraint in WGANs [34]. It directly penalises the gradients of the discriminator to ensure they remain close to 1. Specifically, the gradient penalty is computed as follows:
$$\mathcal{L}_{\text{GP}} = \mathbb{E}_{\hat{x}}\left[\left(\left\lVert \nabla_{\hat{x}} D(\hat{x}) \right\rVert_2 - 1\right)^2\right],\tag{5}$$
where $\hat{x}$ represents interpolated samples between real and generated data, and $D(\hat{x})$ is the output of the discriminator. By applying this penalty, the discriminator is constrained to satisfy the 1-Lipschitz condition, ensuring stable gradients and improving the overall training stability. This leads to smoother, more informative gradients for the generator, enabling it to produce higher quality images. In the case of WGAN-GP, the generator’s loss function, $\mathcal{L}_G$, is updated to include the gradient penalty:
$$\mathcal{L}_G = \mathcal{L}_{\text{adv}} + \lambda_{\text{pixel}} \cdot \mathcal{L}_{\text{pixel}} + \lambda_{\text{GP}} \cdot \mathcal{L}_{\text{GP}},\tag{6}$$
where $\mathcal{L}_{\text{adv}}$ is the adversarial loss based on the Wasserstein distance, $\mathcal{L}_{\text{pixel}}$ is the pixelwise loss, and $\mathcal{L}_{\text{GP}}$ is the gradient penalty term, weighted by $\lambda_{\text{GP}}$.
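Equation (5) is straightforward to implement; a PyTorch sketch follows, with the conditioning input omitted for brevity (in the conditional setting, the input image would also be passed to the discriminator alongside the interpolated sample).

```python
import torch

def gradient_penalty(discriminator, real, fake):
    """WGAN-GP gradient penalty (Equation (5)): penalise the discriminator's
    gradient norm at random interpolations between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(outputs=d_out, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)  # per-sample L2 norm
    return ((grad_norm - 1) ** 2).mean()
```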
The U-Net generator architecture remains similar to pix2pix, with five encoder and decoder blocks each consisting of 4 × 4 convolution—batch normalization—ReLU activation. The upsampling model’s generator pixelwise loss uses the mean absolute error (MAE or L1) loss:
$$\mathcal{L}_{L1} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{y}_i - y_i\right|,\tag{7}$$
where $N$ is the number of samples, and $\hat{y}_i$ and $y_i$ are the predicted value and ground truth for the $i$th sample. The segmentation model uses a custom pixelwise loss function of an evenly weighted cross-entropy loss and Dice loss (which is one minus the Dice score),
$$\mathcal{L}_{\text{custom}} = \frac{1}{2}\mathcal{L}_{\text{CE}} + \frac{1}{2}\mathcal{L}_{\text{Dice}},\tag{8}$$
which balances pixelwise prediction accuracy (via cross-entropy) and structural similarity (via Dice). Since the background class (concrete) makes up a significant proportion of all pixels, the model is biased to focus on these pixels. Therefore, the Dice loss calculation excludes this class, consisting of evenly weighted contributions from the four object classes: rebar grids, tendon ducts, air voids, and ‘unknowns’. Both models were trained for 100 epochs (one epoch is a full iteration of the training set), with a batch size of 48. The upsampling model training set had each of its input images re-sampled from one of its 100 versions at the start of each epoch in order to achieve generalization to different sampling times. The upsampling model generates outputs $\hat{y}$ that approximate the expected outputs $y$ of 100 days of sampling. We therefore trained the segmentation model to take inputs $y$ instead of upsampled images $\hat{y}$, as we aimed to assess the performance of the $\hat{y}$ outputs without introducing any bias from the upsampling process. An initial learning rate of 0.01 was used for the generator and 0.001 for the discriminator, after which a step scheduler was used to steady model convergence, reducing the learning rates by an order of magnitude every 25 epochs.
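A sketch of the custom loss in Equation (8) is given below, excluding the concrete background class from the Dice average as described; the soft (probability-weighted) Dice formulation is our assumption, as the paper does not detail how the Dice term is made differentiable.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, background_class=0, eps=1e-6):
    """Evenly weighted cross-entropy + Dice loss (Equation (8)).
    logits: (B, C, H, W); target: (B, H, W) integer labels. The Dice term
    averages the object classes only, excluding the dominant background."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    dice_terms = []
    for c in range(logits.size(1)):
        if c == background_class:
            continue                         # skip the concrete background
        p, t = probs[:, c], (target == c).float()
        inter = (p * t).sum()
        dice_terms.append(1 - (2 * inter + eps) / (p.sum() + t.sum() + eps))
    return 0.5 * ce + 0.5 * torch.stack(dice_terms).mean()
```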

3. Results

3.1. Upsampling Model Results

The upsampling model aims not only to enhance the perceptual characteristics of low-quality muographic images but also to reduce image noise—a key issue identified in [5] when comparing muography to other NDT techniques. To evaluate the quality difference between images, we used two common computer vision metrics: the Structural Similarity Index Measure (SSIM) [35], which assesses perception, and the Peak Signal-to-Noise Ratio (PSNR), which is widely used in signal processing and imaging to quantify image noise. These metrics provide insights not only on the upsampling model’s performance but also on the quality of the input muography images as they are sampled over time.
Utilizing the different equivalent sampling time versions of each image in our dataset, we used the aforementioned metrics to look at (a) the effect of the muography sampling time with respect to the image quality and (b) the performance of the upsampling model on different sampling times. We averaged the SSIM and PSNR metrics of the 6900 images in the test dataset—calculated with respect to the 100-day ground truth—for both the inputs and outputs of the model at each sampling time. The results are shown in Figure 1. We see that the 1-day upsampled images exhibited SSIM and PSNR scores of 0.88 and 37 dB, respectively. This is equivalent to the perceptual quality of a 21-day unaltered image and the noise quality of a 31-day unaltered image. However, improvements diminished as the sampling time increased, with the input and upsampled images converging in SSIM and PSNR at equivalent sampling times of 55 and 85 days, respectively. Visual inspection of the images before and after upsampling confirmed the improvements indicated by the SSIM and PSNR metrics, even for the lowest quality images—those from the 1-day equivalent sampling time.
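The per-sampling-time averages in Figure 1 can be reproduced schematically as below, here using the scikit-image implementations of the SSIM and PSNR (the actual implementation used in this work is not stated, so this library choice is an assumption).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def average_metrics(inputs, ground_truths):
    """Mean SSIM and PSNR of a set of images against their 100-day
    ground truths, as plotted per sampling time in Figure 1."""
    ssim, psnr = [], []
    for img, gt in zip(inputs, ground_truths):
        rng = gt.max() - gt.min()
        ssim.append(structural_similarity(gt, img, data_range=rng))
        psnr.append(peak_signal_noise_ratio(gt, img, data_range=rng))
    return np.mean(ssim), np.mean(psnr)
```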
Demonstrating these drastic improvements, Figure 2 compares five 1-day undersampled X–Y plane images to their respective outputs from the upsampled model, as well as their associated 100-day ground truth values for reference.
These results confirm that the upsampling model does an excellent job of identifying feature patterns across all sampling times to create clearer and more refined representations of undersampled muography images.

3.2. Segmentation Model Results

Using standard evaluation metrics to compare the quality of two images gives a limited interpretation of how the model improves the perception of the features within the muographic images. Additionally, insufficient feature distinction results in the background pixels (attributed to concrete) in the images distorting the evaluation metrics by overwhelming the signal from other classes. In order to evaluate the effect of the upsampling model on each feature type, we trained a second model to perform semantic segmentation of these images, where each pixel in the image is classified as one of the five predefined object labels: concrete, rebar grid, tendon duct, air void, or unknown. If we were using experimental data, we would have to manually label images to retrieve ground truths; however, as we were using Geant4 simulations, the ‘true’ position and composition of each object is known, so we used this to generate our ground truth segmentation images. The segmentation model serves as an independent evaluation metric to assess the performance of the upsampling model. To minimise potential bias and maintain objectivity, this model was trained exclusively on high sampling time (100-day) muography images, deliberately excluding upsampler outputs from the training process. The model performance was quantified using the Dice coefficient, which measures the overlap between pixelwise label predictions and ground truth values. The five-class segmentation labels were one-hot encoded into discrete binary masks, enabling class-specific overlap calculations. By applying the segmentation model to both original and upsampled muography images, we could assess the upsampler’s effect on each structural class. A comparative analysis of Dice coefficients between original and upsampled images, focusing on the four non-background structural classes, is presented in Figure 3.
The accuracy of the segmentation model for each class is indicated by the performance of the 100-day dataset inputs, showing Dice coefficients for rebar grids, tendon ducts, air voids, and ‘unknowns’ of 0.8173, 0.8663, 0.1265, and 0.5132, respectively. From the Dice coefficient difference subplots (in purple), we observe that the use of the upsampling model resulted in an excellent improvement in the identification of tendon ducts, a good improvement for rebar and ‘unknowns’, and little improvement for air voids. We observe trends similar to those of the SSIM metric in Figure 1, where the upsampling model was most effective at lower sampling times before converging with scores similar to those of the inputs at around 50–70 days of sampling.
We can also perform a visual inspection to obtain a better picture of how the segmentation model is performing, here using an image slice which is representative of model performance on the test dataset. Each of the four panels in Figure 4 shows eight versions of a single X–Y plane: seven at different equivalent sampling times (a–g) and one as the ground truth (h). The four panels show the original image, the outputs of the upsampling model, the outputs of the segmentation model, and the outputs from a concatenation of both models. The upsampled segmentation results (lower right) show a significant improvement at low sampling times in panels (a) and (b), aligning with trends shown by the Dice coefficients in Figure 3. A key feature of the segmentation outputs is the absence of y-aligned rebar.
Looking at the segmentation ground truths in panel (h), we see that the segmentation model was correct to ignore the y-aligned rebar suggested by the muography images. This shows that the segmentation model identified the y-aligned features as a shadowing effect, hence overcoming the z-smearing limitation of muography and ultimately improving the reliability of muography as a technique for accurate concrete analysis.

4. Discussion

The results presented in this study demonstrate promising advancements in applying machine learning techniques to muography image enhancement and analysis. Our findings show that cWGAN-GP models can effectively reduce required sampling times while maintaining image quality, with particularly strong performance in feature detection for larger structural elements. The following subsections examine the model performance in detail, addressing both its capabilities and limitations for both upsampling and segmentation tasks.

4.1. Image Upsampling Capabilities

The upsampling model effectively improved image quality, as demonstrated by both the SSIM and PSNR metrics in Figure 1. Looking at the input images (blue circular points), both metrics showed rapid improvement for equivalent sampling times between one and 35 days. The SSIM followed a logarithmic trend that aligns with the feature detection performance shown in Figure 3 for rebar, tendon ducts, and ‘unknown’ objects, reflecting its design to quantify perceptual similarity. In contrast, the PSNR, which is calculated as a logarithm of the mean squared error (MSE), shows a different behaviour: after the initial 30-day period, it increased linearly until approximately 75 days before tending to infinity. This difference arises because the PSNR is more sensitive to pixel-level variations, while the SSIM captures perceptual changes. The combined analysis suggests that 30–40 days of sampling is sufficient to resolve large-scale features and reduce noise, with additional sampling time primarily improving pixel-level detail that may not significantly impact image perception.
The upsampled outputs (red triangular points) exhibited systematically higher SSIM and PSNR values compared to the original inputs at lower sampling times while following similar overall trends. However, this advantage diminished as the sampling time increased, with upsampled and original images showing identical SSIM performance at around 50 days—the same point at which the corresponding input data (blue circular points) began to show significant diminishing returns. This convergence can be understood from the data processing used: each image pixel value corresponds to a specific voxel in its z-plane layer, with longer sampling times allowing more muon hits per voxel. As the sampling time increased, these voxel values gradually approached their ‘true’ values, and the 50-day convergence point suggests sufficient sampling had been achieved to closely approximate these true values. The PSNR of the upsampled images converged with that of the corresponding input images much later, at around 80–85 days, before the input metrics began to outperform the upsampled ones. This was likely caused by the lack of reconstruction of exact pixel-by-pixel noise variations present in the 100-day images; for example, as shown in Figure 2, the edges and corners exhibited some noise fluctuation that was correspondingly smoother in the upsampled 1-day image.
Visual inspection of the images in Figure 2 and the top panels of Figure 4 confirms the results of the dataset-wide analysis. At low sampling times, the noise in the input images was significant enough to drown out most human-perceivable detail, compounded by low or non-existent counting statistics around image edges caused by detector acceptance. However, their upsampled counterparts exhibited drastic feature enhancement, combined with a significant reduction in noise, to output a smooth but detailed image. While the upsampling of 1-day images demonstrated remarkable improvement, some limitations remain evident—particularly in regions of very low statistics where noise fluctuations dominated. For example, gaps appearing in rebar grids remained unfilled, despite our dataset containing only intact grids, due to the limited context window (4 × 4 pixels) of the convolutional approach.

4.2. Feature Segmentation Performance

Building on the image quality metrics discussed above, segmentation analysis provides deeper insight into how the upsampling model affects different structural features within the concrete samples. By examining the Dice scores for each object type, we can quantitatively assess how well geometric features are preserved and enhanced through the upsampling process.
Figure 3 shows a breakdown of the average segmentation performance for each object feature as a function of equivalent sampling time. The rebar segmentation scores performed well, but early convergence occurred between the original and upsampled images at around 20 days—notably earlier than the convergence points for the SSIM, PSNR, tendon ducts, and ‘unknowns’. Also of note is that between 20 and 85 days, the original images slightly outperformed the upsampled ones, suggesting the upsampling model was degrading rebar features. Early convergence and outputs being outperformed by inputs suggest the model was washing out some rebar features. This degradation likely stems from the varying dimensions of rebar in our dataset: rod diameters range from 8 to 25 mm, with the thinnest rebar being only four pixels wide at 2 mm pixelization. The model may be struggling to preserve thin rebar features or mistaking them for shadows of larger ones. The cylindrical grid structure of the rebar compounds this challenge, as segmentation in the X–Y plane alone makes edge detection particularly difficult. Visual inspections of samples containing thin rebar (8–10 mm) revealed poor segmentation performance; however, a comprehensive breakdown would be needed to confirm whether this is the true source of the issue.
The tendon ducts performed very well for the input data; additionally, the integration time required to achieve the same results with the upsampling model was much improved. This high performance is attributed to the large diameter and pixel volume of the objects. Visual inspection of the dataset also revealed that the tendon ducts tended to be placed nearer the edges in low-statistics regions, also contributing to a high relative performance gain—the large size of the objects means that they are easily detectable through the high noise. The placement of these ducts near the edges is caused by the concrete sample placement logic (they are always placed after any rebar grids, which on average tend to occupy the centres of blocks). Bundling all three tendon duct subcomponents into one class will also have resulted in better object identification.
The air voids were very poorly segmented. The primary causes are their challenging detection in scattering tomographs due to low scattering statistics and their small spherical nature, which constitutes a low pixelwise percentage of the total dataset. Visual inspection of the muography images revealed that most voids greater than 50 mm in diameter can be perceived. For larger voids at least, the low Dice scores seem to stem from class imbalance. During training, the Dice component of the loss function (Equation (8)) was calculated as the mean of the four object components. However, due to the class imbalance—most notably in the case of air voids—the model is not incentivised to improve the correct detection of these pixels. Their low representation in the dataset means they contribute minimally to the loss function, preventing significant improvement in their segmentation. This issue can be addressed by setting custom Dice class weightings such that they reflect the proportion of pixels that each class occupies in the test dataset.
The performance of the ‘unknown’ object class was generally satisfactory but leaves room for improvement. This class is particularly challenging to analyse due to its significant variety and density variations. High-density materials, such as uranium and lead, tend to exhibit greater smearing, which can negatively impact Dice scores by increasing false positives. Conversely, low-density materials, such as air and water, also contribute to lower Dice scores due to their propensity for false positives. The purpose of this class is to provide the model with a miscellaneous category for objects that do not fall into the predefined categories of rebar, tendon ducts, or air voids. Further analysis of this class, broken down by material composition and size, would provide a clearer understanding of how objects with varying densities perform. Additionally, evaluating models using objects with unseen densities and shapes may provide better insight into the success of this class’s objective.
Visual representations of the X–Y plane segmentation outputs are shown in Figure 4, revealing several challenges. From the noisy original 1-day image, the model struggled to produce any form of accurate segmentation map—note that it predicted tendon duct pixels around the edges; the model was ‘guessing’ that tendons exist in low-sample edge regions. Low-sampled upsampled images (panels (a) on the right hand side), which were absent of high noise fluctuations, showed much better performance; however, the model seemed to struggle to segment all rows of rebar despite them being clearly visible. This lack of spatial context limits the model’s ability to accurately interpret the full geometry of the objects (in complex structures like cylindrical tendon ducts and rebar grids), where straight-edge detection and thicknesses are slightly lacking. In addition to these issues, there are a small number of artefacts exhibited at the top of the images where the shadow of an x-aligned tendon duct is present. Many of these issues are caused by the limited context size of the 4 × 4 convolutional kernels, which are unable to make predictions based on pixels outwith this area.

A Solution to Smearing Artefacts

We define our inverse imaging problem in muography as the imprecision of a scattering point in a direction orthogonal to the detector planes caused by the lack of information to place said scattering point on a given muon’s line path. For a detector plane parallel to the ground, the muon path between the two sets of detectors will be along the z direction, with the increased uncertainty in the z axis resulting in objects appearing to ‘smear’ along this direction—also referred to as ‘shadows’ or ‘shadowing’. We can see in Figure 4 that the segmented images only contain x-aligned rebar, despite the associated muon images appearing to contain a rebar grid of y- and x-aligned rebar. This is due to the segmentation model being trained to reproduce geometric ground truths that were absent of any smearing artefacts, learning the ability to accurately differentiate between shadows and ‘true’ objects despite a lack of information along the z axis.
We can investigate this phenomenon further by looking at a concrete block sample side-on by stacking all segmented X–Y plane images back into a three-dimensional volume and taking a slice along the X–Z plane. An example of a side-on slice showing the original, upsampled, segmented, and upsampled and segmented versions at different equivalent sampling times is shown in Figure 5. Despite the model not having any context in the z plane, segmentation outputs (bottom two panels) show remarkable improvements in object clarity when compared to the muographic images. The upsampling model also exhibits a clear improvement to the low sampling times in panels (a) and (h). However, this ability to reduce smearing effects to accurately distinguish smearing artefacts from actual object geometry would be much improved by using a model input which encompasses all three dimensions.
For simplicity and computational efficiency, our current model processes each X–Y plane layer independently. While effective, this method’s primary limitation is the model’s lack of contextual awareness of a layer’s position within the overall scan, and it cannot utilise information from adjacent layers. However, several methods exist to incorporate 3D context without processing full volumes. These include orthogonal plane processing (where separate models are trained on X–Y, X–Z, and Y–Z planes), multi-layer processing (which analyses small stacks of adjacent layers simultaneously) and patch-based processing (which splits the volume into manageable 3D sub-volumes). These approaches would provide three-dimensional context while maintaining reasonable computational requirements and model complexity.
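Of these options, multi-layer processing is the simplest to retrofit: adjacent slices are stacked as extra input channels, as sketched below (the window size k and the edge-clamping behaviour are illustrative choices).

```python
import numpy as np

def adjacent_layer_stack(volume, k=1):
    """Multi-layer processing sketch: for each X-Y slice of a (H, W, Z)
    volume, stack its k neighbours above and below as extra input channels,
    giving the model limited z-context without processing the full volume."""
    z = volume.shape[-1]
    stacks = []
    for i in range(z):
        idx = np.clip(np.arange(i - k, i + k + 1), 0, z - 1)  # clamp at edges
        stacks.append(volume[..., idx].transpose(2, 0, 1))    # (2k+1, H, W)
    return np.stack(stacks)                                   # (Z, 2k+1, H, W)
```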

4.3. Future Directions: Model Refinement and Defect Detection Applications

The dataset used to train both models covered a wide range of sampling times. The upsampling performance on the lowest generated image data was much better than anticipated, whereas performance at equivalent sampling times greater than 50–60 days yielded diminishing returns as well as indications of convergence between the input and output metric scores. Reducing the maximum sampling time would drastically decrease the required computation time for simulations and PoCA algorithm processing. The high performance of the upsampling model on a 1-day equivalent sampling time, especially shown by the visual examples, indicates that the limits of the model should be further pushed to provide results for equivalent sampling times on the order of hours.
Despite using segmentation purely as an analysis tool, its ability to resolve true object boundaries and therefore decrease smearing effects is a significant result. Furthermore, it was noted that the segmentation performance on the upsampled images was limited, despite there being clear objects present. This may be a symptom of using a two-model approach, as the outputs of the upsampling model $\hat{y}$ are similar but not identical to the inputs $y$ used to train the segmentation model. In order to achieve the most accurate outputs, without the complexity of two models, we can simplify the task to a one-model approach which uses the true object geometries to generate a ground truth—whether that be a segmentation map of labels or of material-specific values such as density or radiation length.
The limited receptive field of convolutional kernels restricted the model to localised context—evident in the non-uniformity of the rebar grids and straight edges in the segmented images. While this issue can be circumvented through the use of hierarchical dilated convolutions [36], allowing for a slight increase in context size, the most popular solution is to use attention mechanisms [37]. Attention is a more recent and increasingly prominent methodology for model construction, powering state-of-the-art models in both language and vision tasks. It uses a tokenization system that splits images into patches, which are flattened and processed as individual tokens. These tokens are positionally embedded to preserve spatial relationships, enabling the model to account for location dependence. Unlike convolutions, which have a limited receptive field that grows with depth, attention mechanisms inherently consider relationships between all tokens, providing a global context at every layer. This ability to capture long-range dependencies and spatial relationships makes attention a powerful alternative for vision tasks.
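The tokenization step described above can be sketched as follows, using a strided convolution to cut the image into non-overlapping patches and a learned positional embedding; the patch and embedding sizes are illustrative only.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Vision-transformer-style tokenization sketch: split an image into
    non-overlapping patches, flatten each into a token, and add a learned
    positional embedding so spatial relationships are preserved."""
    def __init__(self, img_size=500, patch=25, in_ch=1, dim=128):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))

    def forward(self, x):                           # x: (B, 1, 500, 500)
        tokens = self.proj(x)                       # (B, dim, 20, 20)
        tokens = tokens.flatten(2).transpose(1, 2)  # (B, 400, dim)
        return tokens + self.pos                    # ready for attention layers
```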
While the concrete samples used to train the models are of realistic design, the dataset poorly reflects the wide variety of real-world scenarios, which include more varied object orientations, concrete thicknesses, and detector orientations. Therefore, future dataset generation should aim to vary these parameters to allow the model to generalise to different scenarios.
This study has looked at the upsampling and identification of objects commonly found within concrete interiors. However, the identification of objects solves only part of the problem pertaining to the characterization of built infrastructure. The next step is to alter the model’s objective to look toward not only identifying object location and material but to also accurately identify defects—such as tendon duct strand corrosion, honeycombing, and air voids.
Despite the general agreement between Geant4 simulations and real data for muography, it is important to verify these models on real-world data. Therefore, testing of the models on data gathered by Niederleithinger et al. [5], as well as on planned future muography scans by Lynkeos Technology Ltd., will be important for validating the results of this study.

5. Conclusions

In this study, we have demonstrated the successful application of deep learning techniques to address two significant challenges in muon scattering tomography: reducing required sampling times and improving feature identification in concrete structures. Our cWGAN-GP model achieved remarkable upsampling performance, with 1-day sampled images being enhanced to match the quality metrics of 21-day baseline images for the SSIM and 31-day baseline images for the PSNR. This represents a significant reduction in the required sampling times for achieving high-quality muography images. To evaluate the model’s performance beyond traditional metrics, we developed an approach using semantic segmentation as an assessment tool. This method revealed that the upsampling model’s effectiveness varies significantly across different structural features, with particularly strong performance in enhancing the visibility of tendon ducts and moderate improvements in rebar grid detection. The segmentation model also demonstrated an unexpected capability to differentiate between real features and shadowing artefacts caused by a low vertical resolution, thus addressing the inverse imaging problem inherent to muography. While our results are promising, key challenges remain. The model’s performance on air void detection requires improvement, likely through the implementation of class-weighted loss functions to address class imbalance. Additionally, the current approach of processing 2D slices independently limits the model’s ability to utilise full 3D contextual information, suggesting potential improvements through multi-plane or patch-based processing methods. The transition from simulation to real-world application represents the next critical step in this research. Future work will focus on generalizing the models to a range of realistic imaging scenarios, as well as validating these models with experimental data and extending the segmentation capabilities to identify specific structural defects, such as tendon duct corrosion and concrete honeycombing. These advancements could significantly impact the practical application of muography in infrastructure inspection, potentially reducing inspection times while improving the accuracy of defect detection.

Author Contributions

Conceptualization, W.O., D.M., G.Y. and S.G.; methodology, W.O.; software, W.O. and G.Y.; validation, W.O.; formal analysis, W.O.; investigation, W.O.; resources, W.O.; data curation, W.O.; writing—original draft preparation, W.O.; writing—review and editing, D.M. and G.Y.; visualization, W.O.; supervision, D.M., G.Y. and S.G.; project administration, D.M.; funding acquisition, D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are available upon request.

Acknowledgments

We give additional thanks to Ernst Niederleithinger at BAM for his support and technical expertise, which informed the simulation design.

Conflicts of Interest

Authors William O’Donnell, David Mahon, and Guangliang Yang were employed by the company Lynkeos Technology Ltd. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Mahon, D.; Clarkson, A.; Gardner, S.; Ireland, D.; Jebali, R.; Kaiser, R.; Ryan, M.; Shearer, C.; Yang, G. First-of-a-kind muography for nuclear waste characterization. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2019, 377, 20180048. [Google Scholar] [CrossRef]
  2. Barnes, S.; Georgadze, A.; Giammanco, A.; Kiisk, M.; Kudryavtsev, V.A.; Lagrange, M.; Pinto, O.L. Cosmic-Ray Tomography for Border Security. Instruments 2023, 7, 13. [Google Scholar] [CrossRef]
  3. Martinez Ruiz Del Arbol, P.; Orio Alonso, A.; Diez, C.; Garcia, P. Applications of Muography to the Industrial Sector. J. Adv. Instrum. Sci. 2022, 2022. [Google Scholar] [CrossRef]
  4. Bonomi, G.; Caccia, M.; Donzella, A.; Pagano, D.; Villa, V.; Zenoni, A. Cosmic ray tracking to monitor the stability of historical buildings: A feasibility study. Meas. Sci. Technol. 2019, 30, 045901. [Google Scholar] [CrossRef]
  5. Niederleithinger, E.; Gardner, S.; Kind, T.; Kaiser, R.; Grunwald, M.; Yang, G.; Redmer, B.; Waske, A.; Mielentz, F.; Effner, U.; et al. Muon Tomography of the Interior of a Reinforced Concrete Block: First Experimental Proof of Concept. J. Nondestruct. Eval. 2021, 40, 65. [Google Scholar] [CrossRef]
  6. Schultz, L.J.; Blanpied, G.S.; Borozdin, K.N.; Fraser, A.M.; Hengartner, N.W.; Klimenko, A.V.; Morris, C.L.; Orum, C.; Sossong, M.J. Statistical Reconstruction for Cosmic Ray Muon Tomography. IEEE Trans. Image Process. 2007, 16, 1985–1993. [Google Scholar] [CrossRef] [PubMed]
  7. Giammanco, A.; Moussawi, M.A.; Boone, M.; Kock, T.D.; Roy, J.D.; Huysmans, S.; Kumar, V.; Lagrange, M.; Tytgat, M. Cosmic rays for imaging cultural heritage objects. arXiv 2024, arXiv:2405.10417. [Google Scholar] [CrossRef]
  8. Das, S.; Tripathy, S.; Jagga, P.; Bhattacharya, P.; Majumdar, N.; Mukhopadhyay, S. Muography for Inspection of Civil Structures. Instruments 2022, 6, 77. [Google Scholar] [CrossRef]
  9. Armanious, K.; Jiang, C.; Fischer, M.; Küstner, T.; Hepp, T.; Nikolaou, K.; Gatidis, S.; Yang, B. MedGAN: Medical image translation using GANs. Comput. Med. Imaging Graph. 2020, 79, 101684. [Google Scholar] [CrossRef]
  10. Zhou, S.K.; Greenspan, H.; Davatzikos, C.; Duncan, J.S.; Van Ginneken, B.; Madabhushi, A.; Prince, J.L.; Rueckert, D.; Summers, R.M. A Review of Deep Learning in Medical Imaging: Imaging Traits, Technology Trends, Case Studies with Progress Highlights, and Future Promises. Proc. IEEE 2021, 109, 820–838. [Google Scholar] [CrossRef]
  11. Ma, J.; He, Y.; Li, F.; Han, L.; You, C.; Wang, B. Segment anything in medical images. Nat. Commun. 2024, 15, 654. [Google Scholar] [CrossRef] [PubMed]
  12. Agostinelli, S.; Allison, J.; Amako, K.; Apostolakis, J.; Araujo, H.; Arce, P.; Asai, M.; Axen, D.; Banerjee, S.; Barrand, G.; et al. Geant4—A simulation toolkit. Nucl. Instruments Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2003, 506, 250–303. [Google Scholar] [CrossRef]
  13. Allison, J.; Amako, K.; Apostolakis, J.; Araujo, H.; Arce Dubois, P.; Asai, M.; Barrand, G.; Capra, R.; Chauvie, S.; Chytracek, R.; et al. Geant4 developments and applications. IEEE Trans. Nucl. Sci. 2006, 53, 270–278. [Google Scholar] [CrossRef]
  14. Allison, J.; Amako, K.; Apostolakis, J.; Arce, P.; Asai, M.; Aso, T.; Bagli, E.; Bagulya, A.; Banerjee, S.; Barrand, G.; et al. Recent developments in Geant4. Nucl. Instruments Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2016, 835, 186–225. [Google Scholar] [CrossRef]
  15. Pagano, D.; Bonomi, G.; Donzella, A.; Zenoni, A.; Zumerle, G.; Zurlo, N. EcoMug: An Efficient COsmic MUon Generator for cosmic-ray muon applications. Nucl. Instruments Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2021, 1014, 165732. [Google Scholar] [CrossRef]
  16. Thompson, L.F.; Stowell, J.P.; Fargher, S.J.; Steer, C.A.; Loughney, K.L.; O’Sullivan, E.M.; Gluyas, J.G.; Blaney, S.W.; Pidcock, R.J. Muon tomography for railway tunnel imaging. Phys. Rev. Res. 2020, 2, 023017. [Google Scholar] [CrossRef]
  17. Oláh, L.; Tanaka, H.K.; Mori, T.; Sakatani, Y.; Varga, D. Structural health monitoring of sabo check dams with cosmic-ray muography. iScience 2023, 26, 108019. [Google Scholar] [CrossRef] [PubMed]
  18. Fujii, H.; Hara, K.; Hayashi, K.; Kakuno, H.; Kodama, H.; Nagamine, K.; Sato, K.; Kim, S.H.; Suzuki, A.; Sumiyoshi, T.; et al. Investigation of the Unit-1 nuclear reactor of Fukushima Daiichi by cosmic muon radiography. Prog. Theor. Exp. Phys. 2020, 2020, 043C02. [Google Scholar] [CrossRef]
  19. Tanaka, H.K.M.; Kusagaya, T.; Shinohara, H. Radiographic visualization of magma dynamics in an erupting volcano. Nat. Commun. 2014, 5, 3381. [Google Scholar] [CrossRef]
  20. Morishima, K.; Kuno, M.; Nishio, A.; Kitagawa, N.; Manabe, Y.; Moto, M.; Takasaki, F.; Fujii, H.; Satoh, K.; Kodama, H.; et al. Discovery of a big void in Khufu’s Pyramid by observation of cosmic-ray muons. Nature 2017, 552, 386–390. [Google Scholar] [CrossRef] [PubMed]
  21. Rossi, B.; Greisen, K. Cosmic-Ray Theory. Rev. Mod. Phys. 1941, 13, 240–309. [Google Scholar] [CrossRef]
  22. Yang, G.; Clarkson, T.; Gardner, S.; Ireland, D.; Kaiser, R.; Mahon, D.; Jebali, R.A.; Shearer, C.; Ryan, M. Novel muon imaging techniques. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2019, 377, 20180062. [Google Scholar] [CrossRef]
  23. Ughade, R.; Bae, J.; Chatzidakis, S. Performance evaluation of cosmic ray muon trajectory estimation algorithms. AIP Adv. 2023, 13, 125301. [Google Scholar] [CrossRef]
  24. Schultz, L.J. Cosmic Ray Muon Radiography. Ph.D. Thesis, Portland State University, Portland, OR, USA, 2003. [Google Scholar]
  25. Sobel, I.; Feldman, G. An Isotropic 3 × 3 Image Gradient Operator. Presentation at Stanford A.I. Project 1968. 2014. Available online: https://www.researchgate.net/publication/281104656_An_Isotropic_3x3_Image_Gradient_Operator (accessed on 20 January 2025).
  26. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Series Title: Lecture Notes in Computer Science; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. [Google Scholar] [CrossRef]
  27. Siddique, N.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V. U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications. IEEE Access 2021, 9, 82031–82057. [Google Scholar] [CrossRef]
  28. Dice, L.R. Measures of the Amount of Ecologic Association Between Species. Ecology 1945, 26, 297–302. [Google Scholar] [CrossRef]
  29. Sørensen, T. A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. K. Dan. Vidensk. Selsk. 1948, 5, 1–34. [Google Scholar]
  30. Clarkson, A.; Hamilton, D.; Hoek, M.; Ireland, D.; Johnstone, J.; Kaiser, R.; Keri, T.; Lumsden, S.; Mahon, D.; McKinnon, B.; et al. The design and performance of a scintillating-fibre tracker for the cosmic-ray muon tomography of legacy nuclear waste containers. Nucl. Instruments Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2014, 745, 138–149. [Google Scholar] [CrossRef]
  31. Sun, W.; Wang, K.; Taylor, P.C.; Wang, X. Air void clustering in concrete and its effect on concrete strength. Int. J. Pavement Eng. 2022, 23, 5127–5141. [Google Scholar] [CrossRef]
  32. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar] [CrossRef]
  33. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017. [Google Scholar] [CrossRef]
  34. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved Training of Wasserstein GANs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar] [CrossRef]
  35. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  36. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018. [Google Scholar] [CrossRef]
  37. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar] [CrossRef]
Figure 1. Average Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) metrics of the test dataset as a function of equivalent sampling time. The original dataset performance is indicated by blue circular points, while the upsampled outputs of the model are indicated by red triangular points.
Figure 2. Five examples of 1-day input images from the test dataset (left column) and their corresponding output from the upsampling model (middle column). The 100-day ground truth image for each example is shown for comparison (right column).
Figure 3. Average Dice coefficient of the test dataset, calculated at each equivalent sampling time, for each of the four segmentation object labels (rebar grid, tendon duct, void, unknown). Performance of the original input dataset is shown by blue circular points, while upsampled outputs are displayed with red triangular points. The relative improvement from the upsampling model for a particular label is indicated by the Dice difference, shown in purple below each object plot.
Figure 4. A single X–Y plane image slice for different equivalent sampling times: (a) one day, (b) five days, (c) 10 days, (d) 20 days, (e) 40 days, (f) 60 days, (g) 80 days, (h) ground truth (100-day image for top panels, geometry truth for bottom panels). Each panel is displayed as raw input (top left), upsampled (top right), segmented (bottom left), and upsampled and segmented (bottom right). Lilac, blue, red, and yellow indicate concrete, rebar, tendon ducts, and air voids, respectively.
Figure 5. A single X–Z plane vertical image slice for different equivalent sampling times: (a) one day, (b) five days, (c) 10 days, (d) 20 days, (e) 40 days, (f) 60 days, (g) 80 days, (h) ground truth (100-day image for the top two panels, geometry truth for the bottom two panels). Each panel is displayed as raw input (top), upsampled (second from top), segmented (third from top), and upsampled and segmented (bottom). Lilac, blue, and red indicate concrete, rebar, and tendon ducts, respectively.
