1. Introduction
X-ray microcomputed tomography (micro-CT) is a three-dimensional imaging technique that allows the nondestructive analysis of samples. Originally developed as a medical diagnostic tool based on needle- or fan-beam X-ray geometry, the technique benefited from the development of powerful flat-panel detectors, which enabled the construction of CT systems with cone-beam geometry and, thus, a significant improvement in resolution and scanning speed [1]. In addition to the geometrical magnification, a setup with additional optics, a so-called X-ray microscope (XRM), allows a further increase in resolution by one order of magnitude. Nowadays, XRM systems that achieve a resolution of a few µm have become a standard tool in the engineering sciences for the characterization of particle systems, for example to support the understanding of separation processes [2,3].
When penetrating the sample, X-rays are attenuated due to interactions of the X-ray photons with the matter, as described by the Beer–Lambert law:

I = I₀ · exp(−μd)

with I₀ and I describing the intensity of the X-rays before and after penetrating the sample, respectively, and d the penetrated material thickness. The linear attenuation coefficient μ of a specific material depends on the material density ρ and is a function of the X-ray wavelength λ and thus of the X-ray energy [
1]. Although the attenuation is strongly influenced by the sample's material properties, conventional computed tomography is not an element-specific analysis, as the attenuation is normally recorded through greyscale images. During tomographic reconstruction, each smallest possible volume element, a so-called voxel, is then assigned a grey value that is composed of the attenuation properties of all material phases contained in the respective volume. This summation of material properties inside a voxel is called the partial volume effect (PVE) [
4]. Mostly considered as an imaging artefact, the PVE can be used to obtain additional information: the grey value of voxels affected by the PVE can be assumed to be the linear mixture of the attenuation property of each contained phase, weighted by their respective volume fraction [
5]. For polychromatic X-ray sources, this assumption can only be made if artefacts resulting from beam hardening are kept to a minimum [
6]. Beam hardening occurs along the radial dimension of the sample, as low-energy X-rays are preferentially absorbed before reaching the center of the sample. Additionally, structural beam hardening results from highly attenuating material inside the sample. Both the PVE and beam hardening lead to a change of grey value with the radial distance from the sample center and around highly attenuating structures [
7].
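The attenuation relation and the linear-mixture assumption behind the PVE can be illustrated numerically. The following sketch uses invented attenuation coefficients (the values for filter aid, impurity, and pore space are purely illustrative, not measured data):

```python
import numpy as np

def transmitted_intensity(i0, mu, thickness):
    """Beer-Lambert law: intensity after passing a homogeneous material
    of given linear attenuation coefficient (1/cm) and thickness (cm)."""
    return i0 * np.exp(-mu * thickness)

def mixed_voxel_attenuation(mu_phases, volume_fractions):
    """PVE linear-mixture assumption: the effective attenuation of a voxel
    containing several phases is the sum of the phase coefficients
    weighted by their respective volume fractions."""
    fractions = np.asarray(volume_fractions, dtype=float)
    assert np.isclose(fractions.sum(), 1.0), "fractions must sum to 1"
    return float(np.dot(mu_phases, fractions))

# Illustrative (not measured) attenuation coefficients in 1/cm:
mu_filter_aid, mu_impurity, mu_pore = 0.5, 8.0, 0.0

# A voxel holding 60 % filter aid, 10 % impurity, and 30 % pore volume:
mu_eff = mixed_voxel_attenuation([mu_filter_aid, mu_impurity, mu_pore],
                                 [0.6, 0.1, 0.3])
print(mu_eff)                                   # effective coefficient
print(transmitted_intensity(1.0, mu_eff, 0.2))  # transmission over 0.2 cm
```

Because the highly attenuating impurity dominates the weighted sum, even a small impurity fraction raises the effective grey value of the voxel noticeably, which is exactly the subvoxel information the calibration method exploits.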
To obtain quantitative information from a CT measurement, a segmentation is usually performed after post-processing of the reconstructed data. With each phase of the sample assigned a specific grey value, the sample can be analyzed quantitatively. However, this method requires the structural feature of interest to be significantly larger than the voxel size so that it can be identified during image processing [
8]. Although the PVE is present in the whole volume, it manifests itself visually mainly at interfaces between differently absorbing phases. As a result, these interfaces become blurred and can only be distinguished from neighboring phases if there is a minimum lateral distance of two to four voxels [
6,
9]. Thus, voxel size alone is an insufficient measure of resolution. Instead, the spatial resolution, defined as the minimum distance at which two features can still be detected as separate, should be considered. For the setup used here, a factor of five is assumed as the correlation between voxel size and spatial resolution. The achievable voxel resolution, and thus the field of view (FOV), is limited by the dimensions of the sample unless the setup permits scanning within the sample [
10]. Note that with larger features, and thus a larger FOV, the resolution decreases because both are linked via a fixed number of detector pixels [
11]. Therefore, performing an adequate segmentation for a quantitative analysis will not be feasible if the particles of interest are smaller than the spatial resolution [
12].
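The link between the fixed number of detector pixels, the FOV, and the achievable spatial resolution can be sketched with a small calculation. The detector width of 2000 pixels and the 20 mm FOV below are example values; the factor of five follows the assumption stated above:

```python
def voxel_size(fov_mm, n_detector_pixels):
    """Voxel size follows from the FOV divided by the fixed number of
    detector pixels: a larger FOV directly coarsens the voxels."""
    return fov_mm / n_detector_pixels

def spatial_resolution(voxel_size_mm, factor=5):
    """Assumed correlation between voxel size and spatial resolution
    (a factor of five for the setup described in the text)."""
    return factor * voxel_size_mm

n_pixels = 2000                    # example detector width in pixels
fov = 20.0                         # example FOV in mm
vx = voxel_size(fov, n_pixels)     # 0.01 mm = 10 um voxels
print(vx, spatial_resolution(vx))  # features below ~50 um blur together
```

With these example numbers, particles smaller than roughly 50 µm cannot be segmented as individual objects, which motivates the greyscale-based calibration approach described next.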
In spite of this challenge, there are other methods of performing quantitative analyses based on tomographic greyscale information. One of these approaches uses a calibration sample also known as a phantom. The method was originally developed in the field of medical imaging and aims at creating an object with precisely known properties that behaves as similarly as possible to the real object when imaged. Initially, the term “phantom” would refer to an object mimicking a part of the human body [
13]. Medical phantoms are used to determine bone mineral content [
14] or bone density [
15], to evaluate electron density/proton stopping power for proton radiation therapy [
16], to distinguish tissue types in breast tumor diagnosis [
17], or to quantify the accumulation of drug particles in organs [
18]. Today, a phantom is any calibration sample with well-defined material properties that is used to calibrate a grey value histogram, establishing a clear link between a material property and a certain grey value. In most cases, calibration samples serve to determine the concentration of particles inside a surrounding matrix, because the geometric dimensions of the particles are well below the highest possible resolution of the scan [
9,
19,
20].
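As a sketch of the phantom idea, a calibration function can be fitted to the mean grey values of phantoms with known particle volume fractions and then inverted to map measured grey values to concentrations. The phantom data below are invented for illustration, and a linear model is assumed, consistent with the linear-mixture assumption of the PVE:

```python
import numpy as np

# Invented example data: mean grey value of each phantom vs. its known
# particle volume fraction (a real calibration uses measured phantoms).
volume_fraction = np.array([0.00, 0.05, 0.10, 0.20])
mean_grey_value = np.array([100.0, 130.0, 160.0, 220.0])

# Linear calibration: grey = a * fraction + b
a, b = np.polyfit(volume_fraction, mean_grey_value, deg=1)

def grey_to_fraction(grey):
    """Invert the calibration to estimate the particle volume fraction
    corresponding to a measured grey value."""
    return (grey - b) / a

print(grey_to_fraction(190.0))  # ~0.15 for this perfectly linear example
```

In practice the fit quality of the calibration, and any deviation from linearity caused by beam hardening, must be checked against validation samples before the function is applied to real scans.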
In this paper, the phantom technique is applied to filter cake structures. XRM offers an excellent opportunity in the field of liquid filtration research to investigate and describe prevailing mechanisms in filter cake formation and dewatering. Laboratory filtration experiments are often carried out in standardized test apparatuses, with resulting filter cakes having thicknesses in the range of a few centimeters. To obtain reliable results from XRM measurements for the whole filtration process, the entire height of a filter cake has to be considered. Data stitching of several individual scans covering certain areas of the filter cakes should be avoided in order to keep scanning time to a minimum, which in effect reduces drying and shrinking of the cake structures [
21]. With an appropriate FOV, the individual particles, whose sizes are usually in the µm range, cannot be resolved unless the size of the test apparatus itself is adjusted under certain assumptions [
22]. In this case, however, it must be ensured that the experiment can be transferred to a smaller scale without the loss of information. If downscaling is not possible, a sampling technique will be required [
23].
Filter aid filtration is a solid–liquid separation process. It is mainly applied for the separation of very fine particles below 5 µm from suspensions of low concentration, which present considerable challenges for conventional filtration processes [
24]. The aim is to obtain a particle-free filtrate as well as to build up a filter cake with a sufficiently high permeability. A filter aid is added to the suspension. The filter cake is therefore formed by the filter aid, with the impurities being deposited inside this cake. Filter aid filtration can be achieved via several process strategies [
25]. Our work focuses on the case of filter aid filtration with a precoat and a growing layer (see
Figure 1). Here, a pure filter aid suspension is first filtered until a considerable amount of filter aid has been deposited on top of the support layer, forming a so-called precoat. The support layer is commonly a wire mesh with a relatively large aperture size, which itself does not take part in the filtration of impurities at all. The precoat layer takes over the functionality of the filter medium commonly used in filtration and is intended to prevent bleeding of impurities into the filtrate. After the precoat has been formed, the actual suspension containing the impurity is filtered together with added filter aid, the so-called body feed.
In addition to the deposition of impurities in newly formed cake layers, transport of impurities into lower cake layers may take place [
26,
27]. Therefore, both cake filtration and depth filtration must be considered as taking place in the filter aid filtration process. In modeling the filtration process, it is important to have information about the fraction of impurity particles migrating into lower layers of the filter cake [
28,
29,
30]. X-ray microscopy combined with the phantom calibration method offers the possibility of providing an insight into the impurity concentration profile if there is sufficient difference in X-ray attenuation between the filter aid and the impurity to be separated.
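Applying such a calibration voxel-wise and averaging over each horizontal slice yields the impurity concentration profile over the cake height. The following sketch uses a synthetic grey-value stack and an assumed linear calibration (slope 600, offset 100, matching the illustrative fit above) rather than real reconstruction data:

```python
import numpy as np

def concentration_profile(volume, a=600.0, b=100.0):
    """Convert a reconstructed grey-value volume (z, y, x) into an
    impurity volume-fraction profile over the cake height z, using an
    assumed linear calibration grey = a * fraction + b."""
    fractions = (volume - b) / a          # voxel-wise calibration
    return fractions.mean(axis=(1, 2))    # mean fraction per z-slice

# Synthetic 4-slice stack whose impurity content decreases with height,
# mimicking impurities migrating into lower cake layers:
rng = np.random.default_rng(0)
z_fractions = np.array([0.20, 0.10, 0.05, 0.02])
volume = 600.0 * z_fractions[:, None, None] + 100.0
volume = volume + rng.normal(0.0, 1.0, size=(4, 50, 50))  # imaging noise

profile = concentration_profile(volume)
print(np.round(profile, 3))   # close to [0.20, 0.10, 0.05, 0.02]
```

Averaging over thousands of voxels per slice suppresses the imaging noise, which is why greyscale calibration can recover concentration profiles even though individual impurity particles remain unresolved.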
5. Conclusions
The phantom calibration method is applied to three-dimensional filter cake characterization using X-ray microcomputed tomography, with the aim of determining the amount of separated impurity inside a filter cake. This in turn allows for the quantification of filtration effects. A calibration approach is necessary in CT imaging because the particles of interest, especially the impurity particles, are much smaller than the maximum achievable spatial resolution of the measuring setup. As a consequence, segmentation into the distinct phases of filter aid, impurity, and pore volume, followed by a quantitative analysis, is not feasible.
Kieselguhr is used as the filter aid, and barium sulphate acts as the impurity substance. The grey values of prepared phantoms are correlated with the corresponding BaSO4 volume fractions. The calibration function acquired in this way is used to transfer the grey values of the filter cake datasets into BaSO4 volume fractions, which allow further analysis. The validation confirms the accuracy of the calibration function.
The tested approach is new in the field of filter cake characterization. It is promising and delivers reproducible and valid results for the filter cakes prepared in constant volume flow experiments with precoat and body feed layers. The resulting BaSO4 volume fractions are higher than anticipated, indicating that the filtration process using a pressurized tank influences the composition of the feed suspension. After evaluating all experimental steps, most of the possible influences are ruled out as insignificant; only the occurrence of air inclusions is expected to affect the resulting impurity volume fractions. Imaging artefacts should not be underestimated, but they could be kept to a minimum and are ruled out as a significant factor here.
The outlook is positive, as the presented work shows the immense value of the additional information gained by exploiting subvoxel data. An extension towards the quantification of two different impurities in a simplified system should be possible with the described methodology, provided the contents of all other phases in the system are known. Filtration effects can now be studied in more detail for a wider range of experimental setups with different process parameters.