**From Light to Displacement: A Design Framework for Optimising Spectral-Domain Low-Coherence Interferometric Sensors for In Situ Measurement**

## **Tom Hovell \*, Jon Petzing, Laura Justham and Peter Kinnell**

Wolfson School of Mechanical, Electrical and Manufacturing Engineering, Loughborough University, Loughborough, LE11 3TU, UK; J.Petzing@lboro.ac.uk (J.P.); L.Justham@lboro.ac.uk (L.J.); P.Kinnell@lboro.ac.uk (P.K.)

**\*** Correspondence: T.Hovell@Lboro.ac.uk

Received: 23 October 2020; Accepted: 26 November 2020; Published: 30 November 2020

**Abstract:** Growing requirements for in situ metrology during manufacturing have led to an increased interest in optical coherence tomography (OCT) configurations of low coherence interferometry (LCI) for industrial domains. This paper investigates the optimisation of spectral domain OCT hardware and signal processing for such implementations. A collation of the underlying theory of OCT configured LCI systems from disparate sources linking the journey of the light reflected from the object surface to the definition of the measurand is presented. This is portrayed in an applicable, comprehensible design framework through its application to profilometry measurements for optimising system performance.

**Keywords:** in-process; metrology for machining; optical coherence tomography

## **1. Introduction**

In situ metrology is a growing requirement for many high value manufacturing processes, ensuring traceability of parts with high tolerances and complex geometric forms, in addition to process control [1,2]. Current in situ forms of metrology commonly involve the use of sensors collecting information on parameters such as temperature, pressure, vibration and torque [3,4] to quantify part quality. Verification of quality becomes increasingly difficult via these methods as adaptive production lines create made-to-measure custom parts, such as biomedical implants. Current systems used for on-machine geometric measurement are often tactile touch-trigger probes. Owing to the measurement speed of tactile probes, only sparse datasets can practically be obtained in production-line scenarios, restricting capture to simple geometric forms that require a low datapoint density to accurately describe the structure [2,5,6]. Such methods of metrology allow neither comprehensive in-process analysis of manufacturing progress nor complete verification of a part's adherence to required tolerances. A system capable of both would enable quick adaptation to changes in component design, ensure traceability at each stage of the manufacturing process and allow identification of issues at the point of occurrence, so that corrective actions can be taken without needing to remove and re-fixture the workpiece.

In order to achieve this goal, direct measurement of the component is required, through the migration of complex metrology systems from the measurement lab into the manufacturing domain, allowing geometric measurement data to be collected while a manufacturing process is underway, or soon after it is completed. However, due to spatial constraints and impacts from the environment, direct measurement of geometric form is not easily achieved using conventional, currently developed metrology tools.

Low-coherence interferometry (LCI) is a well-established optical method for obtaining absolute geometric measurements, present in many metrology laboratories around the world. However, typical LCI instruments are large and not suited to direct implementation into machining processes, often utilising white-light scanning configurations that are prone to error from external environmental factors and require dedicated workstations. Optical coherence tomography (OCT) is a branch of applied LCI developed for operating in non-ideal environments within biomedical science [7]. OCT has been developed for over three decades and demonstrates high data integrity and high spatial resolution even whilst operating within the hostile environment of the human body [8]. OCT configurations have demonstrated straightforward coupling with fibre optics, allowing sensors to be located in situ and at a distance from the readout sensor and electronics, meaning only the chemically inert, extremely compact and robust optical fibre needs to be integrated within the manufacturing process environment. This allows for potential sensor integration into hostile areas subject to issues such as high temperatures, electromagnetic interference, corrosive fluids and radioactivity, where current electronic sensors would fail or require extensive shielding. Additionally, such sensors require relatively inexpensive, readily available optical components, making the technology highly accessible. These characteristics make OCT configurations of LCI particularly interesting for in situ profilometry in areas where integration of current sensor technology is not possible.

Although several review papers have been generated from the literature on developments in OCT systems [8–11], these focus on its application in the biomedical domain. As a result, the underlying theoretical principles of the parameters affecting the captured signal, and the methods of processing it for such configurations, remain largely dispersed and isolated amongst the literature. An in-depth, end-to-end demonstration of the theory of operation, from light source to spatial measurement in non-ideal, realistic scenarios, with an additional focus on manufacturing regimes, has not yet been completed. As this technology transitions to greater use in different engineering sectors and applications, a complete practical methodology for its design and application in such environments is required.

The purpose of this article is to create an optical and data processing design framework for system hardware selection and signal processing to optimise desired system performance. This is achieved through collation of the underlying theory of spectral-domain OCT systems from disparate sources with an application to industrial metrology, linking the journey from the light reflected from the object surface to the definition of the measurand, and portraying it in an applicable, comprehensible format through its application to profilometry.

#### *Sensor Applications in Industry*

It can be seen from Table 1 that a diverse range of applications have already seen initial investigation into the use of LCI sensors for process monitoring. Recent work demonstrating the use of LCI in a liquid environment has shown that the system is able to provide stable, accurate data during profilometry measurements across various step heights [12]. An extension of this work, demonstrating an embedded LCI sensor for absolute distance measurement of samples operating within a jet of water, has also shown significant data stability [13], indicating the potential usefulness of the LCI sensor for post-processing verification of parts, utilising a liquid jet for flushing of debris and coolant during measurement. This growing interest in the use of LCI sensors in industrial applications indicates a requirement for a text focused on optimising systems for such environments, along with expected outcomes and commonly encountered issues.

Although there are several implementations of OCT configurations, due to the sensitivity advantages of Fourier-domain approaches over time-domain ones [14] and the prohibitive cost of operating with swept-laser light sources, this paper focuses on spectrometer-based (spectral-domain) implementations.


**Table 1.** Publication breakdown by application domain, table ordered by first appearance of application in literature.

## **2. Fundamental Principles**

Spectral-domain LCI investigates the optical path length and magnitude of backreflected light from a sample to determine the sample's distance offset. This involves analysing the frequency modulation of the interferometric signal, which depends on the optical path difference between a stable reference signal and the signal returned from the measured sample; this is commonly achieved through conversion of the signal into the Fourier domain via a fast Fourier transform (FFT). This behaviour is well outlined in the literature, especially for measurement of biological systems with OCT [45,46]. This section will clarify, step by step, the underlying theory of how a spatial distance measurement is achieved with a simple LCI system.

A Michelson interferometer configuration, outlined in Figure 1a, is commonly used for the interferometric portion of the LCI sensor; the mathematical description of the light signal is also shown at each stage of its progress. Figure 1b,c shows that optical components can also be added at the exit of the fibre, allowing control of specific beam parameters (shape, depth of focus, focal length and intensity distribution). Additionally, scanning methods can be added to perform profilometry, as shown in Figure 1b. However, when applying such a sensor within the spatially constrained, non-ideal environments present in many manufacturing procedures, galvanometer scanning systems may not meet the required robustness and spatial tolerances imposed within the measurement area; scanning procedures such as that shown in Figure 1c may then be more appropriate.

**Figure 1.** (**a**) schematic of Michelson interferometer for LCI sensor, (**b**) scanning by galvanometer mirror, (**c**) scanning via moving probe or moving sample, (**d**) signal term for a singular sample reflector and corresponding equations relating to signal features. For multiple reflectors, the cross-correlation component is a superposition of sinusoids.

## *2.1. Mathematical Formulation*

The progress of light through a Michelson configured LCI is shown in Figure 2; offshoots at each stage in the process show key parameters which will be affected by the system components. By understanding the impacting factors on signal quality at each stage of this progression, optimisation of system design and signal processing can be undertaken to maximise achieved signal quality.

The theoretical principles of LCI have been detailed in literature for an object with *M* discrete back-scattering reflectors [45,46]. This section collates that information in an end-to-end theoretical breakdown from light source to spatial measurement.

**Figure 2.** Flowchart of the signal progress throughout a Michelson interferometer lit by a broadband light source. The grey boxes indicate effects resulting from each stage.

The interferometric signal generated by the superposition of the reference and sample signals is shown in Figure 1d; these signals, given by Equations (2) and (3), respectively, are composed of the same original signal from the light source, represented by Equation (1), but with a phase delay due to the optical path difference between them.

$$\mathbf{E}\_0 = s\_0\left(k,\omega\right)e^{i\left(kz-\omega t\right)},\tag{1}$$

$$\mathbf{E}\_R = \mathbf{A}\_R\, e^{i(2kz\_R)};\quad \mathbf{A}\_R = r\_R\frac{1}{\sqrt{2}}\, s\_0\left(k,\omega\right),\tag{2}$$

$$\mathbf{E}\_S = \mathbf{A}\_S\, e^{i(2kz\_S)};\quad \mathbf{A}\_S = r\_S\frac{1}{\sqrt{2}}\, s\_0\left(k,\omega\right),\tag{3}$$

$$\mathbf{E}\_D = \frac{1}{\sqrt{2}}\left(\mathbf{E}\_S+\mathbf{E}\_R\right) = \frac{1}{\sqrt{2}}\left[\mathbf{A}\_R\, e^{i(2kz\_R)} + \mathbf{A}\_S\, e^{i(2kz\_S)}\right],\tag{4}$$

where *s*<sub>0</sub> represents the light source spectrum as a function of wavenumber *k* (also known as spatial frequency) and angular frequency *ω*. The phase offset (*kz*) can be set to zero for the initial signal in Equation (1). The reference and sample signals are shown by Equations (2) and (3), respectively, with their corresponding reflectivities represented by *r*<sub>R</sub> and *r*<sub>S</sub>(*z*<sub>S</sub>), where *z*<sub>S</sub> denotes the path length variable in the sample arm from the beam splitter; the factor 1/√2 represents a 50:50 distribution of the optical power of the source signal at the beam splitter. The interferogram signal is composed of the superposition of the returned reference and sample electric fields, as shown by Equation (4), where the factor 1/√2 represents that only half of the returned signal power will be passed towards the detector, as shown in Figure 1a; this signal takes the form shown in Figure 1d.

To digitise the interferometric signal, an optical detector is used, typically a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) array. The recorded intensity *I* is proportional to the time-averaged electric field multiplied by its complex conjugate. The signal obtained by the spectrometer is shown in Equation (5), where the photocurrent is generated by a square-law detector and expressed as a function of spatial frequency. Using the relationship for the modulus of a complex number, |*z̃*| ≡ √(*z̃z̃*\*), the following equation for intensity can be determined:

$$\begin{split} I\_D\left(k,\Delta z\right) &= \left|\left\langle \mathbf{E}\_D\cdot\mathbf{E}\_D^\*\right\rangle\right|,\\ &= \left\langle \mathbf{E}\_S\cdot\mathbf{E}\_S^\*\right\rangle + \left\langle \mathbf{E}\_R\cdot\mathbf{E}\_R^\*\right\rangle + 2\mathcal{R}\left\langle \mathbf{E}\_S\cdot\mathbf{E}\_R^\*\right\rangle, \end{split}\tag{5}$$

where ℛ represents the real component and the angled brackets denote a time average, which can be expressed as:

$$I\_D\left(k,\Delta z\right) = \lim\_{T\to\infty}\frac{1}{2T}\int\_{-T}^{T}\mathbf{E}\_D\cdot\mathbf{E}\_D^\*\,dt,\tag{6}$$

where *I*<sub>D</sub> is the photocurrent generated by the spectrometer. Substituting Equations (2) and (3) for the reference (*E*<sub>R</sub>) and sample (*E*<sub>S</sub>) components in Equation (5) yields Equation (7). Here, the light source intensity spectrum is denoted as *S*(*k*) = |*s*(*k*)|<sup>2</sup>. This represents a modulating signal term combined with a constant carrier "DC" term to produce the modulated signal shown in Figure 1d.

$$\begin{aligned} I\_D(k) &= \frac{\rho}{4} S\left(k\right) \left[ R\_R + \sum\_{n=1}^N R\_{Sn} \right] \\ &+ \frac{\rho}{2} \left[ S\left(k\right) \sum\_{n=1}^N \sqrt{R\_R R\_{Sn}} \left(\cos\left[2kn\left(z\_R - z\_{Sn}\right)\right]\right) \right] \\ &+ \frac{\rho}{4} \left[ S\left(k\right) \sum\_{n \neq m=1}^N \sqrt{R\_{Sn} R\_{Sm}} \left(\cos\left[2kn\left(z\_{Sn} - z\_{Sm}\right)\right]\right) \right], \end{aligned} \tag{7}$$

where *ρ* is the responsivity of the line-scan camera, *R*<sub>R</sub> is the reflectance of the reference, *R*<sub>Sn</sub> is the reflectance of the *n*th sample reflector and 2(*z*<sub>R</sub> − *z*<sub>Sn</sub>) is the round-trip optical path difference (OPD) between the reference and the *n*th (similarly the *m*th) sample reflector. It can be seen from Equation (7) that the intensity of the signal is composed of the sum of three terms:

The 1st term is a constant "DC" signal, independent of changes in the OPD between the reference and sample signals. This term dominates the measured signal, with an amplitude proportional to the power reflectivity of the reference and sample, scaled by the light source optical intensity as a function of wavenumber. This dominance can be reduced by changing the beam splitter ratio so that a greater percentage of the optical power is directed towards the sample arm of the interferometer. Additionally, as the reference signal optical power is typically far greater than the sample's returned optical power, this term limits the signal strength before the spectrometer detector saturates; by reducing the reference signal, the sample signal strength can be increased without saturation.

The 2nd term represents the reflected signals from the measured sample. The signal magnitude contribution from this term is typically far less than the DC contribution, as its amplitude is proportional to the square root of the product of the reference and sample reflectivities.

The 3rd term represents the interference between different sample reflectors for a system where the reference and sample paths are separate, as in Figure 1a. This term appears as additional peak artefacts in the Fourier domain of the measured signal; it can be reduced by increasing the reference signal power relative to the sample signal through the beam splitter ratio. As shown in Equation (7), the amplitude of the 2nd term depends on the reflectivity of both the reference and sample, whilst the 3rd term depends only on the sample reflectivities. However, in a common-path configuration, where the sample and reference share one arm, this 3rd term itself carries the reflected signals from the measured sample.
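The relative magnitude of the three terms can be illustrated by synthesising the detected spectrum of Equation (7) directly. The following sketch assumes a Gaussian source spectrum, a two-reflector sample and illustrative reflectance values; all numbers are assumptions, not parameters of the system described later in this paper:

```python
import numpy as np

rho = 1.0                              # detector responsivity (arbitrary units)
n_refr = 1.0                           # refractive index of the medium (assumed)
k = np.linspace(5.8e6, 6.2e6, 2048)    # wavenumber axis (rad/m), assumed source range
S = np.exp(-((k - k.mean()) / (0.05 * k.mean()))**2)   # Gaussian source spectrum S(k)

R_R = 0.5                              # reference reflectance (assumed)
R_S = np.array([1e-3, 5e-4])           # two sample reflectances (assumed)
z_R = 0.0                              # reference path length (m)
z_S = np.array([200e-6, 350e-6])       # sample reflector positions (m)

dc = (rho / 4) * S * (R_R + R_S.sum())                       # 1st (DC) term
cross = (rho / 2) * S * sum(                                 # 2nd (cross-correlation) term
    np.sqrt(R_R * Rs) * np.cos(2 * k * n_refr * (z_R - zs))
    for Rs, zs in zip(R_S, z_S))
auto = (rho / 4) * S * np.sqrt(R_S[0] * R_S[1]) * np.cos(    # 3rd (auto-correlation) term
    2 * k * n_refr * (z_S[0] - z_S[1]))
I_D = dc + cross + auto                                      # detected spectrum, Eq. (7)
```

With these values the DC term dominates the cross-correlation term, which in turn dominates the auto-correlation term, reflecting the ordering discussed above.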

*Appl. Sci.* **2020**, *10*, 8590

Depth information is obtained by performing a Fourier transform on Equation (7) to form Equation (8), allowing for determination of the modulating frequencies corresponding to sample reflector locations:

$$\begin{split} i\_D(z) &= \frac{\rho}{8} \gamma(z) [R\_R + R\_{S1} + R\_{S2} + \ldots] + \frac{\rho}{4} \gamma(z) \ast \sum\_{n=1}^{N} \sqrt{R\_R R\_{Sn}} (\delta[z \pm 2(z\_R - z\_{Sn})]) \\ &+ \frac{\rho}{8} \gamma(z) \ast \sum\_{n \neq m=1}^{N} \sqrt{R\_{Sn} R\_{Sm}} (\delta[z \pm 2(z\_{Sn} - z\_{Sm})]), \end{split} \tag{8}$$

where *i*<sub>D</sub>(*z*) is the intensity recorded as a function of OPD and *γ*(*z*) = *FT*{*S*(*k*)} is the coherence function, appearing as a convolution with the Dirac delta functions that represent the frequency elements of the Fourier-transformed signal at set bins. Hence, this term determines the point spread function (PSF) over which the interference fringes for a given sample reflector are spread.

#### Signal Analysis for Singular Reflecting Objects

When taking measurements of opaque, singularly reflecting objects such as metallic surfaces, as is often the case in manufacturing scenarios, *N* = 1; this allows Equation (7) to be reduced to the form shown in Equation (9), removing the interference noise that occurs between reflectors at different OPDs when *N* > 1:

$$I\_D(k) = \frac{\rho}{4}S\left(k\right)\left[R\_R + R\_{S1}\right] + \frac{\rho}{2}S\left(k\right)\left[\sqrt{R\_R R\_{S1}}\cos\left[2kn\left(z\_R - z\_{S1}\right)\right]\right].\tag{9}$$

Depth information can be obtained by performing an inverse Fourier transform on Equation (9) to form Equation (10):

$$i\_D(z) = \frac{\rho}{8}\gamma\left(z\right)\left[R\_R + R\_{S1}\right] + \frac{\rho}{4}\gamma\left(z\right)\ast\left[\sqrt{R\_R R\_{S1}}\,\delta\left[z \pm 2n(z\_R - z\_{S1})\right]\right].\tag{10}$$

## *2.2. Extracting Spatial Depth from Spectral Interferogram*

From Equations (8) and (10), the spatial distance measurement of interest is encoded in the sinusoidal modulation cos(2*kn*(*z*<sub>R</sub> − *z*<sub>S</sub>)); hence, the period of the wave in *k* is *π*/*n*(*z*<sub>R</sub> − *z*<sub>S</sub>).

The Fourier-transformed result is composed of a series of discrete steps dependent on the sampling resolution *δk* of the spectrometer with an array of *N*<sub>s</sub> pixels. The spectral sampling resolution can be determined by *δλ* = Δ*λ*/*N*<sub>s</sub> or *δk* = 2*πδλ*/*λ*<sub>0</sub><sup>2</sup>; hence, the sampling interval in the *ẑ*-domain is given by *δẑ* = 2*π*/(*N*<sub>s</sub>*δk*), where *ẑ* = 2*z* accounts for the apparent depth-doubling factor shown in the modulation frequency term cos(2*kn*(*z*<sub>R</sub> − *z*<sub>S</sub>)). Hence, a peak located at pixel *N*<sub>i</sub> on the spectrometer reading will correspond to the depth shown in Equation (11):

$$z\_i = \left(\frac{\pi}{\delta k}\right) \left(\frac{N\_i}{N\_s}\right). \tag{11}$$
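As a sketch of this pixel-to-depth mapping, a single-reflector interferogram can be simulated on a uniform *k* grid, transformed with an FFT and the peak bin converted to depth via Equation (11); the sampling interval and reflector offset below are assumed values:

```python
import numpy as np

Ns = 2048                      # spectrometer pixels (assumed)
dk = 30.0                      # k-sampling interval (rad/m per pixel, assumed)
k = np.arange(Ns) * dk         # uniform wavenumber axis
z_true = 500e-6                # reference-to-sample offset (m), assumed

I = 1.0 + 0.1 * np.cos(2 * k * z_true)        # cross-correlation term, cf. Eq. (9) with n = 1
spectrum = np.abs(np.fft.rfft(I - I.mean()))  # remove the DC term, transform to z-domain
Ni = int(np.argmax(spectrum))                 # peak bin on the depth axis
z_est = (np.pi / dk) * (Ni / Ns)              # Eq. (11)
```

Because the peak bin is an integer, the recovered depth is quantised to one depth bin, motivating the sub-bin calibration and peak-fitting refinements discussed next.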

However, real systems are not perfectly representative of their theoretical counterparts: errors in various parts of the system, such as pixel spacing and diffraction grating ruling errors, will bleed through into the measurement data if such a linear fit approach is taken. Further calibration of the system can therefore be undertaken to improve the accuracy of this transfer function. Previous work found for one setup that this led to errors of up to approximately 50 nm during measurements in air and approximately 100 nm whilst operating in water [12]. Hence, depending on the measurement accuracy requirements, further action may be required to compensate for nonlinearities in the system. To calibrate the LCI system, sensor readings were obtained from a mirror sample at a range of offsets from the fibre tip. Simultaneous reference measurements were taken using a reference interferometer (Renishaw XL-80, UK), allowing accurate determination of *δk* and Δ*k* across the useable range of operation.
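The calibration procedure can be sketched as a low-order polynomial fit of reference depths against measured peak bins; the arrays below are hypothetical placeholders standing in for the recorded sensor and reference-interferometer readings, not the data from [12]:

```python
import numpy as np

# Hypothetical calibration data: peak pixel indices from the LCI sensor and
# the corresponding offsets reported by a reference interferometer.
peak_bins = np.array([120.0, 240.0, 360.0, 480.0, 600.0])      # measured peak pixels
reference_z = np.array([0.25, 0.50, 0.76, 1.01, 1.27]) * 1e-3  # reference offsets (m)

coeffs = np.polyfit(peak_bins, reference_z, deg=2)  # low-order fit absorbs system nonlinearity
z_cal = np.polyval(coeffs, peak_bins)               # calibrated depths at the sampled bins
residuals = z_cal - reference_z                     # deviation from the reference readings
```

A quadratic (rather than linear) transfer function absorbs the smooth nonlinearity introduced by pixel-spacing and grating errors; the fit residuals indicate the remaining calibration uncertainty.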

## *2.3. Signal Noise*

The noise in a detected signal is a result of noise introduced by the measurement hardware and of non-ideal measurement conditions. Noise sources from the measurement hardware can be grouped generally as readout noise, photon noise, relative intensity noise (RIN) and dark noise [47]. These can be further classified as temporal or spatial noise sources. Temporal sources such as photon and dark noise vary with time and thus can be reduced through averaging of frames. Spatial noise arises from pixel-to-pixel non-uniformity in dark current and photo response.

Photon noise, also known as shot noise, is a statistical variation in the arrival rate of photons at the camera detector governed by Poisson statistics; as such, its magnitude is linked to the square root of the signal optical intensity, as shown in Equation (12). Dark noise results from charge accumulation in each pixel due to thermal fluctuations in the silicon, which can release electrons. Readout noise arises from converting the pixel charge carriers into a voltage signal and from analogue-to-digital conversion, with the largest contributor being the amplifier electronics. RIN describes the instability in the power level of the light source and is typically independent of laser power.

Spectrometer CCDs commonly have cooling, which renders noise contributions from dark current insignificant. The noise contribution from photon variance reduces as photon flux increases, due to the square-root relationship with signal. When the measured signal has a low optical intensity, the read noise will be greater than the photon noise and the signal is termed read-noise limited. Increasing the optical power input into the spectrometer, or increasing the integration time, allows the pixels to collect more photons, which in turn increases the SNR; there will then come a crossover point where photon noise exceeds read and dark noise, and the signal is termed photon- or shot-noise limited:

$$
\sigma\_{\text{photon}} = \sqrt{\frac{\rho \eta \tau}{h v\_0} \frac{P\_0}{N} \left(\gamma\_S R\_S + \gamma\_R R\_R\right)}\tag{12}
$$

where *η* is the quantum efficiency of the detector, *τ* is the exposure time, *h* is the Planck constant, *ν*<sub>0</sub> is the centre frequency of the light source spectrum, *P*<sub>0</sub> is the optical power and *γ*<sub>S</sub> and *γ*<sub>R</sub> denote the fraction of the input power that exits the interferometer from each arm, assuming *R*<sub>S</sub> = *R*<sub>R</sub> = 1.
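To illustrate the read-noise/shot-noise crossover described above, Equation (12) can be evaluated over a sweep of source powers and compared against a fixed read noise; all parameter values here are illustrative assumptions:

```python
import numpy as np

h = 6.62607015e-34          # Planck constant (J s)
rho = 1.0                   # detector responsivity (assumed, arbitrary units)
eta = 0.8                   # quantum efficiency (assumed)
tau = 50e-6                 # exposure time (s), assumed
nu0 = 3e8 / 840e-9          # centre frequency for an assumed 840 nm source (Hz)
N = 2048                    # spectrometer pixels (assumed)
gS, gR = 0.25, 0.25         # power fractions exiting each arm (assumed)
RS, RR = 1.0, 1.0           # reflectances, per the assumption below Eq. (12)
read_noise = 50.0           # read noise (electrons rms), assumed

P0 = np.logspace(-9, -3, 200)            # source power sweep (W)
shot = np.sqrt(rho * eta * tau / (h * nu0) * (P0 / N) * (gS * RS + gR * RR))  # Eq. (12)
crossover_P = P0[np.argmax(shot > read_noise)]   # lowest power where shot noise dominates
```

Below `crossover_P` the detector is read-noise limited; above it, shot-noise limited, which is where spectral-domain systems are usually operated.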

## *2.4. Measurement Range*

The useable measurement range of an LCI system depends on the sampling rate of the signal (through the Nyquist criterion), the sensitivity fall-off and the returned signal strength from measured regions, such that the returned signal can be distinguished from the system's noise. This section looks at factors impacting each of these areas.

## 2.4.1. Maximum Range—Nyquist Theory

Since the analogue signal is digitised through sampling by the optical detector pixels, the maximum sample distance from the fibre that the sensor can resolve is limited by the sampling rate of the signal at the detector. The signal is digitised by sampling the intensities of its wavelength components across the optical detector's pixel array and is thus subject to the Nyquist criterion, requiring a sampling rate of at least twice the measured signal frequency. This hardware limitation can be calculated using Equation (13):

$$z\_{\max} = \left(\frac{\lambda\_0^2}{4}\right) \left(\frac{N\_\text{s}}{\Delta\lambda}\right). \tag{13}$$
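For example, under assumed values for centre wavelength, bandwidth and pixel count, Equation (13) gives:

```python
# Sketch: maximum unambiguous depth from Eq. (13); all values are assumed.
lam0 = 840e-9    # centre wavelength (m)
dlam = 50e-9     # full spectrometer bandwidth (m)
Ns = 2048        # number of detector pixels
z_max = (lam0**2 / 4) * (Ns / dlam)   # maximum Nyquist-limited depth, Eq. (13)
# → approximately 7.2 mm for these values
```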

## 2.4.2. Sensitivity Fall-Off

When using spectrometer-based LCI techniques, there is an apparent fall-off in signal strength with increasing OPD between the reference and sample signal legs, even if the reflected signal amplitude is constant. This is caused by two factors: the finite pixel width and the finite spectrometer resolution.

## Finite Pixel Width

The diffraction grating within the spectrometer spectrally separates the interferometric signal in terms of wavelength, directing spectral components of the signal onto discrete pixels across the detector array. The optical intensity of each of these spectrally dispersed spots is sampled by each pixel: the spot intensity is integrated over the pixel area, which in effect imposes a rectangular function convolved with the Gaussian beam spot. The resulting signal decay factor as a function of the finite pixel width (*R*<sub>PW</sub>) manifests as a non-normalised sinc function, shown in Equation (14) along with the rect function [47]:

$$R\_{PW} = \text{sinc}^2\left(\frac{\pi z}{2z\_{\text{max}}}\right), \quad \text{Rect}\left(\frac{k}{\delta k}\right) = \begin{cases} 1 & \text{for } |k| < \frac{\delta k}{2} \\ \frac{1}{2} & \text{for } |k| = \frac{\delta k}{2} \\ 0 & \text{for } |k| > \frac{\delta k}{2} \end{cases} \tag{14}$$

The result of this is a 4 dB drop in the signal-to-noise ratio (SNR) near the Nyquist limit [47]. However, the finite pixel width does not introduce a decay of the noise level due to the statistical independence of the shot noise between neighbouring pixels of the array [46].

## Finite Spectrometer Resolution

The finite spectrometer resolution determined by the Gaussian beam profile of the spot size incident on the optical detector pixel array also introduces a sensitivity decay (*RSR*) as shown in Equation (15):

$$R\_{SR} = \exp\left[-\frac{\omega^2}{2\ln 2}\left(\frac{\pi z}{2z\_{\text{max}}}\right)^2\right],\tag{15}$$

where *ω* = *δk*<sub>r</sub>/*δk* is the ratio of the spectrometer's spectral resolution *δk*<sub>r</sub>, set by the size of the beam spot in the focal plane of the detector pixels, to the spectral sampling interval *δk*. Thus, the total signal decay in the system as a function of depth takes the form of a convolution of the finite pixel size response with the Gaussian spectral resolution. After applying a Fourier transform, this becomes a multiplication of the two decay functions, which gives the expression shown in Equation (16) for the sensitivity reduction *R* as a function of imaging depth *z* [48]:

$$R(z) = \operatorname{sinc}^2\left(\frac{\pi z}{2z\_{\text{max}}}\right)\exp\left[-\frac{\omega^2}{2\ln 2}\left(\frac{\pi z}{2z\_{\text{max}}}\right)^2\right].\tag{16}$$
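Equation (16) can be evaluated numerically to reproduce a fall-off curve of the kind shown in Figure 3a; the depth range matches Figure 3, while *ω* is an assumed value:

```python
import numpy as np

z_max = 2.17e-3                 # maximum depth (m), matching the range in Figure 3a
omega = 1.0                     # spectral-resolution-to-sampling ratio (assumed)
z = np.linspace(0, z_max, 500)  # depth axis

arg = np.pi * z / (2 * z_max)
# np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(arg/pi) = sin(arg)/arg as required by Eq. (16)
R = np.sinc(arg / np.pi)**2 * np.exp(-(omega**2 / (2 * np.log(2))) * arg**2)
falloff_dB = 10 * np.log10(R / R[0])    # sensitivity loss relative to zero depth
```

The pixel-width term alone contributes the roughly 4 dB drop at the Nyquist limit quoted from [47]; the Gaussian term adds further loss on top of this.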

The depth-dependent sensitivity of the system presented in this work is shown in Figure 3a; the SNR sharply declines with depth, and measurements near the Nyquist limit become difficult to reliably analyse, as shown in Figure 3b.

**Figure 3.** Depth-dependent loss in signal sensitivity from sample reflector. (**a**) overlaid signals for sample measurements up to 2.17 mm, (**b**) an example signal at a depth of 2 mm, (**c**) dataset (**a**) after normalisation, (**d**) the same signal in (**b**) after the normalisation relationship.

A normalisation procedure can be applied in the Fourier domain to compensate for fall-off, multiplying the frequency components of the signal by the inverse of the signal drop-off envelope indicated by the red dashed line in Figure 3a. The improvement can be seen in Figure 3c across the range of operation; Figure 3d shows the same signal as in Figure 3b after the normalisation relationship has been applied. However, care should be taken when optimising this normalisation curve, as it will also amplify high-frequency noise and dampen low-frequency signal components.
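A minimal sketch of such a normalisation divides the z-domain magnitude by a modelled envelope of the form of Equation (16), with an assumed clipping floor to limit the amplification of high-frequency noise:

```python
import numpy as np

def normalise_falloff(spectrum_z, z_max, omega=1.0, floor=0.05):
    """Divide each z-domain bin by the modelled fall-off envelope of Eq. (16).

    omega and floor are assumed values; the floor clips the envelope so that
    deep bins (where the envelope is small) are not amplified without bound.
    """
    z = np.linspace(0, z_max, len(spectrum_z))
    arg = np.pi * z / (2 * z_max)
    envelope = np.sinc(arg / np.pi)**2 * np.exp(-(omega**2 / (2 * np.log(2))) * arg**2)
    return spectrum_z / np.maximum(envelope, floor)
```

Applied to a flat spectrum, the gain rises monotonically with depth, mirroring the equalised curves of Figure 3c.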

## Signal Strength

An additional factor limiting a system's ability to measure over large distances is the signal strength resulting from the amount of light backreflected or backscattered into the interferometer. The surface being measured will have a large impact on signal magnitude and quality, along with the orientation of the surface relative to the axis of measurement. Environmental impacts may also degrade the signal, from attenuation in the operating medium, or from the effects of vibration and particulates in solution scattering the light.

Some improvement can be achieved using optics, focussing the light on a constrained small region to improve re-coupling of back-reflected light into the system from the region of interest. However, for in situ measurements or for integration into many manufacturing regions, tight geometric constraints along with the influence of non-ideal environmental factors may limit the ability to add such optical elements into the system.

## **3. LCI System Design Framework**

The prospect of designing an LCI system from first principles can be somewhat daunting. For practical implementation, the system will be built to a design specification of requirements; thus, a robust methodology and knowledge of the various parameters affecting critical operating qualifiers, such as system resolution, operating range and measurement speed, are required. Furthermore, many parameters are inter-linked, making the design process somewhat iterative; in many cases, trade-offs or compromises will need to be made to achieve a system that can perform in the desired manner under the required circumstances of operation. Figure 4 shows a system design framework based around metrology design operating criteria, depicted by the blue highlighted elements, with the offshoots in grey representing parameters impacting them. Some of the parameters are broken down into smaller elements, with the positive and negative symbols representing whether the parameter should be increased or decreased to maximise each design criterion based on our exemplar; other designs may vary based on prioritisation of certain requirements.

**Figure 4.** LCI sensor design framework, optimising hardware choices for key operating criteria.

The operating domain should also be taken into account during the design phase, including operating medium, environmental conditions and sample surface characteristics, and their impact on the design parameters outlined in Figure 4. The following sections consider each step of the flowchart in Figure 4, with more in-depth discussion of the various parameters and the cross-linking that occurs.

## *3.1. Axial Resolution*

The axial resolution of an LCI system is determined by the smallest change in OPD that the system can detect, set by the spectral resolution of the spectrometer. Previous work utilising the LCI system described in Section 3.2.1 demonstrated submicrometric accuracy across step heights ranging from 5 μm to 8 μm with a precision of ±56 nm [12]. However, if transparent samples with multiple reflective layers are measured, the smallest discernible distance between layers will be limited by the coherence length of the light source, and the axial resolution will be as shown in Equation (17). The larger the bandwidth of the light sent into the interferometer, the narrower the coherence function/PSF upon performing an FFT. However, increasing the bandwidth of the light source for a detector of set pixel number reduces the maximum achievable measurement range, as shown in Equation (13); thus, a balance must be found:

$$
\delta_z = \frac{2\ln 2}{\pi} \times \frac{\lambda_0^2}{\Delta\lambda}. \tag{17}
$$

When performing tomographic measurements, the resolution of the instrument is limited by the coherence function, because closely spaced reflectors interfere with one another. For measurements of a single reflecting surface, however, it has been shown that accuracy is not limited by the width of the PSF peak, with step-height measurements achieving submicrometric accuracy [12].
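The trade-off between Equation (17) and the maximum range of Equation (13) can be sketched numerically. A minimal Python illustration, assuming the Δ*λ* in the range expression is the full sampled spectral range and borrowing the 850 nm centre wavelength of the source in Section 3.2.1 (the function names are illustrative):

```python
import numpy as np

def axial_resolution(lambda0, dlambda):
    # Eq. (17): delta_z = (2 ln 2 / pi) * lambda0^2 / dlambda
    return (2 * np.log(2) / np.pi) * lambda0**2 / dlambda

def max_range(lambda0, dlambda, n_pixels):
    # Eq. (13) form: z_max = lambda0^2 * N_s / (4 * dlambda)
    return lambda0**2 * n_pixels / (4 * dlambda)

lambda0 = 850e-9   # centre wavelength (m), as in Section 3.2.1
n_pixels = 2048    # spectrometer pixels sampling the spectrum

for dlambda in (30e-9, 58e-9, 120e-9):
    dz = axial_resolution(lambda0, dlambda)
    zmax = max_range(lambda0, dlambda, n_pixels)
    print(f"dlambda = {dlambda*1e9:5.1f} nm -> dz = {dz*1e6:5.2f} um, z_max = {zmax*1e3:5.2f} mm")
```

Widening the bandwidth narrows the coherence function (better *δz*) but proportionally shrinks the unaliased depth range for a fixed pixel count.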

## *3.2. Lateral Resolution*

The lateral resolution of the system is controlled by the optics of the sensing arm of the interferometer. These fixed optics provide an axial field of view (*FOVaxial*), given by Equation (18), within which the lateral resolution (*δx*) is approximately equivalent to the focused spot size given by Equation (19). Outside of this field of view, the lateral resolution deteriorates:

$$FOV_{axial} = \frac{0.221\lambda}{\sin^2\left[\frac{\sin^{-1}(NA)}{2}\right]}.\tag{18}$$

$$
\delta x = 0.37 \frac{\lambda_0}{NA}.\tag{19}
$$

Hence, from Equations (18) and (19), it can be seen that, as lateral resolution is increased due to a large NA focusing optic, the axial field of view over which the spot size is constant decreases.
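This trade-off can be tabulated directly from Equations (18) and (19); a short Python sketch (the 850 nm wavelength and the NA values are illustrative assumptions):

```python
import math

def focused_spot_size(lambda0, na):
    # Eq. (19): delta_x = 0.37 * lambda0 / NA
    return 0.37 * lambda0 / na

def axial_field_of_view(lambda0, na):
    # Eq. (18): FOV_axial = 0.221 * lambda / sin^2[arcsin(NA) / 2]
    return 0.221 * lambda0 / math.sin(math.asin(na) / 2) ** 2

lambda0 = 850e-9  # assumed centre wavelength (m)
for na in (0.05, 0.13, 0.40):
    print(f"NA = {na:.2f}: spot = {focused_spot_size(lambda0, na)*1e6:5.2f} um, "
          f"FOV_axial = {axial_field_of_view(lambda0, na)*1e6:8.1f} um")
```

The NA = 0.13 row corresponds to the bare cleaved fibre used in Section 3.2.1: a spot of a few micrometres, but an axial field of view of only tens of micrometres.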

The refractive index of the operating medium may also impact the lateral resolution [34]. Additionally, for in situ measurements during manufacturing processes, tight spatial constraints may be in place, along with environmental factors such as vibration and non-ideal operating regions containing dust and liquid contaminants that may affect optical components. In such situations, the use of un-lensed optical fibres is a desirable solution due to their compact size and robust nature. However, as highlighted above, the lateral resolution severely degrades as a result of light diffraction upon exiting the fibre.

#### 3.2.1. Operating outside the Region of Focus

To demonstrate degradation in lateral resolution outside of a focused region, profile measurements were taken at two offsets of 235 μm and 1020 μm from a British five pence coin using the setup described in Figure 5. The system was configured as a fully fibre-enclosed implementation of an LCI with a common path for both the reference and sample signal leg, taking advantage of reduced sensitivity to vibrations, thermal fluctuations, and humidity, and removing the requirement for dispersion compensation between signals [49]. The system was un-lensed with the bare cleaved fibre having a numerical aperture of 0.13. This provides a micrometre-level footprint and form factor of the sensor. The system in this configuration has a confocal parameter of 83.5 μm and a 1/*e*<sup>2</sup> transverse resolution of 5 μm, hence both measured regions will be outside of the focused beam region.

**Figure 5.** Experimental setup of a common path low coherence interferometer.

The system components consist of a broadband light source provided by a super luminescent diode (EXS210068-01, Beratron, 850 nm) with a 3-dB bandwidth of 58 nm and an emitting power of 5.14 mW at 160 mA. A single-mode fibre coupler with a splitting ratio of 50:50 for beam splitting and coupling was used, with only one branch of signal output being used as the common path for the signal and reference. A spectrometer (MayaPro2000, Ocean Optics) operating at 125 Hz with a 2048 × 64 pixel array, starting wavelength of 756 nm and spectral range of 174 nm with a resolution of 0.21 nm was used, giving a theoretical axial operating range of 2.15 mm before aliasing as defined from Equation (13).

A region of the coin containing the artisan's initials "J.C" was measured to highlight the variation in lateral resolution with the distance of the coin from the fibre. Unidirectional profile measurements were taken, with the coin translated at a constant speed of 100 μm/s along the *x*-axis over 1100 μm, with a measurement integration time of 8 ms and a single spectrum captured per datapoint. This translates to a resolution of approximately 5 μm per datapoint. Steps of 5 μm were taken along the *y*-axis over a range of 800 μm.

Figure 6 shows the results from the two measurements; the red circles indicate the predicted spot sizes calculated from the NA of the fibre for offsets of 235 μm and 1020 μm. A clear reduction in lateral resolution between the two offsets is present, manifesting as a blurring of the measured feature. This degradation results from the larger spot on the sample surface returning light from a larger area, producing an integration over the illuminated area that can allow other regions to dominate the returned signal. From the measurement data, an increase of 1784% in the theoretical spot size area corresponded to an increase of only 88% in the measured area covered by the "J.C"; it is worth noting that an uncertainty metric for this measurement has not been determined. The values nevertheless reflect the reduction in lateral resolution seen in Figure 6, where the achievable resolution is not equal to the spot size; this is because of the optical intensity distribution across the spot and because, at large incident angles to the sample surface, the reflected light is less likely to be recoupled into the system. Although the comparison may appear dramatic for a change in offset of approximately 900 μm, the lateral scale of the measurement, approximately 700 μm, should be taken into account; for many applications, this introduced blur will not impact measurement data integrity. However, for small features such as those shown in Figure 6, or for surface roughness, this may be an unacceptable compromise; the measurement offset must then be taken into account, or additional steps such as focusing optics must be used to improve results.

Several methods of improving the lateral resolution of an OCT profilometry measurement have been shown in the literature [50–52]. A promising technique is synthetic-aperture radar (SAR) processing [53], which has been used to improve the lateral resolution of radar systems for decades; because the operating characteristics of LCI are analogous to radar, the same principles can be applied to line-scan measurements to drastically improve the measurement resolution. This may circumvent the lateral resolution limitation when operating with an un-lensed system or outside of a system's lateral field of view. The application of SAR principles to LCI has been demonstrated with some success in the literature under the name 'Interferometric Synthetic Aperture Microscopy' (ISAM) [51].

## *3.3. Motion Artefacts and Fringe Washout*

For in situ profilometry, on-the-fly data measurement is necessary. However, sample motion can negatively impact data integrity because the signal is integrated over a period of time, resulting in a reduction in SNR and image degradation [54]. The magnitude of motion artefacts and SNR reduction is controlled by the total axial or transverse displacement during a single signal acquisition. Hence, for a given sample motion, increasing the capture rate, or decreasing the axial and transverse displacements, reduces the impact of motion artefacts.

**Figure 6.** Effect of offset distance on artefact measurement.

#### 3.3.1. Axial Motion Effects

Motion in the axial direction may occur due to vibrations of the measured sample or sensor, leading to SNR reduction. This manifests as fringe washout due to the continuous phase change of the fringes over the integration period of the measurement. Equation (20) gives the detected number of electrons per wavenumber, *N*(*k*):

$$N(k) = N_0\,\mathrm{sinc}(k\Delta z),\tag{20}$$

where *N*<sub>0</sub> is the number of electrons obtained when the sample and sensor are stationary and Δ*z* = |*vzT*| is the axial displacement of the sample during the integration time (*T*) with axial velocity (*vz*). For the case |Δ*z*| << |*z* − *zb*|, where *zb* is the zero path length location, the factor sinc(*k*Δ*z*) can be approximated as sinc(*k*<sub>0</sub>Δ*z*) upon Fourier transforming Equation (20). Thus, axial motion of the sample or sensor gives rise to an SNR penalty of sinc<sup>2</sup>(*k*<sub>0</sub>Δ*z*). For *k*Δ*z* >> 1, broadening of the axial resolution occurs because the sinc function in Equation (20) limits the effective spectral bandwidth; this can be seen as a convolution of the original image with a rect function, given by the Fourier transform of the sinc function [54].
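The sinc<sup>2</sup> penalty is straightforward to evaluate; a minimal Python sketch following Equation (20) (the function name and the example velocity are illustrative assumptions):

```python
import numpy as np

def axial_snr_penalty_db(lambda0, v_z, T):
    # SNR penalty sinc^2(k0 * dz) in dB, for axial displacement dz = |v_z * T|
    # during the integration time T (follows Eq. (20) and the sinc^2 term above)
    k0 = 2 * np.pi / lambda0
    dz = abs(v_z * T)
    # np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi
    return 10 * np.log10(np.sinc(k0 * dz / np.pi) ** 2)

# e.g. a surface vibrating axially at 50 um/s during an 8 ms exposure at 850 nm
print(axial_snr_penalty_db(850e-9, 50e-6, 8e-3))
```

Even sub-micrometre axial displacements within one exposure cost tens of dB, since the penalty scales with displacement measured in wavelengths.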

#### 3.3.2. Lateral Motion Effects

Transverse motion will degrade the lateral resolution leading to a blurring of the image in the direction of travel [55] and a reduction in SNR [54]. For a continuous wave (CW) source, the SNR degradation (*SNRdecrease*) due to lateral motion is given by Equation (21) [54]:

$$SNR_{decrease} \cong -5\log_{10}\left(1 + 0.5\frac{\Delta x^2}{w_0^2}\right),\tag{21}$$

where Δ*x* is the scanning distance during the camera integration time and *w*<sub>0</sub> denotes the FWHM of the beam profile. The normalised displacement is defined as Δ*x*<sup>2</sup>/*w*<sub>0</sub><sup>2</sup>. For pulsed illumination, Δ*x* is replaced by (*Tpulse*/*Tcamera*)·Δ*x*, where *Tpulse* is the pulse width in time and *Tcamera* is the integration time of the camera for a single data point capture. To combat lateral motion artefacts, the spectra capture speed can be increased, the motion speed decreased, or a pulsed or stroboscopic light source can be used instead of a CW source [54,56,57]. Additionally, pulsed illumination can offer SNR improvements over CW sources for measurement on the fly [58].
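Equation (21) can be evaluated directly; a short Python sketch (the function name and example numbers are illustrative):

```python
import math

def lateral_snr_decrease_db(dx, w0):
    # Eq. (21): SNR decrease (dB) for a CW source with scan distance dx
    # per camera exposure and static beam FWHM w0
    return -5 * math.log10(1 + 0.5 * (dx / w0) ** 2)

# e.g. scanning 20 um during one exposure with a 6.8 um FWHM spot
print(lateral_snr_decrease_db(20e-6, 6.8e-6))
```

Unlike axial washout, the lateral penalty grows only logarithmically with the normalised displacement, so moderate transverse motion is far more forgiving than axial motion.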

The irradiance distribution profile for a Gaussian beam can be found using Equation (22) whilst stationary. However, once there is a transverse movement between the probe and surface, then the irradiance distribution profile for a fixed exposure period (*T*) can be calculated by Equation (23) [54]:

$$g(x, y) \approx \frac{4\ln 2}{\pi w_0^2} \exp\left[\frac{-4\ln 2\left(x^2 + y^2\right)}{w_0^2}\right],\tag{22}$$

$$G(x, y) = \frac{1}{T} \int_{-T/2}^{T/2} g(x + v_x t,\; y + v_y t)\,dt,\tag{23}$$

where *x* and *y* are locations along the beam profile, *w*<sub>0</sub> is the FWHM of the beam profile, and *vx* and *vy* represent constant velocities along the transverse *x*- and *y*-axes, respectively. The impact on transverse resolution along the *x*-axis, for instance, can be found by evaluating *G*(*x*, *y* = 0) and then locating the FWHM of the calculated irradiance profile.
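The procedure just described (evaluate Equations (22) and (23), then locate the FWHM) can be carried out numerically. A Python sketch, with grid sizes and example numbers as illustrative assumptions:

```python
import numpy as np

def static_profile(x, y, w0):
    # Eq. (22): Gaussian irradiance distribution with FWHM w0
    return (4*np.log(2) / (np.pi * w0**2)) * np.exp(-4*np.log(2) * (x**2 + y**2) / w0**2)

def blurred_fwhm(w0, vx, T, n_x=4001, n_t=501):
    # Eq. (23): average the moving spot over the exposure T, then measure
    # the FWHM of G(x, y = 0) on a discrete grid
    x = np.linspace(-6*w0, 6*w0, n_x)
    t = np.linspace(-T/2, T/2, n_t)
    G = static_profile(x[:, None] + vx * t[None, :], 0.0, w0).mean(axis=1)
    above = x[G >= G.max()/2]
    return above[-1] - above[0]

# 6.8 um static FWHM spot, 200 mm/s translation, 100 us exposure (Section 3.5 numbers)
print(blurred_fwhm(6.8e-6, 0.2, 100e-6) * 1e6)  # blurred FWHM in um
```

With the Section 3.5 numbers, the 6.8 μm static spot blurs to roughly the 20 μm lateral resolution target, consistent with the exemplar.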

## *3.4. Dispersion Compensation*

Many manufacturing environments are far from ideal to operate within, with machining processes involving various liquids for lubrication and for flushing debris. OCT measurements have already been shown to suffer errors due to chromatic dispersion in optically dense materials, resulting from the frequency dependence of the propagation constant in different mediums [59]. Unbalanced dispersion within an LCI system reduces the signal peak intensity and broadens its point spread function (PSF), effects shown in the literature to depend on refractive index, central wavelength, bandwidth and propagation distance [59–61]. LCI is very tolerant of chromatic dispersion provided the amount of dispersion in the reference and sample arms is equal. This can be achieved by inserting artefacts of varying thickness and refractive index into the reference arm of the interferometer [60], or by using a common-path setup. Software-based compensation is also achievable, offering a more flexible method of compensation [59,62,63].

The propagation constant of a material can be described by the Taylor series expansion in Equation (24), made around the light source's central wavenumber *k*<sub>0</sub> [63]:

$$
\theta(k) = \theta(k_0) + \frac{\partial \theta(k)}{\partial k}(k - k_0) + \frac{1}{2!} \cdot \frac{\partial^2 \theta(k)}{\partial k^2}(k - k_0)^2 + \dots + \frac{1}{n!} \cdot \frac{\partial^n \theta(k)}{\partial k^n}(k - k_0)^n. \tag{24}
$$

The zeroth- and first-order terms in Equation (24) represent a constant offset and the group velocity, neither of which contributes to dispersive broadening; the second-order term represents the group-delay dispersion per unit length. Higher-order terms may also be present.

In practice, the frequency-dependent dispersion variation can be compensated for by multiplying the obtained interferometric signal by a phase correction term such that *Icorrected*(*k*) = *I*(*k*) × *e*<sup>−*iθ*(*k*)</sup>, which can be obtained during post-processing [46]. This is achieved by isolating the coherence function of a strong reflector, performing an inverse Fourier transform on the masked signal, and then taking the arctangent of the imaginary component divided by the real component. Performing a linear fit and subtracting it from the extracted phase yields an error term indicating how far the wavenumber components are out of phase with one another. This can then be used to form the phase adjustment term *e*<sup>−*iθ*(*k*)</sup>. Using the knowledge that the signal peak intensity is maximised for a properly dispersion compensated system [60], an iterative procedure can be carried out until the maximum peak value is achieved.
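A single pass of this procedure can be sketched as follows. This is a simplified illustration (the iterative peak-maximisation loop is omitted, and the helper names, mask width and synthetic chirp are assumptions, not the authors' implementation):

```python
import numpy as np

def dispersion_phase(spectrum_k):
    """Estimate the nonlinear phase error theta(k) from a spectrum (uniform in k)
    containing one dominant reflector, following the steps in the text."""
    F = np.fft.fft(spectrum_k)
    n = len(F)
    # Isolate the coherence function of the strong reflector with a mask
    peak = int(np.argmax(np.abs(F[1:n//2]))) + 1
    width = max(peak // 4, 2)
    masked = np.zeros(n, dtype=complex)
    masked[peak-width:peak+width+1] = F[peak-width:peak+width+1]
    # Inverse transform, then take the phase (arctangent of imag/real)
    analytic = np.fft.ifft(masked)
    phase = np.unwrap(np.angle(analytic))
    # Subtract a linear fit; the residual is the dispersion error theta(k)
    idx = np.arange(n)
    return phase - np.polyval(np.polyfit(idx, phase, 1), idx)

def compensate(spectrum_k, theta):
    # Apply I_corrected(k) = I(k) * exp(-i * theta(k))
    return spectrum_k * np.exp(-1j * theta)

# Synthetic chirped fringes: linear fringe phase plus a quadratic dispersion term
n = 2048
k_idx = np.arange(n)
fringes = np.cos(2*np.pi*50*k_idx/n + 1e-5*(k_idx - n/2)**2)
corrected = compensate(fringes, dispersion_phase(fringes))
# The corrected signal's FFT peak is taller and sharper than the original's
```

On the synthetic chirp, removing the quadratic phase concentrates the broadened coherence peak back into a narrow, higher-amplitude PSF, which is the quantity the iterative variant maximises.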

## *3.5. Design Exemplar*

This section contains a walk-through for component specification dependent on an exemplar set requirement list as outlined in the design framework shown in Figure 4:


Axial resolution: The operating medium's absorption spectrum, the degree of scattering present in the measurement and the required axial resolution will determine the potential range of *λ*<sub>0</sub>. It is also worth noting that some bandwidth regions are more developed than others in terms of product availability and pricing. If a central wavelength of 1 μm is chosen, the bandwidth required for a set axial resolution can be calculated using Equation (17); for this example, Δ*λ* ≈ 88 nm.

Operating range: For a required operating range and light source bandwidth, the minimum number of pixels sampling the wavelength range can be found from Equation (13) by solving for *Ns* (i.e., *Ns* = 4*z*<sub>max</sub>Δ*λ*/*λ*<sub>0</sub><sup>2</sup>). For the given example, this comes to *Ns* = 880 pixels.

Measurement speed: The measurement speed is limited by the linescan rate of the camera. However, signal strength determines whether operating at such speeds is possible, which requires knowledge of the sample reflectivity and any scattering or absorption by the operating medium; the amount of signal fall-off will also impact this. As shown by Equation (23), when relative transverse motion between the probe and sample occurs for a set exposure time, there will also be a reduction in the transverse resolution.

Lateral resolution: To achieve a lateral resolution of 20 μm at a translational speed of 200 mm/s with a line period of 100 μs, the maximum static FWHM of the beam profile should be ∼6.8 μm from Equation (23); if the line period is halved to 50 μs, the static FWHM can be as large as ∼18.7 μm. The static FWHM of the beam profile depends on the focusing optics used and is only applicable over a certain axial field of view, as described by Equations (19) and (18), respectively. Objective lenses can be employed to extend the field of view [64].
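The first two exemplar numbers can be reproduced in a few lines. The 5 μm axial resolution target and 2.5 mm operating range below are assumptions chosen so the results match the quoted values; the section itself leaves them implicit:

```python
import math

lambda0 = 1e-6      # chosen centre wavelength (m)
dz = 5e-6           # assumed target axial resolution (m)
z_max = 2.5e-3      # assumed required operating range (m)

dlambda = (2*math.log(2)/math.pi) * lambda0**2 / dz   # Eq. (17) rearranged
n_pixels = 4 * z_max * dlambda / lambda0**2           # Eq. (13) rearranged for N_s

print(f"dlambda ~ {dlambda*1e9:.0f} nm, N_s ~ {n_pixels:.0f} pixels")
```

This recovers Δ*λ* ≈ 88 nm and *Ns* ≈ 880 pixels, matching the walk-through.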

## **4. Signal Processing Algorithms**

Signal processing plays an increasingly key role in the analysis of optical measurements due to the enhanced power of microprocessors. There are many approaches to improving the signal quality of LCI systems; however, these again remain dispersed across the literature. This section provides a concise overview of the key analysis procedures and highlights practical signal issues that the user will encounter during real-life measurement scenarios. A signal processing framework for the steps required to go from raw detected signal to spatial distance value is illustrated in Figure 7. Here, the solid bounding boxes represent essential parts of post-processing and the dashed bounding boxes represent procedures that enhance signal quality.

**Figure 7.** Framework for LCI signal data processing and distance offset retrieval, outlining required procedures in solid bounding boxes along with processing steps which can improve the signal quality in dashed bounding boxes.

## *4.1. Resampling Captured Interferogram*

The majority of spectrometers utilise diffractive components that sample the signal uniformly in wavelength. However, FFT algorithms require data points evenly spaced in the frequency domain [65]. Due to the relationship between wavenumber (*k*) and wavelength (*λ*), *k* = 2*π*/*λ*, the data will be non-uniformly spaced in wavenumber, requiring a re-sampling process before a conventional FFT can be effectively applied. If the dataset is left linear in *λ* before the FFT is applied, severe deterioration of axial resolution and SNR occurs, with an OPD-dependent broadening of the PSF leading to a reduced operational limit due to signal fall-off. This phenomenon can be seen in Figure 8a.

**Figure 8.** (**a**) a comparison between FFT of a resampled spectrum and FFT of the original spectrum; (**b**) a sample signal spectra with the DC component removed, the red dots show the zero crossing (ZC) points on the curve; (**c**) the red line shows the nonlinearly spaced ZC locations for a raw spectrum, and the black line shows the ZC points spacing once a resampling procedure has been undertaken.

This requirement for re-sampling can be avoided by using a Non-uniform Discrete Fourier Transform (NuDFT) instead of an FFT algorithm. However, the NuDFT is more computationally intensive than re-sampling followed by an FFT; since computational time is particularly important for high-speed imaging, the FFT is preferred. Re-sampling approaches fall into two categories. The first uses a special hardware configuration that samples the returned signal directly linear in *k* [66,67]; however, this adds cost and design complexity, and is not easily translated between systems. The second uses software to take the signal acquired nonlinearly in *k* and re-sample it during post-processing through a *λ*-to-*k* mapping function [68]. However, software-based methods do not perform as well for reflectors close to the maximum achievable depth (the Nyquist limit) due to high-frequency fringes in the interferogram being poorly sampled [65].

The approach taken in this work is similar to that presented for automatic calibration [68], relying on the knowledge that, for a singular reflector and whilst operating in a common-path configuration negating the requirement for dispersion compensation, the interference fringes will be evenly spaced in *k*-space. By using zero-crossing detection shown in Figure 8b, fringe spacing as a function of pixel number is obtained from which a mapping relationship shown by the red line in Figure 8c between linear in *λ* and linear in *k*-space can be created; this allows interpolation of the data using a B-spline interpolation to create an evenly sampled dataset in *k*-space before performing an FFT.
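The effect of re-sampling can be demonstrated with a simplified sketch. The approach in the text calibrates the *λ*-to-*k* map from zero crossings and uses B-spline interpolation; the version below instead assumes the wavelength axis is known and uses linear interpolation, so it illustrates only the benefit of uniform *k*-spacing (the OPD value and spectral range are illustrative):

```python
import numpy as np

def resample_to_k(spectrum, lambdas):
    # Re-sample a wavelength-sampled spectrum to be uniform in k = 2*pi/lambda
    k = 2 * np.pi / lambdas
    order = np.argsort(k)                       # k descends as lambda ascends
    k_uniform = np.linspace(k.min(), k.max(), len(k))
    return k_uniform, np.interp(k_uniform, k[order], spectrum[order])

# Synthetic single-reflector interferogram sampled uniformly in wavelength
lambdas = np.linspace(756e-9, 930e-9, 2048)     # spectrometer range from Section 3.2.1
opd = 1e-3                                      # assumed 1 mm optical path difference
raw = np.cos(2 * np.pi / lambdas * opd)         # fringes: cos(k * OPD)
_, resampled = resample_to_k(raw, lambdas)

# Re-sampling concentrates the FFT peak that is smeared for the raw spectrum
peak_raw = np.abs(np.fft.rfft(raw)).max()
peak_resampled = np.abs(np.fft.rfft(resampled)).max()
```

On this synthetic data the raw (linear-in-*λ*) FFT peak is smeared across roughly a hundred bins, while the re-sampled peak collapses to a single narrow PSF, mirroring Figure 8a.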

To increase the accuracy of the zero-crossing detection, a zero-filling procedure can be followed, which addresses errors at greater depths caused by the increased shoulders around the peaks [14,65]. This involves taking the FFT, Fourier-shifting the result, zero-padding the start and end of the array to increase the data-array length by a factor *M*, and then performing an inverse FFT back to spectral data. These spectral data, now *M* times longer than the original array, have a greater datapoint density, so a more accurate resampling procedure is possible. In practice, this gives an increase in signal magnitude and a reduced PSF width of the signal peak.
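The zero-filling steps above can be sketched directly (the function name is illustrative; the amplitude rescale is a convention choice):

```python
import numpy as np

def zero_fill(spectrum, M=4):
    """Zero-filling by factor M: FFT -> shift -> pad both ends -> inverse FFT.
    Returns a spectrum M times denser, useful before zero-crossing detection."""
    n = len(spectrum)
    F = np.fft.fftshift(np.fft.fft(spectrum))
    pad = (M - 1) * n // 2
    F_padded = np.pad(F, (pad, pad))
    # Rescale by M so the interpolated samples keep the original amplitude
    return (np.fft.ifft(np.fft.ifftshift(F_padded)) * M).real
```

Every *M*-th sample of the output reproduces the input exactly; the samples in between are band-limited (trigonometric) interpolations, which is why the denser grid improves zero-crossing localisation.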

## *4.2. Windowing Signal*

Applying a windowing function to the spectral interferogram can improve the sensitivity of FFT spectral-analysis techniques and reduce spectral leakage and side-lobe amplitude, resulting in a greater SNR. Various windowing functions are available, and the appropriateness of a window depends on the frequency content of the measured signal. For example, if measuring a multi-layered reflecting sample produces signal components close to one another, a window function with a narrow main lobe should be used to prevent spectral resolution degradation. Various windows should be compared against one another to find the best fit for the measurement situation. Table 2 shows four commonly employed windowing functions applied to an LCI signal, demonstrating an increase in PSF FWHM compared with the original unwindowed signal but also a substantial increase in SNR.

**Table 2.** Comparison of windowing functions applied to LCI signal, showing change in PSF and SNR from unwindowed data.
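A comparison of this kind can be reproduced on synthetic fringes; a Python sketch (the fringe frequency and the 95-bin sidelobe cut-off are arbitrary illustration choices):

```python
import numpy as np

# Synthetic fringes with a non-integer number of cycles so spectral leakage is present
n = 2048
idx = np.arange(n)
fringes = np.cos(2 * np.pi * 80.3 * idx / n)

windows = {
    "none":     np.ones(n),
    "hann":     np.hanning(n),
    "hamming":  np.hamming(n),
    "blackman": np.blackman(n),
}
sidelobe_db = {}
for name, w in windows.items():
    mag = np.abs(np.fft.rfft(fringes * w))
    # peak-to-worst-sidelobe ratio, measured away from the main lobe near bin 80
    sidelobe_db[name] = 20 * np.log10(mag.max() / mag[95:].max())
    print(f"{name:8s}: {sidelobe_db[name]:5.1f} dB")
```

The tapered windows suppress the leakage floor by tens of dB relative to the rectangular (unwindowed) case, at the cost of a wider main lobe, matching the PSF-FWHM/SNR trade-off in Table 2.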


## *4.3. Fixed Pattern Noise Removal*

Fixed pattern noise refers to features in the captured signal that are separate from the interference modulation pattern encoding the depth information, i.e., the response to the OPD between the reference signal and the signal reflected from the measured sample [46]. Examples include variations in the response of pixels in the CCD camera, spurious etalons in the interferometer caused by back-reflectance or stray rays, and the DC signal term. Various software methods to remove these unwanted signal contributions have been suggested in the literature, with the most common techniques being high-pass filtering, subtracting a measured reference signal [69], and subtracting the mid-point between the upper and lower signal envelopes [12].
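One generic variant of reference subtraction averages many captured spectra: fringes with varying phase average out, leaving the fixed pattern. This sketch is illustrative (the helper name and synthetic data are assumptions; [69] subtracts a separately measured reference spectrum):

```python
import numpy as np

def remove_fixed_pattern(frames):
    # Subtract the mean spectrum over all captures; what survives the average
    # (DC term, pixel response, spurious etalons) is removed from every frame
    return frames - frames.mean(axis=0, keepdims=True)

# Synthetic example: a fixed source-envelope background plus fringes whose
# phase varies randomly from frame to frame
rng = np.random.default_rng(0)
pixels = np.arange(1024)
background = 1.0 + 0.5 * np.exp(-(pixels - 512)**2 / 2e5)   # DC / fixed pattern
frames = np.stack([background + 0.1*np.cos(0.2*pixels + rng.uniform(0, 2*np.pi))
                   for _ in range(200)])
cleaned = remove_fixed_pattern(frames)
```

After subtraction the large fixed background is gone and only the (small-amplitude) interference fringes remain in each frame.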

## *4.4. Zero-Padding*

A zero-padding methodology can be applied to the captured spectrum to increase the number of datapoints in the Fourier domain, achieved by appending an array of zero values to the end of the spectrum array. This does not add any frequency components to the signal, but when an FFT is performed, the number of frequency bins is increased, improving the datapoint density of the Fourier-transformed signal. This can be seen in Figure 9a,b, showing the FFT peak of a resampled spectrum without and with zero-padding before Fourier transforming, respectively. However, as the array size is increased, so too is the computational time. This increased datapoint density can improve polynomial fitting to the FFT peak, as shown in Figure 9c.

Figure 9d shows the resolution of the interferometer when processing the raw data by locating the maximum-intensity peak; this value is calculated as 2.17 μm according to the number of bins assigned by the 2048-pixel CCD array in the spectrometer. This resolution can be improved by increasing the number of bins, as shown through zero-padding in Figure 9b, or by fitting a polynomial to the datapoints surrounding the maximum peak location and extracting the vertex. In practice, a mixture of zero-padding and polynomial fitting can improve the system's ability to resolve depth values.

**Figure 9.** (**a**) FFT of original dataset, (**b**) FFT of zero-padded dataset, (**c**) improving resolution through fitting a 2nd order polynomial to the FFT peak, (**d**) comparing maximum peak detection versus a polynomial peak fitting against OPD.

## *4.5. Locate Signal Peak(s)*

The signal peak(s) with the largest magnitude correspond to regions of the sample from which the light has been backreflected or backscattered; these frequencies therefore correspond to the depth values at areas of interest on the sample. A straightforward method of finding the peak(s) within a signal is the MATLAB function "findpeaks". The resolution of the measurement can be increased by fitting a 2nd-order polynomial to the datapoints surrounding the peak location relating to the measured sample offset distance, as shown in Figure 9c. The obtained peak frequency can then be converted to a spatial depth as described in Section 2.2.
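The parabolic-vertex refinement can be sketched in a few lines of Python (the function name and synthetic fringe frequency are illustrative):

```python
import numpy as np

def subbin_peak(mag):
    """Locate the FFT magnitude peak with sub-bin precision by fitting a
    2nd-order polynomial (parabola) through the peak bin and its two
    neighbours and returning the vertex location."""
    i = int(np.argmax(mag))
    y0, y1, y2 = mag[i-1], mag[i], mag[i+1]
    return i + 0.5 * (y0 - y2) / (y0 - 2*y1 + y2)

# Synthetic fringe signal whose true frequency falls between two bins
n = 2048
idx = np.arange(n)
signal = np.cos(2 * np.pi * 80.3 * idx / n) * np.hanning(n)   # windowed, cf. Section 4.2
mag = np.abs(np.fft.rfft(signal))

print(subbin_peak(mag))   # close to the true 80.3-bin frequency
```

The vertex estimate lands well within a tenth of a bin of the true frequency, whereas the raw bin index alone is quantised to integer bins; combined with zero-padding this is what lifts the depth resolution beyond the 2.17 μm bin spacing quoted above.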

## **5. Conclusions**

This work has identified a growing interest around the application of developments in OCT configurations of LCI to the industrial domain for in situ measurement. However, due to the wealth of knowledge accrued over the past 30 years of progress in the field, an in-depth structured collation of theory with an industrial application mindset was required.

As a result, this work presents for the first time an end-to-end holistic framework for the development of OCT configured LCI systems, linking the journey of light reflected from the object surface to the definition of the measurand. This includes the breakdown of achieved signal contributions, practical operating limits on measurement range due to sampling rate and sensitivity fall-off, lateral resolution variation across the region of measurement, motion effects on SNR and the introduction of spurious artefacts, and dispersion mismatch compensation.

In addition to identifying factors impacting the measurement signal, a simple practical approach to component selection driven by critical operating qualifiers is given, along with a comprehensive step-by-step dataflow approach to digital signal processing techniques for optimising the quality and robustness of the achieved system measurements.

Some general limitations of OCT configured spectral domain LCI measurements that still require improvement include: achieving a high lateral resolution over a large work volume, difficulty in measuring surfaces with large gradients, limited penetration depth through scattering mediums, a relatively small operating range, and signal strength that depends on scatterer reflectivity, is impacted by motion blur and suffers from sensitivity fall-off.

This work equips the reader with the necessary underlying theoretical and practical information to apply the developments made in OCT/LCI to industrial applications for in-process metrology.

**Author Contributions:** Conceptualization, T.H., L.J., J.P., P.K.; methodology, T.H.; software, T.H.; validation, T.H.; formal analysis, T.H.; investigation, T.H.; resources, J.P., P.K.; data curation, T.H.; writing—original draft preparation, T.H.; writing—review and editing, T.H., L.J., J.P., P.K.; supervision, L.J., J.P., P.K.; funding acquisition, P.K., J.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors wish to acknowledge support of the UK Engineering Physical Science and Research Council (EPSRC) for funding this work (Grant Nos. EP/M020746/1, EP/L01498X/1).

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
