### **9. Motion Correction**

Generally, clinical imaging for preoperative DBS planning does not correct for motion, and the scans tend not to incorporate acceleration methods such as parallel imaging. Accurate imaging requires the subject to remain still; if a patient scan exhibits severe motion artifacts, the scan is simply run again. MR images can be distorted by multiple sources of motion arising from breathing, cardiac movement, blood flow, pulsation of cerebrospinal fluid, and patient movement [104]. Motion can cause distortions in the image such as ghosting, signal loss, and blurring, as well as Gibbs and chemical shift artifacts [175]. Such artifacts can mask or simulate pathological effects [104]. Motion artifacts are particularly prevalent when imaging patients with movement disorders but can be controlled in a number of ways, such as timing medication so that symptom control is optimal during scanning or administering additional sedatives during the scan. Moreover, the head and neck should be supported with pads to improve patient comfort, which also limits movement.

The most logical method of limiting motion artifacts is to decrease the acquisition time. Sequence parameters can be manipulated to shorten the acquisition time by obtaining larger voxel sizes, using a partial field of view (FOV), incorporating simultaneous multi-slice or 3D imaging and parallel imaging techniques, reducing the number of signal averages, or obtaining multi-contrast images. To utilize these potential solutions correctly, each factor should be considered relative to the others. For instance, partial FOVs can induce aliasing and fold-over artifacts and reduce the SNR, which can, to a certain extent, be countered by isolating the excitation to a localized region using multiple pulses, signal averaging, or fat suppression methods. Conversely, a partial FOV may increase the effects of field inhomogeneity, which can be mitigated by measures such as spatial pre-saturation. Such issues highlight the dynamic interplay of sequence parameters and hardware, which can be largely overcome through the use of stronger field strengths such as 7 T.

Parallel imaging (PI) is a reconstruction technique, rather than a sequence, commonly employed to accelerate acquisition [176]. Magnetic resonance (MR) images are not collected directly but are instead stored in a Cartesian grid representing a spatial frequency domain known as k-space. K-space data are collected by superimposing spatially varying magnetic field gradients onto the main magnetic field [55,177]. Generalized auto-calibrating partially parallel acquisition (GRAPPA) methods speed up acquisition by under-sampling k-space in the phase-encoding direction; partial FOVs are collected independently, corrected, and then reconstructed within the frequency domain [178–180]. Alternatively, sensitivity encoding methods (SENSE or ASSET) can shorten scan times in the image domain: data are obtained using multiple independent receiver channels, each coil being sensitive to a specific volume of tissue, which is then unfolded and recombined to form the MR image [177]. However, PI methods are associated with a number of artifacts, including ghosting, speckling, wrap-around, and g-factor penalties, and ought to be used with caution [181–183].
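The cost of under-sampling can be illustrated with a minimal numpy sketch (illustrative only: real GRAPPA or SENSE reconstructions use multi-coil data and calibration to remove the fold-over shown here). Keeping every other phase-encode line of k-space halves the acquisition time but folds the object onto a copy shifted by half the FOV:

```python
import numpy as np

def undersample_kspace(img, accel=2):
    """Keep every `accel`-th phase-encode line of k-space and
    zero-fill the rest, then reconstruct by inverse FFT."""
    k = np.fft.fft2(img)
    mask = np.zeros_like(k)
    mask[::accel, :] = 1.0            # retained phase-encode lines
    return np.fft.ifft2(k * mask)     # aliased reconstruction

rng = np.random.default_rng(0)
img = rng.random((64, 64))
recon = undersample_kspace(img, accel=2)

# Two-fold under-sampling folds the object onto a copy shifted by FOV/2,
# at half intensity -- the classic wrap-around (aliasing) artifact:
expected = 0.5 * (img + np.roll(img, img.shape[0] // 2, axis=0))
assert np.allclose(recon.real, expected)
```

Multi-coil reconstruction exploits the distinct spatial sensitivity of each receiver to disentangle the two overlapping copies, which is where the g-factor noise penalty arises.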

*J. Clin. Med.* **2020**, *9*, 3124

Motion correction can be conducted prospectively in real time by updating the image geometry during the scan, or retrospectively through post-acquisition registration techniques and manipulations during image reconstruction [184]. Prospective methods require additional hardware implemented within the scanner itself. In this case, fiducials can be attached to the patient's head so that the extent of movement can be assessed and the gradients adjusted accordingly. Alternatively, optical tracking or reflective markers linked to a camera inside the bore can be employed. Motion correction is then achieved by re-registering slice-by-slice during the scan, adjusting first-order shims, and/or varying the gradient system online [185,186]. As discussed, motion artifacts do not have to come from patient movement but can arise on a much smaller scale at the proton level. Protons in blood, for example, experience a non-static magnetic field due to the variation of gradients in space; they can miss rephasing pulses, and their signal therefore decays before it can be read out by the scanner, especially for the spin echo sequences used to obtain T2w images [187]. This phenomenon is known as flow-related dephasing and results in artifactual phase shifts and signal distortion. In some instances this can be useful, for example in angiography sequences; however, the negative effect is larger in sequences with longer TEs, such as those required for accurately imaging the STN. Adding flow compensation, or gradient moment nulling, which applies additional gradient pulses prior to the signal readout, can compensate for this dephasing [188,189]. However, this is a computationally heavy process and is largely only suitable for partial FOVs.
Alternatively, the sequence may be synchronized so that acquisition occurs in time with the cardiac or respiratory cycle, known as gating, which requires simultaneous pulse recordings or electrocardiograms [104].
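Gradient moment nulling can be sketched numerically (the amplitudes and timings below are arbitrary, not a clinical waveform): a 1 : −2 : 1 gradient lobe pattern nulls both the zeroth moment, which rephases static spins, and the first moment, which rephases spins moving at constant velocity:

```python
import numpy as np

def gradient_moments(g, dt):
    """Zeroth and first moments of a sampled gradient waveform g(t):
    M0 governs rephasing of static spins, M1 that of spins with
    constant velocity."""
    t = (np.arange(len(g)) + 0.5) * dt   # midpoint of each sample
    m0 = np.sum(g) * dt
    m1 = np.sum(g * t) * dt
    return m0, m1

dt, n = 1e-4, 100   # 0.1 ms samples, 10 ms per lobe (arbitrary values)
# Hypothetical 1 : -2 : 1 lobe pattern used for first-order flow compensation
g = np.concatenate([np.ones(n), -2.0 * np.ones(n), np.ones(n)])
m0, m1 = gradient_moments(g, dt)
# Both moments vanish: static AND constant-velocity spins are rephased.
assert abs(m0) < 1e-9 and abs(m1) < 1e-9
```

A simple bipolar (1 : −1) pair would null only M0, leaving moving spins dephased; each additional nulled moment costs extra gradient lobes and therefore lengthens the TE, consistent with the computational and timing burden noted above.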

### **10. Registration and Image Fusion**

Using MRI to visualize deep brain structures such as the STN for DBS is a multi-stage process that involves the acquisition of multiple separate contrasts that require registration to a common, patient-specific native space. For pre-operative planning, at least two sets of image registrations are required: (i) anatomical T2 to T1 and (ii) pre-registered anatomical T1 and T2 to stereotaxic space defined by the CT or MRI including the coordinate frame. In this section, we focus on registration and fusion of MRI. For literature including alternative imaging modalities such as CT and ventriculography, see [190,191].

Image registration refers to the process of aligning a moving source image onto a fixed target through an estimated mapping between the pair of images. While the exact parameters incorporated within pre-operative planning systems are mostly proprietary, the general process will require a rigid registration, defined by six parameters: translations and rotations along the *x*-, *y*-, and *z*-axes. This defines the spatial transformation by which a voxel moves from one space to another [192]. Transformations also require additional choices such as the interpolation scheme and cost function. Interpolation refers to the process of re-gridding voxels from the source image onto the target grid, an essential procedure because voxels in the transformed image generally do not fall on integer grid positions in the target image. This is especially true when T2w images have anisotropic voxel sizes and the T1 images are isotropic. The goal of interpolation is therefore to reassemble the voxels that have been moved. Clinical neuroimaging traditionally employs the simplest intensity-based methods, such as nearest neighbor interpolation, also known as point sampling, which assigns each output voxel the value of its closest source voxel [193,194]. Cost functions are used to assess the suitability of a given transform; this can be achieved with similarity metrics such as mutual information, which compares, on the basis of voxel intensities, the differences between the transformed source and target image [195]. These registration steps are all conducted automatically within pre-operative planning systems, with the only manual alterations relating to viewing criteria such as brightness and intensity.
This is suboptimal, as registrations often need tweaking and optimization, and it is challenging to suggest exact methods for optimizing registrations within pre-operative planning systems when it remains unclear what parameters they employ.
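A toy sketch of a mutual-information cost function (not a planning-system implementation; bin count and image sizes here are arbitrary) shows why it suits cross-contrast registration: it rewards statistical dependence between intensities rather than identical values, so a T2-like image scores well against a T1-like image when the anatomy is aligned:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from their
    joint intensity histogram -- a common registration cost function."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                      # joint distribution
    px = p.sum(axis=1, keepdims=True)            # marginal of image a
    py = p.sum(axis=0, keepdims=True)            # marginal of image b
    nz = p > 0                                   # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(1)
t1 = rng.random((64, 64))
t2_aligned = 1.0 - t1                  # inverted contrast, same "anatomy"
t2_shifted = np.roll(t2_aligned, 8, axis=0)   # simulated misalignment

# The aligned pair is statistically dependent and scores higher than the
# misaligned one, even though no intensities match directly:
assert mutual_information(t1, t2_aligned) > mutual_information(t1, t2_shifted)
```

A registration algorithm searches over the six rigid parameters for the transform that maximizes this score.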

Linear within-subject registrations typically employ intensity-based similarity metrics, matching images on the basis of intensities or intensity distributions. Intensity methods can be optimized to incorporate local patches that account for texture and geometric information missed when matching global intensities alone. An example is boundary-based registration, which forms the basis of intra-subject registration of T2 to T1 images within the Human Connectome Project minimal processing pipeline [196,197]. Registrations could also be optimized to include an additional affine transform that incorporates scaling or shearing [198]. Alternatively, deformable registration via attribute matching and mutual saliency (DRAMMS) can be used. DRAMMS applies confidence weightings for matching voxels across contrasts and relaxes the deformation in local regions where contrast-specific tissues are mutually exclusive to image type. DRAMMS has proven useful in accounting for pathology, subcortical structures, and cortical thinning, which are all factors to consider when imaging PD patients [199].
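The distinction between rigid and affine transforms can be made concrete with a short sketch (parameter values are arbitrary, and rotation-order conventions vary between packages): a rigid-body transform is fully specified by six parameters and preserves lengths, while adding scaling or shear yields an affine that does not:

```python
import numpy as np

def rigid_matrix(tx, ty, tz, rx, ry, rz):
    """4x4 rigid-body transform: 3 translations + 3 rotations (radians),
    composed here in x-then-y-then-z order (conventions vary)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = [tx, ty, tz]
    return M

rigid = rigid_matrix(2.0, -1.5, 0.5, 0.01, 0.0, 0.02)
# Extending the rigid transform with anisotropic scaling gives an affine:
scale = np.diag([1.05, 0.98, 1.0, 1.0])
affine = rigid @ scale

# Rigid transforms preserve lengths: the rotation block is orthonormal.
assert np.allclose(rigid[:3, :3] @ rigid[:3, :3].T, np.eye(3))
# The affine with scaling is not orthonormal, so lengths change.
assert not np.allclose(affine[:3, :3] @ affine[:3, :3].T, np.eye(3))
```

A full affine has twelve degrees of freedom (adding three scales and three shears), which is why it can absorb scanner calibration differences that a six-parameter rigid fit cannot.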

Furthermore, no standardized quality evaluation for registration accuracy currently exists in clinical neuroimaging beyond subjective visual assessment. This is problematic, as it is unclear whether the initial rigid-body transforms are an accurate spatial representation of individual anatomy; if erroneous, they could result in targeting errors and misplaced DBS leads. The gold standard of accuracy instead depends on the stereotaxic frame, which is an extrinsic marker and contains no information directly related to the MR image.

Medical imaging often incorporates automated image fusion, which refers to the process of aligning, resampling, smoothing, and combining the information of multiple images into a more informative and descriptive output, for instance by combining T1 and T2 into a single image. Fusion occurs after registration, with the goal of interpolating and smoothing MR images to make them more visually appealing, which can in theory recover signal from noisy data [200]. However, smoothing and resampling voxel sizes will reduce anatomical variability and localization accuracy, as they can include signal from neighboring structures, leading to an erroneous increase in the apparent size of the nucleus and to PVEs [166,201]. Such smoothing methods may not be compatible with quantitative images such as T2\* maps and QSM, as these images represent distinct signal intensities of specific voxels that fall outside the predefined values of the planning system; in effect, this could be a simple viewing error rather than a total incompatibility.
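The inflation effect can be demonstrated with a toy example (the box smoothing below is a stand-in for whatever proprietary smoothing a planning system applies, and the threshold is arbitrary): smoothing a small binary "nucleus" bleeds signal into neighboring voxels, so thresholding the smoothed image overestimates its extent:

```python
import numpy as np

def box_smooth(img, k=5):
    """Simple separable 2-D box smoothing (moving average along each
    axis) -- a crude stand-in for fusion-stage smoothing."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kern, mode="same"), 0, img)
    return np.apply_along_axis(
        lambda r: np.convolve(r, kern, mode="same"), 1, out)

mask = np.zeros((64, 64))
mask[28:36, 28:36] = 1.0            # an 8 x 8 voxel "nucleus": 64 voxels

smoothed = box_smooth(mask, k=5)
apparent = int(np.sum(smoothed > 0.1))   # voxels that now look like nucleus
true_size = int(np.sum(mask > 0))

# Smoothing spreads signal into neighbouring voxels, inflating the
# apparent nucleus beyond its true 64-voxel extent (a partial volume effect):
assert apparent > true_size
```

The same mechanism explains why quantitative maps suffer: the smoothed voxel values are mixtures of neighboring tissues rather than the measured quantities.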
