### 2.4. Analysis Pipelines in FieldTrip and Brainstorm

Next, we discuss the common steps of MEG data analysis and their implementation in the FieldTrip and Brainstorm software packages. We focus on the analysis of our experimental data, which, of course, does not exercise the full functionality of either toolbox.

#### 2.4.1. Reading and Segmenting Data

We start the analysis by reading the MEG data stored in the FIFF format and segmenting them into trials according to the experimental conditions. It is common to segment the data after decoding the trigger sequences in the raw data file. In this work, however, we used additional functions to import events from mat-files in both analyses: the experimental protocol differed slightly for some subjects, and importing pre-defined events is more time-efficient than encoding the subject-specific exceptions as if-else conditions in the batch-processing scripts.
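The idea of building a trial definition from an external events file can be sketched as follows in NumPy/SciPy. The variable names (`onsets`, `labels`) and the sampling rate are hypothetical placeholders, not the actual contents of our mat-files; the `[begin_sample, end_sample, offset]` layout mimics FieldTrip's `trl` convention.

```python
import numpy as np
from io import BytesIO
from scipy.io import savemat, loadmat

# Stand-in for a per-subject events mat-file (hypothetical variable names):
# onset times in seconds and condition labels for two 120-s trials.
buf = BytesIO()
savemat(buf, {"onsets": np.array([10.0, 140.0]), "labels": np.array([1, 2])})
buf.seek(0)
ev = loadmat(buf)

fs = 1000  # sampling rate in Hz (assumed for illustration)
onset_smp = (ev["onsets"].ravel() * fs).astype(int)

# FieldTrip-style trial definition: [begin_sample, end_sample, offset] per trial
trl = np.column_stack([onset_smp,
                       onset_smp + 120 * fs - 1,
                       np.zeros_like(onset_smp)])
print(trl)
```

In FieldTrip, such a matrix is passed to `ft_preprocessing` via `cfg.trl`; Brainstorm imports events through its own database functions.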

After extracting 120-s epochs for both experimental conditions, namely the background-activity trial ("B-trial") and the event-related trials ("F-trials"), we split every 120-s trial into 4-s sub-trials in FieldTrip (see the explanation below) or 3-s sub-trials in Brainstorm.
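The splitting itself is a simple reshape of the continuous epoch; in FieldTrip it is done with `ft_redefinetrial` and a `cfg.length` option, and in Brainstorm with the epoching panel. A minimal NumPy sketch, assuming a 1-kHz sampling rate and a 306-channel (Elekta-style) recording:

```python
import numpy as np

fs = 1000                                  # assumed sampling rate, Hz
n_ch = 306                                 # assumed MEG channel count
trial = np.zeros((n_ch, 120 * fs))         # one 120-s trial

sub_len = 4 * fs                           # 4-s sub-trials (the FieldTrip variant)
n_sub = trial.shape[1] // sub_len          # 120 / 4 = 30 sub-trials
subtrials = trial.reshape(n_ch, n_sub, sub_len).transpose(1, 0, 2)
print(subtrials.shape)  # (30, 306, 4000)
```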

#### 2.4.2. Artifact Removal and Loading Data

Accurate brain source analysis requires the correct integration of MEG data with structural magnetic resonance imaging (MRI) scans. Both software packages align all data by defining a subject coordinate system from three fiducial points, namely the nasion and the left and right pre-auricular points. We complemented this three-point alignment with an automatic refinement procedure that uses additional points on the scalp, marked with a 3D digitizer (a Polhemus device in our experiment).

In FieldTrip, we used the "Colin27" averaged template head MRI [38] and adjusted it to the subject's head shape recorded by the Polhemus device. In Brainstorm, the default anatomy was warped to fit the scalp shape of every subject with a 2% fit tolerance using the digitized head points from the Polhemus device. After the automatic refinement based on the head points, the 50-Hz power-line frequency and its harmonics were removed with notch filters, and the 56-ms trigger delay was corrected in the recordings. The recorded electrooculogram (EOG) and electrocardiogram (ECG) signals were used to automatically detect instances of eye blinks and cardiac activity, and signal-space projection (SSP) was applied to attenuate the corresponding artifacts.
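The two signal-cleaning operations mentioned above can be illustrated in a few lines of NumPy/SciPy. This is a toy sketch of the underlying mathematics, not the FieldTrip/Brainstorm implementation: the artifact topography is assumed known (in practice it is estimated from averaged blink or heartbeat epochs), and the channel count and frequencies are arbitrary.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000
t = np.arange(0, 2, 1 / fs)

# --- Notch filtering of the 50-Hz power-line component ---
x = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)   # repeat for harmonics: 100, 150 Hz, ...
x_clean = filtfilt(b, a, x)

# --- Signal-space projection (SSP) ---
# Project out the spatial pattern of an artifact (e.g., a blink topography).
n_ch = 10
rng = np.random.default_rng(0)
blink_topo = rng.standard_normal((n_ch, 1))   # artifact topography (assumed known)
U, _ = np.linalg.qr(blink_topo)               # orthonormal basis of artifact subspace
P = np.eye(n_ch) - U @ U.T                    # projector orthogonal to the artifact

data = rng.standard_normal((n_ch, len(t))) + blink_topo * np.sin(2 * np.pi * 0.3 * t)
data_ssp = P @ data                           # artifact component removed
```

After projection, the data contain no component along the artifact topography, which is the defining property of SSP.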

Well-defined artifacts such as eye blinks, cardiac activity, muscle contractions, and MEG SQUID jumps were detected semi-automatically using FieldTrip/Brainstorm functions or by manual screening. Once the artifacts were identified, depending on their intensity, we either discarded the affected trials or removed the artifacts by linear projection.
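Semi-automatic detection typically z-scores a per-trial summary metric and flags outliers for visual confirmation, in the spirit of FieldTrip's `ft_artifact_zvalue`. A minimal sketch with toy data (the metric, threshold, and data sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = rng.standard_normal((30, 20, 1000))   # toy data: 30 sub-trials, 20 channels
trials[5] *= 8                                 # inject a high-amplitude artifact

# Z-score a per-trial summary metric (here, the maximum absolute amplitude);
# trials exceeding the threshold are flagged for visual confirmation.
metric = np.abs(trials).max(axis=(1, 2))
z = (metric - metric.mean()) / metric.std()
bad = np.flatnonzero(z > 3)
print(bad)  # [5]
```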

#### 2.4.3. Source Reconstruction

The first step in localizing sources is the construction of a forward model and the lead field matrix. The forward model allows one to calculate an estimate of the field measured by the MEG sensors for a given current distribution in the brain and is typically constructed for each subject. The lead fields, i.e., the solution to the forward problem, can be evaluated using various algorithms, such as a single sphere [39], overlapping spheres [40], a spherical harmonics approximation of realistic geometries [41], and boundary element methods [42].

In the Brainstorm analysis, the forward solution was computed using the overlapping-spheres method, which is the default. The number of cortical sources was kept at 15,000, as recommended [37]. In FieldTrip, in contrast, we applied the semi-realistic head model developed by Nolte [41], known as the single-shell model, which corrects the lead field of a spherical volume conductor by a superposition of basis functions, the gradients of harmonic functions constructed from spherical harmonics. We discretized the head volume with a grid of 0.7-cm resolution, obtained a source space of 9025 voxels, and calculated the lead field matrix at each grid point [41]. Thus, Brainstorm used a cortical surface source model, whereas FieldTrip used a volumetric one.
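The structure of such a volumetric source space and its lead field can be sketched as follows. This is purely illustrative: we assume a spherical head of 9-cm radius, whereas FieldTrip restricts the grid to the brain compartment (hence the voxel count here differs from the 9025 reported above), and the lead field entries themselves are filled in by the head-model routine.

```python
import numpy as np

res = 0.007                      # 0.7-cm grid resolution, in meters
r_head = 0.09                    # assumed 9-cm spherical head, for illustration only
coords = np.arange(-r_head, r_head + res / 2, res)
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
grid = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
src = grid[np.linalg.norm(grid, axis=1) <= r_head]   # keep points inside the head

# The head-model routine fills one (n_sensors x 3) block per source point;
# the forward model then maps dipole moments q to sensor data: b = L @ q.
n_sensors = 306
L = np.zeros((n_sensors, 3 * len(src)))
print(L.shape)
```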

The next step is calculating the inverse solution, i.e., estimating the location and strength of the neuronal activity. Multiple approaches exist, including dipole fitting based on nonlinear optimization [43], minimum-variance beamformers in the time and frequency domains [44–46], and linear estimation of distributed source models [47,48]. In both analyses, we used standardized low-resolution brain electromagnetic tomography (sLORETA) [49].

The sLORETA family of solutions has been validated against numerous imaging modalities [50–52] and in simulations [53,54]. sLORETA estimates intra-cerebral generators from standardized current density images. Although the resulting images are blurred, sLORETA was found [55] to achieve exact, zero-error localization when reconstructing single sources in all noise-free simulations, i.e., the maximum of the current density power estimate coincided with the exact dipole location [48]. Likewise, in all simulations with noise, sLORETA had the lowest localization errors compared with the minimum-norm solution.
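The zero-localization-error property can be demonstrated in a few lines. The sketch below uses a random toy lead field with fixed-orientation sources (an assumption for brevity; real sources have three moment components): the minimum-norm estimate is standardized by the diagonal of the resolution matrix, and for a noise-free single source the standardized power peaks exactly at the true location.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sens, n_src = 50, 200
L = rng.standard_normal((n_sens, n_src))   # toy lead field, fixed orientations
lam = 1e-2                                 # regularization parameter

true_idx = 73                              # arbitrary single active source
y = L[:, true_idx]                         # noise-free measurement

G = L @ L.T + lam * np.eye(n_sens)
W = L.T @ np.linalg.solve(G, np.eye(n_sens))   # minimum-norm inverse operator
R = W @ L                                       # resolution matrix
s_mn = W @ y                                    # minimum-norm estimate (blurred)
s_sloreta = s_mn**2 / np.diag(R)                # sLORETA standardized power

print(int(np.argmax(s_sloreta)))  # 73: exact localization in the noise-free case
```

The peak location is guaranteed by the Cauchy-Schwarz inequality, since the resolution matrix here is a Gram matrix; the unstandardized minimum-norm estimate offers no such guarantee.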

Note that when multiple sensor types are combined into a joint source model, the empirical noise covariance is used to weight each sensor in the overall model. For this purpose, noise covariance matrices are typically computed from empty-room recordings, which capture instrumental and environmental noise in the absence of a subject.
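The weighting amounts to whitening the sensors with the empty-room covariance, so that channels with very different noise levels (e.g., magnetometers vs. gradiometers) become comparable. A minimal NumPy sketch, with arbitrary channel counts and simulated per-channel noise gains:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_samp = 50, 5000
gains = rng.uniform(0.5, 5.0, size=n_ch)       # sensors with very different noise scales
empty_room = gains[:, None] * rng.standard_normal((n_ch, n_samp))

C = np.cov(empty_room)                         # empirical noise covariance
W = np.linalg.inv(np.linalg.cholesky(C))       # whitening operator
whitened = W @ empty_room                      # unit, uncorrelated noise on all sensors
print(np.allclose(np.cov(whitened), np.eye(n_ch), atol=1e-8))  # True
```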
