### *2.2. Support Vector Machine (SVM)*

Cortes and Vapnik [30] proposed the SVM model in 1995, and it remains a popular and powerful classifier used in various fields [31–33]. The SVM algorithm uses kernel functions *K*(*x*, *x_a*) to map the nonlinear, low-dimensional input space into a high-dimensional space in which the data become linearly separable. The hyperplane function used to separate the transformed (high-dimensional, linearly separable) data is presented in Equation (3).

$$y(\mathbf{x}) = \sum_{a=1}^{n} \beta_a K(\mathbf{x}, \mathbf{x}_a) + b_1 \tag{3}$$

Various kernel functions, such as the linear, sigmoid, and radial basis function (RBF) kernels, can be used to classify the data. Further details about SVM can be found in [30,32].
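The kernel trick of Equation (3) can be illustrated with a minimal sketch using scikit-learn's `SVC` on a toy two-class problem; the dataset, kernel choice, and hyperparameters here are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: an RBF-kernel SVM separating data that is not
# linearly separable in the original 2-D space (cf. Equation (3)).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy nonlinear two-class dataset (stand-in for real feature vectors).
X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps the inputs to a high-dimensional
# space, where a separating hyperplane y(x) = sum_a beta_a K(x, x_a) + b
# is fitted on the support vectors.
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Swapping `kernel="rbf"` for `"linear"` or `"sigmoid"` selects the other kernels mentioned above without changing the rest of the code.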

### *2.3. Proposed Framework*

This section discusses the overall framework of the proposed approach in detail. The proposed approach consists of four main components, namely brain MRI image acquisition, pre-processing, feature extraction, and model training, as shown in Figure 1.

**Figure 1.** Framework of the proposed hybrid brain MRI image classification model.

In the first step, the brain images were acquired using an MRI scanner. Next, the acquired brain MRI images were pre-processed by converting them from RGB to grayscale. Then, an 8 × 8-pixel grid was defined as the set of selection points for feature extraction from the brain MRI images; the chosen grid size affects both the computational cost and the feature vector size. Furthermore, the four-element scale vectors [16, 32, 48, 64] and [17, 34, 51, 68] were used to extract the KAZE and SURF features, respectively; the details of KAZE and SURF extraction were already provided in Section 2.1. After this, 20% of the redundant features were discarded to reduce the feature vector size. Finally, owing to its simplicity and robustness, the *k*-means clustering algorithm was used for feature segmentation; it keeps observations within each cluster as close to one another, and as far from objects in other clusters, as possible. In this way, 400-feature histograms were created using the *k*-means clustering approach; further details about *k*-means clustering can be found in [34,35]. Various machine learning classifiers, such as SVM, decision tree [36], Naïve Bayes [37], *k*-nearest neighbors (K-NN) [38], ensemble, and neural network (NN) classifiers, were then used to train the models. The results of the proposed method are presented in the subsequent section.
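The *k*-means step above builds a bag-of-visual-words representation: all local descriptors are clustered into a 400-word vocabulary, and each image is summarized by a 400-bin histogram of its descriptors' nearest cluster centers. The sketch below illustrates this with scikit-learn's `KMeans`; the random descriptors are synthetic stand-ins (the paper's descriptors come from KAZE/SURF), and the descriptor dimensionality and counts are illustrative assumptions.

```python
# Sketch of the bag-of-visual-words step: cluster local descriptors with
# k-means (k = 400, as in the paper) and build a 400-bin histogram per image.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-ins for the pooled KAZE/SURF descriptors of the training set
# (64-D is typical for SURF; real descriptors would replace this array).
all_descriptors = rng.normal(size=(2000, 64))

k = 400  # vocabulary size -> 400-feature histogram per image
kmeans = KMeans(n_clusters=k, n_init=1, random_state=0).fit(all_descriptors)

def image_histogram(descriptors, kmeans, k):
    """Assign each descriptor to its nearest visual word and count occurrences."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    # Normalize so images with different numbers of keypoints are comparable.
    return hist / hist.sum()

# Histogram for one (synthetic) image with 150 local descriptors.
hist = image_histogram(rng.normal(size=(150, 64)), kmeans, k)
```

The resulting fixed-length histograms are what the SVM, decision-tree, and other classifiers listed above are trained on, regardless of how many keypoints each image produced.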
