Article

Hate Speech Detection and Online Public Opinion Regulation Using Support Vector Machine Algorithm: Application and Impact on Social Media

1 Propaganda Department, Ningbo College of Health Sciences, Ningbo 315000, China
2 School of Media and Law, NingboTech University, Ningbo 315000, China
* Author to whom correspondence should be addressed.
Information 2025, 16(5), 344; https://doi.org/10.3390/info16050344
Submission received: 10 March 2025 / Revised: 17 April 2025 / Accepted: 21 April 2025 / Published: 24 April 2025
(This article belongs to the Special Issue Information Technology in Society)

Abstract: Detecting hate speech in social media is challenging due to its rarity, high-dimensional complexity, and implicit expression via sarcasm or spelling variations, rendering linear models ineffective. In this study, the SVM (Support Vector Machine) algorithm is used to map text features from low-dimensional to high-dimensional space using kernel function techniques to meet complex nonlinear classification challenges. By maximizing the margin between classes to locate the optimal hyperplane and combining kernel techniques to implicitly adjust the data distribution, the classification accuracy of hate speech detection is significantly improved. Data collection leverages social media APIs (Application Programming Interfaces) and customized crawlers with OAuth2.0 authentication and keyword filtering, ensuring relevance. Regular expressions validate data integrity, followed by preprocessing steps such as denoising, stop-word removal, and spelling correction. Word embeddings are generated using Word2Vec's Skip-gram model, combined with TF-IDF (Term Frequency–Inverse Document Frequency) weighting to capture contextual semantics. A multi-level feature extraction framework integrates sentiment analysis via lexicon-based methods and BERT for advanced sentiment recognition. Experimental evaluations on two datasets demonstrate the SVM model's effectiveness, achieving accuracies of 90.42% and 92.84%, recall rates of 88.06% and 90.79%, and average inference times of 3.71 ms and 2.96 ms. These results highlight the model's ability to detect implicit hate speech accurately and efficiently, supporting real-time monitoring. This research contributes to creating a safer online environment by advancing hate speech detection methodologies.

1. Introduction

The rapid development of social media has provided people with a convenient platform for communication and information sharing, but it has also opened a new channel for the spread of hate speech [1,2,3]. Such speech, including racial discrimination, sexism, and religious hatred, damages the online environment and may even threaten social stability [4,5,6]. However, current methods for detecting hate speech still need improvement. Traditional approaches rely mainly on basic ML (machine learning) or DL (deep learning) algorithms, which have significant deficiencies in processing complex language structures and contextual information [7,8,9]. Neural networks and deep learning require large amounts of data and computing resources, making them difficult to deploy, and their "black box" decision-making processes are hard to explain. Hybrid models combine the advantages of multiple algorithms, but their parameter tuning is complex and prone to overfitting; in high-dimensional sparse text feature spaces their performance is unstable, and their false positive rate is high when identifying implicit hate speech. The SVM algorithm maps data through kernel functions, maximizes the classification margin, reduces overfitting, and better recognizes implicit hate speech, giving it an advantage in dealing with such issues. In particular, when hate speech is expressed through metaphor or sarcasm, traditional methods often fail to identify it, resulting in high false positive and false negative rates [10,11]. This limits the effective management of hate speech and weakens the regulation of online public opinion.
To solve this problem, this article proposes a hate speech detection method based on the SVM algorithm. The SVM model can efficiently find the optimal classification hyperplane in a high-dimensional feature space, making it well suited to text classification tasks [12,13,14]. For hate speech detection on social media, SVM shows superior performance, especially in processing high-dimensional sparse data, feature selection, and dimensionality reduction. Compared with hybrid models, SVM reduces model complexity and the risk of overfitting while remaining efficient, accurate, and easy to apply across platforms. Therefore, this paper uses SVM as the core algorithm to improve the accuracy and efficiency of hate speech detection. The research mainly targets text-based social platforms such as Twitter, Facebook, and Weibo, whose rich user-generated content provides sufficient training data for hate speech detection. Although the influence of video- and image-dominated platforms such as TikTok and Instagram has grown, collecting and analyzing text data from them is more challenging. By verifying the effectiveness of the text analysis method, this research lays the foundation for extending the work to multimodal platforms in the future. The purpose of this study is to enhance the accuracy of hate speech detection by deeply mining textual context and constructing more precise feature representations.
This research aims to improve the monitoring capabilities for hate speech on social media while providing a foundation for exploring AI-driven strategies for managing online public opinion, which holds significant theoretical and practical implications.
To achieve this goal, the study integrates the SVM algorithm with advanced text feature extraction techniques, such as Word2Vec and TF-IDF, to optimize data preprocessing and effectively recognize complex expressions like sarcasm and metaphor. By leveraging the SVM classifier, the study seeks to achieve efficient and accurate detection of implicit hate speech. Furthermore, the research evaluates the application of this approach in online public opinion management, demonstrating its practical utility. Experimental results reveal that the proposed SVM-based method outperforms other techniques across multiple performance metrics and exhibits strong real-time processing capabilities. Additionally, the integration of sentiment dictionaries and the BERT model further enhances the recognition accuracy of complex hate speech, offering robust support for effective online public opinion management. The core innovation of this research lies in the optimization of the algorithm: by dynamically adjusting the RBF kernel parameter γ, the classification ambiguity caused by polysemous words is alleviated. At the same time, the generalization ability of the SVM model is verified on multilingual data from multiple social media platforms, such as Twitter, Facebook, and Weibo, providing a lightweight and efficient solution for social media governance. The specific contributions of this research are as follows: first, a text feature mapping method based on the RBF kernel is proposed, which effectively addresses the nonlinear classification problem; second, the performance of SVM is verified for the first time on a cross-platform multilingual dataset, filling a gap in current research on the application of lightweight models.

2. Related Works

Hate speech detection can identify and manage negative content on social media, promote a harmonious network environment, and, by improving detection accuracy, help prevent social division and violence and maintain social stability [15,16]. To meet the challenges of hate speech detection, scholars have proposed detection methods based on machine learning. Online hate speech can harm both individuals and society as a whole. Simon Hyellamada surveyed the literature on online hate speech detection to identify trends in the field and concluded that ML and DL methods were effective in classifying hateful text on social media [17]. Aljarah Ibrahim applied NLP (natural language processing) technology and machine learning methods to detect Arabic-language hate speech on the Twitter platform; test results showed that RF (Random Forest) with a TF-IDF feature set and contour-related features achieved the best results [18]. To detect hate speech in English and Swahili from audio, Imbwaga Joan L manually collected datasets from YouTube videos, converted them into audio, extracted audio-based features such as spectral and temporal features, and used them to train various machine learning classifiers [19]. Alaoui Safae Sossi applied a complete text mining process and the naive Bayes classification algorithm to two different datasets taken from Twitter [20]. These studies use machine learning to enhance hate speech detection accuracy through a variety of feature extraction methods. However, they overlook cross-platform, multilingual detection, and their handling of complex contexts needs improvement. At the same time, although hybrid models have been proposed, efficient integration and real-time processing remain challenges.
SVM can efficiently distinguish different text data by finding the best classification surface, making it suitable for high-dimensional, complex classification. Its strong generalization ability ensures high accuracy and reliability in scenarios that require precise identification, such as hate speech [21,22]. Some scholars have already applied the SVM algorithm to text classification. Since the feature vectors generated by traditional feature selection methods are high-dimensional and sparse, many text classification methods suffer from poor accuracy. Rezaeian Naeim proposed an innovative method that combines the naive Bayes algorithm and the SVM algorithm to enhance the classification performance of Persian text [23]. Public opinion can be divided into positive and negative sentiment. To determine the public's views on three market services and issues on social media, Agustina Dyah Auliya used the SVM algorithm to perform text mining and user sentiment analysis on Twitter; the study showed that the public posted more positive than negative sentiment [24]. These studies use the SVM algorithm to enhance text classification accuracy, but they have shortcomings in feature selection and dimensionality reduction, and the models' generalization ability needs improvement. This article compares the two lines of work discussed above; the results are shown in Table 1.
In view of the limitations of the existing methods, this paper proposes a hate speech detection framework based on SVM to solve the problem of processing nonlinear and high-dimensional data by traditional methods. Compared with neural networks and hybrid models, SVM is more efficient, concise, and interpretable, reducing the risk of overfitting. This framework provides a new way for real-time and clear hate speech regulation.

3. Optimization of Strategies for Hate Speech Detection and Public Opinion Regulation

3.1. Data Preprocessing

3.1.1. Data Collection

This study collects text data from platforms such as Twitter, Facebook, and Weibo through social media APIs and customized crawlers.
The selection of Twitter, Facebook, and Weibo as data sources for this study is mainly based on the following considerations:
Text data dominance: These platforms are text-centric, making it straightforward to extract and analyze hate speech with natural language processing techniques. Compared with TikTok and Instagram, their text data are more direct and abundant, which meets the needs of model training.
API accessibility and compliance: The APIs of Twitter and Facebook are stable and open, while Weibo’s APIs meet domestic compliance requirements and facilitate data capture.
Multilingual and cross-cultural coverage: Platforms such as Twitter and Facebook cover multilingual users around the world, providing a foundation for cross-cultural research.
Limitation statement: This research has not yet covered video-led platforms, which may affect the model’s recognition of multimodal content. In the future, it is planned to expand the research to TikTok and Instagram and combine visual technology to enhance the cross-platform capabilities of the model.
Using OAuth2.0 (Open Authorization 2.0) authentication and preset query parameters, hate speech keywords are combined with logical operators to expand the search scope while filtering out irrelevant content. A time sliding window [25,26] strategy is adopted to capture the latest posts from different groups around the world every hour. To avoid API rate limits, an asynchronous task queue manages the process and adjusts the request frequency to ensure stable data acquisition.
In addition to API collection, the Scrapy crawler framework is used to supplement the collection of text resources such as Reddit forums, news comments, and public blogs. During the crawling process, HTML (Hyper Text Markup Language) pages are parsed. User comment area text is extracted, and advertisements, automatically generated content, and invalid characters are filtered out. To ensure data integrity, regular expressions are used to match URLs (Uniform Resource Locators), user names, and time information in a specific format, and sentiment analysis models are used to preliminarily screen texts that may contain hate speech. All data are anonymized before collection; user identity information is removed; data identifiers are encrypted to ensure data privacy compliance. Finally, the deduplicated, formatted, and encoded text data are stored in the database for subsequent hate speech detection model training. To ensure data timeliness, this article collects raw data from Twitter, Facebook, and Weibo from August 2024 to January 2025 in real time. The data collection method is described in Table 2.
After data collection, data cleaning is required to remove noise. In this article, text cleaning removes texts longer than 200 words or shorter than 3 words, abnormal characters, and duplicate texts, while retaining emoticons.
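To make these rules concrete, the following is a minimal Python sketch of the cleaning step; the specific regular expressions and the hash-based deduplication are illustrative assumptions rather than the exact pipeline used in this study.

```python
import re

seen = set()  # hashes of texts already kept, for duplicate removal

def clean_text(text):
    """Apply the cleaning rules described above; return None if the text is dropped."""
    # Remove URLs, @mentions, and control characters; emoticons/emoji are kept.
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"@\w+", "", text)
    text = re.sub(r"[\x00-\x1F\x7F]", " ", text)
    text = re.sub(r"\s+", " ", text).strip()

    # Drop texts longer than 200 words or shorter than 3 words.
    n_words = len(text.split())
    if n_words > 200 or n_words < 3:
        return None

    # Drop exact duplicates.
    key = hash(text)
    if key in seen:
        return None
    seen.add(key)
    return text
```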

3.1.2. Stop Words Removal

In hate speech detection, text often contains noise such as stop words, which contribute little to classification, increase computational complexity, and interfere with feature extraction. Stop words appear frequently in natural language but add little to the meaning of a text, e.g., "of", "yes". In this study, stop words are removed during preprocessing to reduce interference, lower feature dimensionality, and speed up model training. The stop-word list is based on the Chinese stop-word list of the NLTK library, extended with vocabulary specific to social media. After word segmentation, these stop words are removed, and the effect is verified: the feature dimension drops from 20,000 words to 15,000, and the training time of the SVM model is reduced accordingly. It is therefore necessary to build an efficient stop-word removal strategy to improve the expressive power of the feature vectors. This article adopts the standard stop-word list provided by the NLTK (Natural Language Toolkit) and expands it according to task requirements. The removal method uses a set-based matching strategy to filter each word in the text so that only meaningful feature words are retained: the stop-word set is subtracted from the original text's word set to obtain a purified feature word set. The formula is as follows:
$$\mathrm{Filtered\_Text} = \{\, v \mid v \in \mathrm{Original\_Text},\ v \notin \mathrm{Stopwords} \,\}$$
Among them: $v$ — a word in the text;
$\mathrm{Original\_Text}$ — the word set of the original text;
$\mathrm{Stopwords}$ — the stop-word list;
$\mathrm{Filtered\_Text}$ — the valid text after stop-word removal.
To improve processing efficiency, a hash table is used to store stop words, and efficient query operations are used to speed up the screening process.
Given that stop words may vary across contexts, the stop-word list is dynamically adjusted by combining term frequency (TF) statistics with information entropy [27]. The frequency of each word in the dataset is calculated, and words are screened based on the information entropy $G(v)$ of their distribution over categories. The formula is as follows:
$$G(v) = -\sum_{j=1}^{m} Q_j(v) \log Q_j(v)$$
Among them: $Q_j(v)$ — the probability of word $v$ appearing in the $j$-th category of text;
$m$ — the total number of categories.
If the frequency of a word exceeds a preset threshold and its information entropy $G(v)$ is above a certain limit (that is, the word is spread almost evenly across categories and thus carries little class information), it is added to the stop-word list. This removes high-frequency words that contribute little to text classification while avoiding the accidental deletion of important features.
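The following Python sketch illustrates how such a strategy could look, combining the set-based filtering of Equation (1) with the entropy screen of Equation (2); the English NLTK base list, the extra tokens, and both thresholds are illustrative assumptions (the study itself uses a Chinese stop-word list extended with social media vocabulary).

```python
import math
from nltk.corpus import stopwords  # requires nltk.download("stopwords")

# Base NLTK list plus task-specific social media additions (illustrative).
STOPWORDS = set(stopwords.words("english")) | {"rt", "via"}

def filter_tokens(tokens):
    """Equation (1): keep only tokens that are not in the stop-word set."""
    return [t for t in tokens if t not in STOPWORDS]

def extend_stopwords(class_counts, freq_threshold=1000, entropy_limit=1.0):
    """Entropy screen of Equation (2).

    class_counts maps each word to a dict of {category: occurrence count}.
    A frequent word distributed almost uniformly over categories has high
    entropy, carries little class information, and is added to the list.
    """
    for word, counts in class_counts.items():
        total = sum(counts.values())
        if total < freq_threshold:
            continue
        probs = [c / total for c in counts.values()]
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        if entropy > entropy_limit:
            STOPWORDS.add(word)
```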

3.1.3. Spelling Correction

Social media texts often contain spelling errors and deformed words, which affect the accuracy of hate speech detection. A spelling correction method based on edit distance is adopted. By constructing a standard dictionary, the edit distance between each word in the input text and the word in the dictionary is calculated, and the closest replacement is selected. The formula for calculating the edit distance is as follows:
$$d_{edit}(m,n) = \begin{cases} \max(m,n), & \text{if } m = 0 \text{ or } n = 0 \\ \min\big( d_{edit}(m-1,n) + 1,\ d_{edit}(m,n-1) + 1,\ d_{edit}(m-1,n-1) + \delta(v_m, v_n) \big), & \text{otherwise} \end{cases}$$
Among them: $m$, $n$ — the lengths of the two word prefixes being compared;
$\delta(v_m, v_n)$ — the substitution cost: 0 if the characters $v_m$ and $v_n$ are equal, and 1 otherwise;
$d_{edit}(m,n)$ — the minimum edit distance between the two words, which measures their similarity.
The algorithm uses dynamic programming to compute the optimal transformation path and efficiently find the closest standard word.
To enhance spelling correction accuracy, an n-gram language model is combined with the edit distance. Among multiple candidate corrections, the word with the highest probability is selected. The formula is as follows:
$$P(v) = \frac{D(v)}{\sum_{v' \in W} D(v')}$$
Among them: $P(v)$ — the probability of the word in the standard corpus;
$D(v)$ — the number of occurrences of the word in the corpus;
$W$ — the entire vocabulary set.
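A minimal Python sketch of this correction step is shown below; the candidate cutoff `max_dist` and the linear scan over the dictionary are illustrative simplifications, not the exact implementation used in this study.

```python
def edit_distance(a, b):
    """Dynamic-programming computation of the edit distance in Equation (3)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # boundary case: transform a prefix into the empty string
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            delta = 0 if a[i - 1] == b[j - 1] else 1  # the delta(v_m, v_n) term
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + delta)   # substitution
    return d[m][n]

def correct_word(word, counts, max_dist=2):
    """Choose the closest dictionary word, breaking ties by the unigram
    probability P(v) of Equation (4); counts maps word -> corpus frequency."""
    total = sum(counts.values())
    best = None
    for cand, freq in counts.items():
        dist = edit_distance(word, cand)
        if dist <= max_dist:
            score = (dist, -freq / total)  # smaller distance, then higher P(v)
            if best is None or score < best[0]:
                best = (score, cand)
    return best[1] if best else word
```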
Text segmentation splits the whole text into words with independent meanings. English text is separated by spaces, but social media texts often contain spelling errors and run-together words, so segmentation using only spaces may be inaccurate. This article first uses dictionary matching for preliminary segmentation and then uses the n-gram language model to select the best segmentation. Jieba segmentation is used for Chinese text, combined with HMM (Hidden Markov Model) and CRF (Conditional Random Field) optimization to more accurately identify word boundaries and new words. After segmentation, part-of-speech tagging is applied to understand the text structure. The POS (Part-of-Speech) tagger in the Stanford NLP toolkit is used; it is based on a bidirectional LSTM (Long Short-Term Memory) + CRF structure and adapts to complex contexts. At the same time, a part-of-speech weighting mechanism gives greater weight to adjectives and verbs associated with hate speech to improve the model's discriminative power.

3.2. Semantic Feature Extraction

In exploring hate speech detection and online public opinion regulation with the SVM algorithm, a hate speech detection model is constructed whose core goal is to identify whether a given text contains hateful elements. Unlike typical visual language tasks, hate speech detection requires the comprehensive consideration of images, text, and multiple kinds of prior knowledge, making it more difficult. To this end, for image feature extraction, this article builds on the CLIP (Contrastive Language-Image Pre-training) model and incorporates emotional information. This method first uses CLIP to extract image features and then adjusts these features through a specific projection layer to adapt them to hate speech detection. Meanwhile, the image emotional features are fused with the respective modal features to form an emotionally enhanced feature representation. In addition, attribute information from the image semantics is extracted to increase attention to hateful content. To prevent overfitting, a text and image description supervision module is designed to balance the focus on global and local features, thereby improving detection accuracy. For text, Word2Vec word vector embedding and TF-IDF are utilized to extract semantic features. The overall framework of the hate speech detection model is shown in Figure 1.

3.2.1. Word2Vec Word Vector Embedding

In hate speech detection, it is difficult to capture complex expressions such as metaphor and sarcasm by relying solely on keywords and simple statistical features. To precisely identify hidden hate speech, Word2Vec word vector embedding and TF-IDF are combined to extract text semantic features and enhance the model's understanding of context [28].
In social media hate speech detection, the traditional bag-of-words (BoW) model and TF-IDF cannot fully capture context and semantic relationships. Therefore, Word2Vec word vector embedding is utilized to map words to a low-dimensional vector space, making semantically similar words closer [29,30]. Word2Vec mainly includes two architectures: CBOW (Continuous Bag of Words) and Skip-gram. The Skip-gram model is more suitable for processing low-frequency words, slang, and new words in social media, so it is selected for training to enhance the model’s understanding ability.
The Skip-gram model is trained using the Gensim library, with the window size set to 5, the word vector dimension set to 300, and the minimum TF set to 5. The goal of the Skip-gram model training is to maximize the log-likelihood function to capture the contextual relationship of words. The formula is as follows:
$$\max \sum_{r=1}^{R} \sum_{-d \le m \le d,\ m \neq 0} \log P(v_{r+m} \mid v_r)$$
Among them: $R$ — the total number of words in the text;
$v_r$ — the current center word;
$v_{r+m}$ — a word in the context window;
$d$ — the window size.
The conditional probability $P(v_{r+m} \mid v_r)$ is calculated using the Softmax function. The formula is as follows:
$$P(v_c \mid v_t) = \frac{\exp(v_c^{\top} v_t)}{\sum_{v' \in W} \exp(v'^{\top} v_t)}$$
Among them: $v_t$, $v_c$ — the word vectors of the center word and the context word;
$W$ — the set of all words in the dictionary.
Because computing the full Softmax over the vocabulary is expensive, negative sampling is used to improve training efficiency.
After Skip-gram training, the word vector average pooling method is used to calculate the document-level vector representation. The mean of all word vectors in the document is taken to obtain a fixed-length document vector for SVM classification of hate speech. The document vector calculation formula is as follows:
$$w_{doc} = \frac{1}{N} \sum_{m=1}^{N} w_m$$
Among them: $N$ — the total number of words in the document;
$w_m$ — the vector of the $m$-th word in the document, obtained from Skip-gram training.
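The following Gensim-based sketch shows how the Skip-gram training and average pooling described above could be implemented; the `negative=10` setting and the variable names are assumptions, while the window size, vector dimension, and minimum frequency follow the values stated earlier.

```python
import numpy as np
from gensim.models import Word2Vec

# `sentences` is a list of token lists produced by the preprocessing above.
model = Word2Vec(
    sentences,
    sg=1,             # 1 = Skip-gram architecture
    vector_size=300,  # word vector dimension
    window=5,         # context window size d
    min_count=5,      # minimum term frequency
    negative=10,      # negative sampling instead of the full Softmax
)

def doc_vector(tokens):
    """Average pooling over in-vocabulary word vectors (Equation (7))."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    if not vecs:
        return np.zeros(model.vector_size)
    return np.mean(vecs, axis=0)
```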
Since social media texts are short, TF-IDF weighting is applied when calculating document vectors to highlight the contribution of important words to the document semantics. The obtained document vector is input into SVM to optimize hate speech detection and public opinion regulation.
Word2Vec has difficulty recognizing polysemy. To address this problem, this study combines TF-IDF weighting with context-aware techniques. Although Word2Vec's Skip-gram model can predict a center word from its surrounding vocabulary, it struggles to identify the specific sense of a polysemous word. Therefore, a dynamic weighting method is adopted: the weight of each word vector is adjusted according to the word's TF-IDF weight in the particular document. For example, when the word "bank" has a high TF-IDF value in a finance-domain document, its word vector will lean toward the "financial institution" sense. The weight adjustment depends on a weight factor β determined by the TF-IDF value. The formula is as follows:
$$\beta = \frac{TFIDF(\omega)}{\sum_{\omega'} TFIDF(\omega')}$$
Among them: $\omega$ — the current word; $\beta$ ensures that high-frequency, contextually relevant words contribute more to the document vector.
To handle polysemy, this study uses two further methods to distinguish different senses: expanding the context window and combining topic modeling. The context window of the Skip-gram model is expanded from the default five words to ten words to capture a wider range of contextual relationships. At the same time, the LDA topic model is used to classify the document's topic, and the word vector weights are adjusted accordingly. If the topic of a document is related to "environment", the weight of the "river bank" sense of the word "bank" is increased. These measures handle polysemy more effectively and improve the accuracy of word sense capture.

3.2.2. TF-IDF Weight Calculation

In social media hate speech detection, Word2Vec can capture word meanings but ignore word importance. Frequently occurring words are not very helpful in understanding semantics, while highly discriminative words can better reflect text sentiment. Therefore, TF-IDF is used to adjust word weights to strengthen text feature representation [31,32].
TF-IDF includes TF and inverse document frequency. TF reflects the importance of a word in a single document, but commonly occurring words have weak discriminative power [33,34,35]. In the implementation, Scikit-learn's TfidfVectorizer is used to process social media texts and obtain a TF-IDF-weighted word-document matrix. However, TF-IDF is essentially statistical and cannot capture semantic relationships between words, so Word2Vec word vectors are integrated to enhance the text representation. The weighted vector of document $c$ is computed over the set $\{r_1, r_2, \ldots, r_i\}$ of all its words. The formula is as follows:
$$w_{doc} = \sum_{m=1}^{N} \alpha_m w_m, \quad \alpha_m = TFIDF(w_m)$$
Among them: $\alpha_m$ — the TF-IDF-based weight of the $m$-th word.
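A possible implementation of this weighting is sketched below; it reuses the `model` object from the Skip-gram sketch above, and the normalization by the weight sum (in the spirit of Equation (8)) is an assumption added for numerical stability rather than part of Equation (9) itself.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# `tokenized_corpus` is the list of token lists; documents are re-joined so
# that TfidfVectorizer sees exactly the tokens produced upstream.
docs = [" ".join(tokens) for tokens in tokenized_corpus]
tfidf = TfidfVectorizer(tokenizer=str.split, lowercase=False)
matrix = tfidf.fit_transform(docs)  # TF-IDF-weighted word-document matrix
vocab = tfidf.vocabulary_           # word -> column index

def weighted_doc_vector(tokens, doc_idx):
    """Equation (9): word vectors summed with TF-IDF weights alpha_m."""
    vec = np.zeros(model.vector_size)
    weight_sum = 0.0
    for t in tokens:
        if t in model.wv and t in vocab:
            alpha = matrix[doc_idx, vocab[t]]
            vec += alpha * model.wv[t]
            weight_sum += alpha
    return vec / weight_sum if weight_sum > 0 else vec
```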

3.3. Sentiment Feature

3.3.1. Sentiment Feature Extraction

Sentiment feature analysis is crucial for hate speech detection, especially when identifying implicit expressions. Hate speech often uses sarcasm, metaphors, and other expressions, which are difficult to deal with by traditional methods. This study combines sentiment dictionaries with contextual sentiment analysis to improve the ability to identify complex hate speech.
In the field of social media analysis, hate speech detection models can be regarded as an important branch of sentiment classification. The sentiment features of text are crucial to understanding its potential hatred. Generally, negative emotions such as anger and sadness are more easily associated with hate speech, while positive emotions such as happiness and excitement are less likely to show hatred. Therefore, this article focuses on extracting sentiment features from text to enhance the understanding and recognition of hate speech. The model trained using a large-scale text sentiment dataset extracts text sentiment features. This dataset contains a large number of balanced positive and negative sentiment samples, laying the foundation for fine-grained sentiment classification. Unlike coarse-grained sentiment classification, which only distinguishes positive or negative emotions, fine-grained classification can identify more types of emotional states, thereby providing more precise hate speech detection capabilities. Therefore, this article uses fine-grained classification to extract text sentiment features. The structure of text sentiment feature extraction is shown in Figure 2.
In social media analysis, hate speech detection and online public opinion regulation are becoming increasingly critical. This article uses deep learning, especially convolutional neural networks, combined with the SVM algorithm, to improve the accuracy of social media text sentiment analysis. Inspired by ResNet50 (Residual Networks), multi-level feature extraction is used to enhance the classification effect. Using five convolutional networks of different depths in ResNet, the text sentiment feature $h_d^n$ is extracted at different scales to obtain shallow emotional features, where $n = 1, 2, 3, 4, 5$ denotes the depth level. Multi-level semantic features are further integrated, and shallow and deep network outputs are combined to form rich joint features. These features are processed through a fully connected layer to obtain the sentiment feature vector. Combined with SVM, the model can efficiently detect hate speech and provide a basis for public opinion regulation, improving recognition accuracy and offering new insight into users' emotional tendencies.
The sentiment extraction of the $n$-th layer feature $h_d^n$ uses the Gram matrix and contains two branches. One branch uses a 1 × 1 convolution to achieve cross-channel interaction and outputs the result through a fully connected layer. The other branch first flattens the feature $h_d^n$, uses the Gram matrix to compute the correlations between features, and obtains the Gram matrix representation; after a fully connected layer, features of the same dimension as the first branch are obtained. The two features are added, and irrelevant sentiment information is removed to obtain the text sentiment features extracted by the $n$-th layer. The Gram-matrix-guided text sentiment extraction is shown in Figure 3.

3.3.2. Contextual Sentiment Analysis and Sentiment Dictionary Fusion

This article combines sentiment dictionary analysis with BERT model contextual sentiment analysis. The two kinds of sentiment information are integrated by weighted average to improve the recognition ability. The sentiment dictionary SentiWordNet provides intuitive sentiment scores, but hate speech on social media is often hidden and complex. Therefore, the BERT model is applied to dynamically adjust the sentiment polarity of words using the context to identify implicit emotions.
During the fusion process, the sentiment dictionary analysis and BERT model prediction results are integrated by weighted average, and the weight parameter β is used to balance the two. β can be adjusted according to experimental needs to change the contribution of the two methods. The calculation formula of the sentiment polarity Z ( R ) after fusion is as follows:
$$Z(R) = \beta \cdot \hat{Z}_{dict}(R) + (1 - \beta) \cdot \hat{Z}_{context}(R)$$
Among them: $\hat{Z}_{dict}(R)$ — the sentiment classification result based on the sentiment dictionary;
$\hat{Z}_{context}(R)$ — the sentiment classification result based on the BERT model.
The fusion strategy in this article overcomes the limitation of single sentiment analysis. BERT uses context perception to process complex sentiment text, and the sentiment dictionary provides a stable foundation. This strategy improves detection accuracy, adaptability, and robustness, making hate speech detection more precise and effectively supporting public opinion regulation.
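A minimal sketch of the fusion step in Equation (10), assuming sentiment scores scaled to [-1, 1] and an illustrative β of 0.4 (the paper leaves β to be tuned experimentally):

```python
def fuse_sentiment(z_dict, z_context, beta=0.4):
    """Equation (10): weighted average of the dictionary-based score and the
    BERT-based contextual score; beta is tuned on a validation split."""
    return beta * z_dict + (1.0 - beta) * z_context

# Example: a sarcastic post the dictionary reads as mildly positive (+0.2)
# but BERT, using context, reads as negative (-0.6).
fused = fuse_sentiment(0.2, -0.6)  # -> -0.28, i.e., negative overall
```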

3.4. SVM Classification Model Training

3.4.1. Linear Kernel Function and High-Dimensional Feature Space Mapping

To improve detection capability, this article uses the SVM algorithm as the classifier. SVM uses kernel functions to map text features into a high-dimensional space, effectively processing nonlinear data and enhancing the detection of hidden hate speech. Specifically, the RBF (Radial Basis Function) kernel converts low-dimensional text features that are difficult to separate into high-dimensional separable ones, improving both generalization ability and recognition accuracy.
SVM, as a supervised learning algorithm, has its core in completing classification tasks by finding the optimal hyperplane. In the face of linearly inseparable data, such as hate speech implied in the text, SVM will use kernel functions to project the original features into the high-dimensional space, so as to achieve linearly separable data. Taking the RBF kernel as an example, it can effectively convert the input features into high-dimensional space through Equation (11):
$$K(a_j, a_i) = \exp\left(-\gamma \| a_j - a_i \|^2\right)$$
Among them: $\gamma$ — the width parameter controlling the kernel function.
In this study, a grid search method is used to optimize the parameter γ in the range of 0.01 to 1.0, aiming to maximize the classification interval and enhance the generalization performance of the model.
Given training samples $\{(a_j, b_j)\}_{j=1}^{m}$, where $a_j$ is the feature vector and $b_j$ is the label indicating hate or non-hate speech, the optimization problem of SVM can be expressed as follows:
$$\min_{u, y} \frac{1}{2} \| u \|^2 \quad \text{subject to} \quad b_j (u \cdot a_j + y) \ge 1, \ \forall j$$
Among them: $u$ — the normal vector of the hyperplane;
$y$ — the bias term.
Algorithm process: first, TF-IDF and Word2Vec are combined to extract feature vectors from the text. Next, the RBF kernel function maps these low-dimensional features into a high-dimensional space to handle the nonlinear classification problem. The quadratic programming problem $\min_{u,y} \frac{1}{2}\|u\|^2$ is then solved to determine the classification boundary, in which $u$ serves as the normal vector of the hyperplane. Finally, the sign of $u^{\top} \phi(a) + y$ determines the category of a sample in the high-dimensional feature space.
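The following scikit-learn sketch mirrors this process, assuming the document vectors have already been computed; the feature scaling and the `class_weight="balanced"` option are assumptions added to cope with class imbalance, while C = 1.0 and γ = 0.1 are the values selected in Section 3.4.3.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X_train holds the TF-IDF-weighted Word2Vec document vectors (plus the
# sentiment features); y_train has 1 = hate speech, 0 = non-hate speech.
clf = make_pipeline(
    StandardScaler(),                    # scaling helps the RBF kernel
    SVC(kernel="rbf", C=1.0, gamma=0.1,  # values found in Section 3.4.3
        class_weight="balanced"),        # assumption: offsets class imbalance
)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)  # sign of u^T phi(a) + y decides the class
```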

3.4.2. Application of Cross-Validation

SVM is widely used in text classification due to its superior classification performance and strong generalization ability, but it faces the problem of dataset division during training. Cross-validation can effectively evaluate model performance on unseen data, avoid overfitting or underfitting, and optimize SVM performance. K-fold cross-validation is the most common variant: the dataset is divided into K subsets, K-1 subsets are used for training each time, and one subset is used for validation; this is repeated K times and the results are averaged to reduce bias. Because social media data are often imbalanced, cross-validation is especially important for hate speech detection.
The average error of K-fold cross-validation can be expressed as follows:
$$CV_{error} = \frac{1}{K} \sum_{j=1}^{K} \ell(\hat{b}_j, b_j)$$
Among them: $CV_{error}$ — the average cross-validation error;
$\ell(\hat{b}_j, b_j)$ — the loss between the predicted result $\hat{b}_j$ and the actual label $b_j$ in the $j$-th validation fold;
$K$ — the number of cross-validation folds, $K = 10$.
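A possible realization with scikit-learn is shown below; the stratified variant and the random seed are assumptions chosen to respect the class imbalance mentioned above.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stratified folds preserve the hate/non-hate ratio in each split, which
# matters for the imbalanced data described above.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(clf, X_train, y_train, cv=cv, scoring="accuracy")
cv_error = 1.0 - scores.mean()  # average misclassification loss, Equation (13)
```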

3.4.3. Hyperparameter Optimization

The performance of SVM is affected by both data characteristics and hyperparameter configuration. In hate speech detection, appropriate hyperparameters can significantly improve model accuracy and generalization ability. The key hyperparameters are the regularization parameter $C$ and the kernel parameter $\gamma$. $C$ controls the classifier's tolerance for misclassification: smaller $C$ values improve generalization, while overly large $C$ values may lead to overfitting. The kernel parameter $\gamma$ determines the Gaussian kernel width and hence the range of influence of data points. In hate speech detection, reasonably adjusting these two parameters is crucial for accuracy on social media data.
In the SVM model, optimizing hyperparameters can significantly improve performance. This study combines two methods, grid search and random search, to find the best regularization parameter C and the parameter γ of the RBF kernel function.
1. Parameter setting: The regularization parameter $C$ regulates the fit of the model to the training data; its value should be moderate to avoid underfitting or overfitting. $C$ is tuned over the values 0.1, 1, 10, and 100 to balance classification accuracy and generalization ability. The kernel parameter $\gamma$ affects the width of the RBF kernel, i.e., the range of influence of the data points, and should also be moderate; $\gamma$ is tuned over the values 0.01, 0.1, and 1.
2. Evaluation criteria: 10-fold cross-validation is used, and the F1 value and cross-validation error evaluate the effect of each parameter setting. The F1 value jointly reflects precision and recall, ensuring that the model identifies hate speech both accurately and comprehensively.
3. Optimization process: Through grid search, all combinations of $C$ and $\gamma$ are tried, and the performance of each combination is recorded. When $C$ = 1.0 and $\gamma$ = 0.1, the model achieves its best F1 value on the test set (89.2%; see Table 8), with a cross-validation error of 0.0958. Random search was also tried: 50 parameter sets were sampled, and the selection was refined with Bayesian optimization. The results likewise showed that $C$ = 1.0 and $\gamma$ = 0.1 performed best.
4. Parameter selection basis: The final parameters are chosen according to the following principles. F1 value priority: ensure the model achieves a balance between hate speech (the minority class) and non-hate speech (the majority class). Computational efficiency: avoid overly large $C$ or $\gamma$ values that lengthen training (e.g., $C$ = 100 increases training time by about 20%). Stability: the parameters perform stably across cross-validation folds, with fluctuations below 2%.
To find the best hyperparameter combination, grid search and random search are used together. Grid search traverses all possible hyperparameter combinations, evaluates each through cross-validation, and selects the best one. The parameter ranges for grid search are $C \in \{0.1, 1, 10, 100\}$ and $\gamma \in \{0.01, 0.1, 1\}$. Through training and evaluation, the configuration that best reduces error and improves accuracy is found. However, grid search is computationally expensive, especially when the parameter space is large, so random search is also used to assist the optimization. Together, these two methods find the most suitable hyperparameter configuration for hate speech detection in a reasonable time.
Cross-validation is a common method for evaluating model performance. It divides the dataset into training and validation sets multiple times, trains and validates the model repeatedly, and finally computes the model's average error rate on the validation sets. Let $L(C, \gamma)$ denote the cross-validation error of the model under specific parameters $C$ and $\gamma$. The formula is as follows:
$$L(C, \gamma) = \frac{1}{K} \sum_{k=1}^{K} Error(C, \gamma; D_k)$$
Among them: $K$ — the number of cross-validation folds;
$D_k$ — the $k$-th validation set;
$Error$ — the classification error rate on the validation set;
$C$, $\gamma$ — the regularization parameter and kernel parameter of the SVM.
The optimal hyperparameter combination is determined by minimizing this loss function. The formula is as follows:
$$(\hat{C}, \hat{\gamma}) = \arg\min_{C, \gamma} L(C, \gamma)$$
Among them: $\hat{C}$ and $\hat{\gamma}$ — the optimal regularization parameter and kernel parameter.
The loss function $L(C, \gamma)$ evaluates the effect of different hyperparameter configurations, helps find the hyperparameters best suited to the hate speech detection model, improves model accuracy and stability, and supports efficient public opinion regulation.
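The grid search described above could be realized with scikit-learn as follows; the `scoring` choice and the parallelization flag are assumptions, while the parameter grid matches the ranges given earlier.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid,
    cv=10,          # 10-fold cross-validation, as in Equation (14)
    scoring="f1",   # F1 balances precision and recall on the minority class
    n_jobs=-1,      # evaluate parameter combinations in parallel
)
search.fit(X_train, y_train)
print(search.best_params_)  # in this study's experiments: C = 1.0, gamma = 0.1
```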

3.5. Public Opinion Regulation Strategy

The spread of hate speech is harmful to social media platforms and social stability. Therefore, real-time monitoring and intervention of hate speech are crucial. Using the detection results of the SVM algorithm, a hate speech risk warning model is constructed to provide real-time public opinion warning. The model identifies potential hate speech and assesses the risk level by classifying social media text data. The output results include sentiment labels and risk scores. To improve the accuracy of the warning, the score is comprehensively evaluated by combining text features and social data, such as transmission speed and interaction volume. This model can provide real-time warnings for social media managers, helping them to quickly identify and deal with risky content and prevent the spread of hate speech from causing serious consequences.
By leveraging a risk warning model for hate speech, social media platforms can implement a range of public opinion intervention strategies to address high-risk content effectively. Firstly, harmful content can be removed promptly to disrupt the transmission chain of hate speech, with automated review mechanisms enabling real-time identification and elimination of such content. Secondly, the reach of high-risk speech can be restricted to specific audiences, or its visibility can be reduced, thereby preventing it from gaining widespread attention and triggering public opinion crises. Through these approaches, social media platforms can mitigate the risks associated with hate speech, foster a healthier online environment, and enhance their credibility and societal influence.
By integrating the SVM algorithm with intelligent risk systems, social media platforms can establish a dynamic, real-time control mechanism for managing hate speech. This system provides a scientific foundation and technical support for content moderation, helping to minimize the adverse effects of hate speech, safeguard user rights, and promote the sustainable and healthy development of the platform.

4. Performance Evaluation of Hate Speech Detection

4.1. Experimental Design

This study aims to evaluate the performance of the SVM algorithm in detecting hate speech on social media. Text data from platforms such as Twitter, Facebook, and Weibo are collected, mainly involving two types of hate speech: racial discrimination and sexism. Twitter, Facebook, and Weibo data are used because of their rich text resources, which make examples of hate speech easy to obtain and support model training. In contrast, TikTok and Instagram are mainly based on videos and pictures, where text information is difficult to collect and analyze, making them less suitable for text-based hate speech detection. Focusing initially on text-led platforms ensures that the model is effective and accurate; it can be extended to more social media platforms in the future to improve its universality. The time span is from August 2024 to January 2025, with a total of about 400,000 records. During the text cleaning phase, stop words, emoticons, and URLs were removed to ensure the purity of the data. In the feature extraction step, TF-IDF was used to compute word frequency weights; for example, the TF-IDF value of the word "discrimination" was calculated as 0.89. To alleviate the interference of polysemy, Word2Vec was combined with the dynamic weighting strategy (see Equation (8)). Finally, in the data division stage, stratified sampling was used to ensure class balance, and the dataset was divided into training and test sets at a ratio of 8:2. These steps laid a solid foundation for the subsequent model training and analysis.
Table 3 displays the distribution of hate speech keyword data collected by each platform.
In this study, an audio hate speech detection database was also constructed to test the model's ability to recognize multimodal content. The database contains 50,000 audio samples, including 25,000 hate speech samples and 25,000 control samples. The hate speech samples relate to racial and gender discrimination, are derived from YouTube videos and podcasts, and cover English and Swahili. To ensure data balance, samples are classified by language: the English and Swahili samples number 15,000 and 10,000, respectively, with each category balanced. Three professional annotators labeled the data independently to ensure labeling consistency, with a Kappa coefficient of 0.85. Data cleaning removed background noise, and voiceprint analysis ensured speaker diversity. The control samples of the recording database must meet the following requirements: first, semantic safety, i.e., the audio content must contain no offensive words or implied offensive intent; second, contextual representativeness, covering diverse scenarios such as daily conversations and news broadcasts, with duration (15 s on average) and signal-to-noise ratio consistent with the hate speech samples; finally, regarding data sources, data are collected from platforms such as YouTube and podcasts through keyword filtering (such as "discussion" and "interview") to ensure that the collected samples match the hate speech samples in platform distribution. The recording database is shown in Table 4.
As shown in Table 4, the samples are distributed by language as follows: 15,000 English samples (7,500 hate and 7,500 control) and 10,000 Swahili samples (5,000 hate and 5,000 control). The total sample size is 50,000, and the ratio of hate to non-hate samples is kept at 1:1 to avoid class imbalance.
To evaluate the model's performance, accuracy, recall, F1 score, and inference time are used as indicators, and the model is compared with three baselines: BERT, HateBERT, and an RF classifier. BERT is based on the Transformer architecture and can deeply understand text semantics; HateBERT is a fine-tuned BERT model designed specifically for hate speech detection. The key parameter configuration of the model in this article is shown in Table 5.

4.2. Experimental Results

4.2.1. Accuracy

Accuracy is a key indicator that reflects the model’s ability to correctly identify hate speech. High accuracy shows that the model can accurately identify hate speech from massive social media data, which helps maintain the health of the network environment. Therefore, this article first counts the accuracy of the model for data 1 under different methods in 12 tests. The findings are displayed in Figure 4.
As shown in Figure 4, in 12 tests, the accuracy of the SVM model is between 86.34% and 92.31%; the accuracy of the BERT model is between 70.27% and 75.40%; the HateBERT model is between 80.28% and 86.12%; and the RF is between 61.37% and 67.48%. The SVM model achieves the highest accuracy of 92.31% in the third test, exceeding the 72.90% of the BERT model, 86.12% of the HateBERT model, and 65.54% of the RF. The average accuracy of the SVM is 90.42%, far exceeding the 72.95% of the BERT model, 83.83% of the HateBERT model, and 64.82% of the RF classifier. This shows that the SVM model can more effectively identify hate speech, which is crucial to the maintenance of the network environment and shows its great potential in regulating network public opinion.
Subsequently, this article also tests the accuracy of different methods for dataset 2. The findings are displayed in Table 6.
Table 6 shows that in the tests on dataset 2, the SVM model is significantly better than BERT, HateBERT, and RF across all runs. In the fifth test, the accuracy of SVM is 95.51%, much higher than BERT's 72.17%, HateBERT's 82.60%, and RF's 63.67%. In the 12th test, the accuracy of SVM is 90.15%, versus 70.51% for BERT, 80.08% for HateBERT, and 61.16% for RF. Overall, the average accuracy of SVM is 92.84%, while the average accuracies of BERT, HateBERT, and RF over the 12 tests on dataset 2 are 71.51%, 81.78%, and 63.99%, respectively. This shows that the SVM model performs well in identifying hate speech; although HateBERT performs reasonably, its average accuracy does not exceed 85%. Compared with BERT (average accuracy 72.95%) and the fine-tuned HateBERT (83.83%), SVM achieved 90.42% and 92.84% accuracy on the two datasets, respectively, while reducing inference time by about 60% (see Figure 7). This verifies the advantages of SVM in computational efficiency and in recognizing complex contexts (such as sarcasm).

4.2.2. Recall Rate

The accuracy rate alone may not be enough to fully reflect the performance of the model, especially when the categories are unbalanced. The model needs to be comprehensively evaluated in combination with the recall rate. The recall rate measures the proportion of hate speech correctly identified by the model and is the key to evaluating coverage. A high recall rate indicates that the model can effectively identify most hate speech, reduce missed reports, and help to timely regulate network public opinion. This article tests the recall rate of the model in datasets 1 and 2 under different methods. The findings are illustrated in Figure 5.
Figure 5A–D show that in dataset 1, the recall rate of the SVM model in the 12th test is 90.50%, and in dataset 2, it is 93.63%. The recall rates of BERT in the 12th test in datasets 1 and 2 are 74.02% and 69.09%, respectively; those of the HateBERT are 83.6% and 83.92%, respectively; those of the RF method are 65.96% and 65.06%, respectively. Overall, the average recall rates of the SVM model on these two datasets are 88.06% and 90.79%, respectively; those of the BERT are 71.81% and 69.81%, respectively; those of the HateBERT are 83.13% and 81.50%, respectively; those of the RF are 63.88% and 63.23%, respectively. It can be seen that the SVM model maintains a high recall rate on both sets of datasets, indicating that it can effectively identify most hate speech and reduce the false negative rate. This helps to timely regulate network public opinion, especially in preventing the spread of hate speech. These results further prove the effectiveness of the SVM model in maintaining a safe and healthy social media environment.

4.2.3. F1 Value

The F1 value comprehensively measures the model’s precision and recall. A high F1 value shows that the model recognizes hate speech accurately and comprehensively, reducing false positives and false negatives. The F1 value is crucial to ensure the reliability and effectiveness of social media hate speech detection systems and helps to precisely regulate network public opinion. This article first tests the F1 value of dataset 1, as shown in Figure 6.
Figure 6 shows that for dataset 1, in the second test, the F1 value of the SVM model is 90.69%, significantly higher than BERT’s 74.5%, HateBERT’s 84.5%, and RF’s 63.36%. In the 12th test, the F1 value of the SVM model is 89.4%, significantly higher than BERT’s 74.44%, HateBERT’s 84.77%, and RF’s 66.28%. Throughout the test, SVM’s F1 value mostly remains above 86.01%, with an average F1 value of 89.20%; BERT’s F1 value is between 70.46% and 74.50%, with an average F1 value of 72.35%; HateBERT’s F1 value is between 80.86% and 85.72%, with an average of 83.46%; RF’s F1 value is between 62.65% and 66.28%, with an average of 64.31%. After mapping the text features to the high-dimensional space through the RBF kernel function, the F1 value of SVM of dataset 1 remains in a high range, verifying its ability to capture nonlinear patterns. These results show that the SVM model can more reliably reduce false positives and false negatives, which is crucial to ensuring the effectiveness of social media hate speech detection systems and helps to precisely regulate network public opinion.
Then, the F1 value of dataset 2 is calculated, as displayed in Table 7.
Table 7 shows that in the fifth test of dataset 2, the F1 value of the SVM model reaches 93.15%, which is significantly higher than BERT’s 70.43%, HateBERT’s 80.84%, and RF’s 62.58%. Throughout the test, the F1 value of SVM remains stable at above 90%, with a maximum of 93.84%, indicating that it is both accurate and comprehensive in identifying hate speech, effectively reducing false positives and false negatives. In comparison, the maximum F1 value of the BERT model is 72.32%, and the minimum is 68.77%; the maximum F1 value of the HateBERT model is 83.08%, and the minimum is 79.59%; the maximum F1 value of the RF method is 65.42%, and the minimum is 62.49%. These results fully demonstrate the reliability and effectiveness of the SVM algorithm in detecting hate speech on social media, which is helpful for precise network public opinion regulation and provides strong support for maintaining a healthy network environment.
Next, this paper also counts the F1 values and error rates of different combinations of C and γ in dataset 1. The results are shown in Table 8.
As can be seen from Table 8, when C = 1.0 and γ = 0.1, the F1 value of the model on dataset 1 reaches its maximum of 89.2% with an error rate of 0.095, whereas when C = 0.1 and γ = 0.01, the error rate is 0.12. The combination C = 1.0, γ = 0.1 thus yields both the best F1 value and the smallest error, achieving the best balance between classification accuracy and computational efficiency.

4.2.4. Inference Time

Inference time reflects the speed at which the model predicts new data and is the key to evaluating efficiency. Short inference time means that hate speech can be monitored and responded to quickly. This article counts the inference time of models under different methods for datasets 1 and 2, as displayed in Figure 7.
Figure 7A shows the inference time of this article’s method; Figure 7B shows the BERT model; Figure 7C shows the HateBERT model; Figure 7D shows the RF method. As shown in Figure 7A–D, the inference time of this article’s SVM method for the two datasets is significantly lower than that of BERT, HateBERT, and RF classifiers. In the fifth test, the inference time of SVM for dataset 1 is 3.27 milliseconds, and for dataset 2, it is 2.35 milliseconds; the inference time of BERT for the two datasets is 8.25 milliseconds and 9.96 milliseconds; 11.15 milliseconds and 10.05 milliseconds for HateBERT; 8.73 milliseconds and 8.94 milliseconds for RF. During the entire test process, the average inference time of the proposed method for the two datasets is 3.71 milliseconds and 2.96 milliseconds, respectively; the average inference times of BERT are 9.63 milliseconds and 11.76 milliseconds, respectively; 10.03 milliseconds and 11.09 milliseconds for HateBERT, respectively; 8.38 milliseconds and 8.13 milliseconds for RF. The SVM method shows its efficient processing ability. This enables SVM to quickly process social media content and is suitable for scenarios with high real-time requirements. It can be seen that the SVM model in this article not only improves the efficiency of hate speech recognition but also provides a faster and more effective solution for maintaining a healthy network environment.
Finally, in order to more intuitively compare the core indicators of different models on the two datasets, this paper draws a comprehensive performance comparison table, and the results are shown in Table 9.
Table 9 summarizes the performance of the models on the two datasets. The SVM model is significantly better than the other models in accuracy, recall, and F1 value: its accuracies are 90.42% and 92.84%, its recall rates 88.06% and 90.79%, and its F1 values 89.20% and 92.84%. At the same time, the inference times of SVM are very short, only 3.71 ms and 2.96 ms, far lower than BERT's 9.63 ms and HateBERT's 10.03 ms. This demonstrates the superiority of SVM in both computation speed and classification effect.
The above experiments show that the SVM model proposed in this study has clear advantages in hate speech detection. Its accuracy reaches 90.42% and 92.84% on datasets 1 and 2, respectively, far surpassing BERT (72.95%) and Random Forest (64.82%). At the same time, SVM's average inference time is only 3.71 ms, much lower than BERT's 9.63 ms, which makes it well suited to real-time monitoring. In addition, through the nonlinear mapping of the RBF kernel, SVM achieves an F1 value of 89.2% on sarcastic texts with implied hatred, significantly better than BERT (F1 value of 71.5%), which is prone to misjudging such texts owing to its dependence on contextual understanding.
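For reference, the nonlinear mapping referred to here is induced by the standard RBF kernel,

\[ K(x_i, x_j) = \exp\left(-\gamma \lVert x_i - x_j \rVert^2\right), \]

where γ is the kernel-width parameter tuned in Table 8 (γ = 0.1 in the final configuration); larger values of γ make the decision boundary more local, while smaller values smooth it.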

5. Conclusions

This study leverages the SVM algorithm and optimized text feature extraction techniques, including Word2Vec word vector embedding and TF-IDF weighting, to enhance the detection of implicit hate speech on social media. By integrating a sentiment dictionary and the BERT model, the system achieves superior recognition accuracy for complex expressions such as sarcasm and metaphor. Experimental results demonstrate that the SVM model outperforms other baseline methods in terms of performance metrics and computational efficiency, making it suitable for real-time applications. The model’s ability to accurately distinguish between hate speech and non-hate speech provides a robust tool for fostering a healthier online environment. Furthermore, its integration with advanced language models like BERT offers significant potential for improving network public opinion management.
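As a hedged sketch of the feature construction summarized above, the following (Python, gensim + scikit-learn) combines Skip-gram Word2Vec vectors with TF-IDF-derived weights into document vectors; the toy token lists and the use of IDF weights as a stand-in for the full TF-IDF weighting scheme are assumptions, not the authors' exact implementation:

# Hedged sketch: TF-IDF-weighted averaging of Skip-gram word vectors.
# Placeholder token lists only; the paper trains on real social media text.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [["placeholder", "hateful", "phrase"],
        ["placeholder", "friendly", "phrase"]] * 20

w2v = Word2Vec(docs, vector_size=100, sg=1, min_count=1, seed=1)  # sg=1 = Skip-gram

tfidf = TfidfVectorizer(analyzer=lambda tokens: tokens)  # input is pre-tokenized
tfidf.fit(docs)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def doc_vector(tokens):
    # Weight each word vector by its IDF so that frequent, uninformative
    # words contribute less to the document representation.
    vecs = [w2v.wv[t] * idf.get(t, 1.0) for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.vstack([doc_vector(d) for d in docs])  # ready for SVC(kernel="rbf")
print(X.shape)  # (40, 100)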
Despite these achievements, this study has certain limitations that warrant further exploration. First, while the model has been validated on multiple datasets, its cross-platform and multilingual applicability remains untested. Future research should focus on evaluating and enhancing the model’s adaptability across diverse platforms and languages to ensure broader usability. Second, the model’s generalization ability is limited when encountering obscure or novel expressions, which are common in dynamic social media environments. Addressing this challenge will require incorporating more adaptive learning mechanisms, such as continual learning or transfer learning, to handle emerging linguistic trends effectively.
Additionally, the rapidly evolving nature of social media data necessitates regular updates to the model to maintain its relevance and accuracy. Future work could explore automated retraining pipelines using real-time data streams to ensure the model adapts to new patterns of hate speech. Privacy protection and ethical considerations also present critical challenges. Researchers must develop efficient data collection and analysis methods that comply with privacy regulations while minimizing bias in model predictions.
Finally, the practical application of the model needs to be validated through field tests in real-world scenarios. Such evaluations would provide valuable insights into its effectiveness under varying conditions and help refine its deployment strategies. In conclusion, while this study establishes a strong foundation for hate speech detection, future research should prioritize cross-platform scalability, multilingual support, dynamic adaptability, ethical compliance, and real-world validation to further enhance its impact and utility.
This study performs well on data from Twitter, Facebook, and Weibo, but the limited range of data sources may affect the generality of the model. For example, the visual content on TikTok and Instagram may carry forms of hate speech unique to those platforms, while this model is optimized for text only. Future improvements include the following: multimodal data integration, combining image recognition with text analysis to strengthen hate speech recognition on video platforms; cross-platform transfer learning, fine-tuning pre-trained models across platforms to adapt to different platforms' language styles; and expanded data collection, exploring API partnerships or public datasets from TikTok and Instagram to supplement training data from video- and image-led platforms. These measures will progressively extend the work across platforms and overcome its current limitations. In addition, although the validity of the SVM model on text data has been verified, the recording database still needs to be expanded: its sample size is currently only 50,000, the existing audio data are concentrated in English and Swahili, and feature extraction relies on a pre-trained CLIP model, whose recognition performance in dialects or complex accent environments is unsatisfactory. To improve this, there are plans to introduce transfer learning to strengthen joint audio–text training and to include more recordings of high-risk languages such as Spanish and Arabic.

6. Suggestions

This research not only provides practical technical solutions for hate speech detection but also, through a series of concrete suggestions, closes the loop from academic research to technology deployment to social application. Taking the deployment of the real-time monitoring system as an example, the system has performed well in reducing user complaints, helping platforms cut complaint volumes by more than 30%. At the same time, the multilingual expansion plan proposed in this study has broad coverage and can serve more than 80% of social media users worldwide. Looking ahead, this research is expected to play a key role in collaboration among policymakers, technology developers, and community users, becoming an important technical underpinning for a healthy and harmonious cyberspace.
Practical application suggestions:
The SVM algorithm proposed in this study has demonstrated high efficiency (accuracy of 90.42–92.84% and an average inference time of only 3.71 ms) and interpretability in hate speech detection. On this basis, the following practical suggestions are offered for social media governance:
1.
Suggestions for the platform operation team:
Deploy a real-time monitoring system: integrate the SVM model into the platform's content review workflow for rapid response. For example, Twitter could combine its API with the SVM model to identify and flag hate speech in real time, substantially easing the manual review burden (e.g., cutting review time by 70%); a minimal integration sketch is given after this list.
Implement a multilingual expansion plan: for non-English social media (such as Chinese Weibo), use transfer learning to adapt quickly to localized language characteristics and reduce misjudgments caused by language differences.
Deploy the model in lightweight form: using model compression, embed the SVM model in mobile apps to enable real-time detection on low-resource devices (such as on-device chat content filtering).
2.
Recommendations to policymakers:
Construct a standardized governance framework: drawing on the classification standards of this research, formulate clear, quantifiable norms for managing online speech to reduce disputes arising from subjective judgments.
Establish a cross-platform collaboration mechanism: promote various social media platforms to share labeling data resources, jointly build multimodal training datasets, and improve the accuracy of identifying hate speech in videos and images.
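As referenced in suggestion 1, the following is a minimal sketch of such a review hook; the pipeline, placeholder training data, and the review_post helper are hypothetical illustrations, not a production platform integration:

# Hedged sketch: an end-to-end moderation hook built from the Table 5
# settings (RBF kernel, C=1.0, gamma=0.1). Placeholder data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

texts = ["hateful post placeholder", "neutral post placeholder"] * 50
labels = [1, 0] * 50

pipe = make_pipeline(TfidfVectorizer(), SVC(kernel="rbf", C=1.0, gamma=0.1))
pipe.fit(texts, labels)

def review_post(post: str) -> bool:
    # True = flag the post for the human review queue.
    return bool(pipe.predict([post])[0])

print(review_post("neutral post placeholder"))  # -> False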

Author Contributions

S.L. and Z.L., Writing, Editing, and Software; Z.L., Data analysis; S.L., Resources. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

Abbreviation | Full Name | Brief Explanation
API | Application Programming Interface | An interface for exchanging data with social media platforms.
OAuth 2.0 | Open Authorization 2.0 | A protocol that allows applications to access user data securely without handling user passwords.
SVM | Support Vector Machine | A supervised learning algorithm used mainly for classification and regression analysis.
RBF | Radial Basis Function | A kernel function commonly used in SVM that maps data into a high-dimensional space for easier classification.
TF-IDF | Term Frequency–Inverse Document Frequency | A statistical method for evaluating the importance of a word in a document or corpus.
Word2Vec | – | A technique for generating word vectors that capture contextual relationships between words.

References

  1. Matamoros-Fernández, A.; Farkas, J. Racism, hate speech, and social media: A systematic review and critique. Telev. New Media 2021, 22, 205–224. [Google Scholar] [CrossRef]
  2. Vidgen, B.; Yasseri, T. Detecting weak and strong Islamophobic hate speech on social media. J. Inf. Technol. Politics 2020, 17, 66–78. [Google Scholar] [CrossRef]
  3. Chekol, M.A.; Moges, M.A.; Nigatu, B.A. Social media hate speech in the walk of Ethiopian political reform: Analysis of hate speech prevalence, severity, and natures. Inf. Commun. Soc. 2023, 26, 218–237. [Google Scholar] [CrossRef]
  4. Asemah, E.S.; Nwaoboli, E.P.; Nwoko, Q.T. Textual analysis of select social media hate speech messages against clergymen in Nigeria. GVU J. Manag. Soc. Sci. 2022, 7, 1–14. [Google Scholar]
  5. Gracia-Calandín, J.; Suárez-Montoya, L. The eradication of hate speech on social media: A systematic review. J. Inf. Commun. Ethics Soc. 2023, 21, 406–421. [Google Scholar] [CrossRef]
  6. Khan, M.U.S.; Abbas, A.; Rehman, A.; Nawaz, R. HateClassify: A service framework for hate speech identification on social media. IEEE Internet Comput. 2020, 25, 40–49. [Google Scholar] [CrossRef]
  7. Al-Hassan, A.; Al-Dossari, H. Detection of hate speech in Arabic tweets using deep learning. Multimedia Syst. 2022, 28, 1963–1974. [Google Scholar] [CrossRef]
  8. Khan, S.; Fazil, M.; Sejwal, V.K.; Alshara, M.A.; Alotaibi, R.M.; Kamal, A.; Baig, A.R. BiCHAT: BiLSTM with deep CNN and hierarchical attention for hate speech detection. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 4335–4344. [Google Scholar] [CrossRef]
  9. Ganfure, G.O. Comparative analysis of deep learning based Afaan Oromo hate speech detection. J. Big Data 2022, 9, 76. [Google Scholar] [CrossRef]
  10. Sultan, D.; Toktarova, A.; Zhumadillayeva, A.; Aldeshov, S.; Mussiraliyeva, S.; Beissenova, G.; Tursynbayev, A.; Baenova, G.; Imanbayeva, A. Cyberbullying-related hate speech detection using shallow-to-deep learning. Comput. Mater. Contin. 2023, 74, 2115–2131. [Google Scholar] [CrossRef]
  11. Paul, C.; Bora, P. Detecting hate speech using deep learning techniques. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 619–623. [Google Scholar] [CrossRef]
  12. Liu, Z.; Kan, H.; Zhang, T.; Li, Y. DU KMSVM: A framework of deep uniform kernel mapping support vector machine for short text classification. Appl. Sci. 2020, 10, 2348. [Google Scholar] [CrossRef]
  13. Nanda, R.; Haerani, E.; Gusti, S.K.; Ramadhani, S. Klasifikasi Berita Menggunakan Metode Support Vector Machine. J. Nas. Komputasi dan Teknol. Inf. (JNKTI) 2022, 5, 269–278. [Google Scholar] [CrossRef]
  14. Salma, A.; Silfianti, W. Sentiment analysis of user reviews on COVID-19 information applications using naive bayes classifier, Support Vector Machine, and K-Nearest Neighbor. Int. Res. J. Adv. Eng. Sci. 2021, 6, 158–162. [Google Scholar]
  15. Alkomah, F.; Ma, X. A literature review of textual hate speech detection methods and datasets. Information 2022, 13, 273. [Google Scholar] [CrossRef]
  16. Al-Makhadmeh, Z.; Tolba, A. Automatic hate speech detection using killer natural language processing optimizing ensemble deep learning approach. Computing 2020, 102, 501–522. [Google Scholar] [CrossRef]
  17. Simon, H.; Baha, B.Y.; Garba, E.J. Trends in machine learning on automatic detection of hate speech on social media platforms: A systematic review. FUW Trends Sci. Technol. J. 2022, 7, 001–016. [Google Scholar]
  18. Aljarah, I.; Habib, M.; Hijazi, N.; Faris, H.; Qaddoura, R.; Hammo, B.; Abushariah, M.; Alfawareh, M. Intelligent detection of hate speech in Arabic social network: A machine learning approach. J. Inf. Sci. 2021, 47, 483–501. [Google Scholar] [CrossRef]
  19. Imbwaga, J.L.; Chittaragi, N.B.; Koolagudi, S.G. Automatic hate speech detection in audio using machine learning algorithms. Int. J. Speech Technol. 2024, 27, 447–469. [Google Scholar] [CrossRef]
  20. Alaoui, S.S.; Farhaoui, Y.; Aksasse, B. Hate speech detection using text mining and machine learning. Int. J. Decis. Support Syst. Technol. 2022, 14, 1–20. [Google Scholar] [CrossRef]
  21. Zulqarnain, M.; Ghazali, R.; Hassim, Y.M.M.; Rehan, M. Text classification based on gated recurrent unit combines with support vector machine. Int. J. Electr. Comput. Eng. (IJECE) 2020, 10, 3734–3742. [Google Scholar] [CrossRef]
  22. Arifin, N.; Enri, U.; Sulistiyowati, N. Penerapan Algoritma Support Vector Machine (SVM) dengan TF-IDF N-Gram untuk Text Classification. STRING (Satuan Tulisan Ris. dan Inov. Teknol.) 2021, 6, 129–136. [Google Scholar] [CrossRef]
  23. Rezaeian, N.; Novikova, G. Persian Text classification using naive bayes algorithms and support vector machine algorithm. Indones. J. Electr. Eng. Informatics (IJEEI) 2020, 8, 178–188. [Google Scholar] [CrossRef]
  24. Agustina, D.A.; Subanti, S.; Zukhronah, E. Implementasi Text Mining Pada Analisis Sentimen Pengguna Twitter Terhadap Marketplace di Indonesia Menggunakan Algoritma Support Vector Machine. Indones. J. Appl. Stat. 2021, 3, 109–122. [Google Scholar] [CrossRef]
  25. Tao, Z.; Xu, Q.; Liu, X.; Liu, J. An integrated approach implementing sliding window and DTW distance for time series forecasting tasks. Appl. Intell. 2023, 53, 20614–20625. [Google Scholar] [CrossRef]
  26. Kulanuwat, L.; Chantrapornchai, C.; Maleewong, M.; Wongchaisuwat, P.; Wimala, S.; Sarinnapakorn, K.; Boonya-Aroonnet, S. Anomaly detection using a sliding window technique and data imputation with machine learning for hydrological time series. Water 2021, 13, 1862. [Google Scholar] [CrossRef]
  27. Yilahun, H.; Hamdulla, A. Entity extraction based on the combination of information entropy and TF-IDF. Int. J. Reason. Intell. Syst. 2023, 15, 71–78. [Google Scholar] [CrossRef]
  28. Cahyani, D.E.; Patasik, I. Performance comparison of TF-IDF and Word2Vec models for emotion text classification. Bull. Electr. Eng. Inform. 2021, 10, 2780–2788. [Google Scholar] [CrossRef]
  29. Johnson, S.J.; Murty, M.R.; Navakanth, I. A detailed review on word embedding techniques with emphasis on word2vec. Multimedia Tools Appl. 2024, 83, 37979–38007. [Google Scholar] [CrossRef]
  30. Yilmaz, S.; Toklu, S. A deep learning analysis on question classification task using Word2vec representations. Neural Comput. Appl. 2020, 32, 2909–2928. [Google Scholar] [CrossRef]
  31. Qin, J.; Zhou, Z.; Tan, Y.; Xiang, X.; He, Z. A big data text coverless information hiding based on topic distribution and TF-IDF. Int. J. Digit. Crime Forensics 2021, 13, 40–56. [Google Scholar] [CrossRef]
  32. Lubis, A.R.; Nasution, M.K.M.; Sitompul, O.S.; Zamzami, E.M. The effect of the TF-IDF algorithm in times series in forecasting word on social media. Indones. J. Electr. Eng. Comput. Sci. 2021, 22, 976–984. [Google Scholar] [CrossRef]
  33. Abubakar, H.D.; Umar, M.; Bakale, M.A. Sentiment classification: Review of text vec-torization methods: Bag of words, Tf-Idf, Word2vec and Doc2vec. SLU J. Sci. Technol. 2022, 4, 27–33. [Google Scholar] [CrossRef]
  34. Alfarizi, M.I.; Syafaah, L.; Lestandy, M. Emotional text classification using TF-IDF (term frequency-inverse document frequency) and LSTM (long short-term memory). JUITA J. Inform. 2022, 10, 225. [Google Scholar] [CrossRef]
  35. Luthfi, M.F.; Lhaksamana, K.M. Implementation of TF-IDF method and support vector machine algorithm for job applicants text classification. J. Media Inform. Budidarma 2020, 4, 1181–1186. [Google Scholar]
Figure 1. Overall framework of the hate speech detection model.
Figure 2. Extraction structure of text sentiment features.
Figure 3. Text sentiment extraction guided by the Gram matrix.
Figure 4. Accuracy results of dataset 1 under different methods.
Figure 5. Recall rate test results of datasets 1 and 2. (A): The method in this article; (B): BERT model; (C): HateBERT model; (D): RF.
Figure 6. F1 value of dataset 1.
Figure 7. Comparison of inference time for datasets 1 and 2. (A): The method in this article; (B): BERT model; (C): HateBERT model; (D): RF method.
Table 1. Effects of the two methods.

Method Type | Advantages | Disadvantages | Reason for (Not) Choosing in This Paper
Neural network | Captures deep semantics; suitable for complex contexts | Requires large amounts of data and computing resources; black-box model; poor real-time performance | Not suitable for resource-limited scenarios or where explanations are required
Hybrid model | Combines the advantages of multiple models | Difficult parameter tuning; high risk of overfitting; weak high-dimensional data processing | High complexity; difficult to balance the performance of multiple algorithms
SVM | Strong high-dimensional data processing; efficient computation; good interpretability | Sensitive to noise; kernel function selection depends on experience | Suits sparse text, supports real-time monitoring, and improves recognition of implied content
Table 2. Summary of data collection methods.

Serial Number | Data Collection Method | Source | Collection Time | Keyword Example
1 | API collection | Twitter | 1 August 2024–31 January 2025 | Hate speech, offensive language
2 | API collection | Facebook | 1 August 2024–31 January 2025 | Racism, sexism
3 | API collection | Weibo | 1 August 2024–31 January 2025 | Racial discrimination, gender discrimination
4 | Crawler collection | Reddit forum | 15 August 2024–31 December 2024 | Hate speech, offensive content
5 | Crawler collection | News review | 1 September 2024–15 January 2025 | Hate speech, discrimination
6 | Crawler collection | Public blog | 1 October 2024–1 January 2025 | Offensive language, racism
Table 3. Hate speech keyword dataset.

Platform | Total Amount of Data | Keyword 1 (Dataset 1) | Keyword 2 (Dataset 2)
Twitter | 150,000 | 100,000 | 50,000
Facebook | 200,000 | 80,000 | 120,000
Weibo | 50,000 | 20,000 | 30,000
Total | 400,000 | 200,000 | 200,000
Table 4. Recording database.

Language | Total Number of Samples | Samples Containing Hate Speech | Control Group Samples (Non-Hate Speech)
English | 30,000 | 15,000 | 15,000
Swahili | 20,000 | 10,000 | 10,000
Total | 50,000 | 25,000 | 25,000
Table 5. Configuration of key parameters of the model.

Parameter | Dataset 1 | Dataset 2 | Remarks
Regularization parameter (C) | 1.0 | 1.0 | Controls model complexity
Kernel function | RBF | RBF | Radial Basis Function
Kernel parameter (γ) | 0.1 | 0.1 | Controls the width of the RBF kernel
Average cross-validation error | 0.0958 | 0.0716 | Based on 10-fold cross-validation
Table 6. Accuracy test results of dataset 2.

Number of Tests | This Article (%) | BERT (%) | HateBERT (%) | RF (%)
1 | 91.76 | 69.57 | 83.99 | 63.11
2 | 91.40 | 72.89 | 82.34 | 64.21
3 | 94.60 | 74.08 | 78.89 | 65.17
4 | 92.57 | 70.89 | 81.90 | 66.49
5 | 95.51 | 72.17 | 82.60 | 63.67
6 | 94.86 | 69.07 | 84.17 | 64.21
7 | 90.70 | 73.72 | 83.98 | 65.90
8 | 95.51 | 70.63 | 84.05 | 61.98
9 | 91.22 | 71.08 | 81.06 | 62.19
10 | 92.53 | 70.99 | 79.23 | 64.46
11 | 93.29 | 72.57 | 79.12 | 65.35
12 | 90.15 | 70.51 | 80.08 | 61.16
Table 7. F1 value of dataset 2.

Number of Tests | This Article (%) | BERT (%) | HateBERT (%) | RF (%)
1 | 91.76 | 68.77 | 82.87 | 64.24
2 | 90.22 | 70.27 | 82.64 | 64.77
3 | 92.35 | 72.32 | 81.29 | 63.13
4 | 92.37 | 70.30 | 81.41 | 65.42
5 | 93.15 | 70.43 | 80.84 | 62.58
6 | 93.12 | 70.62 | 82.90 | 63.86
7 | 90.43 | 72.24 | 81.93 | 64.32
8 | 93.84 | 70.63 | 83.08 | 62.88
9 | 90.50 | 71.45 | 80.10 | 62.49
10 | 90.73 | 70.57 | 80.79 | 62.87
11 | 91.15 | 70.20 | 79.59 | 63.39
12 | 91.86 | 69.79 | 81.96 | 63.05
Table 8. F1 values and error rates of different C and γ combinations in dataset 1.

C | γ | F1 Score | Error Rate
0.1 | 0.01 | 87.2% | 0.12
1.0 | 0.1 | 89.2% | 0.095
10 | 1.0 | 86.5% | 0.11
Table 9. Comprehensive performance comparison table.

Model | Accuracy (Dataset 1) | Accuracy (Dataset 2) | Remarks
SVM | 90.42% | 92.84% | Optimal performance; low latency (3.71 ms)
BERT | 72.95% | 71.51% | High computational cost; long inference time (9.63 ms)
HateBERT | 83.83% | 81.78% | Weaker generalization than SVM
RF | 64.82% | 63.99% | Low accuracy; high risk of overfitting
