Article

Dynamic Scene Segmentation and Sentiment Analysis for Danmaku

Limin Li, Jie Jing and Peng Shi
National Center for Materials Service Safety, University of Science and Technology Beijing, No. 12 Kunlun Road, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4435; https://doi.org/10.3390/app15084435
Submission received: 11 March 2025 / Revised: 9 April 2025 / Accepted: 15 April 2025 / Published: 17 April 2025

Abstract

Danmaku analysis is important for understanding video content and user interaction. However, existing methods often treat individual comments in isolation and overlook the complex links between Danmaku and the video’s context. This paper presents an approach that combines shot segmentation based on Deep Convolutional Neural Networks (DCNN) with sentiment analysis based on the MacBERT model. First, videos are divided into coherent scenes according to detected scene changes. Then, a large corpus of Danmaku comments is collected and analyzed to build a domain-specific sentiment dictionary. On this basis, a new model, Danmaku-E, is constructed to detect and classify seven emotional categories in Danmaku comments. Aided by the expanded sentiment dictionary, the model achieves significantly improved performance, with accuracy rising from 94.58% to 95.37% and F1 score from 94.92% to 95.66%. Experimental results confirm that the expanded dictionary improves model performance across different architectures. In addition, the Apriori algorithm is used to mine and interpret associations between Danmaku comments and video content, providing deeper insight into user participation and emotional responses.

1. Introduction

The rapid evolution of online video platforms, particularly the widespread adoption of Danmaku, offers users a distinctive environment for interactive engagement and personal expression in near-real-time. However, the transient nature and large volume of Danmaku data present significant challenges for sentiment analysis. Current methods often struggle to effectively capture the nuanced and dynamic interactions between Danmaku comments and video content. Additionally, current studies fail to fully exploit the interaction between a video’s visual information and its associated Danmaku data. Bai et al. [1] underscored the importance of integrating visual information with Danmaku data, concluding that such an approach could substantially improve the accuracy of sentiment analysis by providing a more comprehensive understanding of user sentiments and engagement.
This study pioneers a deeper understanding of the emotional connection between video content and Danmaku data. We fundamentally re-engineered the shot segmentation technology, inspired by Soucek and Lokoc’s work [2] on shot transition detection, to merge video segments based on emotional coherence and visual similarity.
The main contributions of this paper are listed as follows:
(1)
A scene merging method based on average color histogram similarity is introduced to more accurately identify and merge visually similar consecutive scenes and reduce erroneous scene switch predictions.
(2)
An extended sentiment lexicon is combined with BERT (Bidirectional Encoder Representations from Transformers), and a novel fuzzy feature layer is introduced to refine sentiment quantification. This layer maps multidimensional sentiment scores into clear categories through fuzzy variables and membership functions, enhancing both the granularity and the interpretability of sentiment classification and addressing the ambiguity and complexity of emotional expressions.

2. Related Works

Sentiment analysis is a core task in natural language processing that aims to identify, extract, and quantify subjective information from text. On this basis, we examine advanced deep learning methods and pre-trained language models, which can capture intricate emotional signals and semantic nuances.

2.1. Scene Segmentation Technology

Scene segmentation is crucial when studying video content. It includes identifying visual boundaries and analyzing and encoding the emotions within the content.
Rao et al. [3] put forward a scene segmentation method that starts with local details and then moves to a global view. They aimed to address the challenging task of movie scene segmentation. Movie scenes contain more complex temporal structures and semantic information compared to videos in traditional vision media. To achieve their goal, Rao et al. created a large-scale video dataset called MovieScenes. This dataset has 21,000 annotated scene segments from 150 movies. Their proposed local-to-global scene segmentation framework integrates multi-modal information at three levels: clip, segment, and movie. First, it focuses on local details by analyzing clip-level information to capture subtle changes in scenes. Then, it gradually takes into account segment- and movie-level information to consider the overall structure and semantics. This method simplifies the complex meanings in long movies’ temporal structures. Experiments on the MovieScenes dataset show that their network can segment movies into scenes with high accuracy, outperforming previous methods. Also, pre-training on MovieScenes can significantly improve the performance of existing methods, promoting the development of scene segmentation technology in handling large-scale and complex video data.
Fu et al. [4] proposed the DRANet (Dual Relation-Aware Attention Network) for scene segmentation. In pixel-level recognition, effectively using context information is very important. They used the relation-aware attention mechanism to adaptively capture context information. Specifically, they added two types of attention modules on top of the dilated FCN (Fully Convolutional Network). One module models the contextual dependencies in the spatial dimension, and the other does the same in the channel dimension. In these attention modules, they used the self-attention mechanism. This allows each pixel or channel to gather context from all other pixels or channels based on their correlations. To reduce the high cost of calculation and memory caused by the pairwise association calculation, they designed two compact attention modules. In these modules, each pixel or channel only forms associations with a few gathering centers and obtains context aggregation from these centers. At the same time, they added a cross-level gating decoder. This decoder can selectively enhance spatial details, which improves the network’s performance. They conducted many experiments on four challenging scene segmentation datasets: Cityscapes, ADE20K, PASCAL Context, and COCO Stuff. The results show that their network is effective and achieves a new state-of-the-art segmentation performance. For example, on the Cityscapes test set, without using extra coarse-annotated data, it achieves a mean IoU score of 82.9%, providing a more effective solution for scene segmentation on complex datasets.
Soucek and Lokoc [2] presented TransNet V2, a deep network design for the rapid detection of shot transitions. It improved the accuracy of shot transition detection in complex video content, marking significant progress in the initial step of video analysis—shot transition detection—and providing a more reliable basis for subsequent video content analysis.

2.2. Deep Learning-Based Emotion Analysis

Deep learning methods, including CNNs (Convolutional Neural Networks), RNNs (Recurrent Neural Networks), LSTM (Long Short-Term Memory) networks, and attention mechanisms, have been applied successfully to emotion analysis [5]. They can capture complex emotional signals and semantic features.
Kossack and Unger [6] explored emotion-aware chatbots that understand, react, and adapt to human emotions in conversations. Lu and Wu [7] proposed a sentiment analysis method for film review texts based on sentiment dictionaries and SVM (Support Vector Machine) classification. First, they built basic sentiment dictionaries, dominant sentiment dictionaries, negative word dictionaries, and degree adverb dictionaries, and combined these four dictionaries into an expanded dictionary. Next, they constructed the SVM training set by combining sentiment weights and user scores. Finally, they ran sentiment classification experiments on test data. The results showed that their method achieves higher classification accuracy than methods based only on basic sentiment dictionaries, demonstrating its effectiveness for text sentiment analysis and offering a new direction for emotion analysis in the film review domain.
Devlin et al. [8] introduced the BERT pre-training model, which is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right contexts across all transformer layers. BERT’s pre-training mainly involves two tasks: Masked Language Modeling and Next Sentence Prediction. After pre-training, BERT only needs an additional output layer for fine-tuning, after which it can produce state-of-the-art models for many natural language processing tasks, such as question answering and language inference, without substantial task-specific architectural changes. BERT has achieved strong results on many benchmarks: it raised the GLUE score to 80.5 (an absolute improvement of 7.7 points), MultiNLI accuracy to 86.7% (an absolute improvement of 4.6%), the SQuAD v1.1 question answering F1 to 93.2 (an absolute improvement of 1.5 points), and the SQuAD v2.0 F1 to 83.1 (an absolute improvement of 5.1 points). Although BERT was not designed specifically to solve the long-distance dependency problem, its strong ability to model context provides solid support for addressing it, and future research can explore BERT-based solutions to this problem.
RoBERTa (Robustly Optimized BERT Approach) retains the original BERT architecture while introducing refinements to fully exploit BERT’s capabilities. Liu et al. [9] conducted detailed comparisons of various components within BERT, such as masking strategies and training steps, and improved BERT’s performance by training longer with larger batch sizes and longer sequences on a wider range of datasets, eliminating the next sentence prediction task, and adopting dynamic masking. MacBERT builds upon both BERT and RoBERTa, replacing the original MLM (Masked Language Modeling) task with an MLM as a correction (Mac) task to mitigate discrepancies between pre-training and fine-tuning. Xu et al. [10] revisited previously popular pre-trained language models and adapted them to Chinese.
In recent years, deep learning has provided powerful tools for sentiment analysis, specifically in the domain of Danmaku comments. Wang and Huang [11] developed a sentiment classification algorithm based on a modified Bayes model for Danmaku comments, highlighting advancements in model adaptability for specific comment types. Although their approach achieved good classification accuracy, it relied mainly on traditional machine learning methods and therefore struggled to fully capture the intricate emotional cues within Danmaku comments. Our model instead incorporates advanced deep learning techniques: we use Deep Convolutional Neural Networks (DCNN) for scene segmentation and the MacBERT model for sentiment analysis. This combination allows us not only to classify sentiments with greater precision but also to better understand the relationship between video content and user comments. Furthermore, by applying a fuzzy logic system and an extended sentiment lexicon, we enhance the granularity and interpretability of sentiment classification. Zhao et al. [12] used the MIBE-RoBERTa-FF-BiLSTM model for sentiment analysis of Danmaku videos, which can detect complex emotional signals in comments. Nevertheless, their method was oriented mainly towards text-based sentiment analysis and did not consider the visual information in the video. In contrast, our approach combines visual and textual data, yielding a more comprehensive understanding of user emotions. By segmenting videos into scenes and analyzing the emotional coherence between scenes and comments, our model better captures the nuances of user interactions, and by using the Apriori algorithm to mine association rules between video content and comments, we gain deeper insights into user behavior and emotional responses.
Rohidin et al. [13] proposed a Class-Based Fuzzy Soft Associative model that combines association rules with fuzzy soft set models and shows better accuracy and efficiency than conventional classifiers; this approach is particularly useful in text classification, where both time and accuracy matter. Cao et al. [14] developed VisDmk, an interactive visual analysis system for studying massive emotional Danmaku data from online videos. Association rules can also reveal links between specific expressions, slang, or abbreviations and particular sentiment trends, helping to identify patterns in audience emotional responses [15]. Relatedly, Tayaba et al. [16] showed how machine learning and association rule mining can be used to analyze sentiment on Twitter.
Liu et al. [17] performed aspect-level sentiment mining of Danmaku comments, building a framework that improves semantic understanding and shows potential to enhance viewing experiences. Ye et al. [18] investigated the information cues in Danmaku comments that trigger users’ emotional production in reaction videos. Nagao et al. [19] studied effective language representations for classifying Danmaku comments, using Nicopedia BERT to improve classification performance; however, their work was limited to language representation and did not investigate the interaction between video content and comments. Our model addresses this shortcoming by integrating video scene segmentation with sentiment analysis, which enables us to capture both the visual and the textual context of user interactions and leads to more accurate and nuanced sentiment classification. Moreover, our model’s capacity to handle mixed emotions and complex emotional expressions via the fuzzy logic system differentiates it from traditional language-based models. Li et al. [20] built a dictionary to analyze the sentiment of movie features based on Danmaku, providing a new and effective method for sentiment analysis.

2.3. Association Rules Based Emotion Analysis

Association rule mining, a common data mining technique, aims to discover meaningful and frequently occurring patterns in item sets. It has been applied effectively in emotion and sentiment analysis to uncover hidden patterns and trends within text data, providing useful information about user behavior, emotional tendencies, and feedback mechanisms.
Tayaba et al. [16] studied the application of machine learning and association rule mining in tweet analysis. Airlines pay much attention to improving customer experience, and Twitter is an important platform for passengers to share their opinions. In their research, they used the Glove dictionary and n-gram methods to extract features from tweets for word embedding. They explored different artificial neural network (ANN) architectures and support vector machines (SVM) to create a classification model that can classify tweets into positive and negative sentiment categories. They also developed a convolutional neural network (CNN) for tweet classification and compared its performance with the most accurate model among SVM and multiple ANN architectures. The results showed that the CNN model performed better. To gain more insights, they applied association rule mining to different tweet categories. For example, they found that certain word combinations, like the simultaneous appearance of “flight on time” and “good service”, are strongly associated with the positive sentiment category, while “flight delay” and “poor attitude” are related to the negative sentiment category. These findings are valuable for airlines to optimize their customer experience strategies.
Naznin et al. [21] proposed two methods, quantitative and fuzzy, to extract meaningful rules to reveal the associations between different emotion categories in users’ posts on social media platforms. They used a Twitter user dataset and classified emotions according to Ekman’s six emotion categories. They conducted several experiments with different minimum support and minimum confidence thresholds. In the quantitative method, they counted the frequencies of different emotion-related words in users’ posts and their co-occurrence situations to mine the association rules between emotion categories. For example, they found that when words expressing “joy” frequently appear in a user’s post, the probability of words expressing “anticipation” also increases. In the fuzzy method, they used fuzzy logic to deal with the uncertainty of emotion expressions and capture the fuzzy associations between emotion categories more flexibly. For example, for some ambiguous words, they determined their associations with different emotion categories through fuzzy membership functions. The experiments achieved promising results. Both methods can effectively identify the associations between different emotion categories in users’ tweets, which is of great significance for psychologists in studying people’s emotional psychology and personality.

2.4. Model Design Theory

In view of the shortcomings of traditional sentiment analysis methods in the field of Danmaku, such as low accuracy of text ambiguity resolution, poor consistency of sentiment annotation, and insufficient extraction of semantic features, the fusion model design aims to combine the advantages of different models to solve these problems. When processing irregular popular new words in Danmaku, the fusion model can leverage its integrated capabilities to better understand and interpret these emerging expressions, thereby improving the overall performance of sentiment analysis.
The components of a fusion model do not work independently; they cooperate to improve the overall performance of the model. In comparable text sentiment analysis scenarios, the RoBERTa-RNN fusion model achieves better results in tasks such as multi-label text sentiment analysis [22]. In the combined RoBERTa-CNN model, RoBERTa is responsible for the deep semantic encoding of Danmaku, converting text into vector representations with rich semantic information, while the CNN performs convolution operations on these vectors to further filter and combine features and highlight information related to emotion. For example, when processing Danmaku containing complex emotional expressions, RoBERTa understands the semantics of the text and the CNN extracts the local features that best represent the emotional tendency; together they improve the accuracy of sentiment classification [23]. The RoBERTa model, with its powerful pre-training capabilities, can deeply mine the semantic and structural information of Danmaku comments, while the BiLSTM model excels at capturing the contextual information of texts. In the RoBERTa-BiLSTM combination, RoBERTa is responsible for pre-training and extracting deep semantic information; its output word embedding vectors are processed by a feature fusion layer, enriched with fine-grained Chinese corpus information, and then fed into the BiLSTM model. The BiLSTM uses its bidirectional sequence modeling capability to extract features from the contextual information of the input text, compensating for RoBERTa’s limitations in modeling context. Together, the two effectively improve the model’s ability to analyze the sentiment tendency of Danmaku comments [24].
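To make the division of labor described above concrete, the following sketch outlines one possible RoBERTa-BiLSTM fusion classifier. It is a minimal illustration rather than the exact configuration used in [24]; the checkpoint name, hidden size, and the use of the first token’s state for classification are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RobertaBiLSTMClassifier(nn.Module):
    """RoBERTa encodes each Danmaku comment; a BiLSTM re-reads the token
    embeddings in both directions; a linear layer predicts one of 7 emotions."""

    def __init__(self, pretrained="hfl/chinese-roberta-wwm-ext", hidden=256, num_labels=7):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained)          # deep semantic encoding
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)   # contextual sequence modeling
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        token_states = self.encoder(input_ids=input_ids,
                                    attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(token_states)
        return self.classifier(lstm_out[:, 0, :])                     # state at the [CLS] position

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
batch = tokenizer(["这部电影真的很棒"], padding=True, return_tensors="pt")
logits = RobertaBiLSTMClassifier()(batch["input_ids"], batch["attention_mask"])
```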

3. Method

The detailed setup for studying sentiments in Danmaku comments is shown in Figure 1. The proposed approach is essentially a multimodal analysis method that integrates deep learning with fuzzy logic. Its core lies in the amalgamation of visual information from video scenes and semantic information from Danmaku comments. This method delves deeply into the genuine emotional feedback of users while watching videos, providing valuable insights for video content creators and platform operators. This approach aims to achieve not only more granular but also more precise sentiment classifications, allowing a detailed understanding of the emotional undercurrents in real-time video commentary. Finally, the Apriori algorithm is applied to mine association rules from the Danmaku data, uncovering deeper relationships between the comments and video content.

3.1. Video Scene Segmentation

When performing video scene segmentation, the integration of temporal and spatial features, such as color histogram similarity, is crucial for achieving more accurate segmentation.
From the perspective of temporal features, the model conducts a detailed analysis of video data frame by frame. Specifically, it monitors the content changes in each frame compared to the previous one. Significant changes over several consecutive frames might indicate a scene transition.
Regarding spatial features, color histogram similarity plays a vital role. When calculating the color histogram, the model uses the common RGB color space, dividing each color channel (R, G, B) into 256 bins, and then counts the number of pixels within each bin to generate the color histogram for that frame. For two adjacent frames, the similarity of their color histograms is computed to assess the degree of similarity in their color distributions.
Here, different segments in the video are identified based on changes in visual content, with a complete video being divided into a series of scene segments. By combining temporal and color histogram similarities, the model can more accurately identify and merge visually similar consecutive scenes, thereby reducing incorrect scene switch predictions.
Firstly, a prediction array is received, containing the model’s prediction of whether each frame is a scene switch point. These predictions are binary: 1 indicates a scene switch point and 0 a non-switch point.
Secondly, this array is iterated over to determine scene boundaries based on changes in the prediction values (from 0 to 1 or from 1 to 0). When a transition from switch frames back to non-switch frames (from 1 to 0) is detected, it marks the beginning of a new scene. Conversely, when a transition from non-switch frames to switch frames (from 0 to 1) is detected, the end of the current scene is recorded and the scene is added to the scene list.
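A minimal sketch of this boundary-extraction step is given below, assuming the per-frame predictions are available as a NumPy array; the function name and frame-indexing convention are illustrative.

```python
import numpy as np

def predictions_to_scenes(preds):
    """Convert a per-frame 0/1 transition prediction array into
    (start_frame, end_frame) scene intervals. Frames predicted as 1
    are treated as transition frames and excluded from the scenes."""
    scenes, start = [], 0
    for i in range(1, len(preds)):
        if preds[i - 1] == 0 and preds[i] == 1:    # a transition begins: close the current scene
            scenes.append((start, i - 1))
        elif preds[i - 1] == 1 and preds[i] == 0:  # the transition ends: a new scene begins
            start = i
    if preds[-1] == 0:                             # close the final scene
        scenes.append((start, len(preds) - 1))
    return scenes

print(predictions_to_scenes(np.array([0, 0, 0, 1, 1, 0, 0, 0])))  # [(0, 2), (5, 7)]
```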
Assuming a scene has M frames, the average histogram of the scene is shown in Formula (1).
$\bar{H} = \dfrac{1}{M} \sum_{i=1}^{M} H_i$  (1)
Once all scenes are preliminarily segmented, the length L (number of frames) of each scene is compared with the predefined threshold parameter. If the length of the scene is less than the minimum length defined by the min-scene-length parameter, the scene is considered too brief and is merged with either the previous or the next scene. We calculate the similarity of the average color histograms $\bar{H}_1$ and $\bar{H}_2$ of two adjacent scenes. The similarity S is shown in Formula (2).
$S = \dfrac{\sum_{i=1}^{N} (H_{1,i} - \bar{H}_1)(H_{2,i} - \bar{H}_2)}{\sqrt{\sum_{i=1}^{N} (H_{1,i} - \bar{H}_1)^2 \, \sum_{i=1}^{N} (H_{2,i} - \bar{H}_2)^2}}$  (2)
where $H_{1,i}$ and $H_{2,i}$ are the i-th bin values of the average histograms of the two adjacent scenes, $\bar{H}_1$ and $\bar{H}_2$ here denote the mean bin values of those histograms, and N is the number of histogram bins. If the color histogram similarity S between two adjacent scenes exceeds the defined threshold, the scenes are considered similar in color and are merged into a single scene. Using the above method, the model can accurately identify and merge visually similar consecutive scenes, thereby reducing incorrect scene switch predictions and achieving effective video scene segmentation.
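The sketch below illustrates Formulas (1) and (2) together with the merging rule, using OpenCV histograms. It is a simplified version of the procedure: the bin layout, the running update of the merged histogram, and merging only with the preceding scene are assumptions made for brevity.

```python
import cv2
import numpy as np

def average_histogram(frames, bins=256):
    """Formula (1): mean RGB histogram over the M frames of one scene."""
    hists = [np.concatenate([cv2.calcHist([f], [c], None, [bins], [0, 256]).ravel()
                             for c in range(3)]) for f in frames]
    return np.mean(hists, axis=0)

def histogram_similarity(h1, h2):
    """Formula (2): normalized correlation between two average histograms."""
    d1, d2 = h1 - h1.mean(), h2 - h2.mean()
    return float(np.sum(d1 * d2) / np.sqrt(np.sum(d1 ** 2) * np.sum(d2 ** 2)))

def merge_scenes(scenes, hists, min_scene_length=15, hist_similarity_threshold=0.7):
    """Merge a scene into its predecessor when it is too short or when the
    adjacent average histograms are sufficiently similar."""
    merged, merged_hists = [scenes[0]], [hists[0]]
    for scene, hist in zip(scenes[1:], hists[1:]):
        too_short = (scene[1] - scene[0] + 1) < min_scene_length
        similar = histogram_similarity(merged_hists[-1], hist) > hist_similarity_threshold
        if too_short or similar:
            merged[-1] = (merged[-1][0], scene[1])            # extend the previous scene
            merged_hists[-1] = (merged_hists[-1] + hist) / 2  # rough combined histogram
        else:
            merged.append(scene)
            merged_hists.append(hist)
    return merged
```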

3.2. Sentiment Classification

Sentiment classification in Danmaku comments is achieved through a series of steps that leverage an extended emotion dictionary and sophisticated fuzzy logic techniques. This ensures that the emotional expressions captured from the Danmaku text are both nuanced and precise.

3.2.1. Expanded Sentiment Dictionary

The sentiment dictionary is a central resource in our sentiment analysis framework, acting as a key tool for identifying and categorizing emotional expressions in Danmaku comments. Sentiment dictionaries typically consist of predefined affective words, each labeled with an emotional polarity (positive, negative, or neutral) and an intensity; these are essential for analyzing and understanding emotional tendencies in text. To enhance the effectiveness of sentiment analysis in the dynamic context of Danmaku comments, we significantly augment this standard dictionary.
This enlargement incorporates additional emotional expressions, such as common words, slang, and context-specific jargon which are frequently present in Danmaku comments. Language evolves rapidly on digital platforms, with these emerging terms reflecting the changing landscape of online communication. Therefore, including these elements in the dictionary brings it closer to the actual language used in Danmaku, thereby making our sentiment analysis model more responsive to changes in language use.
A deep, context-aware investigation of the Danmaku data uncovers many platform-specific expressions that carry the varied emotional content of user comments. These dictionary additions, which include both common expressions and platform-specific terms tied to the video content, enable the model to better understand online language. By enlarging the dictionary, we cover a wider and more detailed range of emotions, improving the model’s ability to detect and classify subtle emotional expressions.
Enriching the dictionary makes our sentiment analysis model more robust and adaptable, preparing it to handle the complex and heavily context-dependent nature of Danmaku comments and leading to more accurate sentiment identification in unseen text. This adaptability is key to keeping pace with the evolving language of digital platforms.
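For illustration, a dictionary entry can record the emotion category, polarity, and an intensity weight for each expression. The example words and scores below are hypothetical and only show one possible way to organize the extended lexicon.

```python
# Hypothetical extended-lexicon entries for Danmaku slang (illustrative values only).
sentiment_lexicon = {
    "泪目":  {"category": "sadness",     "polarity": "negative", "intensity": 0.8},  # "tearing up"
    "名场面": {"category": "happiness",   "polarity": "positive", "intensity": 0.7},  # "iconic scene"
    "awsl":  {"category": "beneficence", "polarity": "positive", "intensity": 0.9},  # common slang
}

def lookup(word):
    """Return the lexicon entry for a word, or a neutral default for unseen words."""
    return sentiment_lexicon.get(word, {"category": "neutral", "polarity": "neutral", "intensity": 0.0})

print(lookup("awsl")["category"])  # beneficence
```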

3.2.2. Model Design

With this enlarged sentiment dictionary, we build a sentiment analysis model called Danmaku-E, designed specifically to handle Danmaku comments. Figure 2 shows the architecture of the Danmaku-E model and illustrates how combining the expanded sentiment dictionary with advanced deep learning methods improves emotion classification.
BERT, a powerful deep learning architecture, excels at capturing the semantics and contextual relationships of natural language. It produces contextualized vector representations for each token (word or subword) in a Danmaku comment, allowing the model to capture both local and global relationships in the text. This yields a detailed picture of the comment’s meaning and can capture subtle emotional cues that might otherwise be overlooked.
The fuzzy feature layer serves as a key component in mapping multi-dimensional sentiment scores onto specific sentiment categories. It is built on the contextual vector representations of tokens generated by the BERT model. These vectors encapsulate rich semantic and contextual information, providing a crucial foundation for subsequent sentiment analysis.
We place the fuzzy feature layer after BERT to improve the model’s understanding of emotion. Its goal is to transform the BERT-generated token-level embeddings into fuzzy features that represent the degree of membership in different emotional categories. This layer is essential for dealing with the uncertainty and subjectivity inherent in human emotions: it allows partial membership in multiple emotional categories and therefore supports a more nuanced classification of feelings.
The fuzzy feature layer consists of fuzzy variables, membership functions, and fuzzy rules, which together define the emotional categories used for classification. We divide emotions into seven main categories: happiness, malice, surprise, sadness, fear, beneficence, and anger. Each category receives a numerical score between 0 and 1 indicating the intensity of the corresponding emotion. Using a fuzzy logic framework ensures that emotions are classified consistently and interpretably while accounting for different levels of intensity within each category.
To classify the intensity of each emotional score, three membership functions—low, medium, and high—are employed. The representation of the low membership function can be seen in Formula (3).
$\mu_{\text{low}}(x) = \begin{cases} 1 & \text{if } x \le 0 \\ \dfrac{0.5 - x}{0.5} & \text{if } 0 < x < 0.5 \\ 0 & \text{if } x \ge 0.5 \end{cases}$  (3)
The medium and high membership functions are set out in Formulas (4) and (5). These functions assist in representing the gradual transition in emotional intensity, capturing not only distinct emotions but also transitional states between different emotions. As a result, the model becomes more adept at discerning emotional subtleties.
$\mu_{\text{medium}}(x) = \begin{cases} 0 & \text{if } x \le 0 \\ \dfrac{x}{0.5} & \text{if } 0 < x < 0.5 \\ \dfrac{1 - x}{0.5} & \text{if } 0.5 \le x < 1 \\ 0 & \text{if } x \ge 1 \end{cases}$  (4)
$\mu_{\text{high}}(x) = \begin{cases} 0 & \text{if } x \le 0.5 \\ \dfrac{x - 0.5}{0.5} & \text{if } 0.5 < x < 1 \\ 1 & \text{if } x \ge 1 \end{cases}$  (5)
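The three membership functions can be transcribed directly, assuming the emotional score x has already been scaled to [0, 1]; a minimal sketch follows.

```python
def mu_low(x):
    """Formula (3): full membership for very low scores, none above 0.5."""
    if x <= 0.0:
        return 1.0
    return (0.5 - x) / 0.5 if x < 0.5 else 0.0

def mu_medium(x):
    """Formula (4): triangular membership peaking at x = 0.5."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return x / 0.5 if x < 0.5 else (1.0 - x) / 0.5

def mu_high(x):
    """Formula (5): no membership below 0.5, full membership at 1."""
    if x <= 0.5:
        return 0.0
    return (x - 0.5) / 0.5 if x < 1.0 else 1.0
```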
The output fuzzy variable final emotion is defined to represent the final emotion classification result. The output variable ranges from 0 to 7, corresponding to the seven emotion categories.
A set of fuzzy rules is defined based on the membership function of emotional scores. If one emotion has a high score and the adjacent emotion has a moderate score, then that emotion category is considered the final emotion. For each rule, the calculation formula for fuzzy reasoning is as shown in Formula (6).
$\text{Rule Strength} = \min\left(\mu_1(x_1), \mu_2(x_2), \ldots, \mu_N(x_N)\right)$  (6)
After computing the results of each rule, the centroid method is used for defuzzification. In this context, y represents the possible values of the final emotion output variable, which are used to determine the center of gravity across the aggregated membership functions. The calculation formula for the centroid method is as shown in Formula (7).
$y^{*} = \dfrac{\int y \, \mu_{\text{aggregate}}(y) \, dy}{\int \mu_{\text{aggregate}}(y) \, dy}$  (7)
Here, $\mu_{\text{aggregate}}(y)$ is the aggregation of the membership functions of all fuzzy rule results. The centroid method computes the weighted average of all possible output values y, yielding a crisp final decision on the emotional category. The seven sentiment scores of the sample Danmaku comments are presented in Table 1.
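The sketch below illustrates Formula (6) and a discretized version of the centroid defuzzification in Formula (7). The two-rule aggregate is a toy stand-in for the full rule base, and the triangular output sets are assumptions made to keep the example self-contained.

```python
import numpy as np

def rule_strength(memberships):
    """Formula (6): a rule fires with the minimum of its antecedent memberships."""
    return min(memberships)

def centroid_defuzzify(aggregate, y_min=0.0, y_max=7.0, steps=1000):
    """Formula (7): centroid of the aggregated output membership function,
    approximated on a discrete grid over the output range."""
    y = np.linspace(y_min, y_max, steps)
    mu = np.array([aggregate(v) for v in y])
    return float(np.sum(y * mu) / np.sum(mu)) if mu.sum() > 0 else 0.0

def toy_aggregate(y):
    """Max-aggregation of two fired rules, each clipping a triangular output set."""
    rule_1 = min(0.8, max(0.0, 1.0 - abs(y - 2.0)))  # rule pointing at category 2, strength 0.8
    rule_2 = min(0.3, max(0.0, 1.0 - abs(y - 5.0)))  # rule pointing at category 5, strength 0.3
    return max(rule_1, rule_2)

print(round(centroid_defuzzify(toy_aggregate), 2))   # crisp output pulled toward category 2
```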

3.3. Semantic Enhanced Apriori Algorithm

The strong association rules identified by this method not only satisfy traditional minimum support and confidence thresholds but also capture semantic relationships between items. The result is a set of association rules that reveal complex interdependencies between Danmaku attributes, providing valuable insights into user behavior and content dynamics. This approach not only improves the accuracy of the analysis but also enhances the understanding of nuanced patterns within the Danmaku data.
This research looks at the link between certain parts of video content (like climaxes and turning points) and changes in emotions and feelings associated with Danmaku. We want to understand if specific video sections regularly bring about certain emotional reactions. This study gives content makers deep insights into audience emotional feedback, letting them improve content and boost viewer interest.
We also look for behavior patterns among specific user groups by analyzing whether certain users regularly post Danmaku with unique feelings across different video parts. This gives valuable insights into the watching preferences and emotional response patterns of these user groups. The patterns we found can inform video recommendation systems, suggesting related sections based on users’ emotional responses. For example, when a user watches a particular section, the system can suggest other sections with similar emotional relevance.
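A minimal sketch of this mining step using the mlxtend implementation of Apriori is shown below. The transaction layout (each comment contributing its video segment and emotion label) and the item names are illustrative assumptions; the thresholds mirror those discussed in Section 4.4.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical transactions: one per Danmaku comment, pairing video segment and emotion label.
transactions = [
    ["segment_4", "happiness"],
    ["segment_7", "happiness"],
    ["segment_8", "happiness", "beneficence"],
    ["segment_8", "sadness"],
    ["segment_4", "happiness"],
]

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                      columns=encoder.columns_)

frequent = apriori(onehot, min_support=0.01, use_colnames=True)          # frequent item sets
rules = association_rules(frequent, metric="confidence", min_threshold=0.3)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```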

4. Experiments

4.1. Dataset

This research collects Danmaku comments from the Bilibili platform, one of the most widely used video-sharing platforms in China, using a web crawler that retrieves publicly available content. We focus on comments from widely viewed videos, such as TV series, variety shows, and movies, to ensure that the dataset reflects a broad range of user interactions. In total, we collect 3.9 million Danmaku comment entries, providing a large and varied set of data points that represent the emotional landscape of online comment sections.
To ensure that the sentiment labels are accurate and reliable, we ask eight video viewers to manually annotate a sample of Danmaku comments, assigning each to one of seven emotional categories: happiness, malice, surprise, sadness, fear, beneficence, and anger. This annotation yields a total of 55,378 labeled data points, which serve as the gold standard for training and testing the sentiment analysis model.
We divide the dataset into training and test sets with an 80:20 split, so the model is trained on the majority of the data and evaluated extensively on unseen data in the test set.

4.2. Scene Segmentation

To improve scene segmentation, we analyze color histograms. By comparing the color histograms of adjacent video segments, we can find segments that are visually similar and belong to the same scene, allowing them to be merged. This increases the coherence of the segmented video and ensures that the resulting clips represent meaningful video content more accurately.
Through multiple tests, we fine-tune the parameters related to scene segmentation, paying special attention to min-scene-length and hist-similarity-threshold. The chosen settings are intended to balance fine-grained detection against segmentation coherence. Table 2 reports the results of our scene-merging tests, showing how changes to these parameters affect the number of detected scenes.

4.3. Model Training

We initialize the model with pre-trained MacBERT weights to leverage the rich linguistic features learned during pre-training. We carefully adjust hyperparameters to optimize model performance. The key hyperparameters and their values are listed in Table 3. The model is trained for 25 epochs with a batch size of 64. We use the AdamW optimizer with a learning rate of 5 × 10−5 and a linear scheduler with a warmup ratio of 0.2. The training process is monitored to prevent overfitting, and early stopping is employed if the validation loss does not improve for a certain number of epochs.
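A sketch of this training setup with the Hugging Face Trainer is shown below. The hyperparameters follow Table 3, while the checkpoint name, the tiny in-line dataset, and the early-stopping patience are illustrative assumptions (and argument names such as eval_strategy may differ slightly across transformers versions).

```python
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer, EarlyStoppingCallback)

texts, labels = ["电影真的很棒", "时代的眼泪"], [0, 3]                 # tiny illustrative sample
tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-macbert-base")
enc = tokenizer(texts, truncation=True, padding=True, max_length=128)

class DanmakuDataset(torch.utils.data.Dataset):
    """Wrap tokenized comments and emotion labels for the Trainer."""
    def __init__(self, enc, labels):
        self.enc, self.labels = enc, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_dataset = eval_dataset = DanmakuDataset(enc, labels)

model = AutoModelForSequenceClassification.from_pretrained("hfl/chinese-macbert-base", num_labels=7)
args = TrainingArguments(
    output_dir="danmaku_e",
    num_train_epochs=25,
    per_device_train_batch_size=64,
    learning_rate=5e-5,                 # AdamW is the default optimizer
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    seed=42,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_dataset, eval_dataset=eval_dataset,
                  callbacks=[EarlyStoppingCallback(early_stopping_patience=3)])
trainer.train()
```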
To evaluate how different sentiment analysis models perform on Danmaku data, we conduct several experiments comparing multiple models with both the original and the expanded sentiment dictionaries. Table 4 shows the results, which highlight the performance gains obtained with the expanded dictionary. Of the models tested, Danmaku-E performs the best, reaching an accuracy of 94.58% and an F1 score of 94.92% with the original dictionary; with the expanded dictionary, these figures rise to 95.37% and 95.66%. This improvement shows how effectively the model exploits a larger vocabulary to detect and classify subtle emotional expressions in Danmaku comments.
The RoBERTa-BiLSTM model also performs well, achieving an accuracy of 93.67% and an F1 score of 93.83% with the original dictionary; the expanded dictionary improves its performance to an accuracy of 94.05% and an F1 score of 94.13%. Its strengths come from RoBERTa’s advanced pre-training combined with BiLSTM’s bidirectional sequence modeling, which together capture the complex context and structure of Danmaku text.
Similar improvements are observed for the RoBERTa-RNN model: with the expanded dictionary, its accuracy increases from 92.79% to 93.41% and its F1 score from 93.02% to 93.72%. The MacBERT-based models, such as MacBert-RCNN and MacBert-BiLSTM, also show significant improvements with the expanded dictionary; for example, the accuracy of MacBert-RCNN rises from 90.03% to 91.42% and its F1 score from 90.98% to 91.89%, demonstrating its efficiency in recognizing a wider range of sentiment-related features.
To verify the necessity of each component in scene segmentation and sentiment analysis, we design three sets of ablation experiments, removing the following parts and retraining and testing the model. The experimental results are shown in Table 5.
The ablation study clearly shows how each component affects model performance. When we remove the scene merging method based on color histogram similarity, accuracy drops by 2.16% and the F1 score by 2.21%. Without color guidance, the model makes more over-segmentation mistakes; the original method reduces these errors by 15%, confirming that it helps keep video scenes properly aligned.
Removing the extended sentiment lexicon causes smaller decreases (0.79% in accuracy, 0.74% in F1 score). The pre-trained RoBERTa model can still understand some slang terms but struggles with rare or new expressions. While the base model handles unusual language reasonably well, the specialized lexicon remains important for covering platform-specific terms.
The largest performance drop occurs when we remove the fuzzy feature layer (3.22% in accuracy, 3.26% in F1 score), with recall suffering most (a 3.29% decrease). Without this layer, the model has trouble with mixed emotions such as sarcasm and cannot effectively distinguish similar feelings: the confusion matrix shows 12% more confusions between emotions such as sadness and fear.
These results support our design decisions. The scene merging method helps segment videos correctly, the specialized lexicon covers internet language, and the fuzzy layer handles complex emotions. Since removing the fuzzy layer hurts performance the most, it is clearly the most important innovation for analyzing Danmaku comments.
Danmaku-E is used to subdivide the sentiment categories of Danmaku comments into seven distinct categories, providing a nuanced understanding of user emotions expressed in real-time comments. The model’s classification capabilities enable more precise sentiment analysis, which is crucial for accurately interpreting the emotional undertones of user interactions. Seven example comments and their corresponding sentiment categories are showcased in Table 6.
In the processing phase, the model conducts a thorough semantic analysis of each word within a comment. It calculates the membership of each word in different sentiment categories by integrating information from the sentiment dictionary and contextual data. To clarify how the sentiment classification boundary is determined, we can use “Bad Era” as an example. Initially, the model uses BERT to transform the entire comment into a vector representation, effectively capturing its semantic and contextual nuances. Following this, the fuzzy feature layer assesses the membership level of the comment within various sentiment categories using a membership function. In the case of “Bad Era”, the model combines “Bad” and “Era” with the sentiment dictionary to compute the membership function values. According to the fuzzy rules, since the membership for “Malice” is high and that for “Sadness” is medium, “Bad Era” is categorized under “Malice”.

4.4. Association Rules

4.4.1. Association Rule Mining

We conducted a correlation analysis between the Danmaku text and the video content. Since setting the threshold too high could prevent the identification of sufficient frequent item sets and association rules, we began our experimental process by mining frequent item sets with a minimum support threshold of 0.01. This threshold ensures that we only consider item sets appearing in at least 1% of transactions.

4.4.2. Iterative Adjustment of Parameters

To focus on more meaningful patterns, we excluded rules where both the antecedent and the consequent contained only a single item.
When analyzing the relationship between Danmaku comments and video segments, given the large number of users posting Danmaku comments, the dataset is substantial. As many users contribute to each video segment, a lower min_support value is set initially. To ensure statistical reliability and robustness of the correlations represented by the rules, we aim to set the min_threshold as high as possible.
We apply an iterative adjustment method to determine a suitable min_support value. The process begins with a low min_support value, and we observe the number of frequent item sets generated. If the number of generated rules is too large, we gradually increase the min_threshold until a satisfactory result is achieved. The iterative method enables fine-tuning to achieve a satisfactory balance between rule quantity and quality.
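One way to realize this iterative loop is sketched below, assuming the one-hot transaction frame and mlxtend functions from Section 3.3; the target rule count and step size are illustrative choices.

```python
from mlxtend.frequent_patterns import apriori, association_rules

def tune_min_threshold(onehot, min_support=0.01, min_threshold=0.1, step=0.1, max_rules=30):
    """Raise min_threshold until the number of retained rules becomes manageable."""
    frequent = apriori(onehot, min_support=min_support, use_colnames=True)
    while True:
        rules = association_rules(frequent, metric="confidence", min_threshold=min_threshold)
        # Drop rules where both the antecedent and the consequent contain a single item.
        rules = rules[(rules["antecedents"].apply(len) > 1) | (rules["consequents"].apply(len) > 1)]
        if len(rules) <= max_rules or min_threshold >= 1.0:
            return rules, min_threshold
        min_threshold = min(min_threshold + step, 1.0)  # too many rules: tighten confidence
```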
The primary advantage of this approach is the robustness and reliability of the associations, along with a manageable number of rules for practical analysis. However, a key limitation is the time and computational resources required for iterative adjustments.
Table 7 presents the iterative experimental results.

4.4.3. Results and Discussion

Figure 3 illustrates an example of the distribution of emotion types across 13 segmented video clips. A deeper color indicates higher frequency of the corresponding emotion.
Figure 4 illustrates an example of association rules generated using an initial low min_support value. This setting produces a large number of frequent item sets, resulting in an overwhelming number of association rules. Although this approach captures a broad range of user interactions and sentiments, the sheer volume of rules can be challenging to manage and interpret. The primary advantage is the comprehensive coverage of potential associations, but the downside lies in the increased complexity and potential noise within the data.
In Figure 5, the min_support value is increased to filter out less frequent item sets. This refinement reduces the number of association rules, making the data more manageable and the associations more meaningful. The primary advantage of this approach is a clearer and more focused set of rules, enhancing interpretability. However, a notable limitation is that some potentially significant but less frequent associations may be excluded.
Given the relatively small number of video segments and sentiment labels, specific item sets appear more frequently. Therefore, we set higher min_support and min_threshold values when exploring the relationship between the two. The method for determining suitable values is the same as described above. For clearer visualization, we generate a regular network diagram illustrating the relationship between video segments and sentiment labels.
As shown in Figure 6, we can see that the central nodes typically represent core items that are frequently associated with multiple other item sets. These core items are video segments with a high frequency of Danmaku comments of the same type of emotion, including 4, 7, and 8. Video segments 4, 7, and 8 may be video segments of the same type because there are strong correlations between video segments <8,4>, <8,7>, and <7,8>. This indicates a high probability of Danmaku comments in these segments expressing similar emotions. Based on this finding, other emotionally related movie clips can be recommended to users who have watched some of these movie clips.
Figure 7 visualizes the association rules between users posting Danmaku comments and video segment numbers, with the weight of each edge set to the confidence of the corresponding rule. The central video segments are those in which users post Danmaku most frequently; users who posted Danmaku in video segments 8, 9, and 11 have a higher probability of also posting Danmaku in the segments associated with them.

5. Conclusions

In this paper, we present an innovative method that combines scene segmentation techniques with Danmaku sentiment analysis. This method effectively tackles significant challenges in this research area.
Compared with sentiment analysis models based only on text, Danmaku-E achieves remarkable improvements in sentiment analysis accuracy and F1 score. With the extended sentiment lexicon, its accuracy reaches 95.37% and its F1 score 95.66%, a significant improvement over other models on these performance indicators.
We further validate the contributions of each component within our model by conducting an ablation study. Removing the scene segmentation feature results in a 2.16% decrease in accuracy and a 2.21% drop in the F1 score. Excluding the extended sentiment dictionary leads to a smaller decrease in accuracy (0.79%) and F1 score (0.74%). The most significant performance drop is observed when the fuzzy feature layer is removed, resulting in a 3.22% decrease in accuracy and a 3.26% decrease in the F1 score. These findings highlight the importance of each component within our model, supporting our design choices and demonstrating the robustness of our approach.
In addition to these improvements, the Apriori algorithm successfully identifies meaningful association rules between video content and Danmaku comments. In our experiments, we identify strong correlations between specific video segments and emotional responses. These correlations provide valuable insights into user behavior and content dynamics, enabling more accurate predictions of audience emotional tendencies.
Despite its strengths, the method has some limitations. The complexity of sentiment classification presents several challenges. Sentiment expressions in Danmaku comments are often intricate and diverse, with some comments simultaneously exhibiting multiple sentiment tendencies. When processing such mixed emotions, our model may demonstrate inaccuracies in classification. For instance, certain comments may contain both positive and negative sentiments, but the model might only recognize one sentiment tendency, potentially leading to biased results.
Regarding dataset limitations, the dataset utilized in this study predominantly originates from a specific video platform, which may not comprehensively represent the emotional expressions inherent to Danmaku across other platforms or different video types. Consequently, this limitation might impact the model’s generalization capability when applied to broader contexts or diverse scenarios.
The accuracy of scene segmentation also poses challenges. Although methods based on color histogram similarity and temporal features are employed to enhance segmentation precision, errors may persist in certain complex scenes, such as unclear scene transitions or gradual fades. These segmentation inaccuracies may adversely affect the subsequent sentiment analysis outcomes.
Furthermore, balancing model complexity with computational efficiency remains an issue. While the model excels in sentiment analysis accuracy, its computational complexity is relatively high. Specifically, when handling large-scale datasets, challenges such as extended processing time and substantial memory consumption may arise. This compromises the model’s applicability in real-time sentiment analysis scenarios, limiting its effectiveness in time-critical applications.
In summary, while our model offers substantial improvements in sentiment analysis for Danmaku comments, it is essential to recognize these limitations. Future work will focus on addressing these challenges by exploring more sophisticated methods for handling mixed emotions, expanding the diversity of training datasets, improving scene segmentation accuracy, optimizing model efficiency, and continuously updating the sentiment lexicon.

Author Contributions

Conceptualization, J.J.; Methodology, L.L.; Software, L.L.; Validation, L.L.; Investigation, J.J.; Resources, P.S.; Data curation, L.L.; Writing—original draft, L.L.; Writing—review & editing, J.J. and P.S.; Supervision, P.S.; Project administration, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by University of Science and Technology Beijing.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study, including the Danmaku comments and associated sentiment analysis results, primarily originate from publicly available content on the video platform. The dataset was collected using web crawling techniques and manually annotated by multiple reviewers to ensure the accuracy and reliability of the sentiment labels. The data used for training and testing the models are part of this annotated dataset. Due to the nature of the data collection process and the potential privacy concerns related to user-generated content, the disclosure of some detailed data may be limited. However, upon reasonable request and in compliance with relevant regulations, we will consider providing aggregated data results or anonymized subsets to support the reproducibility of the results of this research and further scientific exploration. To request the data from this study, please contact the corresponding author via email at m202211188@xs.ustb.edu.cn (L.L.).

Acknowledgments

We acknowledge the contributions of the eight video viewers who participated in the manual annotation of Danmaku comments, providing essential data for the training and validation of the sentiment analysis model.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bai, Q.; Wei, K.; Zhou, J.; Xiong, C.; Wu, Y.; Lin, X.; He, L. Entity-level sentiment prediction in Danmaku video interaction. J. Supercomput. 2021, 77, 9474–9493. [Google Scholar] [CrossRef]
  2. Soucek, T.; Lokoc, J. Transnet v2: An effective deep network architecture for fast shot transition detection. In Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne, VIC, Australia, 28 October–1 November 2024; pp. 11218–11221. [Google Scholar]
  3. Rao, A.; Xu, L.; Xiong, Y.; Xu, G.; Huang, Q.; Zhou, B.; Lin, D. A local-to-global approach to multi-modal movie scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10146–10155. [Google Scholar]
  4. Fu, J.; Liu, J.; Jiang, J.; Li, Y.; Bao, Y.; Lu, H. Scene Segmentation with Dual Relation-Aware Attention Network. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2547–2560. [Google Scholar] [CrossRef] [PubMed]
  5. Islam, S.; Kabir, M.N.; Ab Ghani, N.; Zamli, K.Z.; Zulkifli, N.S.A.; Rahman, M.; Moni, M.A. Challenges and future in deep learning for sentiment analysis: A comprehensive review and a proposed novel hybrid approach. Artif. Intell. Rev. 2024, 57, 1–79. [Google Scholar] [CrossRef]
  6. Kossack, P.; Unger, H. Emotion-Aware Chatbots: Understanding, Reacting and Adapting to Human Emotions in comments Conversations. In Proceedings of the International Conference on Autonomous Systems, Barcelona, Spain, 13–17 March 2023; Springer Nature: Cham, Switzerland, 2023; pp. 158–175. [Google Scholar]
  7. Lu, K.; Wu, J. Sentiment analysis of film review texts based on sentiment dictionary and SVM. In Proceedings of the 2019 3rd International Conference on Innovation in Artificial Intelligence, Suzhou, China, 15–18 March 2019; pp. 73–77. [Google Scholar]
  8. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186. [Google Scholar]
  9. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. arXiv 2019, arXiv:1907.11692. [Google Scholar]
  10. Xu, D.; Tian, Z.; Lai, R.; Kong, X.; Tan, Z.; Shi, W. Deep learning based emotion analysis of microblog texts. Inf. Fusion 2020, 64, 1–11. [Google Scholar] [CrossRef]
  11. Wang, Z.; Huang, G. Sentiment classification algorithm of Danmaku comment based on modified Bayes model. In Proceedings of the 2021 4th International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 28–31 May 2021; pp. 342–346. [Google Scholar]
  12. Zhao, J.; Liu, H.; Wang, Y.; Zhang, W.; Zhang, X.; Li, B.; Sun, T.; Qi, Y.; Zhang, S. Sentiment analysis of video Danmakus based on MIBE-RoBERTa-FF-BiLSTM. Sci. Rep. 2024, 14, 5827. [Google Scholar] [CrossRef] [PubMed]
  13. Rohidin, D.; Samsudin, N.A.; Deris, M.M. Association rules of fuzzy soft set based classification for text classification problem. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 801–812. [Google Scholar] [CrossRef]
  14. Cao, S.; Guo, D.; Cao, L.; Li, S.; Nie, J.; Singh, A.K.; Lv, H. VisDmk: Visual analysis of massive emotional danmaku in online videos. Vis. Comput. 2023, 39, 6553–6570. [Google Scholar] [CrossRef]
  15. Gutiérrez Espinoza, L.; Keith Norambuena, B. Evaluating semantic representations for extended association rules. Intell. Data Anal. 2022, 26, 1341–1357. [Google Scholar] [CrossRef]
  16. Tayaba, M.; Ayon, E.H.; Mia, M.T.; Sarkar, M.; Ray, R.K.; Chowdhury, M.S.; Al-Imran, M.; Nobe, N.; Ghosh, B.P.; Islam, M.T.; et al. Transforming Customer Experience in the Airline Industry: A Comprehensive Analysis of Twitter Sentiments Using Machine Learning and Association Rule Mining. J. Comput. Sci. Technol. Stud. 2023, 5, 194–202. [Google Scholar] [CrossRef]
  17. Liu, J.; Zhou, Z.; Gao, M.; Tang, J.; Fan, W. Aspect sentiment mining of short bullet screen comments from online TV series. J. Assoc. Inf. Sci. Technol. 2023, 74, 1026–1045. [Google Scholar] [CrossRef]
  18. Ye, X.; Zhao, Y.; Li, J.; Zhang, Y.; Hansen, P. Exploring the Information Cues of Danmaku Comments to Stimulate Users’ Affective Generation in Reaction Videos. Proc. Assoc. Inf. Sci. Technol. 2023, 60, 723–727. [Google Scholar] [CrossRef]
  19. Nagao, H.; Tamura, K.; Katsurai, M. Effective Language Representations for Danmaku Comment Classification in Nicovideo. IEICE Trans. Inf. Syst. 2023, 106, 838–846. [Google Scholar] [CrossRef]
  20. Li, J.; Li, Y. Constructing dictionary to analyze features sentiment of a movie based on Danmakus. In Proceedings of the International Conference on Advanced Data Mining and Applications, Dalian, China, 21–23 November 2019; Springer: Cham, Switzerland, 2019; pp. 474–488. [Google Scholar]
  21. Naznin, F.; Hazarika, I.; Laskar, D.; Mahanta, A.K. Mining association between different emotion classes present in users posts of social media. Soc. Netw. Anal. Min. 2024, 14, 1–18. [Google Scholar] [CrossRef]
  22. Xu, H.; Hou, X. A method based on Roberta_Seq2Seq for chinese text multi label sentiment analysis. In Proceedings of the 2022 International Conference on Machine Learning and Knowledge Engineering (MLKE), Guilin, China, 25–27 February 2022; pp. 88–92. [Google Scholar]
  23. Cojocaru, A.; Paraschiv, A.; Dascalu, M. News-RO-Offense-A Romanian Offensive Language Dataset and Baseline Models Centered on News Article Comments. In Proceedings of the RoCHI 2022, Craiova, Romania, 6–7 October 2022; pp. 65–72. [Google Scholar]
  24. Rahman, M.M.; Shiplu, A.I.; Watanobe, Y.; Alam, M.A. Roberta-bilstm: A context-aware hybrid model for sentiment analysis. arXiv 2024, arXiv:2406.00367. [Google Scholar]
Figure 1. The framework of scene-segmentation-based sentiment analysis for Danmaku.
Figure 2. The framework diagram of Danmaku-E.
Figure 3. An example of an emotion distribution heatmap.
Figure 4. An example of a heatmap with 24 association rules.
Figure 5. An example of a heatmap with four association rules.
Figure 6. An example of a rule network diagram of video segments and emotional labels.
Figure 7. An example of visualization of association rules between users and video segments.
Table 1. Examples of sentiment analysis for Danmaku comments.

Emotion | Just Be Happy with Each Other | This Is Already Good | Unlucky Guy
Beneficence | 0.0415 | 0.9365 | 0.0147
Malice | 0.0084 | 0.0160 | 0.6822
Happiness | 0.9364 | 0.0331 | 0.0025
Sadness | 0.0089 | 0.0112 | 0.0399
Fear | 0.0011 | 0.0013 | 0.2288
Anger | 0.0011 | 0.0007 | 0.0208
Surprise | 0.0023 | 0.0009 | 0.0108
Table 2. Test Experiment of Merging Scenes.

Min-Scene-Length | Hist-Similarity-Threshold | Number of Scenes
14 | 0.6 | 15
14 | 0.7 | 14
14 | 0.8 | 13
15 | 0.6 | 14
15 | 0.7 | 13
15 | 0.8 | 12
16 | 0.6 | 12
16 | 0.7 | 11
16 | 0.8 | 10
Table 3. Parameter Settings.

Item | Parameter
Batch size | 64
Vocab size | 21,128
Maximum sequence length | 128
Optimizer | adamw_torch
Scheduler | linear
Seed | 42
Learning rate | 5 × 10−5
Warmup ratio | 0.2
Epoch | 25
Table 4. Comparison of experimental results of models (%).

Model | Original Dictionary Accuracy | Original Dictionary F1 | Expanded Dictionary Accuracy | Expanded Dictionary F1
WoBert | 83.43 | 83.92 | 84.01 | 84.23
MacBert | 83.85 | 84.09 | 84.67 | 85.11
MacBert-CNN | 87.23 | 87.91 | 88.23 | 89.19
MacBert-GCN | 87.56 | 88.25 | 87.99 | 88.31
MacBert-BiLSTM | 88.19 | 89.52 | 89.04 | 89.68
MacBert-RNN | 89.11 | 89.42 | 90.52 | 90.58
MacBert-RCNN | 90.03 | 91.39 | 91.42 | 92.13
RoBERTa-CNN | 91.08 | 91.93 | 92.08 | 92.47
RoBERTa-RNN | 92.79 | 93.24 | 93.49 | 94.10
RoBERTa-BiLSTM | 91.58 | 93.42 | 94.30 | 94.70
Danmaku-E | 94.58 | 94.92 | 95.37 | 95.66
Table 5. Ablation Study on Key Components of the Danmaku-E (%).

Ablation Group | Accuracy | F1 | Recall | Precision
Danmaku-E (Baseline) | 95.37 | 95.66 | 95.84 | 95.10
Remove Scene Segmentation | 93.21 (−2.16) | 93.45 (−2.21) | 93.60 (−2.24) | 93.05 (−2.05)
Remove Extended Dictionary | 94.58 (−0.79) | 94.92 (−0.74) | 95.10 (−0.74) | 94.58 (−0.52)
Remove Fuzzy Feature Layer | 92.15 (−3.22) | 92.40 (−3.26) | 92.55 (−3.29) | 92.10 (−3.00)
Table 6. An Example of the Correlation between Danmaku Text and Emotional Tags.

English Text | Emotional Label
Movie Really Very Great | Beneficence
Happiness Most Important | Happiness
Ah Actually Have This | Surprise
Bad Era | Malice
Soul All Be Scared | Fear
Workers Angry | Anger
The Whole Play The Worst | Sadness
Table 7. Comparison of the number of association rules under different thresholds.

Min_Support | Min_Threshold | Number of Rules (Number of Items = 2)
0.05 | 0.1 | 4
0.04 | 0.1 | 8
0.03 | 0.1 | 10
0.02 | 0.1 | 12
0.01 | 0.1 | 24
0.01 | 0.2 | 15
0.01 | 0.3 | 5