Topic Editors

Dr. Junaid Baber
Laboratoire d'Informatique de Grenoble, University of Grenoble Alpes, 38000 Grenoble, France

Dr. Ali Shariq Imran
Department of Computer Science, NTNU - Norwegian University of Science and Technology, P.O. Box 191, 2802 Gjøvik, Norway

Prof. Dr. Sher Doudpota
Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan

Dr. Maheen Bakhtyar
SLIDE, Université Grenoble Alpes, 38401 Grenoble, France

Multimodal Sentiment Analysis Based on Deep Learning Methods Such as Convolutional Neural Networks

Abstract submission deadline
31 August 2024
Manuscript submission deadline
31 October 2024

Topic Information

Dear Colleagues,

This Topic is aimed at researchers working on large-scale data and real-world problem solving. Every social media app generates gigabytes of data that demand researchers' attention to identify useful patterns and information. Owing to globalization, social media content is shared and commented on by users from diverse backgrounds, so the data contain opinions in many languages on the same topics. The classical approach to text classification, i.e., sentiment analysis, relies mainly on NLP techniques tied to a single language. It is therefore important to propose models that can learn features from multilingual data. Submissions focused on theoretical advances as well as applications of sentiment analysis in daily life are invited. Topics of interest include, but are not limited to, the following areas:

  • Text classification;
  • Opinion mining;
  • Visualization of opinions;
  • Social network analysis for sentiment analysis;
  • Multi-modal learning for text classification;
  • Multi-lingual sentiment analysis;
  • Applications for sentiment analysis;
  • Explainable artificial intelligence for sentiment analysis;
  • Aspect-based sentiment analysis;
  • Hate speech detection;
  • Sarcasm and irony detection.

Dr. Junaid Baber
Dr. Ali Shariq Imran
Prof. Dr. Sher Doudpota
Dr. Maheen Bakhtyar
Topic Editors

Keywords

  •  text classification
  •  opinion mining
  •  visualization of opinions
  •  social network analysis for sentiment analysis
  •  multi-modal learning for text classification
  •  multi-lingual sentiment analysis
  •  applications for sentiment analysis
  •  explainable artificial intelligence for sentiment analysis
  •  deep learning for text classification

Participating Journals

Journal          Impact Factor   CiteScore   Launched   First Decision (median)   APC
Algorithms       2.3             3.7         2008       15 days                   CHF 1600
Axioms           2.0             -           2012       21.8 days                 CHF 2400
Future Internet  3.4             6.7         2009       11.8 days                 CHF 1600
Mathematics      2.4             3.5         2013       16.9 days                 CHF 2600
Symmetry         2.7             4.9         2009       16.2 days                 CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (4 papers)

20 pages, 6478 KiB  
Article
CSINet: Channel–Spatial Fusion Networks for Asymmetric Facial Expression Recognition
by Yan Cheng and Defeng Kong
Symmetry 2024, 16(4), 471; https://doi.org/10.3390/sym16040471 - 12 Apr 2024
Abstract
Occlusion or posture change of the face in natural scenes has typical asymmetry; however, an asymmetric face plays a key part in the lack of information available for facial expression recognition. To solve the problem of low accuracy of asymmetric facial expression recognition, this paper proposes a fusion of channel global features and a spatial local information expression recognition network called the “Channel–Spatial Integration Network” (CSINet). First, to extract the underlying detail information and deepen the network, the attention residual module with a redundant information filtering function is designed, and the backbone feature-extraction network is constituted by module stacking. Second, considering the loss of information in the local key area of face occlusion, the channel–spatial fusion structure is constructed, and the channel features and spatial features are combined to enhance the accuracy of occluded facial recognition. Finally, before the full connection layer, more local spatial information is embedded into the global channel information to capture the relationship between different channel–spatial targets, which improves the accuracy of feature expression. Experimental results on the natural scene facial expression data sets RAF-DB and FERPlus show that the recognition accuracies of the modeling approach proposed in this paper are 89.67% and 90.83%, which are 13.24% and 11.52% higher than that of the baseline network ResNet50, respectively. Compared with the latest facial expression recognition methods such as CVT, PACVT, etc., the method in this paper obtains better evaluation results of masked facial expression recognition, which provides certain theoretical and technical references for daily facial emotion analysis and human–computer interaction applications.
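The channel–spatial fusion idea in this abstract can be illustrated with a minimal sketch, not the authors' CSINet implementation: each channel is reweighted by a gate computed from its global average (channel attention), and each spatial position by a gate computed from the cross-channel mean (spatial attention), with the two applied multiplicatively. All function names here are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_spatial_fusion(fmap):
    """fmap: list of C channels, each an HxW grid (list of lists of floats).
    Channel attention: weight each channel by a sigmoid of its global mean.
    Spatial attention: weight each position by a sigmoid of its cross-channel mean.
    Returns the feature map reweighted by both gates, fused multiplicatively."""
    C = len(fmap)
    H, W = len(fmap[0]), len(fmap[0][0])
    # channel gates from global average pooling over each channel
    ch_w = [sigmoid(sum(sum(row) for row in ch) / (H * W)) for ch in fmap]
    # spatial gates from the mean across channels at each position
    sp_w = [[sigmoid(sum(fmap[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[fmap[c][i][j] * ch_w[c] * sp_w[i][j]
              for j in range(W)] for i in range(H)] for c in range(C)]
```

In a real network the gates would be learned (e.g., small MLPs or convolutions, as in CBAM-style modules) rather than fixed sigmoids of means; the sketch only shows how the two attention dimensions combine.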

13 pages, 1270 KiB  
Article
Multimodal Prompt Learning in Emotion Recognition Using Context and Audio Information
by Eunseo Jeong, Gyunyeop Kim and Sangwoo Kang
Mathematics 2023, 11(13), 2908; https://doi.org/10.3390/math11132908 - 28 Jun 2023
Cited by 3
Abstract
Prompt learning has improved the performance of language models by reducing the gap between the training methods of pre-training and downstream tasks. However, extending prompt learning in language models pre-trained with unimodal data to multimodal sources is difficult, as it requires additional deep-learning layers that cannot be attached. In the natural-language emotion-recognition task, improved emotional classification can be expected when using audio and text to train a model rather than only natural-language text. Audio information, such as voice pitch, tone, and intonation, provides cues unavailable in text and thus allows emotions to be predicted more effectively. Using both audio and text can therefore enable better emotion prediction in speech emotion-recognition models compared to semantic information alone. In this paper, in contrast to existing studies that use multimodal data with an additional layer, we propose a method for improving the performance of speech emotion recognition using multimodal prompt learning with text-based pre-trained models. The proposed method uses text and audio information in prompt learning by employing a language model pre-trained on natural-language text. In addition, we propose a method to improve the emotion-recognition performance on the current utterance using the emotions and contextual information of the previous utterances for prompt learning in speech emotion-recognition tasks. The performance of the proposed method was evaluated using the English multimodal dataset MELD and the Korean multimodal dataset KEMDy20. Experiments using both the proposed methods obtained an accuracy of 87.49%, an F1 score of 44.16, and a weighted F1 score of 86.28.
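The core of such a method is how the prompt is assembled so that a text-only masked language model can exploit audio and dialogue context. The following is a minimal sketch of that assembly step only, under the assumption that audio has been reduced to coarse textual descriptors; the function name, descriptor format, and `[MASK]` template are all illustrative, not the paper's actual prompt design.

```python
def build_emotion_prompt(utterance, audio_tags, history):
    """Compose a cloze-style prompt for a text-based masked language model.
    utterance: the current speaker turn (string).
    audio_tags: coarse descriptors extracted from the waveform, e.g. 'high pitch'.
    history: list of (utterance, predicted_emotion) pairs for prior turns,
             injecting the dialogue context and earlier emotion decisions.
    The [MASK] slot is where the model is asked to predict the emotion label."""
    context = " ".join(f'"{u}" was {e}.' for u, e in history)
    audio = ", ".join(audio_tags)
    return (f'{context} The speaker now says "{utterance}" '
            f"with {audio}. The emotion is [MASK].")
```

A fill-mask model would then score candidate emotion words ("happy", "angry", ...) at the `[MASK]` position, which is what lets a unimodal text model consume multimodal evidence without extra layers.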

21 pages, 1274 KiB  
Article
Quantum-Inspired Fully Complex-Valued Neural Network for Sentiment Analysis
by Wei Lai, Jinjing Shi and Yan Chang
Axioms 2023, 12(3), 308; https://doi.org/10.3390/axioms12030308 - 19 Mar 2023
Cited by 4
Abstract
Most of the existing quantum-inspired models are based on amplitude-phase embedding to model natural language, which maps words into Hilbert space. In quantum-computing theory, the vectors corresponding to quantum states are all complex-valued, so there is a gap between these two areas. Complex-valued neural networks have been studied, but their practical applications are few, let alone in downstream natural language processing tasks such as sentiment analysis and language modeling. In fact, a complex-valued neural network can use the imaginary part to embed hidden information and can express more complex information, which makes it suitable for modeling complex natural language. Meanwhile, quantum-inspired models are defined in Hilbert space, which is also a complex space, so it is natural to construct quantum-inspired models on top of complex-valued neural networks. We therefore propose a new quantum-inspired model for NLP, ComplexQNN, which contains a complex-valued embedding layer, a quantum encoding layer, and a measurement layer. The modules of ComplexQNN are fully based on complex-valued neural networks. It is more in line with quantum-computing theory and easier to transfer to quantum computers in the future to achieve exponential acceleration. We conducted experiments on six sentiment-classification datasets, comparing with five classical models (TextCNN, GRU, ELMo, BERT, and RoBERTa). The results show that our model improves the accuracy metric by 10% compared with TextCNN and GRU, and obtains competitive results against ELMo, BERT, and RoBERTa.
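The quantum-inspired pipeline the abstract names (complex embedding, then measurement) rests on two standard operations that can be shown concretely: normalizing a complex vector into a valid quantum state, and reading out Born-rule probabilities from it. This is a generic sketch of those two operations, not ComplexQNN itself; the layer structure and learned parameters of the actual model are omitted.

```python
import math

def normalize_state(amplitudes):
    """Scale a complex amplitude vector so that sum(|a_i|^2) = 1,
    making it a valid quantum state in the model's Hilbert space."""
    norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
    return [a / norm for a in amplitudes]

def measure(state):
    """Born-rule measurement layer: the probability of observing
    basis state i is the squared modulus of its amplitude."""
    return [abs(a) ** 2 for a in state]
```

In a sentiment model of this family, the measurement output over a label-sized basis would feed the classification loss, which is why the complex amplitudes (including the phase/imaginary part) can carry information that a real-valued embedding cannot.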

16 pages, 5311 KiB  
Article
Product Evaluation Prediction Model Based on Multi-Level Deep Feature Fusion
by Qingyan Zhou, Hao Li, Youhua Zhang and Junhong Zheng
Future Internet 2023, 15(1), 31; https://doi.org/10.3390/fi15010031 - 09 Jan 2023
Abstract
Traditional product evaluation research collects data through questionnaires or interviews to optimize product design, but the whole process takes a long time to deploy and cannot fully reflect the market situation. To address this problem, we propose a product evaluation prediction model based on multi-level deep feature fusion of online reviews. It mines product satisfaction from the massive reviews published by users on e-commerce websites and uses this information to analyze the relationship between design attributes and customer satisfaction, so that products can be designed around customer satisfaction. Our proposed model can be divided into the following four parts: First, the DSCNN (depthwise separable convolution) layer and pooling layer are used together to extract shallow features from the raw data. Second, CBAM (Convolutional Block Attention Module) is used to realize the dimensional separation of features, enhance the expressive ability of key features in the spatial and channel dimensions, and suppress the influence of redundant information. Third, BiLSTM (Bidirectional Long Short-Term Memory) is used to handle the complexity and nonlinearity of product evaluation prediction, and the predicted result is output through the fully connected layer. Finally, the hyperparameters of the model constructed above are optimized using the global optimization capability of the genetic algorithm. The final forecasting model consists of a series of decision rules that avoid model redundancy and achieve the best forecasting effect. The proposed method outperforms Support Vector Regression (SVR), DSCNN, BiLSTM, and DSCNN-BiLSTM on five evaluation indicators (MSE, MAE, RMSE, MAPE, and SMAPE). By predicting customers' emotional satisfaction, it can provide accurate decision-making suggestions for enterprises designing new products.
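The first stage of the pipeline above, a depthwise separable convolution, factors an ordinary convolution into a per-channel (depthwise) filter followed by a 1x1 cross-channel (pointwise) mix, cutting the parameter count. A minimal 1D sketch of that factorization, with hypothetical names and hand-set weights rather than the paper's learned layers:

```python
def depthwise_separable_conv1d(seq, depth_k, point_w):
    """seq: list of C channels, each a list of T floats.
    depth_k: one kernel (list of floats) per channel, applied to that
             channel only (the depthwise step).
    point_w: O x C weight matrix mixing channels at each position
             (the pointwise 1x1 convolution step)."""
    C, T = len(seq), len(seq[0])
    k = len(depth_k[0])
    t_out = T - k + 1  # 'valid' convolution, no padding
    # depthwise: independent valid convolution per channel
    dw = [[sum(seq[c][t + i] * depth_k[c][i] for i in range(k))
           for t in range(t_out)] for c in range(C)]
    # pointwise: linear mix of channels at each timestep
    return [[sum(point_w[o][c] * dw[c][t] for c in range(C))
             for t in range(t_out)] for o in range(len(point_w))]
```

Compared with a full convolution over C input channels, the factorized form needs C*k + O*C weights instead of O*C*k, which is the efficiency argument for using DSCNN layers on large review corpora.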
