Complex Network Modeling in Artificial Intelligence Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: closed (31 May 2024) | Viewed by 1626

Special Issue Editors


Guest Editor
School of Economics and Management, Harbin Institute of Technology, Harbin, China
Interests: network science; social networks; artificial intelligence; deep learning; financial risk; data mining; data-driven methods

Guest Editor
School of Economics and Management, Harbin Institute of Technology, Harbin, China
Interests: network science; prediction method; artificial intelligence; deep learning; fuzzy reasoning

Special Issue Information

Dear Colleagues,

We are pleased to announce that the Special Issue “Complex Network Modeling in Artificial Intelligence Applications” is now open for contributions. The combined modeling of complex networks and deep learning algorithms is currently a central topic in artificial intelligence research. The explicit structure of a complex network helps open the black box of a deep learning algorithm, making combined models more explainable and better suited to applications in fields such as computer vision, natural language processing, and speech recognition. Successful combined models, such as the graph neural network and its many variants, have made great progress on emerging complex tasks such as multimodal representation, knowledge reasoning, and interpretable decision making. Nevertheless, challenges remain, particularly regarding risk identification in finance, estimation of heterogeneous treatment effects in marketing, and smart route optimization in operations research. Making better use of graph structures and integrating network properties into deep learning design are promising ways to tackle these challenges.
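
To make the combination concrete, the sketch below shows a minimal graph-convolution step in which the normalized adjacency matrix of a network directly shapes how node features are transformed, i.e., network properties are folded into the deep learning design. It is purely illustrative and not taken from any work in this Special Issue; the class and function names are hypothetical, and PyTorch is assumed.

# Illustrative sketch only: one graph-convolution step in which the network's
# adjacency structure drives the feature update. Names are hypothetical.
import torch
import torch.nn as nn

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I, i.e., D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + torch.eye(adj.size(0))
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class GraphConvLayer(nn.Module):
    """One propagation step: aggregate neighbour features, then transform."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(adj_norm @ x))

# Toy usage: 5 nodes on a ring, 8 input features, 4 output features.
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
layer = GraphConvLayer(8, 4)
out = layer(torch.randn(5, 8), normalize_adjacency(adj))
print(out.shape)  # torch.Size([5, 4])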

This Special Issue is devoted to state-of-the-art developments in the combined modeling of complex networks and deep learning algorithms, as well as their applications. All related contributions are welcome, particularly those addressing novel structure design combining complex networks and deep learning algorithms; advances in link prediction, key node recognition, and community detection using deep learning methods; and applications of combined models in finance, marketing, operations research, and other areas of interest.

Prof. Dr. Yongli Li
Prof. Dr. Chong Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • complex networks
  • deep learning
  • graph neural network
  • novel structure design combining complex networks and deep learning
  • link prediction, key node recognition and community detection
  • applications in finance, marketing, operations research and other areas of interest

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research

23 pages, 1911 KiB  
Article
DFNet: Decoupled Fusion Network for Dialectal Speech Recognition
by Qianqiao Zhu, Lu Gao and Ling Qin
Mathematics 2024, 12(12), 1886; https://doi.org/10.3390/math12121886 - 17 Jun 2024
Viewed by 493
Abstract
Deep learning is often inadequate for achieving effective dialect recognition in situations where data are limited and model training is complex. Differences between Mandarin and dialects, such as varied pronunciation variants and distinct linguistic features, often cause a significant decline in recognition performance. In addition, existing work often overlooks the similarities between Mandarin and its dialects and fails to leverage these connections to improve recognition accuracy. To address these challenges, we propose the Decoupled Fusion Network (DFNet). The network extracts private and shared acoustic features of different languages through feature decoupling, which improves adaptation to both the uniqueness of and the similarity between these two speech patterns. In addition, we design a heterogeneous information-weighted fusion module to effectively combine the decoupled Mandarin and dialect features. This strategy leverages the similarity between Mandarin and its dialects, enables the sharing of multilingual information, and notably enhances the model’s recognition capability on low-resource dialect data. An evaluation of our method on the Henan and Guangdong datasets shows that DFNet improves performance by 2.64% and 2.68%, respectively. Extensive ablation experiments further demonstrate the effectiveness of the method.
(This article belongs to the Special Issue Complex Network Modeling in Artificial Intelligence Applications)
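
As a rough illustration of the decouple-then-fuse idea described in the abstract, the sketch below separates an encoder into private and shared branches and fuses two feature streams with a learned weight. It is a minimal, hypothetical PyTorch sketch, not the authors' DFNet implementation; all module names, dimensions, and the gating scheme are assumptions.

# Hypothetical sketch of decoupling into private/shared features plus a
# learned weighted fusion of Mandarin and dialect streams. NOT the authors'
# DFNet code; names and dimensions are assumed.
import torch
import torch.nn as nn

class DecoupledEncoder(nn.Module):
    """Splits an acoustic feature vector into private and shared parts."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.private = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.shared = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())

    def forward(self, x):
        return self.private(x), self.shared(x)

class WeightedFusion(nn.Module):
    """Fuses two feature streams with a learned, input-dependent weight."""
    def __init__(self, hid_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * hid_dim, 1), nn.Sigmoid())

    def forward(self, mandarin_feat, dialect_feat):
        w = self.gate(torch.cat([mandarin_feat, dialect_feat], dim=-1))
        return w * mandarin_feat + (1 - w) * dialect_feat

# Toy usage: a batch of 4 frames with 80-dimensional filterbank-style features.
enc_m, enc_d = DecoupledEncoder(80, 32), DecoupledEncoder(80, 32)
fusion = WeightedFusion(32)
_, shared_m = enc_m(torch.randn(4, 80))
_, shared_d = enc_d(torch.randn(4, 80))
print(fusion(shared_m, shared_d).shape)  # torch.Size([4, 32])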

16 pages, 530 KiB  
Article
A Negative Sample-Free Graph Contrastive Learning Algorithm
by Dongming Chen, Mingshuo Nie, Zhen Wang, Huilin Chen and Dongqi Wang
Mathematics 2024, 12(10), 1581; https://doi.org/10.3390/math12101581 - 18 May 2024
Viewed by 637
Abstract
Self-supervised learning is a machine learning paradigm that does not rely on manually labeled data; it learns from abundant unlabeled data by designing pretext tasks that use the input data itself as supervision, yielding more generalizable representations for downstream tasks. However, current self-supervised learning methods depend on the selection and number of negative samples and suffer from sample bias after graph data augmentation. In this paper, we investigate these problems and propose a graph contrastive learning algorithm that requires no negative samples. The model applies matrix sketching in the latent space for feature augmentation to reduce sample bias, and it is trained by minimizing the distance between the cross-correlation matrix of the two views and a constant target matrix. The method does not require negative samples, gradient stopping, or momentum updates to prevent self-supervised model collapse. Compared with 10 graph representation learning algorithms on four datasets for node classification tasks, the proposed algorithm achieves good results.
(This article belongs to the Special Issue Complex Network Modeling in Artificial Intelligence Applications)
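
The sketch below illustrates the negative-sample-free objective described in the abstract: the cross-correlation matrix of two view embeddings is pulled toward a constant target matrix, assumed here to be the identity (as in Barlow Twins-style objectives). This is not the authors' code; the function name, the off-diagonal weight, and the normalization details are assumptions.

# Illustrative sketch of a cross-correlation objective without negative
# samples. Assumes the constant target is the identity matrix; all names and
# hyperparameters are hypothetical.
import torch

def correlation_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 5e-3) -> torch.Tensor:
    """z1, z2: (num_nodes, dim) embeddings of the same nodes under two views."""
    n = z1.size(0)
    # Standardize each feature dimension so the cross-correlation is well scaled.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / n  # cross-correlation matrix of the two views
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # pull diagonal toward 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # push off-diagonals toward 0
    return on_diag + lam * off_diag

# Toy usage: embeddings of 100 nodes under two augmented graph views.
z_view1, z_view2 = torch.randn(100, 64), torch.randn(100, 64)
print(correlation_loss(z_view1, z_view2).item())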
