Artificial Intelligence Applications in Complex Networks

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: 28 February 2025

Special Issue Editor


Dr. Rossana Mastrandrea, Guest Editor
Networks Unit, IMT School for Advanced Studies, Piazza San Francesco 19, 55100 Lucca, Italy
Interests: complex networks; graph theory; statistical physics; randomization techniques for graphs; higher-order interactions; social networks; economics; neuroscience

Special Issue Information

Dear Colleagues,

In recent years, network theory has become central to the interdisciplinary study of complex systems. Graph theory offers a simple tool for modeling the interactions among nonlinear dynamical units and for studying the collective, nontrivial patterns that emerge across different fields.

Artificial intelligence (AI), combined with the great volume of available data and advanced algorithms, provides an unprecedented opportunity to explore the features of complex systems by means of data-driven techniques. Indeed, AI makes use of neural networks, deep learning, and supervised and unsupervised machine learning to automate the building of analytical models and to improve learning and discovery from data.

This Special Issue aims to collect unpublished and original contributions from different fields and areas of interest that combine AI techniques and complex networks. Topics of interest include, but are not limited to:

  • Real-world applications to biophysical and socioeconomic phenomena;
  • Processes on hypergraphs and higher-order networks, and topological data analyses;
  • Community detection;
  • Data mining and evolutionary games on networks;
  • Cognitive processes.

Dr. Rossana Mastrandrea
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • complex networks
  • machine learning
  • graph neural networks
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

13 pages
Article
Graph Convolutional Network Design for Node Classification Accuracy Improvement
by Mohammad Abrar Shakil Sejan, Md Habibur Rahman, Md Abdul Aziz, Jung-In Baik, Young-Hwan You and Hyoung-Kyu Song
Mathematics 2023, 11(17), 3680; https://doi.org/10.3390/math11173680 - 26 Aug 2023
Cited by 2
Abstract
Graph convolutional networks (GCNs) provide an advantage in node classification tasks for graph-related data structures. In this paper, we propose a GCN model for enhancing the performance of node classification tasks. We design a GCN layer by updating the aggregation function using an updated value of the weight coefficient. The adjacency matrix of the input graph and the identity matrix are used to calculate the aggregation function. To validate the proposed model, we performed extensive experimental studies with seven publicly available datasets. The proposed GCN layer achieves results comparable to state-of-the-art methods. Even with a single layer, the proposed approach can achieve superior results.
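The abstract's specific weight-coefficient update is not reproduced here, but using the adjacency matrix together with the identity matrix in the aggregation corresponds to the standard self-loop construction of a GCN layer. Below is a minimal PyTorch sketch of such a layer, assuming the common symmetric normalization D^{-1/2}(A + I)D^{-1/2}; the class and argument names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Minimal GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    (A + I) adds self-loops via the identity matrix, mirroring the
    abstract's use of the adjacency and identity matrices in the
    aggregation; the paper's weight-coefficient update is not modeled.
    """
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)  # A + I
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)                  # diagonal of D^{-1/2}
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        return torch.relu(a_norm @ self.linear(x))
```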

13 pages
Article
Optimization Based Layer-Wise Pruning Threshold Method for Accelerating Convolutional Neural Networks
by Yunlong Ding and Di-Rong Chen
Mathematics 2023, 11(15), 3311; https://doi.org/10.3390/math11153311 - 27 Jul 2023
Cited by 1
Abstract
Among various network compression methods, network pruning has developed rapidly due to its superior compression performance. However, a trivial pruning threshold limits the compression performance of pruning. Most conventional pruning threshold methods are based on well-known hard or soft techniques that rely on time-consuming handcrafted tests or domain experience. To mitigate these issues, we propose a simple yet effective general pruning threshold method from an optimization point of view. Specifically, the pruning threshold problem is formulated as a constrained optimization program that minimizes the size of each layer. More importantly, our pruning threshold method, combined with conventional pruning methods, achieves better performance across various pruning scenarios on many advanced benchmarks. Notably, for the L1-norm pruning algorithm with VGG-16, our method achieves higher FLOPs reductions without requiring time-consuming sensitivity analysis. The compression ratio increases from 34% to 53%, a substantial improvement. Similar experiments with ResNet-56 reveal that, even for compact networks, our method achieves competitive compression performance without skipping any sensitive layers.
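The paper's constrained optimization program is not spelled out in the abstract; as a point of reference, the sketch below shows plain layer-wise L1-magnitude pruning in PyTorch, where each layer's threshold follows from a uniform keep ratio rather than from the paper's optimization. The prune_layerwise name and keep_ratio parameter are hypothetical, chosen for illustration.

```python
import torch
import torch.nn as nn

def prune_layerwise(model: nn.Module, keep_ratio: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in every conv layer.

    The paper derives a per-layer threshold from a constrained
    optimization program; here the threshold is simply the magnitude
    of the k-th largest weight implied by a uniform keep_ratio.
    """
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight.data
            k = max(1, int(w.numel() * keep_ratio))
            # per-layer threshold: magnitude of the k-th largest entry
            thresh = w.abs().flatten().kthvalue(w.numel() - k + 1).values
            w.mul_((w.abs() >= thresh).float())
```

For example, prune_layerwise(torchvision.models.vgg16(), keep_ratio=0.5) would sparsify each convolutional layer in place; note that a weight keep ratio is not the same quantity as the FLOPs compression ratio reported in the abstract.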

12 pages
Article
Learning Bilateral Clipping Parametric Activation for Low-Bit Neural Networks
by Yunlong Ding and Di-Rong Chen
Mathematics 2023, 11(9), 2001; https://doi.org/10.3390/math11092001 - 23 Apr 2023
Abstract
Among various network compression methods, network quantization has developed rapidly due to its superior compression performance. However, trivial activation quantization schemes limit the compression performance of network quantization. Most conventional activation quantization methods directly utilize rectified activation functions to quantize models, yet their unbounded outputs generally yield drastic accuracy degradation. To tackle this problem, we propose a comprehensive activation quantization technique, namely the Bilateral Clipping Parametric Rectified Linear Unit (BCPReLU), as a generalized version of all rectified activation functions, which limits the quantization range more flexibly during training. Specifically, trainable slopes and thresholds are introduced for both positive and negative inputs to find more flexible quantization scales. We theoretically demonstrate that BCPReLU has approximately the same expressive power as the corresponding unbounded version and establish its convergence in low-bit quantization networks. Extensive experiments on a variety of datasets and network architectures demonstrate the effectiveness of our trainable clipping activation function.
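A direct reading of the abstract suggests an activation with trainable slopes and clipping thresholds on both sides of zero. The PyTorch sketch below implements that reading; the initial parameter values are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn as nn

class BCPReLU(nn.Module):
    """Bilateral clipping parametric activation (sketch of the idea).

    Trainable slopes and clipping thresholds for both positive and
    negative inputs, per the abstract; initial values are illustrative.
    """
    def __init__(self, pos_slope: float = 1.0, neg_slope: float = 0.25,
                 pos_clip: float = 6.0, neg_clip: float = -6.0):
        super().__init__()
        self.pos_slope = nn.Parameter(torch.tensor(pos_slope))
        self.neg_slope = nn.Parameter(torch.tensor(neg_slope))
        self.pos_clip = nn.Parameter(torch.tensor(pos_clip))
        self.neg_clip = nn.Parameter(torch.tensor(neg_clip))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # positive branch: slope-scaled, clipped from above at pos_clip
        pos = torch.minimum(self.pos_slope * torch.clamp(x, min=0.0), self.pos_clip)
        # negative branch: slope-scaled, clipped from below at neg_clip
        neg = torch.maximum(self.neg_slope * torch.clamp(x, max=0.0), self.neg_clip)
        return pos + neg
```

Bounding both branches keeps the activation's output range finite, which is what makes a fixed low-bit quantization grid usable without the drastic accuracy loss the abstract attributes to unbounded rectifiers.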
