Symmetry and Asymmetry Studies on Graph Data Mining

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: closed (15 September 2022) | Viewed by 7998

Special Issue Editors


Dr. Dongxiao He
Guest Editor
School of Computer Science and Technology, Tianjin University, Tianjin 300072, China
Interests: graph data mining; graph machine learning

Dr. Dong Liu
Guest Editor
College of Computer and Information Engineering, Henan Normal University, Xinxiang, China
Interests: network mining; social network analysis

Special Issue Information

Graph data mining has become one of the most popular research topics in the field of data mining, spanning areas such as graph deep learning and graph neural networks. However, symmetry and/or asymmetry, key structural properties of complex graphs, are often ignored by state-of-the-art graph mining studies. To address this gap, this Special Issue focuses on new theories, approaches, models, and applications of graph mining on complex graph data under symmetry and/or asymmetry. Our goal is for this Special Issue to promote new approaches within the graph data mining community.

Dr. Dongxiao He
Dr. Dong Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international, peer-reviewed, open access, monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Community detection on symmetric and asymmetric graphs
  • Representation learning of symmetric and asymmetric graphs
  • Asymmetric attacks and defenses on graphs
  • Graph neural networks on dynamic graphs
  • Graph neural networks on heterogeneous graphs
  • Contrastive learning on graphs
  • New models and algorithms for text-rich and multilayer heterogeneous graphs
  • Theoretical studies of symmetric and asymmetric graph neural networks
  • Symmetric and asymmetric graph matching
  • Applications of mining graph relational data

Published Papers (4 papers)


Research

9 pages, 769 KiB  
Article
Channel Pruning Based on Joint Reconstruction Error for Neural Network
by Bin Li, Shimin Xiong and Huixin Xu
Symmetry 2022, 14(7), 1372; https://doi.org/10.3390/sym14071372 - 4 Jul 2022
Cited by 3 | Viewed by 1314
Abstract
In this paper, we propose a neural network channel pruning method based on the joint reconstruction error (JRE). To preserve the global discrimination ability of the pruned network, we propose the global reconstruction error; to preserve the integrity of information in the network's forward propagation, we propose the local reconstruction error. Through normalization, these two magnitude-mismatched losses are combined into the joint error. The baseline network and the pruned network are symmetric structures, and the importance of each channel in the pruned network is determined by the joint error between that channel and the corresponding channel in the baseline network. The method prunes channels according to this importance score and then restores accuracy, reducing the size of the network and speeding up inference without loss of accuracy. Experimental results show the effectiveness of the method: on the CIFAR-10 dataset, for example, pruning 50% of the channels of the VGG16 model yields a model whose accuracy is 0.46% higher than that of the original.
(This article belongs to the Special Issue Symmetry and Asymmetry Studies on Graph Data Mining)
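
The channel-importance idea is easy to sketch. Below is a minimal, hypothetical illustration of how a joint error might combine the normalized local (feature-map) and global (logit) reconstruction losses; the function and variable names are assumptions for illustration, not the authors' code.

```python
import torch

def joint_reconstruction_error(base_feat, pruned_feat, base_logits, pruned_logits):
    """Sketch of a JRE-style score for one candidate channel.

    local term:  feature-map mismatch between the pruned network and the
                 symmetric baseline network (forward-propagation integrity).
    global term: output-logit mismatch (global discrimination ability).
    Both are normalized so the magnitude-mismatched losses can be summed.
    """
    local_err = torch.sum((base_feat - pruned_feat) ** 2)
    global_err = torch.sum((base_logits - pruned_logits) ** 2)
    local_err = local_err / (torch.sum(base_feat ** 2) + 1e-8)
    global_err = global_err / (torch.sum(base_logits ** 2) + 1e-8)
    # a small joint error after removing a channel marks it as safe to prune
    return local_err + global_err
```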

11 pages, 363 KiB  
Article
A Modified Stein Variational Inference Algorithm with Bayesian and Gradient Descent Techniques
by Limin Zhang, Jing Dong, Junfang Zhang and Junzi Yang
Symmetry 2022, 14(6), 1188; https://doi.org/10.3390/sym14061188 - 9 Jun 2022
Viewed by 1613
Abstract
This paper introduces a novel variational inference (VI) method with Bayesian and gradient descent techniques. To facilitate the approximation of posterior distributions over model parameters, the Stein method has been used in Bayesian variational inference algorithms in recent years. Unfortunately, previous methods fail to explicitly describe the influence of the particles' history on their trajectories (denoted Q(x) in this paper) during the approximation, which is important information in the search for particles. In our method, Q(x) is considered in the design of the operator B_p, which increases the chance of jumping out of local optima, especially in the case of complex distributions. To address these issues, a modified Stein variational inference algorithm is proposed that makes the gradient descent of the Kullback–Leibler (KL) divergence more random. In our method, a group of particles approximates the target distribution by minimizing the KL divergence, which changes according to a newly defined kernelized Stein discrepancy. The usefulness of the proposed technique is demonstrated on four data sets, with Bayesian logistic regression used for classification. Statistical measures such as parameter estimates, classification accuracy, F1, and NRMSE are used to validate the algorithm's performance.
(This article belongs to the Special Issue Symmetry and Asymmetry Studies on Graph Data Mining)
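
As context for the method, the vanilla Stein variational gradient descent (SVGD) update that such algorithms build on can be sketched in a few lines. This shows only the standard baseline particle update, not the paper's history term Q(x) or its redefined kernelized Stein discrepancy.

```python
import numpy as np

def svgd_step(X, grad_logp, step=0.1, h=1.0):
    """One vanilla SVGD update on n particles X of shape (n, d).

    grad_logp(X) returns the score d log p(x)/dx at each particle; the
    update descends the KL divergence to the target p along the direction
    given by the RBF-kernelized Stein operator.
    """
    diff = X[:, None, :] - X[None, :, :]            # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff ** 2, axis=-1) / h)     # RBF kernel matrix
    attract = K.T @ grad_logp(X)                    # kernel-weighted scores
    repulse = -2.0 / h * np.einsum('ji,jid->id', K, diff)  # sum_j grad_{x_j} k(x_j, x_i)
    return X + step * (attract + repulse) / len(X)

# usage: 50 particles drifting toward a standard 2-D Gaussian
X = np.random.randn(50, 2) * 3.0
for _ in range(500):
    X = svgd_step(X, grad_logp=lambda x: -x)        # score of N(0, I)
```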

14 pages, 474 KiB  
Article
A Generative Model for Topic Discovery and Polysemy Embeddings on Directed Attributed Networks
by Bianfang Chai, Xinyu Ji, Jianglin Guo, Lixiao Ma and Yibo Zheng
Symmetry 2022, 14(4), 703; https://doi.org/10.3390/sym14040703 - 30 Mar 2022
Viewed by 1470
Abstract
Combining topic discovery with topic-specific word embeddings is a popular and powerful method for text mining in small collections of documents. However, existing research models only the contents of documents, which leads to the discovery of noisy topics. This paper proposes a generative model, the skip-gram topical word-embedding model (abbreviated steoLC), for asymmetric document link networks, where nodes correspond to documents and links are directed references between documents. It simultaneously improves topic discovery and polysemous word embeddings. Each skip-gram in a document is generated from the topic distribution of the document and the two word embeddings in the skip-gram; each directed link is generated from the hidden topic distribution of its source document node. For a given document, the skip-grams and links share a common topic distribution. An inference algorithm is designed to learn the model parameters by combining the expectation-maximization (EM) algorithm with negative sampling. Experimental results show that our method generates more useful topic-specific word embeddings and more coherent latent topics than state-of-the-art models.
(This article belongs to the Special Issue Symmetry and Asymmetry Studies on Graph Data Mining)
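
To make the generative story concrete, here is a small, hypothetical sketch of a topic-conditioned skip-gram scored with negative sampling. It is an illustrative stand-in for one term of a steoLC-style likelihood; all names are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def topical_skipgram_nll(center_vec, context_vec, negative_vecs, topic_vec):
    """Negative log-likelihood of one skip-gram under one topic.

    The center word's embedding is shifted by the topic embedding, giving
    a topic-specific (polysemous) word sense, then scored against the
    observed context word and k negatively sampled words.
    """
    sense = center_vec + topic_vec                         # topic-specific sense
    pos = np.log(sigmoid(sense @ context_vec))             # observed context
    neg = np.sum(np.log(sigmoid(-negative_vecs @ sense)))  # k sampled negatives
    return -(pos + neg)

# usage with random 16-dimensional embeddings and k = 5 negatives
rng = np.random.default_rng(0)
loss = topical_skipgram_nll(rng.normal(size=16), rng.normal(size=16),
                            rng.normal(size=(5, 16)), rng.normal(size=16))
```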

16 pages, 2490 KiB  
Article
Self-Supervised Graph Representation Learning via Information Bottleneck
by Junhua Gu, Zichen Zheng, Wenmiao Zhou, Yajuan Zhang, Zhengjun Lu and Liang Yang
Symmetry 2022, 14(4), 657; https://doi.org/10.3390/sym14040657 - 24 Mar 2022
Cited by 2 | Viewed by 2647
Abstract
Graph representation learning has become a mainstream method for processing network-structured data, and most graph representation learning methods rely heavily on label information for downstream tasks. Since labeled information is rare in the real world, adopting self-supervised learning for graph neural networks is a significant challenge. Existing graph neural network approaches attempt to maximize mutual information for self-supervised learning, which leaves a large amount of redundant information in the graph representation and thus degrades the performance of downstream tasks. The self-supervised graph information bottleneck (SGIB) model proposed in this paper therefore uses the symmetry and asymmetry of graphs to establish contrastive learning and introduces information bottleneck theory into the training loss. The model extracts the features common to both views and the features specific to each view by maximizing the mutual information between the local high-level representation of one view and the global summary vector of the other view, and it removes redundant information irrelevant to the target task by minimizing the mutual information between the local high-level representations of the two views. Extensive experiments on three public datasets and two large-scale datasets show that SGIB learns higher-quality node representations and improves on existing models for classical network analysis tasks, such as node classification and node clustering, in an unsupervised setting. A further experiment on deeper networks shows that SGIB can also alleviate the over-smoothing problem to a certain extent. We can therefore infer from these experiments that introducing information bottleneck theory to remove redundant information is an effective way to improve the performance of downstream tasks.
(This article belongs to the Special Issue Symmetry and Asymmetry Studies on Graph Data Mining)
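
The two-view objective can be sketched compactly. The following is a hypothetical illustration of an SGIB-style loss using a Deep-Graph-Infomax-style binary MI estimator and a crude surrogate penalty for the cross-view redundancy term; the names and the exact estimators are assumptions, not the published model.

```python
import torch
import torch.nn.functional as F

def sgib_style_loss(local_a, local_b, corrupt_a, corrupt_b,
                    summary_a, summary_b, beta=0.5):
    """Hypothetical SGIB-style objective for two graph views.

    local_*:   node-level representations of each view, shape (n, d)
    corrupt_*: representations of a corrupted (negative) graph, shape (n, d)
    summary_*: global summary vector of each view, shape (d,)
    """
    def mi_estimate(local, corrupt, summary):
        # DGI-style binary discriminator: real nodes vs. corrupted nodes
        pos = F.logsigmoid(local @ summary)
        neg = F.logsigmoid(-(corrupt @ summary))
        return torch.mean(pos + neg)

    # maximize MI between local reps of one view and the summary of the other
    cross_view = mi_estimate(local_a, corrupt_a, summary_b) \
               + mi_estimate(local_b, corrupt_b, summary_a)
    # crude surrogate for minimizing MI between the two local representations:
    # penalize their (squared) agreement to discard shared, task-irrelevant noise
    redundancy = torch.mean(torch.sum(local_a * local_b, dim=-1) ** 2)
    return -cross_view + beta * redundancy
```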
