Graph and Geometric Deep Learning

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 October 2024 | Viewed by 587

Special Issue Editors


Dr. Alessandro Rozza
Guest Editor
lastminute.com Group, Vicolo de Calvi, 2, 6830 Chiasso, Switzerland
Interests: pattern recognition; machine learning; deep learning; graph neural networks and their applications

Dr. Lorenzo Seidenari
Guest Editor
Department of Information Engineering, University of Florence, Via di Santa Marta, 3, 50139 Firenze, Italy
Interests: machine learning; pattern recognition; computer vision

Special Issue Information

Dear Colleagues,

Graph Neural Networks (GNNs) have risen in popularity and have become pivotal in interpreting the rich data encoded in graph structures.

This Special Issue aims to delve into pioneering GNN methodologies and their expansive applications, which are redefining the limits of artificial intelligence.

We welcome contributions that confront challenges such as over-smoothing, scalability, and generalization, particularly emphasizing transfer learning and few-shot learning within the realm of graph domains. Moreover, we are particularly interested in submissions that demonstrate the beneficial application of GNNs in diverse fields, including but not limited to bioinformatics, social network analysis, recommendation systems, and computer vision. Submissions could also address advancements in the interpretability and explainability of GNNs, which are crucial for their integration into areas where decision-making is sensitive and outcomes are critical.
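As an illustrative aside (not part of the call itself), the over-smoothing challenge mentioned above can be demonstrated in a few lines: repeated neighborhood averaging, the core operation of many GNN layers, drives all node features toward a common value. The graph, features, and propagation rule below are hypothetical toy choices.

```python
import numpy as np

# A toy 4-node path graph with self-loops, and random node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)          # add self-loops
d = A_hat.sum(axis=1)
P = A_hat / d[:, None]         # row-normalized propagation matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))    # hypothetical node feature matrix

def spread(X, k):
    """Mean per-dimension variance across nodes after k propagation steps."""
    Y = X.copy()
    for _ in range(k):
        Y = P @ Y              # one round of neighbor averaging
    return Y.var(axis=0).mean()

print(spread(X, 1))    # variance across nodes is still substantial
print(spread(X, 50))   # variance collapses toward 0: features over-smoothed
```

With many propagation steps, all rows of the feature matrix converge to the same vector, so node representations become indistinguishable; this is the effect that deep GNN architectures must counteract.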

Finally, considering the recent growth in generative AI, this call for papers also seeks contributions focused on generative models harnessing graphs.

Dr. Alessandro Rozza
Dr. Lorenzo Seidenari
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph neural networks
  • graph representation learning
  • geometric deep learning

Published Papers (1 paper)


Research

32 pages, 796 KiB  
Article
Universal Local Attractors on Graphs
by Emmanouil Krasanakis, Symeon Papadopoulos and Ioannis Kompatsiaris
Appl. Sci. 2024, 14(11), 4533; https://doi.org/10.3390/app14114533 - 25 May 2024
Viewed by 197
Abstract
Being able to express broad families of equivariant or invariant attributed graph functions is a popular measuring stick of whether graph neural networks should be employed in practical applications. However, it is equally important to find deep local minima of losses (i.e., produce outputs with much smaller loss values compared to other minima), even when architectures cannot express global minima. In this work we introduce the architectural property of attracting optimization trajectories to local minima as a means of achieving smaller loss values. We take first steps in satisfying this property for losses defined over attributed undirected unweighted graphs with an architecture called universal local attractor (ULA). This refines each dimension of end-to-end-trained node feature embeddings based on graph structure to track the optimization trajectories of losses satisfying some mild conditions. The refined dimensions are then linearly pooled to create predictions. We experiment on 11 tasks, from node classification to clique detection, on which ULA is comparable with or outperforms popular alternatives of similar or greater theoretical expressive power.
(This article belongs to the Special Issue Graph and Geometric Deep Learning)