Article

Graph-to-Text Generation with Bidirectional Dual Cross-Attention and Concatenation

by Elias Lemuye Jimale 1,2, Wenyu Chen 1,*, Mugahed A. Al-antari 3,*, Yeong Hyeon Gu 3,*, Victor Kwaku Agbesi 1, Wasif Feroze 1, Feidu Akmel 4, Juhar Mohammed Assefa 1 and Ali Shahzad 1
1 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 College of Electrical Engineering and Computing, Adama Science and Technology University, Adama 1888, Ethiopia
3 Department of Artificial Intelligence and Data Science, College of AI Convergence, Sejong University, Seoul 05006, Republic of Korea
4 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(6), 935; https://doi.org/10.3390/math13060935
Submission received: 14 February 2025 / Revised: 6 March 2025 / Accepted: 7 March 2025 / Published: 11 March 2025
(This article belongs to the Section E1: Mathematics and Computer Science)

Abstract

Graph-to-text generation (G2T) involves converting structured graph data into natural language text, a task made challenging by the need for encoders to capture the entities and their relationships within the graph effectively. While transformer-based encoders have advanced natural language processing, their reliance on linearized data often obscures the complex interrelationships in graph structures, leading to structural loss. Conversely, graph attention networks excel at capturing graph structures but lack the pre-training advantages of transformers. To leverage the strengths of both modalities and bridge this gap, we propose a novel bidirectional dual cross-attention and concatenation (BDCC) mechanism that integrates outputs from a transformer-based encoder and a graph attention encoder. The bidirectional dual cross-attention computes attention scores bidirectionally, allowing graph features to attend to transformer features and vice versa, effectively capturing inter-modal relationships. The concatenation is applied to fuse the attended outputs, enabling robust feature fusion across modalities. We empirically validate BDCC on PathQuestions and WebNLG benchmark datasets, achieving BLEU scores of 67.41% and 66.58% and METEOR scores of 49.63% and 47.44%, respectively. The results outperform the baseline models and demonstrate that BDCC significantly improves G2T tasks by leveraging the synergistic benefits of graph attention and transformer encoders, addressing the limitations of existing approaches and showcasing the potential for future research in this area.
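The fusion mechanism described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction from the abstract's description alone, not the authors' implementation: the class name `BDCCFusion` and all hyperparameters are assumptions, and standard multi-head attention stands in for whatever attention variant the paper actually uses. It shows the two directions of cross-attention (graph features attending to transformer features, and vice versa) followed by concatenation of the attended outputs.

```python
import torch
import torch.nn as nn


class BDCCFusion(nn.Module):
    """Hypothetical sketch of bidirectional dual cross-attention
    and concatenation (BDCC), reconstructed from the abstract."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        # Graph features as queries, transformer features as keys/values.
        self.graph_to_trans = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Transformer features as queries, graph features as keys/values.
        self.trans_to_graph = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Project the concatenated attended outputs back to the model dimension.
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, graph_feats: torch.Tensor, trans_feats: torch.Tensor) -> torch.Tensor:
        # Direction 1: graph features attend to transformer features.
        g_att, _ = self.graph_to_trans(graph_feats, trans_feats, trans_feats)
        # Direction 2: transformer features attend to graph features.
        t_att, _ = self.trans_to_graph(trans_feats, graph_feats, graph_feats)
        # Concatenate along the feature dimension and fuse.
        fused = torch.cat([g_att, t_att], dim=-1)
        return self.proj(fused)


# Toy usage: batch of 2, sequence length 10, hidden size 64.
graph_out = torch.randn(2, 10, 64)   # stands in for the graph-attention encoder output
trans_out = torch.randn(2, 10, 64)   # stands in for the transformer encoder output
fused = BDCCFusion(dim=64)(graph_out, trans_out)
print(fused.shape)  # torch.Size([2, 10, 64])
```

In this sketch the concatenation assumes both encoders produce sequences of equal length; handling mismatched lengths (e.g. by pooling or padding) would be an implementation detail the abstract does not specify.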
Keywords: data-to-text generation; graph-to-text generation; graph neural network; graph attention; knowledge graph; language models; natural language generation; text generation; cross-attention

Correction Statement

This article has been republished with a minor correction to the Funding statement and Acknowledgments. This change does not affect the scientific content of the article.

Share and Cite

MDPI and ACS Style

Jimale, E.L.; Chen, W.; Al-antari, M.A.; Gu, Y.H.; Agbesi, V.K.; Feroze, W.; Akmel, F.; Assefa, J.M.; Shahzad, A. Graph-to-Text Generation with Bidirectional Dual Cross-Attention and Concatenation. Mathematics 2025, 13, 935. https://doi.org/10.3390/math13060935

AMA Style

Jimale EL, Chen W, Al-antari MA, Gu YH, Agbesi VK, Feroze W, Akmel F, Assefa JM, Shahzad A. Graph-to-Text Generation with Bidirectional Dual Cross-Attention and Concatenation. Mathematics. 2025; 13(6):935. https://doi.org/10.3390/math13060935

Chicago/Turabian Style

Jimale, Elias Lemuye, Wenyu Chen, Mugahed A. Al-antari, Yeong Hyeon Gu, Victor Kwaku Agbesi, Wasif Feroze, Feidu Akmel, Juhar Mohammed Assefa, and Ali Shahzad. 2025. "Graph-to-Text Generation with Bidirectional Dual Cross-Attention and Concatenation" Mathematics 13, no. 6: 935. https://doi.org/10.3390/math13060935

APA Style

Jimale, E. L., Chen, W., Al-antari, M. A., Gu, Y. H., Agbesi, V. K., Feroze, W., Akmel, F., Assefa, J. M., & Shahzad, A. (2025). Graph-to-Text Generation with Bidirectional Dual Cross-Attention and Concatenation. Mathematics, 13(6), 935. https://doi.org/10.3390/math13060935

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
