Recent Advances in the Synergy Between Federated Learning and Foundation Models

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: 31 July 2025 | Viewed by 2087

Special Issue Editors


Dr. Yuyi Mao
Guest Editor
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University, Hong Kong, China
Interests: federated learning; edge AI; wireless communications

Dr. Jiawei Shao
Guest Editor
Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
Interests: edge AI; trustworthy AI; generative AI

Special Issue Information

Dear Colleagues,

Foundation models (FMs), such as the Generative Pre-trained Transformer (GPT) series, are large generative models that perform competently across a wide variety of tasks. They have become key enablers of many AI applications, including chatbots, image captioning, and video editing. However, the versatility and generalizability of FMs make them highly difficult to train, demanding massive datasets and tremendous computational resources. This creates significant obstacles in real-world use cases, including scalability, privacy, and efficiency concerns. As the most popular framework for privacy-preserving collaborative training, federated learning (FL) is expected to continue playing an important role in the age of FMs. Recently, the generative power of FMs has also proven effective in overcoming several open challenges of FL, enabling improved performance and better personalization.
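
To illustrate the privacy-preserving collaborative training that FL provides, the following is a minimal federated-averaging (FedAvg-style) sketch: clients compute updates on private data and only share parameters, which the server aggregates by dataset size. The toy objective, client data, and dimensions are illustrative assumptions; in FL–FM settings, the exchanged parameters would typically be lightweight adapters rather than full foundation-model weights.

```python
import numpy as np

# Minimal FedAvg-style aggregation sketch (illustrative, not from any specific paper).
# Each client trains locally on its private data and shares only parameter updates;
# the server aggregates them weighted by local dataset size, so raw data never
# leaves the client.

def local_update(global_params: np.ndarray, client_data: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """Toy local step: one gradient step toward the client's data mean."""
    grad = global_params - client_data.mean(axis=0)
    return global_params - lr * grad

def fedavg_round(global_params: np.ndarray,
                 client_datasets: list[np.ndarray]) -> np.ndarray:
    """Aggregate client updates weighted by the number of local samples."""
    sizes = np.array([len(d) for d in client_datasets], dtype=float)
    updates = [local_update(global_params, d) for d in client_datasets]
    weights = sizes / sizes.sum()
    return np.sum([w * u for w, u in zip(weights, updates)], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical non-identical client datasets (different means).
    clients = [rng.normal(loc=c, size=(50, 4)) for c in (0.0, 1.0, 2.0)]
    params = np.zeros(4)
    for _ in range(20):  # a few communication rounds
        params = fedavg_round(params, clients)
    print("aggregated parameters:", np.round(params, 3))
```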

This Special Issue solicits original research and review articles, aiming to bring together researchers, practitioners, and industry experts from around the world to explore the latest advancements, deployment challenges, and opportunities in synergizing FL and FMs.

Dr. Yuyi Mao
Dr. Jiawei Shao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • federated learning
  • foundation models
  • large language models
  • multimodal models
  • generative AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

16 pages, 724 KiB  
Article
On Assessing the Performance of LLMs for Target-Level Sentiment Analysis in Financial News Headlines
by Iftikhar Muhammad and Marco Rospocher
Algorithms 2025, 18(1), 46; https://doi.org/10.3390/a18010046 - 13 Jan 2025
Viewed by 1667
Abstract
The importance of sentiment analysis in the rapidly evolving financial markets is widely recognized for its ability to interpret market trends and inform investment decisions. This study delves into the target-level financial sentiment analysis (TLFSA) of news headlines related to stocks. The study compares the performance on the TLFSA task of various sentiment analysis techniques, including rule-based models (VADER), fine-tuned transformer-based models (DistilFinRoBERTa and Deberta-v3-base-absa-v1.1), as well as zero-shot large language models (ChatGPT and Gemini). The dataset utilized for this analysis, a novel contribution of this research, comprises 1476 manually annotated Bloomberg headlines and is made publicly available (due to copyright restrictions, only the URLs of the Bloomberg headlines with the manual annotations are provided; however, these URLs can be used with a Bloomberg terminal to reconstruct the complete dataset) to encourage future research on this subject. The results indicate that the fine-tuned Deberta-v3-base-absa-v1.1 model performs better across all evaluation metrics than the other evaluated models in TLFSA. However, LLMs such as ChatGPT-4, ChatGPT-4o, and Gemini 1.5 Pro provide similar performance levels without the need for task-specific fine-tuning or additional training. The study contributes to assessing the performance of LLMs for financial sentiment analysis, providing useful insights into their possible application in the financial domain.
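
As a concrete reference point for the rule-based baseline named in the abstract, the following is a minimal sketch of headline scoring with VADER. The example headlines, target handling, and compound-score thresholds are illustrative assumptions and do not reproduce the authors' actual TLFSA pipeline.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Illustrative VADER baseline for headline sentiment (assumed setup, not the
# paper's pipeline). VADER scores the whole headline; a target-level system
# would additionally condition the label on the specific stock mentioned.
nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

headlines = [  # hypothetical examples
    "Acme Corp shares surge after record quarterly earnings",
    "Regulators open probe into Acme Corp accounting practices",
]

for headline in headlines:
    compound = analyzer.polarity_scores(headline)["compound"]
    # Common convention: compound >= 0.05 is positive, <= -0.05 is negative.
    label = ("positive" if compound >= 0.05
             else "negative" if compound <= -0.05 else "neutral")
    print(f"{label:8s} ({compound:+.3f})  {headline}")
```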
