Efficiency and Scalability of Advanced Machine Learning and Optimization Methods for Real-World Applications

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "AI Systems: Theory and Applications".

Deadline for manuscript submissions: 31 October 2024

Special Issue Editor


Guest Editor
Center for Artificial Intelligence Research and Optimisation, Torrens University Australia, Adelaide, SA 5000, Australia
Interests: supervised learning; deep learning; optimisation; evolutionary computations; meta-heuristic algorithms; swarm intelligence; renewable energy systems

Special Issue Information

Dear Colleagues,

Compared to synthetic mathematical problems, real-world problems pose distinct challenges: they involve complex systems with interdependent components that require advanced modelling and analysis techniques. Nonlinear behaviour, dependencies, and uncertainties further complicate the task and call for advanced machine learning methods capable of capturing such intricacies. Real-world applications must also grapple with massive datasets, leading to computational and memory constraints. To address these limitations, emerging methodologies such as transfer learning, federated learning, and quantum machine learning must remain efficient and scalable while processing extensive volumes of data.

This Special Issue seeks submissions that not only present advanced techniques in this area but also demonstrate improvements in scalability and efficiency over existing approaches. The deployment of machine learning methods in real-world environments is of particular interest. Case studies detailing the practical benefits and implementation challenges of these methods are invited, and discussions of the societal, ethical, and regulatory implications of deploying advanced machine learning systems are encouraged. Additionally, this Special Issue emphasizes techniques that accommodate the computational demands of large-scale datasets; benchmark datasets and evaluation metrics play a crucial role in addressing complex real-world problems.

Dr. Mehdi Neshat
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • supervised learning
  • reinforcement learning
  • domain adaptation
  • scalability
  • optimisation
  • evolutionary intelligence
  • swarm optimisation methods
  • multiobjective optimisation methods
  • real-world problems

Published Papers (1 paper)


Research

14 pages, 1254 KiB  
Article
Optimizing Curriculum Vitae Concordance: A Comparative Examination of Classical Machine Learning Algorithms and Large Language Model Architectures
by Mohammed Maree and Wala’a Shehada
AI 2024, 5(3), 1377-1390; https://doi.org/10.3390/ai5030066 - 6 Aug 2024
Abstract
Digital recruitment systems have revolutionized the hiring paradigm, imparting exceptional efficiencies and extending the reach for both employers and job seekers. This investigation scrutinized the efficacy of classical machine learning methodologies alongside advanced large language models (LLMs) in aligning resumes with job categories. Traditional matching techniques, such as Logistic Regression, Decision Trees, Naïve Bayes, and Support Vector Machines, are constrained by the necessity of manual feature extraction, limited feature representation, and performance degradation, particularly as dataset size escalates, rendering them less suitable for large-scale applications. Conversely, LLMs such as GPT-4, GPT-3, and LLAMA adeptly process unstructured textual content, capturing nuanced language and context with greater precision. We evaluated these methodologies utilizing two datasets comprising resumes and job descriptions to ascertain their accuracy, efficiency, and scalability. Our results revealed that while conventional models excel at processing structured data, LLMs significantly enhance the interpretation and matching of intricate textual information. This study highlights the transformative potential of LLMs in recruitment, offering insights into their application and future research avenues.
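To illustrate the kind of classical baseline the abstract compares against LLMs, the following is a minimal sketch (not the paper's actual code): a resume-to-job-category classifier using TF-IDF features with Logistic Regression, one of the traditional techniques named above. The resume snippets and category labels are hypothetical toy data.

```python
# Sketch of a classical resume-matching baseline: TF-IDF + Logistic Regression.
# Toy data only; the paper's datasets and preprocessing are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical resume snippets and their job-category labels.
resumes = [
    "python machine learning pandas data analysis",
    "java spring microservices backend development",
    "recruiting onboarding payroll employee relations",
    "deep learning pytorch computer vision research",
]
labels = ["data_science", "software_engineering", "hr", "data_science"]

# TF-IDF turns each resume into a sparse weighted term vector; the linear
# classifier then learns one weight vector per job category.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(resumes, labels)

prediction = model.predict(["experienced in pytorch and machine learning"])[0]
print(prediction)
```

This hand-built feature pipeline is exactly the manual-feature-extraction constraint the abstract points to: the vectorizer only knows terms seen in training, whereas an LLM can exploit context and paraphrase without an explicit vocabulary.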
