Machine Learning and Safety: Friends or Foes?

A special issue of Safety (ISSN 2313-576X).

Deadline for manuscript submissions: 20 September 2024

Special Issue Editors


Guest Editor
Department of Mathematics and Informatics (DiMaI), University of Florence, Florence, Italy
Interests: dependability; anomaly detection; safety; unsupervised learning; machine learning

Guest Editor
Faculty of Humanities, University of Roma Tre, Rome, Italy
Interests: resilience; reliability; trustworthiness; safety; complex socio–cyber–physical systems

Guest Editor
Department of Mathematics and Informatics (DiMaI), University of Florence, Florence, Italy
Interests: dependability; safety; verification; validation; modeling; standards; railways

Special Issue Information

Dear Colleagues,

Machine Learning (ML) has become an enabling technology for many applications. There is enormous interest in adopting it to support critical tasks such as error detection, failure prediction, and intrusion detection, as well as object detection and trajectory planning for autonomous driving. However, before components are deployed and used in their operational environment, they must be assessed so that the encompassing system complies with adequate safety requirements, in order to avoid catastrophic hazards to the health of citizens, the environment, or infrastructures. Without guarantees that a component will not fail more often than domain-specific standards allow, certification bodies that oversee deployment into real critical systems will deem that component unacceptable.

Unfortunately, the maturity, reliability, and robustness of components that embed ML algorithms are still far from guaranteeing safe operation. The models learned by ML algorithms are non-transparent and exhibit unpredictable error rates; moreover, unstable, out-of-distribution, or adversarial inputs may further degrade the behavior of ML algorithms and have a detrimental impact on the encompassing system.

This Special Issue aims to take a step towards the safe operation of ML components through the definition of architectural patterns, software mechanisms, model-based analyses, safety-oriented metrics, methods for explainable AI, assurance cases, tooling, and any data-driven process that aims to enhance the safety properties of the ML algorithm itself or to guide its integration into an encompassing critical system.

Dr. Tommaso Zoppi
Dr. Emanuele Bellini
Prof. Dr. Andrea Bondavalli
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Safety is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • safety
  • machine learning
  • reliability
  • systems engineering
  • software engineering
  • security

Published Papers

This special issue is now open for submission.