Explainable/Interpretable Machine Learning for Biomedical Sensing, Sensor Data Fusion and Diagnostics

This Special Issue belongs to the section “Sensing and Imaging”.

Keywords

  • Interpretable machine learning (IML)
  • Explainable machine learning
  • LIME
  • Shapley values
  • Feature importance
  • Knowledge integration
  • Visual interpretation support
  • Transparent ML models
  • Global/local explanations
  • IML in regular contexts
  • Deep learning
  • Classical machine learning methods
  • Model-agnostic methods
  • Rule-based models
  • Feature interactions
  • Ensemble methods
  • Trusted models
  • Robustness of models
  • Biomedical applications
  • Diagnostics
  • Risk assessment
  • Healthcare
  • Wearable sensors
  • Internet of Things (IoT)
  • Multisensor fusion
  • Data fusion
  • Anomaly detection
  • Audio processing
  • Computer vision
  • Image processing
  • Signal processing
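Several of the keywords above (Shapley values, feature importance, local explanations) refer to attribution methods that assign each input feature a share of a model's prediction. As a minimal illustration of the idea, the sketch below computes exact Shapley values for a small model by enumerating feature coalitions; the toy "risk score" function and the single-baseline treatment of absent features are simplifying assumptions for illustration (practical toolkits such as SHAP average over a background dataset instead).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f at point x.

    Features outside a coalition are replaced by their baseline value --
    a common simplification; production tools average over a background set.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear "risk score" over three sensor-derived features.
risk = lambda z: 2.0 * z[0] + 1.0 * z[1] - 0.5 * z[2]
attributions = shapley_values(risk, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(attributions)  # for a linear model, phi_i ~= w_i * (x_i - baseline_i)
```

For a linear model the attributions recover the weighted feature contributions, which makes this a convenient sanity check; for nonlinear models the exact enumeration above scales exponentially in the number of features, which is why approximate samplers are used in practice.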

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Articles in Special Issues are more discoverable and cited more frequently, extending the reach and impact of the research.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Sensors - ISSN 1424-8220