**George Dimas, Dimitris E. Diamantis, Panagiotis Kalozoumis and Dimitris K. Iakovidis \***

Department of Computer Science and Biomedical Informatics, University of Thessaly, 35131 Lamia, Greece; gdimas@uth.gr (G.D.); didiamantis@uth.gr (D.E.D.); pkalozoumis@uth.gr (P.K.)

**\*** Correspondence: dimitris.iakovidis@ieee.org

Received: 23 March 2020; Accepted: 18 April 2020; Published: 22 April 2020

**Abstract:** Every day, visually challenged people (VCP) face mobility restrictions and accessibility limitations. A short walk to a nearby destination, which other individuals take for granted, becomes a challenge. To tackle this problem, we propose a novel visual perception system for outdoor navigation that can evolve into an everyday visual aid for VCP. The proposed methodology is integrated in a wearable visual perception system (VPS). The proposed approach efficiently combines deep learning object recognition models with an obstacle detection methodology based on human eye-fixation prediction using Generative Adversarial Networks. Uncertainty-aware modeling of obstacle risk assessment and spatial localization, following a fuzzy logic approach, is employed for robust obstacle detection. This combination translates the position and type of detected obstacles into descriptive linguistic expressions, allowing users to easily understand where obstacles lie in their environment and avoid them. The performance and capabilities of the proposed method are investigated in the context of safe navigation of VCP in outdoor environments of cultural interest, through obstacle recognition and detection. Additionally, the proposed system is compared with relevant state-of-the-art systems for the safe navigation of VCP, with a focus on design and user-requirement satisfaction.

**Keywords:** visually challenged; navigation; image analysis; fuzzy sets; machine learning
