**6. Conclusions**

In this paper, we developed the LidSonic V2.0 system by leveraging a comprehensive understanding of the state-of-the-art requirements and solutions for assistive technologies for the visually impaired, gained through a detailed literature review and a survey. As explained in Section 1, it is difficult for visually impaired people to orient themselves and move in an unfamiliar environment without assistance; the LidSonic system in this paper therefore focuses on the mobility tasks of the visually impaired. The system is based on a novel approach that combines a LiDAR mounted on a servo motor with an ultrasonic sensor to collect data and uses machine and deep learning to predict objects for environment perception and navigation. The deep learning model TModel2 provided the overall best accuracy of 96.49% on the DS2 dataset. The second-best accuracy, 95.44%, was provided by the KStar classifier, with a precision of 95.6%. The IBk and Random Committee (RC) classifiers provided the same precision of 95.2% and similar accuracies of 95.15% and 95%, respectively, on the DS2 dataset. Note that the IBk classifier's training time was relatively insensitive to the size of the datasets, possibly because both datasets are numeric and the difference between their sizes is small; it took 1 ms to train on both the DS2 and DS1 datasets, the fastest training times overall. The IBk classifier also provided the second-fastest prediction time, 0.8 ms, and hence we recommend using it at the edge for training and prediction. The training time of the KStar classifier, by contrast, is influenced by the size of the training dataset: it took 11 ms to train KStar on the DS2 dataset and 42 ms on the larger DS1 dataset. Moreover, the KStar classifier required much longer prediction times, between 22 ms and 55 ms, than the other classifiers in our experiments. Hence, we proposed using the KStar classifier at the fog or cloud layers.
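The kind of timing comparison summarized above can be reproduced with standard tooling. The sketch below is a minimal illustration, assuming the Weka 3 Java API and a hypothetical ARFF export of the DS2 dataset (ds2.arff, class label as the last attribute); it is not the exact evaluation harness used in our experiments.

```java
import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.lazy.IBk;
import weka.classifiers.lazy.KStar;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class EdgeClassifierBenchmark {

    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF export of the DS2 dataset; the class label is the last attribute.
        Instances data = new DataSource("ds2.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Compare the two lazy classifiers discussed above.
        benchmark("IBk", new IBk(), data);
        benchmark("KStar", new KStar(), data);
    }

    private static void benchmark(String name, Classifier cls, Instances data) throws Exception {
        // Measure training time on the full dataset.
        long t0 = System.nanoTime();
        cls.buildClassifier(data);
        long trainMs = (System.nanoTime() - t0) / 1_000_000;

        // Measure prediction time over the same instances
        // (lazy learners defer most of their work to prediction).
        t0 = System.nanoTime();
        for (int i = 0; i < data.numInstances(); i++) {
            cls.classifyInstance(data.instance(i));
        }
        long predictMs = (System.nanoTime() - t0) / 1_000_000;

        // 10-fold cross-validation for accuracy.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(cls, data, 10, new Random(1));

        System.out.printf("%-6s accuracy=%.2f%% train=%d ms predict(all)=%d ms%n",
                name, eval.pctCorrect(), trainMs, predictMs);
    }
}
```

Running the same measurements on the edge device itself would indicate whether IBk's millisecond-level training and prediction times hold under its resource constraints.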

We evaluated the proposed system from multiple perspectives. For instance, based on the results, we proposed using the Random Committee classifier at the edge for prediction due to its fast prediction time, although it needs to be trained at the fog or cloud layers because its training requires larger resources. In this respect, we plan to extend and integrate this work with other strands of our work on big data analytics and edge, fog, and cloud computing [148–151]. For example, we plan to experiment with different machine learning and deep learning methods at the edge, fog, and cloud layers, assessing their performance and the applicability of edge, fog, and cloud computing to smart glasses, and considering new applications arising from the integration of smart glasses with the cloud, fog, and edge layers. Another direction of our research is green and explainable AI [152,153], and we will also explore the explainability of the LidSonic system.

We created the second prototype of our LidSonic system in this work. The team constructed and tested the prototype, and we also benefitted from the assistance of four other people aged 18 to 47 (who were not visually impaired) in testing and evaluating the LidSonic system. Explaining the device's operation to the selected users took only a few minutes, although this varied with the user's age and digital affinity. The tests were carried out both indoors and outdoors on the campus of King Abdulaziz University. The purpose of this paper was to evaluate the system's machine learning and other technical capabilities. Future work will involve testing the device with blind and visually impaired users so as to provide more detail on the LidSonic system's usability, human training, and testing aspects.

We conclude this paper with the remark that the technologies developed in this study show high potential and are expected to open new directions for the design of smart glasses and other solutions for the visually impaired using open software tools and off-the-shelf hardware.

**Author Contributions:** Conceptualization, S.B. and R.M.; methodology, S.B. and R.M.; software, S.B.; validation, S.B. and R.M.; formal analysis, S.B., R.M., I.K., A.A., T.Y. and J.M.C.; investigation, S.B., R.M., I.K., A.A., T.Y. and J.M.C.; resources, R.M., I.K. and A.A.; data curation, S.B.; writing—original draft preparation, S.B. and R.M.; writing—review and editing, R.M., I.K., A.A., T.Y. and J.M.C.; visualization, S.B.; supervision, R.M. and I.K.; project administration, R.M., I.K. and A.A.; funding acquisition, R.M., I.K. and A.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors acknowledge, with thanks, the technical and financial support from the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia, under grant No. RG-11-611-38. The experiments reported in this paper were performed on the Aziz supercomputer at KAU.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The dataset developed in this work can be provided on request.

**Acknowledgments:** The work carried out in this paper was supported by the HPC Center of King Abdulaziz University. The training and software development work reported in this paper was carried out on the Aziz supercomputer.

**Conflicts of Interest:** The authors declare no conflict of interest.
