Many assistive devices have been developed in recent years to help visually impaired (VI) people with the problems they face in daily mobility. Most research addresses the obstacle-avoidance or navigation problem, while other work focuses on helping a VI person recognize the objects in his/her surrounding environment; few systems, however, integrate both navigation and recognition capabilities. To meet these needs, this paper presents an assistive device that provides both capabilities, helping a VI person to (1) navigate safely from his/her current location (pose) to a desired destination in an unknown environment, and (2) recognize his/her surrounding objects. The proposed system consists of low-cost sensors, namely a Neato XV-11 LiDAR, an ultrasonic sensor, and a Raspberry Pi camera (CameraPi), mounted on a white cane. Hector SLAM, based on the 2D LiDAR, constructs a 2D map of the unfamiliar environment, and the A* path-planning algorithm generates an optimal path on the resulting Hector map. Moreover, temporary obstacles in front of the VI person are detected by the ultrasonic sensor. A recognition system based on Convolutional Neural Networks (CNNs) is implemented in this work to predict object classes and to enhance the navigation system. The VI person interacts with the assistive system through an audio module (speech recognition and speech synthesis). The performance of the proposed system has been evaluated in various real-time experiments conducted in indoor scenarios, demonstrating its efficiency.
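The path-planning step described above can be illustrated with a minimal A* search over a 2D occupancy grid of the kind a Hector SLAM map can be discretized into. This is an illustrative sketch, not the authors' implementation: the grid encoding (0 = free, 1 = occupied), the 4-connected neighborhood, and the Manhattan heuristic are assumptions chosen for simplicity.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* search on a 2D occupancy grid (0 = free cell, 1 = obstacle).
    Returns the list of (row, col) cells from start to goal, or None
    if the goal is unreachable."""
    def h(a, b):
        # Manhattan distance: admissible for a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()            # tie-breaker so the heap never compares cells
    frontier = [(h(start, goal), next(tie), 0, start, None)]
    parent = {}                        # expanded cells; also serves as the closed set
    best_g = {start: 0}
    while frontier:
        _, _, g, cur, prev = heapq.heappop(frontier)
        if cur in parent:
            continue                   # already expanded via a cheaper route
        parent[cur] = prev
        if cur == goal:
            path = []
            while cur is not None:     # walk parent links back to the start
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier,
                                   (ng + h(nxt, goal), next(tie), ng, nxt, cur))
    return None
```

On a real map, each returned cell would be converted back to metric coordinates before being spoken or followed; here the tie-breaking counter keeps the heap comparison well-defined when two entries share the same cost.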
Real-time detection and recognition of vehicle license plates presents a significant design and implementation challenge, arising from factors such as low image resolution, data noise, and varying weather and lighting conditions. This study presents an efficient automated system for the identification and classification of vehicle license plates using deep learning techniques. The system is designed specifically for Iraqi vehicle license plates, handling varied backgrounds, different font sizes, and non-standard formats, and is intended for integration into an automated entrance-gate security system. The system's framework encompasses two primary phases: license plate detection (LPD) and character recognition (CR). The deep learning technique YOLOv4 is employed for both phases owing to its real-time processing capability and its high precision in detecting small objects such as the characters on license plates. The LPD phase locates and isolates license plates in an image, whereas the CR phase identifies and extracts the characters from the detected plates. A substantial dataset of Iraqi vehicle images captured under various lighting and weather conditions was collected for both training and testing. The system attained an accuracy of 95.07%, with an average end-to-end processing time of 118.63 milliseconds on the collected dataset, highlighting its suitability for real-time applications. The results suggest that the proposed system can significantly enhance the efficiency and reliability of vehicle license plate recognition under varied environmental conditions, making it suitable for deployment in security and traffic-management contexts.
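The two-phase LPD→CR flow can be sketched as a small driver that chains the two detectors and assembles the plate string in reading order. This is a hedged sketch of the pipeline structure only: `detect_plates` and `detect_chars` are hypothetical callbacks standing in for the two trained YOLOv4 models, and the `Detection` record, box format, and 0.5 confidence threshold are assumptions, not details from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    label: str                        # class name, e.g. "plate" or a single character
    confidence: float                 # detector score in [0, 1]
    box: Tuple[int, int, int, int]    # (x, y, width, height) in pixels

def read_plates(image,
                detect_plates: Callable,
                detect_chars: Callable,
                conf_thresh: float = 0.5) -> List[str]:
    """Two-stage pipeline: locate plate regions (LPD), then recognise the
    characters inside each region (CR), assembled left to right."""
    plate_texts = []
    for plate in detect_plates(image):
        if plate.confidence < conf_thresh:
            continue                  # discard low-confidence plate candidates
        chars = [c for c in detect_chars(image, plate.box)
                 if c.confidence >= conf_thresh]
        chars.sort(key=lambda c: c.box[0])   # order by x-coordinate: reading order
        plate_texts.append("".join(c.label for c in chars))
    return plate_texts
```

Sorting character detections by their x-coordinate is the usual way to turn an unordered set of per-character boxes into a plate string for a single-line plate; multi-line formats would need a row split first.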