Autonomous vehicles are set to revolutionize transportation, but their success depends on the ability to accurately recognize and respond to external threats. From signal-processing and image-analysis algorithms to deep learning systems integrated with IoT infrastructure, a range of technologies must work together for autonomous cars to operate safely over varied terrain. To keep passengers safe as these cutting-edge automobiles become more widespread, robust methods are needed that can detect potential hazards quickly and reliably.
Self-driving vehicles rely on high-tech sensors such as LiDAR, radar, and RGB cameras to generate the large volumes of data needed to properly identify pedestrians, other drivers, and potential hazards. The integration of advanced computing capabilities and the Internet of Things (IoT) into these automated cars makes it possible to process this data on board in real time, allowing them to navigate varied areas and objects more efficiently. Ultimately, this lets an autonomous vehicle make split-second decisions with much higher accuracy than human drivers.
Huge Step Forward in Autonomous Driving Tech
Groundbreaking research conducted by Professor Gwanggil Jeon from Incheon National University, Korea, and his international team marks a huge step forward in autonomous driving technology. The innovative smart IoT-enabled end-to-end system they have developed performs real-time 3D object detection using deep learning, making it more reliable and efficient than previous approaches. It can detect more objects, more accurately, even in challenging environments such as low light or adverse weather conditions – something other systems struggle to do. These capabilities allow for safer navigation across various traffic scenarios, raising the bar for autonomous driving systems and contributing to improved road safety worldwide.
The research was published in the journal IEEE Transactions on Intelligent Transportation Systems.
“For autonomous vehicles, environment perception is critical to answer a core question, ‘What is around me?’ It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action,” explains Prof. Jeon. “We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects,” he continues.
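The article does not describe how the team extended the 2D detector to 3D, and no code accompanies it. As a purely illustrative sketch of the general idea, a 2D image-plane detection can be lifted into a 3D box using a depth estimate (for example, from the point cloud) and a per-class size prior. The camera intrinsics (`fx`, `fy`, `u0`, `v0`) and the `size_prior` values below are made-up placeholders, not figures from the paper:

```python
from dataclasses import dataclass

@dataclass
class Box2D:
    """Axis-aligned 2D box in image coordinates; (x, y) is the top-left corner."""
    x: float
    y: float
    w: float
    h: float
    label: str
    score: float

@dataclass
class Box3D:
    """3D box in camera coordinates: center, size, and heading (yaw)."""
    cx: float
    cy: float
    cz: float
    length: float
    width: float
    height: float
    yaw: float
    label: str
    score: float

def lift_to_3d(box: Box2D, depth: float, size_prior: tuple,
               yaw: float = 0.0, fx: float = 1000.0, fy: float = 1000.0,
               u0: float = 640.0, v0: float = 360.0) -> Box3D:
    """Lift a 2D detection to a 3D box via a pinhole camera model,
    given a depth estimate and a per-class size prior (l, w, h)."""
    # Back-project the 2D box center through the pinhole model at the given depth.
    u = box.x + box.w / 2.0
    v = box.y + box.h / 2.0
    cx = (u - u0) * depth / fx
    cy = (v - v0) * depth / fy
    length, width, height = size_prior
    return Box3D(cx, cy, depth, length, width, height, yaw, box.label, box.score)
```

A detection centered in the image, for instance, back-projects onto the optical axis (cx = cy = 0) at the estimated depth, while the class label and confidence score carry over unchanged.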
Basing Model on YOLOv3
The team fed the collected RGB images and point cloud data to YOLOv3, which then output classification labels and bounding boxes with confidence scores. Its performance was then tested on the Lyft dataset, and early results demonstrated that YOLOv3 achieved extremely high detection accuracy (>96%) for both 2D and 3D objects, outperforming various state-of-the-art detection models.
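Raw outputs like these (boxes, labels, confidence scores) are typically post-processed with confidence thresholding and non-maximum suppression before use. The paper's exact pipeline is not given in the article; the sketch below illustrates the standard greedy approach, with hypothetical threshold values:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def filter_detections(detections, score_thresh=0.5, iou_thresh=0.45):
    """Drop low-confidence boxes, then apply greedy per-class
    non-maximum suppression. Each detection is (box, label, score)
    with box = (x1, y1, x2, y2)."""
    candidates = sorted(
        (d for d in detections if d[2] >= score_thresh),
        key=lambda d: d[2], reverse=True)
    kept = []
    for det in candidates:
        # Keep the detection unless a higher-scoring box of the same
        # class already covers (overlaps) it.
        if all(det[1] != k[1] or iou(det[0], k[0]) < iou_thresh for k in kept):
            kept.append(det)
    return kept
```

For example, two heavily overlapping "car" boxes collapse to the higher-scoring one, while a distant "person" box survives untouched.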
This newly developed method could be used for autonomous vehicles, autonomous parking, autonomous delivery, and future autonomous robots. It could also be used in any application where object and obstacle detection, tracking, and visual localization are required.
“At present, autonomous driving is being performed through LiDAR-based image processing, but it is predicted that a general camera will replace the role of LiDAR in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront,” Prof. Jeon says. “Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5-10 years.”