Understanding the role of sensors and radar in autonomous driving
Starting from the overall architecture, an autonomous vehicle, like any robot, is built around a processing pipeline that acquires events through perception, much as humans do with their eyes and ears. It gathers information about surrounding obstacles and the road through cameras, radar, maps, and other sources, and from that computes a reasonable response plan.
Consider how humans drive: at every moment we use what we see to inform the next decision. There is always a delay from eyes to brain to hands and feet, and the same is true for autonomous driving. Unlike robots, however, our brains automatically "predict" how a scene will unfold. Even on a millisecond timescale, our decisions are guided by predictions about what we see, which is why we can handle certain emergencies far faster than robots. Autonomous driving systems therefore add a prediction module before the decision module.
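The value of placing prediction before decision can be shown with a minimal sketch. This is an illustrative toy, not any real stack's code: the `Obstacle` type, the constant-velocity prediction, and the 10 m safety gap are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    position: float  # distance ahead of the ego vehicle, in meters
    speed: float     # closing speed toward the ego vehicle, in m/s

def predict(obstacle: Obstacle, horizon_s: float) -> Obstacle:
    """Constant-velocity prediction: where the obstacle will be after horizon_s."""
    return Obstacle(obstacle.position - obstacle.speed * horizon_s, obstacle.speed)

def plan(obstacle: Obstacle, safe_gap_m: float = 10.0) -> str:
    """Decide based on the obstacle state we are handed (observed or predicted)."""
    return "brake" if obstacle.position < safe_gap_m else "cruise"

# An obstacle 20 m ahead closing at 8 m/s: raw perception alone says "cruise",
# but a 2-second prediction already calls for braking.
observed = Obstacle(position=20.0, speed=8.0)
print(plan(observed))                 # cruise
print(plan(predict(observed, 2.0)))   # brake
```

Deciding on the predicted state rather than the raw observation is what lets the planner act before the gap actually closes, mirroring the human "prediction" described above.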
The perception process itself deserves closer scrutiny and can be divided into two stages: "sensing" and "perception". Sensing acquires raw data such as images and sounds, while perception extracts useful information from that raw data. This useful information can be further divided into real-time perception and memory perception, and humans and robots often apply different strategies when processing each kind.
Real-time perception is the information obtained at each moment from sensor devices (including cameras, radar, GPS, etc.).
Memory perception is information collected and processed in advance by external agents or recalled from past experience (including localization, maps, vehicle-connection information, etc.).
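The two kinds of perception above can be sketched as a simple world model that keeps them side by side. All field names here are invented for illustration; a real system would carry far richer payloads.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RealTimePerception:
    """What the sensors report at this instant (cameras, radar, GPS, ...)."""
    camera_objects: List[str] = field(default_factory=list)
    radar_tracks: List[str] = field(default_factory=list)
    gps_fix: tuple = (0.0, 0.0)

@dataclass
class MemoryPerception:
    """Prior knowledge: map data, localization, vehicle-connection messages."""
    hd_map_lane: str = "unknown"
    localization_pose: tuple = (0.0, 0.0)
    v2x_messages: List[str] = field(default_factory=list)

@dataclass
class WorldModel:
    realtime: RealTimePerception
    memory: MemoryPerception

    def lane_ahead(self) -> str:
        # Memory perception can supply what the sensors cannot see yet,
        # e.g. the lane shape beyond the camera's range.
        return self.memory.hd_map_lane

world = WorldModel(
    RealTimePerception(camera_objects=["car"]),
    MemoryPerception(hd_map_lane="curve_left"),
)
print(world.lane_ahead())  # curve_left
```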
In addition, the algorithms and processing methods of different sensors often produce contradictory information: the radar reports an obstacle ahead while the camera insists there is nothing there. A "fusion" module is therefore needed to correlate the inconsistent readings and reach a further judgment.
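One deliberately simple way such a fusion rule might resolve a radar/camera disagreement is a confidence-weighted vote. The weights and threshold below are invented for the sketch; production fusion uses far more sophisticated methods (e.g. probabilistic filtering).

```python
def fuse_obstacle(radar_conf: float, camera_conf: float) -> bool:
    """
    Return True if the fused view keeps the obstacle.
    Radar rarely misses solid objects, so this toy rule weights radar
    higher: a safety-biased choice when the two sensors disagree.
    """
    fused = 0.6 * radar_conf + 0.4 * camera_conf
    return fused >= 0.5

# Radar is confident, camera sees nothing: the safety-biased rule keeps it.
print(fuse_obstacle(radar_conf=0.9, camera_conf=0.0))  # True
# Both sensors are near-silent: the candidate obstacle is dropped.
print(fuse_obstacle(radar_conf=0.1, camera_conf=0.1))  # False
```

The design choice here is asymmetry: a false "brake" costs comfort, while a missed obstacle costs safety, so ties lean toward the sensor less likely to miss.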
To ensure that the system understands and masters its environment, it usually needs a large amount of information about the surroundings: the positions and speeds of obstacles, the precise shape of the lane ahead, the positions and types of traffic signs, and so on. This information is typically obtained by fusing data from multiple sensors such as lidar, surround-view cameras, and millimeter-wave radar.