Giving self-driving cars reliable visual perception has long been a difficult engineering problem. Today, developers can build a surround-perception system by combining various sensors, allowing an autonomous vehicle to “see” its surroundings better than a human driver can.
From still images to video, the camera is one of the most accurate ways to capture the visual world, especially for autonomous vehicles. To perceive parking spaces under a wide range of conditions, autonomous vehicles rely on camera data processed by a deep neural network, the ParkNet DNN.
Achieving safe automated driving requires a complete set of DNNs. These networks are deliberately redundant, overlapping in function to minimize the possibility of failure, which makes this suite well suited to searching for a parking spot. Not every parking space is a perfect rectangle, however.
The ParkNet DNN generalizes a parking space as four lines connected at arbitrary, rather than right, angles. This enables it to perceive parking spaces regardless of how the lane markings are oriented with respect to the car.
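To make the idea of a generalized, non-rectangular parking space concrete, the sketch below models a space as a quadrilateral of four ordered corners and checks whether a target point lies inside it. ParkNet’s actual output format is not public; the `ParkingSpace` class, the `entry_edge` field, and the point-in-polygon test are illustrative assumptions, not the network’s real interface.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in image or ground-plane coordinates


@dataclass
class ParkingSpace:
    """A parking space as a general quadrilateral: four corners in order,
    with no requirement that the interior angles be 90 degrees."""
    corners: List[Point]   # exactly four vertices, listed in order
    entry_edge: int        # index of the edge (corner i -> i+1) used to enter

    def contains(self, p: Point) -> bool:
        """Ray-casting point-in-polygon test; works for any simple
        quadrilateral regardless of its orientation or skew."""
        x, y = p
        inside = False
        n = len(self.corners)
        for i in range(n):
            x1, y1 = self.corners[i]
            x2, y2 = self.corners[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside


# Example: an angled (non-rectangular) parking space
space = ParkingSpace(
    corners=[(0.0, 0.0), (2.5, 0.5), (3.5, 5.5), (1.0, 5.0)],
    entry_edge=0,
)
print(space.contains((1.8, 2.5)))  # True: the point lies inside the skewed quad
```

Representing the space as four free corners, rather than a width-height rectangle, is what lets the same detection handle parallel, perpendicular, and angled spots without special cases.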
ParkNet outputs parking space detections and entry line classifications in 2D image space, which can then be converted into 3D coordinates. This makes the estimate more accurate for short-distance self-parking maneuvers.
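The exact 2D-to-3D conversion used in the DRIVE stack is not detailed here; one common approach, sketched below, is to back-project a detected corner pixel onto a flat ground plane using the camera’s intrinsics (K) and extrinsics (R, t). The flat-ground assumption and all calibration values in this example are hypothetical and chosen only for illustration.

```python
import numpy as np


def image_point_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane z = 0 in world
    coordinates, assuming a calibrated pinhole camera.

    K : 3x3 intrinsic matrix
    R : 3x3 rotation, world -> camera
    t : 3-vector translation, world -> camera (x_cam = R @ x_world + t)
    """
    # Ray through the pixel, expressed in camera coordinates
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the ray into world coordinates and find the camera center
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t
    # Intersect the viewing ray with the ground plane z = 0
    s = -cam_center[2] / ray_world[2]
    return cam_center + s * ray_world


# Assumed calibration values (purely illustrative)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],   # camera looking along world +y,
              [0.0, 0.0, -1.0],  # with image "down" pointing toward the ground
              [0.0, 1.0, 0.0]])
t = np.array([0.0, 1.5, 0.0])    # camera mounted about 1.5 m above the ground

corner_3d = image_point_to_ground(700.0, 500.0, K, R, t)
print(corner_3d)  # approximate ground-plane location of the detected corner
```

Applying this back-projection to each detected corner and to the classified entry line yields the parking space in metric 3D coordinates, which is what a short-distance self-parking maneuver ultimately needs.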