Advanced Driver Assistance Systems (ADAS) form one of the most intensively developed areas of the automotive industry. Some of the most common ADAS functions are the rear radar and camera, lane departure warning, parking assistance, and blind-spot detection.
One of these systems is the autonomous lane tracking system, which is capable of keeping the vehicle in its lane based on the information of one or more cameras and radars.
In this thesis I present algorithms capable of solving this problem with a single camera. My goal is to present an algorithm I developed, which transforms the camera image into a bird's-eye-view image using inverse perspective mapping (IPM) and then detects the lanes on this transformed image in real time using machine learning, combining the advantages of both approaches.
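To illustrate the IPM step, the bird's-eye-view transformation can be expressed as a planar homography applied to each pixel. The sketch below uses an assumed, uncalibrated matrix (`H_IPM` is illustrative, not calibration data from this work) to show the core computation:

```python
def apply_homography(H, x, y):
    """Map image pixel (x, y) through the 3x3 homography H in
    homogeneous coordinates, then normalize by the third component."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Illustrative homography: the y-dependent third row scales points
# farther down the image differently, mimicking how IPM "undoes"
# perspective foreshortening of the road plane.
H_IPM = [
    [1.0, 0.0,  0.0],
    [0.0, 1.0,  0.0],
    [0.0, 0.01, 1.0],
]

print(apply_homography(H_IPM, 320, 100))  # (160.0, 50.0)
```

In practice the matrix would be derived from camera calibration (mounting height, pitch, and intrinsics), and the full IPM image is obtained by applying the inverse mapping to every output pixel with interpolation.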
I implemented this algorithm in the Matlab environment; afterwards I propose an FPGA implementation on a ZedBoard, thereby providing the possibility of testing the autonomous lane tracking algorithm in a real-time embedded environment.