Modern robotic systems navigate well in an already known environment. However, moving through an unknown space is a much harder task, especially if the robot receives no external information beyond the measurements of its own sensors. The creation of autonomous navigation systems is therefore an interesting and actively researched field.
In my thesis I extended the capabilities of an existing mobile robot so that it can autonomously navigate to a predefined target location in an unknown environment. To achieve this, I examined how the candidate sensors operate, then added a Microsoft Kinect camera and a gyroscope to the robot’s sensor system. With these, the system receives detailed information both about its environment and about its own movement.
I converted the data received from the Kinect sensor into a point cloud, then processed it with the Point Cloud Library (PCL). To align the successive frames correctly, I had to become familiar with the operation of registration algorithms.
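The core step inside most registration algorithms (including the ICP variants offered by PCL) is estimating the rigid transform that best maps one cloud onto another. As a minimal illustrative sketch, not the thesis’s actual code, the following computes that transform with the Kabsch/SVD method, assuming point correspondences are already known; full ICP would iterate this step while re-estimating correspondences by nearest-neighbour search:

```python
import numpy as np

def rigid_align(src, tgt):
    """Estimate rotation R and translation t mapping src onto tgt
    (Kabsch method), given row-wise corresponding 3-D points."""
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - cs).T @ (tgt - ct)          # cross-covariance of the centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # correct an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ct - R @ cs
    return R, t

# demo: recover a known 30-degree rotation about z plus a shift
rng = np.random.default_rng(0)
cloud = rng.random((100, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
moved = cloud @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(cloud, moved)
err = np.abs(cloud @ R.T + t - moved).max()
```

In practice PCL’s `pcl::IterativeClosestPoint` wraps this estimation together with the correspondence search and convergence checks.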
As the final result of my thesis, I created an integrated software system for the robot that operates the Kinect camera, builds a map of the surroundings, searches for a path on the map, plans the route, and controls the movement of the robot.
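The thesis does not specify which planner the software uses, but the map-based path search it describes can be sketched with the simplest case: breadth-first search on a 2-D occupancy grid. The grid encoding (0 = free, 1 = blocked) and 4-connected neighbourhood here are illustrative assumptions:

```python
from collections import deque

def grid_path(grid, start, goal):
    """Breadth-first search on a 2-D occupancy grid (0 = free, 1 = blocked).
    Returns the shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                   # visited set doubling as parent map
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                   # walk parents back to reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# a wall forces the robot to detour around the right side
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = grid_path(grid, (0, 0), (2, 0))
```

A real system would typically refine this with a weighted planner such as A* and an inflation margin around obstacles for the robot’s footprint.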