In robotic applications, the exact spatial position and orientation of the object to be manipulated is almost always needed to control the robot properly. Several methods are known for recognizing an object and estimating its position and orientation; they usually rely on RGB image-based machine vision, and the process is made easier if the image information also includes depth data. However, image-based recognition can be difficult if the item is partially covered by another object. Relative motion between the camera and the object may also occur, for example when the camera is mounted on the robot arm.
During my diploma work I built an experimental setup, familiarized myself with depth-image-based point clouds and object registration, implemented a tracking algorithm, and tested it on the department's 6-DOF robotic arm.
In the first part of my thesis I present the object registration and tracking algorithm, as well as the opportunities offered by the ROS framework.
In the second part, I describe the GUI application I developed, the data transfer I implemented over the MODBUS protocol, the control program I created for the robot arm, and finally the testing of the complete system.