Finding objects in an arbitrary environment is one of the unsolved problems for robots operating in such environments, e.g. households. In this thesis I present a robotics application that addresses this problem. The software controls a robotic arm and estimates the spatial position and orientation of an object on which it has previously been trained. The estimation is done using images retrieved from a camera mounted on the end effector of the robot. The software uses a PnP (Perspective-n-Point) algorithm, which estimates the spatial pose from object points with known 3D coordinates and their corresponding image points. The image points are found with the SURF keypoint detector. During training, the 3D object points are reconstructed via multi-view triangulation from multiple images taken at known camera positions. Furthermore, I describe the hardware and software architecture of the robot control system, and the implementation of the hand-eye calibration. Finally, I analyze the results of the implementation.
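The multi-view triangulation step mentioned above can be sketched as a linear (DLT) least-squares problem: each camera with a known projection matrix contributes two equations per observed image point, and the 3D point is the homogeneous solution of the stacked system. The function name, camera intrinsics, and poses below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def triangulate(proj_mats, image_pts):
    """Linear (DLT) triangulation of one 3D point from n views.

    proj_mats: list of 3x4 camera projection matrices (known poses)
    image_pts: list of (u, v) pixel coordinates of the same point
    """
    rows = []
    for P, (u, v) in zip(proj_mats, image_pts):
        # Each view contributes two linear constraints on the homogeneous point X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Homogeneous least-squares solution: right singular vector for the
    # smallest singular value of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic cameras observing the point (0, 0, 5); intrinsics are made up
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted baseline
X_true = np.array([0.0, 0.0, 5.0, 1.0])
pts = [((x := P @ X_true)[0] / x[2], x[1] / x[2]) for P in (P1, P2)]
X_est = triangulate([P1, P2], pts)
print(np.round(X_est, 3))  # recovers the point (0, 0, 5)
```

In the trained system, the roles are reversed at runtime: the triangulated object points and fresh SURF image correspondences are fed to the PnP solver to recover the object pose from a single view.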