Ray tracing is a long-established method for producing realistic, visually convincing images. The drawback of these algorithms is their high computational cost: rendering a single image can take anywhere from minutes to several days.
Because each ray's path in ray tracing is independent of all other rays, the algorithm parallelizes very well. Today's graphics processors can run millions of lightweight threads in parallel, which makes them an ideal platform for ray tracing.
The thesis presents a naive ray-tracing implementation in the NVIDIA CUDA language, and also introduces a new algorithm. The new algorithm breaks the computation into many smaller steps, each operating on only a subset of the scene's actors; this makes it suitable for larger scenes, at the cost of repeating some steps when testing surfaces for shadows.
After introducing both algorithms, their efficiency, running time, and peak memory consumption were examined using several test cases.