Photorealistic image rendering is an increasingly relevant area of computer graphics; it can be implemented with numerous algorithms and methods whose results simulate reality with high accuracy.
These algorithms usually demand enormous computing performance, and much of the time they execute the same instructions on different data (Single Instruction, Multiple Data, SIMD). Because the CPUs in household personal computers are typically scalar processors, these computations can take a long time even when parallelized, unless specialized hardware is available.
The solution was the introduction of graphics cards into personal computers. The core of these cards is the GPU (graphics processing unit), usually a vector processor optimized for this kind of computation. By exploiting the computational capability of graphics cards, we can reduce the execution time of these programs tremendously.
In this paper I introduce a Monte Carlo method, namely path tracing, through an example application that uses the CUDA parallel computing platform developed by Nvidia to execute the program in a highly parallel manner. I also describe the architecture and characteristics of Nvidia graphics cards and present the CUDA platform itself.
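To illustrate the SIMD-style execution model mentioned above, the following is a minimal CUDA sketch (not taken from the paper's application; the kernel, its name, and the element-wise operation are illustrative assumptions). Every GPU thread runs the same instruction stream on a different array element, which is exactly the pattern a path tracer exploits when it processes many pixels or light paths in parallel.

```cuda
#include <cstdio>

// Hypothetical kernel: every thread executes the same instructions
// on a different element of the input arrays (SIMD-style parallelism).
__global__ void addScaled(const float* a, const float* b, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        out[i] = a[i] + 2.0f * b[i];  // same operation, different data
}

int main()
{
    const int n = 1 << 20;
    float *a, *b, *out;
    // Unified memory keeps the sketch short; a real renderer would
    // typically manage host and device buffers explicitly.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 0.5f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    addScaled<<<blocks, threads>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);
    cudaFree(a);
    cudaFree(b);
    cudaFree(out);
    return 0;
}
```

A scalar CPU would step through the million elements one at a time; here the same loop body is distributed over thousands of GPU threads, which is the source of the speedup discussed above.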