Many technical problems can be reduced to systems of linear equations, which may grow to enormous sizes. Fundamental examples include the equilibrium behavior of stochastic systems, the state equations of control engineering, and ordinary or partial differential equations.
Numerical solutions are often based on discretization methods that lead to systems of linear equations. Google's page ranking is a concrete example that most people use every day, although it is not a real-time algorithm.
In the Google page-ranking algorithm the system of linear equations is gigantic, so to solve it efficiently we can use iterative methods that approximate the solution.
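To make this concrete, the core idea can be illustrated with a small sketch. The link graph, damping factor, and code below are purely illustrative assumptions for this section, not Google's actual data or implementation: PageRank can be written as the linear system (I - d*M) r = ((1 - d)/n) * 1 and approximated by simple fixed-point iteration.

```python
# Illustrative PageRank sketch (pure Python). The graph and the
# damping factor d are hypothetical example values, not Google's data.
def pagerank(links, d=0.85, iterations=100, tol=1e-10):
    n = len(links)
    rank = [1.0 / n] * n          # start from the uniform distribution
    out_deg = [len(links[i]) for i in range(n)]
    for _ in range(iterations):
        # One fixed-point step: each page distributes its rank
        # evenly among the pages it links to.
        new = [(1 - d) / n] * n
        for i in range(n):
            for j in links[i]:
                new[j] += d * rank[i] / out_deg[i]
        # Stop when successive iterates are close enough.
        if max(abs(new[i] - rank[i]) for i in range(n)) < tol:
            return new
        rank = new
    return rank

# Tiny example graph: page 0 links to 1 and 2; 1 links to 2; 2 links to 0.
ranks = pagerank([[1, 2], [2], [0]])
print(ranks)
```

The ranks form a probability distribution (they sum to 1), and each iteration only reads the previous iterate, which is exactly the structure that the iterative methods examined later exploit.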
GPGPU (general-purpose computing on graphics processing units) programming is a relatively new technology, since graphics cards were not originally designed for such computations. This hardware can nowadays reach higher performance than the central processing unit, so it can be used to solve systems of linear equations. The high performance comes from the many small computing units, so an algorithm runs fast only if it can be parallelized effectively.
The aim of this thesis is to examine the advantages and disadvantages of three iterative methods (Jacobi, Gauss-Seidel, and successive over-relaxation) that serve to solve large systems of linear equations; to examine the architectural and programming differences between the CPU and the GPU; and to implement the iterative methods on both the CPU and the GPU for dense and sparse systems of linear equations, comparing the speed of each procedure.
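As a preview of the simplest of the three methods, here is a minimal sketch of the Jacobi iteration for A x = b (pure Python, example matrix chosen by the author of this sketch; it converges because the matrix is strictly diagonally dominant). The key property visible in the code is that every component of the new iterate depends only on the previous iterate, which is what makes Jacobi straightforward to parallelize, for example on a GPU.

```python
# Minimal Jacobi iteration sketch for A x = b.
# Converges when A is strictly diagonally dominant.
def jacobi(A, b, iterations=100, tol=1e-10):
    n = len(b)
    x = [0.0] * n  # initial guess
    for _ in range(iterations):
        # Each component is computed from the PREVIOUS iterate only,
        # so all n updates are independent and can run in parallel.
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Example: 4x + y = 9, 2x + 5y = 12 (solution x = 11/6, y = 5/3).
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 12.0]
print(jacobi(A, b))
```

Gauss-Seidel differs only in that each update immediately uses the components already computed in the current sweep, which typically converges faster but couples the updates; successive over-relaxation adds a weighting parameter on top of Gauss-Seidel.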