In today's world, it is crucial that the processes we rely on run as quickly as possible. Whether it is a chat message or an online transaction, even a second of processing time makes us nervous. When hunting for errors, our patience wears even thinner, yet often we have nothing faster than a linear search. Log files produced by Windows are ordered only by the time of each entry; therefore, we cannot apply an efficient search method to find the source of a problem.
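The linear search described above can be sketched as follows. This is an illustrative example (the function and variable names are ours, not from the thesis): with lines ordered only by timestamp, every line must be inspected until a match is found.

```cpp
#include <string>
#include <vector>
#include <cstddef>

// Linear scan: the only option when log lines are ordered by timestamp
// alone. Returns the index of the first line containing `pattern`,
// or -1 if no line matches.
long find_first_match(const std::vector<std::string>& lines,
                      const std::string& pattern) {
    for (std::size_t i = 0; i < lines.size(); ++i) {
        if (lines[i].find(pattern) != std::string::npos) {
            return static_cast<long>(i);
        }
    }
    return -1;  // pattern not found in any line
}
```

In the worst case every line of the log is read, which is why search time grows linearly with the size of the file.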
If we run the search on multiple threads, it finishes in a fraction of the time we previously experienced. Current CPUs offer 4-8 cores, which already yields a considerable performance gain. But every personal computer contains something else, capable of launching millions of threads: the GPU. To utilize these threads, we can use the OpenCL framework on our graphics card, which, instead of rendering images, runs data-mining algorithms that look for patterns. As the thesis shows, parallelization can multiply performance, so that even a log file of more than two hundred thousand lines can be processed in a matter of seconds.
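The idea of splitting the search across threads can be sketched on the CPU with `std::thread` (the thesis itself offloads the work to the GPU via OpenCL; this simplified sketch, with names of our own choosing, only illustrates the chunking principle):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

// Splits the log into one chunk per thread and counts matching lines
// in parallel. Each thread writes only its own slot of `partial`,
// so no locking is required.
std::size_t parallel_count_matches(const std::vector<std::string>& lines,
                                   const std::string& pattern,
                                   unsigned num_threads) {
    if (num_threads == 0) num_threads = 1;
    std::vector<std::size_t> partial(num_threads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = (lines.size() + num_threads - 1) / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(begin + chunk, lines.size());
            for (std::size_t i = begin; i < end; ++i)
                if (lines[i].find(pattern) != std::string::npos)
                    ++partial[t];
        });
    }
    for (auto& w : workers) w.join();   // wait for every chunk to finish
    std::size_t total = 0;
    for (std::size_t c : partial) total += c;  // combine per-thread results
    return total;
}
```

On a GPU the same pattern applies at a much finer grain: instead of a handful of chunks, each work-item can examine a single line, which is what makes millions of threads useful.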
Parallel processing can greatly increase the speed of our programs. As the amount of data we handle grows steadily, multithreaded programming is becoming more and more common, and will soon be a top priority.