During my work, I implemented an example-based stylized visualization algorithm that produces a stylized image of a synthetic scene from several illumination channels and a hand-drawn exemplar. The algorithm is mainly inspired by StyLit, which optimizes the output image using relatively complex methods. Since StyLit builds on larger patches and then optimizes them, the question arose whether similar results could be achieved with simpler patch-based synthesis. Increasing the number of illumination channels gives a more accurate description of the scene and helps ensure that patches are picked from the right place in the exemplar.
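To illustrate why extra channels help, the following hypothetical sketch (not the thesis implementation; all names and values are invented) compares patches as a sum of squared differences over all guidance channels. Two exemplar patches that are identical in the full-render channel become distinguishable once a second channel, such as direct illumination, is added:

```python
# Hypothetical sketch: multi-channel patch distance. A patch is a list of
# channels; each channel is a flat list of pixel values.

def patch_distance(channels_a, channels_b):
    """Sum of squared differences accumulated over all guidance channels."""
    total = 0.0
    for ca, cb in zip(channels_a, channels_b):
        total += sum((x - y) ** 2 for x, y in zip(ca, cb))
    return total

# Two exemplar patches that look identical in the full-render channel...
exemplar_1 = [[0.5, 0.5, 0.5, 0.5]]
exemplar_2 = [[0.5, 0.5, 0.5, 0.5]]
target     = [[0.5, 0.5, 0.5, 0.5]]

# ...are indistinguishable when only one channel is compared:
print(patch_distance(target, exemplar_1) == patch_distance(target, exemplar_2))  # True

# Adding a second channel (e.g. direct illumination) breaks the tie:
exemplar_1.append([1.0, 1.0, 1.0, 1.0])  # patch lies in directly lit area
exemplar_2.append([0.0, 0.0, 0.0, 0.0])  # patch lies in shadow
target.append([1.0, 1.0, 1.0, 1.0])      # target region is directly lit

best = min((exemplar_1, exemplar_2), key=lambda e: patch_distance(target, e))
print(best is exemplar_1)  # True
```

With a single channel both candidates tie at zero distance; the second channel makes the correctly lit patch the unique nearest neighbour.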
There are several approaches to texture synthesis; in the course of this work I became acquainted with pixel-based and patch-based methods. Pixel-based synthesis takes the pixel as its base unit and builds the output one pixel at a time. Patch-based synthesis extracts whole regions of the exemplar image and composes the output from them. I will examine both methodologies and discuss their advantages and disadvantages.
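The patch-based idea can be sketched in a few lines. The example below is a minimal hypothetical illustration, not the thesis implementation: it uses 1-D "images" for brevity, and for each output tile it finds the exemplar tile whose guidance values match best, then copies the corresponding style tile.

```python
# Minimal sketch of guided patch-based synthesis on 1-D "images".

PATCH = 2  # tile width

def ssd(a, b):
    """Sum of squared differences between two equal-length tiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def synthesize(src_guide, src_style, tgt_guide):
    """For each target tile, copy the style tile whose guidance matches best."""
    out = []
    candidates = range(0, len(src_guide) - PATCH + 1)
    for i in range(0, len(tgt_guide), PATCH):
        tgt = tgt_guide[i:i + PATCH]
        best = min(candidates, key=lambda j: ssd(src_guide[j:j + PATCH], tgt))
        out.extend(src_style[best:best + PATCH])
    return out

src_guide = [0.0, 0.0, 1.0, 1.0]   # exemplar guidance: dark half, bright half
src_style = ['.', '.', '#', '#']   # artist's strokes for dark / bright regions
tgt_guide = [1.0, 1.0, 0.0, 0.0]   # new scene: bright half comes first

print(synthesize(src_guide, src_style, tgt_guide))  # ['#', '#', '.', '.']
```

A pixel-based method would run the same matching loop with `PATCH = 1`, deciding each output pixel independently; the patch-based variant transfers whole regions of the artist's strokes at once.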
I will present the course of development from the initial design to the final implementation. I will describe the user interface, which I developed using OpenGL, and mention the techniques and libraries used, such as the OpenMP multiprocessing API and the FreeImage library.
Finally, I will evaluate the results achieved and examine various options for improvement.