Since the introduction of the first camera in 1839, capturing still images and video has become part of our everyday lives. High-resolution, high-speed video is gaining popularity not just on video sharing websites and in our mobile phones, but in the world of embedded systems as well. As image processing and data gathering systems grow more complex, the demand to record even the tiniest details and changes grows with them.
Larger resolutions and faster refresh rates bring higher bandwidth requirements, so more sophisticated systems are needed to record, process, and compress this amount of data. Developing such systems is far from effortless, as video sources come with a huge diversity of interfaces.
The objective of my thesis is to design an embedded video processing system that can accept video formats and interfaces in use today, and to develop a proof of concept. The system should also be capable of preprocessing the video signal, compressing it with an efficient algorithm, and transmitting the compressed stream to other systems.
To achieve this, I gather information about widely used video formats and interfaces, as well as the modern compression algorithms currently available. I also survey the hardware solutions that are available and suitable for this task. With this information in hand, I select the most fitting combination.
In the development phase, I create the necessary interfaces between the system's components, then assemble a video processing pipeline that can preprocess and efficiently compress the incoming stream.