Search for visual content is no longer novel. An image search engine is supposed to deliver relevant images according to user-defined search keys. However, determining relevance is not an easy task, and it is especially difficult for certain search queries. This is why even the most widely used search engines (e.g. Google, Bing and Flickr) are still being improved today.
In my thesis I dealt with semantic image search in a setting where no metadata were available, so only the visual content of the images could be used. To search unknown photos in this way, a training set is required, based on which a machine learning method analyzes the images in the search space, called test images. The results of this analysis are then available at search time.
In my thesis I present the semantic image search system I built, which handles search keys formed by combining multiple objects or concepts simultaneously. Prior to the search, I use state-of-the-art image processing methods, such as the Fisher vector representation and the C-SVC classifier. I implemented several methods that resolve combined image search using the semantic information computed offline.
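To make the pipeline concrete, the following is a minimal sketch of how per-concept C-SVC scores can be combined to answer a multi-concept query. It is illustrative only: the random feature vectors stand in for the Fisher vectors of the thesis, the concept names and the min-combination rule are assumptions, and scikit-learn's `SVC` (a libsvm-based C-SVC) is used in place of the thesis implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in features: in the thesis these would be
# Fisher vectors computed from local image descriptors.
n_train, n_test, dim = 200, 50, 64
X_train = rng.normal(size=(n_train, dim))
X_test = rng.normal(size=(n_test, dim))

# One binary C-SVC per concept (one-vs-rest), as in Pascal VOC
# image classification; labels here are random placeholders.
concepts = ["person", "dog"]
labels = {c: rng.integers(0, 2, size=n_train) for c in concepts}

scores = {}
for c in concepts:
    clf = SVC(C=1.0, kernel="linear")  # C-SVC from libsvm
    clf.fit(X_train, labels[c])
    # Signed distance to the decision boundary serves as a relevance score.
    scores[c] = clf.decision_function(X_test)

# Combined query "person AND dog": rank test images by the weaker of the
# two per-concept scores, so both concepts must score well (an assumed
# combination rule, not necessarily the one used in the thesis).
combined = np.minimum(scores["person"], scores["dog"])
ranking = np.argsort(-combined)  # indices of best-matching test images first
print(ranking[:5])
```

The key point is that classification happens offline; at query time only the stored per-concept scores need to be combined and sorted.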
In this paper I focus on search terms composed of two or three objects. I evaluate these methods and compare their results to those of three well-known web image search engines (Google, Bing and Flickr). As training data I used the training set of the Pascal VOC image classification competition, downloaded from its webpage; these photos were originally collected from Flickr.