The need for audio signal processing, and the means to perform it, have existed for a long time, but the spread of digital devices has created numerous new opportunities. As a result, some processing tasks that can also be implemented with analogue signals have become simpler, and even tasks that cannot be solved in the analogue domain have become feasible.
The term audio signal processing can mean two things. One is the set of methods used to enrich sound (e.g. reverb, delay, and various other effects); the other is the analysis of a recording's content. If the audio contains music, the purpose can be either the recognition of a known piece or the description of the musical parameters present in the signal. Both approaches can be applied to live or recorded music.
My thesis deals with the conversion of audio signals into a form quantized in both time and frequency, which therefore contains the features that are also described in sheet music.
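To make the notion of frequency quantization concrete, the sketch below maps a measured frequency to the nearest equal-tempered semitone, expressed as a MIDI note number and a note name. This is a standard illustrative mapping (A4 = 440 Hz = MIDI 69), not the method developed in the thesis; the function names are my own.

```python
import math

def freq_to_midi(freq_hz: float) -> int:
    """Quantize a frequency to the nearest MIDI note number
    (equal temperament, A4 = 440 Hz = MIDI note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_name(midi: int) -> str:
    """Convert a MIDI note number to a note name such as 'C4'."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    return f"{names[midi % 12]}{midi // 12 - 1}"

print(midi_to_name(freq_to_midi(440.0)))   # -> A4
print(midi_to_name(freq_to_midi(261.63)))  # -> C4 (middle C)
```

Time quantization works analogously: note onsets and durations are snapped to a metrical grid rather than a pitch grid.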
The first part of my work examines the achievements in this field: the different methods used to determine these features, and their effectiveness for various musical instruments and for singing.
After the literature review, an algorithm is presented that primarily consists of my own ideas and that can correctly determine these features, within a given pitch range and under certain limitations, if the test signal is monophonic.
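The thesis does not specify the algorithm at this point, but a common baseline for monophonic pitch determination, useful for comparison, is autocorrelation: the signal is correlated with delayed copies of itself, and the lag of the strongest peak within the allowed pitch range gives the period. A minimal sketch, assuming a clean synthetic test tone:

```python
import numpy as np

def estimate_pitch_autocorr(signal, sample_rate, f_min=50.0, f_max=2000.0):
    """Estimate the fundamental frequency of a monophonic signal by
    picking the autocorrelation peak within the allowed lag range."""
    signal = signal - np.mean(signal)          # remove DC offset
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]               # keep non-negative lags only
    lag_min = int(sample_rate / f_max)         # smallest lag = highest pitch
    lag_max = int(sample_rate / f_min)         # largest lag = lowest pitch
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

fs = 44100
t = np.arange(int(0.1 * fs)) / fs
tone = np.sin(2 * np.pi * 440.0 * t)           # 100 ms test tone at 440 Hz
print(estimate_pitch_autocorr(tone, fs))       # close to 440 Hz
```

Real recordings require more care (windowing, octave-error handling, onset detection), which is exactly where the limitations mentioned above arise.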
In the next section, I examine the results of running the developed algorithms on different musical instruments and recordings, and I suggest possible improvements and extensions.
Finally, the conditions and possibilities of adapting the algorithm for real-time operation are reviewed.