In this paper I investigate how computer-generated, text-to-speech synthesized acoustic icons can be used in real-life environments.
I explore the methodology, history and characteristics of acoustic icons, with special emphasis on their development, and examine one of the state-of-the-art computer-generated icon types in more detail. I develop a set of customized icons using both male and female computer-generated voices, then record human-uttered versions of the same icons so that I can evaluate how interchangeable the two are. This interchangeability serves as a measure of the quality of the computer-based voice synthesis.
To simulate real-life circumstances, I integrate these icon sets into a custom-designed working prototype to demonstrate their usability in practice: a weather reporter Android application developed for this purpose, which I give to a group of testers to evaluate.
My ultimate goal is to find out whether these computer-generated sounds match human speech in their capacity to express feelings and thoughts. I seek to determine whether they can convey relevant information to the user, enhancing or even replacing the visual experience. Most of all, I want to determine whether users prefer them over human-recorded sounds. If so, these synthesized sounds are suitable for real-life use and can replace human voice recordings.