Brain-Computer Interfaces (BCIs) are attracting growing attention. Their use, however, is limited by the devices' size and the hostile conditions of the target environments. One solution to this problem is not to take the device to the application area, but to reproduce that area without the interfering elements. Virtual Reality (VR) systems, among other technologies, are capable of this, and in turn they gain a new, natural input device.
During my work I researched a chosen BCI device, its capabilities, and the supplied signal processing and learning engine (EmoEngine). In addition, I studied the VirCA (Virtual Collaboration Arena) VR system, developed by the 3DICC laboratory at MTA SZTAKI. I familiarized myself with its component-based architecture and with the development of new components (CyberDevices). I implemented different solutions for interconnecting the two systems and analyzed, from each system's viewpoint, what benefits this integration could offer. For BCI systems, it provides a test platform and a new, immersive environment, while VR systems gain an alternative means of avatar control and object interaction. Furthermore, I studied how this immersive, stimuli-rich environment influences the efficiency of BCI systems, using the EmoEngine as a benchmark.
While using the EmoEngine I encountered several problems, such as the lengthy calibration process, unreliable control, and the black-box model (the manufacturer publishes nothing about the algorithms used). In conclusion, I propose a way to address these problems by presenting the P300 and SSVEP paradigms and their applicability in the system.