The project consisted of connecting a prosthetic arm to a wireless EEG headset (a device that reads brain signals) that would decode the user's Open/Close intent. To do this, we used Machine Learning to classify the EEG signals recorded during imagined Open/Close events. My participation in this project was during the IEEE Brain Hack 2017, where I built the communication between the server that held the open/close information and the prosthetic itself. The commands were transmitted as 1s (open) and 0s (close) from the Machine Learning pipeline and written to Firebase with a delay of a couple of seconds. A Raspberry Pi would read these commands and notify the Arduino whenever the command changed state (e.g. from 1 to 0). The Arduino would then rotate the servo motors controlling the prosthetic hand to match the user's intent.
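For concreteness, here is a minimal sketch of the Raspberry Pi bridge described above, assuming the pipeline writes the latest command to a single node (here called `command`) in a Firebase Realtime Database, read over Firebase's standard REST API, and that the Arduino listens on a USB serial port. The database URL, node name, and serial port are hypothetical placeholders, not the project's actual configuration.

```python
import time

import requests   # Firebase Realtime Database exposes nodes over a REST API
import serial     # pyserial, for the USB link to the Arduino

# Hypothetical database URL and node name; the real project's paths may differ.
FIREBASE_URL = "https://prosthetic-demo.firebaseio.com/command.json"
# Hypothetical serial port and baud rate for the Arduino.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

last_command = None
while True:
    # Poll Firebase for the most recent open/close command
    # (1 = open, 0 = close, as produced by the ML pipeline).
    response = requests.get(FIREBASE_URL, timeout=5)
    command = response.json()

    # Forward to the Arduino only when the command changes state
    # (e.g. from 1 to 0), so the servos are not re-driven every cycle.
    if command is not None and command != last_command:
        arduino.write(b"1" if command == 1 else b"0")
        last_command = command

    # Poll twice a second; the pipeline itself already updates Firebase
    # with a couple of seconds of delay, so this is fast enough.
    time.sleep(0.5)
```

On the other end, the Arduino sketch would simply read single bytes off the serial line and rotate the servos to the corresponding open or close position.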
This was my first experience with Machine Learning and Brain-Computer Interfaces. Even though we reached about 87% accuracy after the training periods, I realized that this technology has several limitations. Most importantly, it relies heavily on training data: if the user removed the headset and put it back on after a couple of hours, our accuracy would drop dramatically and we would need to redo the training for another hour or two. For real-world applications, the challenge is to account for the brain's day-to-day reorganization in order to keep accuracy high. Incorporating a feedback mechanism that lets the brain sense which position the prosthetic is in (from a sensorimotor point of view) could be beneficial for this purpose.