Results and Challenges

We faced several challenges in creating this project. One of the greatest came in gesture recognition. Although we trained models capable of inference on user-generated gestures, the results were noisy and inaccurate because of the variance in lighting conditions and possible hand positions. We eventually got finger detection working, but overall gesture recognition was not as robust as originally intended. Furthermore, because scikit-learn was difficult to install on the Pi and time constraints were pressing, we could not verify whether gesture recognition using the trained model would have been functional on the Pi itself.

Finally, the MagicMirror user-interface API is written in JavaScript while our gesture recognition software is written in Python. Controlling the npm application synchronously with the Python interface through hardware proved preferable to direct software wrappers, since those wrappers could not keep up with user input in real time. The end product nonetheless had a functional user interface that changed at a button press, as common smart mirror technologies do, with the added advantage of counting fingers once the Raspberry Pi's GPIO output was configured properly in the backend. Though not everything worked as intended, this project was still a meaningful learning experience that produced a creative and useful application in its own right.
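To make the Python-to-JavaScript hardware bridge concrete, here is a minimal sketch of how the Python side might signal the front end through GPIO. This is an illustration, not our exact implementation: the pin numbers (BCM 17, 27, 22) and the three-bit encoding of the finger count are assumptions chosen for the example.

```python
# Hypothetical sketch: encode a detected finger count (0-5) onto three
# GPIO pins as a binary value, so the MagicMirror JavaScript side can
# poll pin states instead of calling into Python directly.
# Pin numbers (BCM 17, 27, 22) are illustrative assumptions.

FINGER_PINS = [17, 27, 22]  # least-significant bit first

def finger_count_to_pin_states(count):
    """Map a finger count (0-5) to a high/low state for each pin."""
    if not 0 <= count <= 5:
        raise ValueError("finger count must be between 0 and 5")
    return [(count >> bit) & 1 for bit in range(len(FINGER_PINS))]

def write_pins(count, gpio=None):
    """Drive the pins if a GPIO module (e.g. RPi.GPIO) is passed in;
    otherwise just return the computed states, which keeps this
    testable off the Pi."""
    states = finger_count_to_pin_states(count)
    if gpio is not None:
        for pin, state in zip(FINGER_PINS, states):
            gpio.output(pin, state)
    return states
```

Passing the GPIO module in as a parameter keeps the encoding logic testable on a machine without the Pi's hardware libraries.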
All code for this project can be found in this zipped folder.

Future Work

The first improvement we propose is installing scikit-learn and its dependencies on the Pi. Because we ran out of time, we were never able to fully evaluate how effective our machine learning model was, so for demonstration purposes we showed finger recognition using OpenCV and Python logic instead. The second improvement we propose is a better backend application that connects the JavaScript updates to the gesture recognition directly, rather than relying on hardware GPIO circuitry.
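Had scikit-learn installed cleanly on the Pi, the gesture model could have started as something as simple as a nearest-neighbor classifier over hand-shape features. The features, values, and labels below are entirely illustrative, not our project's actual training data.

```python
# Illustrative sketch only: a k-nearest-neighbor gesture classifier over
# hand-shape feature vectors. The feature choices (convexity-defect count,
# hull-area ratio, bounding-box aspect ratio) and the sample values are
# made up for demonstration.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [defect count, hull-area ratio, bounding-box aspect ratio]
X_train = [
    [0, 0.95, 1.0],  # fist
    [1, 0.80, 1.2],  # one finger extended
    [4, 0.60, 1.1],  # open palm
]
y_train = ["fist", "point", "open_palm"]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)

# A slightly noisy new observation near the "point" prototype
print(clf.predict([[1, 0.78, 1.15]])[0])  # → point
```

In practice the feature vectors would come from the OpenCV contour analysis already used for finger counting, so this classifier would slot in after the existing detection pipeline.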

Work Division

Both partners contributed equally to this project. Asena was in charge of the front end, working with the MagicMirror API and interfacing the GPIO with the provided JavaScript code. Shubhom handled the gesture recognition portion of the project, working with OpenCV, TensorFlow, and scikit-learn. Both of us worked on creating this website. Here's a picture of us!

wow!!!!