Connecting reality and virtuality with Human Machine Interface (HMI)
- Authors:
- Bastidas Peralta, Luciano
- Resource type:
- Undergraduate thesis
- Publication date:
- 2024
- Institution:
- Universidad de los Andes
- Repository:
- Séneca: Uniandes institutional repository
- Language:
- eng
- OAI Identifier:
- oai:repositorio.uniandes.edu.co:1992/74984
- Online access:
- https://hdl.handle.net/1992/74984
- Keywords:
- HMI; EMG; Human machine interface; Electromyography; Machine learning; Deep learning; Electrodes; Engineering
- Rights:
- openAccess
- License:
- Attribution 4.0 International
Summary: This thesis explores the development and application of machine learning models to predict hand gestures using electromyography (EMG) data. The primary focus is on leveraging a convolutional neural network (CNN) for detecting and analyzing hand gestures based on electrical signals from muscle contractions. Data acquisition is facilitated through a Cyton OpenBCI board, with electrodes placed on the forearm. Initial challenges with electrode placement using kinesiology tape were addressed by designing a 3D-printed armband to ensure secure and consistent electrode positioning. Various machine learning models, including simple and slightly larger CNNs, as well as a pre-trained 3D ResNet, were tested for their accuracy and training efficiency. The study reveals that while the models can achieve reasonable accuracy, incorporating additional parameters such as the angular velocity of finger joints significantly complicates the training process. A new graphical user interface (GUI) was developed to distinguish between dynamic and static gestures, enhancing real-time applicability. The results indicate that although current models can be integrated into real-life scenarios, further improvements are needed for effective real-time prediction, particularly for gestures involving thumb movements. Future work will focus on refining the model and incorporating more sophisticated data processing techniques to improve accuracy and usability in practical applications.
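As a rough illustration of the kind of model the summary describes, the sketch below shows a minimal 1D-CNN forward pass over a multi-channel EMG window (convolution, ReLU, global average pooling, dense softmax head). All shapes are assumptions for illustration, not values from the thesis: 8 forearm channels (the Cyton board exposes 8), a 200-sample window, and 5 gesture classes.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1D convolution over the time axis.
    x: (channels, time), w: (filters, channels, kernel), b: (filters,)"""
    f, c, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((f, t_out))
    for i in range(t_out):
        # each output step is the kernel's dot product with one time window
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return out

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_gesture(window, params):
    """Forward pass: conv -> ReLU -> global average pool -> dense -> softmax."""
    h = relu(conv1d(window, params["w1"], params["b1"]))
    pooled = h.mean(axis=1)                        # (filters,)
    logits = params["w2"] @ pooled + params["b2"]  # (n_gestures,)
    return softmax(logits)

# Hypothetical shapes: 8 EMG channels, 200-sample window, 5 gestures.
rng = np.random.default_rng(0)
params = {
    "w1": rng.normal(scale=0.1, size=(16, 8, 9)),  # 16 filters, kernel width 9
    "b1": np.zeros(16),
    "w2": rng.normal(scale=0.1, size=(5, 16)),
    "b2": np.zeros(5),
}
window = rng.normal(size=(8, 200))  # one window of simulated EMG
probs = predict_gesture(window, params)
print(probs.shape)  # one probability per gesture class
```

In practice the thesis's models would be trained (e.g. with backpropagation in a deep learning framework) rather than used with random weights; this fragment only makes the data flow of a per-window gesture classifier concrete.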