Multi-channel neural sensing and embedded control for gesture recognition

Sähkötekniikan korkeakoulu (School of Electrical Engineering) | Master's thesis

Date

2024-08-19

Major/Subject

Smart Systems Integrated Solutions

Mcode

ELEC3064

Degree programme

Master’s Programme in Smart Systems Integrated Solutions (Erasmus Mundus)

Language

en

Pages

120

Abstract

Upper-limb amputees face many obstacles in daily life, because most everyday activities rely on hand function. Even after amputation, the remaining motor neurons can still generate electrical signals, which can be detected and leveraged for prosthetic hand control. Myoelectric control based on surface electromyography (EMG) is a non-invasive method for sensing these signals, as EMG contains neural codes that can be interpreted as muscle contraction commands. Traditional myoelectric control systems, which use simple algorithms or simple machine learning models as hand-gesture classifiers, suffer from robustness issues caused by shifts in electrode positions on the amputee's forearm.

In this master's thesis, three machine learning models, Logistic Regression, Feed-Forward Neural Network, and Self-Supervised Learning, are employed as hand-gesture classifiers in the prosthetic control system in order to compare their performance under different electrode-shift scenarios. An eight-electrode armband is used to sense EMG signals from the subject's forearm. A graphical user interface (GUI) for the myoelectric control system was developed in the PyQt framework, allowing measurement and collection of EMG data, training of machine learning classifiers, and real-time detection of hand gestures based on the EMG signals and the trained classifiers.

Furthermore, a comprehensive analysis of training and testing across multiple shift positions was conducted with three approaches, the "No Shifting Approach", the "Naive Training Approach", and the "Advanced Training Approach", in order to simulate and address the shift-robustness issue encountered in realistic scenarios. Each approach was evaluated using two sub-approaches: leveraging the full set of five features (RMS, MAV, WL, ZC, SSC) extracted from the EMG signals, and using only the RMS feature.
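The five time-domain features listed above (RMS, MAV, WL, ZC, SSC) are standard in the EMG literature. A minimal sketch of how they can be computed for one analysis window of a single channel follows; the function name, window, and amplitude threshold are illustrative, not the thesis code:

```python
import numpy as np

def extract_features(window, threshold=0.01):
    """Compute five common time-domain EMG features for one window.

    The threshold guards the zero-crossing and slope-sign-change counts
    against low-amplitude noise; its value here is arbitrary.
    """
    x = np.asarray(window, dtype=float)
    d = np.diff(x)
    rms = np.sqrt(np.mean(x ** 2))       # Root Mean Square
    mav = np.mean(np.abs(x))             # Mean Absolute Value
    wl = np.sum(np.abs(d))               # Waveform Length
    # Zero Crossings: sign changes whose amplitude step exceeds the threshold
    zc = int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(d) > threshold)))
    # Slope Sign Changes: reversals in slope direction above the threshold
    ssc = int(np.sum((d[:-1] * d[1:] < 0)
                     & ((np.abs(d[:-1]) > threshold) | (np.abs(d[1:]) > threshold))))
    return {"RMS": rms, "MAV": mav, "WL": wl, "ZC": zc, "SSC": ssc}
```

For an eight-electrode armband, applying this per channel and concatenating the results yields the full feature vector per window.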
The performance of the hand-gesture classifiers was assessed in two scenarios: a simple challenge involving five hand motions with distinct features, and a more complex challenge involving nine hand motions with nearly identical features. The results demonstrate high classification accuracy for the nine motions, with all three learning models (Logistic Regression, Feed-Forward Neural Network, and Self-Supervised Learning) achieving up to 88.59%, 90.54%, and 90.94% accuracy, respectively, in the "No Shifting Approach" with full feature extraction. In the more complex scenarios, the Feed-Forward Neural Network and Self-Supervised Learning models show superior performance, with 86% and 74% accuracy respectively, compared to 50% for the Logistic Regression model. This underscores the importance of comprehensive training across multiple shift positions for enhancing gesture-recognition performance. The "Naive Training Approach", in which the training dataset is limited, nevertheless remains an open challenge: even the more complex Feed-Forward Neural Network and Self-Supervised Learning models reach only 33.81% and 21% accuracy, respectively. This points to several directions for future development: integration of the hardware into a wearable device, optimization of the machine learning algorithms, and creation of a comprehensive database of hand gestures.
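The benefit of training across multiple shift positions can be illustrated with a small synthetic sketch (not the thesis code or data): a Logistic Regression classifier is trained on feature windows pooled from several simulated electrode shifts, modelled here as a constant offset added to the features, and then evaluated at an unseen intermediate shift. All class means, offsets, and dimensions are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_gestures, n_features = 5, 40          # e.g. 8 channels x 5 features per channel

# Synthetic class centres for each gesture in feature space
gesture_means = rng.standard_normal((n_gestures, n_features)) * 3

def windows_at_shift(offset, n=30):
    """Fake feature windows per gesture at one electrode shift,
    modelled as a constant offset plus Gaussian noise."""
    X = np.vstack([gesture_means[g] + offset + rng.standard_normal((n, n_features))
                   for g in range(n_gestures)])
    y = np.repeat(np.arange(n_gestures), n)
    return X, y

# Pool training data from several shift positions ("Advanced Training" idea)
train_offsets = [0.0, 0.5, 1.0]
Xs, ys = zip(*(windows_at_shift(o) for o in train_offsets))
X_train, y_train = np.vstack(Xs), np.concatenate(ys)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on an unseen intermediate shift position
X_test, y_test = windows_at_shift(0.75)
print(f"accuracy at unseen shift: {clf.score(X_test, y_test):.2f}")
```

Training on a single offset (the "Naive Training" analogue) and testing at a distant one degrades accuracy in the same sketch, mirroring the robustness gap the thesis reports.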

Supervisor

Vujaklija, Ivan

Thesis advisor

Taleshi, Mansour

Keywords

prosthetic, electromyography, myoelectric control, gesture classification, machine learning, feature extraction
