EMGP-AI
Project


This project is supported by the Romanian National Authority for Scientific Research and Innovation,

UEFISCDI

The complete project can be found here:
GITLAB DATASET

Description

Name: Embedded Platform for EMG acquisition with Artificial Intelligence software toolkit (EMGP-AI)

Project number: PN-III-P2-2.1-PED-2019-2392

Contract number: 425PED / 2020

Period: 27.10.2020 - 22.10.2022

Research fields: Machine Learning, Biomedical Signal Processing

General Objective
The goal of this project is to develop an intelligent platform (hardware and software) capable both of recording, in real time, the electrical activity of the muscles produced in response to motor nerve stimuli (Electromyography - EMG) and of providing a set of Artificial Intelligence (AI) software tools for the development of applications in various fields: medicine, security, entertainment, etc. The platform will be structured in two components - the EMG signal acquisition module and the AI software tools. The EMG signal acquisition module will be a non-invasive device that can record and monitor EMG activity from the forearm. Its architecture will consist of a network of EMG sensors and a central unit (microcontroller/microcomputer) responsible for synchronously recording the signals and communicating with the AI software tools. We will create an EMG dataset covering 15 hand gestures, which we will use for an extensive analysis of all the methods and algorithms included in the AI software toolkit. For validation, we will develop an Automatic Gesture Recognition (AGR) system on top of this platform. The system will classify the 15 gestures (identical to those in the dataset) in real time: it will acquire the EMG signal, process it, and finally associate the input signal with the corresponding gesture, so that the system's output is the gesture executed by the user.
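The real-time flow just described (acquire, buffer, classify, output a gesture) can be sketched as a minimal loop. Everything here is a hypothetical illustration, not the project's actual API: the window length, gesture names, and the placeholder `classify` function are all assumptions standing in for the real device driver and trained model.

```python
# Minimal sketch of a real-time AGR loop. Device access, model, and gesture
# names are hypothetical placeholders, not the project's implementation.
from collections import deque
import numpy as np

GESTURES = [f"gesture_{i}" for i in range(15)]  # 15 classes, names assumed
WIN = 200                                       # window length in samples (assumed)

def classify(window, rng):
    """Placeholder for the trained AI model: maps one EMG window to a label."""
    return GESTURES[rng.integers(len(GESTURES))]

def agr_loop(sample_stream, rng):
    """Buffer incoming samples; emit one gesture label per full window."""
    buf = deque(maxlen=WIN)
    labels = []
    for sample in sample_stream:
        buf.append(sample)
        if len(buf) == WIN:                     # a full window has accumulated
            labels.append(classify(np.asarray(buf), rng))
            buf.clear()                         # start collecting the next window
    return labels

rng = np.random.default_rng(0)
labels = agr_loop(iter(np.zeros(1000)), rng)    # 1000 samples -> 5 windows
```

In a real deployment, `sample_stream` would be fed by the acquisition module over its communication link, and `classify` would wrap the trained gesture model.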

Towards this goal, we have set three objectives:

  • O1. Design and develop an EMG Acquisition Module
  • O2. Collect an EMG dataset for gesture classification
  • O3. Design and develop an Automatic Gesture Recognition (AGR) framework

Motivation
In recent years, a significant amount of research has focused on human-computer interaction (HCI), augmented reality (AR), and virtual reality (VR). Hand gestures are a natural and easy way to interact with the devices around us, taking the IoT experience to a different, more organic level. In this project we aim to develop an intelligent platform (hardware and software) able to capture, in real time, the electrical activity of the muscles in response to nerve stimulation (Electromyography - EMG) and to offer the Artificial Intelligence software tools needed to develop applications in various domains: healthcare, security, entertainment, etc. The platform will be structured in two components - the EMG Acquisition Module and the Artificial Intelligence software tools. The EMG Acquisition Module will be a non-invasive acquisition device that can collect and monitor EMG activity at the forearm level. Its architecture will be composed of a network of several EMG sensors and a central unit (microcontroller/microcomputer) responsible for synchronizing the captured signals and communicating with the Artificial Intelligence software tools. We will create an EMG dataset of 15 hand gestures, followed by an extensive analysis of every method and algorithm included in the Artificial Intelligence software tools. For validation purposes, we will develop an Automatic Gesture Recognition (AGR) framework using the platform. The AGR will be a complex system that enables real-time labeling of the 15 gestures of the dataset: it will cover the acquisition of the signal to be labeled, its pre-processing, and finally its classification, so that the system's output is the gesture the user has performed.

Working Plan
The project is structured in 3 stages, corresponding to the reporting stages. Each stage builds on the results of the previous stages and studies in the project, as follows:
  • Stage 1: Project planning and hardware component design for the data acquisition module
  • Stage 2: Development of the acquisition module and creation of EMG resources for training and testing the gesture classification module
  • Stage 3: Complete implementation of the automatic gesture recognition system based on biometric data (EMG)

Expected Results
The expected results for each stage of the project are as follows:
Stage 1:
Period: 27.10.2020 - 31.12.2020

  • Design of the acquisition module

Stage 2:
Period: 01.01.2021 - 22.11.2021

  • Development of the data acquisition hardware module
  • Development of the data acquisition software module
  • Acquisition of the EMG dataset for 15 gestures
  • Data segmentation software

Stage 3:
Period: 23.11.2021 - 22.10.2022

  • Study on the state of the art in the field of gesture recognition
  • The concept and software implementation of the classifier
  • Automatic gesture recognition system, based on Artificial Intelligence techniques
  • Dissemination at 2 international conferences and 1 publication at a specialized conference

Final Results
In this project, a complete system for classifying EMG signals was created. In the first stage, a device for recording EMG signals was developed in the form of a portable bracelet. Using this device, a dataset of electromyographic signals was created, acquired from 50 subjects, both men and women. Each participant was recorded twice while performing each of the 15 gestures.
To build machine learning models that classify gestures from EMG signals, the acquired dataset was first filtered to reduce noise. Each signal was then segmented into short time windows, from which representative features of the EMG signal were extracted. We also experimented with methods to increase the robustness of the automatic gesture recognition system.
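The preprocessing chain above (filtering, windowing, feature extraction) can be sketched as follows. This is an illustrative example, not the project's actual code: the sampling rate, filter band, window sizes, and the choice of mean-absolute-value and RMS features are assumed values typical for surface EMG.

```python
# Sketch of an EMG preprocessing chain: band-pass filtering, fixed-length
# windowing, and classic per-window time-domain features (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate in Hz

def bandpass(emg, low=20.0, high=450.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass to suppress drift and high-frequency noise."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def segment(emg, win=200, step=100):
    """Split a 1-D signal into overlapping windows (win/step in samples)."""
    return np.stack([emg[i:i + win] for i in range(0, len(emg) - win + 1, step)])

def features(windows):
    """Mean absolute value (MAV) and root mean square (RMS) per window."""
    mav = np.mean(np.abs(windows), axis=1)
    rms = np.sqrt(np.mean(windows ** 2, axis=1))
    return np.column_stack([mav, rms])

emg = np.random.default_rng(0).standard_normal(2000)  # stand-in for one channel
X = features(segment(bandpass(emg)))                  # one feature row per window
```

The resulting feature matrix `X` (one row per window) is what a gesture classifier would be trained on; real pipelines often add further features such as zero-crossing rate or waveform length.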
The final system achieves a classification accuracy of 98.67% across the 15 gestures. It therefore offers high performance and can be integrated into smart prostheses that help people who have lost an upper limb. The final model is illustrated below:
Figure 1 - Model top view
Figure 2 - Model front view
Figure 3 - Model worn on the arm (example 1)
Figure 4 - Model worn on the arm (example 2)

Team

University POLITEHNICA of Bucharest

Corneliu Burileanu

Project director

Anamaria Rădoi

Senior researcher

Ana Neacșu

Assistant researcher

George Cioroiu

Assistant researcher

Georgian Nicolae

Assistant researcher

Cristina Andronache

Assistant researcher

Marian Negru

Assistant researcher

Scientific Reports

  • Report stage 1/2020
  • Report stage 2/2021
  • Report stage 3/2022

Publications

Contact


corneliu.burileanu@upb.ro

ana_antonia.neacsu@upb.ro