Mel Frequency Cepstral Coefficients Enhance Imagined Speech Decoding Accuracy from EEG
Abstract
Imagined speech has recently become an important neuro-paradigm in the field of brain-computer interface (BCI) research. Electroencephalogram (EEG) recordings made during imagined speech production are difficult to decode accurately, owing to factors such as weak neural correlates, limited spatial specificity, and signal noise introduced during recording. In this study, a dataset of EEG recordings obtained during production of eleven different units of imagined speech is used to investigate the relative effects of different features on classification accuracy. Three distinct feature-sets are computed from the data: a time-domain feature-set, a frequency-based feature-set and a feature-set consisting only of mel frequency cepstral coefficients (MFCCs). Each feature-set is used to train a decision tree classifier and a support vector machine (SVM) classifier. The results indicate that MFCC features produce greater discrimination of imagined speech EEG recordings than the other features evaluated, and that phonological differences...
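The abstract does not give the authors' exact extraction parameters, but the MFCC pipeline it names follows a standard recipe: frame the signal, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. The sketch below applies that recipe to a single EEG channel; the sampling rate, frame length, filter count and coefficient count are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc_features(signal, fs=256, n_filters=20, n_ceps=12,
                  frame_len=0.25, frame_step=0.1):
    """Compute MFCCs from a 1-D signal (all parameters are assumptions)."""
    # Frame the signal into overlapping windows and apply a Hamming window.
    flen = int(frame_len * fs)
    fstep = int(frame_step * fs)
    n_frames = 1 + max(0, (len(signal) - flen) // fstep)
    frames = np.stack([signal[i * fstep: i * fstep + flen]
                       for i in range(n_frames)])
    frames = frames * np.hamming(flen)

    # Power spectrum of each frame.
    nfft = 512
    spec = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft

    # Triangular mel filterbank spanning 0 Hz to the Nyquist frequency.
    high_mel = 2595 * np.log10(1 + (fs / 2) / 700)
    mel_pts = np.linspace(0.0, high_mel, n_filters + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((nfft + 1) * hz_pts / fs).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # Log filterbank energies, then a DCT to decorrelate them
    # into cepstral coefficients; keep the first n_ceps.
    energies = np.log(spec @ fbank.T + np.finfo(float).eps)
    return dct(energies, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example: features from a synthetic 2 s single-channel trace at 256 Hz.
rng = np.random.default_rng(0)
trial = rng.standard_normal(2 * 256)
feats = mfcc_features(trial)
print(feats.shape)  # -> (18, 12): 18 frames x 12 coefficients
```

The resulting per-frame coefficient matrix (here 18 frames by 12 coefficients) would then be flattened or summarized into a fixed-length vector before being passed to a decision tree or SVM classifier.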
Authors
-
Ciaran Cooney
(Ulster University)
-
Rafaella Folli
(Ulster University)
-
Damien Coyle
(Ulster University)
Topic Areas
Digital Signal Processing, AI and Machine Learning, Data Analytics
Session
Th1a » Machine Learning (10:30 - Thursday, 21st June, 02.014 (Ashby))