Novel developments of brain-computer interfaces for smart home control

  1. Laport Lopez, Francisco
Supervised by:
  1. Adriana Dapena (Supervisor)
  2. Paula-María Castro-Castro (Supervisor)

Defending university: Universidade da Coruña

Defense date: 17 January 2022

Examination committee:
  1. Begoña García-Zapirain (Chair)
  2. Francisco Javier Vázquez Araujo (Secretary)
  3. Sérgio Ivan Fernandes Lopes (Member)

Type: Thesis

Teseo: 703575

Abstract

In this work, we analyze and develop new Human-Machine Interfaces (HMIs) that can help people suffering from motor or neurological damage to communicate and interact with their environment and everyday devices through a smart-home system. A smart home can be defined as a residence equipped with a communication network and with connected sensors, devices, and appliances that residents can control, access, and monitor remotely to satisfy their daily-life needs. It is a context-aware system that, using technologies such as the Internet of Things (IoT) and artificial-intelligence algorithms, can sense, anticipate, and respond to activities in the home. HMIs offer new communication channels for interacting with external devices, where the electrical signals coming from different parts of the human body, such as the brain, the muscles, or the eyes, are used as control commands. The integration of these interfaces into a smart-home system therefore yields an extremely useful technology for patients with severe motor disabilities or serious limb injuries, since it allows them to interact with their environment or control damaged parts of their bodies without large physical movements.

A wide variety of biological signals can be used to interact with HMIs; the choice among them depends on the objective of the interface and on the motor skills that the user retains. The three most common types of biological signals are muscular, ocular, and brain signals. This thesis focuses mainly on the analysis and development of interfaces based on brain signals, also known as Brain-Computer Interfaces (BCIs) or Brain-Machine Interfaces (BMIs). This kind of interface captures the biological signals produced by the neural activity of the user's brain through non-invasive techniques such as Electroencephalography (EEG) and translates them into commands for external devices.
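The final stage of such a pipeline, turning decoded user states into device commands, can be sketched as follows. This is a hypothetical illustration, not the thesis's implementation: the state names, the `STATE_TO_COMMAND` mapping, and the majority-vote debouncing (acting only when several consecutive decisions agree, to smooth out single misclassified windows) are all assumptions.

```python
from collections import deque

# Hypothetical mapping from decoded user states to smart-home commands;
# the concrete device names and actions are illustrative only.
STATE_TO_COMMAND = {"closed_eyes": ("lamp", "toggle")}

def make_dispatcher(votes=5):
    """Issue a command only once `votes` consecutive decisions agree,
    so a single misclassified window cannot trigger a device."""
    recent = deque(maxlen=votes)

    def dispatch(state):
        recent.append(state)
        if len(recent) == votes and len(set(recent)) == 1:
            command = STATE_TO_COMMAND.get(state)
            recent.clear()  # reset so the same state must be re-confirmed
            return command
        return None

    return dispatch

dispatch = make_dispatcher(votes=3)
stream = ["open_eyes", "closed_eyes", "closed_eyes", "closed_eyes", "open_eyes"]
issued = [dispatch(s) for s in stream]
print(issued)  # → [None, None, None, ('lamp', 'toggle'), None]
```

Only the third consecutive `"closed_eyes"` decision produces a command; isolated decisions are ignored.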
Thus, BCIs offer a new communication channel with the environment that requires no muscle movement, only the user's brain activity. During the last decades, several works have proposed BCI interfaces that achieve accurate, good-quality results. However, most of them rely on expensive clinical devices and numerous electrodes to capture the brain signals, which can be uncomfortable for users and makes the interfaces difficult to use; in our work we therefore focus on developing low-cost BCI systems that employ open devices with a small number of electrodes.

As a first approach, we propose an architecture for integrating a BCI application into an IoT environment for home automation. For this purpose, we develop a low-cost, open-source EEG device that acquires EEG signals from two input channels. Using the EEG data of only one channel, we propose and compare different feature extraction algorithms based on Sliding Transform (ST) techniques for determining the user's eye state, i.e., open eyes (oE) or closed eyes (cE). Firstly, we compare real-valued and complex-valued transforms. Secondly, we consider sliding windows with overlap instead of the traditional approach with non-overlapped windows, thus reducing the delay between acquisition and decision. The algorithms include different configurations oriented to providing accurate results with a low computational burden. We then extend the study to the situation where the data captured by both channels are jointly employed to build a two-dimensional feature set that may yield higher accuracy and more robust results. To this end, we compare the previously analyzed feature extraction techniques with other strategies commonly used in BCI applications, such as the Discrete Wavelet Transform (DWT). We also compare three classification techniques: one based on a threshold classifier and two others that implement the well-known Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) algorithms.
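A minimal sketch of this pipeline is shown below: overlapped sliding-window spectral features from one channel, classified by a threshold, LDA, and an SVM. It relies on the well-known effect that closing the eyes increases alpha-band (8–12 Hz) power in the EEG; the sampling rate, window sizes, synthetic data, and the use of plain FFT band power (rather than the thesis's specific ST configurations) are all illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

FS = 250           # sampling rate in Hz (illustrative, not the thesis's spec)
WIN = FS           # 1-second analysis window
HOP = FS // 4      # 75% overlap: a decision every 0.25 s instead of every 1 s

def sliding_band_power(signal, lo=8.0, hi=12.0):
    """Alpha-band power per overlapped sliding window (FFT-based feature)."""
    freqs = np.fft.rfftfreq(WIN, d=1.0 / FS)
    band = (freqs >= lo) & (freqs <= hi)
    feats = []
    for start in range(0, len(signal) - WIN + 1, HOP):
        spectrum = np.fft.rfft(signal[start:start + WIN])
        feats.append(np.sum(np.abs(spectrum[band]) ** 2))
    return np.array(feats).reshape(-1, 1)

# Synthetic single-channel EEG: closed eyes add strong 10 Hz (alpha) activity.
rng = np.random.default_rng(0)
t = np.arange(10 * FS) / FS
open_eyes = rng.normal(0.0, 1.0, t.size)
closed_eyes = rng.normal(0.0, 1.0, t.size) + 3.0 * np.sin(2 * np.pi * 10 * t)

X = np.vstack([sliding_band_power(open_eyes), sliding_band_power(closed_eyes)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

# Threshold classifier: midpoint between the two class means of the feature.
thr = (X[y == 0].mean() + X[y == 1].mean()) / 2
acc_thr = np.mean((X[:, 0] > thr).astype(int) == y)

acc_lda = LinearDiscriminantAnalysis().fit(X, y).score(X, y)
acc_svm = SVC(kernel="linear").fit(X, y).score(X, y)
print(f"threshold={acc_thr:.2f}  LDA={acc_lda:.2f}  SVM={acc_svm:.2f}")
```

On this synthetic data all three classifiers separate the states easily; the thesis's contribution lies in the choice of transforms and configurations that keep such accuracy at a low computational cost on real signals.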
As a second approach, we propose an HMI for environmental control using a single-channel recording system. The P300 potential and eye blinks are compared as control signals to determine which one offers the best performance in terms of accuracy and response time. The home elements to be controlled are displayed in matrix form in a Graphical User Interface (GUI) and presented to users under two different stimulation paradigms: 1) home elements are intensified one by one, or 2) all the elements of the same row/column are intensified jointly. Both interfaces, P300-based and blink-based, employ only one input channel of the same EEG device to capture the user's brain/eye activity. The GUI is fully configurable, so it can be adapted to the user's needs.
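The two stimulation paradigms differ mainly in how many flashes a selection round requires. The sketch below illustrates this for a generic R×C matrix; the matrix size, randomized flash order, and function names are illustrative assumptions, not the thesis's GUI implementation.

```python
import random

def single_element_flashes(rows, cols, repetitions=1, seed=0):
    """Paradigm 1: intensify each home element one by one, in random order."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    sequence = []
    for _ in range(repetitions):
        rng.shuffle(cells)
        sequence.extend({cell} for cell in cells)  # one element per flash
    return sequence

def row_column_flashes(rows, cols, repetitions=1, seed=0):
    """Paradigm 2: intensify a whole row or a whole column at once."""
    rng = random.Random(seed)
    groups = [{(r, c) for c in range(cols)} for r in range(rows)]   # rows
    groups += [{(r, c) for r in range(rows)} for c in range(cols)]  # columns
    sequence = []
    for _ in range(repetitions):
        rng.shuffle(groups)
        sequence.extend(groups)
    return sequence

# For a 3x4 matrix: 12 flashes per round one by one, but only 3 + 4 = 7
# flashes with the row/column paradigm. The selected element is recovered
# as the intersection of the row flash and column flash that evoke a P300.
print(len(single_element_flashes(3, 4)), len(row_column_flashes(3, 4)))  # → 12 7
```

The row/column paradigm scales as R + C flashes per round instead of R × C, shortening selection time, at the cost of each flash being less specific.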