Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control


2019, Doctor of Philosophy, University of Toledo, Engineering.
Wearable electronic equipment is continuously evolving and deepening human-machine integration. While some industrialists (notably Elon Musk) want to integrate a microchip into the human brain to leverage the faster processing capabilities of machines, others have been trying to build human-like machines. Available in various forms, wearable sensors can detect and measure physiological changes in the human body and use those signals to control other devices. One such sensor, the electromyographic (EMG) sensor, captures myoelectric signals (electric signals in muscles) and translates them into input signals through pre-defined gestures. Using such a sensor in a multimodal environment not only increases the types of work that can be accomplished with the device but also helps improve the accuracy of the tasks performed. This research addresses the fusion of input modalities, namely speech and myoelectric signals captured through a microphone and an EMG sensor, respectively, to accurately control a robotic arm. The research was completed in three phases. During phase 1, an extensive survey of technologies based on the multimodal environment was conducted. The goal was to find the pros and cons of each application and its utility. The classification was broadly divided into unimodal and multimodal systems; multimodal systems were further classified based on the fusion of input modalities. Phase 1 results reaffirmed our expectation that EMG data combined with speech has not been used in many multimodal systems and, where it has been used, has not resulted in a fusion accurate enough for real-world application. Phase 2 involved performing experimental research using EMG data (collected with the EMG sensor) together with speech (collected through a speech recognition API).
The findings show that there is scope for improvement in accuracy for both modalities when the EMG and speech data are collected under laboratory conditions; the error percentage for the individual modalities varies from 8.9% to 34.1%. A decision-based fusion was performed, which led to the conclusion that multimodality improves the accuracy of operating the robotic arm, reducing the error rate to 3.5-7.5%. The last phase dealt with improving the results achieved during phase 2 using machine learning techniques and analyzing the most suitable strategy for controlling a robotic arm. Six machine learning algorithms were tested. After the training data was provided with sixty error conditions and tested again on newly developed cases, the highest accuracy, approximately 92%, was achieved by the k-nearest neighbors (KNN) algorithm. Building upon phase 2, phase 3 concluded that using a machine learning algorithm helps in cases where input is misinterpreted, reducing the error drastically.
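The abstract does not include implementation details, but the decision-level fusion with KNN classification it describes can be illustrated with a minimal sketch. The feature vectors, labels, and confidence values below are entirely hypothetical (the dissertation's actual features and gesture set are not given here); the sketch only shows the general technique of majority-vote KNN over fused per-modality scores.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance).
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical fused features: (speech confidence, EMG confidence)
train = [
    ((0.90, 0.80), "grab"), ((0.85, 0.90), "grab"),
    ((0.50, 0.90), "grab"), ((0.20, 0.30), "release"),
    ((0.10, 0.25), "release"), ((0.30, 0.10), "release"),
]
print(knn_predict(train, (0.80, 0.85), k=3))  # → grab
```

In a setup like the one described, a misrecognized word or a noisy EMG gesture yields a low confidence in one modality, and the neighborhood vote over the fused feature vector can still recover the intended command.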
Ahmad Javaid (Committee Chair)
Mansoor Alam (Committee Co-Chair)
Weiqing Sun (Committee Member)
Devinder Kaur (Committee Member)
Quamar Niyaz (Committee Member)
Jared Oulouch (Committee Member)
155 p.

Recommended Citations


  • Khan Mohd, T. (2019). Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control [Doctoral dissertation, University of Toledo]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=toledo156440368925597

    APA Style (7th edition)

  • Khan Mohd, Tauheed. Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control. 2019. University of Toledo, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=toledo156440368925597.

    MLA Style (8th edition)

  • Khan Mohd, Tauheed. "Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control." Doctoral dissertation, University of Toledo, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo156440368925597

    Chicago Manual of Style (17th edition)