Reinforcement Learning and Feedback Control for High-Level Upper-Extremity Neuroprostheses

Jagodnik, Kathleen M


2014, Doctor of Philosophy, Case Western Reserve University, Biomedical Engineering.
High-level spinal cord injury causes paralysis below the level of the neck. Functional Electrical Stimulation (FES) is a technology that restores voluntary movement via application of electrical current to nerves and muscles. Our work aims to restore movement in the paralyzed upper limb. When implementing FES systems, effective controllers are needed to translate the current and desired arm positions into a pattern of muscle stimulations that achieve the target position accurately and efficiently. Although a range of upper-extremity neuroprosthesis controllers exist, none is capable of restoring accurate, natural arm movement in a clinical setting. For the purpose of advancing upper-extremity FES control technology, we explore reinforcement learning (RL), a control strategy that uses delayed reward and a trial-and-error search to develop its action policy. A potential advantage of RL control for upper-extremity FES systems is that human user preferences can be incorporated into controller training through the use of user-generated rewards of the controller actions. To date, RL control has been minimally explored for FES systems, and human rewards have never been incorporated for this application. An RL controller was implemented for a planar 2 degree of freedom biomechanical arm model, and this project explored the feasibility of using human rewards to train the RL controller. Simulation experiments were performed using pseudo-human, computer generated rewards that simulate the rewards that a human would be likely to assign. A range of experiments was performed to examine the learning properties of RL control using human-like rewards, and it was determined that RL controller learning occurs over a measurable time frame. Subsequently, human rewards were introduced to train the RL controller. Ten human subjects viewed animations of arm reaching movements, and assigned rewards to train the RL controller based on the quality of each movement. 
The RL controllers trained by humans learned well, although pseudo-human reward training was found to be equally effective. We discuss the potential benefits of using pseudo-human rewards for initial RL controller training, with subsequent fine-tuning training using human rewards. Reinforcement learning is a promising control strategy to restore natural arm movement to individuals with high-level paralysis.
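The core idea above — a controller improved by trial-and-error search guided only by scalar, human-style ratings of each movement — can be illustrated with a toy sketch. Everything here is a hypothetical simplification for illustration: the two-number "reach" stand-in, the 0–10 rating scale, and the accept-if-not-worse search are placeholders, not the dissertation's musculoskeletal arm model, its stimulation patterns, or its actual RL algorithm.

```python
import random

def pseudo_human_reward(error):
    # Quantized rating a human rater might assign (0 = poor, 10 = excellent).
    # Hypothetical stand-in for the "pseudo-human, computer-generated rewards"
    # described in the abstract.
    return max(0, 10 - int(error * 10))

def simulate_reach(gains, target):
    # Trivial planar stand-in for the 2-DOF arm model: each gain directly
    # sets one coordinate of the endpoint; returns distance to the target.
    dx, dy = gains[0] - target[0], gains[1] - target[1]
    return (dx * dx + dy * dy) ** 0.5

def train(target, episodes=2000, step=0.05, seed=0):
    # Trial-and-error search: perturb the controller parameters, keep any
    # trial the (pseudo-human) rater scores at least as well as the best so far.
    rng = random.Random(seed)
    gains = [0.0, 0.0]
    best_r = pseudo_human_reward(simulate_reach(gains, target))
    for _ in range(episodes):
        trial = [g + rng.uniform(-step, step) for g in gains]
        r = pseudo_human_reward(simulate_reach(trial, target))
        if r >= best_r:
            gains, best_r = trial, r
    return gains

target = (0.6, 0.3)
gains = train(target)
print(simulate_reach(gains, target))  # final endpoint error after training
```

The coarse 0–10 rating deliberately discards information, mirroring the thesis's observation that learning from sparse human-style rewards still proceeds, just over a measurable time frame.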
Robert Kirsch, Ph.D. (Advisor)
Antonie van den Bogert, Ph.D. (Committee Member)
Dawn Taylor, Ph.D. (Committee Member)
Kenneth Gustafson, Ph.D. (Committee Member)
422 p.

Recommended Citations

APA Style (7th edition)

  • Jagodnik, K. M. (2014). Reinforcement Learning and Feedback Control for High-Level Upper-Extremity Neuroprostheses [Doctoral dissertation, Case Western Reserve University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=case1395789620

MLA Style (8th edition)

  • Jagodnik, Kathleen. Reinforcement Learning and Feedback Control for High-Level Upper-Extremity Neuroprostheses. 2014. Case Western Reserve University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=case1395789620.

Chicago Manual of Style (17th edition)

  • Jagodnik, Kathleen. "Reinforcement Learning and Feedback Control for High-Level Upper-Extremity Neuroprostheses." Doctoral dissertation, Case Western Reserve University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1395789620