Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control

Skelly, Margaret Mary

Abstract Details

2004, Doctor of Philosophy, Case Western Reserve University, Systems and Control Engineering.
This dissertation investigates, through empirical studies, the incorporation of function approximation and hierarchy into reinforcement learning for use in an adaptive control setting. Reinforcement learning is an artificial intelligence technique whereby an agent discovers which actions lead to optimal task performance through interaction with its environment. Although reinforcement learning is usually employed to find optimal solutions in unchanging environments, a reinforcement learning agent can be modified to continually explore and adapt in a dynamic environment, carrying out a form of direct adaptive control. In this setting, the agent must learn and adapt quickly enough to compensate for the dynamics of the environment. Since reinforcement learning is known to converge slowly to optimality even in stationary environments, abstraction and changes in task representation are examined as means to accelerate learning.

Various levels of abstraction and task representations are incorporated into reinforcement learning agents through function approximation and hierarchical task decomposition, and the effectiveness of this approach is tested in simulations of representative reinforcement learning tasks. Comparing the learning and adaptation times across differing levels of abstraction and competing task representations provides insight into how well these techniques accelerate learning and adaptation. The level of abstraction is examined in experiments where the agent uses function approximation to store its learned information. The chosen function approximation method offers local generalization, which allows a controlled diffusion of information throughout the task space. As a consequence, the experiments with function approximation demonstrate how greater levels of abstraction, as determined by the amount of information diffusion, can accelerate learning in tasks where similar states call for similar actions.

Hierarchical task decomposition represents a task as a set of related subtasks, introducing a modularity not possible in a monolithic representation. One effect of this modularity is to contain certain environment changes within the smaller space of a single subtask. The experiments comparing hierarchical and monolithic representations of a task therefore demonstrate that the hierarchical representation can accelerate adaptation in response to certain isolated environment changes.
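The abstract's first theme, local generalization via function approximation, can be illustrated with a minimal sketch. This is not the dissertation's code: it is a hypothetical Q-learning agent on a one-dimensional corridor whose values are stored in overlapping coarse-coded tilings (in the spirit of CMAC/tile coding), so that an update to one state partially updates its neighbors, producing the "controlled diffusion of information" the abstract describes.

```python
import random
from collections import defaultdict

# Hypothetical sketch: Q-learning on a corridor of N_STATES positions with the
# reward at the right end. Values live in N_TILINGS overlapping tilings, so
# neighboring states share tiles and generalize locally.

N_STATES = 20          # positions 0..19; reward at position 19
ACTIONS = (-1, +1)     # step left / step right
N_TILINGS = 4          # more tilings -> wider diffusion of information
TILE_WIDTH = 4         # states covered by one tile

def features(state):
    """Indices of the active tile in each offset tiling."""
    return [(t, (state + t) // TILE_WIDTH) for t in range(N_TILINGS)]

def q_value(weights, state, action):
    return sum(weights[(f, action)] for f in features(state))

def q_learning(episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    weights = defaultdict(float)
    for _ in range(episodes):
        state = 0
        while state < N_STATES - 1:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q_value(weights, state, a))
            nxt = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            target = reward + gamma * max(q_value(weights, nxt, a) for a in ACTIONS)
            td_error = target - q_value(weights, state, action)
            # Spread the update across all active tiles: local generalization.
            for f in features(state):
                weights[(f, action)] += (alpha / N_TILINGS) * td_error
            state = nxt
    return weights
```

Because states within a tile width of each other share weights, this representation helps exactly when similar states call for similar actions, matching the abstract's observation; a finer tile width trades that speedup for precision.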
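The second theme, modularity through hierarchical task decomposition, can be sketched the same way. Again this is a hypothetical example, not the dissertation's experiments: a two-room corridor is split into subtasks "reach the doorway" and "reach the goal", each with its own small tabular Q-learner, so a change confined to the second room only forces the second subtask's table to be relearned while the first is reused unchanged.

```python
import random
from collections import defaultdict

random.seed(0)  # deterministic for illustration

def train_subtask(start, goal, lo, hi, episodes=300, alpha=0.2, gamma=0.95, eps=0.1):
    """Tabular Q-learning restricted to states lo..hi, terminating at `goal`."""
    q = defaultdict(float)
    actions = (-1, +1)
    for _ in range(episodes):
        s = start
        while s != goal:
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda b: q[(s, b)])
            nxt = min(max(s + a, lo), hi)
            r = 1.0 if nxt == goal else 0.0
            best_next = 0.0 if nxt == goal else max(q[(nxt, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

def greedy_path(q, start, goal, lo, hi, limit=100):
    s, path = start, [start]
    while s != goal and len(path) < limit:
        a = max((-1, +1), key=lambda b: q[(s, b)])
        s = min(max(s + a, lo), hi)
        path.append(s)
    return path

# Room 1: states 0..10, doorway at 10. Room 2: states 10..20, goal at 20.
q_door = train_subtask(start=0, goal=10, lo=0, hi=10)
q_goal = train_subtask(start=10, goal=20, lo=10, hi=20)

# The root policy simply sequences the two subtasks.
route = greedy_path(q_door, 0, 10, 0, 10) + greedy_path(q_goal, 10, 20, 10, 20)[1:]

# If the goal moves within room 2, only q_goal must be relearned; q_door is reused.
```

This mirrors the abstract's claim: the hierarchy contains an isolated environment change within one subtask's smaller state space, so adaptation only has to pay for relearning that subtask rather than the whole monolithic task.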
Michael Branicky (Advisor)
261 p.

Recommended Citations


  • Skelly, M. M. (2004). Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control [Doctoral dissertation, Case Western Reserve University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=case1081357818

    APA Style (7th edition)

  • Skelly, Margaret Mary. Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control. 2004. Case Western Reserve University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=case1081357818.

    MLA Style (8th edition)

  • Skelly, Margaret Mary. "Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control." PhD diss., Case Western Reserve University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081357818

    Chicago Manual of Style (17th edition)