Learning reward machines: A study in partially observable reinforcement learning 
School authors:
  • Margarita Castro
  • Rodrigo Andres Toro
External authors:
  • Toryn Q. Klassen (University of Toronto, Vector Institute for Artificial Intelligence)
  • Richard Valenzano (Toronto Metropolitan University)
  • Ethan Waldie (University of Toronto)
  • Sheila A. McIlraith (University of Toronto, Vector Institute for Artificial Intelligence)
Abstract:

Reinforcement Learning (RL) is a machine learning paradigm wherein an artificial agent interacts with an environment with the purpose of learning behaviour that maximizes the expected cumulative reward it receives from the environment. Reward machines (RMs) provide a structured, automata-based representation of a reward function that enables an RL agent to decompose an RL problem into structured subproblems that can be efficiently learned via off-policy learning. Here we show that RMs can be learned from experience, instead of being specified by the user, and that the resulting problem decomposition can be used to effectively solve partially observable RL problems. We pose the task of learning RMs as a discrete optimization problem where the objective is to find an RM that decomposes the problem into a set of subproblems such that the combination of their optimal memoryless policies is an optimal policy for the original problem. We show the effectiveness of this approach on three partially observable domains, where it significantly outperforms A3C, PPO, and ACER, and discuss its advantages, limitations, and broader potential. © 2023 Elsevier B.V. All rights reserved.
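To make the abstract's central object concrete, here is a minimal sketch of a reward machine: a finite-state machine whose transitions are triggered by high-level events observed by the agent and which emits a reward on each transition. All names (states `u0`/`u1`/`u2`, the coffee-delivery events) are illustrative assumptions, not the paper's implementation.

```python
class RewardMachine:
    """A finite-state reward machine: transitions on high-level events,
    emitting a scalar reward at each transition."""

    def __init__(self, transitions, initial_state):
        # transitions: {(rm_state, event): (next_rm_state, reward)}
        self.transitions = transitions
        self.initial = initial_state
        self.state = initial_state

    def step(self, event):
        """Advance the RM on an observed event; return the emitted reward."""
        if (self.state, event) in self.transitions:
            self.state, reward = self.transitions[(self.state, event)]
            return reward
        return 0.0  # no matching transition: stay in the current RM state

    def reset(self):
        self.state = self.initial

# Hypothetical "get coffee, then deliver it to the office" task:
rm = RewardMachine(
    transitions={
        ("u0", "coffee"): ("u1", 0.0),  # picked up coffee
        ("u1", "office"): ("u2", 1.0),  # delivered it: reward 1
    },
    initial_state="u0",
)

rewards = [rm.step(e) for e in ["office", "coffee", "office"]]
print(rm.state, rewards)  # → u2 [0.0, 0.0, 1.0]
```

Because the RM state (`u0`, `u1`, `u2`) summarizes the relevant history, an agent conditioning a memoryless policy on it can act optimally even when the raw observations alone are partially observable, which is the decomposition the paper exploits.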

UT WOS:001062209400001
Number of Citations 1
Volume 323
Month of Publication OCT
Year of Publication 2023
DOI https://doi.org/10.1016/j.artint.2023.103989