Electrical Engineering Systems Seminar
I will describe a reinforcement learning agent that, given only a specification of agent state dynamics and a reward function, can operate with some degree of competence in any environment. The agent applies an optimistic version of Q-learning to update value predictions based on the agent's actions and agent states. We establish a regret bound demonstrating convergence to near-optimal per-period performance, where the time required is polynomial in the number of actions and agent states, as well as the reward mixing time of the best policy among those for which actions depend on history only through the agent state. Notably, there is no further dependence on the number of environment states or on mixing times associated with other policies or statistics of history.
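
To make the setup concrete, below is a minimal sketch of optimistic Q-learning over agent states, purely for illustration. It assumes a tabular agent-state space, a discounted update in place of the talk's average-reward regret formulation, a simple count-based optimism bonus, and hypothetical `env`, `agent_state_update`, and `reward_fn` interfaces; none of these details come from the abstract, and the speaker's actual algorithm and step-size schedule may differ.

```python
import numpy as np

# Illustrative sketch only: optimistic Q-learning over agent states.
# Assumptions not in the abstract: a tabular agent-state index space,
# a discounted update, a count-based optimism bonus, and hypothetical
# env.reset()/env.step(a) and agent_state_update(s, a, obs) interfaces.

def optimistic_q_learning(env, agent_state_update, reward_fn,
                          n_agent_states, n_actions,
                          episodes=1000, horizon=200,
                          gamma=0.99, bonus=0.1):
    # Optimistic initialization: start all value estimates at an upper bound,
    # which drives the agent to try under-visited (agent state, action) pairs.
    q = np.full((n_agent_states, n_actions), 1.0 / (1.0 - gamma))
    counts = np.zeros((n_agent_states, n_actions))

    for _ in range(episodes):
        obs = env.reset()
        s = agent_state_update(None, None, obs)      # initial agent state
        for _ in range(horizon):
            a = int(np.argmax(q[s]))                 # greedy w.r.t. optimistic values
            obs, done = env.step(a)                  # assumed environment interface
            r = reward_fn(s, a, obs)                 # reward function supplied to the agent
            s_next = agent_state_update(s, a, obs)   # agent state dynamics

            # Count-based step size and optimism bonus (one common choice;
            # the exact schedule used in the talk is not specified here).
            counts[s, a] += 1
            alpha = 1.0 / np.sqrt(counts[s, a])
            target = r + bonus / np.sqrt(counts[s, a]) + gamma * np.max(q[s_next])
            q[s, a] += alpha * (target - q[s, a])

            s = s_next
            if done:
                break
    return q
```

The key point the sketch is meant to reflect is that the agent's learning update touches only the agent state and action, so its complexity scales with those quantities rather than with the number of environment states.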