Learning to Schedule Control Fragments for Physics-Based Characters Using Deep Q-Learning
Libin Liu, Jessica Hodgins
ACM Transactions on Graphics (2017)

Given a robust control system, physical simulation offers the potential for interactive human characters that move in realistic and responsive ways. In this article, we describe how to learn a scheduling scheme that reorders short control fragments as necessary at runtime, creating a control system that can respond to disturbances and allow steering and other user interactions. These schedulers provide robust control of a wide range of highly dynamic behaviors, including walking on a ball, balancing on a bongo board, skateboarding, running, push-recovery, and breakdancing. We show that moderate-sized Q-networks can model the schedulers for these control tasks effectively and that those schedulers can be efficiently learned by the deep Q-learning algorithm.
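At a high level, the scheduler described in the abstract can be viewed as a Q-network that maps the simulated character's state to a score for each available control fragment and then executes the highest-scoring fragment next. The sketch below is not the authors' implementation: the state dimension, fragment count, network architecture, and epsilon value are illustrative assumptions, written here with PyTorch.

```python
# Illustrative sketch only: a small Q-network that scores candidate control
# fragments given the character's simulated state, plus an epsilon-greedy
# scheduler that picks the next fragment. All dimensions and hyperparameters
# are assumptions, not values from the paper.
import random
import torch
import torch.nn as nn

STATE_DIM = 60        # assumed size of the character state feature vector
NUM_FRAGMENTS = 8     # assumed number of short control fragments to schedule


class FragmentQNetwork(nn.Module):
    """Moderate-sized MLP: state features -> one Q-value per control fragment."""

    def __init__(self, state_dim: int, num_fragments: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_fragments),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def schedule_next_fragment(q_net: FragmentQNetwork,
                           state: torch.Tensor,
                           epsilon: float = 0.05) -> int:
    """Epsilon-greedy selection of the control fragment to run next."""
    if random.random() < epsilon:
        return random.randrange(NUM_FRAGMENTS)    # explore: random fragment
    with torch.no_grad():
        q_values = q_net(state.unsqueeze(0))      # shape: (1, NUM_FRAGMENTS)
    return int(q_values.argmax(dim=1).item())     # exploit: highest-scoring fragment


if __name__ == "__main__":
    q_net = FragmentQNetwork(STATE_DIM, NUM_FRAGMENTS)
    dummy_state = torch.zeros(STATE_DIM)          # placeholder for a simulated character state
    print("next fragment index:", schedule_next_fragment(q_net, dummy_state))
```

In a full training loop, the chosen fragment would be simulated to completion, the resulting state and reward recorded, and the Q-network updated with a standard deep Q-learning target; the code above only shows the runtime selection step.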

Libin Liu, Jessica Hodgins (2017). Learning to Schedule Control Fragments for Physics-Based Characters Using Deep Q-Learning. ACM Transactions on Graphics, 36(3).

@article{Hodgins:2017:DOE,
  author  = {Libin Liu and Jessica Hodgins},
  title   = {Learning to Schedule Control Fragments for Physics-Based Characters Using Deep Q-Learning},
  journal = {ACM Transactions on Graphics},
  volume  = {36},
  number  = {3},
  year    = {2017},
}