Neural Reinforcement Learning Controllers for a Real Robot Application
- 1 April 2007
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- No. 10504729, pp. 2098-2103
- https://doi.org/10.1109/robot.2007.363631
Abstract
Accurate and fast control of wheel speeds in the presence of noise and nonlinearities is one of the crucial requirements for building fast mobile robots, such as those required in the Middle Size League of RoboCup. We describe how highly effective speed controllers can be learned from scratch directly on the real robot. Our recently developed neural fitted Q iteration scheme allows reinforcement learning of neural controllers from only a limited amount of training data. In the described application, less than 5 minutes of interaction with the real robot were sufficient to learn fast and accurate control to arbitrary target speeds.
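To illustrate the batch-style learning loop the abstract refers to, below is a minimal sketch of Neural Fitted Q Iteration in Python. It assumes a small discrete action set and externally collected transitions (state, action, cost, next state); the function names, network size, and hyper-parameters are illustrative assumptions, not the paper's exact setup.

```python
# Minimal NFQ sketch (assumption: cost-minimization formulation, discrete actions).
import numpy as np
from sklearn.neural_network import MLPRegressor

def nfq(transitions, actions, gamma=0.95, iterations=20):
    """transitions: list of (state, action, cost, next_state) tuples,
    with states as 1-D numpy arrays and actions drawn from `actions`."""
    s  = np.array([t[0] for t in transitions])
    a  = np.array([[t[1]] for t in transitions])
    c  = np.array([t[2] for t in transitions])
    s2 = np.array([t[3] for t in transitions])

    X = np.hstack([s, a])                      # Q-network input: (state, action)
    q = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000)
    q.fit(X, c)                                # initial fit on immediate costs

    for _ in range(iterations):
        # Bellman targets: immediate cost plus discounted best (minimum-cost) successor value
        next_q = np.column_stack([
            q.predict(np.hstack([s2, np.full((len(s2), 1), act)]))
            for act in actions
        ])
        targets = c + gamma * next_q.min(axis=1)
        q.fit(X, targets)                      # batch re-fit on the whole pattern set
    return q

def greedy_action(q, state, actions):
    # Pick the action with the lowest predicted cost in the given state.
    qs = [q.predict(np.hstack([state, [act]]).reshape(1, -1))[0] for act in actions]
    return actions[int(np.argmin(qs))]
```

The key point, as in the paper's claim of data efficiency, is that each iteration re-trains the network on the entire stored transition set rather than on a stream of single samples, so a few minutes of robot interaction can be reused many times.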