What you should know about approximate dynamic programming
DOI: 10.1002/nav.20347
Copyright © 2009 Wiley Periodicals, Inc.
How to Cite
Powell, W. B. (2009), What you should know about approximate dynamic programming. Naval Research Logistics, 56: 239–249. doi: 10.1002/nav.20347
Publication History
- Issue published online: 15 MAR 2009
- Article first published online: 24 FEB 2009
- Manuscript Received: 17 DEC 2008
- Manuscript Accepted: 17 DEC 2008
Funded by
- Air Force Office of Scientific Research (AFOSR). Grant Number: F49620-93-1-0098
Keywords:
- approximate dynamic programming;
- reinforcement learning;
- neuro-dynamic programming;
- stochastic optimization;
- Monte Carlo simulation
Abstract
Approximate dynamic programming (ADP) is a broad umbrella for a modeling and algorithmic strategy for solving problems that are sometimes large and complex, and are usually (but not always) stochastic. It is most often presented as a method for overcoming the classic curse of dimensionality that is well known to plague the use of Bellman's equation. For many problems, there are actually up to three curses of dimensionality: the state space, the outcome space, and the action space. But the richer message of approximate dynamic programming is learning what to learn, and how to learn it, in order to make better decisions over time. This article provides a brief review of approximate dynamic programming without intending to be a complete tutorial. Instead, our goal is to give a broader perspective on ADP and how it should be approached for different problem classes. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009