Stochastic Optimal Control Formulations of Decision Problems
Published Online: 15 JUN 2010
Copyright © 2010 John Wiley & Sons, Inc. All rights reserved.
Wiley Encyclopedia of Operations Research and Management Science
How to Cite
Srivastava, K. and Stipanović, D. M. 2010. Stochastic Optimal Control Formulations of Decision Problems. Wiley Encyclopedia of Operations Research and Management Science.
In this article, we give a brief overview of the different stochastic optimal control problems that arise in decision problems. An informal derivation of the Hamilton–Jacobi–Bellman (HJB) equation, which characterizes the optimal policy, is presented for the fully observed, infinite-horizon optimal control problem. We then discuss the optimal control problem in which the horizon is governed by a random stopping time. An example illustrating the application of the theory to a portfolio selection problem is provided. After a brief discussion of ergodic and risk-sensitive control formulations, we turn to the case in which the state is only partially observed. We then state the key separation result for this case and provide an intuitive explanation.
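For orientation, a standard form of the HJB equation for the infinite-horizon discounted problem mentioned above can be sketched as follows; the particular dynamics, running cost $c$, and discount rate $\alpha$ below are illustrative conventions, not notation taken from the article body:

```latex
% Controlled diffusion (illustrative notation):
%   dX_t = b(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t
% Discounted cost to be minimized over admissible controls u:
%   J(x, u) = \mathbb{E}_x \left[ \int_0^\infty e^{-\alpha t}\, c(X_t, u_t)\, dt \right]
% With value function V(x) = \inf_u J(x, u), the HJB equation reads
\alpha V(x) = \min_{u \in U} \left\{ c(x, u)
  + b(x, u)^{\top} \nabla V(x)
  + \tfrac{1}{2} \operatorname{tr}\!\left( \sigma(x, u)\, \sigma(x, u)^{\top} \nabla^2 V(x) \right) \right\}
```

A control achieving the minimum on the right-hand side at each state $x$ yields a stationary Markov policy, which is the sense in which the HJB equation characterizes the optimal policy.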
- stochastic optimal control formulation;
- HJB equation;
- stationary Markov control;
- separation principle;
- ergodic control