Standard Article

Stochastic Optimal Control Formulations of Decision Problems

  1. Kunal Srivastava,
  2. Dušan M. Stipanović

Published Online: 15 JUN 2010

DOI: 10.1002/9780470400531.eorms0838

Wiley Encyclopedia of Operations Research and Management Science

How to Cite

Srivastava, K. and Stipanović, D. M. (2010). Stochastic Optimal Control Formulations of Decision Problems. Wiley Encyclopedia of Operations Research and Management Science.

Author Information

  1. University of Illinois at Urbana-Champaign, Department of Industrial and Enterprise Systems Engineering, Urbana, Illinois

Abstract

In this article, we give a brief overview of different stochastic optimal control problems that arise in decision problems. An informal derivation of the Hamilton–Jacobi–Bellman (HJB) equation, which characterizes the optimal policy, is given for the full-state, infinite-horizon optimal control problem. We then discuss the optimal control problem in which the horizon is governed by a random stopping time, and provide an example illustrating the application of the theory to a portfolio selection problem. After a brief discussion of ergodic and risk-sensitive control formulations, we turn to the case in which the state is only partially observed. We then state the key separation result for this case and provide an intuitive explanation.
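
For orientation, the display below sketches a representative form of the HJB equation for the full-state, infinite-horizon discounted problem mentioned above. The symbols (drift b, diffusion σ, running cost c, discount rate α, value function V, control set U) are standard illustrative assumptions, not notation taken from the article itself.

% A minimal sketch, assuming controlled diffusion dynamics
%   dX_t = b(X_t, u_t) dt + sigma(X_t, u_t) dW_t
% and discounted cost
%   J(x; u) = E[ \int_0^\infty e^{-\alpha t} c(X_t, u_t) dt | X_0 = x ].
% The value function V(x) = \inf_u J(x; u) then formally satisfies:
\[
  \alpha V(x) \;=\; \min_{u \in U} \Big[\, c(x,u)
    \;+\; b(x,u)^{\top} \nabla V(x)
    \;+\; \tfrac{1}{2}\, \operatorname{tr}\!\big( \sigma(x,u)\,\sigma(x,u)^{\top}\, \nabla^{2} V(x) \big) \Big].
\]

A control achieving the pointwise minimum on the right-hand side defines a stationary Markov policy, which is the sense in which the HJB equation characterizes the optimal policy.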

Keywords:

  • stochastic optimal control formulation;
  • HJB equation;
  • stationary Markov control;
  • separation principle;
  • ergodic control