
A least squares temporal difference actor–critic algorithm with applications to warehouse management

Authors

  • Reza Moazzez Estanjini,

    1. Division of Systems Engineering, Boston University, Boston, Massachusetts
  • Keyong Li,

    1. Center for Information & Systems Engineering, Boston University, Boston, Massachusetts
  • Ioannis Ch. Paschalidis

    Corresponding author
    1. Department of Electrical and Computer Engineering, Division of Systems Engineering, Boston University, Boston, Massachusetts 02215

Abstract

This article develops a new approximate dynamic programming (DP) algorithm for Markov decision problems and applies it to a vehicle dispatching problem arising in warehouse management. The algorithm is of the actor–critic type and uses a least squares temporal difference learning method. It operates on a sample path of the system and optimizes the policy within a prespecified class parameterized by a parsimonious set of parameters. The method is applicable to a partially observable Markov decision process setting in which the measurements of the state variables are potentially corrupted and the cost is observed only through these imperfect state observations. We show that, under reasonable assumptions, the algorithm converges to a locally optimal parameter set. We also show that the imperfect cost observations do not affect the policy and that the algorithm minimizes the true expected cost. In the warehouse application, the problem is to dispatch sensor-equipped forklifts so as to minimize operating costs stemming from product movement delays and forklift maintenance. We consider instances where standard DP is computationally intractable. Simulation results confirm the theoretical claims of the article and show that our algorithm converges more smoothly than earlier actor–critic algorithms while substantially outperforming heuristics used in practice. © 2012 Wiley Periodicals, Inc. Naval Research Logistics, 2012
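For readers unfamiliar with the general approach, the sketch below gives a rough sense of how a least squares temporal difference (LSTD) critic can be paired with a parameterized actor. It is not the authors' algorithm: the toy two-state MDP, the one-hot critic features, the softmax policy parameterization, the discount factor, and the step sizes are all assumptions introduced purely for illustration.

```python
# Minimal, illustrative actor-critic loop with an LSTD critic on a toy MDP.
# NOT the algorithm of the article; model, features, and step sizes are assumed.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
gamma = 0.95                                   # discount factor (assumed)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])       # P[s, a, s'] (assumed toy model)
cost = np.array([[1.0, 2.0],
                 [0.5, 1.5]])                  # one-step cost c(s, a) (assumed)

def phi(s):
    """Critic features: one-hot encoding of the state (an assumption)."""
    f = np.zeros(n_states)
    f[s] = 1.0
    return f

def policy(theta, s):
    """Softmax policy over actions, parameterized by theta (an assumption)."""
    prefs = theta[s]
    p = np.exp(prefs - prefs.max())
    return p / p.sum()

theta = np.zeros((n_states, n_actions))        # actor parameters
alpha = 0.01                                   # actor step size (assumed)
s = 0

for k in range(200):
    # Critic: run a short sample path under the current policy and form the
    # LSTD system A w = b, with A = sum phi(s)(phi(s) - gamma*phi(s'))^T
    # and b = sum phi(s) c(s, a).
    A = 1e-3 * np.eye(n_states)                # small ridge term keeps A invertible
    b = np.zeros(n_states)
    s_sim = s
    for _ in range(200):
        p = policy(theta, s_sim)
        a = rng.choice(n_actions, p=p)
        s_next = rng.choice(n_states, p=P[s_sim, a])
        A += np.outer(phi(s_sim), phi(s_sim) - gamma * phi(s_next))
        b += phi(s_sim) * cost[s_sim, a]
        s_sim = s_next
    w = np.linalg.solve(A, b)                  # approximate value-function weights

    # Actor: one policy-gradient step using the TD error as an advantage proxy,
    # descending the (approximate) expected cost.
    p = policy(theta, s)
    a = rng.choice(n_actions, p=p)
    s_next = rng.choice(n_states, p=P[s, a])
    td = cost[s, a] + gamma * w @ phi(s_next) - w @ phi(s)
    grad_log = -p
    grad_log[a] += 1.0                         # gradient of log pi(a|s) w.r.t. theta[s]
    theta[s] -= alpha * td * grad_log
    s = s_next

print("learned policy:", np.array([policy(theta, s) for s in range(n_states)]))
```

The critic here solves a small linear system on each pass rather than taking stochastic gradient steps, which is the usual motivation for least squares temporal difference methods: lower-variance estimates of the value-function weights at the cost of a matrix solve.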
