We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. GSG algorithms are a natural and convenient way to model learning when agents allow for parameter drift or robustness to parameter uncertainty in their beliefs. The conditions for convergence of GSG learning to a rational expectations equilibrium are distinct from, but related to, the well-known stability conditions for least squares learning.
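The flavor of stochastic gradient learning converging to a rational expectations equilibrium can be illustrated with a minimal sketch. The model below (a scalar cobweb-style economy with parameters `mu`, `alpha`, `sigma`, a scalar belief `a`, and a decreasing gain `1/t`) is an illustrative assumption, not the paper's setup; it shows the basic SG update that GSG algorithms generalize with a weighting matrix and possibly a constant gain.

```python
import numpy as np

# Illustrative model (assumed for this sketch, not from the paper):
#   y_t = mu + alpha * E_{t-1} y_t + eta_t,   eta_t ~ N(0, sigma^2)
# Rational expectations equilibrium: y* = mu / (1 - alpha).
# Agents hold a scalar belief a_t about the mean of y and update it by a
# stochastic gradient step with decreasing gain gamma_t = 1/t.

rng = np.random.default_rng(0)
mu, alpha, sigma = 2.0, 0.5, 0.1   # alpha < 1: the stability condition holds
a = 0.0                            # initial belief

for t in range(1, 200_001):
    # Actual law of motion: outcomes depend on the agents' current belief.
    y = mu + alpha * a + sigma * rng.standard_normal()
    # Stochastic gradient update toward the observed outcome.
    a += (1.0 / t) * (y - a)

print(a)            # close to the REE value mu / (1 - alpha) = 4.0
```

With `alpha < 1` the belief converges to the rational expectations value; for `alpha > 1` the same recursion diverges, which is the kind of stability distinction the paper analyzes for the generalized (matrix-weighted) case.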