SEARCH

SEARCH BY CITATION

References

  • Bellman, R. (1957) Dynamic Programming. Princeton University Press, Princeton, New Jersey.
  • Bellman, R. (1961) Adaptive Control Processes: A Guided Tour. Princeton University Press, Princeton, New Jersey.
  • Bertsekas, D. (2007) Dynamic Programming and Optimal Control, vol. II, 3rd edn. Athena Scientific, Nashua, New Hampshire.
  • Bogich, T. & Shea, K. (2008) A state-dependent model for the optimal management of an invasive metapopulation. Ecological Applications, 18, 748761.
  • Boutilier, C., Dean, T. & Hanks, S. (1999) Decision theoretic planning: structural assumptions and comupational leverage. Journal of Artificial Intelligence Research, 11, 194.
  • Boutilier, C., Dearden, R. & Goldszmidt, M. (2000) Stochastic dynamic programming with factored representations. Artificial Intelligence, 121, 49107.
  • Chades, I., Martin, T., Nicol, S., Burgman, M., Possingham, H.P. & Buckley, Y. (2011) General rules for managing and surveying networks of pests, diseases, and endangered species. Proceedings of the National Academy of Sciences, 108, 83238328.
  • Chauvenet, A., Baxter, P., McDonald-Madden, E. & Possingham, H.P. (2010) Optimal allocation of conservation effort among subpopulations of a threatened species: How important is patch quality? Ecological Applications, 20, 789797.
  • Clark, C.W. & Mangel, M. (2000) Dynamic State Variable Models in Ecology: Methods and Applications, Oxford Series in Ecology and Evolution. Oxford University Press, New York, New York.
  • Costello, C. & Polasky, .S. (2004) Dynamic reserve site selection. Resources and Energy Economics, 26, 157174.
  • Dearden, R. & Boutilier, C. (1997) Abstraction and approximate decision theoretic planning. Artificial Intelligence, 89, 219283.
  • Ferns, N., Panangaden, P. & Precup, D. (2004) Metrics for finite Markov decision processes. In: Proceedings of the Twenthieth Conference on Uncertainty in Artificial Intelligence, pp. 162169.
  • Fikes, R. & Nilsson, N. (1971) STRIPS. A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189208.
  • Givan, R., Dean, T. & Greig, M. (2003) Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, 147, 163223.
  • Kushmerick, N., Hanks, S. & Weld, D. (1994) An algorithm for probabilistic least-commitment planning. In: Proceedings of the Twelfth National Conference on Artificial Intelligence, 10731078.
  • Li, L., Walsh, T.J. & Littman, M.L. 2006. Towards a unified theory of state abstraction for MDPs. In: Proceedings of the Ninth International Symposium on Artificial Intelligence and Mathematics, pp. 531539.
  • Nicol, S. & Chades, I. (2011) Beyond stochastic dynamic programming: a heuristic sampling method for optimizing conservation decisions in very large state spaces. Methods in Ecology and Evolution, 2, 221228.
  • Nicol, S. & Chades, I. (2012) Which states matter? An application of an intelligent discretization method to solve a continuous POMDP in conservation biology PLoS ONE, 7, e28993. doi:10.1371/journal.pone.0028993.
  • Nicol, S., Chades, I., Linke, S. & Possingham, H.P. (2010) Conservation decision-making in large state spaces. Ecological Modelling, 221, 25312536.
  • Possingham, H.P., Andelman, S., Noon, B., Trombulak, S. & Pulliam, H., 2001. Making smart conservation decisions. Conservation Biology: Research priorities for the next decade (eds M.E. Soule & G.H. Orians), pp. 225244. Island Press, Washington, District of Columbia, USA.
  • Powell, W. (2007) Approximate Dynamic Programming: Solving the Curse of Dimensionality. Wiley-Interscience, New York, New York.
  • Putterman, M. (1994) Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, New York, New York.
  • R Development Core Team (2009) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org.
  • Tenhumberg, B., Tyre, A.J., Shea, K. & Possingham, H.P. (2004) Linking wild and captive populations to maximize species persistence: optimal translocation strategies. Conservation Biology, 18, 13041314.
  • Uther, W. & Veloso, M.M., 1998. Tree based discretization for continuous state space reinforcement learning. In: Proceedings of the Fifteenth National Conference on Artificial Intelligence AAAI-98, pp. 769794.
  • Westphal, M.I., Pickett, M., Getz, W.M. & Possingham, H.P. (2003) The use of stochastic dynamic programming in optimal landscape reconstruction for metapopulations. Ecological Applications, 13, 543555.