Two annual flow generation models proposed to account for the Hurst phenomenon, fractional Gaussian noise (FGN) and lag 1 autoregressive-lag 1 moving average (ARMA(1,1)) models, were compared operationally by using the sequent peak algorithm (SPA). The SPA was used to determine empirical probability distributions of storage at a hypothetical single site on the basis of 1000 synthetic annual streamflow traces generated from each model, for constant demands of 0.5, 0.7, and 0.9 of the mean annual flow. The comparisons were made on the basis of population values of the lag 1 correlation coefficient, the coefficient of variation, and (for the FGN sequences) the Hurst coefficient. For the ARMA models, large-sample expectations of the Hurst coefficient (O'Connell, 1974) were used in the comparisons. The fast fractional Gaussian noise (FFGN) generator proposed by Mandelbrot (1971) was used to represent FGN. A modification of the ARMA model, the annual ARMA-Markov mixture process, was developed and tested; this model has the advantage that the Hurst coefficient is an explicit model parameter. The results of the comparisons showed that at an operating life of 40 years the unskewed (normal) models gave similar storage probability distributions. The small differences between models could be explained in part by differences in the equivalent independent sample sizes of the generated time series. For an operating life of 40 years and H = 0.70, tests of skewed (three-parameter log normal) models gave substantially identical storage probability distributions. For the longer 100-year operating life, storage distributions for the ARMA model differed significantly, apparently owing to the lack of an equivalent population H value for this model. The ARMA-Markov model, however, gave results nearly identical to those of the FFGN model at the 100-year operating life and appears to provide a viable low-cost alternative generation method for preserving long-term persistence in hydrologic time series.
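The experimental procedure described above can be sketched in outline. The code below is an illustrative reconstruction, not the study's implementation: a simple lag 1 autoregressive (Markov) generator stands in for the study's generation models, the sequent peak algorithm computes the required storage for a constant fractional demand, and all function names and parameter values are assumptions chosen for the example.

```python
import numpy as np

def ar1_flows(n_years, mean, cv, rho, rng):
    """Synthetic annual flow trace from a lag 1 autoregressive (Markov)
    model with the given mean, coefficient of variation (cv), and lag 1
    correlation coefficient (rho). A simplified stand-in for the FFGN,
    ARMA(1,1), and ARMA-Markov generators compared in the study."""
    sigma = cv * mean
    flows = np.empty(n_years)
    flows[0] = mean + sigma * rng.standard_normal()
    for t in range(1, n_years):
        # AR(1) recursion; innovation variance preserves the marginal variance
        innovation = sigma * np.sqrt(1.0 - rho**2) * rng.standard_normal()
        flows[t] = mean + rho * (flows[t - 1] - mean) + innovation
    return flows

def sequent_peak_storage(flows, demand):
    """Sequent peak algorithm: the reservoir capacity required to meet a
    constant demand from the given inflow sequence, i.e. the maximum
    accumulated deficit over the operating life."""
    deficit = 0.0
    max_deficit = 0.0
    for q in flows:
        deficit = max(0.0, deficit + demand - q)  # shortfall carried forward
        max_deficit = max(max_deficit, deficit)
    return max_deficit

# Empirical storage distribution from 1000 traces, 40-year operating life,
# demand equal to 0.7 of the mean annual flow (one cell of the study's design).
rng = np.random.default_rng(42)
storages = [
    sequent_peak_storage(ar1_flows(40, mean=100.0, cv=0.25, rho=0.2, rng=rng),
                         demand=0.7 * 100.0)
    for _ in range(1000)
]
print(np.percentile(storages, [50, 90, 99]))  # selected storage quantiles
```

Repeating this for each generator and each demand level (0.5, 0.7, 0.9 of the mean annual flow) and comparing the resulting storage quantiles is, in outline, the operational comparison the abstract describes.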