Approximate record length constraints for experimental identification of dynamical fractals



The ambiguity that can exist, for short datasets, between the observational power spectra of dynamical fractals and low-order linear memory processes is demonstrated and explained. A practical rule of thumb would therefore be broadly useful for assessing whether a data record is long enough to distinguish the two types of processes and, if it is not, for estimating approximately how much additional data would be required to do so. Such an expression is developed using the AR(1) process as a loose benchmark. Various aspects of the technique are successfully tested using synthetic time series generated by a range of prescribed models, and its application and relevance to observational datasets are then demonstrated using examples from mathematical ecology (wild steelhead population size), geophysics (river flow volume), and econophysics (stock price volatility).
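The spectral ambiguity described above can be illustrated numerically. The sketch below (not from the paper; numpy assumed, and the record lengths, the AR(1) coefficient, and the frequency cutoff are illustrative choices) fits a log-log slope to the periodogram of a strongly persistent AR(1) series: over a short record the resolved frequencies lie mostly above the process's spectral corner, so the fitted slope is steeply negative and resembles a power-law (fractal-like) spectrum, whereas a much longer record resolves the low-frequency flattening that betrays finite memory.

```python
import numpy as np

rng = np.random.default_rng(0)


def ar1(n, phi, rng):
    """Generate an AR(1) series x[t] = phi * x[t-1] + eps[t]."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x


def log_spectral_slope(x):
    """Least-squares slope of log periodogram power vs. log frequency."""
    n = len(x)
    freqs = np.fft.rfftfreq(n)[1:]                       # drop zero frequency
    power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    return np.polyfit(np.log(freqs), np.log(power), 1)[0]


# Short record: the fitted slope is strongly negative, mimicking a
# 1/f^beta (dynamical-fractal) spectrum.
short = ar1(256, 0.9, rng)
print("short-record slope:", log_spectral_slope(short))

# Long record: restrict the fit to frequencies well below the AR(1)
# spectral corner (~1/(2*pi*tau) with tau = -1/ln(phi) ~ 9.5 steps);
# there the spectrum is flat, so the slope heads toward zero.
long_ = ar1(65536, 0.9, rng)
freqs = np.fft.rfftfreq(len(long_))[1:]
power = np.abs(np.fft.rfft(long_ - long_.mean()))[1:] ** 2
low = freqs < 0.002
slope_low = np.polyfit(np.log(freqs[low]), np.log(power[low]), 1)[0]
print("long-record low-frequency slope:", slope_low)
```

The short record's steep slope is exactly the kind of observation that could be attributed to either process class, which is the motivation for a record-length criterion.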