Theory predicts that the rate of decline in linkage disequilibrium (LD) with distance between markers, measured by r2, can be used to estimate effective population size (Ne) and how it varies over time. The development of high-density genotyping makes the application of this theory feasible and has provided an impetus to improve predictions. This study considers the impact of several methodological choices on the estimation of Ne using both simulated and equine high-density single-nucleotide polymorphism data, both when Ne is assumed constant a priori and when it is not. In all models, estimates of Ne were highly sensitive to the threshold imposed on minor allele frequency (MAF) and to a priori assumptions about the expected r2 for adjacent markers. Where constant Ne was assumed a priori, estimates with the lowest mean square error were obtained with MAF thresholds between 0.05 and 0.10, adjustment of r2 for finite sample size, estimation of a [the limit of r2 as recombination frequency (c) approaches 0] and use of the adjusted recombination frequency c(1 - c/2) in place of c when relating r2 to Ne. Findings for models allowing variable Ne were much less clear-cut: apart from the desirability of correcting for finite sample size, estimates of recent Ne (<7 generations ago) were inconsistent when derived from marker pairs with large c. The theoretical conflicts over how estimation should proceed, and the uncertainty over where predictions might be expected to fit well, suggest that estimation of Ne when it varies over time be carried out with extreme caution.
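The ingredients named above (finite-sample correction of r2, the adjusted recombination frequency c(1 - c/2), and the inverse relationship between r2 and Ne) can be combined into a point estimate of Ne for a pair of markers. The sketch below is illustrative only: it assumes the commonly used expectation 1/E[r2] = alpha + 4*Ne*c with alpha fixed at 1 (whereas the study estimates the corresponding limit a rather than fixing it), subtracts 1/n from observed r2 as the finite-sample adjustment, and uses t = 1/(2c) as the approximate number of generations in the past to which the estimate refers. The function name and example values are hypothetical.

```python
def estimate_ne(r2_obs, c, n, alpha=1.0):
    """Illustrative point estimate of Ne from one marker pair.

    r2_obs : observed squared correlation (r2) between the two loci
    c      : recombination frequency between the loci (0 < c <= 0.5)
    n      : number of sampled individuals (for the finite-sample correction)
    alpha  : assumed value of the limit term (fixed at 1 here; the paper
             estimates this quantity rather than fixing it a priori)

    Returns (Ne_hat, t_hat), where t_hat approximates how many generations
    ago the estimate applies (t ~ 1 / (2c)).
    """
    # Adjust observed r2 for finite sample size (subtract expected sampling noise).
    r2_adj = r2_obs - 1.0 / n
    if r2_adj <= 0:
        raise ValueError("r2 is not distinguishable from sampling noise at this n")

    # Adjusted recombination frequency c(1 - c/2), as favoured in the constant-Ne case.
    c_adj = c * (1.0 - c / 2.0)

    # Invert 1/E[r2] = alpha + 4*Ne*c to solve for Ne.
    ne_hat = (1.0 / r2_adj - alpha) / (4.0 * c_adj)

    # Approximate generations in the past represented by this marker spacing.
    t_hat = 1.0 / (2.0 * c_adj)
    return ne_hat, t_hat
```

In practice such estimates are averaged over many marker pairs binned by c; note that pairs with large c correspond to very recent generations, where the abstract reports the least consistency.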