The problem of nonparametric spectral density estimation for discrete-time series in the presence of missing observations has a long history. In particular, the first consistent estimators of the spectral density were developed at about the same time as consistent estimators for nonparametric regression. On the other hand, while the theory of efficient (under the minimax mean integrated squared error criterion) and adaptive nonparametric regression estimation with missing data is by now well developed, no similar results have been established for the spectral density of a time series whose observations are missing according to an unknown stochastic process. This article develops the theory of efficient and adaptive estimation for a class of spectral densities that includes classical causal autoregressive moving average time series. The developed theory shows how the missing mechanism affects the estimation and what penalty it imposes on the convergence of the risk. In particular, given the costs of a single observation in time series with and without missing data, together with a desired accuracy of estimation, the theory allows one to choose the more cost-effective time series. A numerical study confirms the asymptotic theory.