Light from ‘point sources’ such as supernovae is observed with a beam width of the order of the sources’ size – typically less than 1 au. Such a beam probes matter and curvature distributions that are very different from the coarse-grained representations in N-body simulations or perturbation theory, which are smoothed on scales much larger than 1 au. The beam typically travels through unclustered dark matter and hydrogen with a mean density well below the cosmic mean, punctuated by dark matter haloes and hydrogen clouds. Using N-body simulations, as well as a Press–Schechter approach, we quantify the density probability distribution as a function of beam width and show that, even for Gpc-length beams of 500 kpc diameter, most lines of sight are significantly underdense. From this we argue that modelling the probability distribution for au-diameter beams is critical. Standard analyses predict a very large variance for such tiny beam sizes, and non-linear corrections appear to be non-trivial. It is not even clear whether underdense regions lead to dimming or brightening of sources, owing to the uncertainty in modelling the expansion rate, which we show is the dominant contribution. By considering different reasonable approximations that yield very different cosmologies, we argue that modelling ultra-narrow beams accurately remains a critical problem for precision cosmology. This could appear as a discordance between angular diameter and luminosity distances when comparing supernova observations to baryon acoustic oscillation or cosmic microwave background distances.