The common approach to computing the cosmic ray distribution in a starburst galaxy or region is equivalent to assuming that, at any point within that environment, there is an accelerator injecting cosmic rays at a reduced rate. This rate should be compatible with the overall volume-averaged injection, given by the total number of accelerators that were active during the starburst age. These assumptions seem reasonable, especially under the supposition of a homogeneous and isotropic distribution of accelerators. However, in this approach the temporal evolution of the superposed spectrum is not explicitly derived; rather, it is essentially assumed ab initio. Here, we test the validity of this approach by following the temporal evolution and spatial distribution of the superposed cosmic ray spectrum, and we compare our results with those of theoretical models that treat the starburst region as a single source. In the calorimetric limit (with no cosmic ray advection), homogeneity is reached (typically to within 20 per cent) across most of the starburst region; nevertheless, centre-to-edge intensity ratios can still amount to a factor of several. Differences between the common homogeneous assumption for the cosmic ray distribution and our models are larger for two-zone geometries, such as a central nucleus with a surrounding disc. We also find that the decay of the cosmic ray density after the starburst ends is slow: even approximately 1 Myr after the burst (for a gas density of 35 cm⁻³), it may still be within an order of magnitude of its peak value. Based on our simulations, the detection of a relatively hard spectrum up to the highest gamma-ray energies from nearby starburst galaxies favours a relatively small diffusion coefficient (i.e. a long diffusion time) in the region where most of the emission originates.
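The contrast drawn above, between an ab initio steady-state assumption and an explicitly superposed population of discrete accelerators, can be illustrated with a minimal toy model. The sketch below (not the paper's actual calculation; all parameter values are illustrative assumptions) superposes impulsive injections from randomly timed accelerators, each decaying on a single loss timescale, and compares the resulting density late in the burst with the volume-averaged steady-state value (injection rate times loss timescale) that the common approach assumes from the outset.

```python
import numpy as np

# Illustrative, assumed parameters (not taken from the paper)
rate = 0.03      # accelerator (e.g. supernova) rate [1/yr]
tau = 1.0e5      # cosmic ray loss/escape timescale [yr]
t_burst = 1.0e7  # starburst duration [yr]
n_per_acc = 1.0  # normalised cosmic ray content injected per accelerator

rng = np.random.default_rng(0)

# Draw the individual accelerator activation times over the starburst age
n_events = rng.poisson(rate * t_burst)
t_events = rng.uniform(0.0, t_burst, n_events)

def superposed_density(t):
    """Sum the exponentially decaying contributions of all accelerators
    that switched on before time t (impulsive injection, losses on tau)."""
    dt = t - t_events[t_events <= t]
    return n_per_acc * np.exp(-dt / tau).sum()

# Homogeneous steady-state value assumed ab initio: rate * tau
steady = rate * tau * n_per_acc

# Late in the burst (t >> tau) the explicit superposition fluctuates
# around the steady-state value, so the two descriptions converge there
late = superposed_density(0.9 * t_burst)
print(late / steady)
```

In this toy setting the agreement holds only once many loss timescales have elapsed and many accelerators contribute; it says nothing about the spatial centre-to-edge gradients or two-zone geometries discussed above, which require following diffusion explicitly.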