We thank Dr. Weedon-Fekjaer, Dr. Jørgensen, and Dr. Gøtzsche for their comments.
Dr. Weedon-Fekjaer addresses the question of when to adjust for differences in breast cancer mortality between the study areas (invited to mammography screening) and the control areas (not invited to mammography screening) during the prescreening period. Dr. Jørgensen and Dr. Gøtzsche also touch upon this subject. We have recently analyzed the regional differences in breast cancer mortality from 1952 to 2004 among women aged 40 to 79 years in Sweden and showed that the differences in county-specific mortality seen in earlier decades diminished over time. To be on the safe side, one can adjust for the differences in breast cancer mortality during the prescreening period regardless of whether the difference is statistically significant. However, adjustment makes the confidence intervals approximately 40% wider, because the variance from the prescreening period is "added" to the variance of the screening period. One can, therefore, question the logic of adjusting if the estimated baseline relative difference is, say, 0.99 or 0.999. In previous studies,1,2 we adjusted for the differences but concluded that this can be unnecessarily conservative. Thus, in the SCRY (mammography SCReening of Young women) study, we applied a strict 5% significance level to decide whether to adjust.
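The variance argument above can be sketched numerically. In this illustration (the standard errors are hypothetical, not the study's values), the adjusted estimate is the difference of two log rate ratios, so their variances add; when the two standard errors are of similar size, the adjusted standard error, and hence the confidence-interval width, grows by a factor of about √2, i.e., roughly 40%:

```python
import math

# Hypothetical standard errors of the log rate ratio in the two periods;
# these are illustrative values, not estimates from the SCRY study.
se_screening = 0.10   # screening period
se_prescreening = 0.10  # prescreening baseline

# Adjusted log rate ratio = log RR(screening) - log RR(prescreening),
# so the variances of the two independent estimates add.
se_adjusted = math.sqrt(se_screening**2 + se_prescreening**2)

# Relative widening of the (symmetric, log-scale) confidence interval:
widening = se_adjusted / se_screening - 1
print(round(widening, 3))  # -> 0.414, i.e., about 40% wider
```

With unequal standard errors the widening is smaller, which is why adjustment costs less precision when the baseline is estimated from a long prescreening period.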
Jørgensen and Gøtzsche question our matching. We agree that the matching in our study was complex. The study design was chosen to make optimal use of our data. Matching is often a problem in epidemiological studies with retrospective data. Most important to us was to ensure that the length and mid-year of follow-up were, on average, equal in the study group and the control group.
Jørgensen and Gøtzsche also question adjustments for lead time. Only lead time in the deceased cases can affect the refined breast cancer mortality. Our estimates showed that the lead time for the breast cancer deaths was, as expected, shorter than for all breast cancer cases. An adjustment for lead time in our study would have increased the mortality reduction. We did not adjust the results, but with reasonable estimates of lead time, the rate ratio would have been reduced by 1 to 5 percentage points, that is, a mortality reduction of ≥29%.
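The percentage-point arithmetic can be made explicit. In this sketch the unadjusted rate ratio is a hypothetical value chosen for illustration only; the point is simply that lowering the rate ratio by 1 to 5 percentage points raises the estimated mortality reduction (1 − RR) by the same amount:

```python
# Hypothetical unadjusted rate ratio (illustrative, not the study's estimate).
rr_unadjusted = 0.72

# A lead-time adjustment that lowers the rate ratio by 1-5 percentage
# points raises the mortality reduction by the same 1-5 points.
for pp in (0.01, 0.05):
    rr_adjusted = rr_unadjusted - pp
    reduction = 1 - rr_adjusted
    print(f"adjustment {pp:.2f}: mortality reduction {reduction:.0%}")
```

Under this illustrative starting value, the adjusted reduction ranges from 29% to 33%, which is how a 1 to 5 percentage-point change in the rate ratio translates directly into the reported bound on the reduction.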
Jørgensen and Gøtzsche state that "it is more reliable to count the number of deaths," that is, breast cancer deaths. Our interpretation is that they advocate analysis of breast cancer mortality trends, as they did in the evaluation of the Copenhagen screening program.3 However, mortality trends do not take into account changes in breast cancer incidence. In Sweden, breast cancer incidence has increased continuously since the start of the national Cancer Registry in 1958, and in particular since the mid-1980s in the age group 50 to 69 years. The reasons for this are not known, but the introduction of hormonal therapy during menopause may be 1 explanation. The national service-screening program with mammography could explain a temporary change, but probably not a continuing increase.
Jørgensen and Gøtzsche also refer to their own estimates of the impact of the service-screening programs in Sweden, the United Kingdom, and Denmark, where they failed to demonstrate any effect because of the methodological problems indicated above. Furthermore, in Denmark they compared breast cancer mortality in screened areas with nonscreened areas for women aged 55 to 74 years without adjusting for the visible difference between the 2 groups during the prescreening period (higher mortality in the screening areas). It is worth noting that they compared the annual change in mortality and not the mortality level. Finally, they also refer to the null result in the recent publication from the Norwegian mammography screening program.4 However, if that study had shown a substantial effect of the screening program, we would have been suspicious, as the average follow-up time for women with breast cancer was 2.2 years, and in 5 of 6 regions the follow-up time from screening start was 2 to 6 years.