Modest progress has been made in improving the care of patients with bladder cancer over the past decade. Notable advances include a better understanding of the role of neoadjuvant chemotherapy, maturation of the evidence supporting perioperative intravesical chemotherapy and maintenance intravesical immunotherapy, improved risk stratification to predict disease progression, improved staging through repeat transurethral resection for high-risk tumors, and a better understanding of the importance of surgical quality (eg, adequate lymph node dissection). Despite these advances, survival outcomes still depend heavily on the grade and stage of tumors at the time of diagnosis, and little progress has been made toward improving survival among patients with locally advanced or metastatic disease. Although patients with organ-confined tumors at the time of cystectomy enjoy an 85% 5-year recurrence-free survival rate, up to 50% of patients who undergo radical cystectomy have a locally advanced tumor or lymph node metastases, for which the 5-year recurrence-free survival rate drops to 60% and 35%, respectively.1, 2 This association between disease stage and survival is the theoretical basis for improving outcomes through early detection, timely diagnosis, and prompt treatment of patients with bladder cancer.

Bladder tumors originate in the bladder mucosa and progressively invade into the lamina propria, muscularis propria, perivesical fat, and contiguous structures in the pelvis, and an increased incidence of lymph node and distant metastases accompanies each progressive local stage.2 Given this model of progression, it is logical to hypothesize that earlier diagnosis and definitive intervention at a less advanced disease state should result in improved survival for the average patient. Whether it is screening for presymptomatic disease, more rapid diagnosis among those with initial symptoms, or a shorter time interval from diagnosis to definitive therapy, any intervention that shortens the interval between the early stages of curable disease and definitive therapy has the potential to improve outcome.

In this issue of Cancer, Hollenbeck and colleagues examine the relation between the time interval from symptoms (hematuria) to diagnosis (initial transurethral resection of bladder tumor [TURBT]) and survival. By using Medicare-linked Surveillance, Epidemiology, and End Results data from 1992 through 2002, the authors identified more than 27,000 Medicare beneficiaries who had a physician-ascribed diagnosis of hematuria in the year preceding a diagnosis of bladder cancer. They sorted patients into 4 groups based on the time interval between the initial hematuria claim and the diagnosis of bladder cancer (<3 months, 3 months to <6 months, 6 months to <9 months, and 9 months to 12 months). These time intervals were then modeled as predictors of bladder cancer mortality, with survival time measured from the time of bladder cancer diagnosis. After adjusting for demographic factors, they observed that, compared with patients who were diagnosed within 3 months of initial symptoms, those who were diagnosed at 9 months to 12 months were 34% more likely (hazard ratio [HR], 1.34) to die of bladder cancer and 15% more likely (HR, 1.15) to die of any cause. Adjustment for tumor grade and stage only slightly attenuated these results. Furthermore, in subgroup analyses, they noted that the patients most likely to experience death associated with a delay in diagnosis were those with low-grade (HR, 2.21) and low-stage (HR, 2.22) tumors. Patients with muscle-invasive tumors (tumor stage T2) were 23% more likely (HR, 1.23) to experience cancer-specific death when diagnosis took 9 months to 12 months compared with <3 months.
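
As an illustration of the general analytic structure only (emphatically not the authors' actual code or data), the short Python sketch below fits a Cox proportional hazards model with the delay categories entered as indicator variables. All column names are hypothetical, the data are synthetic, and the demographic covariates used in the actual study are omitted for brevity.

```python
# A minimal sketch of modeling diagnostic-delay categories as predictors
# of mortality in a Cox proportional hazards regression. Synthetic data;
# column names are hypothetical and demographic covariates are omitted.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1_000

df = pd.DataFrame({
    # Months from initial hematuria claim to bladder cancer diagnosis.
    "delay_cat": rng.choice(["0-<3", "3-<6", "6-<9", "9-12"], size=n),
    # Follow-up measured from the date of diagnosis, in months.
    "months_from_dx": rng.exponential(scale=60.0, size=n),
    # Death indicator (1 = died, 0 = censored).
    "died": (rng.random(n) < 0.7).astype(int),
})

# Encode delay categories as indicators, with "0-<3" months as the
# referent, so each exponentiated coefficient is a hazard ratio versus
# prompt diagnosis.
df = pd.get_dummies(df, columns=["delay_cat"], drop_first=True, dtype=float)

cph = CoxPHFitter()
cph.fit(df, duration_col="months_from_dx", event_col="died")
cph.print_summary()  # the exp(coef) column holds the hazard ratios
```

Because the synthetic delays and survival times here are independent, the fitted hazard ratios will hover near 1; the point is only the structure of the model, in which survival is measured from the date of diagnosis, a choice whose consequences are discussed below.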

This report adds to the growing list of articles that have evaluated delay in definitive therapy and its association with poorer survival outcomes. Most of the studies that have addressed this topic have evaluated the time from the initial diagnosis of muscle-invasive tumors to cystectomy, and most have reported an association between longer delay and shorter survival.4-6 Other work suggests that the worse survival outcomes among women with bladder cancer may be related to a delay in diagnosis because hematuria is attributed to other causes.7, 8 Finally, a large screening study that used dipstick hematuria to detect incident bladder tumors reinforced the concept of early detection by providing evidence that screen-detected tumors are identified at a lower stage and are less likely to be lethal than tumors that present initially with symptoms.9

Does the current article, building on existing work, allow us to conclude that delays in diagnosis of up to 1 year, and thus delays in definitive treatment, are a cause of poorer survival outcomes for patients with bladder cancer? Before reaching that conclusion, the reader should consider the limitations of this study as well as general limitations inherent in all studies that use observational data to draw conclusions about the association between earlier diagnosis and survival.

Figure 1 is a graphic representation of how the cohort in the study by Hollenbeck et al was assembled and analyzed. On the basis of the inclusion criteria, all patients had an episode of hematuria; and, at some time interval later, they were diagnosed with bladder cancer. This time interval is represented in the figure as ΔT1. Patients were followed from the time of diagnosis to the time of death (or were censored administratively at the end of available follow-up), and this time is represented as ΔT2. In their analyses, Hollenbeck and colleagues examined the relation between ΔT1 and ΔT2, and their main conclusion was that a longer ΔT1 (delay in diagnosis) was associated with a shorter ΔT2 (survival). However, constructing the analyses based on this definition of survival time inherently introduces lead-time bias into the study. Note Scenarios A and B in Figure 1, which, in general terms, represent the findings of the study. Compared with Scenario B, the patient in Scenario A has a longer delay in diagnosis (ΔT1) and, consequently, a shorter survival interval after diagnosis (ΔT2). However, in both scenarios, the survival time measured from the first symptoms of disease until death is the same (represented as ΔT3). Because of how the data were analyzed, even if there was no difference in survival time measured from the time of first symptoms, a shorter time from symptoms to diagnosis (ΔT1) will always result in a longer survival time measured from the time of diagnosis (ΔT2) for any given patient. This lead-time bias inflates the observed magnitude of survival benefit beyond the true benefit of early diagnosis and can result in false-positive findings. In all cases, it biases the observed findings away from the null and toward longer survival with earlier diagnosis.

Figure 1. This figure illustrates how lead-time bias can be introduced into observational studies that evaluate the association between earlier diagnosis and survival. ΔT1 is the time from first symptoms until diagnosis, ΔT2 is the time interval between diagnosis and death, and ΔT3 is the time interval between first symptoms and death. TURBT indicates transurethral resection of bladder tumor.
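
The mechanics of the figure can be made concrete with a small simulation. In the sketch below (synthetic data, illustrative only), survival from first symptoms (ΔT3) is generated independently of the diagnostic delay (ΔT1), so by construction earlier diagnosis confers no true benefit; yet survival measured from diagnosis (ΔT2 = ΔT3 − ΔT1) still appears substantially worse in the delayed group.

```python
# Lead-time bias in miniature: identical disease course regardless of
# delay, yet survival "from diagnosis" favors the promptly diagnosed.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Survival from first symptoms, drawn independently of the delay.
dT3 = rng.uniform(24, 120, size=n)   # symptoms to death, months
dT1 = rng.uniform(0, 12, size=n)     # symptoms to diagnosis, months
dT2 = dT3 - dT1                      # diagnosis to death, months

prompt = dT1 < 3
delayed = dT1 >= 9

print(f"mean dT3: prompt {dT3[prompt].mean():.1f}, "
      f"delayed {dT3[delayed].mean():.1f}")  # essentially identical
print(f"mean dT2: prompt {dT2[prompt].mean():.1f}, "
      f"delayed {dT2[delayed].mean():.1f}")  # differs by ~9 months
```

The roughly 9-month gap in mean ΔT2 is exactly the mean difference in delay between the two groups: lead time, and nothing about the disease itself.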

The authors could have mitigated the effects of lead-time bias by examining the association between the time from symptoms to diagnosis (ΔT1) and survival time measured from initial symptoms (ΔT3; the overlap of these time intervals, however, has its own inherent problems beyond the scope of this editorial). The authors note that they did perform a univariate sensitivity analysis to examine the magnitude of lead-time bias by structuring the data in this way, but the results of adding the lead time back into the survival time were completely predictable: each group's median survival increased by exactly the lower bound of its delay category. Specifically, median survival was unchanged for the 0 to <3-month delay group (70.9 months), it increased by exactly 3 months for the 3 to <6-month delay group (from 59.6 months to 62.6 months), it increased by exactly 6 months for the 6 to <9-month delay group (from 54.7 months to 60.7 months), and it increased by exactly 9 months for the 9 to 12-month delay group (from 50.9 months to 59.9 months). Although the difference in survival between groups still was significant according to the log-rank test after adding the lead time back into the survival estimates, it is not possible to know how these updated survival times would have impacted the results of the multivariate analyses, nor is it possible to know which findings in the multivariate analysis may no longer be statistically or clinically significant with survival times measured from the date of first symptoms.
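
The predictability of that sensitivity analysis is simple arithmetic, as the sketch below verifies using the medians quoted above (and assuming, as those medians imply, that a fixed per-group lead time was added back).

```python
# Adding the lead time back shifts each group's median survival by
# exactly the lower bound of its delay category. Medians are those
# quoted in the text; the fixed per-group lead times are implied.
median_from_dx = {"0-<3": 70.9, "3-<6": 59.6, "6-<9": 54.7, "9-12": 50.9}
lead_time_mo = {"0-<3": 0.0, "3-<6": 3.0, "6-<9": 6.0, "9-12": 9.0}

for group, median in median_from_dx.items():
    adjusted = median + lead_time_mo[group]
    print(f"{group} months of delay: {median:.1f} -> {adjusted:.1f} months")
# 70.9 -> 70.9, 59.6 -> 62.6, 54.7 -> 60.7, 50.9 -> 59.9
```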

The problem of lead time is not unique to this study; it is an inherent risk of any observational study that examines the relation between earlier diagnosis and survival. The classic scenario is a screening study that compares survival between screened and unscreened cohorts. In such studies, the screened groups have inflated survival times, because the survival clock started earlier (ie, at the time of a positive screening test), whereas it did not start until symptoms led to a diagnosis in the comparison group. Lead-time bias can be introduced into any observational study in which the start of the survival clock is moved back on the timeline by nonrandom factors, and these factors (whether screening, a delay in diagnosis, or a delay in treatment after diagnosis) form the basis for establishing cohorts.

Despite the problem of lead time, the report by Hollenbeck et al presents important findings, perhaps the most important of which is derived from the observation that patients with low-grade and low-stage tumors have the largest increase in mortality when there is a longer delay in diagnosis. If we assume that these findings are correct despite the potential lead-time bias, is there a plausible explanation for this counterintuitive finding? Low-grade tumors uncommonly progress and cause serious morbidity, so it seems unlikely that the observed increased risk of death is related directly to the classic notion of disease progression. The authors speculate that this observation could be a result of factors unrelated to the biologic progression of disease. They point out that a delay in diagnosis may be a marker of limited access to care and/or poorer quality of care. Patients with less access to care may undergo less surveillance after initial diagnosis. Poorer quality care may translate into less accurate grading and staging, and it also may translate into suboptimal care for any given stage of the disease. Taken together, these factors may result in poorer outcomes and a higher frequency of disease progression and death for any stage of the disease. In short, the delay in diagnosis per se may have only a limited influence on the course of the disease but may be a marker for quality-of-care factors that, when taken together, result in poorer survival outcomes. And, as the authors suggest, perhaps this is most pronounced in low-stage and low-grade tumors, because this clinical scenario offers more opportunity for variability of care. The finding that adjustment for disease stage and grade did not appreciably attenuate the findings reinforces this possibility. If early diagnosis improves outcomes by allowing treatment at lower disease stages, then adjustment for disease severity should have attenuated most of the findings.

What conclusions are we left with? First, the methods and findings presented by Hollenbeck et al do not provide convincing evidence that earlier diagnosis within the first year of symptoms improves outcomes, at least not under the conceptual model of improved outcomes resulting from intervention at earlier disease states. It should be noted, however, that, based on our current understanding of bladder cancer progression, there is strong face validity to the idea that earlier treatment, on average, should lead to improved survival outcomes. Proving this with observational data, however, is a difficult task. Even more difficult is determining the length of time from symptoms to diagnosis to treatment that is sufficiently short to add no appreciable risk to patients; to date, this question remains largely unanswered. What this study does well is add to the growing body of evidence that there are variations in the care delivered to patients with bladder cancer (in this instance, variation in timely diagnosis) that may be associated with adverse survival outcomes. Unexplained variation in care is a hallmark of an underlying quality-of-care problem, so perhaps there is as much to gain, or more, from improving the delivery of care to patients with bladder cancer as from early detection and earlier treatment.

In the first paragraph above, I listed several advances that have improved our ability to care for patients with bladder cancer over the past decade. However, a quick review of the literature is all it should take to convince the reader that many of these approaches are underused and applied unevenly in urologic practice in the United States. Variability in the delivery of care occurs in all aspects of this disease, not just in the time to diagnosis and treatment. It should be a goal of leaders in the fields of bladder cancer, health services delivery, and patient advocacy to determine why these wide variations in practice occur and to seek and disseminate solutions that result in more standardized approaches based on the available evidence. Furthermore, it should be a mission of all urologists who manage this disease to ensure that timely and evidence-based treatment is available to all patients; this should include educating referring providers in their communities about bladder cancer and the importance of timely referral for the evaluation of hematuria.

CONFLICT OF INTEREST DISCLOSURES

The author made no financial disclosures.

REFERENCES
