Ethics Symposium Part III: Ethical review boards: Important ethical safeguards or over-burdensome and unnecessary bureaucracy?
Simon Whitney, MD, JD, Department of Family and Community Medicine, Baylor College of Medicine, 3701 Kirby Drive, Suite 600, Houston, TX 77098, USA. (fax: 713 798 7940; e-mail: email@example.com). Carl E. Schneider, JD, University of Michigan Law School, 341 Hutchins Hall, Ann Arbor, MI, USA. (fax: 001 734 449 8763; e-mail: firstname.lastname@example.org).
All regulation has costs; most regulations have benefits: the key question is whether a regulation’s costs exceed its benefits. Cost-benefit analysis is as challenging as it is necessary, since both costs and benefits are protean and hard to measure [1, 2]. But cost-benefit issues will be decided implicitly if they are not addressed explicitly. Better, then, to decide them explicitly, with evidence and careful thought.
The cost-benefit question is often asked with just such evidence and thought of regulatory regimes like the FDA, but it is rarely asked of the regime that regulates research ethics. As Greg Koski, the former director of the U.S. Office for Human Research Protections, admitted, ‘We still don’t have direct evidence that the current process is actually preventing harm. We need solid empirical research on whether the IRB process is actually working’. The benefits of the regime are assumed but rarely analysed. The costs of the regime are numerous but are considered even less often. Scattered data suggest that the cost to researchers of complying with ethics regulation can be considerable – as much as 15% of the research budget [4, 5].
Ethics regulation has another, little-studied, cost: because it slows, discourages and stops life-saving research, lives are lost that would otherwise have been saved. Studies from Australia and the United Kingdom have tried to count these lost lives. We build on those studies by suggesting a way to estimate the deaths caused when ethics review committees slow research. This approach can be applied to any kind of regulation and any country where regulation delays research, although we limit our comments to Institutional Review Boards (IRBs) in the United States.
Biomedical research has a staggering scope. Its concerns range from the reduction in infant and maternal mortality achieved by training midwives in remote Afghan villages to the development of medication to prevent or delay Alzheimer’s disease. It is a complex, worldwide endeavour. When it fails, as when it does not identify that a treatment already in common use is on balance harmful, people suffer and die. When it succeeds, people live longer and better lives – and those improved lives benefit both the individual and society. Nordhaus estimates that the medical revolution of the twentieth century produced economic value about as large as the increase in all other goods and services.
A regulatory system that impedes this research must inevitably cost lives. We define cost in lives as the number of people who die because of the delay in research caused by ethics board review.
Christie et al. estimated the cost in lives of ethics board review in Australia: (i) 36 000 Australians die of cancer annually. Cancer survival improves by slightly more than 1% annually, or about 360 Australians a year. (ii) Ethics review of one cancer trial took about 2 months (one-sixth of a year). (iii) Therefore, about 60 patients (one-sixth of 360) die annually because of regulatory delay. Delaying research costs lives, but this method is debatable. How much of the decline in mortality is from research and how much from other causes? How much of the Australian decline comes from research elsewhere?
Collins et al. of the United Kingdom were among the principal investigators of ISIS-2, a multinational trial of the effect of thrombolytics – blood thinners – on the mortality of hospitalized heart attack patients. In the United Kingdom, physicians could obtain consent by merely mentioning that the patient would be participating in (unspecified) research. In the United States, subjects had to sign a 1750-word form describing risks and benefits of participation, the side effects of thrombolytics (aspirin and streptokinase), and their rights as subjects.
ISIS-2 recruited 17 200 subjects in 16 countries between 1985 and 1987 – 6000 in the United Kingdom but only 400 in the United States. Collins attributes this difference to the stringent American consent requirements. Thrombolytics reduced vascular mortality in heart attack patients at 5 weeks by 42%. Heart attacks are common, and Collins et al. concluded that the delay in publication of the study’s results created by the American consent rules caused, worldwide, ‘at least a few thousand unnecessary deaths’.
We extend Collins’ approach. We consider the factors that contribute to delay, adjust his formula to reflect the limited uptake of medical interventions, and comment on other considerations, using ISIS-2 as an example. We do this to show how to calculate lives lost by regulatory delay. (Lives saved by regulation should also be considered in a complete analysis.)
Formula 1: Calculating the cost in lives
Collins’ implied equation for calculating the cost in lives caused by protocol delay is:

Cost in lives = (number of eligible patients per month) × (months of delay) × (absolute reduction in mortality)
In 1987, 760 000 patients were hospitalized with heart attacks in the United States . During 1994–1996, 31% of them were eligible for thrombolytic therapy . At that rate, 235 600 patients would have been eligible during 1987. Therefore,
Cost in lives = 19 633 eligible patients per month × 8 months of delay × 5.6% absolute reduction in mortality ≈ 8800 lives
This is our first approximation.
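As a check on the arithmetic, Formula 1 can be evaluated directly with the figures given in the text (a minimal sketch; the variable names are ours, and the 8-month delay is the baseline estimate used in the sensitivity analysis):

```python
# Sketch of Formula 1 (variable names are ours, figures from the text).
# Cost in lives = eligible patients per month x months of delay
#                 x absolute reduction in mortality.

eligible_per_year = 235_600      # US patients eligible for thrombolytics, 1987
delay_months = 8                 # baseline estimate of US regulatory delay
abs_mortality_reduction = 0.056  # 42% relative reduction ~ 5.6% absolute

eligible_per_month = eligible_per_year / 12
cost_in_lives = eligible_per_month * delay_months * abs_mortality_reduction
print(round(cost_in_lives))  # roughly 8800 lives
```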
Figure 1 shows this graphically. The diffusion of innovations, including new medical treatments, follows an S-shaped curve beginning at T0; regulatory action delays the beginning of the diffusion of the new treatment to time T1. The area between the two curves – approximately a parallelogram – is proportional to the cost in lives of regulatory delay.
Possible modifications of Collins’ formula
Sources of delay
Regulation delays clinical trials in three ways. (i) It delays start dates; all biomedical research in the United States is thus delayed, typically for weeks or months. (ii) It delays progress, as did the American consent process in ISIS-2. And (iii) trials of every size are stopped when investigators modify their methods and must receive IRB permission before resuming. OHRP has halted important federally funded trials in mid-course (e.g., the National Heart, Lung and Blood Institute’s study of acute respiratory distress syndrome in 2002 and Peter Pronovost’s work to prevent infections in ICUs in 2007).
In addition to trial-specific delays, approval of new drugs and devices is slowed by regulatory delay preparatory to the final trial. Drugs are typically developed in studies conducted in stages, usually called phase 1, phase 2 and phase 3. IRB review delays each phase. These delays are summed to indicate the total delay.
Diminishing IRB delay by clever planning
The most common cause of protocol delay is the time required to obtain initial IRB approval. But if an investigator could obtain approval without delaying the protocol, the cost in lives would be reduced to zero. This might have been possible years ago, when it was common to seek IRB approval while protocol details were being worked out, details like refining power calculations or writing newspaper advertisements. But now IRBs want to see power calculations and know exactly what newspaper advertisements say before approving the study. Only a desperate investigator submits a protocol without perfecting it. In short, biomedical research is typically conducted this way: first, develop the protocol in complete detail. Full stop. Second, get IRB approval.1 Third, conduct research. Few projects can escape regulatory delay by clever timing.
Differential utilization of research results
Collins’ approach implicitly assumes that after ISIS-2 the use of thrombolytics rose from zero to 100%. However, many protocols, including ISIS-2, promote an intervention that is already in modest use – and a few cardiologists were using thrombolytics in the mid-1980s. Further, even after the study results were published, not all patients received the better treatment. We therefore add a term to Formula 1, U, indicating the differential utilization of results. Adding this term to Formula 1, we obtain

Cost in lives = (number of eligible patients per month) × (months of delay) × (absolute reduction in mortality) × U
Figure 2 shows this graphically, with U0 the utilization of the intervention in clinical practice before publication of the study and U1 its utilization afterward. As U1–U0 approaches 100%, the cost in lives approaches the maximum derived from the original Collins formula. Consider ISIS-2. Before the study was published, fewer than one per cent of US cardiologists used thrombolytics in heart attack patients (Fig. 2). After the study was published, thrombolytics became the international standard of care but were still not given to every eligible patient. Patients refuse interventions, pharmacies run out of medication, and some cardiologists adopt new treatments slowly. Still, by 1998, thrombolytics were used in 76% of eligible US heart attack patients. As thrombolytics were used in no more than 1% of patients before ISIS-2, the differential uptake was about 75%. We add this to the equation:
Cost in lives = 19 633 eligible patients per month × 8 months of delay × 5.6% absolute reduction in mortality × 75% differential uptake ≈ 6600 lives
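The adjusted calculation can be sketched the same way (again a sketch using the text’s figures; variable names are ours):

```python
# Formula 1 extended with differential utilization U = U1 - U0.
eligible_per_month = 235_600 / 12  # eligible US patients per month, 1987
delay_months = 8                   # baseline estimate of US regulatory delay
abs_mortality_reduction = 0.056    # 5.6% absolute reduction in mortality
u_before = 0.01                    # thrombolytic use before ISIS-2 (<1%)
u_after = 0.76                     # use in eligible US patients by 1998

differential_uptake = u_after - u_before  # about 75%
cost_in_lives = (eligible_per_month * delay_months
                 * abs_mortality_reduction * differential_uptake)
print(round(cost_in_lives))  # roughly 6600 lives
```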
Each element of this equation is subject to uncertainty. We first change one factor at a time, while holding the others constant (with delay at 8 months, mortality reduction at 5.6% and differential uptake 75%). In the second step, we allow all three variables to reach the limits of their ranges (Table 1).
Table 1. Sensitivity analysis

Variable varied              Lower limit             Upper limit
Reduction in mortality       4.5%                    6.6%
Delay of study               6 months                20 months
Differential uptake          68%                     79%
All three variables          4.5%, 6 months, 68%     6.6%, 20 months, 79%

Baseline values: 5.6% reduction in mortality, 8 months of delay, 75% differential uptake.
Reduction in mortality
ISIS-2 reduced mortality by 42% with a confidence interval (±2 standard deviations) of 34% to 50% (corresponding to an absolute reduction in mortality of 4.5% and 6.6%).
Extent of delay
The United States recruited an average of 12 subjects per month, the United Kingdom 177 per month, and all other countries 322 per month. The total enrolment of 17 200 was reached in 33.5 months. Had the United States enrolled 177 subjects per month, enrolment would have been completed 8 months faster. Collins conservatively estimates a delay of only 6 months. These calculations, however, ignore the larger population of the United States.
Cardiologists in the United Kingdom enrolled 3.1 subjects per million population per month. If US cardiologists had enrolled subjects at this rate, the United States would have contributed 748 subjects per month, and the trial would have been completed 20 months sooner.
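Both delay estimates follow from the enrolment rates just given; a short calculation reproduces them (variable names are ours):

```python
# Reconstruction of the two delay estimates from ISIS-2 enrolment rates
# (figures from the text; variable names are ours).
total_enrolment = 17_200
us_rate, uk_rate, other_rate = 12, 177, 322  # subjects per month

actual_months = total_enrolment / (us_rate + uk_rate + other_rate)  # ~33.5

# Scenario 1: the US enrols at the UK's absolute rate (177/month).
months_1 = total_enrolment / (177 + uk_rate + other_rate)

# Scenario 2: the US enrols at the UK's per-capita rate (748/month).
months_2 = total_enrolment / (748 + uk_rate + other_rate)

print(round(actual_months - months_1))  # about 8 months saved
print(round(actual_months - months_2))  # about 20 months saved
```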
Barron’s published measure of the 76% uptake of thrombolytics in 1998 has a 95% confidence interval of approximately 69% to 80%, with corresponding differential uptakes of 68% and 79%.
Because these input variables are unlikely to influence each other (for instance, the delay in trial completion is unlikely to influence the reduction in mortality or the differential uptake of results), the cost in lives is probably between the minimum and maximum values shown here.
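The limits of that range can be sketched by letting all three variables reach their bounds at once, as in the last row of Table 1 (a sketch; the resulting totals are our arithmetic applied to the text’s figures, not numbers reported by the trial):

```python
# Formula with differential uptake, evaluated at the extremes of the
# sensitivity ranges (our arithmetic, not trial data).
eligible_per_month = 235_600 / 12  # eligible US patients per month, 1987

def cost_in_lives(delay_months, abs_reduction, uptake):
    return eligible_per_month * delay_months * abs_reduction * uptake

low = cost_in_lives(6, 0.045, 0.68)    # every variable at its minimum
high = cost_in_lives(20, 0.066, 0.79)  # every variable at its maximum
print(round(low), round(high))
```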
These estimates do not include the deaths outside the United States. ISIS-2 demonstrated that aspirin alone, which is readily available even in poor countries, reduced vascular death by 23%. The ISIS-2 authors comment, ‘If 1 month of low-dose aspirin were to be given to just one million new patients a year – which is only a fraction of the worldwide total with acute myocardial infarction [heart attack] – then a few tens of thousands of deaths, reinfarctions, and strokes could be avoided or substantially delayed (and these benefits could be doubled if low-dose aspirin were continued for at least a few more years)’.
Estimating the cost in lives across studies
We now move from the cost in lives of a single trial to the cost across multiple studies. At the end of a year, we could estimate the cost in lives caused by delay of the research completed that year. The calculation requires only, for each successful study, the monthly number of eligible patients, the reduction in mortality, the months of delay and the differential uptake of the better intervention.
Only studies that showed a reduction in mortality would be included in this calculation. Phase 3 trials of new medications or controlled comparisons of already-available interventions would qualify if they showed significantly reduced mortality in one arm. Studies aimed at improving outcomes other than mortality would be excluded. Thus, many studies that clearly save lives in the long run, like trials aimed at controlling blood pressure, preventing diabetes and helping patients stop smoking, are not included in this analysis. Our method, though, could certainly be extended to include such trials.
The time the IRB system delayed initiation of the study could be obtained from investigators (or tracked by IRBs). When the time to completion of a trial is increased, as when IRB requirements reduce the pool of eligible subjects, that delay can also be estimated.
Calculating differential uptake requires, first, estimating how often the better intervention is already used. For new drugs, this will be zero; for approved medications and established treatments, some doctors are already using the intervention. We would also need an estimate of how often the new intervention is used after publication of the study results. For new medications, pharmacy or drug company records would often provide this datum. For a new device or surgical procedure, billing records would be helpful.
Combining the data
At each step, the analysis should indicate upper and lower bounds on the range of each variable so that the final estimate reflects an appropriate amount of uncertainty. Once these data have been tallied for each study conducted in that year, the total cost in lives for the year would be the sum of the cost in lives for each individual project.
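The yearly tally described above could be organized as follows (a sketch; the study records and field names are hypothetical, and the single example record uses made-up numbers):

```python
# Hypothetical yearly tally: each record holds the four quantities the
# text says are required, each as a (low, high) range.
studies = [
    {"eligible_per_month": (1_000, 1_200),   # made-up example record
     "mortality_reduction": (0.04, 0.06),
     "delay_months": (2, 5),
     "uptake": (0.5, 0.8)},
    # ... one record per mortality-reducing study completed that year
]

def bounds(study):
    """Lower and upper cost-in-lives estimates for one study."""
    lo = hi = 1.0
    for key in ("eligible_per_month", "mortality_reduction",
                "delay_months", "uptake"):
        lo *= study[key][0]
        hi *= study[key][1]
    return lo, hi

# Yearly total: sum of each study's lower and upper estimates.
total_low = sum(bounds(s)[0] for s in studies)
total_high = sum(bounds(s)[1] for s in studies)
print(total_low, total_high)
```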
This is a first attempt at this kind of analysis, and we want to make its limitations clear. For our example of ISIS-2, the 1987 count of heart attack hospitalizations excludes federal hospitals. Collins plausibly argues that the US consent requirements delayed ISIS-2, but we cannot prove the extent of the delay. We assume that the per cent of patients eligible for thrombolysis in 1987 was the same as in 1994–1996. We chose 10 years as the point for determining final uptake because a measure of utilization was available for 1998. The precise cut-off is not critical – thrombolytics’ survival advantage persisted 10 years after the study.
Extending this approach to estimate regulation’s cost in lives in less well-studied conditions will involve challenging data collection. And no analysis should ignore other considerations, such as whether regulatory delay also produced benefits to a trial’s subjects.
ISIS-2 was an unusual trial. It is a rare protocol that shows a dramatic reduction in mortality for a common problem and it is therefore rare for protocol delay to cost thousands of lives. However, regulatory delay is critical even if fewer lives are lost.
Although research ethics are enforced through a system of government regulation, that system has not been examined in the way government regulation should always be examined – by asking whether regulation’s benefits exceed its costs. We know of no disciplined attempt to identify and measure the system’s benefits or its costs. This is disturbing, since regulation that does more harm than good is itself unethical.
We have not essayed a cost-benefit analysis of the IRB system nor can we provide a precise calculation of how many lives are lost by the system’s delay of research. Our goal in this paper is to show how the inquiry might be begun and to suggest that the cost of research-ethics regulation is great enough to make this inquiry urgent. We do this by showing that the cost in lives could be measured and by proposing considerations that should be taken into account.
Because biomedical research saves lives, it is unsurprising that ethics regulation that requires prior approval and continuous monitoring of every example of human-subject research costs lives. The number of premature deaths experienced around the world, the extraordinary progress medical science has made against some causes of death (e.g., the death rate from cardiovascular disease dropped by two-thirds between 1950 and 2000) and the promising results of bench research mean that biomedical research can save many lives.
When biomedical research produces life-saving interventions like new drugs and devices, new uses for old drugs and new systems of care, time is critical – cardiologists want to learn of breakthroughs in treating heart attacks immediately, and intensivists want to reduce central line infections now. Medical journals, through rapid online publication, labour to save weeks, days and even hours to speed life-saving research to physicians. Regulatory delay is as harmful as any other delay. Further, biomedical research does not just save lives, it promotes other important social goods, like soothing suffering and diminishing disability. Regulatory delay presumably diminishes these benefits as well in ways that also need to be assessed.
Ethics regulation has costs other than those created by delay. That regulation, for example, has sometimes prevented research altogether, which is true of important categories of research in emergency medicine in the United States. Ethics regulation also can affect the quality of research, as when it distorts the representativeness of samples. Ethics regulation may also have a chilling effect that causes researchers not even to attempt some kinds of research. Finally, because many researchers are members of review boards, ethics regulation reduces the time they have to do their own research.
All these costs need to be surveyed and assessed as – of course – do the benefits of research-ethics regulation. Only then can we make a judgment as to whether ethics regulation does more good than harm. If it does not, we must ask whether it can be reformed or whether another system of regulation should be used.
IRB review is only part of regulatory delay, as demonstrated graphically for oncology research in Dilts DM, Sandler AB, Baker M et al. Processes to activate Phase III clinical trials in a cooperative oncology group: the case of cancer and leukaemia group B. J Clin Oncol. 2006; 24: 4553–7. DOI: 10.1200/JCO.2006.06.7819.
Conflict of interest statement
The authors have no conflict of interest to declare.
We are grateful for the assistance and insightful comments of Sean Blackwell, David Dilts, Jan Eberth, Asha Kapadia, Paula Knudson, Jon Tyson and Robert Volk. Any errors are our responsibility. We are also grateful to Mats Hansson and Ruth Chadwick, organizers of the conference, ‘Is Medical Ethics Really in the Best Interest of the Patient?’, held at Uppsala, Sweden, in June 2010, and to the Journal of Internal Medicine, sponsor of the conference. Dr Whitney’s work is supported in part by the Center for Clinical Research and Evidence-Based Medicine at the University of Texas Medical School at Houston.