Cost-effectiveness of Point-of-care Biomarker Assessment for Suspected Myocardial Infarction: The Randomized Assessment of Treatment Using Panel Assay of Cardiac Markers (RATPAC) Trial


  • RATPAC Research Team members are listed in Appendix A.

  • The RATPAC trial was funded by the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Program (06/302/19). The study funders had no role in study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication. The researchers were independent of the study funders. The views and opinions expressed herein are those of the authors and do not necessarily reflect those of the NIHR HTA. All authors have completed the Unified Competing Interest form at and declare that they have no conflicts of interest.

  • Supervising Editor: Richard Byyny, MD, MSc.

Address for correspondence and reprints: Steve W. Goodacre, PhD; e-mail:


ACADEMIC EMERGENCY MEDICINE 2011; 18:488–495 © 2011 by the Society for Academic Emergency Medicine


Objectives:  Chest pain due to suspected myocardial infarction (MI) is responsible for many hospital admissions and consumes substantial health care resources. The Randomized Assessment of Treatment using Panel Assay of Cardiac markers (RATPAC) trial showed that diagnostic assessment using a point-of-care (POC) cardiac biomarker panel consisting of CK-MB, myoglobin, and troponin increased the proportion of patients successfully discharged after emergency department (ED) assessment. In this economic analysis, the authors aimed to determine whether POC biomarker panel assessment reduced health care costs and was likely to be cost-effective.

Methods:  The RATPAC trial was a multicenter individual patient randomized controlled trial comparing diagnostic assessment using a POC biomarker panel (CK-MB, myoglobin, and troponin, measured at baseline and 90 minutes) to standard care without the POC panel in patients attending six EDs with acute chest pain due to suspected MI (n = 2,243). Individual patient resource use data were collected from all participants up to 3 months after hospital attendance using self-completed questionnaires at 1 and 3 months and case note review. ED staff and POC testing costs were estimated through a microcosting study of 246 participants. Resource use was valued using national unit costs. Health utility was measured using the EQ-5D self-completed questionnaire, mailed at 1 and 3 months. Quality-adjusted life-years (QALYs) were calculated by the trapezium rule using the EQ-5D tariff values at all follow-up points. Mean costs per patient were compared between the two treatment groups. Cost-effectiveness was estimated in terms of probability of dominance and incremental cost per QALY.

Results:  Point-of-care panel assessment was associated with higher ED costs, coronary care costs, and cardiac intervention costs, but lower general inpatient costs. Mean costs per patient were £1,217.14 (standard deviation [SD] ±£3,164.93), or $1,987.14 (SD ±$4,939.25), with POC versus £1,005.91 (SD ±£1,907.55), or $1,568.64 (SD ±$2,975.78), with standard care (p = 0.056). Mean QALYs were 0.158 (SD ± 0.056) versus 0.161 (SD ± 0.052; p = 0.250). The probability of standard care being dominant (i.e., cheaper and more effective) was 0.888, while the probability of the POC panel being dominant was 0.004. These probabilities were not markedly altered by sensitivity analysis varying the costs of the POC panel and excluding intensive care costs.

Conclusions:  Point-of-care panel assessment does not reduce costs despite reducing admissions and may even increase costs. It is unlikely to be considered a cost-effective use of health care resources.

The diagnosis and management of acute chest pain incurs substantial health care costs, being responsible for around 6% of adult emergency department (ED) attendances and around a quarter of emergency medical admissions.1 Rapid diagnostic assessment using a panel of point-of-care (POC) cardiac markers could save substantial health costs through reduced hospital admissions.2 The Randomized Assessment of Treatment Using Panel Assay of Cardiac Markers (RATPAC) trial showed that use of a POC cardiac marker panel consisting of myoglobin, CK-MB, and troponin measured at baseline and 90 minutes reduced hospital admissions.3 However, an intervention that reduces admissions may not be cost saving if it incurs additional costs that outweigh the reduced admission costs. The RATPAC trial also showed that the POC panel assessment was associated with a small increase in coronary care use and chest pain–related outpatient follow-up and did not significantly alter mean inpatient days. We aimed to undertake an economic analysis alongside the RATPAC trial to estimate the effect of the POC panel on health care costs, health utility, and quality-adjusted life-years (QALYs) accrued after diagnostic assessment and estimate the cost-effectiveness of the POC panel in terms of the incremental cost per QALY gained compared to standard care without the panel.


Study Design

The RATPAC trial was a pragmatic multicenter randomized controlled trial comparing rapid diagnostic assessment with a POC biomarker panel to standard care in 2,243 patients attending six hospitals with chest pain due to suspected myocardial infarction (MI). The study was approved by the Leeds East Research Ethics Committee.

Study Setting and Population

The trial methods have been described fully elsewhere,3 but are briefly reproduced here. Patients attending the ED with suspected, but not proven, MI were asked to participate in the trial. Research nurses or ED staff screened all patients with chest pain and excluded those with ECG changes diagnostic of MI or high-risk acute coronary syndrome (>1-mm ST deviation or >3-mm inverted T-waves); known coronary heart disease (CHD) presenting with prolonged (>1 hour) or recurrent episodes of cardiac-type pain; proven or suspected serious noncoronary pathology (e.g., pulmonary embolus); comorbidity or social problems requiring hospital admission; an obvious noncardiac cause (e.g., pneumothorax or muscular pain); or more than 12 hours since their most significant episode of pain, as well as previous participants, those unable to understand the trial information, and those unwilling to consent. Those who consented were randomized to either 1) diagnostic assessment using a POC biomarker panel consisting of troponin I, CK-MB (mass), and myoglobin measured at baseline and 90 minutes later or 2) standard diagnostic assessment with laboratory troponin assays according to local protocols. All other tests and treatments were at the discretion of the treating physician. In five of the six hospitals the local protocol followed national guidance4 recommending troponin measurement 10–12 hours after the worst symptoms. At the other hospital an earlier time threshold of 6 hours was recommended.

Study Protocol

Economic evaluation was planned in the trial protocol (available at ) and undertaken alongside the trial using recommended practice.5 The United Kingdom National Health Service (NHS) perspective was adopted, and other methods were in line with UK National Institute for Health and Clinical Excellence (NICE) Technology Appraisal Guidelines.6 NICE is the organization responsible in the United Kingdom for reviewing submissions to secure NHS approval for funding of individual health services, devices, and treatments. The guidelines describe the principles and methods that are expected to be used in health technology appraisals submitted to NICE. The time horizon of the analysis was 3 months. We considered that the initial diagnostic strategy was unlikely to influence costs or outcomes after 3 months and would have only a very weak association with adverse events occurring before 3 months that could influence long-term costs and outcomes.

The primary outcome for the trial was avoidance of hospital admission, based on whether the patient had been discharged from the hospital by 4 hours after arrival. The sample size estimate was based on detecting a 5% absolute difference in the primary outcome, rather than any economic measures. In our cost analysis we aimed to determine whether a difference in this outcome led to cost savings. We therefore used the total time each patient spent in the hospital to estimate resource use, regardless of whether it was greater or less than 4 hours. A research nurse at each participating hospital collected resource use data using hospital computer records and case notes for all patients covering the length of time in the ED, the use of diagnostic tests, admissions, readmissions, outpatient reviews, and cardiac procedures. A self-completed questionnaire, including the EQ-5D health utility questionnaire and a resource use questionnaire, was mailed to all trial participants at 1 and 3 months after recruitment. The research nurse also collected microcosting data estimating the amount of time each staff member spent in patient care during ED assessment for a sample of 30 to 40 patients at each participating hospital.

The EQ-5D is a brief questionnaire measuring five dimensions of health (mobility, self-care, ability to undertake usual activities, pain/discomfort, and anxiety/depression). Responses are allocated a tariff to give an overall estimate of health utility ranging from less than zero (states worse than death), through zero (equivalent to death), to one (perfect health). The EQ-5D tariff can be combined with survival data to give an estimate of QALYs accrued.

Supplementary Tables 1 and 2 (Data Supplements S1 and S2, available as supporting information in the online version of this paper) show the unit costs used in the main analysis and the microcosting study respectively, which are all at 2007/2008 prices. ED overheads were based on a study previously undertaken by the investigators7 and applied to the microcosting data from this study. Panel costs were based on purchase price, and the remaining costs were valued using national unit costs.8,9 Total United Kingdom NHS costs up to 3 months after initial attendance were calculated. QALYs were calculated by the trapezium rule using the EQ-5D tariff values at all follow-up points.
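The QALY calculation described above can be sketched as follows (a minimal illustration, assuming the baseline-utility convention reported in the Results; the function and variable names are illustrative, not from the trial analysis code):

```python
def qaly_trapezium(eq5d_1m, eq5d_3m, baseline=0.0):
    """QALYs accrued over 3 months from EQ-5D tariff values, by the
    trapezium rule. Follow-up points are at 1 and 3 months (1/12 and
    3/12 of a year); baseline utility defaults to 0, the convention
    used when reporting the trial's QALY means."""
    times = [0.0, 1 / 12, 3 / 12]            # time points in years
    utils = [baseline, eq5d_1m, eq5d_3m]     # EQ-5D tariff values
    return sum((utils[k] + utils[k + 1]) / 2 * (times[k + 1] - times[k])
               for k in range(len(times) - 1))
```

Applied to the standard care group means reported in the Results (0.769 at 1 month, 0.772 at 3 months), this yields approximately 0.160 QALYs, consistent with the reported per-patient mean of 0.161.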

Table 1. 
Patient Demographics and Characteristics

  1. Values are reported as n (%) unless otherwise stated.

  2. CABG = coronary artery bypass graft; CHD = coronary heart disease; MI = myocardial infarction; POC = point of care; SD = standard deviation.

                                          Standard Care   POC
                                          (n = 1,118)     (n = 1,125)
Mean age (±SD), yr                        54.6 (14.4)     54.5 (13.8)
Male                                      624 (56)        683 (61)
Female                                    494 (44)        442 (39)
Past history of CHD
 No                                       973 (88)        985 (88)
 Yes                                      137 (12)        132 (12)
 Previous MI                              65 (6)          60 (5)
 Angina + positive diagnostic test        53 (5)          46 (4)
 CABG surgery                             15 (1)          12 (1)
 Angioplasty                              34 (3)          37 (3)
 Stenosis > 50% on coronary angiography   12 (1)          14 (1)
 Unproven clinical label of CHD           31 (3)          36 (3)
 Other                                    10 (1)          12 (1)
Risk factors
 Diabetes                                 92 (8)          86 (8)
 Hypertension                             361 (33)        376 (34)
 Hyperlipidemia                           282 (27)        271 (26)
 Present smoker                           316 (29)        310 (28)
 Ex-smoker (past 10 yr)                   129 (12)        144 (13)
 Cocaine abuse                            10 (1)          6 (1)
 First-degree relative with angina or MI,
   onset age < 60 yr                      352 (34)        344 (33)
Source of referral
 Family doctor                            189 (17)        188 (17)
 Emergency ambulance                      510 (46)        481 (43)
 Self-referred                            375 (34)        419 (37)
 Other                                    41 (4)          35 (3)
Table 2. 
ED Economic Microcosting Study Costs, £ ($)

  1. POC = point of care; RATPAC = Randomized Assessment of Treatment Using Panel Assay of Cardiac Markers.

  2. *Source: RATPAC microcosting study.

  3. †Based on n = 119; three patients had no test panels and 21 had only one.

Item cost                          Standard Care (n = 124)   POC (n = 122)
Staff costs*                       49.21 (76.81)             60.76 (94.83)
POC tests (£21.33 per panel)       0                         38.13 (59.52)†
ED overheads (£5.38 [$8.40]/hr)    17.32 (27.03)             20.81 (32.48)
Total costs (mean)                 66.54 (103.86)            119.70 (186.83)
 Minimum                           26.41 (41.22)             52.04 (81.23)
 Maximum                           151.81 (236.95)           268.46 (419.03)

Supplementary Table 3 (Data Supplement S3, available as supporting information in the online version of this paper) shows the unit costs used to estimate the cost per patient of providing POC testing. The total cost per patient of POC testing was based on prices provided by Siemens Healthcare Diagnostics (Deerfield, IL). These prices, which vary with the volume of testing within any hospital, include the cost of the machine, periodic maintenance, reagents, and all consumables. To this we added estimates relating to the cost of calibration and quality control during testing (which are not volume dependent). The costs applied to the analyses reported here are those for an annual rate of 1,500 full panels (£21.33 [$33.28] per panel). It is worth noting that the relationship between cost and annual rate of testing is only known with certainty for the rates reported in Supplementary Table 3.

Table 3. 
Other Resource Use at Initial Assessment, by Treatment Group

  1. ECG = electrocardiogram; POC = point of care; SC = standard care.

  2. *Standard care, n = 1,118; POC, n = 1,125.

  3. †For t-test.

  4. ‡Pro rata cost of hospital ward for patients not admitted.

Resource item          Patients* (SC / POC)   Frequencies (SC / POC)   Cost £ ($), SC    Cost £ ($), POC   p-value†
Medications            891 / 827              2,112 / 2,086            5.21 (8.13)       5.56 (8.68)       0.703
Laboratory tests       1,075 / 1,071          2,546 / 2,079            38.00 (59.31)     28.58 (44.61)     <0.001
ECGs                   1,118 / 1,125          1,484 / 1,411            35.39 (55.24)     33.44 (52.20)     <0.001
Angiograms             8 / 16                 8 / 16                   0.92 (1.44)       1.85 (2.89)       0.104
ED length of stay‡     446 / 598              –                        34.53 (53.90)     40.36 (63.00)     <0.001

Data Analysis

The main economic analysis compared bootstrap estimates of the mean cost and QALYs per patient in the two groups and then estimated the incremental cost per QALY of using POC cardiac marker testing compared to management without POC testing. Results were plotted on the cost-effectiveness plane and then transformed into cost-effectiveness acceptability curves.10 We also undertook secondary analyses comparing different elements of resource use. No correction was made for these multiple comparisons.
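The bootstrap estimation of the dominance and cost-effectiveness probabilities can be sketched as follows (an illustrative sketch only, written in Python rather than the software used for the trial analysis; the function name and inputs are assumptions, not the trial code):

```python
import numpy as np

rng = np.random.default_rng(0)

def dominance_and_ceac(cost_poc, qaly_poc, cost_sc, qaly_sc,
                       lam=20_000, n_boot=5_000):
    """Bootstrap the probability that standard care dominates POC
    (standard care cheaper AND more effective) and the probability
    that POC is cost-effective at willingness-to-pay `lam` (£/QALY).

    Each arm is resampled independently, which is appropriate because
    the two trial groups are independent samples; within an arm the
    same resampled indices are used for costs and QALYs, preserving
    the within-patient correlation between the two."""
    cost_poc, qaly_poc = np.asarray(cost_poc), np.asarray(qaly_poc)
    cost_sc, qaly_sc = np.asarray(cost_sc), np.asarray(qaly_sc)
    dominated = 0
    cost_effective = 0
    for _ in range(n_boot):
        i = rng.integers(0, len(cost_poc), len(cost_poc))  # POC resample
        j = rng.integers(0, len(cost_sc), len(cost_sc))    # SC resample
        d_cost = cost_poc[i].mean() - cost_sc[j].mean()    # incremental cost
        d_qaly = qaly_poc[i].mean() - qaly_sc[j].mean()    # incremental QALYs
        dominated += (d_cost > 0) and (d_qaly < 0)
        cost_effective += (lam * d_qaly - d_cost) > 0      # net monetary benefit > 0
    return dominated / n_boot, cost_effective / n_boot
```

Evaluating the net-benefit probability across a range of `lam` values traces out the cost-effectiveness acceptability curve.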

The primary parametric techniques used to analyze data were t-tests and 95% confidence intervals (CI). Where the robustness of these techniques might be challenged by, for example, the skewed distribution of a variable, the qualitative conclusions of these tests were confirmed by their resampling counterparts, the permutation test and bootstrap CI.11 With few exceptions, results from both approaches coincide.

We anticipated that nonresponse to either the 1- or the 3-month questionnaire would result in some of the resource use and QALY data being incomplete (missing). Resource use items identified from patient records would be virtually complete, but community resource use identified on the questionnaire (family doctor, community nurse, and social worker visits) and EQ-5D scores would be missing for nonresponders at either 1 or 3 months. Exclusion of cases with any missing data wastes the potentially useful data that are available for these cases, such as resource data from hospital records and questionnaire responses at other time points. Thus, to maximize the information collected from the trial, we imputed missing values for EQ-5D and community resource use in cases where questionnaire responses were not received at 1 or 3 months, using multiple imputation to produce ten datasets with the ICE module written for Stata (version 10.1, StataCorp, College Station, TX). The idea of multiple imputation draws on the fact that, while missing values from incomplete data are unknown, complete values across variables provide information about what the missing values might be. Multiple imputation generates more than one likely value for each missing datum, thereby providing an unbiased representation of uncertainty.12 Thus, an additional set of results is available from the imputed cost and QALY data.
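After the ten imputed datasets are analyzed, the per-dataset results must be combined; the standard approach is Rubin's rules, which Stata applies when pooling estimates across imputations. A minimal sketch of that pooling step (the function name is illustrative, not from the trial analysis code):

```python
import math

def pool_estimates(estimates, variances):
    """Pool one parameter estimated on m multiply imputed datasets
    using Rubin's rules: the pooled estimate is the mean across
    imputations, and the pooled variance adds the average
    within-imputation variance to the between-imputation variance
    inflated by (1 + 1/m)."""
    m = len(estimates)
    q_bar = sum(estimates) / m                               # pooled estimate
    w_bar = sum(variances) / m                               # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    total_var = w_bar + (1 + 1 / m) * b
    return q_bar, math.sqrt(total_var)                       # estimate and its SE
```

The between-imputation term is what makes the pooled standard error honestly reflect the uncertainty introduced by the missing data.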


A total of 2,263 patients were recruited to the RATPAC trial, of whom 2,243 had analyzable data. The characteristics of the study population are shown in Table 1. Some 1,415 of the 2,243 (63.1%) participants returned completed questionnaires at both time points, 22 of the 2,243 (1%) only returned the 1-month questionnaire, 316 (14.1%) only returned the 3-month questionnaire, and 490 of 2,243 (21.8%) returned neither questionnaire.

Data were collected from 246 patients for the microcosting study. The average age of this sample was 54 years, 52% were male, and 12% had a history of CHD (Table 2). The POC testing added £38.13 ($59.47) per patient managed in this arm of the trial. Staff costs were £11.65 ($18.17) higher in the POC group, reflecting the increased level of staff involvement required to deliver the intervention. Overall, the microcosting study showed that POC testing added £53.16 ($82.87) to the costs of ED management.

Table 3 shows the number of patients in each group receiving other interventions during their initial assessment, the number of interventions received, and the cost per patient of providing these interventions. The small differences noted in the use of aspirin and clopidogrel between the two groups did not produce a significant difference in the cost per patient of medications. The standard care group received more laboratory blood tests, leading to an excess cost of £9.42 ($14.68) per patient compared to the POC group. The standard care group also received more ECGs than the POC group, although this resulted in only a small (£1.95, or $3.04) additional cost per patient. More patients in the POC group received angiography or percutaneous coronary intervention, but the differences in mean cost per patient attributable to these two interventions were not statistically significant. A cost for ED stay was estimated on a pro rata basis from the daily ward cost (£276.21, or $430.50) for patients who were discharged from the ED without any subsequent hospital admission. This cost was £5.87 ($9.15) higher for patients discharged from the POC arm, who spent about a half-hour longer in the ED.

Table 4 shows resource use after the initial assessment. Patients in the POC group received more coronary care and intensive care, while patients in the standard care group received more general inpatient care. None of these resulted in significant differences in mean cost, although the differences in the point estimates for costs relating to coronary care and intensive care were marked, with POC testing being associated with an additional £70.77 ($110.29) and £62.58 ($97.52) per patient, respectively. Patients in the POC group received more outpatient follow-up. Standard care was associated with more nurse home visits and social work visits, but with relatively small differences in mean cost per patient.

Table 4. 
Non-ED Resource Use and Costs, by Treatment Group

  1. *Costs of informal care are excluded as they fall outside the perspective of the study.

  2. †p-value for t-test (permutation [nonparametric] tests were also conducted and the qualitative results coincide).

  3. ‡Complete-case analysis: fraction is number of patients with events over number with completed items.

  4. GP = general practitioner; POC = point of care; SC = standard care.

Item*                                  SC: Patients, Use    POC: Patients, Use   Cost £ ($), SC     Cost £ ($), POC    p-value†
Coronary care days                     31, 104              47, 176              103.79 (162.00)    174.56 (272.46)    0.064
Intensive care days                    3, 14                7, 64                17.66 (27.56)      80.24 (125.24)     0.352
Other inpatient days                   564, 1,467           429, 1,353           362.43 (565.70)    330.96 (516.58)    0.420
Outpatient attendances                 155, 183             191, 222             27.67 (43.19)      35.18 (54.91)      0.045
Diagnostic tests (nonlaboratory)       327, 430             344, 450             49.28 (76.92)      51.41 (80.24)      0.573
Interventions                          38, 40               47, 54               91.25 (142.43)     149.83 (233.86)    0.061
Postdischarge events (ED attendances)  104, 155             100, 140             14.82 (23.13)      13.30 (20.76)      0.507
Community health support‡
 GP surgery visits                     519/666, 1,816       562/626, 1,977       150.85 (235.46)    154.36 (240.93)    0.646
 GP home visits                        47/535, 109          44/553, 98           11.82 (18.45)      10.28 (16.05)      0.602
 Nurse home visits                     46/529, 262          44/554, 128          10.72 (16.73)      5.00 (7.80)        0.046
 Social worker visits                  26/529, 164          14/549, 45           8.82 (13.77)       2.33 (3.64)        0.091

The mean cost per patient (imputed analysis) was £1,217.14 (SD ±£3,164.93), or $1,987.14 (SD ±$4,939.25), in the POC group and £1,005.91 (SD ±£1,907.55), or $1,568.64 (SD ±$2,975.78), in the standard care group (difference = £211.23 [standard error {SE} ±£109.35], or $329.52 [SE ±$170.89] per patient; 95% CI = –£16.53 to £442.90, or –$25.79 to $690.92; p = 0.056). The respective costs per patient were £1,216.18 (SD ±£3,319.55), or $1,897.98 (SD ±$5,178.50), and £1,008.94 (SD ±£3,276.44), or $1,574.83 (SD ±$5,111.24), for the complete case analysis (difference = £207.24 [SE ±£104.21], or $323.17 [SE ±$162.56]; 95% CI = £2.98 to £431.62, or $4.69 to $673.33; p = 0.047).

The mean EQ-5D scores (imputed analysis) at 1 month were 0.769 (SD ± 0.261) for the standard care group and 0.753 (SD ± 0.285) for the POC group (p = 0.158). The respective values for the complete case analysis were 0.761 (SD ± 0.267) and 0.747 (SD ± 0.289; p = 0.369). The mean EQ-5D scores (imputed analysis) at 3 months were 0.772 (SD ± 0.273) for the standard care group and 0.764 (SD ± 0.289) for the POC group (p = 0.433). The respective values for the complete case analysis were 0.759 (SD ± 0.280) and 0.753 (SD ± 0.290; p = 0.710).

The standard care group accrued a mean of 0.161 (SD ± 0.052) QALYs over the 3-month follow-up compared to 0.158 (SD ± 0.056) for the POC group (difference = –0.003 [SE ± 0.003]; p = 0.250). Data are reported assuming that EQ-5D was zero at baseline, although means for any baseline score between 0 and 1 can be estimated by adding a constant k/24, where k is the baseline EQ-5D score of interest; this does not affect the mean difference between study groups. Because 3 months is approximately one-quarter of a year, the maximum possible number of QALYs accrued is about 0.25 (which assumes that EQ-5D was 1 at baseline).
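The k/24 correction can be verified directly from the trapezium rule: only the first trapezoid (baseline to 1 month, i.e., 1/12 of a year) depends on the baseline score. Writing u_1 and u_3 for the 1- and 3-month EQ-5D tariffs and Q(k) for QALYs computed with baseline utility k:

```latex
Q(k) = \frac{k + u_1}{2}\cdot\frac{1}{12} + \frac{u_1 + u_3}{2}\cdot\frac{2}{12},
\qquad
Q(k) - Q(0) = \frac{k}{2}\cdot\frac{1}{12} = \frac{k}{24}.
```

Because the same constant is added to both groups, the between-group difference in mean QALYs is unaffected by the choice of baseline score.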

Figure 1 shows the cost-effectiveness plane based on the resampled cost-effectiveness data points for the POC test strategy (mean values shown by the circular target). There is a high probability that the POC strategy is dominated by standard care (empirical probability = 0.888). Conversely, the probability of the POC test strategy being cost effective at £20,000/QALY ($31,229) is very low (0.004).

Figure 1.

 Cost-effectiveness plane for POC treatment strategy. POC = point-of-care; QALY = quality-adjusted life-years.

We undertook two deterministic sensitivity analyses to explore whether the findings were robust to two important assumptions. First, we explored whether a cheaper POC test could be cost-effective by assuming that the test performance of a single troponin test, which at £6.17 ($9.63) incurs the lowest feasible POC test cost, is the same as that of the trial strategy. In this scenario we assumed that only costs would be affected, with test costs reduced from £38.13 ($59.33) to £6.17 ($9.63), reducing the mean cost difference to £179.26 (SE ±£109.35), or $279.88 (SE ±$170.89). Despite this, the probability of the POC strategy being dominated remained high at 0.869. Second, we explored whether excluding intensive care costs would alter the findings. Intensive care costs were only weakly related to the intervention, and one patient in the POC group had spent 52 days in intensive care, incurring substantial costs and increasing the variance of the cost estimates. Excluding intensive care costs reduced the mean cost difference to £153.59 (SE ±£82.86), or $239.80 (SE ±$129.31), per patient, but the probability of the POC strategy being dominated remained high at 0.876.


The RATPAC trial showed that a POC panel assessment reduced hospital admissions among patients with suspected MI.3 However, the economic analysis reported here showed that the POC panel assessment may have been more expensive than standard care and was unlikely to be considered cost-effective compared to standard care. Mean costs per patient were £211 (SE ±£109.35), or $329.52 (SE ±$170.89), higher in the POC group, and there was a 0.888 probability that standard care dominated POC (i.e., was cheaper and more effective).

This apparently counterintuitive finding is probably explained by the observation that the additional admissions in the standard care group were short and inexpensive, while expensive items such as cardiac interventions, coronary care, and intensive care admission were more frequent in the POC group. It could be argued that intensive care admissions are unlikely to be related to the initial diagnostic process. However, our sensitivity analysis with intensive care costs excluded showed that, although the difference in costs was reduced, the probability of standard care being dominant remained high. The increase in coronary care admissions and cardiac interventions could be due to the POC troponin assay being used earlier and/or with a lower diagnostic threshold. It may be argued that greater use of coronary care and cardiac interventions is appropriate and confers patient benefit. However, this benefit is uncertain and difficult to estimate in this low-risk patient group.

Related to this, it is possible that inpatient treatment patterns differ between hospitals and that in some settings POC testing in the ED does not lead to higher cost inpatient care. If this were the case, or if the costs of specialist care were much lower in some hospitals, then POC would have the potential to be cost-effective. Previous work on chest pain units has tried to identify the combination of costs and admission rates that would make such initiatives cost-effective.13 Further work should be undertaken to assess in what circumstances POC testing could be cost-effective.

Point-of-care testing was also associated with an additional £53 ($83) per patient for the initial ED assessment. Most of this (£38 [$59]) related to the POC testing machine, but additional staff time to undertake the tests also added to the costs. It is possible that using the machine for other ED patients (thus reducing the amount charged for equipment), or using a simpler protocol (such as a single troponin measurement), could reduce the additional costs associated with POC testing. However, our sensitivity analysis showed that assuming machine costs of only £6.17 ($9.63) per patient did not markedly change the probability that standard care was dominant.


This economic analysis has the strength of being based on data from patients participating in a pragmatic randomized trial in which the only difference between the treatment groups was the use of the POC panel. We were thus able to compare the two alternatives in practice without having to make assumptions about how patients would be managed. The corollary of this is that, unlike economic modeling, we were only able to compare two alternatives. There may be other ways of using POC cardiac biomarkers to diagnose MI that are more cost-effective than the panel assessed here. These alternatives could be evaluated using a model based on the RATPAC trial, which could also examine longer-term effects associated with the higher rates of cardiac intervention seen in the POC group.

The sample size was determined on the basis of the primary effectiveness outcome rather than economic measures. The cost estimates had large variances and thus lacked statistical power to detect small cost differences. So although we can reasonably conclude that POC assessment did not reduce costs, it is uncertain whether and to what extent POC testing may increase costs. The trial was also not powered to detect differences in individual elements of resource use, and any significant differences in individual elements of resource use arose from multiple comparisons. We should therefore be cautious in drawing conclusions regarding any association between POC testing and particular elements of resource use, such as coronary care or cardiac interventions.

Response rates for the patient questionnaire were, as anticipated, around 70%, so there is the potential for responder bias. It is notable in this respect that response rates were lower in the standard care group, perhaps reflecting a reduced level of engagement in those who did not receive the study intervention. Patients were aware of whether they had received the intervention or standard care when they completed the EQ-5D questionnaire, so their assessment of health utility may have been influenced by this awareness. However, one might expect any bias arising from this awareness to favor the intervention group, whereas health utility was nonsignificantly lower in the POC group.

The economic analysis used standard methods6 to allow comparison of the cost-effectiveness of POC testing to other competing demands for health service resources in the United Kingdom. As a result we would not anticipate the POC marker panel tested in RATPAC to be recommended for national use in the United Kingdom. However, individual hospitals may still decide to use the POC panel if the benefits identified in the effectiveness analysis (such as the increased probability of successful discharge home) are considered to be worth the additional costs. Furthermore, POC panel assessment may be more cost-effective in health care systems that have higher inpatient costs and longer inpatient stays for acute chest pain.

Finally, as noted in the main discussion, the additional costs incurred by POC testing in relation to increased use of coronary care and cardiac interventions may have been associated with patient benefit if used appropriately. We were unable to judge whether the increased use of coronary care and cardiac interventions was appropriate or not, and the trial would have needed to be unfeasibly large to measure the benefit to the minority of patients receiving these interventions. We cannot therefore exclude the possibility that the additional costs associated with POC testing may have resulted in unmeasured patient benefit.


Although use of a point-of-care cardiac marker panel reduces hospital admissions, this does not reduce health service costs and may even increase costs. This appears to be because the admissions saved are brief and inexpensive, compared to other items of resource use in acute cardiac care.


The authors acknowledge Margaret Jane for clerical assistance and all the staff of the participating hospitals.


Appendix A

RATPAC Research Team

Charlotte Arrowsmith (RATPAC Research Nurse, Derriford Hospital, Plymouth); Julian Barth (Consultant in Chemical Pathology, Leeds General Infirmary/co-applicant); Jonathan Benger (Professor of Emergency Care, University of the West of England/co-applicant); Mike Bradburn (Senior Medical Statistician, Clinical Trials Research Unit, University of Sheffield); Simon Capewell (Professor of Epidemiology, University of Liverpool/co-applicant); Tim Chater (Database Manager, Clinical Trials Research Unit, University of Sheffield); Tim Coats (Professor of Emergency Medicine/co-applicant); Paul Collinson (Consultant in Chemical Pathology, St. George’s Hospital, London/co-applicant); Cindy Cooper (Director, Clinical Trials Research Unit, University of Sheffield); Mandy Cooper (RATPAC Research Nurse, Leicester Royal Infirmary); Judy Coyle (RATPAC Research Nurse, Edinburgh Royal Infirmary); Liz Cross (Trial Manager, Health Services Research, University of Sheffield); Simon Dixon (Professor of Health Economics, Health Economics & Decision Science/co-applicant); Patrick Fitzgerald (Research Fellow, Health Economics & Decision Science, University of Sheffield); Emma Gendall (RATPAC Research Nurse, Frenchay Hospital, Bristol); Steve Goodacre (Professor of Emergency Medicine, Health Services Research, University of Sheffield/Chief Investigator); Emma Goodwin (RATPAC Research Nurse, Barnsley Hospital); Alasdair Gray (Consultant in Emergency Medicine, Royal Infirmary of Edinburgh/co-applicant); Alistair Hall (Professor of Clinical Cardiology, University of Leeds/co-applicant); Kevin Hall (RATPAC Research Nurse, Barnsley Hospital); Taj Hassan (Consultant in Emergency Medicine, Leeds General Infirmary/co-applicant); Julian Humphrey (Consultant in Emergency Medicine, Barnsley Hospital); Steven Julious (Senior Lecturer in Medical Statistics, Medical Statistics Group, Health Services Research, University of Sheffield/co-applicant); Jason Kendall (Consultant in Emergency Medicine, Frenchay 
Hospital, Bristol); Vanessa Lawlor (RATPAC Research Nurse, Frenchay Hospital, Bristol); Sue Mackness (RATPAC Research Nurse, Leicester Royal Infirmary); Yvonne Meades (RATPAC Research Nurse, Leeds General Infirmary); David Newby (Professor of Cardiology, University of Edinburgh/Co-applicant); Dawn Newell (RATPAC Research Nurse, Leicester Royal Infirmary); Doris Quartey (RATPAC Research Nurse, Leeds General Infirmary); Karen Robinson (RATPAC Research Nurse, Leicester Royal Infirmary); Glen Sibbick (RATPAC Research Nurse, Leicester Royal Infirmary); Jason Smith (Consultant in Emergency Medicine, Derriford Hospital, Plymouth); and Roz Squire (RATPAC Research Nurse, Derriford Hospital, Plymouth).

Trial Steering Committee

Simon Carley, Consultant in Emergency Medicine, Manchester Royal Infirmary (independent member); Paul Collinson, Consultant in Chemical Pathology, St. George’s Hospital, London (co-applicant); Liz Cross, Research Associate, Health Services Research, University of Sheffield (trial manager); Sue Dodd, CHD Emergency & Acute Care Manager (vascular program) Department of Health (independent member); Marcus Flather, Director Clinical Trials & Evaluation Unit, Royal Brompton Hospital, London (chair); Steve Goodacre, Professor of Emergency Medicine, Health Services Research, University of Sheffield (chief investigator); Sara Hilditch, Statistician, Director of Statistical Services Unit, Sheffield University (independent member); Enid Hirst (lay representative); Richard Hudson, Quality Assurance Manager, Research Office, Academic Division, University of Sheffield (sponsor representative); Jason Kendall, Consultant in Emergency Medicine, Frenchay Hospital, Bristol (investigator representative); and Rebecca Whitlock-Moss, Programme Manager, NIHR HTA (funder representative).

Data Monitoring Committee

John Greenwood, Senior Lecturer/Consultant Cardiologist, Academic Unit of Cardiovascular Medicine, Leeds General Infirmary (independent clinician); Jon Nicholl, Professor of Health Services Research, University of Sheffield (acting chair); Helen Thorpe, Principal Statistician, Clinical Trials Research Unit, University of Leeds (chair and independent statistician); William Townend, Consultant in Emergency Medicine, Hull Royal Infirmary (independent clinician).