Keywords:

  • acute admission;
  • laboratory data;
  • mortality;
  • outcome;
  • scoring system

Abstract

Objective: To devise a simple clinical scoring system, using age of patients and laboratory data available on admission, to predict in-hospital mortality of unselected medical and surgical patients.

Methods: All patients admitted as emergencies to a large teaching hospital in Liverpool in the 5 months July–November 2004 were reviewed retrospectively, identifying all who died in hospital and controls who survived. Laboratory data available on admission were extracted to form a derivation dataset. Factors that predicted mortality were determined using logistic regression analysis and then used to construct models tested using receiver operating characteristic curves. Models were simplified to include only seven data items, with minimal loss of predictive efficiency. The simplified model was tested in a second validation dataset of all patients admitted to the same hospital in October and November 2004.

Results: The derivation dataset included 550 patients who died and 1100 controls. After logistic regression comparisons, 22 dummy variables were given weightings in discriminant analysis and used to create a receiver operating characteristic curve with an area under the curve (AUC) of 0.884. The model was simplified to include the seven most discriminant variables, each of which can be assigned a score of 2, 3 or 4 to form an index predicting outcome. A validation dataset containing 4828 patients (overall mortality 4.7%) showed that this simplified scoring system accurately predicted mortality, with an AUC of 0.848 compared with an AUC of 0.861 for a model containing all 23 original variables.

Conclusion: A simple scoring system accurately predicts in-hospital mortality of unselected hospital patients, using age of patient and a small number of laboratory parameters available very soon after admission.


Introduction

A simple and reproducible clinical scoring system that rapidly identifies the most ill patients on admission to hospital has been sought for years.1 Most general clinical scoring systems have combined functional and physiological characteristics of the patient to predict mortality in intensive care settings.2–4

Some uses of scoring systems that predict mortality include the identification of patients requiring immediate intensive care, identification of those with potentially poor outcomes so that health-care workers and relatives have a clearer view of prognosis,5 and risk stratification in audit and research studies. Aggregate weighted track and trigger scoring systems such as the MEWS system6–8 and the SEWS system,9 designed to highlight clinical deterioration of patients, have been used to predict lengths of stay and mortality and have been promoted for use by hospital intensive care outreach teams.10

We have adopted a different approach, following our interest in the possible predictive value of combinations of abnormalities in routine laboratory tests for hospital mortality. Abnormally high or low values of serum sodium,11–14 glucose,15–19 peripheral white cell counts20–22 and blood urea23,24 have been shown to correlate with adverse outcomes in various hospital populations. A retrospective pilot study in Liverpool showed that cumulative abnormalities in these laboratory tests, routinely available in most acutely admitted patients irrespective of specialty, correlated with increasing mortality risk.25 In the present study we repeated this work without age-matching, as age is an important component of most clinical scoring systems,10,26,27 and we focused on the more discriminatory laboratory variables, as our pilot study showed considerable interrelation between several laboratory parameters.25

We hypothesized that a clinically relevant scoring system could be devised to predict inpatient mortality, using the results of routine blood tests that are available on admission of most patients to hospital. We present the results of investigation of this hypothesis, which has led to development of a simple prototype scoring system.

Methods

An unmatched retrospective case-control study was performed, collecting all laboratory data recorded for admitted patients in the first 24 h of hospitalization. The setting was the Royal Liverpool University Hospital, a large urban teaching hospital covering most medical, surgical and intensive care specialties, excluding obstetrics, paediatrics, neurosurgery and cardiothoracic surgery. The ED has over 100 000 visits per year, with more than 25 000 acute admissions to all specialties, among which the inpatient mortality is 5–7%.

Derivation dataset

All patients admitted to the hospital in the 5 month period July–November 2004 inclusive were listed in order of date and time of admission to hospital, and all those who died in hospital were identified. For each deceased patient, the next four patients admitted on the same or following day and who left hospital alive were identified as potential controls (survivors); a sketch of this selection rule is given below. The separate computerized laboratory database of the hospital was interrogated to identify those deceased patients and potential controls for whom a routine biochemistry panel and full blood count were available within 24 h of admission to hospital. The laboratory data examined included anion gap, serum bicarbonate, serum chloride, plasma glucose, serum sodium, serum potassium, serum urea, creatinine, haemoglobin, platelets, and total leucocyte, absolute neutrophil and lymphocyte counts. Serum albumin was not included as it is not always available in our local hospital biochemistry admission screening panels. These data were extracted on to an Excel spreadsheet for all deceased patients, and for the first two sequential controls for each deceased patient for whom these results were available. The dataset was then anonymized, retaining only age and sex as demographic details, and unused ‘control’ patients were discarded; the final derivation dataset contained 550 deceased patients and 1100 controls.
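As an illustration only, the following sketch shows one way the control-selection rule described above could be implemented; the file and column names are hypothetical assumptions, not the study's actual database fields.

```python
import pandas as pd

# Hypothetical anonymized extract of the patient administration system
adm = pd.read_csv("admissions.csv", parse_dates=["admitted"])
adm = adm.sort_values("admitted").reset_index(drop=True)

control_ids = []
for idx in adm.index[adm["died_in_hospital"] == 1]:
    # End of the day after this patient's admission ("same or following day")
    cutoff = adm.loc[idx, "admitted"].normalize() + pd.Timedelta(days=2)
    later = adm.loc[idx + 1:]
    # Next four surviving admissions within the window are potential controls
    potential = later[(later["died_in_hospital"] == 0) &
                      (later["admitted"] < cutoff)].head(4)
    # Keep the first two with admission bloods available (assumed boolean column)
    with_labs = potential[potential["admission_bloods_available"]]
    control_ids.extend(with_labs.index[:2])

controls = adm.loc[control_ids].drop_duplicates()
cases = adm[adm["died_in_hospital"] == 1]
```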

This sample size has 80% power to detect odds ratios greater than 1.6 or less than 0.55, assuming a 10% exposure rate among the controls (exact exposure levels for each variable can be extracted from Table 1). Logistic regression requires at least 10–20 outcomes of each type (deaths and survivals) per variable included in the model28 to avoid overparameterization; our sample of 550 deceased patients and 1100 controls was therefore adequate to include the 22 dummy variables detailed below.
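The quoted power can be checked approximately with a standard two-proportion calculation; the sketch below is an approximation for illustration only, not the authors' original sample size calculation.

```python
# Approximate power to detect OR = 1.6 at 10% control exposure,
# 550 cases vs 1100 controls, two-sided alpha = 0.05 (normal approximation)
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_controls = 0.10
odds_ratio = 1.6
odds_cases = odds_ratio * p_controls / (1 - p_controls)
p_cases = odds_cases / (1 + odds_cases)        # exposure in cases implied by the OR

effect = proportion_effectsize(p_cases, p_controls)   # Cohen's h
power = NormalIndPower().power(effect_size=effect,
                               nobs1=550,             # deceased patients (cases)
                               alpha=0.05,
                               ratio=1100 / 550,      # two controls per case
                               alternative='two-sided')
print(f"approximate power: {power:.2f}")              # roughly 0.8
```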

Table 1.  Comparison of different laboratory variables in deceased patients and survivor controls (derivation dataset, n = 1650)
Variable | Range | Number dead (n = 550) | Number alive (n = 1100) | Univariate analysis: OR, 95% CI, P | Logistic regression: OR, 95% CI, P
Fisher exact test result. CI, confidence interval; OR, odds ratio; P, probability.

Age (years)20–4927437Reference range
50–64352233.92.7–5.70.00012.21.2–4.00.008
≥6548739213.56.4–280.000113.28.3–20.90.0001
Sodium (mmol/L)>14537254.32.5–7.50.00013.91.9–8.00.0002
135–145313914Reference range
130–1341041102.72.0–3.70.00011.20.8–1.70.4
125–12969474.22.8–6.40.00011.50.8–2.80.3
<12527145.62.8–11.30.00011.70.8–3.70.2
Potassium (mmol/L)>5.069345.23.3–8.20.00011.61.1–2.40.009
3.5–5.0335860Reference range
<3.5951561.81.3–2.40.00011.40.9–20.1
Chloride (mmol/L)>10921223.11.6–5.90.00030.80.3–2.10.6
99–109262864Reference range
<992672323.73.0–4.70.00012.21.6–2.90.0001
Bicarbonate (mmol/L)>3337273.52.0–5.90.00012.11.1–3.90.03
22–33324816Reference range
<221892571.91.5–2.30.00011.20.8–1.70.3
Urea (mmol/L)>7.03972847.45.8–9.40.00012.51.9–3.30.0001
2.5–7.0143758Reference range
<2.510580.90.9
Creatinine (µmol/L)>1302351435.03.9–6.40.00011.40.9–1.90.08
50–130315958Reference range
<50140.81
Glucose (mmol/L)>11.057672.31.5–3.40.00010.80.5–1.30.3
7.1–111481862.11.6–2.80.00011.10.8–1.50.6
5.0–7.0195521Reference range
<5.0381501.51.0–2.20.060.80.5–1.20.3
Leucocytes (×109/L)>103354692.21.8–2.70.00011.30.8–1.90.3
4–9200617Reference range
<414123.61.5–8.50.0023.51.1–11.40.04
Neutrophil (×109/L)>73774852.92.3–3.70.00012.01.5–2.70.0001
2–7153576Reference range
<210331.10.9
Lymphocyte (×109/L)>3331390.60.4–0.90.040.90.5–1.60.7
1–3275745Reference range
<12322113.02.4–3.80.00011.51.1–2.00.008
Platelet (×109/L)>45056413.22.1–5.00.00011.00.6–1.80.9
150–450423998Reference range
<15068552.92.0–4.30.00012.81.7–4.60.0001
Haemoglobin (g/dL)>1712301.20.71.80.7–4.40.2
12–17283873Reference range
<122551954.03.2–5.10.00012.72.1–3.70.0001

Biochemical results were obtained using the Roche Modular System analyser (Hitachi Ltd., Hitachinaka-Ibaraki, Japan) and haematological results were analysed using the Beckman Coulter LH 750 analyser (Beckman Coulter (UK) Ltd, London, UK). Normal and abnormal levels as defined in the hospital laboratories mirror internationally agreed definitions.29–31 Each variable was categorized as high or low compared with the normal reference range and in some cases levels of abnormality were further stratified as previously described.25 The 13 variables were thus split into their constituent 22 dummy variables and compared in univariate and multiple logistic regression analyses to identify abnormalities associated with mortality, using Epi Info version 3.01, 2003 (written in Microsoft Visual Basic; Centers for Disease Control and Prevention, Atlanta, GA, USA). The 22 variables and the patients' age were then subjected to discriminant analysis using SPSS version 11.0 (SPSS Inc., an IBM Company, Chicago, IL, USA) to give a standardized weighting coefficient for each variable. These weightings were used to construct receiver operating characteristic (ROC) curves to examine the relationship between sensitivity and specificity and to determine whether the c statistic, or area under the curve (AUC), was high enough to support a useful predictive model.32 Plotting all these variables resulted in an AUC of 0.884, indicating outstanding discrimination (Fig. 1) and supporting the construction of a multifactorial scoring system based on laboratory variables.


Figure 1. Receiver operating characteristic (ROC) curves and corresponding AUCs for models using 23 or seven variables with standardized weighting coefficients.

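To make the analytic workflow concrete, the following sketch codes out-of-range laboratory bands as dummy variables, fits a multiple logistic regression, derives discriminant-analysis weightings and computes the AUC of the resulting weighted sum. It is illustrative only: the original analyses used Epi Info and SPSS, the file and column names here are hypothetical, and scikit-learn's discriminant coefficients are not the SPSS standardized coefficients.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

df = pd.read_csv("derivation_dataset.csv")     # hypothetical anonymized extract

# Dummy-code each laboratory value against its reference range (subset shown;
# the remaining abnormality bands are coded in the same way)
dummies = pd.DataFrame({
    "age_ge_65":       df["age"] >= 65,
    "hypernatraemia":  df["sodium"] > 145,
    "hyponatraemia":   df["sodium"] < 135,
    "uraemia":         df["urea"] > 7.0,
    "low_haemoglobin": df["haemoglobin"] < 12.0,
    "low_platelets":   df["platelets"] < 150,
}).astype(int)
y = df["died_in_hospital"]                     # 1 = died in hospital, 0 = survived

# Multiple logistic regression: adjusted odds ratios for each abnormality
logit = LogisticRegression(max_iter=1000).fit(dummies, y)
adjusted_or = pd.Series(np.exp(logit.coef_[0]), index=dummies.columns)

# Discriminant analysis: a weighting for each variable
lda = LinearDiscriminantAnalysis().fit(dummies, y)
weights = pd.Series(lda.coef_[0], index=dummies.columns)

# ROC curve / AUC (the c statistic) for the weighted sum of abnormalities
risk = dummies @ weights
print("AUC of weighted sum:", roc_auc_score(y, risk))
print(adjusted_or.round(2))
```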

Model simplification

However, a model using 23 variables was thought to be too complicated for clinical practice, and it was decided to reduce this to a more feasible number. Although reducing the number of variables will decrease the AUC as well as the power of the scoring system, selecting the variables with the highest coefficients in the discriminant analysis should minimize this loss of power. We restricted the model to include only tests that are routinely performed in almost all hospitals, and iteratively examined different combinations of six to eight of the most highly weighted variables (as sketched below). ROC curves and their AUCs were obtained for more than 20 such groupings (data not shown) to find a simple and practically feasible model containing seven variables; this eventually generated an excellent ROC profile with an AUC of 0.867 (Fig. 1). The weighting of each of these seven variables was empirically rounded up or down, so that they could be added together to give a maximum score of 20. These rounded scores were initially assessed by constructing a ROC curve for a random selection of 400 patients from the derivation dataset.
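Continuing the illustrative sketch above (and reusing its hypothetical `dummies`, `weights` and `y`), one way to carry out such an iterative search is to compare the AUC of every combination of six to eight of the most highly weighted dummy variables. The authors' actual selection also weighed clinical practicality, which a purely automated search like this does not capture.

```python
from itertools import combinations
from sklearn.metrics import roc_auc_score

# Candidate pool: the most highly weighted variables from the discriminant analysis
top = weights.abs().sort_values(ascending=False).index[:10]

aucs = {}
for k in (6, 7, 8):
    for combo in combinations(top, k):
        cols = list(combo)
        aucs[combo] = roc_auc_score(y, dummies[cols] @ weights[cols])

best = max(aucs, key=aucs.get)
print(len(best), "variables, AUC =", round(aucs[best], 3))
```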

Validation dataset

A new validation dataset was then constructed using all 4828 patients admitted sequentially to the hospital in the 2 months October and November 2004, without any selection or matching. The same key laboratory and demographic variables were extracted retrospectively as for the first dataset, and the dataset was then anonymized. Discriminant analysis and a ROC curve were constructed for the validation dataset using the seven variables of the simplified 20-point scoring system, and the observed mortality at each score level was compared with that predicted by the full logistic regression model for the same dataset. Finally, using the cross-tab function of SPSS, the numbers of deceased patients and survivors at each score were extracted, and sensitivity, specificity, positive predictive values and negative predictive values for each score were calculated using Win Episcope 2.0 (http://www.clive.ed.ac.uk/winepiscope).
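The same threshold statistics can be reproduced directly. The sketch below is illustrative only, with hypothetical file and column names, and is not the Win Episcope calculation itself; it treats each possible score as a cut-off and tabulates sensitivity, specificity, PPV and NPV across the validation dataset.

```python
import pandas as pd

val = pd.read_csv("validation_dataset.csv")    # hypothetical anonymized extract
died = val["died_in_hospital"] == 1

rows = []
for cutoff in range(0, 21):                    # the simplified score runs from 0 to 20
    flagged = val["score"] >= cutoff           # "test positive" at this cut-off
    tp = int((flagged & died).sum())
    fp = int((flagged & ~died).sum())
    fn = int((~flagged & died).sum())
    tn = int((~flagged & ~died).sum())
    rows.append({
        "cutoff": cutoff,
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        "PPV": tp / (tp + fp) if (tp + fp) else float("nan"),
        "NPV": tn / (tn + fn) if (tn + fn) else float("nan"),
    })

print(pd.DataFrame(rows).round(3))
```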

Ethics

The study was approved by Liverpool Research and Ethics Committee and Liverpool School of Tropical Medicine Research Ethics Committee, and the study was registered with the Research and Ethics Support Office of the Royal Liverpool and Broadgreen University Hospitals Trust (study number: 2751).

Results

A total of 11 944 patients were admitted to the hospital in the 5 month period July–November 2004, among whom 572 (4.8%) died in hospital. Laboratory variables were obtained for 550 of these and for 1100 surviving controls (derivation dataset). Initial univariate and multivariate comparisons of the groups are summarized in Table 1, showing that increasing age band and the following abnormalities were associated with mortality: hypernatraemia, hyperkalaemia, uraemia, high bicarbonate, hypochloraemia, thrombocytopenia, leucopenia, neutrophilia, lymphocytopenia and anaemia.

The weightings given for the variables by discriminant analysis are summarized in Table 2, and these weightings were applied to the derivation dataset to construct the ROC curve in Figure 1, which had a high AUC of 0.884, suggesting that a good predictive model could be derived.

Table 2.  Standardized coefficients assigned to 23 variables after discriminant analysis (derivation dataset, n = 1650)
Variable | Coefficient
Age ≥65 | 0.544
Uraemia | 0.280
Low haemoglobin | 0.270
Hypernatraemia | 0.185
Low chloride | 0.184
Low platelets | 0.159
Leucopenia | 0.152
Neutrophilia | 0.133
Lymphopenia | 0.121
Hypokalaemia | 0.113
High creatinine | 0.105
High bicarbonate | 0.098
Low bicarbonate | 0.083
Hyponatraemia | 0.070
Leucocytosis | 0.069
Hyperkalaemia | 0.058
High haemoglobin | 0.040
High lymphocyte | 0.023
High platelets | 0.015
Hyperglycaemia | 0.010
High chloride | −0.034
Hypoglycaemia | −0.035
Neutropenia | −0.105

The abbreviated scoring system of age and six laboratory variables is summarized in Table 3, with the actual function weightings produced within this model by discriminant analysis and the rounding of these weightings to scores that would summate to 20.

Table 3.  Discriminant analysis coefficients for each variable in the derivation (n = 1650) and validation (n = 4828) datasets
Variable | Derivation coefficient | Validation coefficient | Score
Also shown is the corresponding score assigned in the simplified scoring system (scores of 1 or 19 cannot be achieved).

Age ≥65 | 0.584 | 0.432 | 4
Urea >7.0 | 0.418 | 0.421 | 4
Haemoglobin <12.0 | 0.287 | 0.318 | 3
White blood count >10.0 | 0.226 | 0.256 | 3
Platelet count <150 | 0.217 | 0.192 | 2
Sodium <135 | 0.208 | 0.157 | 2
Glucose >7.0 | 0.037 | 0.125 | 2
Total score | | | 20
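As a worked example of applying Table 3 at the bedside, the following sketch adds the integer points for each abnormality present; the patient values used here are hypothetical.

```python
def simple_score(age, urea, haemoglobin, wbc, platelets, sodium, glucose):
    """Sum of the Table 3 points for each abnormality present (maximum 20)."""
    return (4 * (age >= 65) +
            4 * (urea > 7.0) +            # mmol/L
            3 * (haemoglobin < 12.0) +    # g/dL
            3 * (wbc > 10.0) +            # x10^9/L
            2 * (platelets < 150) +       # x10^9/L
            2 * (sodium < 135) +          # mmol/L
            2 * (glucose > 7.0))          # mmol/L

# Hypothetical 80-year-old: urea 9.2, Hb 10.1, WBC 12.4, platelets 320, Na 130, glucose 8.3
print(simple_score(80, 9.2, 10.1, 12.4, 320, 130, 8.3))   # 4 + 4 + 3 + 3 + 0 + 2 + 2 = 18
```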

The ROC curve produced using this simple scoring system on 400 randomly selected patients from the derivation dataset had an AUC of 0.87, suggesting that the rounded scores performed consistently with the full weightings. The validation dataset consisted of 4828 patients, of whom 228 (4.7%) died in hospital. Laboratory results were available on admission for about 87% of patients, except for admission blood glucose, which was only available in 72.5%. The percentages of patients for whom each variable was available were as follows: sodium 87.4%; potassium 81.2%; urea 87.4%; platelets 87.0%; leucocytes 87.2%; neutrophils 86.7%; lymphocytes 86.7%; haemoglobin 87.2%; chloride 87.4%; bicarbonate 87.5%; creatinine 87.4%.

The simplified scoring formula was used on this dataset to construct the ROC curve shown in Figure 2, with an excellent AUC of 0.848, compared with an AUC of 0.861 for the full regression model using all 23 variables (Fig. 2) and 0.853 for the model using the seven standardized variables.


Figure 2. Receiver operating characteristic (ROC) curves and corresponding AUCs in the validation dataset for the model of 23 variables using standardized weighting coefficients, compared with the curve produced by the simplified score.


The observed and predicted mortality curves are superimposed in Figure 3, showing excellent correlation except at the highest score levels, where values were more variable because only small numbers of patients fell in this range.


Figure 3. Relation between observed mortality rate (%) and mortality rate (%) predicted by the simple scoring system.


Finally, the sensitivity and specificity of different scores within the validation dataset are summarized in Table 4 showing the expected inverse relationship between negative and positive predictive values.

Table 4.  Comparing the sensitivity and specificity as well as PPV and NPV for each level of score produced by the simple scoring system (validation dataset, n = 4828)
Score | Deceased, n (%) | Survivors, n (%) | Death rate % | Sensitivity % (95% CI) | Specificity % (95% CI) | PPV % (95% CI) | NPV % (95% CI)
CI, confidence interval; NPV, negative predictive value; PPV, positive predictive value.

0 | 1 (0.4) | 1173 (25.25) | 0.1 | 99 | 1 | 50 | 100
2 | 2 (0.9) | 219 (4.8) | 0.9 | 99 (97.1–100) | 25 (16.5–33.5) | 57 (49.5–64.3) | 96 (88.8–100)
3 | 3 (1.3) | 536 (11.7) | 0.6 | 99 (97.1–100) | 30 (21.0–39.0) | 59 (51.2–66.0) | 97 (90.6–100)
4 | 7 (3.1) | 596 (13) | 1.2 | 97 (93.7–100) | 42 (32.3–51.8) | 63 (55.0–70.2) | 93 (86.1–100)
5 | 3 (1.3) | 184 (4) | 1.6 | 94 (89.4–98.7) | 55 (45.3–64.8) | 68 (59.9–75.4) | 90 (82.7–97.6)
6 | 3 (1.3) | 249 (5.4) | 1.2 | 93 (88.0–98.0) | 59 (49.4–68.6) | 69 (66.0–77.2) | 89 (82.0–98.8)
7 | 8 (3.5) | 334 (7.3) | 2.3 | 92 (86.7–97.3) | 64 (54.6–73.4) | 72 (64.1–79.7) | 89 (81.6–96.2)
8 | 19 (8.3) | 216 (4.7) | 8.1 | 88 (81.6–94.4) | 71 (62.1–79.9) | 75 (67.4–83.0) | 86 (78.0–93.1)
9 | 19 (8.3) | 237 (5.2) | 7.4 | 80 (72.2–87.8) | 76 (67.6–84.4) | 77 (68.8–85.0) | 79 (71.0–87.3)
10 | 14 (6.1) | 157 (3.4) | 8.2 | 71 (62.1–80.0) | 81 (73.3–88.7) | 79 (70.5–87.3) | 74 (65.4–81.9)
11 | 37 (16.2) | 252 (5.5) | 12.8 | 65 (55.7–74.4) | 85 (43.1–57.0) | 81 (72.7–89.8) | 71 (62.7–79.0)
12 | 11 (4.8) | 71 (1.5) | 13.4 | 49 (39.2–58.8) | 90 (84.1–95.9) | 83 (73.5–92.6) | 64 (55.9–71.8)
13 | 37 (16.2) | 166 (3.6) | 18.2 | 44 (34.3–53.7) | 92 (86.7–97.3) | 85 (74.8–94.4) | 62 (54.4–70.0)
14 | 18 (7.9) | 69 (1.5) | 20.7 | 28 (19.2–36.8) | 95 (90.7–99.3) | 85 (72.6–87.1) | 57 (49.4–64.4)
15 | 14 (6.1) | 46 (1) | 23.3 | 20 (12.2–27.8) | 97 (93.7–100) | 87 (73.2–100) | 57 (47.5–62.1)
16 | 20 (8.8) | 76 (1.7) | 21.0 | 14 (7.2–20.8) | 98 (95.3–100) | 88 (71.3–100) | 53 (46.1–60.5)
17 | 3 (1.3) | 3 (0.07) | 50 | 5 (0.7–9.3) | 99 (97.1–100) | 83 (53.5–100) | 51 (44.0–58.1)
18 | 9 (3.95) | 13 (0.3) | 41 | 4 (0.2–7.8) | 99 (97.1–100) | 80 (45.0–100) | 51 (43.8–57.8)
20 | 0 | 3 (0.07) | 0 | 0 | 0 | 0 | 0
Total | 228 (100) | 4600 (100) | 4.7 | | | |

Discussion

The present study shows that a simple scoring system can predict the risk of inpatient death among unselected hospital admissions, using laboratory data that are routinely collected on admission of a patient to hospital. It extends a previous retrospective matched case control study in the same large general hospital, examining links between abnormalities in selected laboratory tests and mortality in sequential acute hospital admissions, irrespective of specialty.25 That study confirmed the findings of others that both low and high values of tests were associated with increased mortality, and that combinations of abnormalities had stronger associations with inpatient death than did single abnormalities.5,33,34 Unfortunately it used age as a matching criterion, and it was essential to confirm the method was robust once this important risk factor was formally included in the analysis.

A similar case control design was used in sequential unselected hospital admissions, with a slightly lower mortality rate than acutely admitted patients.25 Importantly, the simple scoring system that was devised was then shown to produce equally strong predictive associations when applied to a large population of uncontrolled sequential admissions. This confirms the principles recently established by other groups, who have reported that parsimonious predictive models can be constructed to predict the mortality of patients admitted acutely via the ED33 or of unselected medical admissions.34 We believe that these principles would be generalizable to both acute and unselected admissions to all specialties in other hospitals. Similarly, condition-specific scoring systems such as those used to stage the severity of community-acquired pneumonia can be simplified to incorporate few data items and can perform better than more complex models.35

The simple scoring system described here would be relatively easy for individual clinicians to use and has an excellent overall correlation with inpatient mortality, with an AUC of about 0.87. The predictive value will never be absolute, as the scoring system produces a continuous score between 0 and 20 that is linked to a dichotomous outcome of death or survival in hospital. Complete predictive accuracy would require an AUC of 1.0, which is virtually unattainable. The scores can still be used to place an individual patient into a low, medium or high risk category for 24 h and 30 day hospital mortality.5 However, there might be discrepancies between risk scores and the clinical condition of the patient, so scoring systems should be used to augment rather than replace clinical assessment of individual patients.

Two recent studies of large hospital datasets, one by Tabak and colleagues using nearly 200 000 inpatients36 and one using the Kaiser Permanente database of 400 000 hospital admissions in California,27 support models for mortality prediction that incorporate laboratory data as a major component.

In contrast to these reports, the limitations of including physiological parameters to predict mortality were underscored by a comparison of 33 aggregate weighted track and trigger scoring systems using a large prospectively collected clinical dataset to model survival.10 The AUCs varied from 0.66 to 0.78, and the best four performing systems included age as a component. The importance of including age in scoring systems was emphasized by a further prospective study from the same authors, comparing a simple scoring system37 with the MEWS system in one medical assessment unit.26

There are some practical limitations to our study, which used a retrospective labour-intensive approach to link the different patient administration and laboratory datasets manually. The manual linking of these datasets was undertaken by a single person. As no exercise to evaluate the accuracy of the data entry was undertaken (e.g. rechecking a proportion of data entries), the accuracy of the final dataset is not known. Laboratory data items were missing from 12.5% to 17.5% of patients in the final validation dataset, similar to the prevalence of missing variables reported by Prytherch and others, who used computer algorithms to simplify the dataset matching process.34 The model was derived semi-empirically but yielded robust results. Some researchers have used similar approaches to model construction, using a formula to combine the weighting coefficients of selected model variables to produce a rounded empirical scoring system.38,39 Others have used more complex formulae, derived from regression modelling of larger sets of data items, to combine a few critical variables into a simple score.33,34,40 This approach cannot be applied easily at the bedside without electronic assistance.

Our scoring system is simple and appropriate for bedside use, either from the clinician's memory or with a simple notes-based proforma. A score of 10 approximately equates to an in-hospital mortality risk of 10%, a score of 13 to 20%, a score of 15 to 30% and a score of 17 to 40% (see Fig. 3). Our findings confirm those of other groups, showing that parsimonious models requiring very few data items provide almost as good risk stratification as more complex models.5,33,34 However, there are many barriers to the routine collection and use of even small datasets, including the well-recognized difficulties of persuading clinicians to follow clinical guidelines.36,41 As hospital data become more available in electronic formats in real time, pocket or laptop computers might be used to collate the relevant data, present the clinician with a risk score, and link this electronically to the appropriate clinical care pathway or care bundle.10,40

The scoring system we have devised performed well in a large patient validation dataset. Further prospective studies are needed to compare this with other scoring systems currently proposed for general use. The performance of each system could be compared in acute and elective hospital admissions, to predict outcomes including 24 h, inpatient and 30 day mortality, length of hospital stay, and perhaps the need for intensive care or outreach care on the wards.

Conclusions

We have confirmed that complex scoring systems requiring the collection of many patient variables do not perform any better than simple models that use few data items that are readily available on admission to hospital. Further studies are needed to confirm the most robust yet practical system for bedside use in every day practice, recognizing that the data components required might be different from those needed for administrative purposes.

Acknowledgements

This work was supported by a grant from the Iranian Department of Health to KA for PhD studies at the Liverpool School of Tropical Medicine. The work in the present paper was part of those studies.

References

1. Gunning K, Rowan K. ABC of intensive care: outcome data and scoring systems. BMJ 1999; 319 (7204): 241–4.
2. Knaus WA, Zimmerman JE, Wagner DP, Draper EA, Lawrence DE. APACHE-acute physiology and chronic health evaluation: a physiologically based classification system. Crit. Care Med. 1981; 9: 591–7.
3. Knaus WA, Draper EA, Wagner DP, Zimmerman JE. APACHE II: a severity of disease classification system. Crit. Care Med. 1985; 13: 818–29.
4. Markgraf R, Deutschinoff G, Pientka L, Scholten T. Comparison of acute physiology and chronic health evaluations II and III and simplified acute physiology score II: a prospective cohort study evaluating these methods to predict outcome in a German interdisciplinary intensive care unit. Crit. Care Med. 2000; 28 (1): 26–33.
5. Kellett J, Deane B. The Simple Clinical Score predicts mortality for 30 days after admission to an acute medical unit. Q. J. Med. 2006; 99: 771–81.
6. Subbe CP, Davies RG, Williams E, Rutherford P, Gemmell L. Effect of introducing the Modified Early Warning score on clinical outcomes, cardio-pulmonary arrests and intensive care utilisation in acute medical admissions. Anaesthesia 2003; 58: 797–802.
7. Subbe CP, Kruger M, Rutherford P, Gemmel L. Validation of a modified Early Warning Score in medical admissions. Q. J. Med. 2001; 94: 521–6.
8. Subbe CP, Slater A, Menon D, Gemmell L. Validation of physiological scoring systems in the accident and emergency department. Emerg. Med. J. 2006; 23: 841–5.
9. Paterson R, MacLeod DC, Thetford D et al. Prediction of in-hospital mortality and length of stay using an early warning scoring system: clinical audit. Clin. Med. 2006; 6: 281–4.
10. Smith GB, Prytherch DR, Schmidt PE, Featherstone PI. Review and performance evaluation of aggregate weighted 'track and trigger' systems. Resuscitation 2008; 77: 170–9.
11. Asadollahi K, Beeching N, Gill G. Hyponatraemia as a risk factor for hospital mortality. Q. J. Med. 2006; 99: 877–80.
12. Tierney WM, Martin DK, Greenlee MC, Zerbe RL, McDonald CJ. The prognosis of hyponatremia at hospital admission. J. Gen. Intern. Med. 1986; 1: 380–5.
13. Anderson RJ, Chung HM, Kluge R, Schrier RW. Hyponatremia: a prospective analysis of its epidemiology and the pathogenetic role of vasopressin. Ann. Intern. Med. 1985; 102: 164–8.
14. Lindner G, Funk GC, Schwarz C et al. Hypernatremia in the critically ill is an independent risk factor for mortality. Am. J. Kidney Dis. 2007; 50: 952–7.
15. Weir CJ, Murray GD, Dyker AG, Lees KR. Is hyperglycaemia an independent predictor of poor outcome after acute stroke? Results of a long-term follow up study. BMJ 1997; 314 (7090): 1303–6.
16. Norhammar AM, Ryden L, Malmberg K. Admission plasma glucose. Independent risk factor for long-term prognosis after myocardial infarction even in nondiabetic patients. Diabetes Care 1999; 22: 1827–31.
17. Capes SE, Hunt D, Malmberg K, Gerstein HC. Stress hyperglycaemia and increased risk of death after myocardial infarction in patients with and without diabetes: a systematic overview. Lancet 2000; 355 (9206): 773–8.
18. Umpierrez GE, Isaacs SD, Bazargan N, You X, Thaler LM, Kitabchi AE. Hyperglycemia: an independent marker of in-hospital mortality in patients with undiagnosed diabetes. J. Clin. Endocrinol. Metab. 2002; 87 (3): 978–82.
19. Asadollahi K, Beeching N, Gill G. Hyperglycaemia and mortality. J. R. Soc. Med. 2007; 100: 503–7.
20. Lawrence YR, Raveh D, Rudensky B, Munter G. Extreme leukocytosis in the emergency department. Q. J. Med. 2007; 100: 217–23.
21. Le Tulzo Y, Pangault C, Gacouin A et al. Early circulating lymphocyte apoptosis in human septic shock is associated with poor outcome. Shock 2002; 18: 487–94.
22. Reding MT, Hibbs JR, Morrison VA, Swaim WR, Filice GA. Diagnosis and outcome of 100 consecutive patients with extreme granulocytic leukocytosis. Am. J. Med. 1998; 104 (1): 12–16.
23. McClellan WM, Flanders WD, Langston RD, Jurkovitz C, Presley R. Anemia and renal insufficiency are independent risk factors for death among patients with congestive heart failure admitted to community hospitals: a population-based study. J. Am. Soc. Nephrol. 2002; 13 (7): 1928–36.
24. Langston RD, Presley R, Flanders WD, McClellan WM. Renal insufficiency and anemia are independent risk factors for death among patients with acute myocardial infarction. Kidney Int. 2003; 64 (4): 1398–405.
25. Asadollahi K, Hastings IM, Beeching NJ, Gill GV. Laboratory risk factors for hospital mortality in acutely admitted patients. Q. J. Med. 2007; 100: 501–7.
26. Smith GB, Prytherch DR, Schmidt PE et al. Should age be included as a component of track and trigger systems used to identify sick adult patients? Resuscitation 2008; 78: 109–15.
27. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med. Care 2008; 46: 232–9.
28. Katz MH. Multivariate Analysis. A Practical Guide for Clinicians. Cambridge: Cambridge University Press, 1999.
29. Handin IR. Disorders of hemostasis. In: Fauci SA et al., eds. Harrison's Text Book of Medicine, 14th edn. New York: McGraw-Hill Companies, 1998; 730.
30. Weatherall DJ. Disorders of the blood. In: Weatherall DJ, Ledingham JGG, Warrell DA, eds. Oxford Textbook of Medicine, 3rd edn. New York: Oxford University Press, 1996; 3377–81.
31. World Health Organisation. Definition, Diagnosis, and Classification of Diabetes Mellitus and Its Complications. Report of a WHO Consultation. Part 1: Diagnosis and Classification of Diabetes Mellitus. Geneva: World Health Organization, 1999.
32. McNeil BJ, Hanley JA. Statistical approaches to the analysis of receiver operating characteristic (ROC) curves. Med. Decis. Making 1984; 4: 137–50.
33. Hucker TR, Mitchell GP, Blake LD et al. Identifying the sick: can biochemical measurements be used to aid decision making on presentation to the accident and emergency department. Br. J. Anaesth. 2005; 94: 735–41.
34. Prytherch DR, Sirl JS, Schmidt P, Featherstone PI, Weaver PC, Smith GB. The use of routine laboratory data to predict in-hospital death in medical admissions. Resuscitation 2005; 66: 203–7.
35. Barlow G, Nathwani D, Davey P. The CURB65 pneumonia severity score outperforms generic sepsis and early warning scores in predicting mortality in community-acquired pneumonia. Thorax 2007; 62: 253–9.
36. Tabak YP, Johannes RS, Silber JH. Using automated clinical data for risk adjustment: development and validation of six disease-specific mortality predictive models for pay-for-performance. Med. Care 2007; 45: 789–805.
37. Hourihan F, Bishop G, Hillman K, Daffurn K, Lee A. The Medical Emergency Team: a new strategy to identify and intervene in high-risk patients. Clin. Intensive Care 1995; 6: 269–72.
38. Hosoglu S, Geyik MF, Akalin S, Ayaz C, Kokoglu OF, Loeb M. A simple validated prediction rule to diagnose typhoid fever in Turkey. Trans. R. Soc. Trop. Med. Hyg. 2006; 100: 1068–74.
39. Silke B, Kellett J, Rooney T, Bennett K, O'Riordan D. An improved medical admissions risk system using multivariable fractional polynomial logistic regression modeling. Q. J. Med. 2010; 103: 23–32.
40. Smith GB, Prytherch DR, Schmidt P et al. Hospital-wide physiological surveillance: a new approach to the early identification and management of the sick patient. Resuscitation 2006; 71 (1): 19–28.
41. Collini P, Beadsworth M, Anson J et al. Community-acquired pneumonia: doctors do not follow national guidelines. Postgrad. Med. J. 2007; 83 (982): 552–5.