Keywords: core competency; biostatistics; knowledge and skills assessment

Abstract

Research training has enabled academic clinicians to contribute significantly to the body of medical research literature. Biostatistics represents a critical methodological skill for such researchers, as statistical methods are increasingly a necessary part of medical research. However, there is no validated knowledge and skills assessment for graduate-level biostatistics for academic medical researchers. In this paper, I review graduate-level statistical competencies and evaluate existing instruments, designed either to assess physicians’ ability to read the medical literature or to assess undergraduate statistics students, for their alignment with the core competencies necessary for successful use of statistics. This analysis shows the need for a new instrument to assess biostatistical competencies for medical researchers. Clin Trans Sci 2011; Volume 4: 448–454


Introduction

Research training has enabled academic clinicians to contribute significantly to the body of medical research literature. The National Institutes of Health (NIH) offers a series of grants to fund the training of such researchers, both individually and through training programs. Biostatistics represents a critical methodological skill for these researchers, as statistical methods are increasingly a necessary part of medical research. Published original medical research articles now nearly always include statistics, and the complexity of the statistical methods used has increased over time.1,2 The need for an assessment instrument is illustrated by Young et al. (2002),3 who found that physicians who claimed to understand statistical methods in a self-assessment were unable to adequately explain these methods during a structured interview. Yet this skill is valuable: when graduates are competent to perform basic statistical methods independently, their research can be completed more quickly and at lower cost. Such competence also helps biostatistics departments in academic medical centers, which typically carry heavy workloads; transferring some projects to medical research staff is in the best interests of both groups.

There are two primary disciplines that train medical researchers: clinical and translational science (CTS) and public health (PH). PH programs are well established compared with CTS programs, which have largely been developed in response to NIH funding opportunities in recent years. Both types of programs include biostatistics in their required coursework. In the past few years, both disciplines have worked to develop core competencies that define expectations for their graduates.4,5 Although the competencies differ in other areas, the goals for biostatistics are largely the same.

Students’ competency in biostatistics should mirror the statistical methods required to engage in research by the time they complete the degree. The most fundamental statistical methods for the design, analysis, and reporting of clinical research are described in the Consolidated Standards of Reporting Trials (CONSORT) document,6 which lists criteria for manuscripts reporting randomized clinical trials. Because randomized trials have fewer issues with confounding than observational studies, some of the more advanced statistical topics are de-emphasized in CONSORT. Although randomized trials represent the pinnacle of medical research because of their experimental nature and greatly reduced chance of confounding, randomization is not always ethical or feasible. Therefore, students also need a thorough grounding in observational methods; similar criteria have been developed for observational studies in the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) manuscript guidelines.7–9 Of note, statistical errors in published manuscripts and in those submitted for publication have persisted even after the CONSORT document was published in 2001.10–12 In particular, Strasak et al. (2007) found that 16% of original research articles in the New England Journal of Medicine used an incorrect statistical test; they found the same types of mistakes in 27% of original research articles in Nature Medicine.10

There has not yet been an effort to develop a knowledge and skills assessment for graduate level biostatistics to match the disciplines’ core competencies or the manuscript guidelines. However, such an assessment will be needed to ensure that students are learning the appropriate skills during their coursework, that program graduates have attained the intended competencies, and that competency is retained and provides the necessary research skills once graduates become practicing medical researchers.

While no assessment has been developed specifically for this group, two related types of assessment instruments are available: outcome measures for introductory undergraduate statistics13 and assessments of the biostatistical knowledge practicing physicians need to read and comprehend the medical literature.14–20 The goal of this paper is to determine what topics should be included in an assessment for graduate-level medical researchers and to evaluate whether any of the existing instruments is sufficient for this population.

Methods

A comprehensive list of biostatistics and statistically related competencies for academic clinicians was developed in stages. First, I combined the statistical competencies from the CTS and PH competency documents to obtain a full list of competencies previously identified for graduate-level biostatistics. In the interests of completeness, this list included areas required for or dependent upon statistics. Competencies that were similar from a statistical viewpoint were grouped both within and between disciplines. I then compared this list with the set of statistical methods needed to write manuscripts for different experimental and observational study designs, using three guidelines: CONSORT, TREND, and STROBE.6–9 Methods from the manuscript guidelines that were not included in the disciplines’ competency list were added to create a comprehensive set of competencies for statistics in medical research.

Assessment instruments were identified by searching PubMed and Google Scholar. Unfortunately, searches such as “statistics assessment survey validation” return results from across the medical literature because these topics are so inextricably intertwined with research, so it is possible that some assessment tools were missed in this review. Only published instruments designed specifically to assess statistics or biostatistics, and which included either the instrument itself or a detailed description of individual questions, were included in this review. The instruments were described in terms of their intended population, goals, and validation.

The authors of these instruments used a variety of validation methods. In content validation, experts with knowledge of what such an instrument should contain provide a review. Two types of criterion validity were also used; in criterion validity, the scale is compared with a gold standard, in this case one associated with a higher level of statistical expertise. One of these was concurrent validity, in which the scores of people with more statistical expertise were shown to be better than the scores of people with less statistical expertise. The other was responsiveness, in which the instrument score was shown to change following exposure to statistical training.
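
To make the concurrent-validity comparison concrete, the minimal sketch below contrasts the proportion of items answered correctly in a more-expert group with that in a less-expert group using a two-proportion z-test. The item counts are hypothetical (chosen only to echo the kind of 72% vs. 55% contrast reported for one of the instruments), and the z-test is one reasonable analysis choice, not necessarily the one used by the original authors.

```python
# A minimal sketch of a concurrent-validity check: compare the proportion of
# items answered correctly in a more-expert group with a less-expert group.
# The counts below are hypothetical and purely illustrative.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(correct_a, total_a, correct_b, total_b):
    """Two-sided z-test for a difference between two independent proportions."""
    p_a, p_b = correct_a / total_a, correct_b / total_b
    pooled = (correct_a + correct_b) / (total_a + total_b)   # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    return p_a, p_b, z, 2 * norm.sf(abs(z))                  # two-sided p-value

# Hypothetical pooled item counts: 2,880/4,000 correct (expert group) vs. 3,300/6,000
p_expert, p_other, z, p = two_proportion_z_test(2880, 4000, 3300, 6000)
print(f"{p_expert:.0%} vs. {p_other:.0%}: z = {z:.1f}, p = {p:.2g}")
```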

A list of statistical topic areas was developed iteratively using the assessment items in conjunction with the author’s experience teaching introductory biostatistics as a faculty member for 8 years. Individual items from each instrument were then associated with statistical topic areas. More than one statistical topic area was assigned only when all areas were required to respond correctly to the question. Questions designed to collect self-efficacy data were excluded. Matching items were treated as individual questions for each matched pair. The statistical topic areas included in the existing assessment instruments were then cross-referenced with the statistical competencies for a gap analysis.
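
The cross-referencing step amounts to simple set arithmetic over the item-to-topic coding. The sketch below illustrates the idea with hypothetical fragments of such a coding; the item labels and mappings are invented for illustration and are not the actual coding used in this review.

```python
# Minimal sketch of the cross-referencing step: map each instrument's items to
# statistical topic areas, then flag competency topics with no items anywhere.
# The mappings below are illustrative fragments, not the full coding.

instrument_items = {
    "Instrument A": {
        "Q1": {"significance testing and p-values"},
        "Q2": {"unadjusted methods for independent binary data"},
    },
    "Instrument B": {
        "Q1": {"graphing and summary statistics"},
    },
}

# Topics implied by the competency list (fragment only)
competency_topics = {
    "significance testing and p-values",
    "multiple comparisons",
    "methods for clustered, matched, paired, or longitudinal studies",
}

covered = {topic
           for items in instrument_items.values()
           for topics in items.values()
           for topic in topics}

gaps = sorted(competency_topics - covered)
print("Topics with no assessment items:", gaps)
```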

Results

Table 1 shows a comprehensive set of statistical competencies together with the source of each. In addition to the 11 competencies described in the CTS and PH competency documents, there are six more competencies implied by the three sets of manuscript guidelines.

Table 1.  Sources of statistical competencies, including a comparison of similar competencies from the CTR and PH disciplines.

No. | Clinical and translational research† | Public health† | Guidelines
1 | Assess sources of bias and variation in published studies; assess threats to study validity (bias) including problems with sampling, recruitment, randomization, and comparability of study groups | — | CONSORT, TREND, STROBE
2 | Propose study designs for addressing a clinical or translational research question | — | STROBE
3 | Describe the basic principles and practical importance of random variation, systematic error, sampling error, measurement error, hypothesis testing, type I and type II errors, and confidence limits | Describe the basic concepts of probability, random variation, and commonly used statistical probability distributions | —
4 | Compute sample size, power, and precision for comparisons of two independent samples with respect to continuous and binary outcomes | — | CONSORT, TREND, STROBE
5 | Explain the uses, importance, and limitations of early stopping rules in clinical trials | — | CONSORT, TREND
6 | Describe the concepts and implications of reliability and validity of study measurements; evaluate the reliability and validity of measures | Calculate basic epidemiologic measures; draw appropriate inferences from epidemiologic data | TREND
7 | Scrutinize the assumptions behind different statistical methods and their corresponding limitations | Describe preferred methodological alternatives to commonly used statistical methods when assumptions are not met | —
8 | — | Distinguish among the different measurement scales and the implications for selection of statistical methods to be used on the basis of these distinctions | —
9 | Generate simple descriptive and inferential statistics that fit the study design chosen and answer research question | Apply descriptive techniques commonly used to summarize public health data; apply common statistical methods for inference; apply descriptive and inferential methodologies according to the type of study design for answering a particular research question | CONSORT, TREND, STROBE
10 | Describe the uses of meta-analytic methods | — | —
11 | Communicate clinical and translational research findings to different groups of individuals, including colleagues, students, the lay public, and the media; write summaries of scientific information for use in the development of clinical health care policy | Interpret results of statistical analyses found in public health studies; develop written and oral presentations on the basis of statistical analyses for both public health professionals and educated lay audiences | CONSORT, TREND, STROBE

Competencies not described in the disciplines’ competency documents
12 | Describe size of the effect with a measure of precision | CONSORT, TREND, STROBE
13 | Describe the study sample, including sampling methods, the amount and type of missing data, and the implications for generalizability | TREND, STROBE
14 | Interpret results in light of multiple comparisons | CONSORT, TREND, STROBE
15 | Identify inferential methods appropriate for clustered, matched, paired, or longitudinal studies | TREND, STROBE
16 | Describe adjusted inferential methods appropriate for the study design, including examination of interaction | CONSORT, TREND, STROBE
17 | Describe statistical methods appropriate to address loss to follow-up | STROBE
† Text in these columns is quoted from the CTR and PH competency documents.4,5
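
Competency 4 in Table 1 asks students to compute sample size and power for two-sample comparisons. As a minimal sketch of that calculation, the following uses the standard normal-approximation formula for comparing two independent means; the effect size, standard deviation, alpha, and power are illustrative choices, not quantities taken from this paper.

```python
# Minimal sketch of competency 4 (Table 1): sample size per group for comparing
# two independent means with a two-sided test, using the normal approximation.
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """n per group to detect a mean difference `delta` with common SD `sd`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# e.g., detect a 5 mmHg difference in blood pressure assuming SD 12
print(n_per_group(delta=5, sd=12))   # roughly 91 per group
```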

Table 2 describes metadata for the statistical assessment instruments, including the target population and validation metrics. One of the instruments, by delMas et al.,13 was designed to assess students learning introductory undergraduate statistics. All of the remaining instruments were designed to assess whether various subsets of the medical community are able to use statistics effectively when reading the medical literature. None of the assessment instruments was designed specifically for a population of medical researchers.

Table 2.  Overview of the target population and validation results for the statistical assessment instruments.

First author | Target population | Sample size | Number of items | Validation
delMas13 | Undergraduate students | 1470 | 40 | Content validation by experts; responsiveness following statistics coursework (45% pre vs. 54% post, p < 0.001)
Ferrill14 | Practicing pharmacists | 707 | 10 | Pharmacists with advanced training performed better than those with only a bachelor’s degree (40% vs. 25%, p < 0.0001)
Berwick15 | Practicing physicians | 281 | 41 | Academic physicians, medical students, and house officers as a group performed better than practicing physicians (72% vs. 55%, p < 0.001)
Windish16 | Physicians in training | 277 | 18 | Content validation by faculty with advanced training in epidemiology and biostatistics; trainees with advanced degrees performed better than those without (50% vs. 40%, p < 0.001), and fellows and medical faculty with advanced training in biostatistics performed better than residents (72% vs. 41%, p < 0.001)
Wulff17 | Practicing physicians | 245 | 9 | Physicians completing postgraduate research coursework performed better than practicing physicians (mean: 4.0 vs. 2.4, p < 0.0001)
Novack18 | Practicing physicians | 226 | 10 | Physicians who reported regularly reading the methods section of research articles performed better than those who did not (median (IQR): 5 (4–7) vs. 4 (3–5), p < 0.001)
Ahmadi-Abhari19 | Physicians in training | 104 | 6 | Trainees with prior research experience scored higher than those without (mean: 3.3 vs. 2.7, p = 0.03), and trainees with evidence-based medicine training performed better than those without (mean: 3.2 vs. 2.5, p = 0.02)
Raju20 | Practicing physicians | 68 | 16 | None described

Table 3 shows the number and percentage of items in each instrument pertaining to each topic. The numbers in the header row represent the total number of items in each instrument, and instruments are grouped by intended population (undergraduate or medical); a short computational sketch after the table shows how the cell percentages and the medical total are derived. Recall that some items are counted more than once when multiple topic areas are needed to answer the question correctly. Only three questions were associated with none of these statistical topics: one question in delMas et al. assesses generalizability of results, one question in Berwick et al. assesses regression to the mean, and another requires students to identify the typical duration of disease from prevalence and incidence information. One topic was missing from all of these instruments: methods appropriate for clustered, matched, paired, or longitudinal studies.

Table 3.  Number (percent) of items associated with statistical topic areas for undergraduate and medical assessment instruments

Statistical topic | delMas* (N = 40) | Ferrill (N = 10) | Berwick (N = 41) | Windish (N = 18) | Wulff (N = 9) | Novack (N = 10) | Ahmadi-Abhari (N = 6) | Raju (N = 16) | Medical total (N = 110)
Diagnostic testing (sensitivity, specificity, positive and negative predictive values, pretest and posttest probability) | — | — | 26 (63) | 1 (6) | — | — | 3 (50) | — | 30 (27)
Unadjusted methods for independent continuous data (t-tests, ANOVA, simple linear regression) | 1 (3) | 2 (20) | — | 4 (22) | 4 (44) | 2 (20) | — | 4 (25) | 16 (15)
Assess assumptions and select an appropriate method | — | 4 (40) | — | 2 (11) | 2 (22) | 3 (30) | — | 1 (6) | 12 (11)
Significance testing and p-values | 7 (18) | 2 (20) | 1 (2) | 1 (6) | 3 (33) | 1 (10) | — | 4 (25) | 12 (11)
Unadjusted methods for independent binary data (odds ratio, relative risk, difference in risk, number needed to treat) | — | 1 (10) | 3 (7) | 3 (17) | — | 2 (20) | 1 (17) | 1 (6) | 11 (10)
Confidence intervals | 4 (10) | 1 (10) | — | 2 (11) | 4 (44) | 1 (10) | — | 1 (6) | 9 (8)
Describe the size of the effect or association | — | — | — | 4 (22) | 1 (11) | — | — | 1 (6) | 6 (5)
Probability and probability distributions | 2 (5) | — | 6 (15) | — | — | — | — | — | 6 (5)
Study design | 2 (5) | 1 (10) | — | 1 (6) | — | 2 (20) | 1 (17) | — | 6 (5)
Clinical relevance versus statistical significance | 1 (3) | 2 (20) | — | — | 1 (11) | 1 (10) | 1 (17) | 1 (6) | 5 (5)
Multiple comparisons | — | — | 2 (5) | — | 1 (11) | — | — | 2 (13) | 5 (5)
Power and sample size | 1 (3) | 1 (10) | — | 1 (6) | — | 1 (10) | — | 1 (6) | 4 (4)
Graphing and summary statistics | 18 (45) | — | — | — | — | — | — | 3 (19) | 3 (3)
Confounding | 1 (3) | — | 2 (5) | — | — | 1 (10) | — | — | 3 (3)
Blinding and bias | — | — | — | 2 (11) | — | 1 (10) | — | — | 3 (3)
Adjusted methods for continuous data (linear regression and correlation) | — | — | — | — | — | — | — | 3 (19) | 3 (3)
Central limit theorem and variability of statistics | 4 (10) | — | 1 (2) | — | 1 (11) | — | — | — | 2 (2)
Adjusted methods for binary data (logistic regression) | — | — | — | 2 (11) | — | — | — | — | 2 (2)
Unadjusted methods for independent time-to-event data (Kaplan–Meier curves) | — | — | — | 1 (6) | — | — | — | — | 1 (1)
Adjusted methods for time-to-event data (Cox regression) | — | — | — | 1 (6) | — | — | — | — | 1 (1)
Variable types | — | — | — | 1 (6) | — | — | — | — | 1 (1)
Statistical significance versus sample size | — | — | — | — | — | — | — | 1 (6) | 1 (1)
Sampling | 1 (3) | — | — | — | — | — | — | — | 0
Methods appropriate for clustered, matched, paired, or longitudinal studies | — | — | — | — | — | — | — | — | 0
* Assessed in undergraduate students.
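
The following sketch reproduces the arithmetic behind Table 3 for the diagnostic-testing row: each cell is an item count divided by that instrument’s total number of items, and the medical total pools every instrument except delMas. The counts come from the table itself; the code is only an illustration of the bookkeeping, not software used in the review.

```python
# Sketch of the arithmetic behind Table 3, using the diagnostic-testing row.
# Percentages are item counts divided by each instrument's total item count;
# the medical total pools all instruments except delMas (undergraduate).
instrument_sizes = {"delMas": 40, "Ferrill": 10, "Berwick": 41, "Windish": 18,
                    "Wulff": 9, "Novack": 10, "Ahmadi-Abhari": 6, "Raju": 16}

diagnostic_testing = {"Berwick": 26, "Windish": 1, "Ahmadi-Abhari": 3}

for name, count in diagnostic_testing.items():
    pct = round(100 * count / instrument_sizes[name])
    print(f"{name}: {count} ({pct}%)")

medical_n = sum(n for name, n in instrument_sizes.items() if name != "delMas")  # 110
medical_count = sum(diagnostic_testing.values())                                # 30
print(f"Medical total: {medical_count} ({round(100 * medical_count / medical_n)}%)")
```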

Among the instruments designed to assess numeracy in the medical community, there is wide variation in the statistical topics assessed. However, several topics are included in all but one or two instruments: assessing assumptions and selecting an appropriate method, unadjusted methods for independent continuous and binary data, significance testing and p-values, confidence intervals, and clinical relevance versus statistical significance.

Table 4 cross-references the statistical methods used in Table 3 with the statistical competency areas. In addition, a gap analysis is provided to identify topics needed to address the competencies but not included in the set of statistical topics from Table 3. The gap analysis addresses only topic areas missing from the existing instruments, not the quality or quantity of existing questions.

Table 4.  Comparison and gap analysis of statistical topics versus competency areas

Competency area | Statistical topic(s) from existing instruments | Additional topics needed
1. Assess sources of bias and variation in published studies and threats to study validity (bias) including problems with sampling, recruitment, randomization, and comparability of study groups | Sampling; blinding and bias; confounding | —
2. Propose study designs for addressing a clinical or translational research question | Study design | —
3. Describe the basic principles and practical importance of probability, random variation, systematic error, sampling error, measurement error, commonly used statistical probability distributions, hypothesis testing, type I and type II errors, and confidence limits | Probability and probability distributions; central limit theorem and variability of statistics; confidence intervals; significance testing and p-values; clinical relevance versus statistical significance | Statistical errors
4. Compute sample size, power, and precision for comparisons of two independent samples with respect to continuous and binary outcomes | Power and sample size | —
5. Explain the uses, importance, and limitations of early stopping rules in clinical trials | — | Explain the uses, importance, and limitations of early stopping rules in clinical trials
6. Describe the concepts and implications of reliability and validity of study measurements and evaluate the reliability and validity of measures | Diagnostic testing | Describe the concepts and implications of reliability and validity of study measurements; evaluate reliability measures; evaluate measures of validity for survey data
7. Scrutinize the assumptions behind different statistical methods and their corresponding limitations and describe preferred methodological alternatives to commonly used statistical methods when assumptions are not met | Assess assumptions and select an appropriate method | —
8. Distinguish among the different measurement scales and the implications for selection of statistical methods to be used on the basis of these distinctions | Variable types; assess assumptions and select an appropriate method | —
9. Generate simple descriptive and inferential statistics that fit the study design chosen and answer research question | Graphing and summary statistics; assess assumptions and select an appropriate method; unadjusted methods for independent continuous data; unadjusted methods for independent binary data; unadjusted methods for independent time-to-event data | —
10. Describe the uses of meta-analytic methods | — | Describe the uses of meta-analytic methods
11. Communicate research findings for scientific and lay audiences | Describe the size of the effect or association; clinical relevance versus statistical significance; confidence intervals; significance testing and p-values; statistical significance versus sample size; confounding; blinding and bias | —
12. Describe size of the effect with a measure of precision | Describe the size of the effect or association; confidence intervals | —
13. Describe the study sample, including sampling methods, the amount and type of missing data, and the implications for generalizability | Sampling; blinding and bias; graphing and summary statistics | Describe the type of missing data and the implications for generalizability
14. Interpret results in light of multiple comparisons | Multiple comparisons | —
15. Identify inferential methods appropriate for clustered, matched, paired, or longitudinal studies | — | Identify inferential methods appropriate for clustered, matched, paired, or longitudinal studies
16. Identify adjusted inferential methods appropriate for the study design, including examination of interaction | Assess assumptions and select an appropriate method; adjusted methods for continuous data; adjusted methods for binary data; adjusted methods for time-to-event data | Identify methods to assess interaction
17. Describe statistical methods appropriate to address loss to follow-up | Unadjusted methods for independent time-to-event data; adjusted methods for time-to-event data | Describe methods to address missing data

Conclusions

This analysis shows the need for a new instrument to assess biostatistical competencies for medical researchers. Neither the existing instruments nor a set of questions drawn from across these instruments sufficiently addresses the competencies required for medical research using statistics. Further research will be required to identify and validate a brief but complete set of questions addressing the competency areas appropriately.

The Comprehensive Assessment of Outcomes in Statistics (CAOS) test by delMas et al.13 was specifically designed to assess students’ understanding of variability, and this is reflected in the topic areas of its questions. While appropriate for undergraduates in introductory statistics, the difference in the distribution of question topics corroborates prior work demonstrating quantifiable baseline differences among graduate students in biostatistics (Enders F, unpublished manuscript). While variability remains a critical topic in the foundation of biostatistics, the competency expectations for this group are far more complex. However, the CAOS test is the only measure designed to assess students who are learning statistics in the classroom, rather than those who need only use statistics to read journal articles. As such, it provides an important indication of topics that may be important for the learning process, such as understanding study designs and the central limit theorem.

The instruments designed to assess practicing physicians fit more closely with the goals of this paper. However, practicing physicians need only be able to read and understand the research literature; they need not understand the intricacies of statistical methods or perform the methods themselves. The questions designed for this population reflect the need for physicians to understand pretest and posttest probability in order to treat their patients appropriately;21 accordingly, there are numerous questions on diagnostic testing. Nevertheless, as a whole the instruments for this group seem somewhat inadequate even for their designated purpose, as little attention is paid to assessing whether an appropriate method has been used or to interpreting statistical results. Both of these skills are required to robustly critique or defend a research manuscript. Similarly, very few questions pertain to confounding, yet a thorough grounding in this topic is essential for understanding and assessing observational studies. More research may be required to develop or refine an assessment instrument more appropriate for statistical methods in evidence-based medicine.
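
As an illustration of the pretest/posttest-probability reasoning that these diagnostic-testing items target, the sketch below converts a pretest probability into a posttest probability (positive predictive value) using Bayes’ rule; the sensitivity, specificity, and prevalence values are illustrative and not drawn from any of the instruments.

```python
# Sketch of the pretest/posttest-probability idea behind diagnostic-testing items:
# convert pretest probability to posttest probability with Bayes' rule.
# Sensitivity, specificity, and prevalence values here are illustrative.
def posttest_probability(pretest, sensitivity, specificity):
    """Probability of disease given a positive test (positive predictive value)."""
    true_pos = pretest * sensitivity
    false_pos = (1 - pretest) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A 90%-sensitive, 90%-specific test applied where pretest probability is 5%
print(round(posttest_probability(0.05, 0.90, 0.90), 2))   # ~0.32
```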

The statistical competencies for clinical and translational research seem more mature than those from PH. This likely reflects the timing of competency development; the clinical and translational research group benefited from the prior work of their PH colleagues. Both disciplines’ documents appear to have some gaps (Table 1). These are presumably oversights; competencies such as proposing appropriate study designs and performing sample size and power calculations are routinely included in most graduate level curricula for both disciplines. However, as competency documents are increasingly used to guide program evaluation, it is to be hoped that these gaps will be considered in future revisions.

Several statistical topics are required by the competencies but not included in any of these assessment instruments. Perhaps the most egregious of these omissions is the lack of items assessing dependent data, such as paired, matched, or longitudinal data. Such data are frequently used in observational studies, as matching is one way to minimize the effect of anticipated confounders. Indeed, Horton and Switzer (2005) found that 12% of the New England Journal of Medicine’s original articles in 2004–2005 used repeated measures methods, one of the more advanced techniques for handling dependent data.22 Together, these gaps may serve as a starting point for an effort to develop a new assessment instrument targeted to graduate students in CTS and PH.
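
To see why dependent data call for their own methods, the brief sketch below contrasts a paired analysis of before/after measurements with an independent-samples analysis that ignores the pairing; the data are invented for illustration only.

```python
# Sketch contrasting a paired analysis with an independent-samples analysis
# of the same before/after data; the values are invented for illustration.
from scipy.stats import ttest_rel, ttest_ind

before = [142, 150, 138, 160, 155, 148]   # e.g., blood pressure at baseline
after  = [138, 145, 136, 154, 151, 144]   # same subjects after treatment

print(ttest_rel(before, after))   # paired test: uses within-subject differences
print(ttest_ind(before, after))   # ignores pairing; typically far less powerful here
```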

In developing a new assessment instrument for researchers, Table 4 may help guide instrument development, as it shows both a comprehensive list of statistical competencies and the corresponding statistical methods upon which specific questions can be based. Most importantly, Table 3 includes an assessment of which topics are not covered within this set of questions, but this list should not be considered exhaustive. A review of individual items for quality and comprehensiveness will also be needed to determine whether additional questions are required even for topic areas that already have questions.

It is likely that two instruments will be required: one a full assessment of statistical competency, and another designed for the topics taught in introductory biostatistics courses. This distinction is aided by the wording of the competencies, which vary in their level of expectation. Students are expected to perform lower level statistical methods themselves but to work with a statistician for more complex methods. Working with a statistician requires some knowledge of appropriate methods, sufficient common language for collaboration, and the ability to interpret statistical results. Few of the previously developed items assess this second level of understanding.

A new assessment instrument would ideally be developed and validated by a group of interinstitutional experts in biostatistics as used in clinical research. Fortunately, such a group has been formed by the NIH through the Clinical and Translational Science Award (CTSA) mechanism. Researchers from CTSA institutions collaborate in a variety of areas, including education and evaluation.23 Indeed, the biostatistics courses taught within the CTSA come from both PH and clinical and translational research programs, so this collaboration could also provide an initial step toward unifying the statistical competencies from the two disciplines. A new, publicly available instrument would open the door to further intra- and interinstitutional research on knowledge and skill levels before and after biostatistics coursework, before and after research degree programs, and among practicing researchers. Such research would, in turn, aid statistics coursework and program development to improve graduate outcomes.

Acknowledgment

This work was funded by Mayo Clinic CTSA (NCRR U54RR 24150-5).

References

1. Hellems MA, Gurka MJ, Hayden GF. Statistical literacy for readers of Pediatrics: a moving target. Pediatrics. 2007; 119(6): 1083–1088.
2. Reed JF 3rd, Salen P, Bagher P. Methodological and statistical techniques: what do residents really need to know about statistics? J Med Syst. 2003; 27(3): 233–238.
3. Young JM, Glasziou P, Ward JE. General practitioners’ self ratings of skills in evidence based medicine: validation study. BMJ. 2002; 324(7343): 950–951.
4. Workgroup CECC. Core competencies in clinical and translational science for master’s candidates. Available at: http://www.ctsaweb.org/index.cfm?fuseaction=committee.viewCommittee&com_ID=5. Accessed August 22, 2011.
5. Calhoun JG, Ramiah K, Weist EM, Shortell SM. Development of a core competency model for the master of public health degree. Am J Public Health. 2008; 98(9): 1598–1607.
6. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010; 340(c869): 1–28.
7. Des Jarlais D, Lyles C, Crepaz N, the TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004; 94(3): 361–366.
8. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Bull World Health Organ. 2007; 85(11): 867–872.
9. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, Poole C, Schlesselman JJ, Egger M. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. PLoS Med. 2007; 4(10): 1628–1654.
10. Strasak AM, Zaman Q, Marinell G, Pfeiffer KP, Ulmer H. The use of statistics in medical research: a comparison of the New England Journal of Medicine and Nature Medicine. Am Stat. 2007; 61(1): 47–56.
11. Olsen CH. Review of the use of statistics in Infection and Immunity. Infect Immun. 2003; 71(12): 6689–6692.
12. Harris AH, Reeder R, Hyun JK. Common statistical and research design problems in manuscripts submitted to high-impact psychiatry journals: what editors and reviewers want authors to know. J Psychiatr Res. 2009; 43(15): 1231–1234.
13. delMas R, Garfield J, Ooms A, Chance B. Assessing students’ conceptual understanding after a first course in statistics. Stat Educ Res J. 2007; 6(2): 28–58.
14. Ferrill MJ, Norton LL, Blalock SJ. Determining the statistical knowledge of pharmacy practitioners: a survey and review of the literature. Am J Pharm Educ. 1999; 63: 371–376.
15. Berwick DM, Fineberg HV, Weinstein MC. When doctors meet numbers. Am J Med. 1981; 71(6): 991–998.
16. Windish D, Huot S, Green M. Medicine residents’ understanding of the biostatistics and results in the medical literature. JAMA. 2007; 298(9): 1010–1022.
17. Wulff HR, Andersen B, Brandenhoff P, Guttler F. What do doctors know about statistics? Stat Med. 1987; 6: 3–10.
18. Novack L, Jotkowitz A, Knyazer B, Novack V. Evidence-based medicine: assessment of knowledge of basic epidemiological and research methods among medical doctors. Postgrad Med J. 2006; 82: 817–822.
19. Ahmadi-Abhari S, Soltani A, Hosseinpanah F. Knowledge and attitudes of trainee physicians regarding evidence-based medicine: a questionnaire survey in Tehran, Iran. J Eval Clin Pract. 2008; 14(5): 775–779.
20. Raju TN, Langenberg PW, Vidyasagar D, Sen AK. A biostatistical survey questionnaire. J Pediatr. 1988; 112(6): 859–863.
21. Rao G. Physician numeracy: essential skills for practicing evidence-based medicine. Fam Med. 2008; 40(5): 354–358.
22. Horton NJ, Switzer SS. Statistical methods in the journal. N Engl J Med. 2005; 353(18): 1977–1979.
23. Kon AA. The Clinical and Translational Science Award (CTSA) Consortium and the translational research model. Am J Bioeth. 2008; 8(3): 58–60; discussion W51–53.