Methodology Protocol


Eliciting adverse effects data from participants in clinical trials

  1. Elizabeth N Allen¹,*
  2. Clare IR Chandler²
  3. Nyaradzo Mandimika¹
  4. Karen Barnes³

Editorial Group: Cochrane Methodology Review Group

Published Online: 31 MAY 2013

Assessed as up-to-date: 11 MAR 2012

DOI: 10.1002/14651858.MR000039


How to Cite

Allen EN, Chandler CIR, Mandimika N, Barnes K. Eliciting adverse effects data from participants in clinical trials (Protocol). Cochrane Database of Systematic Reviews 2013, Issue 5. Art. No.: MR000039. DOI: 10.1002/14651858.MR000039.

Author Information

  1. University of Cape Town, Division of Clinical Pharmacology, Dept of Medicine, Cape Town, Western Cape, South Africa

  2. London School of Hygiene and Tropical Medicine, Dept of Global Health & Development, London, UK

  3. University of Cape Town, Division of Clinical Pharmacology, Cape Town, South Africa

*Elizabeth N Allen, Division of Clinical Pharmacology, Dept of Medicine, University of Cape Town, K45, Old Main Building, Groote Schuur Hospital, Observatory, Cape Town, Western Cape, 7925, South Africa. elizabeth.allen@uct.ac.za.

Publication History

  1. Publication Status: New
  2. Published Online: 31 MAY 2013


Background


Description of the problem or issue

Manufacturers must demonstrate the safety, efficacy and quality of their investigational drug by way of clinical trials in order to achieve registration with regulatory authorities. Thereafter, they, and other stakeholders, continue to evaluate the product’s risk profile in subsequent trials, particularly in under-studied population groups (ICH 2004). Safety analyses in clinical trials largely involve identifying untoward medical occurrences, or harms, after exposure. These endpoints, which are not necessarily causally related to the intervention, are termed adverse events (AEs) (ICH 1996). AEs are assessed either on an individual case basis or by aggregate statistical synthesis to provide evidence of likely adverse drug reactions (ADRs) (CIOMS 2005). The processes involved in collecting, recording, analysing and reporting AEs and ADRs are generally considered more complex than those involved in evaluating efficacy, and the methods are relatively less developed (Huang 2011).

While some AEs may be ascertained from physical examinations or tests, there is a heavy reliance on reports from participants to detect subjective symptoms, for which the participant is the only source of information. There is no consensus on how these reports should be elicited from participants, although it is well known that the method of direct questioning influences the extent and nature of the data detected (FDA 2005). For instance, studies have found that giving participants a checklist of potential AEs yields many more reports than posing a general enquiry about changes in health (Bent 2006). However, it is uncertain whether one way of questioning is better than another for detecting ADRs (Wernicke 2005). Alongside the debate about the best methods for generating these data, there is also a margin for measurement error, and the use of sophisticated statistical methods on pooled data is undermined when trials use disparate elicitation methods (FDA 2005; Huang 2011). The situation is compounded by generally poor reporting of the methods used (Ioannidis 2004).

 

Description of the methods being investigated

We plan to investigate any method used in a clinical trial to elicit participant-reported AEs, such as a general enquiry, checklist, diary or memory aid, whether applied face-to-face or otherwise. Given the lack of consensus described above, the details of all methods studied will only be known once the review is under way. Should eligible studies also include a comparison of methods used to elicit information on other participant-reported variables (e.g. concomitant medications or medical histories), these will also be included in the review.

 

How these methods might work

Little is known about how different methods of elicitation work. A study that used qualitative methods to identify barriers to accurate and complete reporting of harms data suggests that the detail and terminology of questioning influence participants’ recognition of health issues and treatments. The authors also suggested that the perceived relative importance of health issues and treatments to the participant may be a factor (Allen 2011).

 

Why it is important to do this review

Current uncertainty about the best practices for eliciting participant-reported AEs in clinical trials leaves regulatory authorities, policy makers, healthcare professionals, patients and the public unsure of the extent to which results are accurate and comparable. It would therefore be useful to synthesise research that compares elicitation methods. This should contribute to knowledge about the methodological challenges, and possible solutions, for achieving better, or standardised, AE ascertainment in clinical trials.

 

Objectives


To systematically review the available literature on research that compares methods used within, or designed for, clinical drug trials/studies to elicit information from participants about the AEs that were defined in the protocol or in the preparation for the trial.

 

Methods


Criteria for considering studies for this review

 

Types of studies

  • Clinical drug trials that include a comparison of two or more methods to elicit participant-reported AEs
  • Research studies that have been performed outside the context of a clinical drug trial to compare two or more methods to elicit participant-reported AEs, and which could be used in clinical trials (evidenced by reference to such applicability)

 

Types of data

AEs elicited from participants taking part in any such clinical trial. For the purposes of this review, AEs are defined as those outcomes that were pre-specified as potential AEs to be investigated in the trial (including expected AEs and unexpected AEs, the latter of which will not be known in advance but are intended to be detected during the trial/study), recognizing that the trial itself might reveal that these are not actually increased in the intervention group compared with the control group. Concomitant medication and medical history data will also be included in this review, should eligible studies also include a comparison of methods used to elicit those.

 

Types of methods

Any combination of elicitation methods within- or between-participants. This may include, but is not limited to, unstructured or structured enquiries, checklists or questionnaires (e.g. by body system, symptom etc), diaries and memory aids.

 

Types of outcome measures

 

Primary outcomes

  • The effect measure (or number, proportion) and/or nature (e.g. characteristics, severity, causality assessment) of AEs identified by the method of elicitation, as defined by the original authors

 

Secondary outcomes

  • If relevant, the effect measure (or number, proportion) and/or nature (e.g. medication class) of concomitant medications and/or medical histories identified by the method of elicitation, as defined by the original authors
  • If relevant, summary results of qualitative methods used
  • If relevant, results of validation studies of the elicitation methods themselves.

 

Search methods for identification of studies

There will be no date, sample size or language restrictions in the searches. However, it is likely that only English-language reports will be included in full in the review, because of resource constraints on translation.

 

Electronic searches

Electronic search strategies will be developed in consultation with an experienced information specialist for relevant bibliographic databases. These are likely to include MEDLINE, Popline, The Cochrane Library (Cochrane Central Register of Controlled Trials and Cochrane Methodology Register), CINAHL, CAB Abstracts, BIOSIS, JSTOR, and SCISEARCH. A list of databases and search strategies will be finalized prior to starting the search, with subsequent iterations fully documented.

 

Searching other resources

The electronic searches will be supplemented by a review author checking the reference lists of included reports (and of relevant reports known to the authors, who are familiar with the research area) (Horsley 2011), handsearching relevant topic-area conference abstracts (e.g. International Conference on Pharmacoepidemiology and Therapeutic Risk Management, International Society of Pharmacovigilance annual meeting) (Scherer 2007), and searching online libraries of theses/dissertations. We will also ask known content experts about potentially eligible studies that we may have missed.

 

Data collection and analysis

 

Selection of studies

The first author will examine the titles and, where available, abstracts of identified citations in order to remove obviously irrelevant reports (e.g. non-human studies). Thereafter, two authors will independently review the remaining titles and abstracts for eligibility according to the Criteria for considering studies for this review in this protocol. The full text will be obtained for all reports that appear relevant, as well as for those whose title and abstract are insufficient to determine eligibility. Reports from the same piece of research will be linked together. Two review authors will determine final eligibility independently, with disagreements resolved by discussion (involving a third person with relevant experience if necessary). Review authors will not be blinded to any information, and all documents relating to this search and selection process, including the primary reason for non-inclusion, will be recorded.

 

Data extraction and management

One review author will extract data onto a data extraction form according to a pre-specified list, with a second review author checking 100% of fields. Disagreements will be resolved by consensus, consulting a third person with relevant experience if necessary. The list will be pre-tested on a minimum of two reports and modified accordingly. It is likely to include the following.

  • Citation, author, country and contact details
  • Source (journal or other)
  • Date
  • Study design and methods:
    • Country/countries where study was conducted
    • Setting (hospital, clinic etc)
    • Dates conducted
    • Design (description of trial arm(s), disease or indication [including whether acute or chronic], sampling strategy, intervention [if relevant], assessment(s) schedule, duration of follow-up)
    • Eligibility criteria
    • Elicitation techniques’ properties (including, if available, descriptions of their development, components and application methods; the training/experience of those who elicited information from participants; how AEs were described and by whom; whether verbatim participant reports were captured; the language or dialect used in conversations; and how reports were analysed, verified and recorded)
    • References to animal or human toxicology, pharmacovigilance databases, participants or patient/consumer experiences (including explanations for differential reporting, such as qualitative results, and underlying conceptual theories or orientations)
    • Validation technique method details, if relevant

  • Outcomes and results
    • The relative effect estimates derived from one method of ascertainment versus the other
    • The number/proportion and/or nature of AEs as defined by the authors of the original study
    • If relevant, the relative effect estimates/number/proportion and/or nature of concomitant medications and/or medical histories
    • If relevant, summary results of qualitative methods used
    • If relevant, statistical test results (including those from validation studies)

  • Key conclusions and limitations as reported by the original authors or as determined by the review author

 

Assessment of risk of bias in included studies

It is anticipated that the comparison of elicitation methods will largely be within-participants and not between-participants. Furthermore, it is possible that comparisons within-participants may involve the addition of one elicitation method to another (i.e. cumulative, rather than direct, comparison). Such heterogeneity in study designs will limit our ability to assess methodological quality and risk of bias using currently accepted methods. However, for reports that do compare outcomes between-participants, the risk of bias will be independently assessed by two review authors, according to The Cochrane Collaboration’s 'Risk of bias' tool, as far as is feasible in terms of the actual study design encountered (Cochrane 2011). For reports that cannot be assessed in this way, including those involving a within-participant comparison, the review authors will independently critically evaluate each study in terms of the potential impact of the study’s design and conduct on its findings. These general observations, which will be made within the framework of potential selection, performance, detection, attrition, reporting and other biases, will be discussed and reported in the full review.

It is acknowledged that 'Risk of bias' assessment is dependent on the completeness and quality of the original study report, and an attempt will be made to contact study authors to retrieve protocols or specific relevant missing information (Young 2011). Open questions will be used in these communications to minimize bias in reporting. Reports will not be excluded from this review on the basis of quality.

 

Measures of the effect of the methods

As mentioned above, the comparison of elicitation methods may be within-participants or between-participants, with the former possibly involving a cumulative rather than direct comparison. Therefore, the details of all measures of the effects of the methods studied will only be known once the review is under way. Effect measures from different methods can be compared by taking their ratio, or by examining the overlap of their 95% confidence intervals (Golder 2011).
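For illustration, the sketch below (in Python) shows one way such a comparison could be made: taking the ratio of two effect estimates on the log scale and deriving an approximate 95% confidence interval for the ratio from the width of each estimate's interval. The function name and the example odds ratios are hypothetical and purely illustrative, not drawn from any included study.

    import math

    def ratio_of_effect_estimates(or_a, ci_a, or_b, ci_b):
        """Compare two effect estimates (e.g. odds ratios) from different
        elicitation methods by taking their ratio on the log scale.
        ci_a and ci_b are (lower, upper) 95% confidence limits."""
        # Approximate standard errors from the 95% CI width on the log scale
        se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * 1.96)
        se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * 1.96)
        log_ratio = math.log(or_a) - math.log(or_b)
        se_ratio = math.sqrt(se_a ** 2 + se_b ** 2)
        return (math.exp(log_ratio),
                (math.exp(log_ratio - 1.96 * se_ratio),
                 math.exp(log_ratio + 1.96 * se_ratio)))

    # Hypothetical example: checklist OR 2.0 (1.4 to 2.9) versus general enquiry OR 1.3 (0.9 to 1.9)
    print(ratio_of_effect_estimates(2.0, (1.4, 2.9), 1.3, (0.9, 1.9)))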

 

Unit of analysis issues

The details of units of analysis will only be known once the review is under way.

 

Dealing with missing data

We will seek to minimize the amount of missing data through contact with authors as mentioned above (Young 2011). Thereafter, we will report any assumptions made about missing data, any statistical methods used to impute them and the potential impact of these methods on the findings of the review. Sensitivity analyses may be applied where assumptions have been made.

 

Assessment of heterogeneity

If pooled estimates are calculated, heterogeneity will be assessed using the Q test and the I² statistic (Higgins 2002).
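As a worked illustration of these statistics, the Python sketch below computes Cochran's Q under inverse-variance weighting and derives I² as the proportion of variability beyond chance; the study estimates and standard errors shown are hypothetical.

    def cochran_q_and_i2(estimates, std_errors):
        """Cochran's Q and I-squared for a set of study effect estimates
        (e.g. log odds ratios) with their standard errors."""
        weights = [1 / se ** 2 for se in std_errors]
        pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
        q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, estimates))
        degrees_of_freedom = len(estimates) - 1
        i2 = max(0.0, (q - degrees_of_freedom) / q) * 100 if q > 0 else 0.0
        return q, i2

    # Hypothetical log odds ratios and standard errors from three studies
    print(cochran_q_and_i2([0.6, 0.9, 0.2], [0.25, 0.30, 0.20]))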

 

Assessment of reporting biases

If a meta-analysis is conducted, reporting bias will be explored using a funnel plot, whereby the studies’ measures of effects will be plotted against a measure of precision (Sterne 2001). We do not, however, anticipate sufficient eligible studies to explore asymmetry of the plot statistically.
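The sketch below illustrates the intended plot, assuming effect estimates are held on the log scale and standard error is used as the measure of precision, with the vertical axis inverted so that more precise studies sit towards the top; the data are hypothetical and for illustration only.

    import matplotlib.pyplot as plt

    def funnel_plot(log_effects, std_errors):
        """Plot each study's effect estimate (log scale) against its
        standard error, inverting the y-axis in the conventional way."""
        fig, ax = plt.subplots()
        ax.scatter(log_effects, std_errors)
        ax.invert_yaxis()  # more precise (smaller SE) studies towards the top
        ax.set_xlabel("Log effect estimate")
        ax.set_ylabel("Standard error")
        ax.set_title("Funnel plot of included studies")
        return fig

    # Hypothetical data for illustration only
    funnel_plot([0.6, 0.9, 0.2, 0.5], [0.25, 0.30, 0.20, 0.35]).savefig("funnel.png")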

 

Data synthesis

It is anticipated that meta-analysis is unlikely to be feasible because of the heterogeneity of included studies, unless sufficiently similar studies are identified. Results are therefore likely to be described in narrative form. However, if meta-analysis is appropriate for any outcome, overall pooled estimates will be calculated using random-effects models, as significant heterogeneity is anticipated. The review report will provide a summary description of each included study’s methods, results, strengths and limitations.
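Should pooling prove appropriate, one common way to obtain a random-effects estimate is the DerSimonian and Laird method; the Python sketch below is illustrative only (the protocol does not commit to a particular estimator) and uses hypothetical inputs.

    import math

    def dersimonian_laird(estimates, std_errors):
        """Pool study effect estimates (e.g. log odds ratios) with a
        DerSimonian-Laird random-effects model: estimate the between-study
        variance (tau^2), then re-weight and pool."""
        w = [1 / se ** 2 for se in std_errors]
        fixed = sum(wi * y for wi, y in zip(w, estimates)) / sum(w)
        q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, estimates))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(estimates) - 1)) / c)
        w_star = [1 / (se ** 2 + tau2) for se in std_errors]
        pooled = sum(wi * y for wi, y in zip(w_star, estimates)) / sum(w_star)
        se_pooled = math.sqrt(1 / sum(w_star))
        return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

    # Hypothetical log odds ratios and standard errors
    print(dersimonian_laird([0.6, 0.9, 0.2], [0.25, 0.30, 0.20]))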

 

Subgroup analysis and investigation of heterogeneity

If necessary, we will conduct subgroup analyses by study or elicitation method.

 

Sensitivity analysis

If necessary, the potential impact of risk of bias and assumptions about missing data will be investigated by sensitivity analyses.

 

Acknowledgements


The review authors would like to acknowledge Mike Clarke for advice during the preparation of this protocol.

 

Contributions of authors


EA wrote the protocol with input from the co-authors.

 

Declarations of interest


The review authors have no interests to declare.

 

Sources of support


Internal sources

  • No sources of support supplied

 

External sources

  • Bill and Melinda Gates Foundation, USA.
    This review was supported by the ACT Consortium, which is funded through a grant from the Bill and Melinda Gates Foundation to the London School of Hygiene and Tropical Medicine.

References

Additional references

Allen 2011
  • Allen EN, Barnes KI, Mushi AM, Massawe I, Staedke SG, Mehta U, et al. Eliciting harms data from trial participants: how perceptions of illness and treatment mediate recognition of relevant information to report. Trials 2011;12(Suppl 1):A10.
Bent 2006
  • Bent S, Padula A, Avins AL. Brief communication: Better ways to question patients about adverse medical events: a randomized, controlled trial. Annals of Internal Medicine 2006;144(4):257-61.
CIOMS 2005
  • Council for International Organizations of Medical Sciences. Management of safety information from clinical trials. Report of CIOMS Working Group VI. Geneva: CIOMS, 2005.
Cochrane 2011
  • Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Vol. 5.1.0, Chichester: The Cochrane Collaboration, 2011.
FDA 2005
  • US Food and Drug Administration. Reviewer guidance: Conducting a clinical safety review of a new product application and preparing a report on the review. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm072974.pdf 2005.
Golder 2011
  • Golder S, Loke YK, Bland M. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Medicine 2011;8(5):e1001026. [PUBMED: 21559325]
Higgins 2002
Horsley 2011
Huang 2011
ICH 1996
  • International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). Guideline for Good Clinical Practice (Topic E6/R1). http://www.ich.org/products/guidelines/efficacy/article/efficacy-guidelines.html. 1996.
ICH 2004
  • International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). Pharmacovigilance planning: notice for guidance on planning pharmacovigilance activities (Topic E2E). http://www.ich.org/products/guidelines/efficacy/article/efficacy-guidelines.html. 2004.
Ioannidis 2004
  • Ioannidis JP, Evans SJ, Gøtzsche PC, O'Neill RT, Altman DG, Schulz K, et al. CONSORT Group. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Annals of Internal Medicine 2004;141(10):781-8.
Scherer 2007
  • Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews 2007, Issue 2. [DOI: 10.1002/14651858.MR000005.pub3]
Sterne 2001
Wernicke 2005
  • Wernicke JF, Faries D, Milton D, Weyrauch K. Detecting treatment emergent adverse events in clinical trials: a comparison of spontaneously reported and solicited collection methods. Drug Safety 2005;28(11):1057-63.
Young 2011