Accuracy of assessment instruments for patients' competence to consent to medical treatment or research.

  • Protocol
  • Diagnostic

Authors


Abstract

This is the protocol for a review and there is no abstract. The objectives are as follows:

To assess the reliability and validity of the index tests for competence assessment versus the reference standard in people of any age. Although a gold standard for competence does not exist and the reference standard may be imperfect, we will examine whether using a structured assessment instrument instead of an expert judgement would be possible without compromising accuracy.

To examine the accuracy of standardised competence assessment instruments in the subpopulation of patients under 18 years of age, with deficits in competence due to developmental stage.

Background

The informed consent model assigns patients autonomy over medical interventions and research participation. Where vulnerable patient groups including children and adolescents are concerned, their degree of competence in exercising such autonomy is questionable. Partly because of concerns about the adequacy of consent procedures in vulnerable populations, interest in research into decisional capacity has grown in recent years.

Consent is required for all aspects of medical care, for preventive, diagnostic or therapeutic interventions and research participation. Competence to consent, as we use it in this review, is the clinical concept of the ability of a person to consent to medical interventions or clinical research. The clinical concept of competence may be distinct from the legal one. By law clinicians are required to determine whether patients are competent to give their consent. Strictly speaking, incompetence denotes a legal status that in principle should be determined by a court. Resorting to judicial review in every case of suspected incompetence, however, would very heavily burden both the medical and legal systems; there is therefore good reason to continue the traditional practice of having clinicians determine patients' competence (Appelbaum 1988).

Within the context of daily medical practice, competence is usually assessed implicitly. However, in some clinical settings competence regularly becomes problematic, especially when concerns arise about a person's capacity to make well-considered medical decisions. For example, if a 14-year-old boy with acute lymphatic leukaemia is eligible for participation in a drug trial, how should the researcher assess whether his decision to participate is a competent one? If a 15-year-old boy with germ cell cancer and anaemia expresses his wish, for religious reasons, not to receive a blood transfusion during planned surgery, is this decision a competent one? A 63-year-old man with type 2 diabetes mellitus and schizophrenia is recommended a below-the-knee amputation for peripheral vascular disease but declines; how can his physician assess whether his choice is based on competent decision-making? A woman of 72, living with dementia and anaemia, is advised to undergo an investigation to trace the location of blood loss, but refuses - is she competent to give an informed refusal?

The most extensive and influential research on patients' competence to consent was conducted by the MacArthur Research Network on Mental Health and the Law (Appelbaum 1995), examining competence standards identified by the legal system as relevant to decision-making competence in the USA, UK and many other nations. The four legally-relevant abilities that were addressed were: the ability to state a choice; to understand relevant information; to appreciate the nature of one's own situation; and to reason with information. These four abilities have been generally accepted as the standard for patients' competence to consent in clinical treatment and research practice. Several interview procedures to operationalise these standards and to measure abilities have been developed in recent decades.

The determination of patients’ competence is critical in striking a proper balance between respecting the autonomy of people who are capable of making informed decisions, and protecting those whose abilities are impaired. There is an undeniable need for standardised and accurate competence assessment methods.

Target condition being diagnosed

The target condition we will focus on in this review is patients' competence to consent to medical treatment or clinical research. We will cover groups of patients with and without developmental or mental deficits, or both.

Index test(s)

A variety of methods for competence assessment exist, mostly consisting of a structured or semi-structured interview format. Some instruments examine a real-life medical decision, while others are based on clinical vignettes. The instruments include the MacArthur Competence Assessment Tool for Clinical Research (MacCAT-CR), the MacArthur Competence Assessment Tool for Treatment (MacCAT-T), and the Hopkins Competency Assessment Test (HCAT). Content, rating and cut-offs vary between index tests and there is no systematic study on their validity and reliability.

Clinical pathway

The process of obtaining informed consent starts with providing appropriate information about the proposed medical intervention and alternative options, or about the scientific research project. An essential element of consent is that informed choices are made voluntarily, without coercion or force, and that the person is competent to make such a decision. The starting points for assessing competence are its task and context specificity (Culver 1982). This means that competence should not be conceived as an all-or-nothing judgement implying that the patient is generally competent or generally incompetent. Instead, assessment of competence should be regarded as a specific judgement, at a specific moment, of whether the patient is able to complete the concrete task that they are facing (Culver 1982). The law imposes a dichotomy (competent versus incompetent) on what, from a clinical perspective, is a spectrum of capacities (Shaw 2001).

In law, competence has traditionally been regarded as a function of age (De Lourdes Levy 2003). The statutory age of majority is generally set at 18 years, although exceptions exist in individual states and countries. Competence is presumed present in all adults and is rarely examined as long as the outcomes of decisions concur with the physician's recommendations (Beidler 2001).

Competence to consent may be reduced by several influences such as cognitive impairment, developmental immaturity, certain psychiatric symptoms, and situational factors such as the complexity of the information. Children are deemed competent if they appear to understand information designed for their level of comprehension to an extent appropriate to the nature and scope of the decision. Internationally the statutory age limits differ for clinical research: the lower age limit varies from 7 to 15 years, while the upper age limit is set at 17 or 18 years (Altavilla 2011). Also, for treatment decisions various age limits exist: in some countries autonomous decision-making is lawful only from 18 years onwards, but in other countries minors are allowed to take healthcare decisions from a fixed age below legal majority starting from 12 years (Grisso 1978; Stultiëns 2007). Parents decide for children who are younger than the lower age limit, as these children are considered by definition incompetent to act for themselves. For these children, no actual assessment of competence is necessary. For children between the two age limits, informed consent is required both from children and parents, provided the child is judged competent to decide. Above the designated upper age limit, children are deemed adult in medical decision-making.

In cases of incompetence in adults, family members are usually allowed to make proxy decisions for general treatment (Saks 2008). In the context of research, laws are less clear, but generally some kind of proxy decision-making is allowed (Saks 2008). In emergency contexts, a person's stress, pain or diminished consciousness may impair their competence. For treatment decisions, the most likely result of the informed consent process under these circumstances is 'uninformed trust' (Gammelgaard 2004). Decisions on research participation in the emergency context are a matter of debate; however, formally some kind of proxy decision-making by a legally authorised person is generally allowed (Gammelgaard 2004; Saks 2008).

Possibilities to advance patients' competence rely greatly on improved information provision. Decision aids have been developed to provide adult patients and their families with the relevant information about the available options and possible outcomes, to support them in making a decision that is aligned with their preferences. In children, decision-making can be facilitated by breaking the process down into smaller but linked choices. Communication difficulties can be overcome by innovative and age-appropriate techniques to convey information (Hein 2012). In conversation, children need clearly-worded information tailored to their comprehension level. In current practice, communication with parents and children has been found to be often flawed, and children are sometimes not even fully informed (Hein 2012).

At present, clinicians tend to make intuitive assessments of children’s and adolescents’ competence, because no standardised method is available to test it objectively. Currently clinicians base their competence judgements on information such as age or school type. It is recognised that age is, at best, a proxy for developmental capacity, and that experience, maturity and psychological state are key determining factors (Hein 2012). In adults, clinicians may not know which standard to apply, and probably use many factors that are not formally recognised in law when assessing their patients' ability to consent to treatment or research (Tan 2001). In older adults and psychiatric patients, clinicians might find it difficult to distinguish between mental status examination and competence assessment (Vellinga 2004). Several methods of structured competence assessment exist for adults, varying widely in procedure, reliability and validity. However, in the research context few investigators assess understanding of the research protocol and competence prior to accepting consent, and the use of standardised tools is the exception rather than the rule (Kon 2006). Data suggest that the performance of competence assessments is often sub-optimal and that the reliability of unstructured judgements is correspondingly poor. Providing clinicians with the generally accepted legal standards for competence improves their judgements and significantly increases inter-rater agreement (Appelbaum 2007). These legal standards embody the four capacities: to communicate a choice, to understand the relevant information, to appreciate the medical consequences of the situation, and to reason about treatment choices. Clinicians who are aware of these relevant criteria should be able to assess a patient's competence (Appelbaum 2007).

Alternative test(s)

An assessment measure for competence to consent should have close conceptual relationships with the relevant standards of competence. This implies that more general measures of cognitive abilities like reading ability or Mini Mental State Examination (MMSE) would not be appropriate for a valid test of the specific context-dependent competence to consent to the research or treatment on offer.

Rationale

This review addresses the following question: which instruments are best qualified to assess patients’ competence to consent to clinical care or research? We will analyse the validity and reliability of the assessment instruments.

Objectives

To assess the reliability and validity of the index tests for competence assessment versus the reference standard in people of any age. Although a gold standard for competence does not exist and the reference standard may be imperfect, we will examine whether using a structured assessment instrument instead of an expert judgement would be possible without compromising accuracy.

Secondary objectives

To examine the accuracy of standardised competence assessment instruments in the subpopulation of patients under 18 years of age, with deficits in competence due to developmental stage.

Methods

Criteria for considering studies for this review

Types of studies

We will include prospective observational studies on test accuracy. We will exclude comparative test accuracy studies, in view of the expected challenges related to the imperfect reference standard. We will also exclude diagnostic case studies.

Participants

This review deals only with assessment of competence regarding consent to treatment and consent to scientific research programmes. Most assessment instruments can be applied to heterogeneous patient populations. We will include children, adolescents, adults and elderly populations with different conditions, including medical conditions, cognitive impairment, psychiatric disorders and co-morbidities. The target condition, competence to consent, can vary over time and be influenced by many factors, so the time period between index test and reference test must be short enough to be reasonably sure that the target condition did not change between the two tests.

Index tests

In an effort to standardise and hence increase the reliability and validity of competence evaluations, several formal assessment instruments have been developed. Some instruments offer a reported cut-off point. Other instruments do not provide a cut-off point, stating that a serious deficit in any of the tested domains may translate to a clinical opinion of incompetence. We anticipate variation in threshold and lack of a consensus about thresholds as challenges for this review. Data on validity and reliability of the index tests are not available in standardised summaries.

We will exclude instruments that do not cover all four relevant criteria (T for treatment context and CR for clinical research context): Aid to Capacity Evaluation (T) (Etchells 1999), Brief Informed Consent Test (CR) (Buckless 2003), California Scale of Appreciation (CR) (Saks 2002), Competency Assessment Interview (T & CR) (Stanley 1984; Stanley 1988), Deaconess Informed Consent Comprehension Test (CR) (Miller 1996), Direct Assessment of Decision-Making Capacity (T) (Fitten 1990), Evaluation to Sign Consent (CR) (Moser 2002), Hopemont Capacity Assessment Interview (T) (Edelstein 1999; Pruchno 1995; Moye 2004), Hopkins Competency Assessment Test (T) (Janofsky 1992), Informed Consent Survey (CR) (Dunn 2002; Wirshing 1998), Ontario Competency Questionnaire (T) (Draper 1990), Quality of Informed Consent questionnaire (CR) (Joffe 2001), Two-Part Consent Form (T & CR) (Roth 1982), University of California San Diego Brief Assessment of Capacity to Consent (T) (Jeste 2007). Most vignette methods (Sachs 1994; Saks 2002; Schmand 1999; Vellinga 2004) will not be included, as we judged their assessment of appreciation to be insufficient.

We will include the following index tests, comprising all four relevant criteria:

  • Assessment of Capacity to Consent to Treatment (T), a structured interview and three vignettes, studied in adults with dementia, schizophrenia and controls (Moye 2007).

  • Assessment of Consent Capacity for Treatment (T), three vignettes taking 45 minutes to administer, used in adults with mild or moderate retardation and in adults without (Cea 2003).

  • Competency Interview Schedule (T), a structured interview, applied in people with major depression (Bean 1994).

  • Competency to Consent to Treatment Instrument (T), hypothetical vignettes and a structured interview, taking 20 to 25 minutes administration time. It is used in people with Alzheimer's disease, dementia, Parkinson's disease and controls (Marson 1995; Marson 1997a; Marson 2000; Dymak 2001).

  • Competency Questionnaire (CQ), a 15-item questionnaire covering all four capacities, developed by Appelbaum in 1979 (Billick 1998). Each question is rated 0 or 1, and the ratings are summed for an overall score. Several modified versions have been developed: CQ - Child Psychiatric (T), a 17-item questionnaire to test competence in children to consent to psychiatric hospital care and treatment (Billick 1998); CQ-Peds (T), a 19-item questionnaire for use in paediatrics, used for inpatients and outpatients between 5 and 18 years of age (Billick 2001); and CQ-Med (T), for assessing competence of general medical patients to consent to hospitalisation (Billick 2009).

  • MacArthur Competence Assessment Tool for Clinical Research (MacCAT-CR) (CR), a semi-structured interview format developed by Appelbaum and Grisso in 2001 (Appelbaum 2001), that guides clinicians and patients through the process of information disclosure required for informed consent, combined with an assessment of the patient’s capacities, in approximately 15 to 20 minutes. The instrument provides scores for each subscale: 0 - 26 for understanding, 0 - 6 for appreciation, 0 - 8 for reasoning, and 0 - 2 for expressing a choice. MacCAT-CR is based on the structure of the MacArthur Competence Assessment Tool for Treatment (MacCAT-T).

  • MacArthur Competence Assessment Tool for Treatment (MacCAT-T) (T) was preceded by the original MacArthur instruments (Understanding of Treatment Disclosures, Perception of Disorder, Thinking Rationally About Treatment, Expressing a Choice). MacCAT-T was developed by Grisso and Appelbaum in 1998 and is a semi-structured interview format taking approximately 15 to 20 minutes to administer (Grisso 1998). The instrument provides scores for each subscale: 0 - 6 for understanding, 0 - 4 for appreciation, 0 - 6 for reasoning and 0 - 2 for expressing a choice. The MacCAT scales do not offer a total score or a cut-off for competence; instead, the scores on the subscales must be weighed by the interviewer. Of the variety of assessment instruments, the MacCAT scales have received the most empirical support. They have been tested in particular in samples of people with dementia, mental disabilities, schizophrenia and other psychiatric disorders (Cairns 2005; Owen 2008; Palmer 2005; Vollmann 2003).

  • Older Adults’ Capacity to Consent to Research (CR), a brief tool for assessing older adults’ capacity to consent, consisting of four items each testing one of the four capacities, and used in nursing home residents and community-dwelling older adults (Lee 2010).

  • Structured Interview for Competency and Incompetency Assessment Testing and Ranking Inventory (T), a 20-minute structured interview examining all four aspects of competence, and used in psychiatric and medical patients (Tomoda 1997).
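As a minimal illustration of how subscale scores from instruments like those above could be recorded and checked, here is a sketch using the MacCAT-T subscale ranges listed earlier; the function name and data layout are our own invention, not part of any instrument, and other instruments use different ranges (e.g. MacCAT-CR: 0 - 26 / 0 - 6 / 0 - 8 / 0 - 2).

```python
# Subscale ranges as reported above for the MacCAT-T.
MACCAT_T_RANGES = {
    "understanding": (0, 6),
    "appreciation": (0, 4),
    "reasoning": (0, 6),
    "expressing_choice": (0, 2),
}

def validate_maccat_t(scores):
    """Check that each subscale score falls inside its permitted range.

    The MacCAT scales give no total score or competence cut-off; the
    interviewer weighs the subscales, so this helper only validates the
    recorded scores and never classifies the patient.
    """
    for subscale, (lo, hi) in MACCAT_T_RANGES.items():
        value = scores[subscale]
        if not lo <= value <= hi:
            raise ValueError(f"{subscale} score {value} outside {lo}-{hi}")
    return scores
```

A hypothetical record such as `{"understanding": 5, "appreciation": 3, "reasoning": 4, "expressing_choice": 2}` would pass; a score of 9 on any subscale would raise an error.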

Target conditions

The target condition is the patient's competence at that moment to consent to treatment or research participation. The outcome is binary: yes or no.

Reference standards

Agreement between unstructured clinical competence judgements by physicians is poor, and no better than chance. Providing clinicians with information regarding the legal standards improves their judgements and significantly increases the inter-rater agreement to moderate (kappa 0.46) (Appelbaum 2007; Kim 2011). These legal standards embody the four previously-mentioned capacities: to communicate a choice, to understand the relevant information, to appreciate the medical consequences of the situation, and to reason about treatment choices. Clinicians who are aware of these relevant criteria should be able to assess a patient's competence, and their judgement is generally considered to establish the reference standard (Appelbaum 2007; Fazel 1999; Janofsky 1992; Kim 2001). However, a limitation of this approach is the frequent discordance between expert competence judgements, which would lead to inconsistencies in the reference standard. Where the experts give their clinical judgement based on their knowledge of the four relevant criteria, the index tests consist of an operationalisation of these criteria into interview questions and do not necessarily have to be administered by an expert. When agreement between reference standard and index test is high, the index test performs as well as the reference standard. Poor performance of the index test could be interpreted in different ways: it could result from imperfections in the reference standard, or the index test may not offer an accurate assessment of competence. Slightly different reference standards in each study would preclude a quantitative meta-analysis, and it would not be possible to demonstrate superiority of any of the index tests.
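The inter-rater agreement statistic quoted above (kappa) can be computed from two raters' paired binary judgements; the following sketch uses invented ratings for eight hypothetical patients, not data from any study.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' binary (1 = competent) judgements."""
    n = len(rater_a)
    # Observed agreement: proportion of patients on whom the raters concur.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal frequencies.
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgements by two clinicians on the same eight patients.
kappa = cohens_kappa([1, 1, 1, 0, 0, 1, 0, 1],
                     [1, 1, 0, 0, 1, 1, 0, 1])  # ≈ 0.47, i.e. 'moderate'
```

Kappa corrects the raw proportion of agreement for the agreement expected by chance alone, which is why unstructured judgements can agree often yet still score poorly.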

Search methods for identification of studies

We will use a single search strategy for this review.

Electronic searches

To identify all relevant studies, we will search the following databases: MEDLINE, EMBASE, PsycINFO. We will use the search terms and strategy summarised in Appendix 1. We will restrict the searches to human studies. We will not limit the search by language or publication status. We will perform cited reference checking of the initially included articles in Web of Science.

Searching other resources

Not applicable.

Data collection and analysis

Selection of studies

Two review authors (IH and MM) will initially screen all the titles and abstracts identified by the search strategies for eligibility. Two review authors (IH, MM and/or LG) will retrieve potentially relevant papers in full and assess them independently using a screening form developed for this review. We will list the excluded studies in the Characteristics of Excluded Studies Table. We will resolve any discrepancies by discussion. Where two review authors cannot reach agreement, we will consult the third review author. We will include study reports in English, Dutch and German; if we find suitable studies in other languages we will attempt to get them translated. We will identify studies by the surname of the first author and the year of publication.

Data extraction and management

Two review authors (IH and LG) will independently extract a standard set of data from each study using a tailored data extraction form (see Appendix 2), resolving any discrepancies by discussion. In cases where only a subgroup of participants meet the review inclusion criteria, we will extract and present data for that particular subgroup only.

We will extract information about the capacity domains assessed.

Results of the reference test for competence to consent are dichotomous; positive is defined as 'competent' and negative as 'not competent'. Results of the index tests may be mixed: some may offer a reported cut-off point and provide dichotomous results, while others may not provide a cut-off point but report that a serious deficit in any of the tested domains translates to a clinical opinion of incompetence. In that case we will compare the various given cut-offs with the reference standard. For each comparison of index test with reference test, we will extract data on the number of true positives, true negatives, false positives and false negatives in the form of a two-by-two table.
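The comparison described above amounts to dichotomising the index-test scores at a candidate cut-off and cross-tabulating the result against the reference judgement. A minimal sketch, with invented scores and an arbitrary cut-off:

```python
def two_by_two(index_scores, reference, cutoff):
    """Cross-tabulate a dichotomised index test (score >= cutoff means
    'competent') against the binary reference judgement (1 = competent)."""
    tp = fp = fn = tn = 0
    for score, ref in zip(index_scores, reference):
        positive = score >= cutoff
        if positive and ref:
            tp += 1  # index test and reference both say competent
        elif positive:
            fp += 1  # index test competent, reference not competent
        elif ref:
            fn += 1  # index test not competent, reference competent
        else:
            tn += 1  # both say not competent
    return tp, fp, fn, tn
```

Repeating this for each cut-off reported for an instrument yields one two-by-two table per cut-off, which is the form in which we will extract the data.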

Assessment of methodological quality

Two review authors (IH and LG) will independently assess the quality of each individual study using the checklist adapted from the QUADAS-2 tool (Whiting 2011); the criteria are summarised in Appendix 3. We will answer each question on the checklist with a yes/no response, or note it as unclear if insufficient information was reported to allow us to make a judgement; we will document the reasons for this judgement. We will assess whether the study design and patient selection were appropriate, and whether the reference test and index test were rated blind to the results of each other. We will also assess whether cut-off values were prespecified. Given that most instruments will not provide this information, we will not exclude studies that do not offer a prespecified cut-off but will report it as a possible risk of bias. Furthermore, we will assess whether the reference standard consisted of judgement by an expert aware of the relevant criteria, which must be accounted for by a description of the training the expert received. We will assess whether all patients received both index test and reference test, and will exclude patient groups who did not. If an atypical reference test was used, for example competence judgement by a panel of clinicians who are not aware of the relevant criteria, we will also exclude these patient groups. We will assess the time interval between reference standard and index test; we consider two weeks justifiable. Although in cases of acute illness the level of competence could change markedly over two weeks, we do not intend to exclude studies designed for a population of patients with chronic diseases where researchers need some time to enrol them. We will summarise the methodological quality judgements graphically.

Statistical analysis and data synthesis

We will examine the validity and reliability of the competence assessment instruments. Content validity (the degree to which the instrument’s content reflects the universe of content relevant to the constructs being measured) is usually determined on the basis of expert consensus. In this review we will examine whether each instrument’s construct is consistent with the widely-accepted four-capacities model.

We will assess criterion validity, the degree to which scores on a scale are associated with the standard, in terms of inter-correlations. Other useful values are the scale’s sensitivity (the rate of valid positives, in this case 'competent') and specificity (the rate of valid negatives, 'not competent'). The accepted standard against which criterion validity is evaluated may be an established measure. In the absence of a gold standard for measuring competence, we will use the expert judgement of clinicians aware of the four relevant criteria as the reference standard.

If we have sufficient data, we will summarise results for the patient population under 18 years of age with deficits due to developmental stage only.

Marked inconsistency in the reference standards of the included studies would be particularly difficult to deal with quantitatively. In this systematic review we will therefore present the descriptive elements of the index tests without performing a meta-analysis; such a descriptive synthesis may be equally important in this first attempt at reviewing the evidence on this topic.

We anticipate that within some index tests various cut-off values will have been applied to individuals. In that case we will perform the above-mentioned analyses for subgroups of studies that report similar cut-offs for that index test.

Investigations of heterogeneity

Not applicable.

Sensitivity analyses

Not applicable.

Assessment of reporting bias

We judge it acceptable not to assess reporting bias, given that very little is known about publication bias in test accuracy studies and that extrapolating from publication bias in effectiveness research may not be appropriate.

Acknowledgements

Many thanks are owed to Lotte Gelens (LG) and Marjolein Meester (MM) for their accomplished assistance in reviewing articles.

Appendices

Appendix 1. Search Strategy

Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations and Ovid MEDLINE(R) 1946 to Present

Search Date: 9 July 2012

1. informed consent.ab,sh,ti.

2. (decision* adj3 capacit*).ab,ti.

3. (capacit* adj3 consent).ab,ti.

4. disclosed information.ab,ti.

5. (information adj3 reasoning).ab,ti.

6. (express* adj3 choice).ab,ti.

7. consent form?.ab,ti.

8. mental capacity.ab,sh,ti.

9. mental competen*.ab,ti.

10. mental competency/

11. mental incompeten*.ab,ti.

12. patient participation.ab,sh,ti.

13. or/1-12 [competence to consent]

14. decision making/ or decision making.ab,ti.

15. consensus/ or consensus.ab,ti.

16. or/13-15 [competence to consent sensitive]

17. (macarthur adj3 tool).ab,ti.

18. maccat.ab,ti.

19. consent questionnaire.ab,ti.

20. deaconess informed.ab,ti.

21. two part consent.ab,ti.

22. (california adj3 appreciation).ab,ti.

23. vignette method?.ab,ti.

24. informed consent survey.ab,ti.

25. competency interview schedule.ab,ti.

26. assessment of consent capacity for treatment.ab,ti.

27. Hopemont capacity assessment interview.ab,ti.

28. aid to capacity evaluation.ab,ti.

29. direct assessment of decision making capacity.ab,ti.

30. cq peds.ab,ti.

31. competency questionnaire.ab,ti.

32. SICIATRI.ab,ti.

33. structured interview for competenc*.ab,ti.

34. (hopkins adj2 assessment).ab,ti.

35. brief informed consent.ab,ti.

36. or/17-35 [all relevant psychological tests]

37. (psychiatric adj3 scale?).ab,ti.

38. (psychiatric adj3 test?).ab,ti.

39. (psychologic* adj3 scale?).ab,ti.

40. (psychologic* adj3 test?).ab,ti.

41. (neuropsycholog* adj3 test?).ab,ti.

42. (neuropsycholog* adj3 scale?).ab,ti.

43. (neuropsychiatric adj3 test?).ab,ti.

44. (neuropsychiatric adj3 scale?).ab,ti.

45. psychological tests/ or exp aptitude tests/ or language tests/ or exp neuropsychological tests/ or exp personality tests/

46. exp Psychiatric Status Rating Scales/

47. or/37-46 [psychological tests general]

48. 36 or 47 [psychological tests sensitive]

49. 16 and 48

 

OVIDSP PsycINFO, 1806 to Present

Search Date: 11 July 2012

1. (macarthur adj3 tool).ab,id,ti,tm.

2. maccat.ab,id,ti,tm.

3. consent questionnaire.ab,id,ti,tm.

4. deaconess informed.ab,id,ti,tm.

5. two part consent.ab,id,ti,tm.

6. (california adj3 appreciation).ab,id,ti,tm.

7. vignette method?.ab,id,ti,tm.

8. informed consent survey.ab,id,ti,tm.

9. competency interview schedule.ab,id,ti,tm.

10. assessment of consent capacity for treatment.ab,id,ti,tm.

11. Hopemont capacity assessment interview.ab,id,ti,tm.

12. aid to capacity evaluation.ab,id,ti,tm.

13. cq peds.ab,id,ti,tm.

14. competency questionnaire.ab,id,ti,tm.

15. SICIATRI.ab,id,ti,tm.

16. structured interview for competenc*.ab,id,ti,tm.

17. (hopkins adj2 assessment).ab,id,ti,tm.

18. brief informed consent.ab,id,ti,tm.

19. (competency adj3 interview).ab,id,ti,tm.

20. (Hopemont adj4 interview).ab,id,ti,tm.

21. or/1-20 [specific psych. tests]

22. ("2220" or "2222" or "2223" or "2224" or "2225" or "2226").cc.

23. exp testing/

24. 22 or 23 [psych. tests / testing]

25. decision making.ab,id,sh,ti.

26. informed consent.ab,id,sh,ti.

27. voluntary consent.ab,id,ti.

28. (decision* adj1 capacit*).ab,id,ti.

29. or/25-28 [informed consent specific]

30. 24 and 29

31. ("3400" or "3410" or "3430" or "3450" or "3470").cc.

32. 30 not 31

33. 21 or 32

34. limit 33 to ("0100 journal" or "0110 peer-reviewed journal" or "0400 dissertation abstract")

 

OVIDSP Embase, 1947 to Present

Search Date: 9 July 2012

1. informed consent.ab,sh,ti.

2. (decision* adj3 capacit*).ab,ti.

3. (capacit* adj3 consent).ab,ti.

4. disclosed information.ab,ti.

5. (information adj3 reasoning).ab,ti.

6. (express* adj3 choice).ab,ti.

7. consent form?.ab,ti.

8. mental capacity.ab,sh,ti.

9. mental competen*.ab,ti.

10. mental incompeten*.ab,ti.

11. patient participation.ab,sh,ti.

12. or/1-11 [competence to consent]

13. decision making/ or decision making.ab,ti.

14. consensus/ or consensus.ab,ti.

15. or/12-14 [competence to consent sensitive]

16. (macarthur adj3 tool).ab,ti.

17. maccat.ab,ti.

18. consent questionnaire.ab,ti.

19. deaconess informed.ab,ti.

20. two part consent.ab,ti.

21. (california adj3 appreciation).ab,ti.

22. vignette method?.ab,ti.

23. informed consent survey.ab,ti.

24. competency interview schedule.ab,ti.

25. assessment of consent capacity for treatment.ab,ti.

26. Hopemont capacity assessment interview.ab,ti.

27. aid to capacity evaluation.ab,ti.

28. direct assessment of decision making capacity.ab,ti.

29. cq peds.ab,ti.

30. competency questionnaire.ab,ti.

31. SICIATRI.ab,ti.

32. structured interview for competenc*.ab,ti.

33. (hopkins adj2 assessment).ab,ti.

34. or/16-33 [all relevant psychological tests]

35. psychometric?.ab,ti.

36. psychometry.ab,sh,ti.

37. (psychiatric adj3 scale?).ab,ti.

38. (psychiatric adj3 test?).ab,ti.

39. (psychologic* adj3 scale?).ab,ti.

40. (psychologic* adj3 test?).ab,ti.

41. (neuropsycholog* adj3 test?).ab,ti.

42. (neuropsycholog* adj3 scale?).ab,ti.

43. (neuropsychiatric adj3 test?).ab,ti.

44. (neuropsychiatric adj3 scale?).ab,ti.

45. neuropsychological test/

46. psychological rating scale/

47. or/35-46 [psych. tests sensitive]

48. 15 and 47

49. or/37-46

50. 15 and 49

51. 50 not 34

52. 34 or 50

Appendix 2. Data Extraction Form

Data Extraction Form

  
Study ID: First author, year of publication

Clinical features and settings: Presenting conditions, clinical setting

Participants: Sample size, age, sex, ethnicity, country

Study design: Were participants enrolled retrospectively or prospectively? Was the sampling method consecutive or random? Duration between reference test and index test

Target condition: Competence to consent to treatment or research participation

Reference standard: The reference standard test used

Index tests: The index test used. Details of the test content and operators, including any special training provided. Cut-off point used

Results: Number of true positives, true negatives, false positives, false negatives

Conflicts of interest: Statement regarding conflicts of interest (whether present or absent)

Notes: Source of funding
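For illustration only (not part of the protocol's methods): the true/false positive and negative counts recorded under "Results" form a 2x2 table from which the standard accuracy measures, sensitivity and specificity, are computed. A minimal sketch with hypothetical counts; the function name is ours:

```python
def accuracy_measures(tp, fp, fn, tn):
    """Compute (sensitivity, specificity) from a 2x2 diagnostic table.

    tp/fn: patients judged incompetent by the reference standard who were
           classified incompetent/competent by the index test.
    tn/fp: patients judged competent by the reference standard who were
           classified competent/incompetent by the index test.
    """
    sensitivity = tp / (tp + fn)  # proportion of reference-positive cases detected
    specificity = tn / (tn + fp)  # proportion of reference-negative cases correctly cleared
    return sensitivity, specificity

# Hypothetical extracted counts from a single study
sens, spec = accuracy_measures(tp=40, fp=10, fn=10, tn=40)
print(sens, spec)  # 0.8 0.8
```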

Appendix 3. Quality Assessment Form

Quality Assessment Form

1. Patient selection

A. Risk of bias

Was a consecutive or random sample of patients enrolled?           

Yes: sample was consecutive or random

No: sample was not consecutive or random but selected otherwise (e.g. by physician)

Unclear: it is not stated whether the sample is consecutive, random, or otherwise selected

Was a case-control design avoided?

Yes: there was no case-control design; only people with a medical or mental condition were included

No: there was a case-control design with healthy controls

Unclear: it is not clearly stated whether a case-control design was used

Did the study avoid inappropriate exclusions?                               

Yes: there were no inappropriate exclusions

No: there were inappropriate exclusions (e.g. incompetent patients were excluded)

Unclear: it is not clearly stated if there are any inappropriate exclusions

Could the selection of patients have introduced bias?                    

Risk: high/low/unclear

B. Concerns regarding applicability

Is there a concern that the included patients (prior testing, presentation, intended use of index test and setting) do not match the review question?       

Concern: high/low/unclear

2. Index test(s)

A. Risk of bias

Were the index test results interpreted without knowledge of the results of the reference standard?    

Yes: rater of index test was blind to results of reference test

No: rater of index test was aware of results of reference test or was the same person

Unclear: it was not clearly stated whether the index test and reference test were rated independently

If a threshold was used, was it prespecified?                                

Yes: threshold was prespecified by a set cut-off value or several prespecified cut-offs were analysed

No: there was no prespecified cut-off

Unclear: it was not stated in the study design which threshold was to be used, or the threshold was variable

Could the conduct or interpretation of the index test have introduced bias?

Risk: high/low/unclear

B. Concerns regarding applicability

Is there concern that the index test, its conduct, or interpretation differ from the review question?                                    

Concern: high/low/unclear

3. Reference standard

A. Risk of bias

Is the reference standard likely to correctly classify the target condition?

Yes: the reference standard consisted of judgement by one or more experts aware of the relevant criteria; the expert's training must be accounted for.

No: reference standard was not a judgement by an expert aware of the relevant criteria.

Unclear: expertise or training of the person establishing the reference standard is not described, or incomplete criteria are used

Were the reference standard results interpreted without knowledge of the results of the index test?                                

Yes: expert was blind to results of the index test.

No: expert performing the reference test was aware of results of the index test or the reference test and index test were performed by the same person.

Unclear: blinding is not described.

Could the reference standard, its conduct, or its interpretation have introduced bias?

Risk: high/low/unclear

B. Concerns regarding applicability

Is there concern that the target condition as defined by the reference standard does not match the review question?     

Concern: high/low/unclear

4. Flow and timing

A. Risk of bias

Was there an appropriate interval between index test(s) and reference standard?

Yes: the interval is within reasonable limits (14 days or fewer)

No: the interval is longer than 14 days

Unclear: the interval was not clearly stated, or varied both under and over 14 days

Did all patients receive a reference standard?                                 

Yes: every patient that received an index test also received a reference test

No: a group of patients that received an index test did not receive the reference test. In that case the group of patients that did not receive both tests will be excluded from the review.

Unclear: it was not clearly stated which patients received the reference test and which did not.

Did patients receive the same reference standard?                         

Yes: all patients in the study received the same reference test

No: a group of patients received a different reference test. In that case the group of patients that did not receive the set reference test (judgement by an expert aware of the four relevant criteria) will be excluded from the review.

Unclear: it is not specified if the whole sample received the same reference test

Were all patients included in the analysis?                                      

Yes: all patients included in the study were included in the analysis, or any drop-out was explained.

No: a number of patients included in the study were not included in the analysis without reasons for drop-out

Unclear: it is not clearly stated whether all patients were included in the analysis

Could the patient flow have introduced bias?                                

Risk: high/low/unclear

Contributions of authors

Irma Hein (IH) conceived the literature review and drafted the protocol.
Joost Daams (JD) will perform the literature search.
Robert Lindeboom (RL) will contribute to the statistical analysis.
Pieter Troost (PWT) and Ramón Lindauer (RJL) contributed to drafting the protocol manuscript.
All authors read and approved the protocol manuscript.

Declarations of interest

Irma Hein (IH) has no conflict of interest to declare.
Joost Daams (JD) has no conflict of interest to declare.
Pieter Troost (PWT) has no conflict of interest to declare.
Robert Lindeboom (RL) has no conflict of interest to declare.
Ramón Lindauer (RJL) has no conflict of interest to declare.

Sources of support

Internal sources

  • Netherlands Organization for Health Research and Development (ZonMW), Netherlands.

    Funding of the research project

External sources

  • No sources of support supplied
