Keywords:

  • research participants;
  • validated survey;
  • participant perceptions;
  • research ethics;
  • human subjects protection;
  • informed consent

Abstract

Introduction: Clinical research participants’ perceptions of their experiences during research protocols provide outcome-based insights into the effectiveness of efforts to protect their rights and safety, and reveal opportunities to enhance participants’ clinical research experiences. Use of validated surveys measuring patient-centered outcomes is standard in hospitals, yet no instruments exist to assess outcomes of clinical research processes.

Methods: We derived survey questions from data obtained from focus groups of research participants and professionals. We assessed the survey for face and content validity and for privacy/confidentiality protections, and fielded it to research participants at 15 centers. We analyzed response rates, sample characteristics, and psychometric properties, including survey and item completion, internal consistency, item internal consistency, criterion-related validity, and item usefulness. Responses were tested for fit into existing patient-centered dimensions of care and new clinical research dimensions using Cronbach's alpha coefficient.

Results: Surveys were mailed to 18,890 individuals; 4,961 were returned (29%). Survey completion was 89% overall; completion rates exceeded 90% for 88 of 93 evaluable items. Questions fit into three dimensions of patient-centered care and two novel clinical research dimensions (Cronbach's alpha for dimensions: 0.69–0.85).

Conclusions: The validated survey offers a new method for assessing and improving outcomes of clinical research processes. Clin Trans Sci 2012; Volume 5: 452–460


Introduction

The assessment of the quality of research participants’ experiences in clinical research, including the degree to which current human subject protection measures maximize safety, assure autonomy and informed consent, and achieve justice, has not been systematically addressed in the clinical research literature. Historically, audits of adherence to regulatory and ethical requirements, such as the completion of informed consent forms, have provided some assessment of clinical research process quality. Several studies have focused on one or a few aspects of the clinical research process (e.g., the informed consent process),1,2 but few have examined research participants’ perceptions across the entire set of experiences in clinical research. In contrast, comprehensive surveys of patient perceptions are routinely conducted by hospitals to assess aspects of care.3–5 We built on the experience gained from hospital surveys in driving evidence-based improvements by developing a robust instrument to assess participants’ perceptions of their research experiences and the fulfillment of the principles of human research protections. This manuscript describes in detail the methodology used to develop and validate this new survey instrument.

Methods

Overview

The Clinical Research Participant Perception Survey (RPPS) was developed based on themes that emerged from the analysis of focus groups designed to collect research participants’ perspectives on their research experiences.6 After a draft questionnaire was developed, we extended a broad invitation to participate in validating the survey to 67 individuals involved in conducting or overseeing clinical research at more than 34 academic research institutions, including members of the Clinical and Translational Science Award (CTSA) Consortium. Those accepting the invitation participated, to varying extents, in crafting and refining the survey, working through the complex challenges created by institutional policies and procedures, developing the methodology to recruit participants, and fielding the RPPS. Analysis was conducted on the responses to the fielded RPPS. Based on consideration of all the analyses conducted, revisions were made to produce the final version of the RPPS.

Participating centers and study logistics

Fifteen research centers, including 13 CTSAs, the Clinical Center at the National Institutes of Health (NIH), and one General Clinical Research Center (GCRC), participated in the fielding and validation of the final survey questionnaire. To ensure broad representativeness and incorporate appropriate operational planning, a Fielding and Validation Subcommittee (FVS) was created that included at least one representative from each of the participating centers. Each center determined independently whether to provide the Spanish-language version of the survey to some or all survey recipients. Each center customized the template “Cover Letter to Participants” to accompany the survey mailing. Participating centers agreed to transfer names and addresses of participants confidentially to the partnering healthcare survey company, NRC Picker, through a secure website for uploading data. Each center received its own confidential center-specific raw data and a summary report that included the aggregate responses from all centers for comparison; the reports did not contain links to any participant identifiers. Contact information and identifiers held by NRC Picker were destroyed at the conclusion of the analysis.

Study instrument

Questions in the RPPS instrument were framed from the perspective of the participant's experience.6 Questions incorporated the intent, format, and response scales of the questions developed for the national Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey of hospital patient care, and asked participants to report whether some aspect of clinical research did or did not occur and, if it did, with what frequency. For example, “How often did the research coordinator explain what would happen during the study in a way you could understand?” This type of question provides objective information based on participants’ perceptions of their experiences and generates data that are actionable.7 Other questions asked participants to rate a series of potential factors for their relevance, or assessed study or participant characteristics.

The fielded questionnaire consisted of 76 questions, some involving more than one answer, for a total of 115 response items. The questions were arranged to appear in the sequence in which participants experience clinical research, starting with recruitment information, and ending with an overall rating of their research experiences and whether they would recommend joining a research study to family or friends. The survey concluded with 11 background questions about study involvement and participant characteristics as well as an open text field for comments.

The readability of the questionnaire was assessed using the Flesch–Kincaid readability test for sentence length and complexity, and the wording was simplified to reduce the required reading level to grade 6.8.8
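
For readers unfamiliar with the metric, the following is a minimal sketch of how a Flesch–Kincaid grade level can be computed. It uses the published grade-level formula with a crude vowel-group syllable counter; it is illustrative only and is not the tool used by the study team.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels; every word gets at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Published formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

question = ("How often did the research coordinator explain what would happen "
            "during the study in a way you could understand?")
print(round(flesch_kincaid_grade(question), 1))
```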

Face and content validity

The survey drew broadly from the expertise and experience of collaborating institutions, including the assessment of face and content validity. Through an interactive Web-based tool, FVS members from the centers provided detailed comments and suggestions on the draft questionnaire, the cover letter to participants, and the survey methodology. Incorporating FVS feedback, iterative revisions were made to the format and content of the questionnaire, maintaining alignment with the focus group findings.6 To assess the clarity of several specific questions, and to evaluate whether participants would interpret the response scales in the manner intended, we conducted a single-site substudy involving semistructured cognitive interviews with 19 research participants who completed a portion of the draft survey. Based on the findings of the substudy, questions were then revised by three of the authors (RK, LL, JY) for clarity, and were retested with participants. Several questions were designed to identify the type of research study, and the category of participant (e.g., healthy volunteer or disease-affected individual), in alignment with the themes from the focus groups.6 The revised RPPS was reviewed by the FVS before fielding. A version of the RPPS was developed in neutral broadcast Spanish by professional translators in collaboration with the participating centers electing to field in both Spanish and English.

Human subjects protections and recruitment

The parent protocol, draft questionnaire, and cognitive interviewing substudy were reviewed and approved by the IRB at the Rockefeller University, the coordinating center, before enrolment of any participants.

Substudy

Adults were eligible to participate in the cognitive interviewing substudy if they had participated in a research study within the past 2 years, were willing to complete a set of survey questions in person and agreed to be interviewed about their responses. Written informed consent was obtained from all participants in the substudy.

Primary study

The preferred method of recruitment for fielding the survey instrument consisted of organizing a master dataset of all adult research participants who had enrolled in one or more research studies within the prior 2 years at the participating institution, selecting a random sample from that dataset, and securely transferring the names and addresses of those individuals to NRC Picker under the protection of appropriate confidentiality agreements. Review of the proposed research at each center consisted of consideration of the availability of centralized research participant listings, department- and investigator-controlled databases, previous investigator- and protocol-specific commitments to data management and confidentiality, investigator and leadership concerns about survey mailing, and privacy board policies. In alignment with HCAHPS quality standards for hospital surveys, handout methods of recruitment and fielding (handing surveys to participants at the point of care interaction) were not permitted.9 Each of the participating research centers consulted with its local IRB, either to obtain an exemption from review (4 centers), or to obtain approval for the fielding and validation protocol through expedited (9 centers) or full board (2 centers) review. Each IRB determined whether documentation of informed consent was required (2 centers) or could be waived (13 centers). Among the four centers that received an exemption from IRB review, two required explicit participant permission to receive the survey.

Study procedures

Data transfer

Participant data files were transferred from each participating research center to NRC Picker for the purposes of sampling and mailing using a secure data upload Website. Each center provided information about the size of its overall research participant population indicating which, if any, patients or protocols were systematically included or excluded from the dataset (e.g., patients enrolled in cancer center, behavioral health, HIV/AIDS, or maternal health protocols) and if there were any additional restrictions to recruitment.

Sampling and sample sizes

Primary decisions for determining the participants from which to sample were made at the local institutional level; centers completed a form indicating the types of protocols and participants that were specifically included or excluded and the reasons for any limitations placed on the sample. Centers reported that the modifications to sampling were based on the availability of databases of research participants, previous investigator- and protocol-specific commitments to data management and confidentiality, and IRB and Privacy Board considerations. In general, centers used three sampling methods for the validation study: (1) enrolment of research participants sampled from an entire database of research participants with few or no excluded protocols (e.g., all studies except mental health, or all studies except substance abuse) (four centers); (2) enrolment of research participants including or excluding multiple specific protocols or databases (e.g., only participants enrolled through a cancer center, participants from a dedicated research unit, no maternal health participants, etc.) (nine centers); and (3) enrolment of a set of participants from whom permission was explicitly obtained prior to mailing the questionnaire to them (two centers). Based on their data structures, centers created local algorithms to verify research participation within the prior 2 years.
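
As a hypothetical illustration of such a local selection step (each center used its own data structures; the file and column names below are invented for the sketch and are not from the study), the filtering and sampling might look like:

```python
import pandas as pd

# Hypothetical master dataset with one row per research participant.
participants = pd.read_csv("participant_master.csv", parse_dates=["last_enrollment_date"])

# Keep adults who enrolled in at least one study within the prior 2 years.
cutoff = pd.Timestamp("2010-03-01") - pd.DateOffset(years=2)
eligible = participants[(participants["age"] >= 18) &
                        (participants["last_enrollment_date"] >= cutoff)]

# Draw a random mailing sample (or take everyone when the eligible pool is small),
# then pass only names and addresses to the survey vendor.
sample = eligible.sample(n=min(2_100, len(eligible)), random_state=1)
sample[["name", "mailing_address"]].to_csv("mailing_sample.csv", index=False)
```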

Based on the National Education Association Model,10 we designated a target sample size of 384 responses from each center, since this would provide 95% confidence in the results for individual centers. Since response rates from a prior pilot survey at NIH and Rockefeller in 2003–2006 ranged from 30% to 50%, we initially anticipated a 50% response rate and sought to sample 700–800 research participants from each center. When the initial response rates from the first two institutions were 9% and 27% after the first of two planned mailings, we amended the protocol to permit mailing surveys to 2,100 participants per center. Surveys were mailed to all research participants in the dataset provided when that number was less than or equal to 2,100 (11 centers) or to a randomly sampled subset when the number of names in the dataset exceeded the number needed (4 centers).
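
The arithmetic behind these targets can be sketched as follows. This applies the published Krejcie and Morgan formula10 and a simple responses-needed calculation; it is not the authors' code, and the 18% figure is only an illustrative lower bound drawn from the early observed response rates.

```python
def krejcie_morgan_sample_size(population: int, p: float = 0.5,
                               chi_sq: float = 3.841, margin: float = 0.05) -> float:
    # Krejcie & Morgan (1970): s = X^2*N*P(1-P) / (d^2*(N-1) + X^2*P(1-P))
    return (chi_sq * population * p * (1 - p)) / \
           (margin ** 2 * (population - 1) + chi_sq * p * (1 - p))

# For a large participant pool, the required number of responses plateaus near 384.
print(round(krejcie_morgan_sample_size(100_000)))      # ~383

# Surveys to mail per center = target responses / anticipated response rate.
target_responses = 384
for anticipated_rate in (0.50, 0.18):
    print(round(target_responses / anticipated_rate))  # 768, then 2133 (close to the 2,100 cap)
```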

Mailing

The study was conducted using a two-wave mail methodology. The survey package consisted of the research participant questionnaire, a cover letter from the affiliated center, and a confidential postage-paid return envelope addressed to NRC Picker. Two centers included both English and Spanish language surveys and cover letters with each mailing. Surveys were mailed as soon as participating centers completed all preparations and provided their datasets. NRC Picker mailed the first surveys directly to the home addresses of research participants, starting in March and ending in August of 2010. Eighteen days after each center's initial mailing, a reminder survey was mailed to those who had not yet returned their completed surveys. Surveys were accepted up to 84 days after being sent.

Statistical analysis

All data were analyzed using IBM SPSS Statistics software, Version 19 (IBM Corporation, Somers, NY).

Sample characteristics

Descriptive questions provided information about participants’ age, education, race, ethnicity, and prior research protocol participation.

Response rates

Overall and center-specific response rates were calculated by subtracting the number of nondeliverable surveys from the number mailed and then dividing the number of surveys returned by the number of surveys delivered.
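
In code, the calculation amounts to the following (the aggregate figures reported in the Results are used here purely as an example):

```python
def response_rate(mailed: int, nondeliverable: int, returned: int) -> float:
    # Returned surveys divided by the number actually delivered.
    return returned / (mailed - nondeliverable)

print(f"{response_rate(18_890, 1_867, 4_961):.0%}")  # 29%
```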

Survey and item completion and analysis

Survey responses were analyzed to determine the proportion of returned surveys that were at least 80% complete, the standard used by the HCAHPS survey.9 We also analyzed item completion, and rated questions with 10% or more missing responses as “problematic,”11 based on the assumption that participants either had difficulty understanding the intent of the question or could not find a response option that reflected their experiences. To review item response patterns, frequencies were examined. Item and dimension response distributions by facility were also examined to ensure that there was variability across facilities; between-group comparisons were completed using analysis of variance (ANOVA). Although there was no a priori hypothesis about expected variability by facility, some degree of variability by facility for items and dimensions was anticipated to reflect true differences in centers because of variations in participant populations, geography, and other factors.
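
A minimal sketch of these completion and variability checks is shown below. It assumes a hypothetical response table with one row per respondent, item columns named q1 through q44, and a center column; the actual analysis was performed in SPSS.

```python
import pandas as pd
from scipy import stats

def completion_metrics(df: pd.DataFrame, item_cols: list[str]):
    # Share of respondents answering at least 80% of items (HCAHPS completion standard).
    per_respondent = df[item_cols].notna().mean(axis=1)
    survey_completion = (per_respondent >= 0.80).mean()
    # Items with 10% or more missing responses are flagged as "problematic".
    missing_by_item = df[item_cols].isna().mean()
    problematic_items = missing_by_item[missing_by_item >= 0.10]
    return survey_completion, problematic_items

def anova_by_center(df: pd.DataFrame, item: str):
    # One-way ANOVA testing whether mean item scores differ across centers.
    groups = [g[item].dropna() for _, g in df.groupby("center")]
    return stats.f_oneway(*groups)
```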

Thematic alignment/grouping into dimensions

Relying on the Picker Institute's experiences in devising hospital care surveys, we sorted the questions thematically into conceptual groups (dimensions).5 The creation of new clinical research participant dimensions was iterative; questions were grouped based on the qualitative themes identified in the focus groups,6 the conceptual content of the questions, and the results of statistical reliability testing for best fit using Cronbach's alpha coefficient.12
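
For reference, Cronbach's alpha for a candidate grouping of items can be computed as in the generic sketch below (the standard formula, not the SPSS procedure used in the study). Candidate groupings can then be compared by recomputing alpha for each grouping and retaining the best-fitting assignment.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents-by-items matrix with no missing values.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```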

Psychometric analysis

Conceptually related items were summed to form a scale for each dimension. To test assumptions of validity and reliability, scale scores were statistically compared to one another and to the survey in its entirety. Inferences regarding instrument reliability and validity were based on four analyses: internal consistency, item internal consistency, criterion-related validity, and item usefulness. Internal consistency of the dimensions was examined by calculating Cronbach's alpha coefficients for each scale. Item internal consistency, which refers to the “goodness of fit” of items within a scale, was examined using interitem correlation, where values of at least 0.30 (corrected for overlap) are considered generally acceptable,13 although others have suggested that values as low as 0.15 are acceptable.14 Criterion-related validity was calculated by examining the extent to which dimension-specific responses correlated with overall ratings of research participant experiences, an estimate of how well the dimensions predicted this particular outcome. Although no specific convention exists for determining adequate criterion-related validity, based on the experience from hospital surveys and surveys in other disciplines,15 we judged the range r = 0.30 to 0.80 as acceptable (Table 1). Item usefulness was tested by conducting Pearson correlational analysis between each item and the overall rating of participant experience. Although there is no identified standard, items should correlate with the outcome measures at a minimum of r = 0.30, and ideally r = 0.40 to 0.60, to be considered useful.15
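
The sketch below shows, in generic pandas form, how these correlation-based checks can be computed; the column groupings are hypothetical, and the original analysis was performed in SPSS.

```python
import pandas as pd

def corrected_item_total_correlations(scale_items: pd.DataFrame) -> pd.Series:
    # Item internal consistency: correlate each item with the sum of the *other* items
    # in its dimension ("corrected for overlap"); values >= 0.30 are generally acceptable.
    return pd.Series({col: scale_items[col].corr(scale_items.drop(columns=col).sum(axis=1))
                      for col in scale_items.columns})

def criterion_related_validity(dimension_score: pd.Series, overall_rating: pd.Series) -> float:
    # Criterion-related validity: Pearson correlation of a dimension scale score with the
    # overall rating of the research experience (acceptable range roughly 0.30-0.80).
    return dimension_score.corr(overall_rating)

def item_usefulness(items: pd.DataFrame, overall_rating: pd.Series) -> pd.Series:
    # Each item's Pearson correlation with the overall rating; r >= 0.30 is taken as useful.
    return items.corrwith(overall_rating)
```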

Table 1.  Assessments of the survey.

Assessment | Purpose | Method
Content validity | Assess whether the questions address the intended content area | Expert review
Face validity | Assess whether the questions ask what they appear to ask | Participant feedback, expert review
Cognitive interviews | Assess the meaning of the questions and respondents’ answers | Local field test of question(s) with target audience members; semistructured interview and analysis of responses
Response rates | Evaluate representativeness, bias | Total responses/total delivered; subgroup analyses for response
Survey & item completion | Marker of comprehension, survey length | Total number items completed/total number with required response (e.g., omit ranking scales)
Demographics | Representativeness of sample | Examine age, race, ethnicity, education, language spoken at home, prior research experience, type of protocol, duration of participation, general health, disease-affected versus healthy volunteer, and compare to the research participant population
Multitrait analysis | Instrument reliability and validity inferences were established based on: (1) internal consistency reliability (survey and scale level of analysis); (2) item internal consistency reliability (interitem correlations); (3) criterion-related validity; (4) item usefulness (item correlation to the overall rating question [criterion]) | Correlation coefficients; Cronbach's alpha
Sensitivity analysis | Internal consistency and interitem correlation were examined for specific subpopulations of the entire dataset, including (1) whether the participant took a drug/supplement or medical device/new procedure (n = 2,155, 43%); (2) whether participants required a disease to participate (n = 2,925, 59%); (3) whether participants were black/African American (n = 550, 11%) | Correlation coefficients; Cronbach's alpha

Psychometric analyses were conducted with the entire database of research participants. Sensitivity analyses were performed with specific subgroups of research participants, including participants with a disease under study, participants whose study involved taking a new drug, evaluation of a new medical device or use of a novel medical procedure, and the subgroup of African American participants. We also conducted reliability analysis with at least 50% of respondents by eliminating questions that had high percentages of “not applicable” responses. Some questions were designed to be only applicable for a subset of participants (e.g., “Did research staff help you with any disabilities that you have?”).

Results

Response rates

Of the total of 18,890 surveys mailed, 1,867 (9.8%) were returned as nondeliverable, and 4,961 were returned by the response deadline, yielding an overall response rate of 29% (range across sites, 18–74%) (Table 2). The one center that stopped fielding after a single-wave mailing had the lowest response rate; research participants from this institution were not systematically more negative (or positive) than the aggregate respondents.

Table 2.  Response rate by participating center.

Participating center | Total patients sampled | Nondeliverable surveys | Returned surveys | Response rate
8 | 2,143 | 269 | 339 | 18%*
5 | 1,750 | 325 | 334 | 23%
10 | 800 | 134 | 164 | 25%
12 | 1,944 | 203 | 427 | 25%
14 | 1,960 | 93 | 458 | 25%
2 | 1,632 | 117 | 411 | 27%
4 | 972 | 103 | 243 | 28%
1 | 1,493 | 217 | 384 | 30%
15 | 3,097 | 253 | 968 | 34%
7 | 908 | 66 | 299 | 36%
13 | 523 | 46 | 172 | 36%
11 | 749 | 29 | 285 | 40%
6 | 570 | 6 | 232 | 41%
9 | 192 | 1 | 133 | 70%§
3 | 157 | 5 | 112 | 74%§
Total | 18,890 | 1,867 | 4,961 | 29%

Notes: *Center participating in only one wave of a planned two-wave mailing. Two centers included both English and Spanish versions of all the materials with every mailed survey. One center recruited exclusively from a registry of volunteers consented for future research contact. §Centers requiring explicit permission from participants before mailing survey questionnaires.

Survey and item completion and analysis

Overall, 89% (range across sites, 77–97%) of respondents met the HCAHPS standard of answering at least 80% of the survey questions (not including the questions that could be legitimately skipped). Of returned surveys, 2% were either blank or contained an answer to only 1 question.

Four specific questions were unanswered by 10% or more of all participants (range 16–22% missing); each of these questions referred to the time period “after the study was over.” Free text responses revealed that some respondents were still completing their study and that they left the question blank because they could not find an appropriate response option for these questions.

Response distributions varied across the range of responses for most items. Three questions had over 90% of responses in one response category. Ninety-four percent of respondents indicated that they were always treated with courtesy and respect by investigators/doctors, and 96% indicated always being treated with courtesy and respect by coordinators or nurses. In addition, 94% of respondents indicated that they “never” felt pressure from the research team to stay in the study. Significant between-group variation (p < 0.05) was found across facilities for all dimensions and for 38 of 44 questions. The exceptions included 3 questions that had high proportions of not applicable responses (i.e., research staff respected cultural background, staff respected language differences, staff provided assistance for language differences), 1 question with over 90% of responses in one category (i.e., treated with courtesy/respect by coordinator/nurse), and 2 other questions (i.e., research investigator/doctor listened carefully; investigator/doctor answered questions understandably).

Demographics

Demographic data on the responding research participants are summarized in Tables 3 and 4 and are shown in detail for each center in Appendix A. Gender was not uniformly conveyed in centers’ data files and was not captured in the survey questions. For the 4 centers that captured gender (n = 945), the sample was 59% female. Caucasians made up the largest racial group (85%), followed by African Americans (12%) and Asians (3%). Hispanic individuals made up 5% of the response sample. Overall, the sample was highly educated, with 54% having completed 4 years of college or more and only 16% having completed a high school education or less. As shown in Appendix A, the demographics varied substantially at different centers, reflecting each site's specialized research focus and geographic location.

Table 3.  Race and ethnicity of the population of the United States, and of the response sample for the Research Participant Perception Survey (RPPS).

Population | White | Black or African American | Asian | Native American or Alaskan Native | Native Hawaiian or Pacific Islander | Some other race | Two or more races | Hispanic
U.S. population, 2010 census*20 | 72% | 13% | 4% | 1% | 0.2% | 6.2% | 3% | 16%
RPPS population aggregate (range %) | 85% (61–98) | 12% (0–38) | 3% (0–11) | 2% (0–5) | 1% (0–1) | – | – | 5% (2–15)

*Shown as the best comparator, since data about the demographics of U.S. research participants are not available.
Table 4.  Demographic data of the responding research participants.

Characteristics of RPPS validation sample | Overall (range % for centers)
Age
  18–40 years | 19% (3–81)
  41–64 years | 49% (16–63)
  65 years and older | 32% (3–61)
Highest level of education
  Elementary school and high school only | 16% (3–29)
  Some college or graduated 2-year college/trade school | 30% (20–38)
  Graduated 4-year college or beyond | 54% (42–73)
Study requires diagnosis of a disease/disorder (yes) | 63% (19–96)
Number of studies participated in
  1 | 51% (17–86)
  2–5 | 41% (14–63)
  6 or more | 8% (0–22)
Drug, new device, new procedure involved
  Yes | 46% (21–86)
  No | 47% (11–74)
  Unsure | 7% (3–10)
Study demands
  Simple | 56% (36–81)
  Moderate | 37% (16–50)
  Intense | 8% (2–19)
Duration of study participation
  Hours | 18% (4–50)
  Days or weeks | 15% (8–28)
  Months | 36% (16–76)
  Years | 31% (9–62)
Self-rated health status
  Excellent | 19% (7–38)
  Very good | 36% (31–47)
  Good | 29% (13–39)
  Fair | 13% (2–20)
  Poor | 2% (0–6)

Research-specific dimensions

Five research-specific dimensions emerged from the analysis of the responses and an iterative process to test conceptual groupings of items: (1) informed consent, (2) trust, (3) coordination of care, (4) information, education and communication, and (5) respect for participant preferences. “Trust” and “informed consent” questions captured conceptual themes uniquely related to research participants’ experiences and thus not included in the standard dimensions of patient-centered care. All other questions could be assigned to the standard dimensions of patient-centered care listed above: “coordination of care,” “information/education/communication,” and “respect for patient preferences.” As shown in Table 5, four of the five dimensions had strong internal consistency, with Cronbach's alpha coefficients greater than 0.80; the fifth (coordination of care) had a Cronbach's alpha coefficient of 0.69, just below the desired 0.7 cutoff. To test whether the responses of subgroups of participants differed significantly from the aggregate, we conducted sensitivity analyses for three subgroups: those who took a drug/supplement or experienced a new procedure (n = 2,155), those who required a disease to participate (n = 2,925), and African Americans (n = 550). The analyses showed substantially the same results as those for the sample as a whole (Table 5). Another important sensitivity analysis involved removing items with high proportions of “not applicable” responses, so that at least 50% of research participants were included within a given research-specific dimension. For this sensitivity analysis, internal consistency remained similar for the dimensions of “trust” and “informed consent” (Table 5). Removing from the overall analysis the responses to questions targeted to a small subset of research participants resulted in only minor changes in the reliability of the dimensions, except for “respect for patient preferences,” for which Cronbach's alpha decreased from 0.84 to 0.66.

Table 5.  Internal consistency and inter-item correlations by dimension, including sensitivity analysis with specific populations.

Dimension (item numbers) | Cronbach's alpha coefficient (All / Drug / Disease / African American) | Inter-item correlation (All / Drug / Disease / African American)

All items (1–44) | .96 / .93 / .93 / .92 | .33 / .28 / .29 / .25

Informed consent (1–13) | .86 / .87 / .86 / .84 | .32 / .33 / .33 / .28
  Items: 1. Overall study explained understandably; 2. Someone took the time to answer questions about the study; 3. Study details explained understandably; 4. Risks/benefits of joining study explained; 5. Study details included in informed consent docs; 6. Informed consent document understandable; 7. Prepared for what to expect by informed consent document; 8. Something happened that you were not prepared for; 9. Prepared by info/discussions before participation; 10. Felt pressure from research staff to join study; 11. Had enough time before signing informed consent; 12. Felt pressure from research team to stay in study; 13. Understood which tests/visits were for research*
  Restricted analysis* (1–12) | .84 / .85 / .85 / .83 | .31 / .32 / .33 / .29

Coordination of care (14–20) | .69 / .69 / .70 / .71 | .24 / .24 / .25 / .26
  Items: 14. Research team ready on time for visit; 15. Had to wait too long for visit/procedures to begin; 16. Had to wait too long between procedures/tests; 17. Knew how to reach research team; 18. Able to reach member of research team when needed; 19. One person organized involvement in study; 20. Research staff gave conflicting information*
  Restricted analysis* (14–19) | .72 / .71 / .72 / .74 | .30 / .29 / .30 / .32

Information, education and communication (21–28) | .80 / .81 / .81 / .76 | .33 / .34 / .35 / .29
  Items: 21. Coordinator/nurse answered questions understandably; 22. Investigator/doctor answered questions understandably; 23. Pain/discomfort explained during informed consent; 24. Experienced unexpected pain/discomfort; 25. Risks/benefits included in informed consent; 26. Easy to get answers from research team; 27. Reasons for delays explained*; 28. Summary of results written understandably*
  Restricted analysis* (21–26) | .71 / .72 / .71 / .68 | .29 / .30 / .29 / .27

Respect for participant preferences (29–36) | .83 / .85 / .82 / .72 | .37 / .42 / .36 / .25
  Items: 29. Had enough physical privacy while in study; 30. Personal/research information protected while in study; 31. Met with investigator/doctor as much as wanted; 32. Felt like a valued partner in research process; 33. Staff provided assistance for language differences*; 34. Wanted to receive results of routine tests*; 35. Research staff respected my cultural background*; 36. Staff respected language differences*
  Restricted analysis* (29–32) | .66 / .67 / .67 / .66 | .33 / .33 / .34 / .32

Trust (37–44) | .85 / .85 / .85 / .84 | .42 / .41 / .42 / .40
  Items: 37. Confidence and trust in nurses/coordinator; 38. Confidence and trust in investigators; 39. Coordinator/nurse listened carefully; 40. Investigator/doctor listened carefully; 41. Investigator/doctor treated with courtesy and respect; 42. Coordinator/nurse treated with courtesy and respect; 43. Coordinator/nurse discussed anxieties/fears*; 44. Investigator/doctor discussed anxieties/fears*
  Restricted analysis* (37–42) | .83 / .81 / .83 / .85 | .46 / .41 / .44 / .49

Notes: Items are shortened, paraphrased summaries of the questions and not the wording used in the questionnaire. Cronbach's alpha and inter-item correlation coefficients are reported for the entire item set and for each dimension, both for all respondents (All) and in sensitivity analyses restricted to individuals in studies involving an investigational drug or procedure (Drug), participants who had to be disease-affected to enrol (Disease), or participants who self-identified as African American (African American). For each dimension, the upper row includes all questions within the dimension and only respondents who completed all of those questions; the lower “restricted analysis” row retains at least 50% of respondents by removing questions with high percentages of “not applicable” responses. *Items removed so that more than 50% of research participants were included in the analysis.

Item-internal consistency validity, that is, how well each item is correlated to its scale (the dimensions) rather than with the overall survey, was greater than the accepted standard of 0.30 for all dimensions except “coordination of care” (0.24). Findings were similar for all of the specific subpopulations. When items were removed to include at least 50% of research participants, the item-internal consistency was 0.29 or above for all dimensions (Table 5).

To estimate how well the dimensions are predictive of positive participant outcomes, we examined the correlation between each of the research-specific dimensions and the participants’ overall rating of research experiences (the criterion). The range of correlation coefficients for the dimensions was 0.47–0.63, indicating that the dimensions correlate moderately with the overall rating of the experience (Table 6).

Table 6.  Criterion-related validity: correlation of dimensions of research experience to the overall rating of the research experience.

Dimension of care | Correlation to overall rating of the experience*
Information, education and communication | 0.63
Informed consent | 0.55
Respect for participant preferences | 0.53
Trust | 0.53
Coordination of care | 0.47

*Correlation of dimensions to the outcome of “overall rating of the participant experience” is the estimate of criterion-related validity.

All survey items demonstrated usefulness by correlating with overall participant experience at greater than 0.30, with the exception of four items: (1) research staff members gave conflicting information, (2) one person was involved in coordinating the study, (3) whether participants felt pressure to stay in the study, and (4) whether the coordinator/nurse discussed anxieties/fears.

Discussion

With 4,961 respondents reflecting the perspectives of a diverse group of research participants, the current sample provides the broadest data set of research participants’ perceptions to date. Rigorous survey development methods incorporated the input of stakeholders throughout the process. Evaluation of quantitative data from the fielded survey to assess question completion and psychometric properties supports the final RPPS, which was revised after the validation study.

Conducting face and content validation and cognitive interviews of the questionnaire before pilot testing increased the likelihood that the questions would be understood as intended by respondents and that the questions would measure what was intended. The inclusion of multiple centers and participant groups from a broad variety of backgrounds, demographics, and research experiences supports the wide applicability of the instrument for evaluating research participation. Overall, the response rate to the research participant survey was similar to the patient response rate for the HCAHPS questionnaire across all states in 2010, which was 33%.16 The survey and item completion scores met the HCAHPS criteria for acceptable survey completion, and the correlations of the dimensions with the overall satisfaction scores were acceptable. Notably, we found that the four most frequently unanswered questions (so-called “problematic questions”) presumed that the study already was completed at the time of the survey, and open-field text comments returned with the surveys indicated that some participants enrolled in ongoing research studies left these questions blank because no appropriate response was offered. In response to this finding, we modified the questionnaire by adding an appropriate response option for participants who had not completed their enrolment or were still enrolled in their studies. By including a “not applicable” category for those still in the study, we anticipate that there will be fewer missing responses for the “after the study was over” questions in the future.

Four items were problematic in the item usefulness analysis in that they did not correlate highly (<0.3) with overall experience, but the reasons for the low correlation differed across items. Very few participants felt pressure to stay in a study (<5%), and this contributed to the lower correlation with the overall rating of experience. However, the finding regarding respect for participant autonomy may be of great interest to human protection professionals and research ethicists regardless of its low correlation with participants’ ratings. The fact that the receipt of conflicting information from study staff does not correlate highly with the overall experience is surprising and requires further study to examine its relationship with several dimensions and with overall experience. The two questions regarding study coordination and sharing fears/anxieties were judged to be poorly focused.

Variability by facility assists centers in providing context for local results and in identifying opportunities for performance improvement. All dimensions and almost all question responses varied across response categories and facilities. Responses were expected to vary by facility to some degree for items and dimensions, reflecting differences in participant populations, geography, practices, and other factors across centers. The three questions with little variation across response categories may offer opportunities for future refinement, given the importance that research participants and professionals attributed to their relationships with study personnel and to participants staying in a study without pressure from the research team.6

Dimensions are useful constructs for summarizing the responses to questions that are conceptually related and identifying areas of successful or suboptimal performance. The specific questions included in each dimension provide actionable data to inform the development of performance improvement initiatives. The relationships of dimensions to the overall experience scores are useful to the extent that the overall experience is a high-level summary metric. The dimensions of the clinical research experiences were initially based on the framework of the dimensions of patient-centered care used for healthcare experiences.5 As anticipated, those hospital-based dimensions of care did not fully capture the complexity of clinical research experiences. As a result, we derived new dimensions for the clinical research experience from the results of the qualitative research, based on participant-centered themes and without attempting to pre-fill putative research dimensions. For instance, an informed consent conversation contains elements of the traditional dimension “education/information/communication” but also contains other important variables not captured in that dimension, such as preservation of autonomy, providing adequate time for consideration, and the absence of undue influence. Similarly, the preservation of participant autonomy, one of the cornerstones of informed consent,17 is not fully captured by the traditional dimension “respect for patient preferences.” Proposed new dimensions of research participation such as “trust,” “informed consent,” “partnership/engagement,” “autonomy,” and “coordination of research care” were tested, and “trust” and “informed consent” emerged as dimensions robustly validated in the current survey. Whether additional dimensions can or should be prospectively defined in the domain of research participation outcomes, and specific questions derived for those dimensions, is a topic for future research.

Cronbach's alpha scores for 4 of the 5 defined dimensions were >0.8, suggesting strong internal consistency in the dimensions as defined. The lower internal consistency for the “coordination of care” questions may reflect several distinct concepts within the dimension in the clinical research setting (e.g., “research staff gave conflicting information” is a different idea than “waiting for research visits”). Disorganization of services nonetheless had a significant negative impact on participant perception in the focus group study,6 and thus “coordination of care” may be an area needing further refinement. The sensitivity analysis revealed that the reliability for the dimensions was similar across the subpopulations tested (Table 5). For the dimension of “respect for patient preferences,” four questions designed to assess the preferences and needs of a subset of research participants were not applicable to all research participants. However, the questions were deemed important to include on the questionnaire by the authors and collaborators because of the important content they covered (i.e., whether participants wanted results of routine tests, whether staff respected the participant's cultural background, whether staff respected language differences, whether the participant was provided assistance with language differences). When these questions were removed from the analysis, the reliability of the dimension decreased. Items within the dimension “respect for patient preferences” also may have represented several distinct concepts of respect (e.g., “physical privacy” is very different from “feeling like a valued partner in the research process”) that affect internal consistency and require further refinement. Participant preferences may also be, by their nature, more subgroup specific than nonpersonal aspects of research conduct and care, and subgroup preferences may remain important to assess.

Limitations of the Study

Nonresponse bias is a potential limitation of this survey, as is the case for any survey that has a less than complete response rate. A systematic review of response rates to postal questionnaires points to the importance of sending a follow-up mailing with a second questionnaire enclosed to those who have not yet responded.18 The postal questionnaire review by Edwards et al.18 is consonant with the findings from this study in which the lowest response rate occurred in an institution that did not conduct a second mailing. Nonresponse bias will be more closely analyzed in the future to understand additional ways to increase response rates.

The data infrastructure and practices surrounding research participation and consent for future contact at each institution limited the pool of participants available for recruitment. Similar limitations in data infrastructure at academic research centers have recently gained national attention.19 Thus the demographics from some centers (e.g., cancer center, women's health center) reflected the nature of the specialty rather than a broad sample of research participants, but nonetheless afford insights into the populations studied.

We developed our survey questions based on the themes that emerged from focus groups conducted in English. Although we translated the questionnaire into Spanish, only two centers fielded in both Spanish and English, representing 10% of the fielding overall; <2% of their response sample utilized the Spanish version. Consequently, the applicability of the survey questions and responses to Spanish-speaking or other non–English-speaking populations is untested. Our validation cohort consisted largely of academic center-based research participants and thus may not reflect other research settings. The development of a similarly validated tool to measure and compare the perceptions of community-based research participants is the focus of ongoing and future research.

Based on all of the analyses conducted, including the response distributions, survey completion, and psychometric analyses, the survey was revised to create a final RPPS for future fielding. Changes to the survey included (1) revision of 3 of the questions referring to the period “after the study was over” by adding response options indicating ongoing enrolment; (2) deletion of 2 of the “after the study was over” questions that were unfocused; and (3) deletion of 3 questions that performed poorly in more than one of the analyses conducted (e.g., discussing fears and anxieties with the coordinator or with the investigator; whether one individual was coordinating the study). The questions targeting subgroups with culture, language, and disability concerns, and questions related to relationships with study personnel, were preserved for additional validation testing with ongoing fielding because they were deemed important concepts.

Conclusions

The value of using a validated survey to measure research participants’ perceptions of their experiences lies in the opportunity both to understand the impact of current clinical research policies, procedures, and practices on the individuals who volunteer to participate in studies, and to yield data that can be used to improve participants’ research experiences and the conduct of research. The responses to questions, assessed individually and in summary scores of the dimensions of research participation, provide important, broad-based outcome measures that institutions can use to assess how well they: (1) provide informed consent, (2) minimize undue influence or coercion, (3) coordinate communication of critical information to participants, (4) anticipate and successfully manage adverse experiences, (5) explain and transmit the meaning of research results, and (6) train their research teams to protect participants’ rights and safety. In addition, the aggregate results for this large and broad sample will provide benchmarks for the performance of clinical research that can be used to: (1) provide reference points for internal comparison of local performance; (2) provide reference points for evaluation of clinical research programs by potential industry sponsors or accrediting agencies such as the Association for the Accreditation of Human Research Protection Programs (AAHRPP); (3) inform national priority setting to improve the processes involved in clinical research; and (4) inform the public about how research participants rate their experiences. As such, this approach represents a crucial step in obtaining the outcome data needed to apply the scientific method to improving the clinical research enterprise.

Conflict of Interest

Jennifer Yessis, Ph.D. was a survey scientist employed by NRC Picker, Inc., a commercial developer and vendor of health care surveys during part of the conduct of this study. NRC Picker plans to develop a commercial research participant survey based, in part, on the results of the focus groups. Dr. Yessis has no financial interest in NRC Picker, Inc. or any future commercial survey.

Acknowledgments

Supported in part by grant 8 UL1 TR000043, and by an Administrative Supplement, from the National Center for Research Resources and the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health.

The following individuals participated in the face and content validation of the questionnaire, and the fielding of the survey at their institutions: Jean Larson, RN, and Sandra Alfano, PharmD, Yale University, New Haven, Connecticut, USA; Mollie W. Jenckes, MHSc, BSN, Daniel E. Ford, MD, MPH, Elizabeth Martinez, BSN, and Cheryl Dennison, PhD, RN, The Johns Hopkins University Medical Center, Baltimore, Maryland, USA; Cynthia Hahn, Emmelyn Kim, MA, MPH, CPH, Feinstein Institute for Medical Research, Manhasset, New York, USA; Gerri O’Riordan, RN, Steve Alexander, MD, and Nicholas Gaich, Stanford University, SPECTRUM, Stanford, California; Paul Harris, PhD, Kirstin Scott, MPH, and Jan Zolkower, MSHL, Vanderbilt University, Nashville, Tennessee, USA; Hal Jenson, MD, MBA, and Marybeth Kennedy, RN, Baystate Medical Center, Tufts CTSA, Springfield, Massachusetts, USA; Nancy Needler, Ann Dozier, RN, PhD, and Eric P. Rubinstein, JD, MPH, The University of Rochester, Rochester, New York, USA; Wesley Byerly, PharmD, Laura Beskow, PhD, MPH, and Jennifer Holcomb, MA, Duke University School of Medicine, Durham, North Carolina, USA; Simon Craddock Lee, PhD, MPH, and Andrea Nassen, RN, MSN, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Kathryn Schuff, MD, and Julie Mitchell, Oregon Health Sciences University, Portland, Oregon, USA; Phil Cola, MA, Carol Fedor, RN, ND, and Valerie Wiesbrock, MA, University Hospitals of Cleveland, Cleveland, Ohio, USA; Veronica Testa, RN, Tufts New England Medical Center, Boston, Massachusetts, USA; Kimberly Lucas-Russell, MPH, Sylvia Baedorf, MPH, and Mary-Tara Roth, RN, MSN, Boston University, Boston, Massachusetts, USA, Michael Murray, PhD, Toronto, Ontario Canada. Sarah Winchell and Sarah Fryda, NRC Picker, Inc, Oshkosh, Wisconsin, USA.

References

  1. Cox K. Informed consent and decision-making: patients’ experiences of the process of phases I and II anti-cancer drug trials. Patient Educ Counsel. 2000; 46: 31–38.
  2. Edwards SJ, Lilford RJ, Thornton J, Hewison J. Informed consent for clinical trials: in search of the “best” method. Soc Sci Med. 1998; 47(11): 1825–1840.
  3. Cleary PD, Edgman-Levitan S. Health care quality. Incorporating consumer perspectives. JAMA. 1997; 278(19): 1608–1612.
  4. Edgman-Levitan S. Through the patient's eyes. J Healthc Des. 1997; 9: 27–30.
  5. Gerteis M, Edgman-Levitan S, Daley J, Delbanco TL. Through the Patient's Eyes: Understanding and Promoting Patient-Centered Care. San Francisco, CA: Jossey-Bass; 1993.
  6. Kost RG, Lee LM, Yessis J, Coller BS, Henderson DK. Assessing research participants’ perceptions of their clinical research experiences. Clin Transl Sci. 2011; 4(6): 403–423.
  7. Bruster S, Jarman B, Bosanquet N, Weston D, Erens R, Delbanco TL. National survey of hospital patients. BMJ. 1994; 309(6968): 1542–1546.
  8. Kincaid JP, Fishburne RP, Rogers RL, Chissom BS. Derivation of New Readability Formulas (Automated Readability Index, Fog Count, and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Research Branch Report 8–75. Naval Air Station Memphis: Chief of Naval Technical Training; 1975.
  9. Centers for Medicare and Medicaid Services. Quality Assurance Guidelines, CAHPS® Hospital Survey (HCAHPS), Version 6.0. March 2011.
  10. Krejcie RV, Morgan DW. Determining sample size for research activities. Educ Psychol Measure. 1970; 30(3): 607–610.
  11. Gibson NM, Olejnik S. Treatment of missing data at the second level of hierarchical linear models. Educ Psychol Measure. 2003; 63(2): 204–238.
  12. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951; 16(3): 297–334.
  13. Robinson JP, Shaver PR, Wrightsman LS. Criteria for scale selection and evaluation. In: Robinson JP, Shaver PR, Wrightsman LS, eds. Measures of Personality and Social Psychological Attitudes. San Diego, CA: Academic Press; 1991: 1–15.
  14. Clark LA, Watson D. Constructing validity: basic issues in scale development. Psychol Assess. 1995; 7(3): 301–319.
  15. NRC Picker, Inc. Development and Validation of the Canadian Picker Inpatient Childbirth and Comprehensive Maternity Survey Instruments. Lincoln, NE: NRC Picker, Inc.; 2006.
  16. NRC Picker, Inc. HCAHPS Survey Results, January 2010 to December 2010 Discharges. 2011. http://www.hcahpsonline.org/files/HCAHPS%20Survey%20Results%20Table%20(Report_HEI_October_2011_States).pdf. Accessed January 4, 2012.
  17. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: Department of Health and Human Services; 1979. http://hhs.gov/ohrp/humansubjects/guidance/belmont.html. Accessed April 21, 2012.
  18. Edwards P, Roberts I, Clarke M, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002; 324(7347): 1183.
  19. Murphy SN, Dubey A, Embi PJ, et al. Current state of information technologies for the clinical research enterprise across academic medical centers. Clin Transl Sci. 2012; 5(3): 281–284.
  20. United States Census Bureau. United States 2010 Census. 2010. http://2010.census.gov/2010census/data/. Accessed November 1, 2011.

Supporting Information

Appendix A

Filename: CTS_443_sm_suppmat.doc; Size: 145K; Description: Supporting info item.

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.