Keywords:

  • patient survey data;
  • patient’s views;
  • patient–caregiver relationships;
  • patient-centred care;
  • quality improvement

Abstract


Objectives  To evaluate the use of a modified Consumer Assessment of Healthcare Providers and Systems (CAHPS®) survey to support quality improvement in a collaborative focused on patient-centred care, assess subsequent changes in patient experiences, and identify factors that promoted or impeded data use.

Background  Healthcare systems are increasingly using surveys to assess patients’ experiences of care, but little is known about how to use these data in quality improvement.

Design  Process evaluation of a quality improvement collaborative.

Setting and participants  The CAHPS team from Harvard Medical School and the Institute for Clinical Systems Improvement organized a learning collaborative including eight medical groups in Minnesota.

Intervention  Samples of patients recently visiting each group completed a modified CAHPS® survey before, after and continuously over a 12-month project. Teams were encouraged to set goals for improvement using baseline data and supported as they made interventions with bi-monthly collaborative meetings, an online tool reporting the monthly data, a resource manual called The CAHPS® Improvement Guide, and conference calls.

Main outcome measures  Changes in patient experiences. Interviews with team leaders assessed the usefulness of the collaborative resources, lessons and barriers to using data.

Results  Seven teams set goals and six made interventions. Small improvements in patient experience were observed in some groups, but in others changes were mixed and not consistently related to the team actions. Two successful groups appeared to have strong quality improvement structures and had focussed on relatively simple interventions. Team leaders reported that frequent survey reports were a powerful stimulus to improvement, but that they needed more time and support to engage staff and clinicians in changing their behaviour.

Conclusions  Small measurable improvements in patient experience may be achieved over short projects. Sustaining more substantial change is likely to require organizational strategies, engaged leadership, cultural change, regular measurement and performance feedback, and experience of interpreting and using survey data.


Background


Healthcare providers and systems are increasingly trying to make the care they provide more ‘patient-centred’.1 One approach is to survey patients about their care experiences and to use the survey results to motivate and guide improvement efforts. Major initiatives around the world are now collecting and comparing data on patients’ experience of care in healthcare organizations. In the USA, for example, the Centers for Medicare & Medicaid Services now surveys annually a probability sample of beneficiaries in both fee-for-service and managed care.2–4 Statewide initiatives in California and Massachusetts have surveyed samples of patients at the medical group and individual practice site level.5,6 In the UK, the National Health Service now surveys different patient groups annually,7 and in Europe and Australia surveys of single organizations or groups of organizations have been reported.8,9 However, there is little evidence about how best to use such information in quality improvement efforts.10 Most studies have shown that organizations and healthcare professionals can have difficulty responding to patient survey data, mounting interventions and sustaining change.9–14

This article describes an evaluation of a project designed to test the use of a Consumer Assessment of Healthcare Providers and Systems (CAHPS®) survey15 to support quality improvement efforts in a collaborative setting.

The study objectives were to:

  1. Describe how a collaborative process was used to modify the survey for quality improvement, provide continuous (monthly) patient feedback to medical groups and support teams within them to use the data to devise interventions to improve care.
  2. Assess the usefulness of continuous patient feedback and collaborative support for quality improvement, changes in patient experience that resulted and factors that promoted or impeded the use of data for improvement.

Methods


Design

The study evaluated a quality improvement collaborative using both survey data from patients and interviews with collaborative participants to describe and assess project outcomes.

Setting and participants

The collaborative project was organized jointly by the CAHPS team from Harvard Medical School, Boston and the Institute for Clinical Systems Improvement (ICSI) in Minnesota. ICSI is a statewide consortium of health plans, medical groups and hospitals that have worked together to develop clinical guidelines and a collaborative model of quality improvement since 1993.16 The project originally included nine medical groups in ICSI that had previously identified a shared interest in learning to use their survey data more effectively to improve patient-centred care. One group left the collaborative after the first meeting because of pressure of other work. Of the remaining participating medical groups, four served predominantly urban areas and four served smaller towns with rural populations. Three provided primary care services, one provided specialist services and four a combination (see Table 1). The project began in May 2003 and ran for 18 months until December 2004.

Table 1.   Services provided and population served by the eight medical groups participating in the collaborative project

Type of services provided                 Population served
                                          Rural and small town setting    Urban setting
Primary care services only                             1                        2
Primary care and specialist services                   3                        1
Specialist services only                               –                        1

Intervention

Developing the patient survey

All CAHPS® surveys have a similar development process that involves multiple steps. This process is designed to gather and apply input from relevant stakeholders and ensure the reliability and usefulness of survey results. Typical development steps include: (i) identification of domains to be measured; (ii) review of the literature; (iii) collection and review of existing instruments and related measures (through calls for measures in the Federal Register and the collection of public domain instruments); (iv) stakeholder input and review through Technical Expert Panels and requests for comments in the Federal Register; (v) focus groups with consumers or patients; and (vi) field tests.17–19 However, most of the early survey work focused on data that would be useful for consumer decision-making, as opposed to quality improvement. In May 2003, members of participating groups were asked for suggestions on how to modify the current survey15 to make it more useful for quality improvement. Suggestions for modified questions were summarized by the project team and a scoring sheet was distributed to the groups with a request that participants rank the importance of each. Project team members used that information (and other questions from the Ambulatory Care Experiences Survey20) to develop a survey instrument for the project (see Table 2).

Table 2.   Summary of the content of the modified CAHPS® survey used before and after the collaborative project

Categories of questions                          Number of question items
                                                 In the last 6 months   Most recent visit   Total
Major domains
 Office functioning: scheduling and visit flow            4                    3               7
 Access: getting needed care                              2                    0               2
 Communication and interpersonal care                     7                    4              11
 Preventive care                                          2                    0               2
 Integration of care                                      7                    1               8
Other questions
 Global rating of care                                    0                    1               1
 Willingness to recommend                                                                      1
 Verbatim improvement suggestions                         0                    1               1
 Sample confirmation                                                                           3
 Screener questions                                       6                    0               6
 Respondent demographics                                                                       4
Total                                                                                         46
Collecting and reporting survey data

The revised instrument was used to collect baseline survey data in each participating medical group. The groups provided lists of patients who had recently visited one of their physicians. These were generated from electronic scheduling system databases by staff who were not involved in the collaborative. Respondents were drawn randomly with the aim of achieving at least 100 completed surveys per group. This sample size afforded sufficient data to accurately characterize the groups, but not individual physicians. Patients were contacted by telephone and given the option of an interactive automated voice method (with operator assistance if required) or a conventional computer-assisted telephone interview by an operator. To evaluate the overall change in patient experience of care within the groups during the project, the survey was repeated in the autumn of 2004 with a new sample of patients drawn according to the same eligibility criteria as before.

Data were also collected continuously for each group from January through December 2004, using a subset of 30 questions that had been identified as having highest priority to participating groups (Table 3). The target sample was 25 completed surveys per group each month, with the intention that these data would support rapid-cycle quality improvement.

Table 3.   Questions from the shortened modified CAHPS® survey used for continuous data collection from groups throughout the collaborative project

Analysis and reporting of monthly survey data were conducted using the Quality Desktop™ (QD) reporting system developed by Quality Data Management Inc.21 Groups could view their data at any time via the internet using software that enabled them to plot control charts and histograms for different questions and composite measures (see Fig. 1). Because of the small sample sizes, results were reported at the site or group level but not for individual clinicians.
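The control-chart view of monthly proportions can be sketched with standard p-chart limits. The following is an illustrative sketch only, not the Quality Desktop implementation, and the counts and sample sizes are hypothetical:

```python
import math

def p_chart_limits(counts, sizes):
    """3-sigma control limits for a p-chart of monthly proportions,
    e.g. the share of favourable responses among ~25 completed
    surveys per month. counts[i] favourable out of sizes[i] surveys."""
    p_bar = sum(counts) / sum(sizes)  # centre line: pooled proportion
    limits = []
    for n in sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma),   # lower control limit
                       min(1.0, p_bar + 3 * sigma)))  # upper control limit
    return p_bar, limits

# Hypothetical four months of data for one group
p_bar, limits = p_chart_limits([20, 18, 22, 19], [25, 25, 25, 25])
```

A monthly proportion falling outside its limits would signal a change beyond ordinary sampling variation; with only ~25 surveys per month, the limits are wide, which is one reason some teams chose to aggregate several months of data.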

Figure 1.  Screen shot from the online tool for presenting the data developed for the collaborative by Quality Data Management Inc. showing data from one medical group.

Collaborative activities

Starting in October 2003, groups participated in a patient care experience ‘action group’ that followed a model of collaborative learning developed by ICSI.16 Each medical group identified a senior leader who could attend five full-day bi-monthly meetings. The leaders were three medical directors, three directors of clinical improvement or service quality, one group manager and one quality improvement co-ordinator. Each leader brought other relevant team members to the meetings, including clinic managers, practice staff, other clinicians and quality improvement staff. In all, 50 health professionals and staff members attended the meetings.

Two initial meetings were used to orient teams to the baseline survey (modified CAHPS) data and to help them select specific topics to work on. In addition to the data views provided by the QDM system, the CAHPS team developed ‘focus charts’ for each medical group and presented them at the second of these meetings (see Fig. 2). ‘Focus charts’ plot the adjusted percentile ranking of the group’s survey scores for each area of care against the correlation of these scores with patients’ overall willingness to recommend care in their group. The matrix allows the group to view its performance relative to the other groups (y-axis) in the areas that patients consider most important (x-axis), and thus more readily identify priorities for improvement.

Figure 2.  Example of a ‘focus chart’ using baseline modified CAHPS® survey data to identify priority areas of patient-centred care for improvement for one group within the collaborative.
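The two axes of a focus chart reduce to two simple computations, sketched below with hypothetical data. This is an illustrative sketch only: the actual analysis used case-mix-adjusted scores, and that adjustment is omitted here.

```python
import numpy as np

# Hypothetical mean 'office functioning' scores (0-100) for eight groups
office_scores = [70, 75, 68, 82, 77, 73, 80, 65]

def percentile_rank(all_group_scores, own_score):
    """y-axis: the group's standing relative to the other groups,
    as the percentage of group scores at or below its own score."""
    s = np.asarray(all_group_scores, dtype=float)
    return 100.0 * np.mean(s <= own_score)

def importance(item_responses, recommend_responses):
    """x-axis: correlation of patients' scores on one care domain with
    their overall willingness to recommend care in the group."""
    return float(np.corrcoef(item_responses, recommend_responses)[0, 1])

# A group scoring 82 on office functioning sits at the top of the pack
y = percentile_rank(office_scores, 82)
```

Plotting one (importance, percentile rank) point per domain then shows at a glance where a group scores poorly on something patients weight heavily.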

Three subsequent meetings included didactic presentations, discussion of experience using the data and implementing interventions, encouragement and presentation of progress and presentation of results. The meeting agendas were flexible and responsive to the emerging needs of the groups. Members of the CAHPS team presented information on key drivers of patient experience, achieving excellence in patient-centred care, behavioural interviewing techniques for hiring staff, strategies for building patient–clinician relationships, improving communication between healthcare team members, and involving patients and their families in the redesign of health care. Groups were also given The CAHPS® Improvement Guide,22 which includes guidance about launching patient-centred quality improvement activities, tools to identify possible causes of different survey results within groups, a review of strategies and interventions for improving patient-centred care and other resources.

Conference calls were used to discuss progress between in-person meetings and members of the project team were available to give advice on the design and monitoring of specific interventions. In addition, groups were required to provide monthly written reports describing their aims, interventions and progress and to give a final presentation.

Evaluation methods

Interviews with leaders from the groups

One of the authors (ED), who had not been involved in the design or conduct of the collaborative, interviewed leaders from the groups about the project. A semi-structured interview guide covered the usefulness of the new survey and the collaborative resources, project management and responses to data within groups, decisions about improvement priorities and the care processes to focus on, staff skills and training, making, evaluating and sustaining interventions within groups and lessons from the project. The topic areas were based on a previously reported framework which had identified three sets of factors – organizational, professional and data-related – that could be barriers to, or promoters of, the use of patient survey data for quality improvement.10 Leaders were approached for an interview at the beginning and end of the collaborative in January 2004 and 2005. All eight leaders agreed to initial interviews. By the time of the follow-up interviews, two groups had left the collaborative and two leaders had changed positions, but six original leaders and one new leader were interviewed. Four leaders invited other members of their collaborative team who they thought had relevant experience to participate in the interview – two directors of customer relations and two quality improvement co-ordinators. Interviews lasted 60–90 min and were conducted in the work setting, tape-recorded, fully transcribed and then reviewed by the participating team leaders.

Analysis of the data included (i) a careful reading of the transcripts to understand ‘the story’ of each team’s response to their data; (ii) coding of transcript sections into different topic areas; (iii) extraction of data on views about the collaborative and comparison between leaders’ responses for different topics; (iv) extraction of reports about barriers and promoters to data use and lessons from the project and comparison of these to the prior framework10 and across the teams and (v) comparison of main barriers, promoters and lessons to significant changes in the patient experience scores achieved across the groups. Quotes are included in the text to represent and summarize key points made by the collaborative participants and to support an interpretation of the key factors for successful data use.

Patient survey data

Patient survey data collected before, during and after the project were analysed and compared to the goals each group had set using a two-tailed chi-squared test of the difference in proportions. Although CAHPS investigators have published sophisticated analyses of patient factors (case-mix) affecting survey responses23 and sources of interregional and site variability,24 we used simple descriptive analyses because of the small number of sites and observations within sites.
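The pre/post comparison for each goal amounts to a 2 × 2 chi-squared test on counts of favourable responses. A minimal sketch (1 degree of freedom, no continuity correction; the counts below are hypothetical, not the study's data):

```python
import math

def chi2_two_proportions(x1, n1, x2, n2):
    """Two-tailed chi-squared test of the difference between two
    proportions, e.g. x1/n1 favourable responses at baseline vs
    x2/n2 post-project."""
    p_pool = (x1 + x2) / (n1 + n2)
    observed = [x1, n1 - x1, x2, n2 - x2]
    expected = [p_pool * n1, (1 - p_pool) * n1,
                p_pool * n2, (1 - p_pool) * n2]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # For 1 df the chi-squared survival function is erfc(sqrt(chi2/2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Hypothetical: 67 of 100 favourable at baseline vs 51 of 100 after
chi2, p = chi2_two_proportions(67, 100, 51, 100)
```

With roughly 100 completed surveys per group at each wave, only fairly large shifts in a proportion reach significance, which is consistent with the mostly non-significant changes reported in Table 4.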

Participant feedback

Further information for analysis was available from progress reports from each of the groups, the final reporting session and observations of project members on running this particular collaborative. A draft of this paper was shared with the team leaders.

Results


Evaluation of the collaborative intervention

Developing a survey tool relevant to quality improvement

Suggestions for changes to the CAHPS® survey included adding more specific questions about coordination and integration of care, office functioning and visit flow, recent visits with specific personal doctors and open-ended questions on ideas for improvement. The revised survey included 46 questions in five major areas (see Table 2). It began by naming a physician whom the patient was believed to have visited recently, asked the patient to confirm this and then asked one series of questions about experiences over the past 6 months with the physician’s practice and another about the patient’s most recent visit with the named personal physician. The leaders from the groups thought that the modified survey was a more useful tool for quality improvement than the original survey tool and they appreciated being involved in its development. However, the revised survey was time-consuming to complete. A further tension that emerged was between the need for questions that could be used to compare care across groups and those that would discriminate particular care processes within each. At follow-up five groups mentioned the need to retain flexibility. For example,

The initial survey needs to be broad enough that you can pick out the hot spots and then allowing a subset of things to dig deeper in. And then the individual site might need to do what we did which is very, very specific (to our clinic). (Team Leader, Group 3)

Collecting and reporting the survey data

Seven groups were able to supply patient lists for monthly sampling and once some initial practical issues were worked out, all leaders had positive comments about the monthly survey data and its potential to keep up the momentum for change. For example,

You can just have a better conversation about the data when it is just a couple of weeks ago than when it (the survey) was last year…You can say, ‘Oh, last week this happened and that’s why that happened and we can’t let that happen again’. You can see it. (Group manager, Group 7)

Five leaders said they had not looked at or used their data every month. Reasons given were lack of time, especially when interventions were not planned or taking place, and wishing to aggregate data every 3 months to have a better chance of detecting a change. For example,

But it’s a lot of information if you’re not actually doing anything with it…But for us it was invaluable to realize that we made the wrong choice (over our intervention) before it got too late to realize that the patients hated it. (Team leader, Group 1)

The online reporting system was seen as a useful tool. For example,

It was really easy to figure out what the responses were. As with anything once you start seeing a response one way or another you start to develop more questions that you can’t get at from a survey. But overall they were useful. We would have been able to slice and dice it just about any way we would have wanted to. (Team leader, Group 4)

Four leaders reported that when they began using the reporting system they found aspects of the function or display they would like to manipulate in different ways. In particular, three succeeded in using it to present at large staff meetings to generate action plans within their organizations. Two other leaders found it time-consuming or confusing to navigate in such meetings and two presented only within their team.

Teaching and supporting teams to make change

All leaders valued the ‘focus charts’ which they saw as presenting information simply and avoiding the need to look through numerous tables and charts. They did not all use them to set their goals. Reasons given included the charts being presented too late, communication and integration of care being very difficult areas to tackle, and the importance of beginning in an area where the organization was already focussing. While groups were encouraged to reassess their areas of focus later in the collaborative, this was resisted somewhat. Groups focussed on up to eight specific survey items, and their choice of foci seemed to be influenced by previous initiatives in patient-centred care and current organizational goals. Five chose to work on office function, four on integration of care, three on communication and interpersonal care, one on preventative care and one on access to care.

At the initial interview five leaders were using The CAHPS® Improvement Guide and beginning to test out ideas from it or distributing suggestions from it within their group. By the time of the follow-up interview six had used at least one tool or intervention. Examples of tools used to assess possible reasons for survey results included walkthroughs (five groups), patient interviews (two groups), patient focus groups (two groups) and cycle-time surveys (one group). Four groups used interventions from the guide: scripting for clinic staff (two groups), communication skills training (one group) and patient education materials (one group). Two groups developed their own interventions. Two leaders mentioned the need for more practical information, and one suggested a greater emphasis on teaching the use of tools and interventions at the collaborative meetings.

All leaders had attended previous ICSI action groups and were positive about the role of this one in keeping up momentum and motivation, and providing accountability and a useful learning experience. For example,

You hear similar experiences from a slightly different perspective. It lets you know that you are not alone…with these problems…Culturally we tend to blame our workers if we get bad outcomes but if the whole world is getting a bad outcome…perhaps you need to think that it’s a common culture…We all have stories about success and failure and sharing those stories is helpful. (Team leader, Group 5)

This particular action group was noted to include more learning from outside experts than from the experience of other groups, possibly because of the variety of topics each chose to work on. Six leaders valued access to the knowledge and support of the CAHPS team, for example,

I valued the encouragement of me individually and feedback on what we’re trying to do…tremendous instructive advice about how we’re trying to approach things. And whether or not we were successful, those pieces were invaluable and made the whole thing worthwhile. (Team leader, Group 6)

However, one felt strongly that a more practical and less academic approach was needed.

Evaluation of outcomes within the groups

Completing the change cycle

Progress of teams through the change cycle varied. Two of the original eight groups got no further than looking at their initial data or setting their goals. After using an assessment tool to gather further information about their services, six leaders decided on and began to mount at least one intervention. Four leaders reported implementing their interventions as originally planned, but two reported problems making or monitoring them.

Improving patient experience

There were both positive and negative changes in survey results between the pre- and post-project survey for all groups. These changes were not always related to the original improvement goals and very few were statistically significant (see Table 4). However, of the four groups in which leaders said they had implemented an intervention, three had some changes in the direction they had hoped for. For example, one (group 2) focussed on giving patients more information about clinic waiting times and test results. They tried to increase the extent to which staff communicated with each other and with patients when delays were occurring and found more patients reported that they had been kept informed about this in the last 6 months. This group also used a newly implemented electronic record system to encourage clinicians to send patients their test results. They met their target for improving patients' reports of receiving their results although this difference was not significant. Another (group 7) had focussed on changing their system for checking that charts included recent test results and discharge letters to improve patients’ ratings that their doctors had information available and appeared knowledgeable of their medical history (see Table 4).

Table 4.   Priorities, goals, interventions and results based on the modified CAHPS® survey data for the six medical groups in the collaborative who completed a change cycle (baseline score → post-project score; significance by chi-squared two-tail test; NS = not significant)

Group 1
 Focus chart priorities: Not obvious
 Topic area chosen: Office function
 Goals set:
  1. Decrease by 50% patients reporting they were not kept informed of a clinic wait (Q33): 93.5 → 89.5 (NS)
  2. Increase patients reporting they are taken to the exam room within 15 min of their appointment time (Q32): 67.1 → 51.4 (P < 0.05)
 Interventions made: Reassess the process used to inform patients during their wait and develop scripts for staff; redesign the language used to inform patients about appointments; address delays in check-in, communication and late providers and patients

Group 2
 Focus chart priorities: Communication and interpersonal care; Integration of care
 Topic areas chosen: Office function; Integration of care
 Goals set:
  1. Increase by 50% patients reporting they were informed of a clinic wait (Q33): 84.5 → 83.6 (NS)
  2. Increase patients reporting that they are taken to the exam room within 15 min of their appointment time (Q32): 51.8 → 62.9 (NS)
  3. Increase to 80% patients reporting that someone from the doctor’s office followed them up and gave them their test results (Q28): 70.8 → 80.7 (NS)
 Interventions made: Introduce a method for reception staff to communicate with nursing staff; survey and follow patients to identify where waits occur and redesign processes for lab tests; train practice teams to use the new electronic medical record to send letters to patients with normal lab results

Group 5
 Focus chart priorities: Not obvious
 Topic areas chosen: Office function; Access: getting needed care; Integration of care; Preventative care
 Goals set:
  1. Increase from 42% to 65% patients always able to schedule an appointment as soon as they needed it (Q5): 83.0 → 77.9 (NS)
  2. Increase patients kept informed of their wait (Q33): 56.4 → 54.0 (NS)
  3. Increase from 65% to 85% patients always able to schedule an appointment with their personal doctor (Q6): 89.2 → 84.4 (NS)
  4. Increase patients reporting that someone from the doctor’s office followed them up about their test results (Q28): – (NS)
  5. Increase from 60% to 85% patients reminded that preventative care services were due (Q16): 69.3 → 66.2 (NS)
 Interventions made: Collect patient feedback information and add to that from customer relations and the message centre; conduct a clinic walkthrough and develop staff training materials, scripts and a customer code of service; design changes to the patient management system to facilitate screening and selection of available appointments; self-audits by medical support staff, increased participation by providers and use of lab staff to review charts

Group 6
 Focus chart priorities: Communication and interpersonal care; Integration of care
 Topic areas chosen: Communication and interpersonal care; Integration of care; Office function
 Goals set (increase to 95% patients reporting each of the following):
  1. That doctor definitely explained things in a way that was easy to understand in most recent visit (Q34): 90 → 94.5 (NS)
  2. That doctor definitely explained things in a way that was easy to understand in last 6 months (Q13): 93.3 → 91.6 (NS)
  3. That doctor definitely spent enough time with them during most recent visit (Q37): 91 → 84.5 (NS)
  4. That personal doctor definitely listened carefully to them in last 6 months (Q14): 93.3 → 94.2 (NS)
  5. That personal doctor had all the information needed to correctly diagnose and treat their health problems in the last 6 months (Q23): 86.6 → 87.7 (NS)
  6. That the doctor seemed informed and up to date at the most recent visit (Q38): 79.9 → 84.2 (NS)
  7. That staff at the doctor’s office treated them with courtesy and respect at most recent visit (Q39): 96.0 → 96.0 (NS)
  8. That staff at the doctor’s office treated them with courtesy and respect in the last 6 months (Q7): 95.9 → 93.5 (NS)
 Interventions made: Hold a 3-day communication training course for all doctors in the practice; redesign the chart delivery process to clinic sites; conduct a patient focus group to learn about office function, co-ordination and communication issues; hire a new director of patient access and customer service

Group 7
 Focus chart priorities: Not obvious
 Topic area chosen: Integration of care
 Goals set:
  1. Increase to 40% patients reporting that doctor has an ‘excellent’ knowledge of their medical history (Q21): 75.6 → 82.7 (NS)
  2. Increase to 65% patients reporting that doctor ‘always’ had the right information available in the last 6 months (Q23): 84.8 → 89.0 (NS)
 Interventions made: Integration of chart notes and printout of notes from any previous or urgent care visit by the time of the next visit

Group 8
 Focus chart priorities: Not obvious
 Topic area chosen: Communication and interpersonal care
 Goal set: Increase by 50% patients with congestive heart failure reporting understanding of and satisfaction with their follow-up plan, indications to seek help and medication use
 Interventions made: Carry out an additional patient survey; design and implement a shared care record for this specific subgroup of patients in consultation with patients and staff
 Results: n too small to assess

Survey data from two other groups showed mixed or negative effects. After training in communication skills, one (group 6) found a slight improvement in doctors explaining things in a way that was easy to understand but a decline in patients’ reports that doctors spent enough time with them. Another (group 1), which had attempted to change the clinic organization to improve visit flow, found that patients reported they were not informed about their wait and that significantly more reported longer waiting times in the clinic.

Factors affecting the use of survey data for improvement

The leaders in the two groups that got no further than looking at their data or setting their goals reported that other organizational change had presented a strong competing priority to the project. In addition, one reported a lack of quality improvement staff. For example,

We made some interventions but they became so insignificant to some of the other issues that were going on, that to work on them, putting more time and energy into them didn’t seem worthwhile…we needed to change administration, we needed to change our whole way of functioning in that group. (Team leader, Group 4)

The two leaders who reported difficulty making and monitoring changes found that non-clinical staff required more training and confidence than expected to change the way they interacted with patients. For example, one group very quickly consulted patients and clinicians to develop an education booklet and care record for patients with heart failure. Rather than relying on physicians to give this to patients, it was agreed that nursing assistants should do so. However:

Just knowing when to give the patients the next section in the binder and when to reinforce education, they were a little uncomfortable with it. So it takes competency and the development of skills. They’re certainly capable of doing it but they’re not comfortable. So we have to develop their confidence in their ability to do that. (Team leader, Group 8)

In the second group, reception, nursing and clinical staff felt uncomfortable about changing the information they gave to patients about their wait:

If the nurse knew her doctor hadn’t shown up they were supposed to let the front desk know. But instead of looking at that as an accommodation for patients, they looked upon it as collecting negative information on their performance…If the patient left to refill the slot at this late a date, then how would they make up their productivity? (Team leader, Group 5)

In the two groups that found mixed effects on their patient experience there was some evidence for unintended consequences of interventions. For example, in the group where all clinicians underwent training in communication skills, some aspects of the training package were perceived as conflicting with the group culture and clinicians’ views about themselves. For example,

But the negativity…much of it centred around, the idea that as an individual, I don’t have a problem. I know who the problem is, it’s Doctor ‘A’ over there who’s the problem and I don’t need to work on this because I do just fine, my patients like me. It’s that person who doesn’t know how to do it and you need to focus on them. (Team leader, Group 6)

In the other group it appeared that when patients were asked to arrive for tests and preparation before seeing the physician, they arrived even earlier and interpreted this time as further delay. None of the three groups working on waiting times managed to reduce waits caused by physician delay.

There was a suggestion, derived post hoc from the overall results, that the two groups that succeeded most clearly in improving their patient experience worked on interventions that required no major change in clinician behaviour specifically for the project. For example,

We went up on the electronic medical record and the good news was that one of the functions is to do a results letter. So, we could build on that and we did probably focus on or push that earlier and more than other clinics may have just because it was sort of already on our radar…That was a huge skill for the physicians…learning how to function with the electronic medical record. And we didn’t bring it in to do the letters, but in order to do the letters they had a huge (task). (Manager, Group 3)

These two groups appeared to have established managers who had developed and overseen tight management processes within their sites for making change. Both had aimed for modest improvements that did not require complex changes, had sought minimal input from the CAHPS team and had not checked their monthly data until the end of the project.

Other lessons and outcomes from the project

All leaders reported that their own and team members' skills in understanding data and making patient-centred interventions had improved, and stated their intention to continue to collect data and to work on other initiatives. The four leaders who had difficulty implementing interventions or producing change appeared most positive about what they had learned from the project and about having removed barriers in their groups to using the data. For example, four reported that survey data now had a new prominence in the organization and that they had secured extra resources to use them in quality improvement. Three leaders reported a shift in the cultural values of their organization towards patient-centredness, and two reported the engagement of senior clinical leadership in this agenda. Four of the groups had simultaneously been involved in another ICSI action group on developing a culture of quality.

Discussion


Summary of findings

This project tested the use of a modified version of the CAHPS® survey to support quality improvement in a collaborative involving eight medical groups in Minnesota, USA. Team leaders from the groups said that they found the survey data useful for quality improvement and for comparing their performance to that of other groups, but they suggested adding more specific questions to monitor care processes of particular interest to their group. Leaders found monthly reports of the data a more timely and effective stimulus for improvement than surveys that had been administered less frequently. However, most group leaders tended to review the data quarterly and needed help and more software support to present it to staff and engage them in making change based on the results. Team leaders valued a collaborative approach to learning about their survey results and having access to individuals with knowledge about the survey. Seven groups set goals for improvement based on their data, and six used tools to assess care processes and mounted interventions to improve care.

After 12 months of the project there were small improvements in patient experience in two medical groups, but in others these were mixed and not consistently related to actions taken by the teams. There was a suggestion that the two improved groups had strong quality improvement structures, had aimed for small improvements and avoided proposing major changes in clinician behaviour. Leaders in the other four groups reported difficulty changing the behaviour of both clinical and non-clinical staff, challenges in monitoring and sustaining interventions and unintended consequences of interventions. All leaders nonetheless said that the project had increased their understanding of patient survey results, helped them to remove previous barriers in their organization and raised the profile of patient survey feedback within it.

Limitations of this evaluation

The medical groups taking part in this collaborative were all self-selected, motivated to participate and relatively experienced in quality improvement. All had recently taken part in other ICSI collaboratives focussing on improving access to care, and all were based in Minnesota, a part of the USA that tends to have higher than average patient experience scores. The outcomes of the collaborative were evaluated after only 12 months, which may have been too early for major changes in patient experience to emerge. Groups may have required more time to recognize and learn from the limitations of their early interventions and to persist in implementing the complex changes required. The different targets and interventions that groups chose for quality improvement made it difficult to draw out lessons for any particular one or to test a priori hypotheses. Using retrospective self-reports to identify barriers may have missed barriers within groups or teams that leaders preferred not to disclose. The interviewer was also aware of the small changes in patient experience that had occurred, and the suggested factors associated with successful data use were derived post hoc.

Comparison to other findings for achieving patient-centred care

Many of the outcomes reported by the group leaders were in learning, organizational focus and the development of capacity within their groups to use patient survey data for quality improvement. This kind of foundation development may be critical before groups are in a position to produce change. It requires support from leaders within medical groups, resources in terms of time and finance, the will to undertake cultural change and transformation, and a commitment to institutionalize new ways of delivering care. One report from a single Finnish hospital9 describes how a structured approach to feeding back focussed survey data from small samples of patients attending outpatient clinics was followed by improved reports of care by patients over a two-year period. Feedback was given to clinic staff and managers and accompanied by teaching about the survey results, meetings across departments and sharing of good practice between them.9 The results from other studies providing patient feedback to individual clinicians are mixed. One Dutch randomized controlled trial found no difference in patient reports about general practitioners who received feedback; moreover, by the end of the trial the study clinicians viewed survey results less positively than controls.13,14 On the other hand, an Australian trial of feeding back patient reports to trainee general practitioners found that this did improve their communication skills as rated by patients, particularly at earlier points in training.25 Our evaluation reinforces the difficulty that even committed staff with the support of a collaborative have in mounting and sustaining complex interventions that require staff to change their interactions with patients.10–14

Lessons for future work

Using a collaborative for improvement

Our findings indicate that success in quality improvement of patient-centred care requires organizations to be ready to change their culture and to sustain such an initiative over time. As this requires the full engagement of senior leadership, ICSI now requires that clinical and/or administrative leaders are actively involved in collaborative projects from the outset, rather than simply passively endorsing the work. Organizations also need to understand how well they perform in different areas and to make strategic decisions about where to focus their efforts to change. As a result of this work, ICSI is now using an assessment tool to identify where the initial focus should be within a group, i.e. in developing skills in consensus, measurement or strategic planning. This will also help identify a feasible time frame for the project for particular groups. It may, for example, be more sensible for some groups to make progress in other areas critical to achieving patient-centred care. Such areas might include improving employee satisfaction, building strong teams, developing systems solutions for patient-centred care and increasing staff comfort in working with patients to redesign care processes. Patient surveys may in themselves serve to involve patients in care and to improve trust in providers. It is possible that involving patients in the improvement work may have some additional effect, as has been suggested by a recent systematic review of user involvement strategies.26 Finally, when medical groups come together to work as a collaborative, committing to work on similar areas will increase both the chance of positive results in some groups and the opportunity for subsequent learning between groups.

Using patient survey data for improvement

When using patient survey data for quality improvement, a tension emerges between the need for questions that enable comparisons of care across medical groups and those that would discriminate particular care processes within each. The results of the former can be used for overall surveillance of important issues and trends. Once the decision has been made to focus on a particular topic, more detailed questions may be needed to monitor the effects of specific interventions. In terms of persuading other staff in the organization to make change based on the data, it might help teams to rehearse presenting their data using the online tool and to anticipate possible scepticism or attacks on the quality of the data itself. A limitation of this project is that the sample size for the baseline survey allowed performance to be assessed only at the group level rather than at the level of the individual clinician. It is possible that an initial presentation of such physician-level data may have provided a greater impetus within medical groups for improvement activities.27 The fact that leaders reported reviewing the continuous data at most at quarterly intervals suggests that in the future its collection and reporting might be focussed on areas of highest potential return. This focus could also include step-by-step teaching of each of the relevant assessment and intervention tools from The CAHPS® Improvement Guide22 in the action groups, explaining their use as well as potential pitfalls. Groups might also decide at the outset whether to choose topics where barriers are less likely to emerge or to tackle head on issues that are priorities for patients and remove any barriers over the longer term.

Implications for future research and policy

The results of this evaluation emphasize the need for more research on the effectiveness of interventions that healthcare organizations can put in place to improve patient experience of care, particularly in the areas of communication and integration of care. There is also a need for more research on the importance of involving leaders in collaboratives, on ways of supporting change and on designing the infrastructures that provide a foundation for these interventions. Health policies also need to be consistently supportive of these goals, both in keeping patient-centred care a priority and in setting expectations that fully recognize the scale of the task and the time frame that this improvement needs.28–30 Although public reporting of patient experience may provide an incentive for organizations to initiate quality improvement efforts,31 such external incentives alone are unlikely to be sufficient. Teaching younger professionals and medical students about patient-centred communication strategies and quality improvement will be critical to training a new generation of clinicians to be more knowledgeable about and comfortable with these issues.32

General conclusions

It may be possible to achieve measurable progress in improved patient experience in relatively simple areas, over short periods of time, but it is difficult to sustain these improvements or to leverage more substantial change without a more comprehensive strategy that is organization-wide and regarded as fundamental to organizational success. Such a strategy is likely to require a committed and engaged leadership, a work environment that supports clinicians and other staff in the redesign of patient care using patient survey feedback, and the involvement of patients and families in the process.

Acknowledgements


We thank participants of the collaborative for taking part in the interviews, Aimee Wickman for transcribing them and Beth Green for her support to the collaborative.

ED undertook work on this evaluation while supported by a Harkness Fellowship in Health Policy from the Commonwealth Fund, a New York-based private independent foundation. The views presented here are those of the authors and not necessarily those of The Commonwealth Fund, its director, officers, or staff.

Sources of funding


The work on this study was supported by a cooperative agreement between the Agency for Healthcare Research and Quality (AHRQ; grant number 5U18 HS00924) and Harvard Medical School to support the CAHPS® project, by the Institute for Clinical Systems Improvement (ICSI), Minnesota, and by a Harkness Fellowship for ED from the Commonwealth Fund, New York.

Ethical approval


Institutional Review Board approval for the interviews was obtained from Harvard Medical School.

References
