Institutional use of National Clinical Audits by healthcare providers

Rationale, aims, and objectives: Healthcare systems worldwide devote significant resources to collecting data to support care quality assurance and improvement. In the United Kingdom, National Clinical Audits (NCAs) are intended to contribute to these objectives by providing public reports of data on healthcare treatment and outcomes, but their potential for quality improvement in particular is not realized fully among healthcare providers. Here, we aim to explore this outcome from the perspective of hospital boards and their quality committees: an under-studied area, given the emphasis in previous research on the audits' use by clinical teams. Methods: We carried out semi-structured, qualitative interviews with 54 staff in different clinical and management settings in five English National Health Service hospitals about their use of NCA data, and the circumstances that supported or constrained such use. We used Framework Analysis to identify themes within their responses. Results: We found that members and officers of hospitals' boards and quality committees engaged selectively with NCAs, focusing on publicly reported "headline" measures, and that their perceptions of the audits' legitimacy shaped how far the data were used for quality improvement. Conclusions: The audits' potential may be realized more fully through measures that enhance their perceived legitimacy, for example by moving from cumulative, retrospective reports to real-time reporting, clearly presenting the "headline" outcomes important to institutional bodies and staff. Measures may also include further negotiation between hospitals, suppliers and their commissioners about the nature and volume of data the latter are expected to collect; wider use by hospitals of routine clinical data to populate audit data fields; and further development of interactive digital technologies to help staff explore and report audit data in meaningful ways.

Whilst the NHS, HQIP and suppliers attempt to ensure that NCAs represent value for money and that only essential audits with the potential for significant impact are undertaken (by, for example, suppliers consulting with clinical advisory groups), NCAs nevertheless consume many resources. 4 Hospitals not only pay a proportion of NCAPOP running costs, but must also meet the substantial costs to themselves of participating in the audits (associated, for example, with data entry and validation) from within their existing budgets.
NCAs, then, are designed to facilitate measurement of care quality to support both QA and QI. As others have noted, 5,6 there are important differences between these processes. Quality assurance involves the use of quality monitoring and reporting, informed by national standards, guidelines and targets, to ensure that minimum standards are met and poor performance is addressed. Thus, it focuses on providing reassurance about current care quality. By contrast, QI involves the use of systematic methods and tools to improve outcomes for patients continuously. Here, data are used to identify areas for improvement and inform how care could be improved. Whilst there is evidence for both QA and QI associated with NCAs and other such systems worldwide, 7-11 there are reports of variation in how hospitals engage with them, particularly for QI, and consequently their potential to support systematic improvement in patient care and safety is not realized. 4,12-14 Against this background, this paper suggests ways in which national audit data might be used more fully, focusing on their use for QI by the bodies and staff that govern hospitals, and the consequences for clinical teams.

| Sample
Our sampling strategy aimed to capture variation in hospitals, NCAs and user groups. Data were collected across five English NHS hospitals, including three large teaching hospitals and two smaller district general hospitals. Many of the staff we interviewed (especially institutional staff) worked with multiple NCAs, but to obtain a more detailed picture of audit use, we focused on two NCAPOP audits: the Myocardial Ischaemia National Audit Project (MINAP) and the Paediatric Intensive Care Audit Network (PICANet). Using purposive and snowball methods, we recruited 54 participants across institutional areas and clinical units within the hospitals (see Table 1). We started by interviewing lead NCA contacts, often senior clinicians, and asked them to identify others involved with the audits, including 32 clinicians and 16 managers (11 of whom worked institutionally as members or supporting officers of hospital boards and their quality sub-committees).

| Data collection and analysis
Semi-structured qualitative interviews took place with participants between November 30, 2017 and June 6, 2018, using a schedule developed by the research team. The schedule was reviewed by the study Lay Advisory Group and revised, in light of their feedback, to ensure that the interviews covered topics relevant to patients. The interviews were carried out by NA, LM and RR, and ranged from 33 to 89 minutes, with a median length of 57 minutes. They included a discussion of participants' backgrounds and roles, their use of NCA data, and the circumstances that supported or constrained such use. Audio recordings of the interviews were transcribed verbatim and anonymised.
Interview data were analysed using Framework Analysis, 16 an approach developed for use with qualitative data in applied policy research, which involves familiarizing oneself with the data through repeated reading of transcripts, before developing a thematic framework and indexing. Our thematic framework was developed by the research team, who agreed initial codes for indexing the data and then indexed five transcripts to test the applicability of codes and assess agreement. Codes were refined and definitions clarified where there was variation, and refined codes were applied to all transcripts, using NVivo 11. Subsequently, themes were mapped and interpreted: a process that enabled practice to be examined within and across cases, and convergence and divergence in participants' responses to be identified and explored.
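For illustration only, the sketch below shows the kind of structure that indexing against a thematic framework produces: excerpts charted into a matrix of cases (hospitals) against codes. The theme and code labels, and the excerpts, are hypothetical; the study's own indexing was carried out in NVivo 11, not in code.

```python
from collections import defaultdict

# Hypothetical thematic framework: top-level themes and the codes used for indexing.
# These labels are illustrative only, not the study's actual coding framework.
FRAMEWORK = {
    "Use of NCA data": ["board reporting", "clinical monitoring"],
    "Constraints on use": ["report format", "data collection burden"],
}

def index_excerpt(indexed, case, code, excerpt):
    """Index (chart) a transcript excerpt against a code for one case (hospital)."""
    indexed[(case, code)].append(excerpt)

def framework_matrix(indexed):
    """Summarize the charted data as a case-by-code matrix of excerpt counts."""
    cases = sorted({case for case, _ in indexed})
    codes = [code for codes in FRAMEWORK.values() for code in codes]
    return {case: {code: len(indexed.get((case, code), [])) for code in codes}
            for case in cases}

indexed = defaultdict(list)
index_excerpt(indexed, "Hospital A", "board reporting",
              "The board only sees the headline mortality figures...")
index_excerpt(indexed, "Hospital B", "report format",
              "The annual report runs to dozens of pages...")
print(framework_matrix(indexed))
```

Mapping and interpretation then proceed by reading across the rows and columns of such a matrix, comparing how each code is expressed within and across cases.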

| Ethics
The University of Leeds School of Healthcare Research Ethics Committee gave ethical approval for the study (approval number: HREC16-044). All participants received an information sheet explaining the study's aims, how their input would be used, and how confidentiality would be assured, and they gave written, informed consent to take part.
Where face-to-face interviews could not be arranged and telephone interviews took place instead, verbal consent was recorded.

| How NCA data are used institutionally in hospitals
English NHS hospitals are governed by boards, which have a remit to build public and stakeholder confidence in the quality, safety, responsiveness and value of the healthcare they provide. 17 In all five hospitals in the study they discharged this responsibility by monitoring a wide range of performance metrics associated with local and national agendas. Often, such monitoring was informed by dashboards that displayed performance levels for various metrics, and sought to align them.
In general, the detailed information about treatment and outcomes provided by NCAs did not feature among these core metrics, except in the case of NCA measures deemed by boards to be publicly and politically sensitive, such as mortality or waiting times. In maintaining oversight of such "headline" measures, institutional staff focused their attention on the cumulative (often annual) public reports provided by NCA suppliers, which include national data summaries.
Institutional staff were motivated to monitor their hospital's performance in these reports because of their public nature, and the risk to their reputations for safe and effective care if they appeared as "negative outliers" in the reports, performing below acceptable levels:

Because it's such a public issue is the mortality data, not only in the public but also in the sort of the management of the Trust, they've always got an eye on the real headline values from each of the national audits.

Managers complained about the number and length of reports, which often extend to well over 30 pages of A4 text, plus long data tables. Committee members were also frustrated by reports that did not unambiguously highlight the "headline" metrics and other information they considered important, such as benchmarking data, or which did not make clear recommendations for institutional action. There was a need for "tools that can help point us in the right direction rather than having to work it out for ourselves" (Institutional quality manager) and it was suggested that dashboards could fulfil this function in institutional committees, because they present data clearly and concisely.
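The paper does not describe how NCA suppliers derive outlier status. Purely as an illustration, assuming the common approach of comparing a hospital's observed event rate with the national rate using funnel-plot-style control limits, a flag of the kind a dashboard might surface could be computed as follows; the metric, figures and thresholds are assumptions, not details from the study.

```python
import math

def outlier_flag(events, cases, national_rate, z=3.0):
    """Flag a unit whose observed event rate falls outside approximate control limits.

    Uses a normal approximation to the binomial around the national rate;
    z=3.0 corresponds roughly to the 99.8% limits often drawn on funnel plots.
    """
    observed = events / cases
    se = math.sqrt(national_rate * (1 - national_rate) / cases)
    upper, lower = national_rate + z * se, national_rate - z * se
    if observed > upper:
        return "negative outlier (above upper limit)"
    if observed < lower:
        return "positive outlier (below lower limit)"
    return "within limits"

# Hypothetical example: 40 deaths in 400 admissions against a national rate of 5%.
print(outlier_flag(events=40, cases=400, national_rate=0.05))
```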

| Unrealistic and changing metrics
As well as expressing concern about the presentation of NCA reports, institutional staff noted that some audits included measures they regarded as unrealistic and unaffordable:

| DISCUSSION
The literature on use of NCAs and other national and international audit and benchmarking systems paints a picture of varying engagement and missed potential for QI, sometimes associated with insufficient institutional support. 4,12,13 Our study highlights factors that may generate outcomes like these, relating particularly to the way in which managers, boards, and quality committees regard and use the audits, and the effects on their relationships with clinical teams. Such findings are important, given the significance accorded to organizational and board oversight of QI, 5,15,18,19 as well as the substantial healthcare resources consumed by data collection and validation for NCAs in the United Kingdom 4,20 and comparable systems worldwide that aim to support quality improvement. 21

In our study, institutional and clinical staff members' differing perceptions of NCA legitimacy appear to be key to this outcome, limiting institutional responses to clinicians' requests for action and expenditure for QI based on audit data. These differing perceptions were encapsulated succinctly by the consultant cardiologist quoted above, who contrasted clinicians' trust in NCA data-"We're supposed to perform at this level and we're currently at this level and we need to do something about it"-with managerial doubts-"Meh, tell me another one, I hear that all the time."

Institutional theory emphasizes the importance of legitimacy as a motivating factor for organizations and their managers, encouraging them to respond to demands. 22-24 We found this to be the case with boards and their quality committees, which engaged with those NCAs for which participation was mandated by NHS England; when they appeared in public NCA reports not to be providing safe and effective care; and when NCA performance was associated with financial or reputational gain or penalties. However, when legitimacy or economic gains are perceived to be low, organizations are more likely to resist demands, especially when working under pressure and responding to multiple, conflicting constituencies. 24 Dixon-Woods et al 6

Given institutional concerns about discrepancies between the benefits and costs of audit participation, such work is also likely to involve reducing the amount of resource hospitals need to devote to NCA data collection and validation, which is significant, often involving clinicians and support staff gathering data laboriously from hardcopy case notes. This may require further negotiation between hospitals, HQIP and NCA suppliers about the nature and volume of data the former are expected to provide; making more use of routinely collected data to populate NCA fields; and more use of interactive technologies to explore and report data.

Our own study focuses on the latter area. We are developing a web-based dashboard-called "QualDash"-which can be used by hospitals to explore their own NCA data in depth (currently the dashboard displays only MINAP or PICANet data, but focus groups will be held to explore its suitability for a range of other NCAs). QualDash users can interrogate metrics from the audits through interactive "qualcards," to answer key questions about service performance. They can also examine related data in a number of sub-views, contextualized through a "history sub-view," which summarizes data over a 3-year period. Both raw data and the qualcard charts can be downloaded and used, for example, to present recent data at meetings or in business cases.
Figure 1 shows PICANet qualcards from the prototype dashboard, using simulated data.
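QualDash's internal implementation is not described in this paper. Purely to illustrate the kind of aggregation a "history sub-view" might present, the sketch below summarizes simulated admission records into quarterly counts and mortality rates over a 3-year window; the field names and metric are assumptions, not QualDash's or PICANet's actual schema.

```python
import numpy as np
import pandas as pd

# Simulated audit records: one row per admission, with an outcome flag.
rng = np.random.default_rng(0)
dates = pd.date_range("2016-01-01", "2018-12-31", freq="D")
records = pd.DataFrame({
    "admission_date": rng.choice(dates.to_numpy(), size=5000),
    "death_in_unit": rng.random(5000) < 0.04,  # ~4% simulated mortality
})

def history_summary(df, years=3):
    """Summarize admissions and deaths by quarter over the last `years` years."""
    df = df.set_index("admission_date").sort_index()
    cutoff = df.index.max() - pd.DateOffset(years=years)
    recent = df.loc[df.index > cutoff, "death_in_unit"]
    quarterly = recent.groupby(recent.index.to_period("Q")).agg(
        admissions="size", deaths="sum")
    quarterly["mortality_rate"] = quarterly["deaths"] / quarterly["admissions"]
    return quarterly

print(history_summary(records))
```

A dashboard view would typically chart such a table rather than print it, but the underlying aggregation step is the same either way.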
Finally, whilst this paper has focused on national audit systems in the UK, it has implications for users of audit and benchmarking healthcare data in other countries that aim to make fuller and more efficient use of those data for QA and QI. For example, it speaks to the need to streamline data collection and minimize duplication, identified within Sweden's quality registries, 25 and can contribute, too, to international moves towards more fully digital healthcare systems. 26

| STRENGTHS AND LIMITATIONS
Research into NCAs has tended to focus on their use by clinicians.
We consider a strength of this study to be its dual focus on both clinical and institutional involvement with NCAs, enabling us to understand more fully the nuanced social and operational factors that underpin previous findings. These understandings emerged from detailed discussions with participants in qualitative interviews, which we regard as another strength of the research, while acknowledging their emergent and situated nature. We present these findings tentatively, then, and aim to test and refine them in later phases of our study, when we introduce QualDash to the five participating hospitals and evaluate its impact through ethnographic observations.
We acknowledge too that our approach to sampling had limitations. We focused particularly on two audits, MINAP and PICANet, which afforded rich information about how NCAs with different characteristics are used in different clinical areas. However, including an audit at the forefront of live data reporting, like the National Hip Fracture Database, would have enabled us to explore more fully new approaches to promoting NCA use (although PICANet, for example, is also working on making data available in novel ways, through its recent initiative to report risk-adjusted resetting sequential probability ratio test plots to PICUs quarterly, to assist staff in monitoring standardized mortality ratios). This limitation might be addressed in future research by selecting an audit specifically on the basis of its reporting innovation.
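PICANet's actual charting method is not described here. As a general illustration only, the sketch below computes a simple risk-adjusted resetting SPRT statistic from hypothetical predicted mortality risks and outcomes, following the standard risk-adjusted SPRT formulation: each admission contributes a log-likelihood ratio comparing performance as predicted against an elevated-odds alternative, and the cumulative statistic resets after it signals so that monitoring can continue. The odds ratio, thresholds and data are all assumptions.

```python
import math

def ra_resetting_sprt(risks, outcomes, odds_ratio=2.0, upper=2.94, lower=-2.94):
    """Risk-adjusted resetting SPRT over a sequence of admissions.

    risks    : predicted probability of death for each admission (from a risk model)
    outcomes : 1 if the patient died, 0 otherwise
    Tests H0 (mortality as predicted) against H1 (odds of death multiplied by
    `odds_ratio`). Thresholds of +/-2.94 correspond roughly to log(19), i.e.
    about 5% false-alarm and miss rates. The statistic resets to 0 whenever it
    crosses a threshold, so monitoring can continue after a signal.
    """
    llr, trace, signals = 0.0, [], []
    for i, (p, y) in enumerate(zip(risks, outcomes)):
        # Probability of death under H1 for a patient with predicted risk p.
        p1 = odds_ratio * p / (1 - p + odds_ratio * p)
        llr += math.log(p1 / p) if y else math.log((1 - p1) / (1 - p))
        if llr >= upper or llr <= lower:
            signals.append((i, "worse than expected" if llr >= upper else "better than expected"))
            llr = 0.0  # reset and keep monitoring
        trace.append(llr)
    return trace, signals

# Hypothetical run of admissions with an excess of deaths at moderate predicted risk.
risks = [0.01, 0.05, 0.20, 0.10, 0.15, 0.05, 0.10, 0.20]
outcomes = [0, 1, 1, 1, 1, 0, 1, 1]
trace, signals = ra_resetting_sprt(risks, outcomes)
print(signals)  # expect a "worse than expected" signal near the end of the run
```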
In addition, our sample was small, with only 11 participants working directly in institutional areas, which enabled us to explore their responses in detail, but limits the generalisability of our findings. We used a snowball approach, recruiting participants recommended to us because of their existing involvement with NCAs, which could have over-emphasized the views of people who were disproportionately engaged with the audits, compared to other staff, or, conversely, of those who were particularly disengaged with the audits and were concerned to express that disengagement. Either way, in future research, it may be advisable to interview some staff who have no particular allegiance to NCAs, to obtain more general insights into how they are viewed.

| CONCLUSION
NCAs' potential to assure and, critically, improve care quality may be realized more fully from an institutional perspective by enhancing hospital board and quality committee members' perceptions of the audits' legitimacy. This is likely to require a rebalancing of the benefits and costs to healthcare providers of participation in NCAs, involving further negotiation about the nature and volume of data hospitals are expected to collect and the timeliness and format of public reports provided by suppliers, as well as wider use in hospitals of routine clinical data and less manual data entry. In addition, further use of interactive digital technologies, like the quality dashboard we are developing, should help institutional and clinical staff to explore and report the data in ways that are meaningful to them.