Mobilising modern facts: health technology assessment and the politics of evidence

Authors

Carl May

Address for correspondence: Carl May, Centre for Health Services Research, University of Newcastle upon Tyne, 21 Claremont Place, Newcastle upon Tyne, NE2 4AA e-mail: c.r.may@ncl.ac.uk

Abstract

Conventional models of ‘evidence’ for clinical practice focus on the role of randomised controlled clinical trials and systematic reviews as technologies that promote a specific model of rigour and analytic accountability. The assumption that runs through the disciplinary field of health technology assessment (HTA), for example, is that the quantification of evidence about cost and clinical effectiveness is central to rational policy-making and healthcare provision. But what are the conditions in which such knowledge is mediated into decision-making contexts, and how is it understood and used when it gets there? This paper addresses these questions by examining a series of meetings and seminars attended by senior clinical researchers, social care and health service managers in the UK between 1998 and 2004, and sessions of the House of Commons Health Committee held in 2001 and 2005. These provide contexts in which questions about the value and utility of evidence produced within the frame of HTA were explored in relation to parallel questions about the design, evaluation and implementation of telemedicine and telecare systems. The paper points to the ways that evidence generated in the normative frame of HTA was increasingly seen as one-dimensional and medicalised knowledge that failed to respond to the contingencies of everyday practice in health and social care settings.

Introduction

Struggles about the facts – what they are, who they are made and recognised by, and how they are played out in different kinds of political arena – are ubiquitous in the conditions of late modernity (Beck et al. 1994). In the apparently ‘post-ideological’ politics of the United Kingdom the central focus of political discourse is, increasingly, the question of how the management of the public sector and the delivery of public services can be most effectively accomplished, and the notion that ‘what matters is what works’ has become a centrally important political claim (Nye 1997). What stems from this is (i) that the production of particular kinds of knowledge has become crucial to the policy process, and to the organisation and regulation of public sector agencies; and (ii) that the production of this ‘socially robust’ knowledge has been delegated to university-based researchers rather than retained within the apparatus of the state itself. This is not confined to the UK, but is globalised, as programmes of knowledge production and their associated techniques and mechanisms of synthesis and dissemination become coupled to notions of evidence-based policy and practice.

Evidence-based policy and practice have thus taken on a critical political role across a range of contexts, including health and social care (Poland et al. 2005), criminal justice (Naughton 2005) and education (Gorard et al. 2004), amongst others. It is no accident that these are all areas where, across the advanced economies, the professions have often vocally contested political interventions intended to regulate and restructure their work. In each of these domains, the production of different kinds of evidence forms the basis for projecting idealised models of professional action, conceptualising effective practice and implementing professional behaviour change. Against the background of the emergence of large-scale institutional mechanisms for evidence production and synthesis – for example, the Campbell Collaboration in social care, which extends the medical model of research synthesis developed through the Cochrane Collaboration in healthcare1 into the fields of social and behavioural research – new modes of reciprocal surveillance and governance have also emerged. In this context, the University sector has taken on the mantle of both producing evidential facts and critically adjudicating on their utility as the basis of policy and practice, while other agencies – relatively autonomous of the state (for example the Research Councils and specific NHS agencies), but equally tied to it – have taken on that of both demanding and regulating their production.

The domain in which the production, synthesis and mediation of evidence is most highly technically developed, and in which agencies and practices to promote it have thus developed in their most concrete form, is healthcare. Here, as Poland et al. have argued:

Professional work is constructed as requiring the identification of best practice through careful and rigorous evaluation research, and applying these as faithfully as possible (2005: 18).

There is now a very significant body of critical literature which focuses on the problem of ‘evidence-based’ practice from the perspectives of practitioners (McDonald and Harrison 2004, Harrison and Dowswell 2002). After all, it is the professions – especially medicine – who have been most vocal in promoting and resisting evidence-based practice (Mykhalovskiy and Weir 2004). Much less interest has been directed towards the experiences of those who are in other ways involved in the production and mediation of this evidence. In particular, both the points of interaction between the University sector and the State and its relatively autonomous agencies, and the locus of these interactions as contributors to the policy process, have not figured prominently in recent analyses, in contrast to other areas of R&D policy analysis (Gibbons 2003).

This paper draws on participant-observation work to examine how researchers and policy actors engage with each other over problems of ‘evidence’ and practice in debates about health technologies and health technology assessment (HTA)2. In doing so, it employs questions about the effectiveness and utility of telehealthcare systems – technologies that mediate in different ways between health (and other) professionals and patients – as a vehicle to consider ‘evidence’ as a general problem of policy as well as practice. Here, debates about telehealthcare systems and the means by which they might be employed in practice are of interest not because of their specific qualities, but because they form a useful vehicle for understanding some of the micro-politics of engagement between different practices of rationalisation around healthcare ‘modernisation’ and ‘evidence’. It is these practices of rationalisation, and their relationship to policy formation (rather than to critiques of practice) upon which the paper focuses.

The field of health technology assessment

Recent accounts of the history of HTA reflect in detail on two kinds of problem. The first is the search for rational mechanisms for evaluating health technologies and making claims about their clinical and cost effectiveness. This is primarily a methodological problem, in which the central question has become that of how to play out the randomised controlled clinical trial in the most operationally effective way, and how to synthesise the results of these trials through systematic review and meta-analysis – in specific political contexts (Banta 2003). The second is finding a set of institutional mechanisms that can be applied to the problem of mediating between evaluation and its practice, either through the production of conceptual structures for organising clinical practice at a micro-level (e.g. guidelines and decision-making rules), or through spending and coverage decisions politically operationalised at a macro-level by different kinds of healthcare agencies (Lehoux and Blume 2000). Underpinning these two problems is the way in which the notion of technology itself is used and has been progressively expanded and elaborated, moving from a focus on apparatus (the ‘effectiveness’ of devices and techniques used within healthcare services) to a more holistic apprehension of systems of practice that underpin the delivery of healthcare and the forms that this takes (Johnstone 2005). This very broad definition of health technology means that the business of HTA involves the evaluation – at least in principle – of components of the whole professional and organisational context in which healthcare is delivered.

A key focus of HTA, as Banta (2003) has argued, is priority setting, linked either directly or indirectly to questions of cost-containment. In this context, the formal research designs of HTA and the practices by which their products are mediated by a variety of agencies into different policy contexts can be seen as mechanisms of regulatory action, in which clinical provision, coverage and spending decisions are linked by rational and generalisable evaluations of the relationship between clinical and cost effectiveness (Stone et al. 2002). The struggle for generalisability embedded in the use of the formal quantitative methods of the randomised controlled trial and systematic review has the political effect, however, of minimising attention to the specific features of the healthcare landscapes in question. The rhetorical conventions of HTA (like those of other fields in which trials are important) rest on making interventions transportable by making them seem acontextual and asocial. The knowledge produced is thus both highly transportable and methodologically generalisable because it is founded on a contextually minimalist conception of what is at stake. This regulatory mode of HTA has attracted sociological attention because of the ways that its research questions and methods seem to effect the exclusion of consideration of social and ethical aspects of healthcare practice, but also because it pervades thinking about organisational change, as well as the evaluation of specific techniques (Faulkner 1997). Critical accounts of outcomes studies in medicine in general (Tanenbaum 1994), as well as HTA as a methodological field, have therefore often focused their attention on what is elided in this minimalist perspective. They argue that the contexts and meanings through which a system of practice is worked out, and their social and ethical implications, are often hidden from view as a result (Lehoux and Blume 2000).

Although there is a steadily growing body of HTA research that focuses methodologically on qualitative and processual aspects of health technologies, and that seeks to apply qualitative research techniques to understanding these (Murphy et al. 1998), the application of these methods remains secondary to the business of producing the statistically dispassionate knowledge about effectiveness that makes HTA a field of research activity. However, a focus on the relationship between formal quantitative methods and knowledge about clinical and cost effectiveness has informed a critique of HTA that often assumes the same kind of direct correspondence between research inputs and policy that is assumed by HTA’s proponents themselves, and that places an over-emphasis on cost-containment as its objective. What actually comes of the enterprise is inevitably more complex, and it is to this complexity in practice that this paper subsequently turns.

Methods and contexts

The analysis presented here draws on observations made between 1998 and 2005 as part of a series of ethnographic studies that examined (i) factors that promoted and inhibited the effective evaluation of telehealthcare3 systems (May et al. 2003); (ii) risk, governance and innovation in the development of telehealthcare in the UK (May et al. 2005); and (iii) the micro-politics of the social shaping of health technology assessment trials in the UK (a study currently in progress). Full descriptions of the sampling frames and methods employed in these studies have been published elsewhere.

While the ethnographic studies referred to above generated a very large body of data, this paper reflects on a series of public and private meetings held between 1998 and 2004, and two sessions of the UK House of Commons Health Committee in 2001 and 2005, which were concerned with problems of relating evidence about telehealthcare to policy about its implementation. I was present at these meetings either as an observer or a participant. All took place in the United Kingdom. These include:

  • 1. Meetings and seminars (data episodes K1-8), at which evidence about telemedicine was discussed. Participants included:
    • Senior health service managers, including staff at national, regional and NHS Trust level;
    • Social care managers drawn mainly from county and metropolitan social work departments and from some voluntary agencies;
    • Policy-makers from the Scottish Office, the Welsh Assembly and – in England – the Department of Health, Department of Trade and Industry, Department for Work and Pensions and other national departments;
    • University-based clinical and non-clinical researchers; and
    • Representatives of for-profit service providers and manufacturers.

These meetings and seminars took place between March 1999 and November 2004. Data consist of detailed contemporaneous notes of meetings and seminars, minutes of meetings and some associated correspondence.

  • 2. An informal meeting of the House of Commons Select Committee on Health in April 2001 (HC1). Data consist of contemporaneous notes.
  • 3. A formal hearing of the House of Commons Select Committee on Health in March 2005 (HC2). Data consist of contemporaneous notes, and the Report of the Committee (House of Commons 2005a, House of Commons 2005b), Volume II of which includes corrected transcripts of the Inquiry and copies of written evidence submitted prior to the hearing.

In practice, data collected in this work were governed in two ways. In all three studies, applications to NHS research ethics committees for approval of the research necessitated the complete anonymisation of all data. This presented a number of problems in reporting, but was necessary to secure the co-operation of many participants in these studies. In this paper, it means that the meetings at which the material presented here was gathered are not identified – indeed, the identities of speakers and settings are actively concealed, and data are sometimes paraphrased where this is necessary.

The second factor that governs these data is a more complex methodological problem of the boundaries between research and policy, ethnography and auto-ethnography. At all of the meetings which I discuss in this paper, I was present as a participant because I was deemed to have particular expertise in understanding – from a sociological perspective – problems associated with the development and implementation of telehealthcare systems and services. At meetings K1, K2, K5, K6 and K8 I gave short presentations, and at HC1 and HC2 I was called to give both written and oral evidence. It is important to note, therefore, that there can be no pretence of my being present in any of these contexts as a neutral observer. Instead, I was (and continue to be) embedded in the institutional trajectories discussed in this paper, sometimes in contradictory ways. Although the phenomenological problems that stem from these different roles (and the activities that derive from them) cannot be resolved in this paper, it is important to signal their presence at the outset.

Mobilising the modern (medical) fact

Recent historical and sociological accounts – from Poovey’s dissection of the ‘modern fact’ (1998) and Porter’s account of statistics (1995), to MacKenzie’s ground-breaking work on computerised guidance systems and proofs (1993), Callon’s studies of the market (Callon et al. 2002) and Power’s notion of an ‘audit society’ (1997) – have all reflected on the ways in which contemporary modes of reflexivity have come to rely on the production and interpretation of numerical data as the mode of framing generalisable knowledge about social phenomena. On the face of it, the thrust towards formal and quantitative studies in HTA seems to be part, then, of a long-term secular trend in changing patterns of knowledge production. The randomised controlled trials, economic evaluations and synthetic reviews that underpin HTA thus have symbolic as well as concrete significance in framing a shared vocabulary for researchers, and a ‘fit’ with the demands of an emergent evidence-based State. This is because such evidence not only effects the disciplining of fields of HTA by means of cost containment, but also enables the promotion of specific systems of practice that are formally revealed to be ‘effective’, even though ‘effectiveness’ is itself never politically clear cut.

Throughout the discussions on which this paper draws, proponents of telehealthcare, NHS managers and policy makers all saw the production of robust evidence as crucial to the development of the field. At meetings K1 and K2, randomised clinical trials and their systematically reviewed results were seen to be of central importance in making the evidential case for different modes of telehealthcare, but a key question was how this evidence should be converted into the basis for practice. At the second of these meetings, clinical and policy proponents were struck by the problem of how to move on from a small, dispersed group of ‘clinical champions’ to reach the ‘early adopters’ specified by Rogers’ (1995) theory of the ‘diffusion’ of innovations. However, participants also struggled with the role of clinical trials as a mechanism for achieving this move. One participant at K1 observed that ‘Trials can go on for ever, and at the end, even if you've achieved your confidence intervals, the moment has passed,’ while at K2 a senior clinician argued:

trials are vital, they give us the evidence, but the evidence is always arguable and it doesn't influence policy makers as much as we would like. They suffer from evidence fatigue ( . . . ) It's not just that, even the name telemedicine is a turnoff. When we push for economic evaluations we need to just call it modernisation.

The very business of producing the robust knowledge needed to underpin the field of telehealthcare was fraught with difficulties. Some of these were at a macro-level, as the participant at K1 suggested, and were organised around questions of use and interpretation. But these difficulties were rooted in the micro-level, everyday technical problems of practice in designing and operationalising complex trials in the field. For example, at K3, a clinical researcher told me that:

our study can't succeed, we’re failing to recruit because the GPs can basically see that all we’re doing is adding on work to them and not taking it away, which – you know – is the whole point of telemedicine.

An important question that stemmed from this was the transportability and generalisability of the knowledge that was derived from trials. This is the very source of the trial's strength, but there was often a recognised disparity between trial design and everyday practice. Two statements made three years apart by a participant at K3 illustrate this disparity and also its effects. At K3 this respondent noted, ‘I wouldn't place too much reliance on [name of trial] it's a fantastic trial design but it's nothing like what would really happen in normal general practice’. But in 2004, at the point at which the trial results had been published, this clinician took the view that, ‘you know they've published that trial in [name of journal], but when I talk to people at the NHS it means nothing to them because it was so divorced from reality in its conception. It may have set back telemedicine not advanced it’. Robustness of design, then, and robustness of meaning are two quite distinct qualities in the enterprise of HTA.

The question of robustness can be seen in another way, which reflects on the interests of the different constituencies involved. For the rather fragmented trials community in telehealthcare, robustness of design is formed around a set of aesthetic as well as practical qualities. Trial designs need to be elegant as well as focused on a clinically relevant question, and judgements about this stem from expert reviewers within clinical research communities. But questions of elegance and utility divide trialists from the user communities to whom their work speaks. For example, a very senior NHS manager at K4 pointed to the ways that:

all these trials are serving the interests of a group of researchers, not the service as a whole, they take too long and they don't reflect what actually happens in the NHS.

At a later meeting (K6), this participant called on clinically-oriented researchers to produce:

more robust evidence that will enable us to make rational choices about how we allocate resources to meet our policy objectives ( . . . ) Your job is to give us evidence that we can use and to work within the parameters of our policy framework.

Here, the role of the research community was conceived as a subordinate and technical adjunct to policy. But, while policy-makers were struggling to make sense of emergent trials, the clinical trialists who attended these meetings also struggled to integrate their research projects into the contingencies of real service provision. They could never fully do so, however, within the constraints of designing and delivering a convincing randomised trial. This mattered very much to some participants. In a germinal paper, Pierre Bourdieu (1975) observed that the ‘field’ of a science is, in fact, a transaction space in which the status of actors and the knowledge that they work with effect symbolic capital. This enables a call upon real material resources and the prestige that attends them, in the form of large research grants and, subsequently, publications in important clinical academic journals. Integrating trials into the messy world of real healthcare practice put this at risk, threatening the purity of design and the symbolic capital that stemmed from it, because it introduced new levels of methodological complexity in combining clinical experiments with normal services, and thus in interpreting causal mechanisms and their outcomes. Indeed, across all kinds of trials, much greater effort was invested, in trial design, in bracketing off elements of the social world that might confound them.

Across a series of meetings, problems related to the symbolic capital of trials came into view. Trialists sought and sometimes gained symbolic capital from developing rigorous trial designs that satisfied peer review and obtained funding, and they extended this symbolic capital by publishing results of their work in high status clinical journals. However, as these clinical researchers continued to locate their work within the normative frame of HTA, they also faced a problem that was less immediately evident to them. This was that while their work was being funded through key peer-reviewed research programmes and was being presented at conferences, some key members of the policy community itself were becoming less convinced that ‘evidence’ produced in the normative frame of HTA adequately reflected the circumstances in which these systems of practice might themselves be operationalised. Nor were these senior managers and policy-makers certain that this evidence was, in itself, convincing enough to persuade those local decision-makers who had power over spending decisions to direct funds towards telehealthcare systems.

Evidence across boundaries

In the medical model of social and organisational research that stems from HTA, the randomised controlled trial has both a concrete (it describes the outcomes of an intervention and sometimes explains them) and a symbolic (it makes possible a common vocabulary of meaning as well as methodology) significance. Because trials elide contexts, and focus on the individualisation of homogeneous events, their results appear highly transportable. Within the frame of this model of research, clinical trialists of telehealthcare depicted themselves as doing work that methodologically linked clinical interests with policy values. A participant at K5 put it thus: ‘we need to show that these systems are safe to use, and that they have real clinical value before we start anything else’. This view was common to the clinical research perspectives that were elaborated at K1-8. Amongst policy champions of telehealthcare systems, and the manufacturing sector, such concerns were often seen as misplaced. They saw clinical trials as an ineffective way to identify and promote the benefits of telemedicine, precisely because the contextual and processual insights about workability that were necessary for the realisation of these systems were in practice lost from sight, in favour of statistical generalisations and economic models.

At the same time, the telehealthcare trials community was also rapidly being overtaken by new vehicles for electronically-mediated relationships that crossed the sectoral boundaries between health and social care. The first of these was the diversification of systems, and the technological shift from telehealthcare (an electronically-mediated relationship between a health professional and a patient, devoted to managing a specific health problem) to telecare (a more generic set of services using remote surveillance technologies, not necessarily provided by health professionals). These shifts began in the late 1990s and have rapidly accelerated, while the demands of practice around modernisation in health and social care delivery have also become more complex as policy itself has sought ‘joined up’ working between different sectors.

The completeness of this shift may be seen in the contrast between the presentations given by one of the main for-profit service providers, Tunstall Group Ltd, to the Commons Select Committee on Health in April 2001 and in March 2005. On the first occasion, the Tunstall presentation was marked by references to the need to develop public-private partnerships around the management of chronic diseases in the community, and by an impassioned argument that further clinical trials were unwarranted: not only could the technology be successfully implemented and a service delivered, but the research orientation of existing developments was impeding development in the field. By March 2005, Tunstall had moved away from medical service provision and repositioned itself as a major telecare provider, operating across sectoral boundaries and contextualising its work in relation to provision for frail older people as a mass market for service provision. The managing director of Tunstall made this clear to the Committee's Inquiry into New Medical Technologies and the NHS:

Now we have vital signs monitors where they can be assessed and a nurse in Glasgow looks at it and says, in 99% of the cases, ‘Fine, same time next month’, and in 1% of the cases, ‘You need to come over here’. But actually that is not the mass market, the mass market is Mrs Smith, aged 82, who is susceptible to falls of the kind that result in broken hips. And for her it is much more arduous to get from the south of Leeds to the north of Leeds by public transport for a monthly assessment. Simple, straightforward, commonsense things like that. So I think there is a simple market and there is a complicated market; the complicated market I am happy to leave to my clinical colleagues who know a lot more about it (House of Commons, 2005b).

Like some other companies in this field, Tunstall had located itself outside the gaze of clinical trials and the problem of clinical evidence at a system level. Indeed, it was the Chair of the Commons Health Committee – David Hinchcliffe MP – who intervened to defend it against the problem of lack of knowledge about the cost effectiveness of telecare systems. Responding to a challenge (by this writer) on the poor methodological quality of much economic evaluation of telehealthcare systems, he said:

Can I pick you up on that point? What you are saying is that the kind of stuff that the telecare aspects of this inquiry are more beneficial to financially is social services. My recollection of the last time I visited Tunstall Telecare, Mr Rice's company, was probably some time last year – and I cannot remember whether you were actually there, Mr Rice – and you gave somebody from the Treasury and myself a presentation on the cost implications of elderly people falling in their own homes (part omitted). The figures that his company gave showed a direct impact upon NHS costs (House of Commons 2005a).

In this changing practice environment, the nature of evidence itself shifted, and this was the second vehicle for change that champions of clinical trials faced. At the final expert seminar to be discussed in this paper, K8, which took place in the autumn of 2004, participants wrestled with the tensions between problems of evidence and the push of policy. For the policy staff present, representing both NHS and social care agencies, the question of generalisable evidence made to the standards of the clinical trial was no longer an issue. For one senior health service manager, the business of making and using evidence was one where:

Really, we need to identify who needs evidence, and what sort of evidence they need. It's important because telecare is a link between different policy areas and evidence is the glue that can hold them together. We need to draw on a range of evidence – and there's a lot of frustration about the definition of proper evidence. We need to work on what you might call qualitative evidence because that's much more suited to this task.

Referring to another policy meeting he had recently attended, he added:

No debate about evidence really took place, in fact there's a migration in thought from meeting an assessed need to mainstreaming the technology across sectors. Evidence about the value of that is waited for but we predict a big improvement in delayed discharge levels.

The defence of the randomised controlled trial, from a proponent of evidence-based medicine, now also shifted towards a more inclusive approach to other kinds of evidence:

Starting with evidence in this context seems to me to be starting with solutions, when we need to start with people's problems. So, qualitative evidence should be regarded as a primary kind of evidence for specific questions. . . . The whole point of the RCT is to reliably assign cause and effect, and we need to segregate different types of technology to produce different models of evidence.

The closer the focus of these meetings came to questions about service provision, the less likely it seemed that generalisable evidence about clinical and cost effectiveness would be configured as part of the debate (it barely figured at all at HC2, for example). For NHS and social care managers located outside the circuits of academic medicine, the ‘medicalised’ model of evidence inherent in HTA represented the material as well as the methodological interests of a very specific academic group, and their perceived dominance. As one very senior social care manager observed, these clinical researchers seemed to lack a shared vocabulary with ‘people on the ground’ about the realities of practice, and found it equally difficult to grasp and address the questions raised by the wider social care community. For that manager, the quantitative evidence about clinical and cost effectiveness drawn from trials was ‘one-dimensional’. It had little to say about the contingencies of everyday inter-professional work, and – because of the rigorous inclusion and exclusion criteria that are applied in the delivery of trials – it was also seen to have little to say about the complexities of service users’ problems, whether these were ‘co-morbidities’ as seen by NHS managers, or ‘complex social problems’ as seen by social care managers. Indeed, she argued that the problem was not one of evidence-based practice but rather that ‘we need practice-based evidence that makes sense to people on the ground’.

From meeting K6 (in the autumn of 2003), the complexities of inter-professional working and service delivery began to be framed in relation to a hierarchy of evidence in which ‘qualitative’ evidence was seen as being of most value in persuading senior managers to make spending decisions, because it seemed most closely connected to their experience. It should be noted that the term ‘qualitative’ was used to signify experiential and developmental research, rather than the qualitative research techniques of the social sciences. Evidence about cost effectiveness was seen as of fundamental importance, but this was conceived in quite different terms from the economic evaluations favoured by proponents of an HTA model of clinical evaluation. It was almost always framed in terms of an accounting model that emphasised savings, often through budgetary comparisons that related to local spending decisions and outputs, rather than economic modelling that focused on system-level costs. Crucially, the HTA model was also seen to elide the interests of the manufacturing and service supply sector and the professional skills of information technologists, who argued that their perspectives were always absent from medically dominated accounts.

In the context of the array of management interests worked out in these meetings, the symbolic character of the clinical trial, and the symbolic capital of statistical evidence, mattered less and less. Instead, ‘experience’ and ‘local evaluations’ dominated discussion, and these came to be characterised as ‘qualitative’ even when they focused almost entirely on comparisons of budgetary outputs. Participants saw the HTA model of research as having little relevance to the complexes of problems that they faced, and although at K7 and K8 short synopses of review articles were circulated to participants beforehand, they were never discussed. Throughout these debates, the divisions between telemedicine, telehealthcare and different modes of telecare became increasingly apparent, and it is to these that the paper turns next.

Technology, policy and the problem of knowledge

At the beginning of this paper it was noted that HTA approaches problems of technological innovation and development in healthcare practice by deploying formal methods that are assumed to offer a degree of methodologically secure and transportable knowledge about clinical and cost effectiveness. In telehealthcare there are now a number of trials and many systematic reviews which point both to the potential of these systems of practice and to the problems that are associated with them. In the field of generic telecare this clinical level of evidence remains almost entirely absent, but its proponents seem to be making inroads into public-sector provision despite the complex set of institutional and inter-agency boundaries that need to be negotiated. Why is this?

One answer might be that the manufacturers and service suppliers have strategically targeted this market precisely because it is one where the kinds of evidence that matter are being defined locally and qualitatively, in ways that better suit their business model. In these circumstances, the networks of academic researchers and the methodological constituents (and costs) of the randomised controlled trial as a gatekeeper for clinical practice are absent. By-passing apparently ‘élite’ research clinicians, while at the same time targeting a user group – ‘frail’ older people – that makes significant demands on healthcare resources, and focusing attention on developing collaborative demonstration projects with Primary Care Trusts and social work departments, has led to a focus on practice-based evidence rather than evidence-based practice. This way of thinking about evidence celebrates local modification and interpretive flexibility, and focuses on service integration. In contrast, clinical trials are founded on denying interpretive flexibility to those working within them, because they rely on the imposition of a rigorous protocol on, and thus the standardisation of, everyday clinical practice. Demonstration projects, however, emphasise flexible responses to different kinds of problem and to rapidly shifting goals. So while many of the companies and agencies submitting written evidence to HC2 struggled with problems associated with the highly structured frameworks of knowledge development and application that attend the delivery of direct clinical services, companies like Tunstall Ltd, which offer home monitoring for this group, contextualised their products within the goals of the NHS National Service Framework for older people, and appealed to organisational interests that enabled cross-sectoral collaboration between health and social care. Proponents of this service were thus able to make the politically vital claim of experience of both workability and cost-efficiency.

Demonstration projects built a set of knowledge claims that were hard to impeach, since they explicitly denied the kinds of statistical generalisability that underpinned both the HTA model and its associated evidence-based practice. The strength of these claims in written and oral evidence to the 2005 House of Commons Inquiry was reflected in two of its key recommendations (House of Commons 2005a):

  • 22. Furthermore, evaluation needs to take account of the qualitative benefits for users and carers over time. There is a need to develop new ways of evaluating the qualitative benefits of new medical technologies in the long-term budgetary cycles. Methodologies are needed that can determine the social and economic benefits of new medical devices that fall outside the direct costs to the NHS.
  • 23. We recommend that the Department should seek to introduce a national system for reviewing and tracking the implementation of new devices over a number of years to ensure patient safety and efficacy issues are closely monitored. Currently there is no clear system for determining safety and efficacy beyond the clinical trials and evidence-based model of the Health Technology Assessment (HTA) programme, while there is also a need for developing more sophisticated measures of the utility of systems for patients that reflect more relevant criteria. Much greater patient participation in assessing the utility of telehealthcare is required (2005a: 26).

The problem of methodology formed a subtext for much of the evidence presented to the Inquiry. Where clinical trials did emerge as a central feature of debate, the key question was cost-effectiveness rather than the potential for integration into services. Again, the report focused on methodological problems:

Several witnesses suggested that there is a need for the development of methodologies that can provide for much longer-term review of the net benefits of new systems or devices. Much of the evaluation depends on clinical trials to provide evidence upon which to make a cost-benefit analysis. These can take considerable time, quite legitimately so, to determine this. Firms complain about the delays this can cause in relation to the introduction of their products, a point also made by some patient advocacy groups (2005a: 12).

So, while the Committee did not explicitly reject the HTA model of defining and developing knowledge around telehealthcare, it sought the application of methods for defining ‘qualitative’ benefits according to ‘more relevant criteria’, and the development of new evaluation methodologies. It also sought much greater patient involvement in the production of knowledge about these systems of practice.

Interpretive flexibility denied and celebrated

Throughout this paper, it has been noted that a key element of the symbolic capital that derives from the clinical trial is a shared vocabulary in which commonly recognised criteria for clinical and cost effectiveness are embedded, and that material capital is also derived from the process of obtaining and delivering trials (large research grants from esteemed sources, and research publications in highly esteemed clinical journals). In a medicalised model of research practice, the formal and comparable methods, and generalisable results, of trials are given a value – ‘best evidence’ – that sometimes assumes a direct connection to policy. Certainly, many of the participants at the meetings discussed in this paper assumed that their trials would influence policy because they would prove the utility of these new systems. In contrast, both practice-based clinical ‘champions’ and equipment manufacturers and private sector service suppliers argued that these approaches retarded developments in their field. They located the blame for this in conflicting policy imperatives around ‘modernisation’ and ‘evidence-based practice’ within the Department of Health and NHS Executive (House of Commons 2005b). As the field developed, a process of technological differentiation took place: telehealthcare systems (as specific medical devices) remained locked in a loop of clinical evidence production and relatively low volume services, while generic telecare systems (as devices for safety monitoring) emerged as manufacturers and private sector suppliers shifted their attention to a more flexible market place – the field of interaction between frail older people, primary health care organisations and social work departments. This meant that flexibility in the telecare market place was organised around the presence of a large population of potential users (older people with chronic health problems), a practical problem in healthcare delivery (keeping them out of hospital or permitting them to return home earlier), and the rapid development of technologies for domiciliary surveillance and their associated call-centre services. But it was also constituted around the flexible conception of the evaluation methods that were deemed appropriate by different groups of health and welfare professionals, and these tended to be framed through localised collaborations and public-private partnerships. They excluded the research ‘élite’ and their long and complicated randomised controlled trials, and focused on processes of care delivery rather than outcomes studies. In this context there was a radical shift in the forms of knowledge production that were in play:

  • (i) Complex systems for medical diagnosis, monitoring and management were displaced by simple systems aimed at producing routine surveillance data and ensuring safety at home.
  • (ii) Homogeneous trial samples recruited through rigorous application of inclusion and exclusion criteria were displaced by normal populations with complex and heterogeneous problems.
  • (iii) Mechanisms for knowledge production that employed standardised procedures designed into the provision of care were displaced by the flexible reworking of everyday health and social care practice.
  • (iv) Generalisable statistical data about proof of outcomes were displaced by a mix of site specific quantitative and qualitative data about processes.
  • (v) Generalisable economic modelling and cost-effectiveness research were displaced by local cost accounting estimates of spending and saving across specific budgets.
  • (vi) Academic or research ‘élites’ were displaced by local managers, professionals and service suppliers in cross-sectoral collaborations.

None of these shifts involved a diminution of methodological complexity. Nor, necessarily, did they involve a diminution of the scale of evaluation. But, as is clear, they involved a substantial political modification of the institutional relationships and practices around which knowledge about the effective delivery of services was framed, and most of all they reflected problems around the transportability of knowledge from one field of practice to another. In effect, this meant the denial of interpretive flexibility embedded in the HTA approach to trials and systematic reviews of clinical and cost effectiveness, and the celebration of interpretive flexibility to be found in accounts of local service evaluation. The shift here is from highly structured (medicalised) experiment over long temporal horizons to more rapidly accumulated (service) experience.

How can we understand these shifts and their implications? Brown and Webster (2004) see HTA as one of a number of elements of the system of reflexive innovation that runs through late-modern societies. It seeks to provide a rational basis for decision-making and priority setting around provision, and frames this through a set of knowledge production practices that link different institutions and agencies by means of a common methodological commitment. But this common methodological commitment (which is itself framed as being both rational and stable) is normatively applied to technologies (whether highly specific ‘black boxes’, or wider systems of practice) that very rarely have such stability (Mort et al. 2003). Moreover, service innovations themselves are often poorly conceptualised and understood (Greenhalgh et al. 2004). So one answer to this question is that there are competing modes of reflexive innovation around which there are constant political struggles about interpretation and action. The constellation of technologies and systems of practice that makes up the field of telehealthcare is itself highly unstable, and one of the key problems for its policy and practice ‘champions’ – and, for that matter, for opponents of these new systems of practice – has been to find a way to hold them in place long enough to deal with the problem of methodological complexity in both service delivery and evaluation.

Conclusion

In this paper, aspects of the development and evaluation of telehealthcare have been used as a vehicle to explore key problems in the organisation and reception of knowledge produced within an HTA model of formal, rigorous and quantitative research. In practical terms, the division between research élites and local managers is expressed by the latter seeking more flexible modes of knowledge production. This means that the primary production of such knowledge, and the evidence-based practice that is assumed to be its secondary product, may actually have much less impact in debates about priorities and decisions than is sometimes supposed. In the world of service provision, such highly medicalised models of research practice have been by-passed or displaced by different kinds of institutional actors as they seek rapidly to implement new models of service delivery. The de-coupling of private sector manufacturers and service suppliers from academic R&D, and their re-coupling with health and social care providers on the ground, has been crucial to this. These shifts have been characterised by competing but uneven modes of reflexive innovation, and by the localisation of knowledge production and dissemination – a determination to find practice-based evidence rather than evidence-based practice.

What is the wider significance of these problems? The shift to evidence-based policy and practice is important to the claim of ‘post-ideological’ politics of public sector management and service delivery in the UK. Of course, these politics are anything but post-ideological. The political problem is located firmly in the definition and production of the facts, who makes them and what their consequences are claimed to be. A key element of HTA is therefore the apparently scientifically-neutral and rational construction of service priorities, spending decisions and coverage accomplished through formal methods – the trial, the systematic review and the guideline. These mechanisms give qualitative political decisions the flavour of science, and relocate political struggles about service provision in the domain of methodological argument about evaluation design.

More broadly, this account raises questions about the character of research communities and their relations with the State. Earlier in this paper, I posited the emergence of an evidence-based State. There is no doubt that in the UK successive recent Conservative and Labour administrations have focused on the notion of evidence-based and, latterly, evidence-informed policy. The analysis presented here gives us a window on some of the problems inherent in this. Crucially, we should note the variability of evidential requirements across two contending policy streams: first, one that presses for certainty about the value of particular modes of service delivery and professional practice in the public sector (and which prizes formal summative studies); and second, one that seeks to engender radical changes as part of a programme of ‘modernisation’ (and which prizes local, flexible and developmental studies). In this paper, we have seen how some of the policy makers focusing on the political priorities around modernisation seemed to come to regard clinical trialists as an élite group, who sought a gate-keeping role around the definition and production of ‘robust’ knowledge. In this respect, it is important to understand that what counts as evidence, and its modes of production, are always socially constructed in ways that are deeply embedded not only in general political contexts, but also within the strategic imperatives that drive the interactions of that politics with the local organisation of practice itself.

Acknowledgements

I am grateful to the Economic and Social Research Council (ESRC) for its support of my work through a personal research fellowship (Grant RES 000270084). Other research reported in this paper was funded by the NHSE NW R&D Directorate (Grant RDO 12/20), the UK Department of Health (Grant ICT 2/032) and the ESRC (Grant L218 25 2067). This paper presents the views of the author, and not of the UK Departments of Health. I gratefully acknowledge the material and intellectual contribution of my co-investigators in those studies – Tracy Finch, Linda Gask, Frances Mair and Maggie Mort – to the work that has led to this paper. In addition, Stuart Blume, Catherine Exley, Nick Fox, John Gabbay, Chris May, Anne Macfarlane, Patricia McKeever, Tim Rapley and Andrew Webster all made helpful comments on an earlier version, which was presented at the 2nd Health Technology Assessment International Conference, Rome, June 2005.

Notes

  • 1. See http://www.campbellcollaboration.org/index.html and http://www.cochrane.org/index0.htm
  • 2. It is important to emphasise that in this paper I am referring to HTA as a general field of research and development, and not to the NHS R&D HTA Programme or the National Institute for Clinical Excellence as specific institutional arrangements for funding, supporting and disseminating such work. The HTA approach to evidence production about telemedicine, and especially the focus on trials and systematic reviews, is one of the key legitimating techniques that its champions have used to project these new technologies into the healthcare market place. Two earlier papers point to its importance in this respect. The first points to the ways that evidence-based practice has been used to secure points of resistance to new technologies (May et al. 2001), and the second reviews and discusses different evaluation models, contrasting the application of HTA models of evaluation across sectors of technological and organisational development in relation to telehealthcare (Williams et al. 2003).
  • 3. These technologies involve a variety of different systems, and a typology is developed in May et al. (2005). In this paper, I distinguish between telemedicine/telehealthcare systems that permit electronically-mediated interactions between health professionals and patients for the purposes of diagnosis, review and management of specific clinical conditions; and generic telecare systems that rely on sensors, and which include alarms, falls detectors and other devices built into domestic environments that permit surveillance – but not necessarily interpersonal relations – between frail older people and telecare call-centre operatives.
