Bridging the Relevance Gap in Political Science

Abstract

This article argues that scholars overestimate the ability of a methodologically ‘pluralistic’ political science to gain impact in policymaking, which in Britain is increasingly ‘positivist’, privileging quantitative evidence. This ‘relevance gap’ between pluralistic political science and positivistic policymaking means that political scientists are disadvantaged in achieving ‘impact’ compared to disciplines like behavioural economics. The article proposes two solutions to bridge this ‘gap’: improving the ‘accessibility’ of research and stimulating ‘interaction’ between researchers and policymakers through methodological workshops.

The third and latest edition of David Marsh and Gerry Stoker's popular textbook Theory and Methods in Political Science (2010) includes a welcome concluding chapter entitled ‘The Relevance of Political Science’ (Peters, Pierre and Stoker, 2010). In this stimulating and timely contribution, Stoker, Guy Peters and Jon Pierre (the acronym PPS is henceforth used in referring to the chapter) review three sets of literature – political participation, institutional analysis and global governance – to argue that ‘a rather dismal picture’ emerges with regard to the ‘relevance’ of political science in terms of its ability to offer workable solutions to policy problems; most studies are big on analysis but thin on action points for policymakers (PPS, p. 341). To remedy this ‘relevance gap’, PPS advocate a threefold research strategy:

  1. Recognition that ‘there is no such thing as a neutral or value-free political science’;
  2. ‘Political science should be substantially set by the concerns raised in “real” world politics’;
  3. ‘Research can address not only the problems of “real” world politics but offer solutions to some of the most pressing problems’ (PPS, pp. 327–328).

This response article agrees with PPS's advocacy of methodological pluralism, problem-based research and the generation of solutions. Moreover, it concurs with PPS's agenda for evaluating political science not merely on its methodological ‘rigour’ or theoretical sophistication, but on its ability to offer workable solutions to real-world political problems. PPS's contribution is particularly salient given the growing debate around ‘relevance’ and the challenges of justifying research grants in a world of tight funding availability (Flinders, 2013; Flyvbjerg, Landman and Schram, 2012; Lake, 2011; Stoker, 2013).

What this response argues, however, is that there is a tension between the advocacy of point 3 (the desire to offer policy-oriented solutions) and adherence to point 1 (that there is no such thing as a neutral or value-free political science). This tension arises from a mismatch between the criteria used to judge ‘good’ policy evidence within government and the criteria used by modern political science to judge ‘good’ research. Specifically, the argument will be that while political science is increasingly moving towards a pluralistic view of what makes for ‘good’ research, policymaking tends (quite understandably) towards a positivistic focus on quantitative ‘evidence-based policy’. This presents acute difficulties for ‘connecting’ evidence from political science to the policy process. I will illustrate this with reference to the current ‘positivist’ direction of policymaking and evaluation under the British coalition government. In the conclusion, I will suggest two ways to ‘bridge the relevance gap’, namely through greater efforts by researchers to increase the accessibility of their research findings, and the interchange of methodological knowledge and skills between political scientists and policymakers, perhaps through seminars or workshops. These recommendations, and some of the other observations presented in this article, are informed by a three-month internship undertaken at the Cabinet Office Efficiency and Reform Group in early 2012.

The pluralism of modern political science

This section applauds PPS's argument that political science should maintain a ‘pluralistic’ methodological approach. However, it is suggested that in assessing how to increase the relevance of the discipline we should assess not only what political scientists see as ‘good’ research, but also what policymakers see as ‘good’ evidence. In arguing for a more ‘relevant’ political science that offers concrete solutions to policy problems, PPS assert:

We are convinced that methodological pluralism will aid the engaging of political science with society. The approach of political science to the problems confronting societies will be confounded by our over-reliance on using quantitative methods or simple causal models … we therefore argue for a more pluralist approach to methodology as well as a greater emphasis on the practical issues of governing (PPS, p. 329).

This emphasis on the complementary value of both qualitative and quantitative methods is supported by a range of contemporary political science texts (Burnham et al., 2008; Hay, 2002; Marsh and Smith, 2001; Marsh and Stoker, 2002; Sil and Katzenstein, 2010). The common refrain here, building on a traditional ‘Whiggish’ suspicion of positivism in British political science (Bevir and Rhodes, 2007, p. 236) is that political science should be based around ‘a conception of politics and the political that [is] inclusive and which [does] not restrict political analysis to a narrow and exclusive focus upon the interplay of governmental variables’ (Hay, 2002, p. 256). Particularly in British politics departments there is a ‘strong preference for methodological pluralism’ (Harrison and Sáez, 2009, p. 350). This is evident in the practice of research itself, with mainstream institutionalists and liberal theoreticians joined by ‘a certain eclecticism … of conservatives, of hard-line big-science modernisers, of Marxist and other left perspectives, of comparativists, from time to time public choice-influenced approaches, and more recently post-structuralist, green and feminist perspectives’ (Dunleavy, Kelly and Moran, 2000, p. 7). Such a colourful blend of research approaches is reflected in professional training, as ‘postgraduate research-training programmes in politics in the UK tend to emphasise both quantitative and qualitative approaches’ (Harrison and Sáez, 2009, p. 347). Indeed, it is wholly uncontentious to suggest that, in Britain, ‘there is a de facto pluralist view of the nature of political science endeavour’ (Stoker and Marsh, 2010, p. 11).

PPS argue that only by maintaining this pluralistic methodological agenda, embedded in a necessarily modest epistemological outlook, can we really offer a socially relevant political science:

Claims to be able to establish causality that could in turn guide a claim to provide solutions should be treated with scepticism … attempts to identify linear cause-and-effect dynamics when examining a problem lead to the attempt to build policy on fictitious grounds as realities are always more complex than any simple model can capture … Solutions do not need to be cast in the nature of ‘iron laws’. ‘Do this and all your problems will be solved’ is not a message that we should offer our fellow citizens and nor it [sic] likely to be believed by them (PPS, p. 329).

Here, I wholly agree that attempts at mimicking the ‘hard sciences’ are likely to produce simplistic, inaccurate results and thus poorly constructed recommendations. Yet, it is pertinent here to make a distinction between what might be considered ‘relevant’ by those ‘producing’ the research (political scientists), and what is considered ‘relevant’ by those ‘consuming’ the research (policymakers). If political scientists are to make their research truly ‘relevant’ by maximising their ‘impact’ in the policy sphere then they should not only consider what might be seen within the research community as good, relevant research, but also what those in the policymaking community itself consider to be good, relevant ‘evidence’ that can inform policy decisions. On these grounds, it can be argued that PPS do not consider the potentially problematic disconnection, particularly heightened under the current coalition government, between the increasingly ‘pluralistic’ direction of political science and the increasingly ‘positivist’ direction of public policymaking in Britain. The next section describes the positivistic direction of public policymaking and explains why it presents a problem for a political science discipline based on methodological pluralism.

Positivism, the coalition government and the dilemmas of ‘relevance’

This section argues that public policymaking in British government is increasingly moving in a ‘positivist’ direction, which, broadly speaking, encompasses the epistemological assumption that it is possible to ‘develop laws’ that ‘hold across time and space’ to direct policy choices (Marsh and Smith, 2001, p. 529). This trend, I argue, presents a pertinent problem for a political science discipline that sees methodological pluralism as the route to establishing greater ‘relevance’, precisely because it imposes a relatively homogeneous conception of what constitutes ‘good’ research findings: namely, quantitative (numerical or statistical) measurements of efficiency savings or productivity gains.

Before detailing the prevalence of positivism in the agenda of the current coalition government it is useful to note that policymaking of a broadly ‘positivist’ epistemological bent has a long history in British government (see Clarke, 2007). As Robert Geyer (2012, p. 20) notes, ‘for much of the twentieth century and particularly in the aftermath of the Second World War, UK public policy has been based on a strong centralist, rationalist and managerialist framework’. The highly professionalised British civil service has tended to see itself as implementing government policy in a neutral, depoliticised and technocratic manner (Clark, 1996; Richards and Smith, 2000). This rationalistic approach to policymaking has appealed to governing parties seeking to improve policy delivery – from Clement Attlee's ‘scientific socialism’ (Owen, 1990) and Harold Wilson's ‘technological revolution’ (Favretto, 2000), to the Blair government's commitment to ‘evidence-based policymaking’ (6 and Peck, 2004). Even New Right governments suspicious of traditional civil service bureaucracy have not challenged the ‘clear epistemology underpinned by … integrity, objectivity and neutrality’ (Richards and Smith, 2000, p. 56). The British context of policymaking, then, has not historically been conducive to a pluralistic epistemological outlook, since it has tended to privilege (at least since the late 1940s) positivistic notions of causality, reductionism, predictability and determinism (Geyer and Rihani, 2010, p. 22).

Since the Conservative–Liberal Democrat coalition government came to power in 2010, this technocratic approach to policy has accelerated (O'Brien, 2013). Faced with the highest budget deficit in the G7 and G20, stalling economic growth and international pressure from credit ratings agencies and trade partners, the coalition government has embarked upon an unprecedented programme of cuts to public sector programmes (HM Treasury, 2010a). Regardless of the normative desirability or alleged necessity of these cuts, it is clear that in such an austere fiscal context decisions over which projects and policies will and will not survive require an explicit, financially driven rationale. The Treasury's Green Book, which provides ‘binding’ guidance for departments on how to ‘appraise’ publicly funded projects and policies, clearly sets out this rationale, stipulating the coalition's preference for assessing projects based on an economic cost–benefit analysis:

The relevant costs and benefits to government and society of all options should be valued, and the net benefits or costs calculated. The decision maker can then compare the results between options to help select the best (HM Treasury, 2010b, p. 19).

The Green Book's cost–benefit analysis is used in practice, for example, by the Major Projects Authority (MPA), which has a prime ministerial mandate to assess high-cost or high-risk projects carried out across government. Bodies carrying out such large publicly funded projects are advised to submit the cost–benefit analysis described above to provide a ‘business case’ for the continuation of the project (HM Treasury/Cabinet Office, 2011, p. 14).
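
To make the logic of this appraisal concrete, the short sketch below (in Python, purely illustrative and not drawn from HM Treasury guidance) shows the kind of net-benefit comparison the Green Book describes: each option's yearly costs and benefits are valued, discounted and summed, and the option with the highest net present value is preferred. The option names, figures and discount rate are all hypothetical.

    def net_present_value(cash_flows, discount_rate=0.035):
        """Sum of discounted net benefits (benefits minus costs), one entry per year, year 0 first."""
        return sum(flow / (1 + discount_rate) ** year
                   for year, flow in enumerate(cash_flows))

    # Hypothetical options: net benefits in £m per year (an upfront cost, then annual returns).
    options = {
        'do_nothing': [0, 0, 0, 0],
        'option_a': [-10, 4, 4, 4],
        'option_b': [-15, 6, 6, 6],
    }

    appraisal = {name: net_present_value(flows) for name, flows in options.items()}
    preferred = max(appraisal, key=appraisal.get)
    print(appraisal, '-> preferred option:', preferred)

Real departmental appraisals are, of course, far more elaborate, but the underlying rationale (value, discount, compare) is the one the Green Book sets out.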

This quantitative focus has also influenced the government's Open Public Services (OPS) agenda, which has argued for ‘opening up’ the delivery of public services to competition by public, private and voluntary bodies, hoping to create ‘value for money’. The OPS White Paper states:

in this economic climate, when times are tight and budgets are being cut to stabilise the economy and reduce our debts, opening public services is more important than ever – if we want to deliver better services for less money, improve public service productivity and stimulate innovation to drive the wider growth of the UK economy (HM Government, 2011, p. 6).

The positivist underpinnings of this agenda are made clear in one of its central planks – payment by results (PbR). Under PbR, companies or third sector bodies that take up the provision of public services are only paid once they have achieved measurable outcomes. PbR has been introduced in a range of policy areas, most recently in the administration of offender rehabilitation schemes. Moreover, the Cabinet Office's ‘star chamber’ placed a specific emphasis on quantitative evidence: departments appearing before the chamber were required to demonstrate statistical or numerical evidence that they were ‘cutting costs’ or increasing efficiency.

Perhaps the most striking example of positivistic policymaking and evaluation, however, is the Cabinet Office's Behavioural Insights Team (BIT). Drawing inspiration from the explosion of academic and policy interest in Richard Thaler and Cass Sunstein's (2008) ‘libertarian paternalism’, the BIT was established ‘to find ways of encouraging, supporting and enabling people to make better choices for themselves’ (Cabinet Office, 2012, p. 1). Put simply, the premise is that the efficiency and effectiveness of governance can be improved by encouraging people to make ‘good’ choices (like paying their court fines) through small, relatively cheap ‘nudges’ (like personalised reminders by text) and testing these against other similar interventions (for a discussion, see John et al., 2011). The BIT's goals were hence explicitly positivistic, with one of the core aims being to provide policy solutions that would ‘achieve at least a 10-fold return on the cost of the team’ (Cabinet Office, 2011, p. 4).

By all accounts, the BIT has been tremendously successful and its influence has become widespread throughout government. Its advocacy of randomised controlled trials (RCTs) as a method for testing the success (or failure) of policy interventions has proved particularly popular. The document Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials (Haynes et al., 2012), a ‘how-to guide for policy makers on the use of RCTs to test public policy interventions’ produced by the BIT, has received 25,000 Internet page views and is ‘one of the most downloaded publications since the Cabinet Office website was launched’ (Cabinet Office, 2012, p. 2). RCTs are a good example of how a positivistic outlook on policymaking translates into a focus on quantitative methods. RCTs, as outlined in the Test, Learn, Adapt guide, involve three broad (and eponymous) stages (Haynes et al., 2012, p. 19):

  1. Test
    1. Identify two or more policy interventions to compare (e.g. old vs. new policy; different variations of a policy).
    2. Determine the outcome that the policy is intended to influence and how it will be measured in the trial.
    3. Decide on the randomisation unit: whether to randomise to intervention and control groups at the level of individuals, institutions (e.g. schools) or geographical areas (e.g. local authorities).
    4. Determine how many units (people, institutions or areas) are required for robust results.
    5. Assign each unit to one of the policy interventions, using a robust randomisation method.
    6. Introduce the policy interventions to the assigned groups.
  2. Learn
    1. Measure the results and determine the impact of the policy interventions.
  3. Adapt
    1. Adapt your policy intervention to reflect your findings.
    2. Return to Step 1 to improve continually your understanding of what works.

This research process is mainly quantitative, involving the selection of a representative sample size, randomisation, controlling for bias, and hence production of robust statistical indicators for how much improvement a particular intervention has over an existing one. There are elements of qualitative analysis involved in some cases to help interpret results (Haynes et al., 2012, p. 30), but the overwhelming aim is to ‘quantify the benefit [of a particular intervention over others] as accurately as possible’ (Haynes et al., 2012, p. 15). By identifying quantitatively the benefits of one policy intervention over another, the BIT has been very successful, having ‘achieved savings of around 22 times the cost of the team and identified specific interventions which will save at least £300 m over the next 5 years’, and has been rewarded with funding for two further years (Cabinet Office, 2012, p. 1).
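
By way of illustration, the short Python sketch below (a hypothetical example, not taken from the Test, Learn, Adapt guide or from any BIT tooling) shows the quantitative core of a two-arm trial: units are randomly assigned to a control or an intervention group, and the mean outcomes of the two groups are then compared. The unit names and outcome figures are invented.

    import random
    import statistics

    def randomise(units, seed=2012):
        """Balanced random assignment: shuffle the units and split them into two arms."""
        rng = random.Random(seed)
        shuffled = list(units)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return {u: ('control' if i < half else 'intervention')
                for i, u in enumerate(shuffled)}

    def estimate_effect(outcomes, assignment):
        """Difference in mean outcomes between the intervention and control arms."""
        control = [v for u, v in outcomes.items() if assignment[u] == 'control']
        treated = [v for u, v in outcomes.items() if assignment[u] == 'intervention']
        return statistics.mean(treated) - statistics.mean(control)

    # Invented data: say, fine-payment rates (%) in 20 hypothetical local offices.
    units = ['office_%d' % i for i in range(20)]
    assignment = randomise(units)
    outcomes = {u: random.Random(u).uniform(50, 80) for u in units}
    print(round(estimate_effect(outcomes, assignment), 2))

In a real trial the number of units would be fixed in advance and the estimated difference tested for statistical significance, which is precisely why the guide places such weight on determining how many units are required for robust results.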

This emphasis on producing quantitative evidence of efficiency/productivity savings to demonstrate success has, quite understandably, led civil servants to focus increasingly on finding statistical and numerical evidence that their department, project or team is producing ‘value for money’ (O'Brien, 2013). This does not necessarily mean, and in reality often does not mean, that the evidence produced is ‘quantitative’ in the sense that political scientists would understand the term. Often such evidence is based on rough estimations or ad hoc calculations (what Stevens (2011, p. 243) calls ‘killer charts’) ‘pieced together’ from various sources and delivered to ministers with the acknowledgement of only a limited degree of certainty (Freeman, 2007). Nonetheless, where ‘hard’ statistical or numerical evidence of policy success is available, it is given a great deal of attention. Influential think-tanks like Reform (e.g. Tanner, 2013) and Nesta (e.g. Rigby and Ramlogan, 2013) produce easily accessible quantitative reports of the success or failure of certain reforms, or proposals for reforms that may increase efficiency by X per cent, which are pored over by civil servants in central government. Academic institutions such as Cardiff's Violence and Society Research Group and Bristol's Centre for Market and Public Organisation, which both produce quantitatively driven analyses of policy outcomes, have also been successful in forging relationships with central government. Quantitative studies from institutions such as these, together with the more ad hoc calculations and estimations, are given preference over qualitative studies of user satisfaction. By contrast, ‘qualitative’ evidence, usually in the form of anecdotal ‘vox pop’-style case studies, is often used as an evocative, media-friendly extra on top of, but usually not instead of or before, statistical and numerical evidence. Such an emphasis on quantitative evidence is endorsed by the coalition's civil service reforms, which assert that:

all policy makers will be expected to undertake at least five days a year of continuing professional development to ensure they have the right skills, including in new areas such as behavioural sciences (HM Government, 2012, p. 17).

The behavioural science agenda mainly encompasses quantitative methodologies such as RCTs and similar tests of the effectiveness of certain policy interventions. As the government has recently re-commissioned the BIT to conduct further trials in yet more policy fields, it seems unlikely that this positivist focus is going to go away any time soon.

This outline of the coalition government's positivist policymaking is not necessarily intended as a criticism of any of the quantitative research that has (often very successfully) contributed to more efficient and effective governance. What it does highlight, however, is a significant gap between the type of research demanded by civil servants in central government – ‘hard’ statistical/numerical evidence of improvements in efficiency/productivity – and the pluralistic agenda promoted by PPS and the majority of contemporary political science texts and postgraduate training courses in the UK. This gap has significant implications for the ‘relevance’ agenda because it puts political science (conceived in pluralistic terms) at a disadvantage when compared to other more putatively ‘scientific’ social sciences. Some subjects, most notably behavioural economics, pride themselves on their ‘scientific’ credentials, and regardless of whether such credentials are believable to the average political scientist, have found overwhelming favour with central government over ‘pluralistic’ or ‘non-scientific’ subjects. For example, one need only glance at the range of research commissioned by the Bank of England (BoE)1 to observe the overwhelming dominance of formal econometric mathematical modelling over, say, qualitative research in political economy.

Again, note that this is not a criticism per se of econometric modelling or ‘quantitative’ research more broadly.2 Rather, it is to demonstrate the increasing allure to civil servants of any research with an apparently ‘scientific’ or ‘mathematical’ methodology. Research that vigorously denies the possibility of a ‘scientific’ social science or even simply displays results or methodological calculations in non-‘techy’ ways is often placed at a disadvantage, not because in reality it is any less ‘relevant’ but because its results seem partial or ‘subjective’ to civil servants whose very profession (increasingly) pressurises them to provide recommendations to ministers informed by ‘objective’ evidence (Stevens, 2011).

This is not a problem, of course, for researchers who primarily use quantitative methods – they can easily state that their research is part of a wider ‘pluralist’ agenda and leave it at that. Qualitative researchers, on the other hand (particularly those in the infancy of their careers), are placed at a distinct disadvantage because their research does not, often, present findings in ‘scientific’ terms, but as contextual ‘arguments’, particularly those in the interpretive or constructivist traditions (Parsons, 2010, p. 90). These are obviously not appealing to civil servants who are incentivised to search for objective ‘findings’ because their job demands that they present ministers with reports that are as impartial and objective as possible and, increasingly, are based on quantitative indicators. In effect, the argument of PPS that a pluralistic political science will be more relevant to policymakers is in danger of appearing disingenuous when, in reality, the research that policymakers will consume (and the research grants they will fund) is overwhelmingly quantitative in nature. The final section suggests a way forward to close the gap between what research political science can provide and what research is demanded by central government.

Bridging the ‘relevance gap’

This article has not argued that quantitative political science should be banished from the realm of public policy. What it has argued is that there is a gap between what research policymakers demand of political scientists and what research political scientists are willing or able to provide. Policymakers increasingly want ‘hard’, ‘predictive’ scientific evidence offering a level of certainty, not partial or contextual ‘interpretations’ or ‘arguments’. Consequently, the argument of PPS that a necessarily pluralistic political science will be more ‘relevant’ to policymakers is questionable at best and disingenuous at worst. Qualitative scholars whose research is presented in ways that are, in the eyes of policymakers, subjective or partial (not involving regression tables, R-squared figures, fancy algebraic equations, etc.) will lose out, no matter how much we insist they must not, because civil servants are incentivised to look for results that have the appearance of objectivity, usually statistical or numerical findings. ‘Pluralistic’ research loses out to behavioural psychology, econometrics and other forms of avowedly scientific or mathematical social research.

There is no space within this short response article to provide an in-depth elaboration of what the author would advocate instead, but two proposals can be made in the hope of starting a discussion of how to close the relevance ‘gap’. These relate to the accessibility of research and to interaction between researchers and policymakers, and proceed from the observation that more interpretive or broadly ‘qualitative’ research may not be as ‘irrelevant’ as the current outlook might suggest.

Despite the dominance of quantitative evidence outlined above, it could be argued that contemporary policy problems often require ‘qualitative’ rather than ‘quantitative’ treatment. Policymakers are increasingly preoccupied with intractable, localised issues or ‘wicked problems’, such as long-term unemployment, reducing smoking, antisocial behaviour and teenage pregnancy. Such problems connect with a range of policy-focused interpretive research that emphasises how deep ethnographic (Schatz, 2009) or textual (Roe, 1994) studies uncovering local (Fischer, 2000), peripheral (Yanow, 2004) or everyday (Innes, 1990) knowledge and practices can help solve complex policy problems (for a review, see Colebatch, 2004). For example, interpretivists suggest that policy problems can be dealt with by assessing how different communities interpret an issue (like, say, immigration) and seeking to build bridges between communities with conflicting interpretations by encouraging a ‘reframing’ of the issue via ‘negotiation or mediation’ or ‘discourse and debate’ between communities (Yanow, 2000, pp. 21–22). Interpretive research similar to this has been commissioned by government, for instance in education policy on the issue of food in schools (Kaklamanou, Pearce and Nelson, 2012). Some interpretivists suggest going further to reclaim the term ‘science’ from a constructivist perspective, collapsing the ‘qualitative/quantitative’ dichotomy to reorient policy analysis towards emphasising the subjective nature of all forms of human knowledge (Fischer, 2003; Hajer and Wagenaar, 2003). Hopes of such an epistemological reconfiguration are, in mainstream policymaking at least, perhaps overly optimistic. For now, the ‘qual/quant’ or ‘scientific/non-scientific’ divide remains dominant, even if political scientists are aware that this distinction can be problematic, and this article works within the parameters of this distinction. What it does argue, however, is that by facilitating dialogue between researchers and policymakers the barriers of unreflective positivism may begin to be challenged, and a broader notion of ‘science’ in policymaking may gradually emerge. In this regard, it is important that ‘qualitative’ researchers make their work accessible and that there is sustained interaction between researchers and policymakers on methodological issues and the relative value of different types of research. These recommendations can be expanded on as follows.

Accessibility: Senior civil servants in Whitehall have very busy timetables. They are pressurised to produce concise summaries, recommendations and reports on entire fields of research/policy developments at very short notice, often within only a day or even less. Hence, even if we ignore the issue of policymakers' access to articles in the first place (most major journals are not open access, although this is changing), they do not have time to read through an entire 8,000-word research article. Indeed, often they are only able to make the most cursory skim of key policy recommendations. This is a big problem because research findings are not generally presented in an easily accessible way in most political science journals. They tend, even in the article abstract, to prefer extended prose rather than short bullet points. Particularly in constructivist/interpretive studies, even presenting such ‘findings’ in simplistic ‘bullet-point’ form might be considered rather gauche and reductive.

If we are interested in increasing ‘impact’, however, the way policymakers access academic findings needs to be addressed. Here we may learn something from prominent think-tanks, such as the Institute for Government, which summarise research in easily digestible executive summaries. Although clearly academic studies are different in nature to think-tank pieces (requiring more explicit and sustained justifications of reasoning and methodological decisions, for instance), this does not mean that after peer review and publication, researchers themselves cannot provide short, clear summaries of their findings and implications for policy purposes (Flinders, 2013). This format is already provided by some journals in, for instance, management studies where precis articles or short summaries of findings are available.3 This article hence recommends that researchers provide short summaries of their arguments that can be posted, for instance, on university web pages or disseminated in collaboration with think-tanks. One civil servant even suggested to the author that making short podcasts about research and posting them on an easily accessible central website for civil servants would be beneficial.

Interactivity: The above recommendation may be said to apply to all researchers, rather than just qualitative ones, and it does not necessarily overcome the problem that policymakers are overwhelmingly interested in quantitative findings anyway. In some ways, it is difficult to see this changing immediately: ‘faith in the ideal of evidence-based policy dies hard’ (Stevens, 2011, p. 251). More problematically, policymakers will, clearly, always prefer findings that engage with or justify overarching political themes or preferences and appear authoritative: ‘policy-based evidence’ rather than ‘evidence-based policy’ (Tombs and Whyte, 2003). This problem is substantial and, as Ronald Rogowski (2013) argues, reflects the broader incentive structures of democratic politics, which affect the appraisal of all academic evidence (not just political science) in government. Here, I argue more modestly that active engagement with policymakers may begin to break down the barriers of unreflective positivism and promote greater appreciation of the diversity of scientific knowledge (although this would not guarantee that all research findings will find favour with government).

Practically, the current context of reform and improvement within the civil service may offer a window of opportunity for productive, mutually beneficial engagement between policymakers and political scientists. On the one hand, civil servants are increasingly interested in social scientific methodology (as the civil service reform programme suggests) and in acquiring new skills for commissioning and evaluating social research; on the other hand, political scientists are increasingly keen to engage with, as well as having a wealth of knowledge about, the social scientific research process and the value (and limits) of different methods and methodologies. There is, hence, significant potential for researchers and policymakers to exchange knowledge about, on the one hand, social scientific methodology and, on the other, the best way in which research results can be disseminated. In this context, political scientists may emphasise the plurality of research methods (and forms of scientific knowledge) available and how diverse forms of evidence may offer different benefits in evaluating policy interventions and investigating policy problems. Civil servants may also benefit by communicating to political scientists their ‘pressure points’ and suggesting how researchers might channel their findings more effectively into central government.

One way in which this could be done is through collaborative workshops or seminars on research methodology and dissemination. The author of this article (together with Emily Rainsford from the University of Southampton) set up such a workshop in the Cabinet Office entitled ‘Research for Non-researchers’, which aimed to enhance the skill sets of participating civil servants by introducing them to different forms of social scientific research (quantitative, qualitative and mixed method). Around a dozen participants, none of whom had previously received social scientific research training, attended the workshop; they were given an initial introduction to quantitative, qualitative and mixed-method research and then asked to assess the methodologies of two pieces of political science research. They were then set a hypothetical scenario in which they had to commission research into a particular policy problem, and asked what type of research would be best for the problem in question. The workshop was positively evaluated by participants, with some commenting that they had not previously been aware of the different benefits and limits of particular methodologies. Indeed, the author of this article was subsequently asked to give informal advice on conducting semi-structured interviews for an internal research project, perhaps indicating more demand within central government for qualitative methodological training.

The ‘Research for Non-researchers’ workshop was a very short (hour-long), one-off event and hence had significant limitations. For example, there was no time for in-depth exchange of perspectives on research production and dissemination between the participants and organisers (although this seemed to be something of significant interest to participants, particularly the problem of searching for policy-relevant research). It could be argued, however, that organising such an event offered the rare opportunity of direct engagement between political scientists and policymakers which was beneficial for both sides and could potentially be rolled out given necessary funding and impetus. The future development of a broader workshop/seminar programme might look to engage policymakers and political scientists in more detailed exchange of questions and viewpoints on the ‘relevance’ agenda and types of social scientific evidence. If nothing else, this might, together with a greater focus on ‘accessibility’, go a little way towards closing the ‘relevance gap’ and perhaps even lead to the more effective use of political science research in the policy process.

Acknowledgements

This response developed from an ESRC-funded internship in the Cabinet Office's Efficiency and Reform Group from February to May 2012. I would like to thank the ESRC, as well as the many civil servants with whom I had informal conversations that helped shape the views presented in this article. Special thanks go to Emily Rainsford, Rob Langton and all those who helped to organise and participated in the Cabinet Office workshop ‘Research for Non-researchers’ conducted in May 2012.

Notes

  1. Papers can be downloaded from http://www.bankofengland.co.uk/publications/Pages/workingpapers/default.aspx.
  2. Of course, rational choice modellers would no doubt point out that their research has different assumptions to that of researchers using, say, regression analysis. Formal modelling and statistical research are often conflated into a straw man ‘crude version of positivism’ by their opponents, which tends to be ‘shallow and rests on stereotypes of the research process’, something this essay has attempted to avoid (John, 2010, p. 267).
  3. See online, for instance, http://first.emeraldinsight.com/samples/ibm_change.pdf.

Biography

  • Matthew Wood is a doctoral candidate in the Department of Politics, University of Sheffield. His research interests include depoliticisation, crises, the relevance debate, democratic governance and moral panics. He is co-convenor of the PSA Specialist Group on Anti-politics and Depoliticisation. Matthew Wood, Department of Politics, University of Sheffield, Elmfield, Northumberland Road, Sheffield S10 2TU, UK. E-mail: m.wood@sheffield.ac.uk
