Becoming an evidence-based HR practitioner

Authors


Denise M. Rousseau, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213-3890, USA. Email: denise@cmu.edu

Abstract

Evidence-based HR (EBHR) is a decision-making process combining critical thinking with use of the best available scientific evidence and business information. We describe how to get started as an evidence-based HR practitioner. Actively managing professional decisions is a key aspect of EBHR. Doing so involves making decisions, especially consequential or recurring ones, using practices supported by high-quality research. We present a step-by-step set of approaches to becoming an evidence-based HR practitioner: from getting started, through everyday practices and continuous learning, to integrating EBHR into your organisation. In offering guidance for evidence-based practice, this article underscores the connection between effective practice and organisational research.

INTRODUCTION

The complexity and fast pace of today's organisations often lead to knee-jerk business decisions, fad chasing and guesswork regarding ‘what works’. Busy HR managers may put on autopilot critical choices affecting the future of their firms, their employees and the public. The HR practitioner does have a way to learn how to make better-quality decisions and to use HR practices that actually work – becoming an evidence-based HR (EBHR) practitioner. This article is a primer on the what, why and how of evidence-based HR practice. It is written with the HR practitioner in mind, as well as the HR student and consultant. In celebration of HRMJ's 21 years of publishing academic research that pays particular attention to policy and practice, we describe how practitioners can use research in their day-to-day management activities. The issues we address can also apply to HRM scholars seeking to make their research more accessible to practitioners.

EBHR is motivated by a basic fact: faulty practices and decision making abound in HR. Companies persist in using unstructured interviews to try to assess a job candidate's fit, even though there is little evidence that typical interviews can do that (Stevens, 2009). HR departments often pursue one-size-fits-all standardisation in their policies, despite considerable evidence that programmes promoting flexibility benefit people and firms (Rousseau, 2005). In all honesty, can you answer ‘yes’ to the question, ‘Do you know the scientific evidence for ANY of the HR practices your company uses?’ Recent surveys of HR practitioners lead us to suspect that the frank response from many readers is ‘no’.

Blind faith has no place in professional practice. The fundamental problem is not so much that a practitioner lacks scientific knowledge (though that is an issue). Rather, the key problem is the absence of a questioning mindset. Thinking critically is what good professionals do. Wondering what works, what does not and why is the first step towards improving practice. Critical thinking means actively exploring alternatives, seeking understanding and testing assumptions about the effectiveness of one's own professional decisions and activities.

The opposite of critical thinking is imitation: relying on copycat practices from other companies while ignoring widely available scientific findings about what works and what does not. Most insights from HR research do not reach the practitioner – despite the existence of evidence-based guides written with practice in mind (Latham, 2009; Locke, 2009).

Here's a quick ‘what do you know’ test to check your knowledge of well-established scientific findings in HR. True or false?

  1. Combining managerial judgement with validated test results is optimal for selecting successful new employees.
  2. Incompetent people benefit more from feedback than highly competent people.
  3. Task conflict improves work group performance while relational conflict harms it.
  4. Being intelligent is a disadvantage for performing low-skilled jobs.
  5. Integrity tests do not work because people lie on them.

Are you surprised to learn that all these statements are false? Each has been disproved by a large body of studies – 30 in the case of Statement 3, regarding task and relational conflict (De Dreu and Weingart, 2003), and more than 200 in the case of the effects of intelligence (Statement 4; Salgado et al., 2003; Hülsheger et al., 2007). Adding managerial judgement into hiring decisions (Statement 1) actually leads to poorer selection decisions than does the use of validated selection tests and indicators alone (Highhouse, 2008). Incompetent people have great difficulty understanding feedback and tend to use it less effectively than their more savvy counterparts (Statement 2; Ehrlinger et al., 2008). Statement 3 might be considered a bit of a trick: both task and relational conflict reduce work group performance (De Dreu and Weingart, 2003). Contrary to Statement 4, intelligent people have a widely established advantage in performing all classes of work (Stevens, 2009): the more intelligent worker is likely to perform better overall, regardless of whether the job is designing a hotel or cleaning its rooms. Finally, even if people do distort their answers, integrity tests remain highly predictive of dysfunctional work behaviours such as theft (Statement 5; Ones et al., 1993); impression management really does not detract from the predictive power of these tests (Barrick and Mount, 2009).

If you got most of the answers wrong, you are not alone. The HR community tends to be poorly informed about what the evidence tells us in such fundamental areas as selection, training, feedback and HR strategy (Rynes et al., 2002). HR professionals actually fare no better on average than college undergraduates on an HR knowledge test, although MBAs are slightly better informed (Timmerman, 2010).

If you got most answers right, you are well informed and may already use evidence in your HR practice. And, you might already know that the HR department's capacity to help firms confront contemporary challenges lies in effectively deploying scientific knowledge regarding what works. Building this capacity requires evidence-informed practitioners. This article is an invitation for HR practitioners to participate in their own development and that of the HR field itself by becoming evidence-informed practitioners.

THE CALL FOR EBHR

Evidence-based practice is a radical change from management and HR ‘as usual’. It entails redoubling our efforts to do what we know works and to develop critical judgement in making decisions that impact the well-being of our organisations and employees. EBHR means making decisions, promoting practices and advising the organisation's leadership through the conscientious combination of four sources of information: the best available scientific evidence; reliable and valid organisational facts, metrics and assessments; practitioner reflection and judgement; and the concerns of affected stakeholders.

The call for greater scientific underpinning of interventions and decisions in practice has met with wide acceptance in such fields as medicine (Sackett et al., 2000), education (Ambrose et al., 2010), criminal justice (Sherman, 2002) and advertising (Armstrong, 2010). At the outset, EBHR has a huge advantage over evidence-based practice in other fields, especially elsewhere in business. HR research is well developed, with bodies of evidence related to many ongoing organisational challenges. HR domains in which the science is quite informative include motivation, group processes, task coordination, individual and organisational learning and development, adaptation, innovation and change management, and conflict and its resolution. In fact, of all of management's many subfields, HR has the richest, most expansive base of scientific evidence to date (Locke, 2009; Charlier et al., 2011).

The need to rethink conventional HR practice is urgent. Recent events add further complexity to challenges that by themselves would test the acumen of any expert or practitioner: economic meltdowns, failed business models and deteriorating organisational capacities to forecast and manage risk and adapt effectively to market changes. If the globalised environment is less predictable and stable than in the past, managers need to be realistic about what can and cannot be learned from past practice. Managers must learn how to respond better to uncertainty (Taleb, 2010) by pursuing greater flexibility in the face of unpredictable events (Weick and Sutcliffe, 2007).

At the same time, this environment contains a powerful means for building capacity to address its highly demanding conditions. The explosion of knowledge accessible via the Internet includes the broad accumulation of scientific research on management and HR issues. We are the beneficiaries of over 65 years of post-World War II management and social science research – a deep and broad body of evidence. Lots of information (and knowledgeable people who can point practitioners to it) is accessible, ranging from evidence summaries (Locke, 2009) to Internet-enabled communities of practice (http://www.evidencebased-management.com).

Note that although scholars, educators and consultants provide essential support, EBHR remains something only practitioners actually do. If you are an HR practitioner, your willingness to become involved, innovate and share what you learn in becoming an EBHR practitioner is a key stepping stone in this initiative.

THE PRACTICE OF EVIDENCE-BASED HR

The basic steps for becoming an evidence-based manager fall into three phases: (a) getting started, (b) everyday practice and learning, and (c) integrating EBHR in the organisation. These steps reflect the critical activities today's evidence-informed practitioners are engaged in and form the basis of training programmes and courses in evidence-based management.

It starts with your mind

A practitioner interested in the idea of EBHR has many options for what he or she could do differently as a result of adopting it as standard practice. Some people are drawn to an idea they have read about, like the people who started holding their group meetings standing up after Pfeffer and Sutton (2006) reported that Chevron used this ‘evidence-based practice’ to make meetings shorter and more efficient. Picking up a new idea and trying it out, however, is not in itself evidence-based practice. It is more like a ‘flavour of the month’ approach – old wine in a new bottle – because the decision making behind the new practice takes no account of what is likely to work in that particular organisation. This kind of faddish adoption is not what we consider EBHR practice. Instead, a more mindfully engaged way to get started is first to understand what evidence-based practice really is, and then to do the critical thinking – with a questioning mindset – that acting on evidence requires.

Understanding what EBHR means

At its core, EBHR combines four fundamental features into everyday management practice and decision making (Rousseau, 2006, 2012):

  1. Use of the best available scientific evidence from peer-reviewed sources.
  2. Systematic gathering of organisational facts, indicators and metrics to better act on the evidence.
  3. Practitioner judgement assisted by procedures, practices and frameworks that reduce bias, improve decision quality and create more valid learning over time.
  4. Ethical considerations weighing the short- and long-term impacts of decisions on stakeholders and society.

The best available research evidence  When referring to the best available evidence, we generally mean findings from published scientific research. Research in scientific journals is vetted according to evidentiary criteria, including standards for measurement reliability and internal validity; this vetting process is known as ‘peer review’ (Werner, 2012). Measurement reliability means that indicators are low in error, a concern with all data, from telephone numbers to profit measures and survey questions. Internal validity indicates how likely it is that a study's results are biased: bias exists where alternative explanations for the results are not controlled or ruled out. For instance, suppose the research question is whether self-managing teams improve labour productivity. Better-quality evidence uses control groups (conventional teams) or longitudinal designs (comparing the base rate of productivity before the teams became self-managing with productivity rates measured long enough after the change to see whether any initial gains are maintained). In contrast, lower-quality evidence uses cross-sectional (single-time) surveys or case studies. Sometimes the best available evidence is a cross-sectional survey that controls for some biases but not all. In that case, some evidence is still far better than no evidence at all and can help improve practitioners' decisions – but it is important to know what kind of evidence is being used and what the advantages and drawbacks of relying on it are.

Organisational facts, metrics and assessments  An HR manager who seeks to make good use of evidence must take into account the facts of the situation in order to identify what kinds of research findings are likely to be useful. For example, when exit interviews are used to figure out what's causing recent job turnover, leavers who report a high incidence of job stress can direct the practitioner's attention to evidence connecting stress with turnover. Knowing the facts of the situation makes it easier to seek and use appropriate evidence to identify plausible explanations for a problem, potentially useful interventions and how best to carry them out. Such organisational facts can involve relatively ‘soft’ elements such as organisational culture, employees' educational level and skills and one's management style, as well as ‘harder’ figures such as departmental turnover rates, workload and productivity trends.

Practitioner reflection and judgement  Effective use of evidence depends not only on good scientific knowledge informed by organisational facts but also on mindful decision making. All people have cognitive limits and are prone to bias in making decisions (Simon, 1997). Thoughtful judgement and quality decisions are aided by practices that allow deeper consideration of relevant evidence and facts (Nutt, 2004; Larrick, 2009). In particular, decision frameworks and routines call attention to aspects of decisions that might otherwise be neglected (e.g. contingencies, diverse goals; Nutt, 1998, 2004; Yates, 2003). Evidence is not answers: it needs to be considered in context. Suppose you are looking to improve the job performance of new hires. We know that general mental ability (GMA) generally leads to higher performance (Stevens, 2009), but if your firm is already selecting people with good grades from good schools, GMA may be largely covered by your current criteria. New hires may instead need some other specific set of skills to be successful, or performance problems may be due to something inherent in the work setting itself – inadequate supervision, poor work conditions, etc. Careful analysis of the situation based on critical thinking, supported by a decision framework that calls attention to assumptions, known facts and goals (see the discussion that follows), can lead to more accurate assessment of the problem and interpretation of facts.

The consideration of affected stakeholders  HR decisions and practices have direct and indirect consequences for an organisation's stakeholders. These consequences affect not only the rank and file but executives and managers too. In some cases, the affected stakeholders are outside the organisation, such as its suppliers, shareholders or the public at large. For example, a decision to increase the retention and advancement rates of women is likely to generate push back from men. Implementing career-building activities in a way that lets all employees benefit can reduce the turnover of women and minority group members and increase their advancement, while sending the signal to those traditionally in the majority that this company supports career development for employees broadly (Cox, 1994). Attending to stakeholder issues is a key feature of comprehensive, evidence-based decision practices. These decision practices are designed to reduce unintended consequences by considering relevant issues upfront (Yates, 2003).

You might develop your understanding of these four features of EBHR by reading a few of the sources we cite (most of which can be accessed for free at http://www.evidencebased-management.com). Then, you might practice explaining what EBHR is to friends and colleagues. The questions they raise will help develop your understanding of what it is and what it is not. Looking back over your reading with these questions in mind will help you answer them.

Some people think EBHR is just a knock-off from the field of medicine. To the contrary, EBHR is not randomised controlled trials for managers: drugs and people are not the same. EBHR does mean getting evidence about what works, a hallmark of drug and other treatment studies. At the same time, EBHR recognises that HR practitioners often must act regardless of whether evidence is available to guide their decisions. The essence of EBHR is approaching decisions, uncertainty and risk in a mindful fashion. Practising EBHR involves a hunger for knowledge and a questioning mindset.

Developing a questioning mindset  Unfreezing old habits of mind is necessary for EBHR practice. It means questioning assumptions, particularly where someone (including ourselves) asserts a belief as fact. As questioning becomes a habit, it informs your conversations and deliberations. You will begin to ask yourself and others, ‘What's the evidence for that?’ as impressions, beliefs and attitudes appear in your conversations about the organisation, its practices and the decisions being made. This approach has turned many recent MBA graduates into the ‘evidence police’, a role they learn to play over time in a manner that promotes critical thinking without necessarily criticising.

Concern for the facts and logic behind decisions translates into active questioning and scepticism. Scientists refer to this critical habit of mind as ‘mindfulness’: a ‘heightened sense of situational awareness and a conscious control over one's thoughts and behaviour relative to the situation’ (Langer, 1989). It is helpful to develop mindfulness as a way of thinking about information, decisions and actions. Being able to articulate and check the logic underlying your decisions is an important way to monitor any decision's quality (Yates, 2003).

Evidence-focused questioning of statements and assertions changes both the conversations and the deliberations of emergent EBHR practitioners. A must here is for practitioners to learn to raise these questions in socially effective ways (read: civil and persuasive); to be effective, EBHR managers need to avoid being dismissed as mere naysayers. Raising questions can be anxiety-provoking for would-be EBHR practitioners who fear making waves. This questioning extends to assertions made by professors, consultants and other ‘experts’. And, yes, we expect you to question us by critically considering our arguments, reviewing our sources and contacting us as need be. Once practised at it, EBHR practitioners become comfortable asking, ‘Is this your personal opinion based on your own experience, or is there scientific evidence for it?’ You may be surprised to learn how much uncertainty really exists regarding the practices your organisation uses. Evidence-based practice thrives in a questioning culture – not a cocky one. No one benefits when decisions are made in ways that deny or downplay the uncertainties involved. In fact, recognising what we do not know is the first step in identifying whether uncertainties can be managed. So, if a training programme increases skills and knowledge only for some of the people some of the time, we might consider what other interventions might also be useful.

Make your decisions more explicit  Managers make decisions all the time; it is their job. In EBHR, decisions are made explicit to reduce decision neglect (not making a decision that needs to be made), to avoid making decisions on autopilot (taking important actions without deliberation) and to increase mindful, deliberate decision making.

The process of making decisions explicit has two parts. The first is developing decision awareness: recognising the numerous micro-choices you and your company make daily, all with some potential to be informed by evidence. Try making a list of the events of a morning or afternoon at work. Whom did you encounter? What did you do or say? Then list the various opportunities you had to make a decision, no matter how small. You will find that there are far more decisions in your day than you ever realised. Now, EBHR is not about making every possible decision using some EBHR formula. Far from it: EBHR means becoming more mindful of the opportunities you have to choose courses of action, regardless of whether you act on every one of them. In effect, you need to recognise decision-making opportunities in order to make deliberate choices about when evidence is important to pursue.

The second part of making decisions explicit is paying attention to how a decision actually gets made. Analyse a recent decision or intervention you have made (alone or with colleagues). Ask yourself: from whom or where did you learn the facts used in this decision? What evidence supported the path actually taken? Did some pieces of information influence the decision more than others? Was some evidence missing? What indicators do you have of the decision's success? Where does it fall short? In what alternative ways might this decision have been made (e.g. using additional or different information, stakeholder discussions, etc.)?

Awareness of the assumptions made in reaching an actual decision (information, sources, ways of deliberating) is a step towards EBHR. Developing decision tools, such as a checklist or decision model (Yates and Tschirhart, 2006; Gawande, 2009), can prompt more systematic thinking and use of information. An evidence-savvy practitioner we know regularly uses a logic model he adopted (see W.K. Kellogg Foundation, 2004) to guide situation analysis and decision making. In working through decisions with his staff, this executive uses a flow diagram that lays out questions about the initial assumptions, inputs, activities and expected outputs that a decision involves. His direct reports tell us that being able to articulate and check the logic underlying a decision makes it easier to be sure important issues are thought through. Other approaches you might consider include using a decision template such as Yates' Cardinal Rules (Yates and Tschirhart, 2006). The key is to create, adopt or adapt a framework for thinking through important decisions and then to make those decisions using the best information available.
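
To make this concrete, here is a minimal sketch, in Python, of how such a decision template might be captured. The field names are our own illustration of the logic-model flow (assumptions, inputs, activities, expected outputs), not the Kellogg Foundation's actual instrument:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    """A logic-model-style template for thinking a decision through.
    Field names are illustrative, loosely following the assumptions ->
    inputs -> activities -> expected outputs flow described above."""
    question: str                                               # the decision to be made
    assumptions: List[str] = field(default_factory=list)        # beliefs to challenge: 'What is the evidence for that?'
    inputs: List[str] = field(default_factory=list)             # organisational facts, metrics and research on hand
    activities: List[str] = field(default_factory=list)         # candidate courses of action
    expected_outputs: List[str] = field(default_factory=list)   # outcomes and indicators of success

record = DecisionRecord(
    question="Should we move to self-managing teams in the call centre?",
    assumptions=["Turnover is driven by lack of autonomy"],
    inputs=["Exit-interview themes", "Quarterly turnover rates"],
    activities=["Pilot self-managing teams in one unit"],
    expected_outputs=["Turnover falls below last year's base rate"],
)
```

Even a template this simple forces assumptions and expected outputs into the open, where they can be questioned and later compared with what actually happened.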

Everyday practice of EBHR: making decisions informed by scientific evidence

Making decisions informed by scientific evidence is a turning point in HR practice – it is a big step, and it is not always easy. The more you do it, the better you will become at it. Start by gathering evidence relevant to a particularly compelling decision. In developing a crisis management policy post-9/11, a New York hospital manager commissioned a systematic review of the evidence to identify effective crisis management practices. When introducing an electronic physician-ordering system, another manager hired a summer intern to conduct a systematic review (i.e. a systematic assessment of all research related to a managerial question) of published studies on managing a change in information technology (both examples are from Kovner et al., 2009). Still, we recognise that most decisions are made using only the information practitioners have at hand. So let us first talk about how to increase the quality of the evidence you already know.

Doing directed reading on scientific evidence  Think about some important knowledge gap you or your organisation have. Then begin regular reading of science-based publications on the issues (e.g. talent management, market trends, problem-solving processes). Check out the business section of your local bookseller for books citing research articles as the basis for their ideas. Avoid books and management articles without citations or full of opinions from so-called experts; this includes Harvard Business Review and other popular management magazines, unless you come across an article explicitly citing scientific evidence. Otherwise, search online sources of scientific articles (your corporate or public library is likely to provide access to HR-relevant e-sources, including ABI/INFORM or Business Source Complete from EBSCO; Google Scholar also gives reasonable access). Focus on articles that are peer-reviewed (for the two database examples, there is a box on the screen where you can indicate this choice). In the peer-review process, independent scientists anonymously critique scholarly work to determine whether it merits publication in a scientific journal. Peer review is central to establishing the credibility of scientific evidence. The kind of knowledge scientific research produces includes general principles (e.g. ‘set specific challenging goals to achieve high performance’, Locke, 2009) as well as frameworks to help in making decisions (Boudreau, 1984; Nutt, 2004). Sharing relevant science-based articles on managerial concerns with your colleagues can be a way to get them thinking. Such readings can be the basis for a ‘management book club’ and will provide relevant facts to cite in memos, helping you make an evidence-based case for your recommendations. On the other hand, even peer-reviewed articles can contain evidence that is not top quality or is inapplicable to your situation, so developing and exercising your critical judgement remains important.

Searching for information on a specific decision  Let us talk about how to find high-quality evidence to incorporate into a specific management decision. There is good news, and there is bad news. The good news is that in the past year, at least 1,350 research articles on HR were published. That is also the bad news: you would have to read three or four articles each day just to keep up with them all. Back to the good news: you do not need to read them all. Targeted reading helps you to practice in an evidence-informed way, and for incorporating evidence into a specific decision, we now describe a time-tested approach to gathering evidence in a practical way.

Imagine you are an HR manager at a large Canadian health-care organisation with 4,000 employees. The board of directors has plans for a merger with a smaller health-care organisation in a nearby town. However, the board has been told that the organisational cultures differ widely between the two organisations. The board of directors asks you if this culture difference can impede a successful outcome of a merger. Most of them intuitively sense that cultural differences matter, but they want evidence-based advice. So how do you start?

Formulate an answerable question  Start with a focused question based on a practical issue or problem. Questions like ‘Do team-building activities work?’ or ‘Is 360-degree feedback effective?’ may be interesting to answer, but they are very broad. A more specific question is more relevant and informative: for example, ‘Do team-building activities improve product quality in manufacturing?’ or ‘Is 360-degree feedback effective as a tool for improving governmental managers' service to the public?’ To formulate such targeted questions, consider the kind of setting (professional, government, non-governmental organisation, for-profit) in which you are interested, the values and preferences of the target group and the kinds of outcomes that matter.

In our example, let us assume the board explains that the objective of the merger is to integrate the back-office of the two organisations (IT, finance, purchasing, facilities, etc.) in order to create economies of scale. The front offices and the primary processes of the two organisations will remain separate. Your research question might be something like, ‘How do organisational culture differences affect a successful integration of back-office functions during a merger between two health-care organisations of unequal size?’
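
The ingredients of such a question can themselves be written down as a small template. The sketch below is our own illustration (the component names are assumptions on our part, not a framework from this article); it simply assembles the focused question from its parts:

```python
from dataclasses import dataclass

@dataclass
class FocusedQuestion:
    """Components of an answerable practice question (illustrative names)."""
    factors: str   # what might matter, e.g. culture differences
    outcome: str   # what success looks like in this setting
    context: str   # the setting and target group

    def text(self) -> str:
        return f"How do {self.factors} affect {self.outcome} during {self.context}?"

q = FocusedQuestion(
    factors="organisational culture differences",
    outcome="a successful integration of back-office functions",
    context="a merger between two health-care organisations of unequal size",
)
print(q.text())  # reproduces the research question above
```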

Search for evidence  The fastest, most efficient way to search is to contact people who have what is called ‘pointer knowledge’ (Goodman and Olivera, 1998). This includes people like business or social science librarians, college professors or researchers in your areas of interest who can direct you to the scientific evidence you are looking for. (As an evidence-informed practitioner, you are likely to find yourself making some new friends over time.) If you do not have access to such people, then search in a bibliographical database such as ABI/INFORM yourself.

Start a search with the keywords from your question. Keep in mind that terms used in everyday speech may differ from the concepts scholars use. In our example, the keywords of the practitioner and the scientist are the same: ‘merger’ and ‘organisational culture’. An initial search on these terms alone returns far more articles than anyone could read. Since we want quality articles that include empirical research, we narrow the results by adding the term ‘studies’ to our subject terms and checking the box ‘Scholarly journals, including peer reviewed’. This second search results in 95 articles. Still quite a lot, so we use a third term, ‘integration’, to search within these articles and reduce the number to 35. Adding the subject term ‘non-profit organisations’ results in 0 articles, so we stick with the 35.
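
The narrowing logic is simple enough to write down. The sketch below runs over a toy list of records rather than a real database interface – we deliberately do not imitate ABI/INFORM's actual query syntax – and its field names and records are invented for illustration:

```python
# Toy illustration of progressive narrowing: require additional subject
# terms at each step and keep peer-reviewed work only. Not a real database API.
articles = [
    {"title": "Culture clash and post-merger integration",
     "subjects": {"merger", "organisational culture", "studies", "integration"},
     "peer_reviewed": True},
    {"title": "A consultant's war stories",
     "subjects": {"merger", "organisational culture"},
     "peer_reviewed": False},
]

def narrow(results, required_subjects, peer_reviewed_only=True):
    """Keep records that carry all required subject terms."""
    return [a for a in results
            if required_subjects <= a["subjects"]
            and (a["peer_reviewed"] or not peer_reviewed_only)]

first_pass = narrow(articles, {"merger", "organisational culture", "studies"})  # 95 hits in the worked example
final_pass = narrow(first_pass, {"integration"})                                # 35 hits
print([a["title"] for a in final_pass])
```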

Critically appraise the evidence  For topics with a deep evidence base, scholars may already have pulled together what is known into reviews of the body of evidence. Although some reviews are summaries of an author's particular point of view, others are authoritative reviews, i.e. reviews that present the most convincing evidence. The general label for an authoritative review is ‘systematic review’, of which a ‘meta-analysis’ is a particular case.

A systematic review identifies as fully as possible all the scientific studies relevant to a particular subject and then assesses the validity of the evidence of each study separately before interpreting the body of evidence as a whole. One especially prevalent form of systematic review is the meta-analysis: a study of studies, in which findings across studies are combined statistically in order to achieve a more accurate estimate of the results and the strength of the effects they describe. If we look into our 35 articles, we find one systematic review that is in fact a meta-analysis based on 46 studies with a combined sample size of 10,710 mergers and acquisitions (Stahl and Voigt, 2008). In other words, somebody has done some of the work for us and pulled together the results from 46 studies of culture difference and post-merger integration. The outcome of this single study therefore outweighs the conclusions of any one study alone.
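
To see what ‘combined statistically’ means in practice, here is a minimal sketch of the fixed-effect, inverse-variance method, one standard way meta-analyses pool effect sizes. The numbers are invented for illustration and are not taken from Stahl and Voigt (2008):

```python
import math

def fixed_effect_pool(effects, variances):
    """Pool effect sizes by weighting each study by the inverse of its
    variance, so larger and more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval

# Three invented studies of culture difference vs. integration effectiveness:
effect_sizes = [-0.30, -0.10, -0.22]   # e.g. correlations
variances = [0.010, 0.020, 0.005]      # smaller variance = more precise study
estimate, ci = fixed_effect_pool(effect_sizes, variances)
print(round(estimate, 3), tuple(round(x, 3) for x in ci))
```

The pooled estimate sits closest to the most precise studies, which is exactly why a meta-analysis outweighs any single study taken alone.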

If we had no systematic review or meta-analysis, we would read the abstracts of the other 34 studies we retrieved. Many studies will not be valid or relevant, so how do we separate the wheat from the chaff? We look at three aspects of each study: its internal validity (closeness to the truth owing to limited bias), its impact (the size of the effect) and its relevance (applicability to our situation). For each article, we start by reading the abstract summarising the study. If not enough information is provided, we leaf through the article to see whether it is relevant to the kinds of effects we are interested in. If so, we then evaluate the study's internal validity. The good news is that, in most cases, you can appraise internal validity by identifying the research design. According to Norman and Streiner (2003), ‘Cause and effect can be established only through the proper research design; no amount of statistical hand waving can turn correlations into conclusions about causation’. The bad news is that most articles are tough to read (or at the very least take time to read thoroughly) and may not give all the information you would like about their methodologies.

Determining how a study was done can require careful reading. Go to the methodology section (if there is not one, be concerned) and ask yourself two questions: Is a control group used for comparison, and were data collected at more than one point in time to measure the effects of the intervention? If both questions are answered with yes, the study's level of evidence is fairly good (you could say ‘it is shown that . . .’ based on this study). If only one question is answered with yes, the level of evidence is acceptable, which means you must be careful in drawing conclusions (you might say ‘it is likely that . . .’ instead of ‘it is shown that . . .’ when citing the results in a conversation or memo). If both questions are answered with no, the study has a low level of evidence; in that case, you need a larger set of studies with consistent results before drawing conclusions, and the language changes again: ‘there are signs that . . .’ instead of ‘it is likely that . . .’. (These screening rules are sketched in code after the list below.) Additional questions to appraise the article include:

  • ‘Did the researchers use objective and validated measures, questionnaires or other methods?’
  • ‘Are important effects overlooked?’
  • ‘Could there be bias?’
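
These screening rules reduce to a small lookup, as a quick sketch shows. It assumes only the two design questions above; real appraisal would also weigh the measures, overlooked effects and possible biases just listed:

```python
def grade_study(control_group: bool, repeated_measures: bool) -> tuple:
    """Map the two screening questions to a rough evidence level and the
    hedged wording suggested above for citing the result."""
    if control_group and repeated_measures:
        return ("fairly good", "it is shown that ...")
    if control_group or repeated_measures:
        return ("acceptable", "it is likely that ...")
    # Low-level evidence: look for a larger set of studies with consistent results.
    return ("low", "there are signs that ...")

print(grade_study(control_group=True, repeated_measures=False))
# -> ('acceptable', 'it is likely that ...')
```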

Different types of research questions require different types of research designs. As our interest is in the cause–effect relationship between an intervention and its outcome (‘Does it work?’), a controlled study with a pre-test conducted prior to a treatment or intervention generally is the strongest research design. Case studies and cross-sectional designs are the weakest for showing cause–effect relationships. Of course, this does not mean these study types have an inherently weak design overall. A case study is an appropriate design for providing descriptive information and also can be a strong design when it comes to research questions about ‘how’ or ‘why’ an intervention works (Petticrew and Roberts, 2003). Also, case studies are often the first indication that a management practice has unintended consequences. However, a case study is not the best design to assess the strength of the cause–effect relationship that might exist between an intervention and its outcomes (Trochim and Donnelly, 2007).

Well-designed cross-sectional surveys can provide a higher level of evidence when their analyses test competing explanations, use analytic methods to reduce bias and are supported by findings in multiple settings. Both surveys and case studies can be very useful for management practice, provided it is borne in mind that the results of such designs are more prone to bias. And if only this type of research has been conducted, it remains the best available evidence: it should not be discarded but used with some mindfulness about its limitations.

When we look at the abstracts of our 35 articles, we find that quite a few are not relevant: some articles are about cross-border acquisition, contract manufacturing, family firms or the measurement of cultural differences. When we also leave out the case studies, we end up with eight studies. After reading the methodology sections, we conclude that seven of the eight studies have a cross-sectional design, most of them surveys. Of course, for a study to be fully valid, differences in organisational culture should be measured before a merger takes place and compared with data collected afterwards. However, it is very difficult to gain access to data during the merger negotiation period, which explains the lack of controlled studies with pre-tests.

So what is the outcome of the studies we identified? Well, not surprisingly, most studies conclude that there is a negative association between managers' perception of cultural differences and the effectiveness of the post-merger integration. Plainly said, the bigger the cultural differences going in, the less effective managers believe the post-merger integration is. Overall, based on the 46 studies, the authors conclude that it is likely that ‘when a merger requires a high level of integration, cultural differences can create obstacles to reaping integration benefits’. However, the meta-analysis points out that differences in culture between merging firms can also be a source of value creation and learning. For example, in mergers that require a low level of integration, cultural differences are found to be positively associated with integration benefits. When the dominant organisation also grants the smaller organisation a considerable amount of autonomy, moderate cultural differences might even function as a catalyst for value creation.

Integrate evidence with your own expertise, context and stakeholder concerns  Your expertise and experience are important factors in how you apply the evidence you have gathered. With regard to our example, ask yourself whether an integration limited to the back office can be considered ‘low level’. When you consider the body of evidence relevant to your question, ask yourself whether some facet of the situation might make the scientific findings inapplicable. A lack of commitment from key stakeholders might make only the lowest level of integration possible. How relevant is the evidence to what you are seeking to understand or decide? You might find that the evidence is about product quality, while your concern is cost reduction. Is the evidence informative? That depends on your needs with respect to the decision you have to make. What are your organisation's potential benefits and harms from the decision? If your circumstances make an intervention particularly difficult or risky, you might consider a pilot test first. Does the evidence give you insights into how to run the test? Would the intervention align with the interests of all stakeholders? Depending on the answer, this could be a fatal flaw or simply a problem to be managed.

The last key step is to monitor the outcome and evaluate the results of your decision. Facilities within Robert Bosch, the automotive engineering firm, use an electronic posting system to monitor the outcomes of decisions its teams have made. At the time of the decision, the team indicates its assumptions, the expected outcome, milestones and enabling conditions. On a monthly basis, these decision postings are monitored and updated. As we will describe in the next phase of becoming an EBHR practitioner, evaluating the outcomes and results of your decisions is inherent to evidence-informed practice. Like the grand rounds that physicians and medical residents make each day in a teaching hospital, it is the conversation, reflection and ultimately ‘more critical thinking’ from this process that leads to better use of evidence and better practice.
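
A monitoring routine like Bosch's can be approximated with something as simple as a dated decision log. Here is a minimal sketch with field names of our own; it is not Bosch's actual system:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DecisionPosting:
    """One logged decision, revisited on a regular review cycle."""
    decision: str
    assumptions: List[str]
    expected_outcome: str
    milestones: List[str]
    posted: date = field(default_factory=date.today)
    reviews: List[str] = field(default_factory=list)  # notes comparing actual with expected

    def add_review(self, note: str) -> None:
        # Called on each review cycle (monthly, in the Bosch example).
        self.reviews.append(f"{date.today().isoformat()}: {note}")

posting = DecisionPosting(
    decision="Outsource payroll processing",
    assumptions=["Vendor error rates match in-house rates"],
    expected_outcome="15% cost reduction within a year",
    milestones=["Vendor selected", "Parallel run completed"],
)
posting.add_review("Parallel run on schedule; error-rate assumption holds so far")
```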

Integrating EBHR in your workplace

The practice of EBHR described above involves activities individuals can do by themselves, with or without the support of their employers or others in the organisation. The next step is integrating EBHR practices into the broader organisation. Bosses and peers often appreciate the professionalism and conscientiousness that EBHR practitioners manifest. Yet there is often pushback. When decisions need to be made quickly or there is politicking and backbiting in the firm, practitioners report having to choose the situations in which they pursue evidence-based approaches conscientiously. Exceptions apply when the practitioner is in an executive or otherwise high-level position.

The cultural meaning and value of evidence (particularly scientific evidence) varies across firms, with technical firms possibly exhibiting more receptivity. For example, Google and Microsoft structure their employee selection processes based on evidence, using work samples such as technical problems and case questions in assessing candidates. Health-care organisations have begun using management evidence in making decisions in line with the evidence focus of their key internal workforce, nurses and physicians (Kovner, 2012).

Making evidence-based practice organisational and not just personal involves consciousness-raising about the existence and utility of scientific research for HR-related and other management-related decisions. Getting the idea that such evidence exists out to colleagues can entail conversations and lunchtime meetings where new findings or applications of certain information are presented and discussed. Use research citations in internal memos to help build the case for your recommendations, and also raise awareness about the need for and benefits of using evidence to support one's case. The idea of a ‘management journal club’ to introduce new ideas to HR staff and management or discuss the findings of a study that all have read can work well. Or, ensuring that the first part of regular meetings attends to developing the staff's ability to understand and use evidence can in effect ‘sharpen the saw’, that is, enhance the team's abilities to practice EBHR. Often, it is best to use a bundle of these practices, reflecting a higher-level mastery of EBHR concepts.

It is useful to develop routines that incorporate both evidence and reflective decision making. Key elements in good decision making include features such as ‘needs’, ‘tradeoffs’ and ‘intervention features and likely success’ (see Yates, 2003; W.E. Upjohn, for examples). All of these provide the basis for a set of queries or steps that call attention to important decision features. Share this thinking with your colleagues and staff so that it becomes part of a more comprehensive approach to managing decisions. A template provides regular ways to ask, ‘Do we have the best evidence for that?’ and other questions that can improve the quality of decisions. The US Army regularly employs decision tools such as checklists or flow-diagram models in making substantive decisions. After-Action Reviews are one type of post-decision routine, used by the military as well as by consulting teams on completion of missions or projects. Related practices include conducting small pilot tests to gather facts about the outcomes of decisions, keeping decision logs to review outcomes later and testing competing assumptions. Research into the value of these routines indicates that they overcome human cognitive limits by requiring less recall of key processes, allowing individuals to be more reflective and creative (Larrick, 2009).

Expanding the organisation's evidence gathering and research participation can be done in several ways. First, commissioning systematic reviews of evidence on important practice questions gets employees involved in the search for and synthesis of information (Tranfield et al., 2003). When an important decision has lead time, an Evidence Assessment Team can be assembled whose Internet-savvy members are tasked with finding what the science says about a practice question or a pending decision. This can be done in-house or involve students from a local university or research colleagues on faculty. Second, having teams participate in practice-oriented research evaluating the impact of a change in HR practice helps build critical thinking about appropriate indicators, information sources and controls to rule out alternative explanations. This kind of research involvement is the practice-oriented research promoted by the Center for Effective Organizations in the US (Lawler, 2006) and the Cranfield School in the UK (Tranfield et al., 2003). Finally, systematically evaluating the outcomes of practice decisions leads to more accurate feedback and better decisions. Executives who search for disconfirming evidence tend to make better decisions than their less curious or more defensive counterparts who do not (Nutt, 1998, 2004). The key idea is to build quality connections between practice and research.

CONCLUSIONS

The challenges of promoting and practicing EBHR are many. Some of these challenges are unique to HR and management; others are inherent in any innovation. Every innovation winds up being adapted in some way, big or small, to make it easier for practitioners to use (Ansari et al., 2010). The unprecedented challenge for EBHR is that management is not a ‘profession’. Managers have diverse disciplinary backgrounds. HR practitioners have no single credential that authorises their expertise, and the occupation is open to those with no degree and those with several. There are no regulatory requirements regarding the education or knowledge an individual must have to become a manager or an HR professional. The HR industry associations SHRM (Society for Human Resource Management) and CIPD (Chartered Institute of Personnel and Development) administer examinations to certify member expertise. At present, the SHRM exam is not highly evidence-based, instead supporting industry-standard practice. In contrast, the CIPD (active in Ireland, Britain and elsewhere in Europe) focuses more on science-based knowledge and aligns with master's-level university programmes throughout Europe.

Many professionals have extensive theoretical knowledge and related skills that they apply in practice. The issue is how well evidence is represented in the day-to-day practice of HR professionals. Consider that physicians all over the world have taken the Hippocratic Oath (excerpted):

  “I will respect the hard-won scientific gains of those physicians in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow. I will not be ashamed to say ‘I know not’, nor will I fail to call in my colleagues when the skills of another are needed for a patient's recovery.”

Evidence-based practice, whether in medicine or business, means doing things right and doing the right thing. Taking up the practice of EBHR offers practitioners three huge benefits. First, a science-based practice of management promotes better outcomes from your decisions and eases their implementation. When it comes to new HR practices and trends, EBHR gives you tools to help distinguish the wheat from the chaff. Second, developing yourself as an evidence-based practitioner is empowering. Becoming evidence-informed helps you develop powerful arguments to convince others that implementing constructive practices in your organisation is a good idea. Lastly, practicing EBHR ensures ongoing learning throughout your career, through closer ties with research and researchers and with informed communities of EBHR practitioners.

Evidence-informed decisions are part and parcel of professional practice. By making evidence-based decisions, EBHR practitioners develop greater objectivity and balance in their decisions. At the same time, academic researchers and educators too have an important responsibility, developing a better understanding of the conditions of practice and the critical knowledge and skills that support good professional practice. In doing so, all manifest the responsibility and accountability that is the hallmark of any profession.

Acknowledgements

Cathy Senderling did a superb job in editing this article. The writing of this article was supported by an H.J. Heinz II Professorship.
