Source Handler telephone interactions with covert human intelligence sources: An exploration of question types and intelligence yield

Funding information: Centre for Research and Evidence on Security Threats, Grant/Award Number: ES/N009614/1

Summary: Law Enforcement Agencies gather intelligence in order to prevent criminal activity and pursue criminals. In the context of human intelligence collection, intelligence elicitation relies heavily upon the deployment of appropriate evidence-based interviewing techniques (a topic rarely covered in the extant research literature). The present research gained unprecedented access to audio recorded telephone interactions (N = 105) between Source Handlers and Covert Human Intelligence Sources (CHIS) from England and Wales. The research explored the mean use of various question types per interaction and across all questions asked in the sample, as well as comparing the intelligence yield of appropriate and inappropriate questions. Source Handlers were found to utilise vastly more appropriate questions than inappropriate questions, though they rarely used open-ended questions. Across the total interactions, appropriate questions were (by far) associated with the gathering of most of the total intelligence yield. Implications for practice are discussed.


| INTRODUCTION
Law Enforcement Agencies (LEAs) gather intelligence with the intention of both understanding current and future criminal threats and informing subsequent decision-making concerning how to prevent criminal activity and pursue those who remain "at large" (Chappell, 2015; Home Office, 2018). To satisfy a LEA's intelligence requirements designed to tackle these threats, effective reporting processes are required. In the context of human source intelligence (HUMINT) collection, intelligence elicitation relies heavily upon the deployment of appropriate evidence-based interviewing techniques. Against this background, the present research focused on the question types utilised by Source Handlers in their interactions with Covert Human Intelligence Sources (CHIS) from England and Wales.
Source Handlers are officers whose primary operational responsibility is to elicit intelligence from human sources that addresses a LEA reporting requirement (e.g., a written direction highlighting the organisational need for information that can close current intelligence gaps, corroborate or disprove existing intelligence and highlight emerging threats and risks) (Stanier & Nunan, 2018). In England and Wales, Source Handlers operate within Dedicated Source Units. The core role of a Source Handler is the day-to-day management of CHIS on behalf of a public authority (Chappell, 2015). Whilst the formal title of sources authorised to collect and report on criminal activity is a CHIS, they are more commonly referred to as informants.
The management of CHIS in England and Wales is governed by legislation, the Regulation of Investigatory Powers Act 2000 (RIPA), which provides the legal definition of a CHIS as someone who: a. establishes or maintains a personal or other relationship with a person for the covert purpose of facilitating the doing of anything falling within paragraph (b) or (c); b. covertly uses such a relationship to obtain information or to provide access to any information to another person; or c. covertly discloses information obtained by the use of such a relationship, or as a consequence of the existence of such a relationship.

| Source Handler and CHIS interactions
Source Handlers interact with their CHIS on a regular basis, primarily to gather intelligence on criminal activity (Chappell, 2015). Once a CHIS has been legally authorised, regular contact commences, which is commonly undertaken via the telephone. Unlike physical meetings, which require detailed planning to address safety issues, telephone contacts can be quickly arranged. As a consequence, telephone interactions provide the CHIS with the ability to download their memories to their Handler shortly after experiencing a to-be-remembered event.
The immediacy of CHIS providing new intelligence to their Handlers may reduce memory decay over time and provide the Source Handler with "live" intelligence to be actioned (Billingsley, Nemitz, & Bean, 2001).
In essence, the CHIS should be treated as a vital witness to an incident, albeit not one that will be directly involved in the evidential chain. However, the value of the CHIS' intelligence collection activity, undertaken on behalf of the State, can only be truly optimised by the Source Handler's suitable application of elicitation techniques. As such, the use of appropriate questioning techniques may well determine whether the necessary intelligence has been collected in a timely, reliable, sufficiently detailed and "actionable" format (Grieve, 2004) so as to inform law enforcement decision-making and prioritisation.

Historically, question types have been dichotomised as either open or closed (Gee, Gregory, & Pipe, 1999; Myklebust & Bjørklund, 2006). By doing so, researchers have been able to contrast open and closed questions against the quantity and/or quality of information gained. Stern (1903/1904) distinguished between Bericht (open) and Verhör (closed) questions, noting that longer responses and free narratives were elicited from witnesses with open as opposed to closed questions. Stern's (1903/1904) initial categorisation remains consistent with recent research (Oxburgh, Myklebust, & Grant, 2010), as the psychological memory processes accessed by open and closed questions have not changed. Open questions, broadly speaking, tap into the free recall processes of the interviewee, whereas closed questions typically align with recognition memory processes (Gee et al., 1999). The last two decades of research have repeatedly shown that information gathered via free recall processes is more likely to be accurate than memories reported through recognition processes (Hershkowitz, 2001; Lamb et al., 2007). Powell and Snow (2007) distinguished between open-ended breadth and open-ended depth questions, the former question type prompting the interviewee to expand the list of broad activities (e.g., "What happened then?"), whereas the latter encourages a more elaborate response about a pre-disclosed detail (e.g., "Tell me more about the part where…"). What is common to both of these types is that neither dictates what information is required (Powell & Snow, 2007). However, when question types are categorised by their wording alone, discrepancies occur between researchers and across guidance documents. For example, the ABE interview document (Home Office, 2011) and Loftus (1982) define a question commencing with "wh" ("what?," "why?," "when?," "where?," "who?") or "how" (also known as 5WH questions) as a probing question, yet as demonstrated by Powell and Snow (2007), an open question may start with "what" depending on how it is used (Hymes, 1962).
The phrasing of a question should not be ignored, as an alternative wording may improve the quality of a question. However, classifying questions solely on the words used to formulate them can itself become problematic (Oxburgh et al., 2010). Hence, question types have been dichotomised as productive or unproductive (Griffiths & Milne, 2006), and appropriate or inappropriate (Phillips, Oxburgh, Gavin, & Myklebust, 2012), to take into account the question's function (e.g., information gathering versus accusatorial), timing within the interview (e.g., using a closed questioning strategy before exhausting open questions), and the context in which the question is posed (e.g., appropriate use of closed questions to establish the provenance of the elicited intelligence once open questions have been exhausted) (Griffiths & Milne, 2006). Fisher, Geiselman, and Raymond (1987) analysed 11 police witness interviews and reported that the interviewers' questions primarily consisted of closed yes/no questions (which can only elicit a yes or no response), were delivered in a staccato manner, and that only three open-ended questions were used per interview. Similarly, Baldwin (1993) found in his field study that interviewers conducted poor interviews with suspects, constantly interrupting, firing questions in quick succession, and not allowing the interviewee to provide a full account. Findings similar to Fisher et al. (1987) have since been reported, revealing that the majority of questions posed were closed yes/no questions and only 2% were open-ended (Clifford & George, 1996; Davies, Westcott, & Horan, 2000).

| Appropriate versus inappropriate questions in the field
Within the context of police call centres, Leeney and Mueller-Johnson (2012) analysed 40 telephone interactions between police call operators and witnesses. Their research revealed that only 2.46% of questions posed by the police call operators were open, despite the fact that the majority of questions (88.5%) were categorised as productive (i.e., appropriate). This is disappointing, as a laboratory study which examined police call centre telephone interactions showed that the use of an open-ended question, namely "tell me everything", increased the number of correct details at no cost to accuracy (Pescod, Wilcock, & Milne, 2013). Although the new interview protocol introduced by Pescod et al. (2013) increased the length of the telephone interaction, it is argued that the report-everything approach gathered a detailed and reliable account. The difficulty of conducting an interview should not, however, be understated. Professionals who carry out investigative interviews have previously discussed the complexity of the interviewing task (Griffiths, Milne, & Cherryman, 2011), highlighting the simultaneous processes of active listening and generating further relevant questions (Köhnken, 1995). Yet, an open-ended questioning strategy would free up the cognitive load associated with generating numerous relevant questions to allow for active listening instead (Griffiths et al., 2011), as well as positively impact the interviewee by encouraging a non-leading free recall retrieval that is more likely to be accurate than closed questioning (Gee et al., 1999; Hershkowitz, 2001; Lamb et al., 2007).
Despite the seminal research evidencing what is considered to be poor questioning, and the development of numerous interviewing guidance documents in response (e.g., the Cognitive Interview, PEACE, ABE), the quality of interviewing has still been reported as problematic (e.g., Clarke & Milne, 2016; Griffiths et al., 2011; Griffiths & Milne, 2006; Snook et al., 2012; Walsh & Milne, 2008).
Poor interviewing practice tends to incorporate the use of inappropriate questions, such as multiple and leading questions. Multiple questions address more than one topic or phrase two or more questions together (Powell & Snow, 2007), making it difficult for the interviewee to interpret which part of the question requires an answer (Snook et al., 2012). Further errors include the use of forced choice questions (e.g., "was the car red or blue?") or leading questions (Fisher, 1995; Gudjonsson, 1992; Wright & Alison, 2004). Leading questions represent a biased approach to the interview (Wright & Alison, 2004), as they provide information not previously disclosed by the interviewee. Additionally, they are likely to lead the interviewee into providing an answer influenced by the interviewer, which has been found to be less accurate than answers to open questions (Brown et al., 2013; Horowitz, 2009; Lamb et al., 2003; Roberts, Lamb, & Sternberg, 2004).
A common finding amongst research analysing the questions deployed by interviewers is that appropriate questions have been utilised less than inappropriate questions (Myklebust & Alison, 2000; Walsh & Bull, 2010, 2015; Walsh & Milne, 2008). Laboratory and field research has revealed that as an interviewer's input increases, the accuracy of the information gathered is likely to diminish, as information reported in response to follow-up questions has been found to be significantly less accurate than spontaneously reported information (Kontogianni, Hope, Taylor, Vrij, & Gabbert, 2020). This is further supported by information gained from free-recall prompts (i.e., open-ended questions) being more likely to be accurate than information elicited via focused prompts (i.e., closed questions) (Lamb et al., 2007).
While a free recall is reported to provide approximately one third to one half of the information extracted (Milne & Bull, 2003), it may become necessary to probe (i.e., ask additional questions, typically a 5WH-worded question) for further details. Probing may be needed to either (i) establish the points to prove for suspect interviews (evidential information specifically required to prove a criminal offence has taken place; Griffiths et al., 2011); (ii) gather a full account across all interviewing contexts; or (iii) elicit the provenance during an intelligence interview (i.e., CHIS interactions). If an interviewer were to end an interview too early after exhausting open questions, some key information may be missed, though probing too hard with an over-reliance on closed questions may lead to unreliable information (Ceci & Bruck, 1993; Snook et al., 2012). Thus, once open-ended questions have been exhausted, meaning that they have failed to retrieve further information critical to the investigation (Orbach & Pipe, 2011), probing questions may be considered appropriate, but only when utilised correctly with regard to wording, context and timing within the interview (Griffiths & Milne, 2006; Guadagno, Powell, & Wright, 2006; see Table 1).

| Sample

Interactions were excluded where […] (iii) the interaction did not concern the collection of intelligence, such as arranging a physical meeting between the Source Handler and CHIS; or (iv) the interaction was merely to arrange a call back (e.g., "I can't talk now, I'll call you back later"). A total of 105 telephone interactions across seven Source Handlers were put forward for analysis, ranging from 2.05 to 19.40 min (M = 7.03 min, SD = 3.55). The telephone interactions originated from a Dedicated Source Unit within one English Police Force, and were recorded in 2018 to ensure that the natural verbal behaviour (i.e., questioning) of the Source Handlers was captured.
The question types were coded according to Table 1 (adapted from Wright & Alison, 2004; Dodier & Denault, 2018; Griffiths & Milne, 2006; Oxburgh et al., 2010; Powell & Snow, 2007; Waterhouse et al., 2018). With regard to minimal encouragers, if a minimal encourager was followed by a question, only the question was coded, as it was that utterance which gathered the intelligence (e.g., "uh huh" [minimal encourager, not coded], "what colour was the car?" [coded as probing]). Probing questions typically explored the provenance of the elicited intelligence, utilising 5WH questions to probe free recall. Where a question could be categorised as more than one type, the most inappropriate question type was coded.

| Procedure and coding
For example, if a question could be coded as both multiple and leading, the question would be coded as leading. Intelligence yield was coded as the number of items of information elicited. Ambiguous words relating to quantities (e.g., "lots of drugs") were coded as one item.

| Interrater reliability
Due to the sensitive nature of the data, the first and second authors coded the audio recorded telephone interactions at the same secure policing site. They coded one telephone interaction together as a training exercise and to ensure the coding scheme was viable.
Subsequently, the second author (blind to the coding scheme until trained) independently coded a random sample of 13 of the Source Handler and CHIS interactions. The interrater reliability was calculated using Cohen's kappa (Cohen, 1960) and was found to be .98, suggesting a very strong level of agreement between the two coders (Landis & Koch, 1977).
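The agreement statistic reported above follows the standard formula κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each coder's marginal frequencies. A minimal sketch of the calculation is given below; the question-type labels are hypothetical, since the actual coded data are sensitive and not shared.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's (1960) kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's marginal counts.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for six utterances, coded by two raters.
a = ["probing", "closed", "probing", "leading", "closed", "probing"]
b = ["probing", "closed", "probing", "leading", "probing", "probing"]
print(round(cohens_kappa(a, b), 2))  # → 0.71
```

Note that kappa discounts chance agreement: the two illustrative coders agree on 5 of 6 items (83%), yet kappa is lower because some agreement would occur by guessing alone.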

| RESULTS
To examine the research hypotheses, descriptive statistics were utilised to explore the frequency of appropriate and inappropriate questions, both overall and per question type. Additionally, one-way ANOVAs were conducted to compare appropriate and inappropriate questions with regard to overall intelligence yield, which was also broken down by the five detail types (i.e., surrounding, object, person, action and temporal).
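As a sketch of this analysis strategy, a one-way ANOVA comparing yield across the two question categories can be run with SciPy's `f_oneway`; the per-interaction yield figures below are invented for illustration and are not the study's data.

```python
from scipy.stats import f_oneway

# Hypothetical per-interaction intelligence yields (information items)
# elicited by appropriate versus inappropriate questions.
appropriate = [92, 105, 88, 110, 97, 101]
inappropriate = [14, 22, 9, 18, 12, 16]

# One-way ANOVA: does mean yield differ between the two question categories?
f_stat, p_value = f_oneway(appropriate, inappropriate)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
```

With only two groups, this F-test is equivalent to an independent-samples t-test (F = t²); the same call extends to comparing yield across the five detail types by passing five groups.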
A total of 2085 questions were identified across the 105 audio recorded telephone interactions between Source Handlers and CHIS, with a percentage breakdown of the 12 question types (see Table 3). The mean number of questions per interaction was 19.86.

TABLE 1 Question type definitions

Appropriate

1. Open-ended breadth questions

This is a prompt that asks the CHIS to expand the list of broad activities (e.g., "what else happened at the [event]?") or to report the next act/activity that occurred (e.g., "what happened then/next?"). Open-ended breadth questions do not dictate what specific information is required but are used to elicit another broad activity that occurred, not necessarily in sequence.

2. Open-ended depth questions

This is a question that encourages the CHIS to provide more elaborate detail about a pre-disclosed detail or part of the event but does not dictate what specific information is required (e.g., "tell me more about the part where… [activity/detail already relayed by the CHIS]"; "what happened when… [activity/detail already relayed by the CHIS]?").

3. Minimal encouragers

These are prompts that do not interrupt the flow of recollection but merely indicate that the CHIS' account is being listened to and understood, and encourage open reporting (e.g., "uh huh"; repeating back the last few words disclosed by the CHIS).

4. Probing questions

Defined as more intrusive, requiring a more specific free recall regarding the provenance of a subject already mentioned by the CHIS, usually commencing with "who," "what," "when," "where," "why," "which" or "how" (e.g., "where did that happen?"; "what colour was the car?"). The CHIS will typically answer with no more than a few words.

5. Closed yes/no questions

Used at the conclusion of a topic, where open and probing questions have been exhausted for provenance on a subject already mentioned by the CHIS. Appropriateness is based on the context, especially when time is a constraint (e.g., "did you see the gun that you have described?").
Inappropriate

6. Closed yes/no questions

Used at the wrong point in the interaction, therefore becoming unproductive because they close down the range of responses (e.g., "do you know this man?"; also includes "Can/could you…" questions). Inappropriateness is based on the context.

7. Multiple questions

Constitute a number of sub-questions asked at once (e.g., "how did you get there, what did you do inside?"), or questions that ask about two concepts at once (e.g., "what did they look like?").

8. Forced choice questions

Offer only a limited number of possible responses (e.g., "did you kick or punch the other woman?"; "was it cocaine or heroin?").

9. Opinion or statement

Defined as posing an opinion or putting a statement to the CHIS as opposed to asking a question (e.g., "I think you touched the gun").

10. Qualitative feedback

Used to provide positive feedback on what the CHIS has said, which can be perceived as biased as it confirms a specific detail raised, inappropriately encouraging the CHIS to continue reporting in that direction (e.g., assigning a status to a person of interest, such as "main person" or "the organiser", which may create a selection bias in reporting).
11. Leading questions

Introduce information that the CHIS has not mentioned, imply a desired response or use suggestive techniques (e.g., "the car was blue, right?").

12. Interruptions

Questions or statements that interrupt the speech of the CHIS.

Note: Adapted from Phillips et al. (2012).

Open-ended questions were the least frequently used. Across the entire sample, the total intelligence yield was 9,162 information items, with appropriate questions eliciting the majority (87%) of the intelligence gathered.

| DISCUSSION
The present research sought to explore two hypotheses, and therefore analysed audio recorded telephone interactions with regard to the questions utilised by Source Handlers with CHIS. Firstly, in contrast to hypothesis one, Source Handlers utilised more appropriate questions (78%) than inappropriate questions (22%) across the sample.
Similar to Phillips et al. (2012), the present research did not confirm the hypothesis that more inappropriate than appropriate questions would be asked. This is particularly surprising, as the telephone interactions in the present research were informal communications (Guadagno et al., 2006), which is why hypothesis one was generated.
It was further interesting to reveal that hypothesis one was not supported with a sample of intelligence telephone interactions, especially as previous research has established that appropriate questions rarely occur in practice (Myklebust & Alison, 2000; Myklebust & Bjørklund, 2006; Oxburgh et al., 2010). It is promising that hypothesis one was not supported, considering a Source Handler's aim is to gather detailed and reliable intelligence from a CHIS. This is possibly due to the fact that the Source Handler and CHIS relationship is different to an investigator and suspect or witness interaction. Source Handlers and CHIS have an ongoing relationship, whereas investigators and suspects will typically meet for the first time within an interview room and experience fewer interactions. The use of more appropriate than inappropriate questions was also found by Leeney and Mueller-Johnson (2012) in their police call centre research. Perhaps interactions undertaken via a telephone differ greatly to formal face-to-face investigative interviews, impacting on cognitive load, rapport and interviewing ability.
The majority of questions asked by Source Handlers were identified as appropriate; however, less than 4% of all questions asked were open-ended (similar to Leeney & Mueller-Johnson, 2012). This is consistent with previous research which has reported the use of open-ended questions at 2% (Clifford & George, 1996; Davies et al., 2000; Leeney & Mueller-Johnson, 2012), 7% (Phillips et al., 2012) and 10% (Snook et al., 2012). That practitioners seldom use open-ended questions may be explained by a lack of adequate training (Smith, Powell, & Lum, 2009) and thus practice (Snook et al., 2012). If Source Handlers, and interviewers more broadly, are not convinced of the benefits of using open-ended questions, this assumption can reinforce a preference for using closed questions to gather information. The importance of advocating appropriate questions was demonstrated by the present research, as across the 105 interactions, appropriate questions elicited the majority (87%) of the total intelligence gathered. Although closed questions, on the face of it, gather information in a typically shorter time frame, the answer is more likely to be less accurate and shorter in length (Stern, 1903/1904). While the present research did not explore the accuracy of the intelligence gathered, due to the lack of ground truth which accompanies field data, research has demonstrated that inappropriate questions are more likely to gather unreliable information in comparison to appropriate questions (Lamb et al., 2007). However, this is not to say that all closed questions are inappropriate, because once open-ended questions, which encourage a free narrative, have been exhausted, appropriate closed questions are then suitable. It may be necessary to utilise probing (e.g., 5WH) questions to probe the unaccounted-for provenance of the intelligence provided, in order to gather verifiable information to establish the facts (Griffiths et al., 2011).
Thus, Source Handlers should be made aware that as their input increases, the accuracy of the gathered information is likely to diminish (Lamb et al., 2007). Secondly, in support of hypothesis two, the present research revealed that appropriate questions were associated with a significantly greater intelligence yield than inappropriate questions.
Appropriate questions have repeatedly been shown to generate more detailed and accurate responses in comparison to inappropriate questions (Lipton, 1977; Milne & Bull, 2003; Orbach & Lamb, 2000; Powell & Snow, 2007; Snook et al., 2012). This is because appropriate questions, particularly open-ended questions and minimal encouragers, provide the interviewee with the time to gather their thoughts and motivate the interviewee, who may feel encouraged that somebody wants to listen to what they have to say, consequently promoting an elaborate memory retrieval. Moreover, such questions support free recall, which has been shown to be superior to recognition processes with regard to detail and accuracy (Lamb et al., 2007).
In addition, it was found that appropriate questions were associated with a significantly greater intelligence yield across all five detail types, namely, surrounding, object, person, action and temporal details. This demonstrates the benefit of using appropriate over inappropriate questions regardless of the targeted intelligence detail type. Hence, a CHIS reporting on a particular event will be more likely to report detailed and reliable intelligence across the five detail types via appropriate questions (Lipton, 1977; Phillips et al., 2012). While the accuracy of the intelligence yielded could not be explored in the present research, the benefits (i.e., the reliability of the information elicited) of utilising appropriate questions have been evidenced by numerous previous studies (e.g., Dent & Stephenson, 1979; Hershkowitz, 2001; Lamb et al., 2007; Orbach & Lamb, 2000; see also Griffiths & Milne, 2006; Milne & Bull, 2003).
Across the sample, Source Handlers interrupted the CHIS, on average, once per telephone interaction, amounting to approximately 6% of all utterances. While this may seem small, interruptions were more frequent than the use of both types of open-ended questions. Interruptions of any kind are of concern, even those which intend to prevent the CHIS from digressing (Wright & Alison, 2004).
This is because interruptions break the flow of a free narrative, thus hindering the memory recall process, which may undermine elements of rapport as well as potentially cause shortened future responses in order to avoid anticipated interruptions (Fisher et al., 1987;Powell & Snow, 2007).
A further aspect of concern was the use of leading questions. Not only did the results of the present research support the common finding that leading questions elicit less information than open-ended questions, the reliability of the information gathered via leading questions is likely to be problematic due to their suggestion of "expected" answers (Oxburgh et al., 2010). Although leading questions comprised only 6% of all questions asked and were used on average once per interaction, this is still considered problematic (Snook et al., 2012). Source Handlers should aim for leading questions to be removed entirely, as the quality of the information recalled is highly dependent on the questions used to elicit it (Powell & Snow, 2007; Waterhouse et al., 2018). Although the negative effects of leading questions can be decreased by using cognitive methods before leading questions are asked (see Geiselman, Fisher, Cohen, Holland, & Surtes, 1986), laboratory and field research has revealed that leading questions result in information of questionable reliability (Brown et al., 2013; Horowitz, 2009; Lamb et al., 2003; Roberts et al., 2004; Snook et al., 2012; Sternberg et al., 1996, 1997).
However, even after comprehensive training in appropriate questioning procedures, it has been reported that interviewers still predominantly use closed questions (Aldridge & Cameron, 1999). It appears that such training enhances knowledge but has little long-lasting effect on interviewing behaviour (Warren et al., 1999). Rather, for training to have an impact, it should incorporate three elements: (i) continuous post-training supervision, feedback and guidance in the use of personal reflection; (ii) frequent refresher training sessions; and (iii) structure and planning towards interviewing (Griffiths & Walsh, 2018). Hence, for Source Handlers, and interviewers generally, training must not be a tick-box exercise, but rather a developed programme that adheres to these three elements in order to improve interviewing practice (Smith et al., 2009; Walsh, King, & Griffiths, 2017).

| Limitations
It is important to note that the results of the present research are exploratory rather than definitive (Wright & Alison, 2004). First, due to the sensitive nature of, and reliance on police forces to provide access to, such data, the sample originates from one police force and therefore may not reflect the general questioning practices of Source Handlers across England and Wales, although the present sample was trained and accredited via the same national course as those employed elsewhere. Second, a purposive sample was necessary to analyse interactions that met the inclusion and exclusion criteria.
Although this resulted in a sample that is not random, such sampling methods (i.e., convenience or purposive) are common amongst applied research due to the constraints of the research aims and participating organisations (Snook et al., 2012). Third, as the interrater was the second author, they may arguably not be entirely independent, since security vetting was required to access the dataset. The potential biases of the second author were minimised by having them independently code a random sample of telephone interactions, and while the second author may have held preconceived notions of what the research hypotheses were, they were not privy to the actual hypotheses until the data were analysed. Fourth, as it was not possible to establish the ground truth of the intelligence provided by the CHIS, the results concerning intelligence yield (i.e., quantity) were more inferential, rather than assessing the reliability (i.e., quality) of the intelligence coded. As such, the results were discussed in light of the question type used to elicit the intelligence, on the premise that information elicited from appropriate question types would generate a greater yield and be more reliable than information gathered from inappropriate question types (Hershkowitz, 2001; Lamb et al., 2007; Myklebust & Bjørklund, 2006). Finally, as the present research analysed field data, the controllable factors which a laboratory study would enable (e.g., all CHIS witnessing the same event) were not present.
However, it may be argued that laboratory studies lack ecological validity, as they do not incorporate the stresses, consequences or realism of interviewee engagement that real-life interactions hold (Oxburgh, Williamson, & Ost, 2006).

| Conclusion
The questioning of CHIS is a key skill required by Source Handlers to gather HUMINT of both sufficient quantity and quality. By gaining unprecedented access to, and analysing, such interactions, the present research encourages the development of an evidence-based approach to Source Handler intelligence practices. The present research has developed a methodology to analyse the questioning used by intelligence practitioners (i.e., field data), an area that is currently under-researched. It is promising that Source Handlers utilised vastly more appropriate questions than inappropriate questions, and that appropriate questions were (by far) associated with the gathering of most of the total intelligence yield.
However, there is room for improvement with regard to the use of open-ended questions. As such, a bolt-on training course, incorporated into the existing Source Handler training concerning intelligence elicitation, should include guidance and training exercises on open-ended questioning. In practice, similar to investigative interviews, Source Handlers should plan and prepare for their interactions with CHIS, to ensure they know what questions they need to ask and how to word them appropriately. The present research has added to the evidence base regarding the benefits of asking appropriate questions for information gathering. Ultimately, information is only as reliable, timely and detailed as the questions asked, and it is such actionable intelligence that is vital to the LEA decision-making which tackles criminal activity.

ACKNOWLEDGEMENTS
The authors wish to thank the NPCC Intelligence Practice Research

CONFLICT OF INTEREST
The authors declare no potential conflict of interest.

DATA AVAILABILITY STATEMENT
Research data are not shared due to the sensitive nature of the telephone interactions.