Child mental health literacy training programmes for professionals in contact with children: A systematic review

There has been a surge in child mental health literacy training programmes for non‐mental health professionals. No previous review has examined the effectiveness of child mental health literacy training across all professionals in contact with children.

A separate survey found that parents did not always see the value in contacting specialist mental health services (Jorm, Wright, & Morgan, 2007), preferring informal or more general sources of help. Consequently, there is a need for professionals who have regular contact with children to be able to recognize mental health issues and know how to facilitate access to further care.
Provision of mental health treatment can be inconsistent, with support often limited to young people whose needs reach a certain severity threshold, and even then, people can be on long waiting lists before accessing treatment (Moore & Gammie, 2018). Due to the current treatment gap and the need for referral efficiency, there has been a surge of 'mental health literacy' training programmes aimed at increasing the knowledge and skills of professionals in contact with children in order to facilitate early mental health recognition, prevention, and intervention (Kutcher, Wei, & Coniglio, 2016). 'Mental health literacy' is defined as 'knowledge and beliefs about mental disorders which aid their recognition, management or prevention' (Jorm et al., 1997, p. 182), with its measurement commonly divided into the constructs of knowledge (of mental health problems and positive mental health), stigmatized attitudes, confidence in helping/intention to help, and actual helping behaviour (Wei, McGrath, Hayden, & Kutcher, 2015).
Previous systematic reviews on mental health literacy training have focused on specific professional groups such as school teachers (Anderson et al., 2018; Yamaguchi et al., 2019), police and public sector employees (Booth et al., 2017), health care workers (Liu et al., 2016), and sports coaches and athletes (Breslin, Shannon, Haughey, Donnelly, & Leavey, 2017). Two have evaluated a specific training programme called Mental Health First Aid (MHFA; Morgan, Ross, & Reavley, 2018; Hadlaczky, Hökby, Mkrtchian, Carli, & Wasserman, 2014), and others have focused on improving young people's mental health literacy rather than professionals' (eg, Wei, Hayden, Kutcher, Zygmunt, & McGrath, 2013) or on raising awareness of specific mental health conditions (eg, Dickens, Hallett, & Lamont, 2016). Overall, these reviews found that mental health literacy training was effective in improving knowledge and attitudes; however, there was little or insufficient evidence that training improved professionals' helping behaviour.
There have been no systematic reviews of the effectiveness of youth mental health literacy training programmes across all professionals working with children. Understanding the effectiveness across all professionals is important given that there are typically many different professionals involved in young people's care, including school teachers, public sector workers, and healthcare workers. Such a multidisciplinary approach is in the best interests of the young person, but organizations are likely to want to provide a single mental health literacy programme for all those involved. The current review aims to answer (a) to what extent child mental health literacy training programmes improve professionals' knowledge and/or stigma-related attitudes towards mental health and (b) to what extent training programmes facilitate young people accessing the mental health support that they might need.

| METHOD
The systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
The strategy was devised and agreed in consultation with a specialist librarian. Key search terms included "mental health", "training", "literacy", and "child" (a full search strategy can be provided upon request). One reviewer (JOC) searched the Cochrane, EMBASE, Medline, and PsycINFO databases up until February 2019. Reference lists of the included papers and relevant systematic reviews were reviewed to identify relevant articles.

| Inclusion and exclusion criteria
The review included studies that delivered training to professionals who have regular contact with young people (aged 0-19 years) in the context of their role. The age range was chosen on the basis of service provision, since some services accept people who are 18 at assessment and in full-time education, and therefore, professionals may have contact with them when they are 19. Only articles that explicitly stated that the content included child or adolescent material were included, and only those that quantitatively measured at least one component of mental health literacy: (a) knowledge of mental health conditions; (b) attitudes towards mental illnesses; (c) confidence in helping young people with mental health problems; (d) intention to help; and (e) actual helping behaviour. There were no limitations on study design.
Studies that trained non-professionals (eg, parents, young people) in mental health literacy or solely used adult training material were excluded. As the concept of mental health literacy was established in 1997, only studies published after 1997 were included in the review, although a search through the titles of removed papers did not uncover relevant discarded studies. Articles that did not conduct baseline questionnaires prior to training were also excluded.

| Study selection
Duplicates were removed, titles and abstracts were screened for relevance, and full texts were requested for the remaining papers by one reviewer (JOC). Studies identified via relevant reference lists or systematic reviews were included. An independent reviewer viewed full texts of the remaining studies against the eligibility criteria and disagreements were resolved via discussion.

| Quality assessment
The overall quality of the randomized controlled trials (RCTs) was established using the Cochrane revised Risk of Bias tool (RoB 2; Higgins et al., 2016), and the Integrated Quality Criteria for the Review of Multiple Study Designs (ICROMS; Zingg et al., 2016) tool was used for non-RCTs. The RoB 2 assesses the risk of bias at three levels (low risk, some concerns, high risk) across five domains (six for cluster trials): the randomization process, identification and recruitment of participants (cluster trials only), deviations from the intended interventions, missing outcome data, measurement of the outcome, and selection of the reported result. An overall risk-of-bias score can also be calculated.
The ICROMS tool allows for different methodologies to be assessed under 14 to 15 items across seven domains: clear aims and justification, managing bias in sampling or between groups, outcome measurements and blinding, follow-up, analytical rigour, reporting/ ethical considerations, and managing bias in other study aspects.
Items are rated as "yes criteria", "no criteria", or "unclear criteria".
There is no overall quality judgement. Inter-rater reliability was assessed via an independent reviewer and discrepancies were resolved through discussion.

| Study screening
Initially, 678 studies were identified via the database search and five more were sourced from reference lists (see Figure 1). After removing 126 duplicates, 557 unique citations were screened for inclusion.
Titles and abstracts were assessed for relevance resulting in 93 potential citations. After applying the inclusion criteria to full texts, a further 72 papers were excluded resulting in 21 relevant studies.
Data from the 21 studies were extracted and synthesized into participant characteristics (Table 1), study characteristics (Table 2), and study outcomes (Table 3). A meta-analysis was not conducted because of the limited number of studies that met criteria for low risk of bias and the high methodological heterogeneity between studies. A narrative synthesis of the data is, therefore, presented.

| Participant characteristics
The total number of participants of the included studies was 3243, ranging from 16 to 1024 (mean 154.4). Training programmes were primarily aimed at primary (n = 3) or secondary school teachers (n = 8).
Others targeted student teachers (n = 1), youth leaders (n = 1), student social workers (n = 1), and mental health agency staff (n = 1). Of the six controlled trials, two used waitlist controls and one had an additional active comparison group. Sixteen studies collected only pre-post data, ranging from immediately after to 12 months post-training. The remaining five also collected follow-up data, ranging from 6 weeks to 9 months post-training, with three of the five having a follow-up period of 3 months or more. Sixty-two percent (n = 13) achieved a good response rate post-training (ie, >80%). Three explicitly reported being underpowered to perform the analyses; however, the majority made no reference to power calculations. Eight explicitly reported that their study needed to be replicated with a larger sample size, five of which were pilot studies.

| Training delivery
The majority of studies delivered the training content face-to-face (n = 18). Two were delivered online, and one study delivered the training simultaneously face-to-face and via video conferencing, finding no difference between the two delivery methods. Six of the face-to-face programmes were approximately 1 day (7-8 hours), six were 2-4 hours, and seven were 2-3 days. Of the two online studies, participants had access to the training content for 60 days in one and were required to complete the training in one three-hour block for three consecutive weeks in the other.

| Training content
Six studies based their programme on disorder-specific content: three on depression, and one each on ADHD, psychosis, and eating disorders.

| Study outcomes
Effect sizes were calculated by dividing the mean difference of the paired groups by the pooled SD. For uncontrolled trials, the effect size is the mean pre-post difference within groups. For controlled trials, the effect size is the between-group difference post-training, although pre-post differences for the intervention group were also computed where available. All effect sizes were reported when data were available. Cohen's (1988) suggestions that 0.2 is a small effect, 0.5 is a moderate effect, and 0.8 is a large effect were used.
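The effect-size calculation described above can be sketched as follows. This is a minimal illustration of a standardized mean difference (Cohen's d) using a pooled SD; the function name and the example scores are hypothetical, not taken from any included study.

```python
import math

def cohens_d(mean_pre, mean_post, sd_pre, sd_post):
    """Standardized mean difference: the mean difference divided by the
    pooled SD of the two measurement points (simple two-group pooling)."""
    pooled_sd = math.sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    return (mean_post - mean_pre) / pooled_sd

# Hypothetical knowledge scores before and after training:
d = cohens_d(mean_pre=20.0, mean_post=26.0, sd_pre=8.0, sd_post=6.0)
print(round(d, 2))  # 0.85 -> a "large" effect by Cohen's (1988) benchmarks
```

For uncontrolled pre-post designs this within-group d tends to overestimate effects relative to a between-group comparison, which is one reason the review reports the two separately.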

| Attitude towards mental health
Fourteen studies measured professionals' general attitudes towards mental health. The number of items ranged from 3 to 40, and content varied across studies, from attitudes towards people with mental health concerns and attitudes towards treatment to stigmatized perceptions of specific mental health conditions or of mental health conditions more broadly. Three used validated measures, although one had poor reliability. Nine studies reported improved overall attitudes towards mental health, four had mixed results across different items, and one did not report results due to unacceptable reliability (Rose et al., 2017). One of the studies (Pereira et al., 2015) reported that their waitlist control group had a lower rating compared to one of the training groups post-training.

| Confidence and intentions
Three of five studies showed an increase in confidence to support young people with a mental health concern.

| Subsequent support young people received
Four of the 21 studies investigated the subsequent impact of the training on helping behaviour, and the results were mixed. Two studies looked at referral data to mental health services following training. In one, teachers reported recognizing mental health concerns in over 200 students and advising them to seek local mental health support (Kutcher, Wei, Gilberds, et al., 2016); however, it was not recorded whether young people proceeded to access support. In the other, there was an increase in the number of referrals made (from two to eight) and accepted (from zero to four) to an early intervention service following the training. It was noted, however, that these referrals were not made by participants who attended the training (Cheng et al., 2013).
Two RCTs used self-reported scales to assess whether the training improved teachers' actual helping behaviour. One found that there was no difference in help received following the YMHFA training (Jorm et al., 2010), and the other found that teachers reported providing less help to students with mental health problems within the academic year following training (Kidger et al., 2016).

The variability amongst the studies in terms of both quality and specific outcomes was striking. One study had attrition rates reaching as high as 56% (Pereira et al., 2015), and two studies had very high baseline knowledge rates before the intervention, thus precluding significant changes (Cheng et al., 2013; Vieira et al., 2014). In one study there was a worsening of stigmatized attitudes in the control group post-training (Jorm et al., 2010), but in another there were improvements in controls' attitudes compared to the intervention group (Pereira et al., 2015). The highest change was observed in a one-day face-to-face session.
Overall, generic and curriculum-based training had more successful outcomes than disorder-specific training, possibly because the majority were longer or conducted face-to-face, where professionals had more opportunities for discussion and to ask questions. Taken together, the review indicates that a standalone session (or curriculum programme for teachers) delivered face-to-face over one or more days may be the most appropriate approach when professionals require training across a broad range of mental health conditions.
However, it was also notable that the review captured data from 10 different countries and across six continents, and cultural sensitivities need to be considered. The paucity of male and non-Caucasian professionals receiving child mental health literacy training suggests that future studies should attempt to target professionals outside of the female and Caucasian demographics so that children of all genders and ethnicities have role models educating them and advocating for their mental wellbeing.
The majority of studies completed training programmes with teaching staff. Healthcare workers, club leaders, and social workers were also recipients of the training. This may reflect the main professional groups that young people come into contact with; however, it is in stark contrast to adult mental health literacy training programmes, which have delivered training to a diverse range of professionals, from the police (eg, Booth et al., 2017), government and public sector workers (eg, Kitchener & Jorm, 2004; Svensson & Hansson, 2014), nurses (eg, Bingham & O'Brien, 2018; Burns et al., 2017), and medics (eg, Davies, Beever, & Glazebrook, 2018) to pharmacy students (eg, El-Den, O'Reilly, Chen, & Moles, 2016).

| CONCLUSION
Professionals' knowledge and attitudes towards child mental health were significantly improved following the training courses included in this review. Changes were observed in both disorder-specific and global mental health literacy training, ranging from 2 hours to 3 days. Changes in mental health knowledge were observed to be greater than changes in attitudes; however, longer follow-ups are needed. Future studies also need to measure both confidence in helping and actual helping behaviour using objective measures. Currently, there is not enough evidence to suggest that these changes translate to young people accessing appropriate support, as studies rarely sought to investigate this. The differences between face-to-face and digital training programmes also need to be investigated, as digital programmes may be a more time-efficient way to reach a larger proportion of professionals in contact with children. Overall, higher-quality research using a blinded RCT design, standardized measures of the 'mental health literacy' construct, longer follow-up periods, and a measure of actual helping behaviour is required to appropriately determine the value of such training programmes and to understand whether the positive findings are generalizable.