The Relationship between Trainers’ Media‐Didactical Competence and Media‐Didactical Self‐Efficacy, Attitudes and Use of Digital Media in Training

The ongoing digitalization in the training sector produces new demands on the media-didactical competence of trainers. We conducted an online survey of 279 trainers in Germany to investigate the relationships among media-didactical competence, media-didactical self-efficacy, attitudes toward the use of digital media and the actual use of digital media in training. Furthermore, we compared trainers who attended a course on digital media with trainers who did not attend such a course. Not all of the theoretically expected correlations between the variables were supported by the data. The analysis of the group differences showed that the trainers who attended a course on digital media had higher media-didactical competence and media-didactical self-efficacy scores and used digital media more often in training. There was no significant difference in negative attitudes. The implications for the promotion of the media-didactical competence of trainers are discussed.


Introduction
Technology-based learning plays a significant role in training and development, with 69.3 per cent of training hours being delivered via blended learning techniques as reported in the Training Industry Report by the Association for Talent Development (Freifeld, 2018). The ongoing digitalization in the field of adult education and training provides new possibilities for creating learning opportunities and designing learning environments (Gegenfurtner et al., 2020). However, these new possibilities are associated with new demands on trainers' proficiency regarding their media-didactical competence. Media-didactical competence can be defined as the competence needed by trainers to successfully integrate (digital) media into their training with the goal of creating effective learning settings. Examples of digital media used in training include devices and hardware, such as computers, tablets and smartphones, and software applications, such as mobile apps, online resources and social networks.
Media-didactical competence has been studied as a facet of the professional competence of schoolteachers, especially with regard to the model of technological pedagogical content knowledge (TPACK) by Mishra and Koehler (2006). However, to date, the media-didactical competence of adult educators and trainers has not been systematically investigated. The field of adult education and training in Germany is very heterogeneous with respect to organizational structures, target groups and the qualifications and professional background of trainers. Therefore, we assume that evidence regarding the media-didactical competence of teachers cannot be transferred one-to-one from the school system to adult education and training. Studies concerning TPACK and information and computer technology (ICT) skills have shown that self-efficacy, attitudes and the use of technology and digital media are closely related to skills and knowledge outcomes (e.g. Abbitt, 2011; Mishne, 2012; Yi & Hwang, 2003). However, to date, these relationships have not been studied with regard to the media-didactical competence of trainers. To gain a better understanding of the important factors facilitating media-didactical competence, the goal of our study is to investigate the relationships among trainers' media-didactical competence, media-didactical self-efficacy, attitudes toward the use of digital media in training and actual use of digital media in training. Regarding the possible promotion of media-didactical competence, we also analyze whether trainers who have attended a course on digital media differ in these variables from trainers who have not.

Theoretical framework and hypotheses
The professionalization of adult educators has been recognized as an increasingly important theme in adult education (e.g. Lattke & Jütte, 2014). In recent years, there have been several initiatives throughout Europe to improve the professionalization of adult educators and trainers and introduce qualification standards (Strauch et al., 2010). However, media-related competencies have played only a minor role in these concepts (Rohs et al., 2019).
Competencies related to media and technology have been addressed in several competency models and frameworks of teachers and, to a lesser extent, adult educators. For example, Blömeke (2000) developed a model of the media-pedagogical competence of schoolteachers (Tiede et al., 2015). Another framework within the teaching context proposed by Mishra and Koehler (2006) focuses on the TPACK needed for integrating technology into educational settings. The TPACK model is based on the work conducted by Shulman (1986) concerning different facets of the knowledge needed by teachers. Rohs et al. (2019) developed a model of the media-pedagogical competence of adult educators based on the works conducted by Blömeke, the TPACK model and previous models of adult educators that referenced media-related facets (e.g. Bernhardsson & Lattke, 2011; Buiskool et al., 2010). The model consists of the following four core facets (Rohs et al., 2019): (1) media-related field competence, which includes knowledge regarding the media use of participants and their media competence to adjust the concepts of media-supported teaching accordingly; (2) media-related attitudes and self-regulation, which are based on the general ideas of professionals regarding digital media and its potential in adult education; this facet also includes the readiness to apply digital media to organizing and preparing courses; (3) subject-specific media-related competence, which considers that the possibilities of using digital media to create learning environments are not independent from the contents; and (4) media-didactical competence, which includes the pedagogical, didactical and psychological knowledge needed for the design of learning settings. Moreover, media-didactical competence comprises knowledge of learning technologies and the skill, willingness and motivation to integrate technology into trainings (see Weinert's concept of competence; Weinert, 2001).
Therefore, media-didactical competence is the core of successfully integrating (digital) media into trainings to create effective learning settings.
In their report on improving policy and provision for adult learning in Europe (2015a) and their report on adult learners in digital learning environments (2015b), the European Commission stated not only that digital resources need to be increasingly integrated into adult education but also that there is a need to foster the skills of adult educators to increase their ability to use ICT effectively in training.
To improve media-didactical competence, e.g. through training on the integration of digital media into educational settings, we need to understand its relationship with important variables previously shown to be significant for related constructs. When reviewing the general literature concerning TPACK- and ICT-related studies, the following three areas repeatedly seemed to be closely linked to knowledge and skills in this field: ICT- or TPACK-related self-efficacy, attitudes toward technology or integrating technology into educational settings, and the use of technology. Therefore, we sought to primarily investigate their relationship with media-didactical competence and their relationships with each other (Figure 1).
Corresponding to the European Commission's call for improving trainers' media-didactical competence and the prospective potential of courses on digital media to reach this objective, we sought to determine whether or not trainers who attended a course on digital media differed in their media-didactical competence, media-didactical self-efficacy, attitudes and use of technology from trainers who did not attend such a course.
In the following section, we summarize the empirical and theoretical findings underlying each hypothesis. In our literature review, we did not find any studies concerning the media-didactical competence of trainers. However, some studies concerning schoolteachers and a broad body of research concerning ICT in various educational settings were included in our review.
Analogous to the construct of media-didactical competence, media-didactical self-efficacy can be defined as a person's belief regarding their ability to successfully implement (digital) media in their training with the goal of creating effective learning settings. Self-efficacy beliefs contribute to a person's attainment and skill (Bandura, 1995; Zimmerman, 1995). Such beliefs affect learning and performance by influencing people's choices of goals, the effort they invest in these tasks and the persistence displayed when facing obstacles or failures (Bandura, 1997).
To the best of our knowledge, no studies have investigated the relationship between the media-didactical self-efficacy and media-didactical competence of trainers, except for studies investigating TPACK- and ICT-related constructs. Abbitt (2011) investigated the relationship between pre-service teachers' self-efficacy beliefs about technology integration and the various facets of the TPACK model and found a positive relationship. Yi and Hwang (2003) conducted a study investigating students' use of a web-based comprehensive class management system. These authors found a positive link between application-specific self-efficacy and perceived ease of use. Studies focusing on ICT in general have described a positive relationship between the facets of computer self-efficacy and self-reported computer competence (Shih, 2006) and between self-efficacy in basic information and computer skills and computer and information literacy (Rohatgi et al., 2016). These findings are also supported by a meta-analysis of computer self-efficacy conducted by Karsten et al. (2012), who found an overall positive relationship with computer skills. Therefore, we expect a positive relationship between media-didactical competence and media-didactical self-efficacy among trainers.
H1. Media-didactical competence is positively related to media-didactical self-efficacy

Media-didactical competence and negative attitudes toward using digital media in training
Although many studies have investigated negative attitudes toward technology or the integration of technology in learning settings, to the best of our knowledge, no studies have investigated media-didactical competence or related constructs and their relationship with negative attitudes. However, in general, we assume that a positive attitude is related to competence because attitude and self-efficacy are closely connected (see H5), and self-efficacy, in turn, is related to competence (see H1). In the context of research involving mathematics students in school, studies have shown that there is a positive relationship between a positive attitude toward mathematics and mathematical achievement (Nicolaidou & Philippou, 2003; Wang, 2013). Therefore, we expect a negative relationship between media-didactical competence and a negative attitude toward using digital media in training.
H2. Media-didactical competence is negatively related to a negative attitude towards using digital media in training

Media-didactical competence and the use of digital media in training
No studies have focused on media-didactical competence and the use of digital media in adult training. Mishne (2012) found a positive relationship between the TPACK of pre-service teachers and the self-reported use of technology in the classroom and the use of technology to support learning. Yi and Hwang (2003) reported a link between the ease of use of a web-based comprehensive class management system and the actual use of the system as measured by the frequency of access. Hence, we hypothesize a positive relationship between media-didactical competence and the use of digital media in training.
H3. Media-didactical competence is positively related to the use of digital media in training

Media-didactical self-efficacy and the use of digital media in training
To date, no studies have investigated media-didactical self-efficacy and the use of digital media in training. Ball (2008) found a link between university instructors' computer self-efficacy and their intentions to use emerging educational technology in the classroom. Gialamas and Nikolopoulou (2010) found similar results regarding teachers' computer self-efficacy and their intentions to integrate technology in their instruction. However, the intention to use technology does not necessarily reflect actual use. In the field of research concerning ICT, Yi and Hwang (2003) found a correlation between application-specific self-efficacy and the frequency of use of a web-based comprehensive class management system. In their meta-analysis, Karsten et al. (2012) found a link between computer self-efficacy and the use of technology. We assume, therefore, that there will be a positive relationship between media-didactical self-efficacy and the use of digital media in training.

H4. Media-didactical self-efficacy is positively related to the use of digital media in training

Media-didactical self-efficacy and negative attitudes toward using digital media in training
Although no studies have been performed in the training context, a broad body of evidence describes the relationship between high computer self-efficacy and positive attitudes toward technology in general and a positive attitude toward the use of technology (e.g. Conrad & Munro, 2008; Karsten et al., 2012; Otte et al., 2014; Torkzadeh & Van Dyke, 2002). Within the field of teacher education, some studies have investigated TPACK self-efficacy beliefs and found a positive relationship with positive general attitudes toward ICT and positive attitudes toward the educational use of ICT (Scherer et al., 2018; Yerdelen-Damar et al., 2017). Considering these results, we expect a negative correlation between media-didactical self-efficacy and a negative attitude toward using digital media in training.

H5. Media-didactical self-efficacy is negatively related to negative attitudes towards using digital media in training

Negative attitudes toward using digital media in training and the use of digital media in training
Although no studies have explored negative attitudes toward using digital media in training and the actual use of digital media in training, the theory of planned behavior proposed by Ajzen (1991) provides a strong foundation for presuming that a connection exists between attitude and intention, which can then lead to behavior. Hence, we assume that trainers who have negative attitudes toward using digital media in training also use less digital media in their trainings.
H6. Negative attitude towards using digital media in training is negatively related to the use of digital media in training

Group differences in attending a course on digital media
Studies investigating interventions targeting TPACK or ICT skills have shown that these interventions can influence self-efficacy (Abbitt, 2011; Kiili et al., 2016; Torkzadeh & van Dyke, 2002), attitudes (Torkzadeh & van Dyke, 2002; Yerdelen-Damar et al., 2017), the use of technology (Graham et al., 2009; Keller et al., 2005) and TPACK (Abbitt, 2011; Cengiz, 2015). Regarding the potential to foster media-didactical competence and related constructs through trainings on digital media, we sought to investigate whether or not there are differences between trainers who attended a course on digital media and those who did not attend such a course. As courses on digital media generally pursue the goal of an overall improvement in competence, self-efficacy, attitudes and use of technology, we can formulate the following hypotheses.
H7. Compared to trainers who did not attend a course on digital media, trainers who attend a course on digital media will show (H7a) higher media-didactical competence, (H7b) higher media-didactical self-efficacy, (H7c) lower scores of negative attitudes towards the use of digital media in training, and (H7d) higher scores of the use of digital media in training

Study design and sample
The data were collected in Germany via an online survey. Trainers from all types of training sectors and training topics were contacted through the researchers' professional network, social media and educational institutions that forwarded the link to our survey to their trainer networks. The participants had the opportunity to participate in a raffle for vouchers and were informed of the study's results at the end of the survey. The survey was started by 320 participants; of this group, 41 cases were eliminated because these participants aborted the survey at a very early stage, within the section on socio-demographic data. Of the remaining 279 trainers, 51.3 per cent (n = 143) were female. The mean age was 51.1 years (SD = 11.0, range 20-81). The educational level was high, with 70.2 per cent (n = 196) of the trainers having a university degree.
Most trainers (95.3 per cent, n = 266) conducted their training for companies or in a work-related context. On average, the participants had been working as trainers for 18.6 years (SD = 10.0, range 1-45). Most participants (86.4 per cent, n = 241) worked as trainers as their main occupation. Regarding the types of training, Laker and Powell (2011) distinguish between soft-skill and hard-skill training. For example, soft-skill training targets intrapersonal or interpersonal skills or promotes a change in behavior, whereas hard-skill training targets technical knowledge and skills. Nearly half of the trainers (47.0 per cent, n = 131) indicated that they conduct mainly soft-skill training, whereas 21.9 per cent (n = 61) of the trainers indicated that they conduct predominantly hard-skill training; 31.2 per cent (n = 87) of the trainers stated that they conduct both types. Regarding the population of adult educators and trainers in Germany, the sample represents a certain type of trainer who is experienced and works as a full-time professional primarily for companies.
Regarding the training themes, where multiple answers were possible, most participants reported conducting training within the field of pedagogy and social competence (60.4 per cent, n = 168) and the field of economy, work and law (51.1 per cent, n = 142), followed by the fields of nature, technology and information technology (30.2 per cent, n = 84); health and sports (8.6 per cent, n = 24); and languages, culture and politics (8.6 per cent, n = 24). Approximately two-thirds of the participants (68.8 per cent, n = 192) attended at least one or more courses on digital media within the past 5 years.

Instruments
The participants provided socio-demographic and occupational data. Regarding their training, the participants indicated whether or not they predominantly conduct hard-skill or soft-skill training using a 7-point Likert scale. This variable was converted to a new variable with three categories for descriptive purposes. The trainers could specify their field of training. Standardized answers were provided based on the training themes of the German Adult Education Survey (BMBF, 2014). Furthermore, the participants were asked whether or not they had participated in a course on digital media within the past 5 years.
After the socio-demographic questions, digital media was defined as including all information and communication technology that could be used to retrieve, collect, process, present or transfer information, such as software applications (e.g. presentation software, apps and games), online resources (e.g. video portals, wikis and e-learning platforms) or digital devices (e.g. smartphones and smartboards). The participants were reminded to consider the entirety of digital media when answering the questions regarding this topic.
Media-didactical competence was measured by eight competence test items (α = 0.69) developed by the MEKWEP project based on a model of media-pedagogical competence, which has been previously validated (Rohs et al., 2019). The participants were given descriptions of training settings or problem statements regarding the didactical use of digital media in training. Then, the participants had to choose the correct solution to the problem statement in a multiple-choice format. One test item included an additional open text field in which the participants were asked to explain their choice. The answers were coded independently by two raters using coding guidelines. The coders reached an agreement in 88.7 per cent of the cases. Cohen's kappa, which considers the number of coding categories, was measured at 0.72. Divergent answers were discussed until an agreement was reached. A correctly answered test item received a score of 1 point. Partially correct items were scored proportionally. For example, if a participant answered two of four multiple-choice options correctly, they received a score of 0.50. The scores of all test items were added to a sum score within the possible range of 0-8 points.
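The partial-credit scoring and inter-rater agreement described above can be sketched as follows. This is a minimal illustration only; the item structure, number of options and coding categories are hypothetical and not taken from the MEKWEP instrument.

```python
def score_item(responses, key):
    """Partial-credit score for one multiple-choice item: the fraction
    of answer options the participant marked correctly (e.g. two of
    four options correct yields 0.50)."""
    assert len(responses) == len(key)
    return sum(r == k for r, k in zip(responses, key)) / len(key)

def competence_score(answered_items):
    """Sum score over all test items; with eight items the possible
    range is 0-8 points."""
    return sum(score_item(responses, key) for responses, key in answered_items)

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters' categorical codes: observed
    agreement corrected for the agreement expected by chance, given
    each rater's marginal category frequencies."""
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    categories = set(codes_a) | set(codes_b)
    p_exp = sum((codes_a.count(c) / n) * (codes_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

Under such a scheme, a participant marking three of four options correctly on one item would contribute 0.75 points to the sum score.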
Media-didactical self-efficacy was assessed by five items (α = 0.92) rated on a 6-point Likert scale from I do not agree at all to I fully agree. The items were developed based on the self-efficacy theory proposed by Bandura (1995, 2006) and the concept of media-didactical competence based on the model of media-pedagogical competence proposed by Rohs et al. (2019). Sample items include 'I am able to design digital learning environments in a way that helps the participants learn the content' and 'I am able to choose the appropriate digital media for different learning objectives'.
Negative attitudes toward using digital media in training were measured by four items (α = 0.90) rated on a 6-point Likert scale ranging from does not apply at all to totally applies. Sample items include 'Digital media is superfluous for my trainings' and 'Digital media does not provide any additional value for my trainings'.
The use of digital media in training was assessed by asking the trainers how often they use six types of digital devices (personal computer/laptop, projector, smartphone, smartboard, tablet and e-book reader) during the preparation and implementation of their trainings using a 6-point Likert scale ranging from never to always.

Analysis
Our correlation analysis included four variables: two measured on a manifest level and two measured on a latent level. The use of digital media in training and media-didactical competence were included as manifest variables, and media-didactical self-efficacy and negative attitudes toward using digital media in training were included as latent constructs. Because we conducted a cross-sectional survey, we cannot assume the directions of causality between the variables. Additionally, regarding the literature, sufficient empirical evidence implying the directions of causality is lacking. For all hypotheses regarding the relationships between the variables, both directions could be assumed. Therefore, we only included correlations in our model. Before testing the model, the latent variables were subject to a confirmatory factor analysis (CFA) to test the measurement models. The CFA and correlation analysis were executed with Mplus 7 using the maximum likelihood estimation with robust standard errors (MLR estimator). To address the missing data, we used full information maximum likelihood (FIML) estimation. To assess the model fit, we report a combination of the fit statistics χ², CFI, RMSEA and SRMR as recommended by Kline (2016). Hu and Bentler (1999) propose the following values as indicators of a good fit: CFI > 0.95, RMSEA < 0.05 and SRMR < 0.08. For the χ² statistic, we additionally report the ratio between χ² and the degrees of freedom (CMIN/DF), which should be < 3 (Schermelleh-Engel et al., 2003).
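The cutoff logic above can be expressed as a small helper function. This is purely an illustration of the cited thresholds (Hu & Bentler, 1999; Schermelleh-Engel et al., 2003); the function name and return structure are our own.

```python
def check_fit(chi2, df, cfi, rmsea, srmr):
    """Evaluate a model against the cutoffs used in the text:
    CFI > 0.95, RMSEA < 0.05, SRMR < 0.08 and chi2/df (CMIN/DF) < 3."""
    cmin_df = chi2 / df
    return {
        "cmin_df": cmin_df,
        "cmin_df_ok": cmin_df < 3,
        "cfi_ok": cfi > 0.95,
        "rmsea_ok": rmsea < 0.05,
        "srmr_ok": srmr < 0.08,
    }
```

Note that values exactly at a cutoff (e.g. RMSEA = 0.05) fail a strict inequality; reported fit indices are usually rounded, so such borderline cases call for judgment rather than a mechanical decision.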
Media-didactical self-efficacy was measured with five items with standardized estimates of factor loadings between 0.72 and 0.90. The CFA showed a very good model fit (χ² = 8.298, df = 5, p = 0.14, CMIN/DF = 1.66, CFI = 0.99, RMSEA = 0.05 and SRMR = 0.01). Negative attitude toward using digital media in training was measured with four items with standardized estimates of factor loadings between 0.76 and 0.92. The CFA showed a good model fit (χ² = 5.274, df = 2, p = 0.07, CMIN/DF = 2.64, CFI = 0.98, RMSEA = 0.09 and SRMR = 0.02). As both variables were measured using the same method simultaneously, we tested the common method variance (CMV) using Harman's single-factor test and the common method factor (CMF) technique (Podsakoff et al., 2003). For Harman's single-factor test, we included all items of both variables into a single-factor CFA, which showed a non-acceptable fit (χ² = 281.726, df = 27, p = 0.00, CMIN/DF = 10.43, CFI = 0.70, RMSEA = 0.19 and SRMR = 0.15). Then, we included a latent common method factor with equal factor loadings into a model with both latent variables. The factor loadings of the latent variables did not change by more than λ = 0.1 after including the CMF. The factor loadings of the CMF were not significant, except for one item. Both tests indicate that CMV is not a concern in our analysis.
The group differences between the trainers who attended and those who did not attend a course on digital media within the past 5 years were tested in terms of the variables media-didactical competence, media-didactical self-efficacy, negative attitudes toward digital media use in training and the use of digital media in training. The analysis was conducted with SPSS 25. The Shapiro-Wilk test showed that all four variables were not normally distributed, and outliers were identified in all variables, except for the use of digital media in training. Therefore, we used the nonparametric Mann-Whitney test, which compares ranks to test for group differences. Regarding the calculation of the p-value, we used the Monte Carlo method with a one-tailed probability as our hypotheses made assumptions regarding the directions of the group differences. To account for multiple testing, the α-values were adjusted with the Benjamini-Hochberg correction.
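The Benjamini-Hochberg step-up procedure used for the α-adjustment can be sketched in pure Python. This illustrates the procedure itself, not the SPSS implementation, and the p-values in the test are made up rather than taken from our analyses.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean
    rejection decision per p-value, controlling the false discovery
    rate at the given alpha."""
    m = len(p_values)
    # Indices sorted by ascending p-value.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest 1-based rank k with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # Reject all hypotheses up to and including rank k_max.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

For example, with four tests and p-values 0.01, 0.04, 0.03 and 0.50, the stepwise thresholds at alpha = 0.05 are 0.0125, 0.025, 0.0375 and 0.05, so only the hypothesis with p = 0.01 is rejected.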

Correlation between the variables
The overall fit between the model and observed data was good (χ² = 70.388, df = 40, p = 0.00, CMIN/DF = 1.76, CFI = 0.97, RMSEA = 0.05 and SRMR = 0.04). The means and standard deviations of the variables are shown in Table 1. The correlations between the variables are shown in Figure 2. Following Cohen (1992), the effect sizes of r in the social sciences can be categorized as small (0.10), medium (0.30) and large (0.50).
In contrast to our hypothesis H1, there was no significant relationship between media-didactical competence and media-didactical self-efficacy (r = 0.16, p = 0.068). Media-didactical competence was significantly negatively correlated with negative attitudes toward using digital media in training (r = −0.34, p = 0.000; H2). However, there was no significant relationship between media-didactical competence and the use of digital media in training (r = −0.03, p = 0.755) as proposed in H3.
As expected in H4 and H5, media-didactical self-efficacy was positively correlated with the use of digital media in training (r = 0.47, p = 0.000) and negatively correlated with negative attitudes toward using digital media in training. Negative attitudes toward using digital media in training were negatively correlated with the use of digital media in training (r = −0.37, p = 0.000); thus, H6 was confirmed.

Group differences
An overview of all four analyses is presented in Table 2. The trainers who participated in a course on digital media showed higher values of media-didactical competence (Mdn = 5.9) than the trainers who did not participate in a course (Mdn = 5.7). The difference between the groups was significant (U = 4393.0, p = 0.040) with a small effect size of d = 0.25. The trainers who participated in a course on digital media showed higher values of media-didactical self-efficacy (Mdn = 5.2) than the trainers who had not participated in a course (Mdn = 4.6). The difference between the groups was significant (U = 5803.5, p = 0.000) with a medium effect size of d = 0.45.
The ranks of negative attitude toward using digital media in training among the trainers who had participated in a course on digital media (Mdn = 1.0) were lower than those of the trainers who had not participated in such a course (Mdn = 1.0). The difference between the groups was not significant (U = 4544.5, p = 0.079) with a small effect size of d = 0.20.
The trainers who participated in a course on digital media used digital media more often in training (Mdn = 3.5) than the trainers who had not participated in a course on digital media (Mdn = 3.1). The difference between the groups was significant (U = 6482.5, p = 0.003) with a small effect size of d = 0.37.

Discussion
The goal of our study was to investigate the relationships among media-didactical competence, media-didactical self-efficacy, attitudes and the use of digital media and to compare all variables between trainers who participated in a course on digital media and those who had not. We conducted an online study involving 279 trainers, analyzed the relationships among the variables and assessed the group differences with a Mann-Whitney test. Table 3 provides an overview of all hypotheses and displays whether each hypothesis was accepted or rejected. In the following, we discuss our results and the limitations of our study and present the implications for practice and future research.

Regarding our analyses of the correlations between the variables, most hypotheses were accepted when tested with our data and, thus, were consistent with our literature review. The trainers who scored lower on media-didactical competence also seemed to have more negative attitudes (H2). Accordingly, the trainers who had lower media-didactical self-efficacy also had more negative attitudes (H5). The trainers who used digital media less often in their training also had lower media-didactical self-efficacy (H4) and more negative attitudes about using digital media (H6).
Two hypotheses, i.e. H1 and H3, were rejected. We hypothesized a positive relationship between media-didactical competence and media-didactical self-efficacy (H1). This hypothesis was rejected based on our data, which could be explained by different reasons. First, in our literature review, we only found studies investigating similar constructs, thus making a direct transfer of assumptions difficult. Second, our results could suggest that some trainers with a lower media-didactical competence score have high media-didactical self-efficacy scores and vice versa. Regarding the goal of professionalization, this finding could imply that some trainers overestimate their competence and, as a possible consequence, might not consider the necessity to improve their media-didactical competence.
In contrast to our expectations, media-didactical competence was not related to the use of digital media in training (H3). This finding suggests that trainers who use digital media in training do not necessarily have the media-didactical competence to use such media effectively to create supportive learning settings for their trainees. This finding highlights the need for the professionalization of trainers with regard to media-didactical competence. In contrast, some trainers with higher levels of media-didactical competence may not regularly use digital media in their training. These trainers might not use digital media in their training because the overall design or contextual factors do not allow for the use of digital media.
Although we cannot infer causality based on our cross-sectional data, these results emphasize the important roles of media-didactical self-efficacy, attitudes and use of digital media within their reciprocal relationships with each other and, partially, media-didactical competence. However, our data also suggest that the trainers' media-didactical self-efficacy and attitudes toward digital media, but not their media-didactical competence, are related to the use of digital media in training. Thus, for trainers' decisions to apply digital media in their courses, it is not relevant how competent they are; in contrast, how competent they feel they are and their attitudes regarding digital media are critical.

Table 3. Overview of the hypotheses and their outcomes (accepted/rejected):

H1: Media-didactical competence is positively related to media-didactical self-efficacy. Rejected
H2: Media-didactical competence is negatively related to negative attitudes toward using digital media in training. Accepted
H3: Media-didactical competence is positively related to the use of digital media in training. Rejected
H4: Media-didactical self-efficacy is positively related to the use of digital media in training. Accepted
H5: Media-didactical self-efficacy is negatively related to negative attitudes towards using digital media in training. Accepted
H6: Negative attitude towards using digital media in training is negatively related to the use of digital media in training. Accepted
H7: Compared to trainers who did not attend a course on digital media, trainers who attended a course on digital media will show (a) higher media-didactical competence, (b) higher media-didactical self-efficacy, (c) lower scores of negative attitudes towards the use of digital media in training, and (d) higher scores of the use of digital media in training.
Finally, we tested the group differences between trainers who participated in one or more courses on digital media in the prior 5 years and those who had not (H7). We hypothesized that trainers who participated in such courses would have higher means of media-didactical competence (H7a) and media-didactical self-efficacy (H7b), lower means of negative attitudes toward using digital media in training (H7c) and more frequent use of digital media in training (H7d). The results indicate that having attended a course on digital media is linked to higher media-didactical competence, higher media-didactical self-efficacy and more frequent use of digital media but not fewer negative attitudes, although we did find a small effect for the latter variable. Additionally, we do not know what type of course the trainers attended with regard to duration, content focus or quality. If future studies controlled for these factors and used a more experimental design, we would expect the negative attitude scores of trainers who attended a course on digital media to be lower and the effect sizes of the differences between trainers who did and did not attend such a course to be generally greater.
In summary, based on the findings of our literature review and study, we would assume that the quality and effectiveness of media use in adult learning arrangements might be predicted by the media-didactical competence of the trainers. However, their media-didactical self-efficacy and attitudes could be crucial for their decision whether or not to apply media in their training. Thus, the effective use of digital media in training might depend not only on the trainers' competence but also on their attitudes and how well prepared they feel to implement digital media.

Limitations
Our study has the following limitations. Due to the nature of recruiting participants for the study, we collected a convenience sample that may not be representative of trainers in general. Moreover, there is a possibility that trainers who were particularly interested in the topic or may already have had above-average pre-exposure to the topic were more likely to participate.
Regarding the study design, our cross-sectional data do not allow for the testing of the predictive power of media-didactical self-efficacy, attitudes and the use of digital media on media-didactical competence. To investigate the relevance of the variables in promoting media-didactical competence, a longitudinal intervention study is needed.
Media-didactical competence was assessed through a set of eight test items. Hence, the test did not assess the broad spectrum of media-didactical competence but could be considered an indication of the trainers' competence. To provide a more valid measurement, a more comprehensive assessment with more indicators is necessary.
As shown in our literature review, there are nearly no studies concerning the media-didactical competence of trainers, although some studies have explored similar constructs in schoolteachers. To understand the role of media-didactical competence as a facet of media-pedagogical competence and the professional competence of trainers in general, more studies related to this topic are needed, especially intervention studies, which could enhance our understanding of how to promote media-didactical competence among trainers.

Implications for practice and future research
Despite the limitations of this study, the following implications for practice can be inferred. Digital media are substantial components of training and development (Freifeld, 2018), and policymakers have stated the need to promote the media-didactical competence of trainers (European Commission, 2015a, 2015b). For adult education practice, it would be relevant to develop effective courses for trainers and to evaluate their contributions to media-didactical competence, self-efficacy, attitudes and the actual use of digital media.
The results of the present study highlight aspects linked to media-didactical competence and may be valuable when designing and implementing programs that structurally support trainers in the development of their media-didactical competence. Although we cannot infer causality, we would assume that interventions aiming to foster media-didactical competence should also address trainers' media-didactical self-efficacy and their attitudes regarding the use of digital media. More specifically, fostering effective media-supported learning arrangements requires a clear diagnosis of current deficits. If the quality of media-supported learning situations is lacking, the media-didactical competence of the trainers might be the key to improving the effectiveness of such learning scenarios. However, if there is a general reluctance to conduct media-supported training, it could be more effective to examine the trainers' attitudes and strengthen their media-related self-efficacy.
According to Bandura (1995), self-efficacy can be promoted through mastery experiences, e.g. by having trainers successfully apply digital media throughout the training, and through vicarious experiences, e.g. by observing the trainer of the course as a positive role model. Higher self-efficacy and a more positive attitude could lead to the increased use of digital media in training. In combination with increased media-didactical competence, trainers could thus be supported in creating effective learning settings through the integration of digital media.
Regarding the general goal of improving the professionalization of trainers (Strauch et al., 2010), trainers should be aware of their media-didactical competence and have the opportunity to assess their competence. Therefore, self-assessment systems, such as those developed for Germany (MEKWEP, 2018; Rott et al., 2018), could be helpful.
Future research should focus on intervention studies following a design-based research approach (Collins et al., 2004). Building on the results of our study, future work should investigate how a train-the-trainer program affects trainers' media-related competence, self-efficacy, attitudes and media-related behavior in training, as well as the outcomes on the learners' side. Such intervention designs are not easy to realize in the field of adult education and training but could be quite promising in providing the knowledge necessary for future interventions targeting the media-related professionalization of trainers.

Conclusion
The present study aimed to investigate media-didactical competence and its relationships with media-didactical self-efficacy, attitudes regarding digital media in training and the use of digital media in training. Furthermore, we sought to study the differences in all variables between trainers who attended a course on digital media and those who did not attend such a course. In summary, the results of our study suggest that all three facets (competence, self-efficacy and attitudes) have to be addressed to effectively integrate digital media into training and to further pursue the professionalization of the field of adult education.