An Examination of the edTPA Portfolio Assessment and Other Measures of Teacher Preparation and Readiness
Victoria Russell (PhD, University of South Florida) is Associate Professor of Spanish and Foreign Language Education, Valdosta State University, Valdosta, GA.
Kelly F. Davidson Devall (PhD, Emory University) is Assistant Professor of French and Foreign Language Education, Valdosta State University, Valdosta, GA.
Abstract
The authors examined the outcomes on several measures of world language teacher preparedness, including university‐ and state‐mandated summative evaluations and the edTPA portfolio assessment, for seven world language teacher candidates during their final semester of clinical practice. The candidates were enrolled in an initial certification program (Spanish P–12) at the same university in Georgia. The results revealed that edTPA scores were not well aligned with mentor teachers’ and supervisors’ evaluations on the university‐wide instrument and that measures of teacher content knowledge and target language proficiency did not correlate well with candidates’ edTPA scores. The disparity between edTPA scores and other measures of teacher preparation was most apparent for the two nonnative English speakers. The findings suggest that candidates whose primary language is not English may need additional support with academic English. Moreover, the present study found that candidates, mentors, and supervisors lacked an understanding of how the various university, state, and national assessments fit together to measure the knowledge, skills, and dispositions that are necessary for effectively teaching a world language. Training on the content and purpose of the edTPA may be needed so that stakeholders understand the benefit of having a national‐level view of candidates’ knowledge and skills.
Introduction
From the publication of A Nation at Risk in 1983 (National Commission on Excellence in Education, 1983) through the recent passage of the Every Student Succeeds Act (Civic Impulse, 2015), attempts to reform the educational system in the United States have gone through many iterations. More recently, there has been increased attention placed on exploring how to define the “highly qualified teacher” (Hildebrandt & Swanson, 2014). The current focus on reform and accountability has also led many to examine teacher education programs and the way in which teacher candidates should be assessed for licensure and certification.
The edTPA, formerly known as the Teacher Performance Assessment, was developed by the American Association of Colleges for Teacher Education (AACTE) and the Stanford Center for Assessment, Learning, and Equity (SCALE). It is a portfolio assessment that includes three multipart tasks that are designed to evaluate the knowledge and skills of aspiring teachers. Since its initial implementation in 2013, it has been used increasingly across the United States. Because certification requirements and assessments vary from state to state, the edTPA was designed to provide a national framework to evaluate teacher preparation and readiness. It is the first national world language assessment for undergraduate and graduate teacher candidates who seek initial certification.
This pilot study reports data from a variety of measures, including the edTPA, for seven undergraduate teacher candidates who were enrolled in an initial certification program in Spanish P–12 (prekindergarten through 12th grade) in Georgia during the 2014–2015 academic year. Candidates’ scores were also included in a larger project whose goal was to determine the minimum acceptable passing score on the world language edTPA assessment, which was implemented as a requirement for teacher certification in Georgia across all disciplines beginning in September 2015 (Georgia Professional Standards Commission, 2015). Given the high‐stakes nature of this assessment, the present study had the following goals: (1) it examined candidates’ readiness to teach as defined by SCALE; (2) it compared several other statewide and university‐specific measures of world language teacher preparation with edTPA scores in order to more fully understand the contribution of the different formative and summative assessments; (3) it explored the perceptions of teacher candidates, mentor teachers, and university supervisors with respect to the edTPA assessment; and (4) it examined the notion of equity for students with diverse backgrounds—including those who are not native speakers of English. Using quantitative and qualitative measures, the study specifically sought to determine whether the edTPA aligned with other assessment measures of teacher preparation and readiness that were previously in place and explored how the relevant stakeholders perceived the implementation of the edTPA as a certification requirement for world language teachers.
Review of Literature
The edTPA and National Teacher Candidate Evaluation
The changing landscape of assessment and accountability in education has focused attention on the definition and measurement of teacher effectiveness. Ingold and Wang (2010) stated that a standardized evaluation of teacher preparation would clarify expectations and allow for the comparison of results across differing programs, states, and districts. Such an initiative, however, requires all stakeholders to revisit foundational questions about what it means to be a highly effective language teacher and what it would take to produce such a teacher. In their report, they described how several sets of standards work together to define the body of knowledge, skills, and dispositions that teacher candidates and beginning teachers should possess, including those developed by the Interstate New Teacher Assessment and Support Consortium as well as the ACTFL teacher preparation program standards, which were approved by the Council for Accreditation of Educator Preparation (CAEP), formerly the National Council for Accreditation of Teacher Education. While these sets of standards indicate what candidates and beginning teachers should know and be able to do, they do not specify how that body of knowledge, skills, and dispositions should be assessed.
In developing the edTPA, SCALE attempted to address fundamental questions regarding the assessment of teacher readiness. The resulting multipart assessment guides candidates through the creation of a portfolio that engages three interrelated areas: Planning for Instruction and Assessment, Instructing and Engaging Students in Learning, and Assessing Student Learning, often referred to as Tasks 1, 2, and 3, respectively (SCALE, 2014). Teacher candidates must design a learning segment that consists of 3 to 5 hours of connected instruction in which they focus on developing communicative proficiency within meaningful cultural contexts, videorecord themselves during the implementation of these lessons, and demonstrate effective use of assessment data to inform instructional design and implementation. Within each task, candidates complete extensive reflective commentaries, guided by discipline‐specific prompts that encourage the integration of theories in second language acquisition and foreign language pedagogy. The prompts that guide the commentary encourage candidates to justify their planning decisions, analyze their teaching, and demonstrate their use of data to inform instruction. Upon submission of the edTPA portfolio, trained raters evaluate candidates’ work according to the 13 rubrics that are provided within the edTPA handbook (SCALE, 2014).
According to edTPA data (AACTE, 2015), at the time this article was submitted, five states required candidates to pass the edTPA specifically in order to obtain licensure, with two more joining this group in 2017. Six other states approved the edTPA as a possible assessment for teacher candidate performance, while 24 were still at varying stages of introduction, exploration, and implementation.1 Further examination indicated that more than 600 educator preparation programs were using the edTPA as an assessment, even if the results were not used to determine licensure or certification at the state level.
Implementing the edTPA Across Disciplines
The use of the edTPA as a measure of candidate readiness is a growing topic of inquiry across all 27 disciplines for which portfolio guidelines have been developed (Sato, 2014). Despite this wide range of disciplines, research related to the implications of incorporating or requiring the edTPA as a summative assessment measure in teacher preparation programs has remained limited. Some inquiries have focused on issues associated with instructional contexts. Liu and Milman (2013) found that the implementation of the edTPA encouraged teacher candidates to consider their students’ backgrounds and experiences, e.g., by ensuring that assignments needing computer or Internet access were addressed at school rather than at home, while other teacher candidates explored themes of acceptance, extracurricular duties, and native language differences. Although completing edTPA tasks did promote reflection on contextual and multicultural issues, it “seemed to focus teacher candidates on finding the ‘right answer’ rather than grounding their practice in educational ideals” (p. 133). This study raised questions about the extent to which issues such as diversity and inequality could be authentically addressed through a standardized assessment measure.
With respect to the diversity of the candidates themselves, the edTPA Annual Administrative Report (SCALE, 2015, p. 35) indicated that there were relatively few minorities who submitted an edTPA portfolio assessment in 2014—Hispanic (5.3%), Asian (4.1%), African American (2.8%), and “multiracial” (2.8%)—while the vast majority of candidates were white (79.9%). When mean scores were compared, African American candidates scored significantly lower than Asian, Hispanic, and white candidates. While there were no significant differences found between Hispanic and white candidates, Asian candidates outperformed all other groups. In terms of native language, there were no significant differences found between candidates who reported having a primary language other than English compared to those who reported English as their primary language. There were also no significant differences found for gender. However, the authors of the report cautioned that, due to the disproportionate number of white candidates in the sample, group differences must be interpreted with caution.
Kleyn, Lopez, and Makar (2015), like Liu and Milman (2013), found that the implementation of the edTPA provided candidates with an opportunity to gather information about and reflect on the backgrounds of the students in their classes in order to adjust instruction. Their study of the effects of implementing the edTPA in the fields of teaching English as a second or other language (TESOL), bilingual special education, and bilingual childhood education found that the Context for Learning section of the edTPA portfolio, which requires teacher candidates to describe their students using demographic data related to the district, school, and classroom, as well as Task 3, which prompts candidates to consider assessment data for an entire class as well as for individual students, necessitated further reflection on student backgrounds and how unique learning needs should inform their pedagogy.
A number of studies have also found that collaborative relationships between candidates, supervisors, and mentor teachers changed as a result of the edTPA process. Kleyn et al. (2015) found that teacher candidates relied on each other for encouragement and clarification, a logical result of the stipulation that other stakeholders, such as university supervisors and mentor teachers, not review and comment on candidates’ work. Similarly, Sandholtz and Shea (2012) revealed a changing role for university supervisors, particularly when their perspectives regarding teacher candidate performance did not align with scores and outcomes on edTPA assessments, thus underlining the need for multiple methods of evaluation, including observations by university supervisors, in order to ensure valid and well‐rounded assessment.
Researchers, aware of the possible changing roles of university supervisors, mentor teachers, and teacher candidates, have begun to reflect on how teacher preparation programs could be adapted to accommodate extensive portfolio assessments like the edTPA. Lachuck and Koellner (2015) examined how teacher candidates and university supervisors approached the implementation of the edTPA by documenting tensions between theory and practice, including edTPA requirements. Describing these tensions, they stated, “We struggled with the balance between preparing our teacher candidates for a test and preparing our teacher candidates for teaching literacy” (p. 87). In response, Lachuck and Koellner made general changes to initial coursework in their particular teacher preparation program to include more targeted evidence‐based reflection, and in subsequent coursework, they incorporated elements of planning, instruction, and assessment akin to what would be expected from the edTPA.
Meuwissen and Choppin (2015) also used the lens of tension and questions to explore the implementation of the edTPA in New York and Washington. These researchers identified three areas of tension within their teacher preparation program: support, or how other stakeholders such as university supervisors or mentor teachers could aid in the process of building a portfolio; representation, or how to best represent candidate readiness within edTPA parameters; and agency, or how to navigate other factors while completing portfolios, such as changing classroom dynamics or relationships with other stakeholders. In contrast to Lachuck and Koellner (2015), these researchers found it more constructive to identify how to incorporate the edTPA as a specific key element into working toward strong pedagogical practices, including the navigation of policy and practice.
Similarly, Miller, Carroll, Jancic, and Markworth (2015) found the incorporation of the edTPA as an assessment of teacher readiness to be an impetus for greater changes in teacher preparation course design. They indicated that the rigorous process of building an edTPA portfolio could inform the backward design of courses such that candidates were guided in the development of dispositions and practices that supported specific edTPA requirements. This extended to other stakeholders, such as university supervisors and mentor teachers, who could also be given professional development training related to the edTPA in the hope that the learning community that surrounds teacher candidates could provide support from different perspectives with a strategic focus on the edTPA.
Taken together, this body of research has indicated that the process of building an edTPA portfolio has had an impact on both candidates’ experiences during clinical practice (also known as student teaching) as well as the roles of stakeholders during this period. While some positive aspects were found, such as increased focus on student backgrounds and learning contexts (Kleyn et al., 2015; Liu & Milman, 2013), other findings raised questions about diversity and inequality within the edTPA standardized assessment as well as the alignment of the edTPA with other methods of evaluation (Lachuck & Koellner, 2015; Liu & Milman, 2013; Sandholtz & Shea, 2012). In addition, others found that edTPA assessments prompted a reconsideration of stakeholder roles or programmatic design for teacher preparation course sequences (Kleyn et al., 2015; Meuwissen & Choppin, 2015; Sandholtz & Shea, 2012).
The edTPA and World Language Teacher Preparation
Similar to research on the edTPA across disciplines, there is a growing body of data on candidates’ scores and subscores on the world language assessment and what this means for teacher candidate readiness (Hildebrandt & Hlas, 2013; Hildebrandt & Swanson, 2014; Ingold & Wang, 2010; Troyan & Kaplan, 2015). According to Hildebrandt and Swanson (2014), the need for qualified foreign language teachers is geographically wide‐ranging and continues to increase, thus making it increasingly critical to identify innovative and effective ways of preparing and assessing foreign language teacher candidates. This need requires further exploration of the edTPA and how it functions as a measure of world language teacher readiness.
In one of the first studies exploring teacher candidate assessment using the edTPA, Hildebrandt and Swanson (2014) found that candidates in Georgia and Illinois tended to be more successful in Task 1, Planning for Instruction and Assessment, and struggled with Task 3, Assessing Student Learning, perhaps due to more frequent opportunities to practice planning skills in the courses leading to their clinical practice. In contrast, they posited that candidates had fewer opportunities to gather, analyze, and act on assessment data as they planned for—and provided differentiated learning opportunities in—subsequent lessons.
A number of other studies have also investigated the implementation of the edTPA as a measure of world language teacher candidate readiness. Hildebrandt and Hlas (2013) noted some positive outcomes, including the requirement that candidates focus on creating lessons that use the target language in a meaningful cultural context. However, they questioned the impact and benefit of adding yet another form of evaluation in addition to the numerous assessment measures already required in many states (e.g., the ACTFL Oral Proficiency Interview [OPI], state exams on content and pedagogy, and extensive program‐specific evaluations conducted by university supervisors and mentor teachers). They suggested that comprehensive planning, including collaboration between all stakeholders, is essential to candidate success on the edTPA portfolio. This would indicate that strong scaffolding and guidance are needed for teacher candidates as they prepare for and complete the edTPA. In his case study of teacher candidate identity negotiation, Martel (2015) considered how teacher candidates approach such scaffolding and guidance during the clinical practice semester, including preparation for assessments such as the edTPA. Through extensive reflective experiences, his participant reported tension between the supervisor's expectations and the assessment requirements: the candidate seemed to fall back on “studentship” (Graber, 1991), meeting expectations without internalizing the intended principles. This finding suggests that candidates’ reflective practices play an important role in progress through the clinical practice semester, as they give all stakeholders insight into how different assessments and practices intersect.
Troyan and Kaplan (2015) explored how one teacher candidate's writing developed through different types of reflective compositions: specifically, “personal private reflection” and “critical academic reflection” (p. 372). The researchers found that explicit instruction in critical academic reflection could be of vital importance in preparing candidates to successfully complete the edTPA portfolio. Although the participant in their case study was a native speaker of English and had training in descriptive writing as a former journalism major, the authors emphasized that directed instruction and explicit guidance in critical reflective writing were essential.
Ingold and Wang's (2010) seminal report, which included a discussion of national standards and the benefits of multistate or national collaboration for credentialing, addressed the importance of background factors in specifying the need to provide candidates with nontraditional backgrounds, such as those who are native speakers of a language other than English, with structured opportunities to actively pursue certification. Hildebrandt and Swanson (2014) also called for focused research that examines the implementation of the edTPA in content‐specific programs and its impact on the teacher candidates being assessed, the university faculty who supervise them, the classroom teachers who mentor them, and the students in their classrooms.
Thus, as teacher preparation programs in world languages have begun to explore, introduce, or require the edTPA as a summative assessment of teacher readiness, it is important to examine how the edTPA portfolio complements and extends other, existing measures of teacher candidate preparation. Such an examination can inform stakeholders’ understanding of supervisory, programmatic, and curricular support for teacher candidates and provide a clearer picture of the impact that the edTPA may have on world language teacher preparation. Specifically, the study addressed the following questions:
- What is the relationship between candidates’ scores on the edTPA portfolio assessment and their end‐of‐semester evaluations by their mentor teachers and university supervisors as measured by the summative, university‐wide instrument (Candidate Assessment on Performance Standards [CAPS]) evaluation?
- What is the relationship between candidates’ edTPA scores and their Spanish content knowledge as measured by the Georgia Assessments for the Certification of Educators (GACE) Spanish content exams?
- What is the relationship between candidates’ edTPA scores and their target language proficiency as measured by the OPI?
- How does the combined set of assessments complement and inform an understanding of candidates’ readiness to teach?
- To what extent are candidates’, mentor teachers’, and university supervisors’ perceptions of the efficacy of the edTPA in measuring a candidate's ability to plan, instruct, and assess student learning congruent?
Methods
Research Design
The present study employed a mixed‐methods sequential exploratory design (Creswell, Plano Clark, Gutmann, & Hanson, 2003). Following this approach, the quantitative and qualitative components were carried out in a sequential order: Candidates completed the OPI, the GACE subject exams, the edTPA, the summative CAPS, and then the questionnaires. The quantitative phase of the study was given analytical priority, and the qualitative findings were used to further illuminate the quantitative findings.
Participants
The interdisciplinary initial certification Foreign Language Education (FLED) program under consideration is shared between the College of Arts and Sciences and the College of Education and Human Services at a regional university in a rural area of the southeastern United States that serves 9,328 undergraduates and 2,235 graduate students. The program is accredited by the CAEP and received ACTFL national recognition through 2021. The seven teacher candidates (six females and one male; ages 21–36) were enrolled in their final semester of clinical practice. All of the candidates were FLED (Spanish) majors who were seeking certification to teach Spanish P–12 in Georgia. Five were native speakers of English, one was a native speaker of Spanish, and one was a heritage speaker of Spanish.
Instruments and Procedures
As shown in the Appendix, data were obtained from five measures: (1) the edTPA portfolio assessment; (2) the summative CAPS evaluations; (3) scores on the GACE subject area exams in Spanish; (4) OPI ratings; and (5) the candidate, mentor, and supervisor questionnaire.
edTPA Assessment
The edTPA world language assessment contains 13 rubrics covering various aspects of the three tasks (planning, instruction, and assessment). Scores on each rubric range from 1 to 5, with 1 indicating a poor performance and 5 indicating a superior performance. For Task 1 (planning), there are four evaluation rubrics, with a maximum score of 20 and a minimum score of 4; Task 2 (instruction) contains five rubrics, yielding a maximum score of 25 and a minimum score of 5; and there are four rubrics for Task 3 (assessment), with a maximum score of 20 and a minimum score of 4. Taken together, the maximum overall score for all three tasks and their associated 13 rubrics is 65 (5 points on each of the 13 rubrics), while the lowest score possible is 13 (1 point on each of the 13 rubrics). Prior to scoring portfolios, all edTPA raters are required to complete 17–22 hours of online training using the SCALE‐authorized scorer training curriculum that is delivered by Pearson (2014). Raters are trained to overlook typos and grammatical errors in English and to focus on the content of the portfolio. Raters must hold, at a minimum, a bachelor's degree in the field; have PK–12 or university classroom teaching experience; and have worked with teacher candidates or in‐service teachers in the area of teacher preparation within the past 5 years (Pearson, 2014). Based on the results of the pilot year, a passing score in Georgia for the edTPA world language assessment is 29 or higher until September 2017; after this date, a passing score of 32 or higher will be required. All seven candidates voluntarily shared their edTPA score reports for the purposes of this study.
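Purely for illustration, the scoring arithmetic described above can be expressed as a short calculation. The rubric scores below are hypothetical and are not drawn from the study's data:

```python
# Hypothetical rubric scores (1-5 each) for the three edTPA tasks.
# Task 1 (planning) has four rubrics, Task 2 (instruction) has five,
# and Task 3 (assessment) has four, for 13 rubrics in total.
rubric_scores = {
    "planning":    [4, 3, 4, 3],     # subscore range 4-20
    "instruction": [3, 3, 2, 3, 3],  # subscore range 5-25
    "assessment":  [2, 3, 2, 3],     # subscore range 4-20
}

# Each task subscore is the sum of its rubric scores.
task_subscores = {task: sum(scores) for task, scores in rubric_scores.items()}

# The overall score is the sum of all 13 rubrics and ranges from 13 to 65.
overall = sum(task_subscores.values())

print(task_subscores)  # {'planning': 14, 'instruction': 14, 'assessment': 10}
print(overall)         # 38, above Georgia's initial cut score of 29
```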
CAPS Evaluation Rubric
This observation instrument and summative assessment was adapted by the Educator Preparation Provider from the observation component of the Georgia Teacher Keys Effectiveness System, which is used to evaluate all P–12 in‐service teachers in Georgia (Dewar College of Education and Human Services, 2014). The Teacher Assessment on Performance Standards (TAPS), which is the evaluation rubric for this system, is used for licensed teachers. When used with teacher candidates, it is known as CAPS and contains 10 performance standards: professional knowledge, instructional planning, instructional strategies, differentiated instruction, assessment strategies, assessment uses, positive learning environment, academically challenging environment, professionalism, and communication. Scores on the CAPS include 1 (ineffective), 2 (developing), 3 (proficient), and 4 (exemplary), resulting in a composite observation assessment score ranging from 10 to 40. The CAPS performance standards and sample performance indicators are available online (see http://www.valdosta.edu/colleges/education/student-teaching-and-field-experiences/documents/caps---user-guide.pdf, pp. 19–20).
The university supervisors, who are content experts in world language pedagogy, participated in a weeklong (40‐hour) training seminar on the Teacher Keys Effectiveness System with a certified trainer from the Georgia Department of Education, resulting in a formal credential by the state. The mentor teachers received a CAPS user guide from the Educator Preparation Provider and were trained on the content and purpose of TAPS at their schools. In addition, like all teachers in Georgia, they are observed and evaluated using the TAPS protocol by their school administrators.
With respect to the final clinical practice semester, mentor teachers and university supervisors each observe and assess teacher candidates using the CAPS instrument a minimum of three times (at the beginning of the semester—initial formative; at midterm—mid‐formative; and at the end of the semester—final formative). In addition, the teacher candidates complete three self‐evaluations using the CAPS instrument. Approximately 1 to 2 weeks after the final formative evaluation, the mentor teachers, university supervisors, and teacher candidates meet together to examine the record of candidate growth using all nine formative CAPS evaluations to determine the final scores for each performance standard (the final summative evaluation), with the final rating in case of disagreement determined by the university supervisor. While any score of 1 (ineffective) on the mid‐formative evaluation raises concern, candidates who receive a score of 1 on any of the 10 performance standards on the final evaluation must repeat the entire clinical practice experience. The researchers accessed participants’ CAPS scores through the university's accreditation database.
GACE Spanish Subject Exams
In order to obtain certification to teach Spanish P–12 in Georgia, candidates must pass two GACE Spanish content exams. Test 141 assesses candidates’ ability to read and write in Spanish and comprises 32 items, with 20 items targeting reading and writing skills and 12 items focusing on linguistics, comparisons, cultures, and cross‐disciplinary knowledge. Test 142 assesses candidates’ listening and speaking skills and contains 30 items, with 20 items assessing listening and speaking and 10 items targeting linguistics, comparisons, cultures, and cross‐disciplinary knowledge. Both tests have scores that range from 100 to 300; candidates must score a minimum of 220 out of 300 in order to pass each test. A score between 220 and 249 indicates passing at the Induction level, and a score of 250 to 300 indicates passing at the Professional level. At present, the Georgia Professional Standards Commission accepts passing at either level for certification purposes. Candidates complete these assessments during the final clinical practice semester. The researchers accessed participants’ GACE subject exam scores through the university's accreditation database.
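The GACE score bands described above can be summarized as a simple classification. The function below is ours, written only to illustrate the cut scores; it is not part of any GACE scoring software:

```python
def gace_band(score: int) -> str:
    """Classify a GACE Spanish subject exam scaled score (100-300).

    Cut scores: below 220 is not passing; 220-249 passes at the
    Induction level; 250-300 passes at the Professional level.
    """
    if not 100 <= score <= 300:
        raise ValueError("GACE scaled scores range from 100 to 300")
    if score < 220:
        return "Not passing"
    if score <= 249:
        return "Passing (Induction level)"
    return "Passing (Professional level)"

print(gace_band(235))  # Passing (Induction level)
print(gace_band(260))  # Passing (Professional level)
```

At present, Georgia accepts either passing band for certification, so the Induction/Professional distinction affects the reported level rather than eligibility.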
OPI
The OPI is a global assessment of an individual's functional speaking ability.2 Candidates took the OPI during their first methods course and were required to engage in a remediation plan and retake the OPI during the final clinical practice semester if they did not achieve a rating of Advanced Low on their first attempt. The OPI ratings reported in this article represent candidates’ ratings by the end of their final clinical practice semester. Although the FLED program in which the teacher candidates were enrolled recommends a target proficiency level of Advanced Low, historical data show that this benchmark is only attained about half of the time among nonnative speakers of the target language.3 Failure to achieve Advanced Low proficiency in the target language does not prevent candidates from obtaining certification to teach in Georgia provided that they attempt the OPI at least two times and that they complete a remediation plan after their first attempt (if unsuccessful) prior to completing the teacher education program. OPI scores were extracted from the university's accreditation database.
Candidate, Supervisor, and Mentor Questionnaires
University supervisors, mentor teachers, and teacher candidates were invited to respond to a questionnaire. Five open‐ended questions targeted participants’ perceptions of the edTPA and its efficacy in assessing teacher candidates’ planning, instruction, and assessment. Each version of the questionnaire included the same questions, phrased to reflect the respondent's role and responsibilities. An additional set of questions was also tailored to each group of respondents; for example, only university supervisors were asked about the extent to which the edTPA process posed particular advantages or challenges for English language learners and/or heritage speakers. Similarly, only teacher candidates were asked to what extent the process prepared them for future teaching careers. During the semester following their final practicum, questionnaires were sent by e‐mail to all participants. All three university supervisors, three of the seven mentor teachers, and all seven former teacher candidates completed the questionnaire. The questionnaires used in this study are freely available to download on the IRIS database (http://iris-database.org).
Analyses
Because edTPA, CAPS, and GACE scores are interval‐level data, Pearson Product Moment correlation coefficients were computed on the following relationships: (1) edTPA and CAPS scores, (2) edTPA and GACE 141 (reading/writing) scores, and (3) edTPA and GACE 142 (listening/speaking) scores. The Pearson r determined the magnitude of the relationship between the two scores. Because OPI scores represent ordinal‐level data, Spearman rho correlation coefficients were used to determine if there was a relationship between edTPA and OPI scores. OPI scores were coded as follows: 11—Distinguished, 10—Superior, 9—Advanced High, 8—Advanced Mid, 7—Advanced Low, 6—Intermediate High, 5—Intermediate Mid, 4—Intermediate Low, 3—Novice High, 2—Novice Mid, and 1—Novice Low.
Due to the small sample size in the present study, only the magnitude of the correlation coefficients, rather than the statistical significance, was examined (O'Rourke, Hatcher, & Stepanski, 2005). Scattergrams were generated to check whether the relationships between the variables were linear prior to running the Pearson Product Moment correlation analyses.
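The analytic steps above can be sketched in a few lines of Python. The score vectors below are illustrative placeholders rather than the study's data, and the rank-based Spearman computation assumes tie-free scores; only the ordinal coding scheme for OPI ratings is taken from the article.

```python
# Sketch of the correlation analyses described above.
# All score vectors are illustrative stand-ins, not the study's data.

def pearson(x, y):
    """Pearson product-moment correlation for interval-level data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def simple_ranks(x):
    """Ranks for tie-free data; tied values would need average ranks."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation computed on the ranks."""
    return pearson(simple_ranks(x), simple_ranks(y))

# Ordinal coding applied to OPI ratings (as described in the article)
OPI_CODES = {
    "Distinguished": 11, "Superior": 10, "Advanced High": 9,
    "Advanced Mid": 8, "Advanced Low": 7, "Intermediate High": 6,
    "Intermediate Mid": 5, "Intermediate Low": 4,
    "Novice High": 3, "Novice Mid": 2, "Novice Low": 1,
}

edtpa = [34, 41, 39, 38, 35, 26, 31]   # illustrative interval-level scores
caps = [28, 25, 30, 27, 26, 31, 29]    # illustrative interval-level scores
opi = ["Advanced Low", "Intermediate Mid", "Advanced Mid",
       "Intermediate Low", "Intermediate High", "Superior", "Advanced High"]

r = pearson(edtpa, caps)                            # interval vs. interval
rho = spearman(edtpa, [OPI_CODES[o] for o in opi])  # interval vs. ordinal
print(f"r = {r:.3f}, rho = {rho:.3f}")
```

With seven cases, as the article notes, only the magnitude of these coefficients is meaningfully interpretable, not their statistical significance.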
The constant comparative method (Glaser & Strauss, 1967) was employed to analyze the qualitative data. Responses to the questionnaire were initially coded for themes within each stakeholder group (candidate, mentor, and supervisor). First‐level codes were established, and similar concepts among participants’ responses were grouped together. These codes were then regrouped into broader categories, both across the full data set and by participant group. Working independently, the researchers subsequently revisited the codes and coding decisions, reaching 99% agreement.
Results
edTPA Scores and Other Assessment Measures
edTPA Scores
Candidates’ overall scores as well as task subscores on the world language edTPA portfolio assessment are presented in Table 1. The highest overall score was 41 and the lowest was 26 (M = 35.00, SD = 5.13). Candidate 2 received the highest score for planning (18/20), tied with Candidate 4 for the highest instruction score (14/25), and earned the highest overall score (41/65). Candidates scored least well on Task 3 (assessment); the highest score in this area was 12.5/20 (Candidate 3). Candidate 6, a native speaker of Spanish, had the weakest edTPA performance, with the lowest scores of the seven candidates for planning (8/20) and assessment (7/20) as well as the lowest overall score (26/65).
| Participant | Planning (Range 4–20) | Instruction (Range 5–25) | Assessment (Range 4–20) | Overall (Range 13–65) |
|---|---|---|---|---|
| Candidate 1 | 13 | 11.5 | 9.5 | 34 |
| Candidate 2 | 18 | 14 | 9 | 41 |
| Candidate 3 | 13 | 13 | 12.5 | 39 |
| Candidate 4 | 12 | 14 | 11 | 37 |
| Candidate 5 | 14 | 11 | 12 | 37 |
| Candidate 6* | 8 | 10.5 | 7 | 26 |
| Candidate 7** | 14 | 9 | 8 | 31 |
- * Native Spanish speaker
- ** Heritage Spanish speaker
CAPS Scores
Candidates’ overall scores on the summative CAPS assessment are presented in Table 2. The highest score was 31 and the lowest score was 25 (M = 27.71, SD = 2.29). Although Candidate 2 had the lowest overall summative CAPS score (25/40), this candidate received the highest edTPA score (41/65). Similarly, Candidate 6 had the highest overall summative CAPS score (32/40) yet the lowest edTPA score of the group (26/65). Like Candidate 6, Candidate 7 had a relatively high CAPS score (29/40) but a low edTPA score (31/65). Interestingly, Candidate 6 was a native speaker of Spanish and Candidate 7 was a heritage speaker of Spanish.
GACE Scores
GACE Spanish content exam (Tests 141 and 142) results are presented in Table 3. For the GACE reading/writing test, the highest score was 300 and the lowest score was 210 (M = 247.14, SD = 33.57). For the GACE listening/speaking test, the highest score was 293 and the lowest score was 183 (M = 239.43, SD = 39.79).
| Candidate Number | 1 | 2 | 3 | 4 | 5 | 6* | 7** |
|---|---|---|---|---|---|---|---|
| Reading/Writing (Test 141) Score | 210 | 226 | 283 | 236 | 221 | 254 | 300 |
| Listening/Speaking (Test 142) Score | 183 | 226 | 254 | 220 | 215 | 293 | 285 |
- Notes: Scores ranged from 100 to 300 on each test. Failing = 100–219. Passing Induction level = 220–249. Passing Professional level = 250–300.
- * Native Spanish speaker
- ** Heritage Spanish speaker
Overall, the candidates appeared to struggle more with Test 142 (listening/speaking). As shown in Table 3, Candidates 1 and 5 had the weakest performance on the GACE Spanish exams, while Candidates 3, 6, and 7 had the strongest performance on both exams, passing each test at the Professional level. It should be recalled that Candidates 6 and 7 were native and heritage speakers of Spanish, respectively.
OPI Scores
Scores for the OPI for each candidate are reported in Table 4. An examination of Table 4 indicates that Candidate 2 demonstrated the lowest level of proficiency in Spanish as measured by the OPI (Intermediate Mid), yet this candidate had the highest edTPA score of the group of seven candidates. Conversely, the candidate with the lowest edTPA score (Candidate 6, a native speaker) demonstrated the highest level of proficiency in Spanish as measured by the OPI (Superior). Although Candidate 2 had a low OPI score, she passed the GACE 142 (listening/speaking) exam at the Induction level.
Relationships Among Assessments
This study investigated the relationship between candidates’ edTPA scores and their scores on a variety of other summative assessments.
edTPA and CAPS
edTPA and summative CAPS evaluation scores were subjected to a Pearson Product Moment correlation analysis. The results revealed a strong negative correlation between the two measures, r = −0.752: candidates who scored high on the edTPA tended to receive low summative CAPS scores, and vice versa.
edTPA and GACE
Candidates’ edTPA scores and their scores for GACE Test 141 (reading/writing) were subjected to a Pearson Product Moment correlation analysis. The results revealed a weak negative correlation between edTPA scores and GACE reading/writing scores, r = −0.285. Similarly, candidates’ edTPA scores and their scores for GACE Test 142 (listening/speaking) were also subjected to a Pearson Product Moment correlation analysis. The analysis revealed a moderate negative correlation between edTPA scores and GACE listening/speaking scores, r = −0.586.
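Because the edTPA overall scores (Table 1) and the GACE scores (Table 3) are all reported above, these coefficients can be checked directly. The sketch below uses a minimal hand-rolled Pearson function; `scipy.stats.pearsonr` would serve equally well.

```python
# Checking the reported edTPA-GACE correlations against Tables 1 and 3.

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

edtpa = [34, 41, 39, 37, 37, 26, 31]            # Table 1, overall scores
gace_141 = [210, 226, 283, 236, 221, 254, 300]  # Table 3, reading/writing
gace_142 = [183, 226, 254, 220, 215, 293, 285]  # Table 3, listening/speaking

r_rw = pearson(edtpa, gace_141)  # weak negative, near the reported -0.285
r_ls = pearson(edtpa, gace_142)  # moderate negative, near the reported -0.586
print(f"Test 141: r = {r_rw:.3f}; Test 142: r = {r_ls:.3f}")
```

Run on the published table values, this essentially reproduces the reported coefficients (any difference appears only in the third decimal place and is within rounding).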
edTPA and OPI
edTPA and OPI scores were subjected to a Spearman correlation analysis. The results revealed a strong negative relationship, Spearman's rho = −0.748: among these seven candidates, those with high edTPA scores tended to have low OPI ratings, and vice versa.
Stakeholder Perspectives
Stakeholder Perceptions of the edTPA
Teacher candidates’, mentor teachers’, and university supervisors’ perceptions revealed both similarities and differences. The university supervisors viewed Task 1 as a good indicator of a teacher candidate's ability to plan a lesson sequence. However, they felt that Task 2 was not indicative of candidates’ abilities to deliver those same lessons, and only one supervisor felt that Task 3 held some benefit in that it forced students to consider the role of assessment in a basic manner. For one mentor teacher, the edTPA process as a whole was “a most unwelcome distraction for student teachers.” For another mentor, the process could be helpful but was not “necessary.”
Teacher candidates reported mixed perspectives. Candidates 1 and 5 felt that the edTPA process offered an effective way to demonstrate their abilities because it guided them to consider assessment data and other input when planning lessons. Both of these candidates performed well on the edTPA but struggled with other evaluation measures. While Candidate 5 emphasized that the edTPA requirements helped her focus on and develop her use of the target language during instruction, Candidate 6, a native speaker of Spanish, felt that the need to use written scholarly English extensively when completing the portfolio limited her ability to explain and justify her teaching practices, even though no problems existed with her oral English language skills. She stated, “According to the edTPA pilot assessments, I should not have been able to receive certification to teach Spanish, my native language.” Candidate 7 felt that the edTPA portfolio was not an authentic assessment measure of his readiness to teach:
I do not feel that the process of building my edTPA portfolio effectively demonstrated my ability to plan lessons using assessment data to inform my decisions because the process did not align well with the dynamic nature of the learning process in a classroom… most of what I was required to do as part of building and submitting an edTPA portfolio, I had already done at one point or another in my teaching program.
For this candidate, the short video clips and commentaries did not give him the space to effectively demonstrate his knowledge and skills. He likened the edTPA requirements to other experiences in the semesters leading up to the clinical practice semester; however, he felt that the process was not authentic and did not fit well with real‐world instructional contexts. Candidate 3, who performed well on all assessments, presented a slightly more measured view: “The process of building the edTPA portfolio had few positive aspects, but was primarily a waste of valuable classroom experience… . Likely the only teaching practice that I gathered from the edTPA experience is how to implement interpersonal, interactive, and presentational activities well, which has been beneficial.” Taken together, these perspectives indicate that some candidates found that the process of building an edTPA portfolio provided a framework for demonstrating growth when planning lessons and incorporating assessment data, while the majority felt that the process was largely unhelpful.
Perceived Alignment of the edTPA With CAPS and Other Measures
Participants across groups felt that the edTPA was not aligned with the university‐mandated observation evaluation, CAPS. For both the university supervisors and the mentor teachers, their experiences with the edTPA and CAPS, in particular, suggested that there were important differences between the objectives of these two assessments. According to one university supervisor, “edTPA is a good predictor of writing skills, lesson planning, and learning how to follow a checklist. CAPS evaluations more fully assess student teaching.” A mentor teacher stated, “The video tapes include lessons specific for what edTPA ‘wants’ to see, and therefore they are not authentic. The CAPS evaluation, on the other hand, assesses the teacher candidate on his or her actual performance during an entire lesson in a face‐to‐face setting, which is much more authentic.” These comments suggest that the mentor teachers in this study, like the university supervisors, valued the CAPS assessment because they perceived it to be a more holistic, authentic measure representing teacher candidate abilities.
Teacher candidates echoed this idea, stating that the CAPS evaluation, which included the participation of their university supervisors and mentor teachers, taken together with other measures like the GACE and OPI, offered a better indication of their effectiveness as new teachers. Candidate 1 stated, “It would be interesting to see if I would have received a score within the same range if I had submitted a sample of my own teaching without knowing what I needed to include to please the system.” This is notable in that she scored well on the edTPA but poorly on other measures. Candidate 3, who scored well on all measures, including the edTPA, echoed this statement, saying that although there was basic alignment, there were also inconsistencies; for example,
There were some discrepancies as my overall teaching ability in the classroom itself was not effectively measured [by the edTPA] and would not have been effectively measured without the presence of an observer to help me find my weaknesses. I do not agree that a few short clips of my teaching accurately measure my abilities as a whole.
Taken together, the responses indicate that stakeholders felt that the edTPA requirements did not constitute a holistic measure of pedagogical practices or a complete assessment of teacher candidate readiness. They also felt that the edTPA did not align with other measures used to evaluate teacher candidates, which is consistent with the quantitative data reported above.
Integration of the edTPA Into Standard Clinical Practice
Responses from each group of stakeholders also focused heavily on the integration of the edTPA within the standard clinical practice model. All university supervisors felt that candidates had to complete their edTPA portfolios too early during the semester, before being fully integrated into their classrooms and without having sufficient formative evaluation experiences to hone their skills. According to one university supervisor, “edTPA can be done in this timeframe, but what is the cost? Instead of reflecting on their students, focusing on classroom management, and getting acclimated to student teaching, student teachers are fixated on edTPA.” Another university supervisor echoed this sentiment, stating that introducing the edTPA measure pulled the focus away from students’ classroom needs toward logistical requirements:
The clinical practice period is meant to give candidates the opportunity to learn about planning, instruction, and management in a real‐world environment—this requires an enormous amount of work already. Adding edTPA to this increases their workload and stress level and takes their focus away from the true goals of clinical practice.
Mentor teacher responses also indicated that the edTPA process was largely prescriptive and not connected to authentic practices in the classroom. For these participants, the process was equated more with paperwork to be completed rather than the creation of a rounded portfolio demonstrating candidate readiness. One mentor teacher stated,
edTPA robs student teachers of precious observation and planning time. Additionally, student teachers are so busy formatting lessons into 10‐page documents that they have no time to interact with students or develop teacher‐student relationships… . edTPA should fit naturally and effectively into the clinical teaching experience. As it is, it demands that the natural flow of the classroom bend to meet its requirements.
Another mentor teacher felt that the candidates had been “kidnapped for four to six weeks and forced to do busywork.” These participants felt that the process seemed inauthentic and therefore changed clinical practice in disadvantageous ways. Like the university supervisors, they expressed serious concerns regarding the timing and placement of edTPA in the clinical practice semester.
Responses from the teacher candidates mirrored those of the university supervisors and mentor teachers. It should be noted that these perspectives were shared despite candidates’ differing scores on the edTPA and other evaluation measures. Candidate 3, who scored well on all measures—including edTPA—stated, “I found that during my observation time I was struggling to meet deadlines and write commentaries instead of learning from the experienced teachers around me.” She noted that her focus and pedagogical choices were affected by the edTPA implementation and timeline, saying, “My decisions for planning were not based upon the needs of my students, but rather the demands of the program.” Candidate 5, who also scored well on the edTPA, felt that the assessment negatively affected her planning and instruction: she was obliged to include certain elements in her lessons, in a way that did not flow naturally for her instructional context, in order to demonstrate that she had met the requirements of the edTPA rubric. She also felt that the 15‐minute video submission parameters did not allow her to show a progression of introducing concepts using a variety of pedagogical techniques throughout the learning process: “In my opinion, edTPA is too prescriptive and cookie‐cutter. Because the edTPA wants to see XYZ, you have to somehow fit XYZ into your activities/plans: therefore the lessons in my unit were not fully mine.” For these teacher candidates, the process of building an edTPA portfolio interfered with the natural flow and real‐world experiences of the clinical practice semester.
Considerations for Nonnative Speakers of English
Respondents within all three participant groups mentioned the impact of the edTPA on candidates whose native language was not English. Supervisors noted that native speaker teachers are needed but pointed out the challenges that these candidates faced due to the extensive commentaries and the required use of edTPA‐specific language that may not normally be used in professional preparation contexts. As one supervisor stated, “Since edTPA requires extensive commentaries in English, this puts native and heritage speakers of languages other than English at a disadvantage compared to their counterparts who are native speakers of English.” Another university supervisor echoed this sentiment, stating, “EdTPA requires extensive writing in English. Advanced to superior writing skills are required to reflect on and respond to concepts that border on educational jargon. The process is a challenge to nonnative speakers of English.” Each of these supervisors focused on the potential disadvantages created for candidates who were not native speakers of English.
Candidate 6, a native speaker of Spanish, felt that the extensive nature of the required documentation for edTPA had a negative impact on her ability to plan lessons for which she could have infused a deeper cultural context. She stated, “I had to spend a substantial amount of time perfecting my English for the edTPA documents, leaving me less time to effectively plan for rich, cultural lessons in Spanish.” As another university supervisor commented,
Native and heritage language speakers are valued in a world language classroom and effective teacher preparation programs ensure that candidates have the needed level of English language proficiency. When restrictive practices biased in favor of native speakers of English are introduced into teacher preparation programs, we risk depriving our students of the wonderful experience of learning from a native speaker.
In the statement above, the supervisor expressed concern that the implementation of the edTPA may contribute to the world language teacher shortage because she viewed the edTPA as being significantly more challenging for nonnative speakers of English than for those whose primary language is English. Taken together, these statements suggest that the extensive advanced‐level writing requirements in English can pose problems for candidates with nontraditional backgrounds and for those whose primary language is not English.
Discussion
World language teacher candidates need a broad platform on which to base their professional practice, including knowledge of the target language and culture(s); pedagogical content knowledge; the ability to plan and deliver proficiency‐ and standards‐based lessons; and the ability to design, interpret, and act on data that are obtained from formative and summative assessments. Candidates also must possess a deep understanding of their instructional contexts as well as their students’ diverse interests and needs, and they must also learn to effectively manage the learning environment. Furthermore, world language teachers must possess and sustain a high level of oral and written proficiency in the target language in order to provide an acquisition‐rich environment and to create a meaningful cultural context. Given the diverse knowledge base and set of skills that are necessary to teach a world language effectively, it is appropriate that various assessments be used to measure a teacher candidate's preparation and readiness.
It is not surprising that some teacher candidates in the present study failed to develop a sufficiently high level of skill across all domains by the end of the clinical practice semester; some candidates were weak in target language proficiency (as measured by the OPI) or in content knowledge (as measured by the GACE Spanish exams), while others were weak in planning for and assessing standards‐ and proficiency‐based learning or in reflective writing (as measured by the edTPA). Because teacher candidates have different learning experiences and each brings unique strengths and weaknesses, the findings emphasize the importance of using a range of local, state, and nationally endorsed assessments as well as the critical role of teacher educators in helping each candidate meet professional expectations across all domains. In states and programs where the edTPA is a requirement for certification, teacher education programs need to ensure that all of their candidates are prepared for this rigorous assessment and that data from other, complementary assessments are also considered when making decisions about successful completion of a teacher preparation program that leads to licensure and certification. In addition, candidates, supervisors, and mentors need to understand the edTPA assessment's rationale, expectations, and fit with other required assessments, as well as the manner in which implementing it will alter traditional roles and expectations.
Understanding Assessment Types and Goals
The findings suggest that teacher candidates, mentors, and supervisors lack an understanding of the distinctive skill sets that are addressed by the various assessments; the extent to which the content of such assessments may, or may not, overlap; and how the various assessments that are required by their state and/or institution work together to paint a broad picture of the total skill set that is required of world language teachers. In addition, it is possible that teacher candidates, mentor teachers, and university supervisors may have confused the formative nature of the CAPS evaluations, which occurred throughout the clinical practice semester and were designed to facilitate coaching and growth based on shared discussion, with independent and summative evaluations like the GACE, OPI, and edTPA, for which dialogue between candidates, mentors, and evaluators is neither possible nor acceptable. Conversations that clarify these distinctions would help affirm that the role of the mentor teacher, in particular, is not diminished by the inclusion of the edTPA as a national assessment.
Furthermore, the present study compared the CAPS, which is a generic evaluation instrument that was not specifically developed to measure world language instruction, with the edTPA, a content‐specific summative assessment that was designed to reflect “best practices” in world language instruction. While the CAPS is used by evaluators across disciplines, its performance standards are sufficiently broad that mentors and supervisors who are subject matter experts can determine whether candidates engage in research‐based and/or best practices in their specific disciplines. In addition to evaluating candidates’ ability to plan, instruct, and assess based on state (Georgia Performance) and national (ACTFL World‐Readiness) standards (National Standards Collaborative Board, 2015), the CAPS instrument also assesses candidates’ content and pedagogical content knowledge, their instructional strategy use, and their ability to differentiate instruction. Candidates, mentors, and supervisors need to be aware of both the similarities and the differences between the CAPS general evaluation instrument and the edTPA discipline‐specific assessment so that all participants better understand how, why, and in what domain(s) teacher candidates are being assessed.
In addition, mentor teachers and university supervisors may benefit from content‐specific pedagogical training. While the university supervisors in this study were experts in world language education, supervisors at many institutions are content generalists and may in fact need professional development experiences that address the proficiency guidelines for listening, speaking, reading, and writing as well as national and state standards for language learning—in particular, the three modes of communication. Although the various handbooks prepared by SCALE provide excellent insights into the goals and operationalization of the edTPA, mentor teachers and supervisors may need more sustained and personalized explanations as well as examples of exemplary work.
The edTPA and the Role of Mentors
Prior studies have described the changing roles of candidates, mentors, and supervisors as a result of the implementation of the edTPA portfolio assessment (Kleyn et al., 2015; Sandholtz & Shea, 2012). Teacher candidates universally perceive the final clinical practice semester as the most important component of their teacher preparation coursework and the role of the mentor as critical for success in their teacher preparation programs (Kirk, Macdonald, & O'Sullivan, 2006; Weiss & Weiss, 2001). While mentor teachers have a great deal of responsibility during the final clinical practice semester, research has shown that current practices are inadequate for ensuring that they are prepared for the task of mentoring (Clarke, Triggs, & Nielsen, 2014; Glickman & Bey, 1990; Knowles & Cole, 1996; Metcalf, 1991).
According to Tedick and Walker (1995), there is often a mismatch between what teacher candidates learn in their teacher preparation programs and what they actually observe in the field. Sometimes there is also a disconnect between what mentors report doing and what they actually do in the classroom. Izadinia (2015) found that some mentor teachers’ behaviors were not aligned with the educational theories that they claimed to support. Clarke et al. (2014) highlighted the difficulty of placing candidates with suitable mentors because mentor teachers are essentially volunteers who, in addition to their professional responsibilities, assume the responsibility of working with teacher candidates. Moreover, university personnel who are responsible for making student teaching placements are typically unable to be highly selective in making placement decisions, which can be especially challenging when there is a small pool of volunteers (Clarke et al., 2014).
A national assessment such as the edTPA may help level the requirements for all teacher candidates. If mentor teachers are made aware of this purpose, then they may be more supportive of it in the future. The findings of the present study echo Miller et al. (2015), who asserted that mentor teachers and university supervisors alike need professional development training on the edTPA so that they can provide candidates with support from various perspectives. Taken together, these findings underscore the importance of sustained professional development that addresses proficiency‐ and standards‐based curriculum, instruction, and assessment, which will help mentors and supervisors alike better align their practice and coaching with the types of pre‐practicum training that their future teacher candidates will receive.
Timing of edTPA Assessment
The qualitative data suggested that the timing of the edTPA as placed within candidates’ programs of study at the institution in question is a concern: The edTPA portfolio must be submitted during Week 7 of the clinical practice semester in order to give candidates an opportunity to repeat a portion of the portfolio during the same clinical practice semester rather than requiring candidates to repeat the entire clinical practice experience if they fail to meet the state expectations. However, it appears that this summative assessment falls too early in the semester: Teacher candidates may not have had time to become sufficiently aware of the unique features of the instructional context, investigate students’ individual learning needs, and master skills that are required to complete the three edTPA tasks. What is more, the qualitative analysis revealed that both the mentors and the candidates believed that the intense focus on the edTPA at the beginning of the semester hampered the candidates’ ability to build relationships with their students and with other world language teaching faculty in the department. The extensive writing commentaries, in particular, appeared to consume much of the candidates’ time during the first 7 weeks of the semester. Moving the edTPA into the final weeks of student teaching or perhaps extending the final clinical practice experience over two semesters would address these concerns.
The edTPA and Native Speakers of Languages Other Than English
As noted by Troyan and Kaplan (2015), all teacher candidates may need explicit training in the kind of reflective academic writing that is valued by the edTPA. Moreover, teacher candidates who are nonnative speakers of English may need additional remediation in academic English as well as more intensified practice in reflective writing prior to the final clinical practice semester. Both candidates and university supervisors felt that the extensive commentaries in English that were required by the edTPA placed heritage and native speakers of languages other than English at a disadvantage compared to their peers who were native speakers of English. Perhaps allowing candidates to write commentaries in their strongest language (either the target language or English) would place the raters in a better position to evaluate candidates’ knowledge base and whether their pedagogical decisions are grounded in research and theory. The findings of this study stand in contrast to data in the edTPA Annual Administrative Report (SCALE, 2015), which examined candidates across all disciplines and did not find significant differences in mean scores between candidates whose primary language was or was not English. The findings of this study may perhaps be due to the exclusive focus on world language teachers and to the very small number of participants. However, more research is needed before any definitive claims can be made with respect to the edTPA and world language candidates who are nonnative speakers of English.
The edTPA and Target Language Instruction
Finally, since the edTPA portfolio assessment only allows candidates to submit two video clips totaling no more than 15 minutes of instruction (drawn from a 3‐ to 5‐hour learning segment), it may be difficult for edTPA raters to evaluate the instructional context and the extent to which target language instruction is occurring at least 90% of the time. In this study, the teacher candidate who had the lowest target language proficiency (Intermediate Mid as measured by the OPI) and who struggled to deliver instruction in the target language (as measured by the CAPS communication standard) achieved the highest edTPA score of the group of seven candidates. This suggests that 15 minutes of video may be insufficient for assessing the daily instructional practices of teacher candidates effectively. However, it should be noted that having a high level of proficiency or native speaker status does not necessarily ensure that teacher candidates will adhere to best practices or that they will not have difficulty mastering the broad range of skills that are necessary to become effective practitioners (Hall Haley & Ferro, 2011; Kissau, Algozzine, & Yon, 2011). Nevertheless, the qualitative results of the present study suggest that mentor teachers and content‐specialist university supervisors may be in a better position to evaluate a candidate's ability to use the language in a sustained manner with their students and to promote communicative use of the target language by their students. This finding supports Liu and Milman (2013), who claimed that context, an important factor in teacher candidate evaluation, may be difficult to capture with the edTPA assessment.
Limitations
The present study had a number of limitations. First, the mentor teachers did not receive training on the edTPA and, because this was the pilot year, they had not previously worked with any candidates who were required to complete an edTPA portfolio. While the mentor teachers were provided with a copy of the edTPA World Languages Handbook (SCALE, 2014), there is no guarantee that they had time to read and familiarize themselves with it. Second, the CAPS is a cross‐disciplinary evaluation instrument that is used as both a formative and a summative assessment, while the edTPA is a discipline‐specific summative evaluation, making comparisons between these two types of assessments difficult. Third, the qualitative data in the present study were used to illuminate and elucidate the quantitative findings; however, they were limited to questionnaires administered to teacher candidates, mentor teachers, and university supervisors. Future studies should aim to include richer qualitative data, such as individual interviews and focus groups with candidates, mentors, and supervisors. Finally, due to the small sample size, the quantitative results must be interpreted with caution.
Conclusion
The findings of this study suggest that the content and purpose of the edTPA as a national assessment were poorly understood by all of the stakeholders involved in the pilot year, particularly the way in which the edTPA complements data from other university‐wide, state‐mandated, and national assessments. Going forward, it may be necessary to reconcile university‐wide assessments with the edTPA and other state and national assessments because university‐wide evaluations, such as the CAPS, provide a window into teacher candidates’ future evaluations as novice teachers. Because the edTPA is a relatively new assessment for world languages, more research is urgently needed, especially in the states where the edTPA is currently used to determine initial certification in a world language. Future studies may address in greater detail candidates’ readiness to prepare lengthy written commentaries, particularly for those whose native language is not English, as well as the impact of allowing longer recorded excerpts to better capture candidates’ daily instructional practices, including the delivery of instruction in the target language. On a national level, the addition of the edTPA may equalize the requirements for all teacher candidates and thus has the potential to level clinical practice expectations, normalize best practices, strengthen teacher preparation programs, and improve students’ learning experiences. However, it will be important to balance attempts to raise standards for teacher preparation, and thus ensure the readiness of prospective teachers, against concerns about the increasingly worrisome shortage of world language teachers across the nation.
Notes
1. States that require the edTPA for licensure include Georgia, Illinois, Minnesota, New York, Wisconsin, and Washington; Hawaii (effective July 2017) and Oregon (effective September 2017) will soon join them. States that have approved the edTPA as a possible assessment option include Arkansas, California, Delaware, Iowa, New Jersey, and Tennessee. States that are in varying stages of exploring, introducing, or piloting the edTPA include Alabama, Arizona, Colorado, Connecticut, Florida, Idaho, Indiana, Maryland, Michigan, Louisiana, Mississippi, Nebraska, North Carolina, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Carolina, Texas, Utah, Vermont, Virginia, West Virginia, and Wyoming. States not participating in the edTPA include Alaska, Kentucky, Maine, Massachusetts, Missouri, Montana, Nevada, North Dakota, South Dakota, Kansas, New Hampshire, and New Mexico (American Association of Colleges for Teacher Education, 2015).
2. There are five main levels for the ACTFL OPI (Novice, Intermediate, Advanced, Superior, and Distinguished); the Novice, Intermediate, and Advanced levels are each further broken down into three sublevels (Low, Mid, and High). The language functions and tasks that speakers can perform, as well as the level of accuracy, contexts, content, and type of discourse associated with each level, are described in the ACTFL Proficiency Guidelines for Speaking (1999, 2012). The OPI may be taken by phone or in person with a certified OPI rater.
3. While teacher preparation programs across the United States aspire to graduate novice teachers who have attained ACTFL's minimum proficiency recommendations or higher, the fact remains that many candidates do not reach this benchmark by graduation (Cooper, 2004; Glisan, Swender, & Surface, 2013; Liskin‐Gasparro, 1999; Schulz, 2000; Vélez‐Rendón, 2002). Glisan et al. (2013) examined the official OPI scores of 1,957 teacher candidates from 2006 to 2012 and found that 45% of the examinees were unable to reach ACTFL's minimum proficiency recommendation for certification (p. 276).
Acknowledgments
We would like to thank all of the teacher candidates, mentor teachers, and university supervisors who took the time to participate in this research study. We are also very grateful to the anonymous reviewers for their insightful comments and suggestions. Finally, we would like to thank the editor, Dr. Anne Nerenz, for the extensive time that she dedicated to working with us during the review process. Her expertise and her many years of experience, both as a teacher educator and as an editor, enabled us to identify and incorporate many more practical implications that could be drawn from our results. Our work was strengthened considerably thanks to her input and guidance.
APPENDIX
Teacher Candidate Assessments
| Instrument | Level/Type | Proficiencies Assessed | Scoring |
|---|---|---|---|
| edTPA | National Summative | Planning, Instruction, Assessment | 13–65 pts¹ |
| CAPS | State/Local Formative and Summative | General Teaching Performance Standards | 10–40 pts² |
| GACE 141 | State Summative | Target Language Reading and Writing | 100–300 pts³ |
| GACE 142 | State Summative | Target Language Listening and Speaking | 100–300 pts³ |
| OPI | National Summative | Target Language Speaking | Novice Low–Distinguished⁴ |
- ¹ A passing score on the edTPA in Georgia is currently 29 or higher; it will be 32 or higher beginning September 1, 2017.
- ² A rating of ineffective on any of the 10 standards is unacceptable on the summative CAPS.
- ³ A score of 220 indicates passing at the Induction level, and a score of 250 indicates passing at the Professional level.
- ⁴ A passing score for teacher candidates is Advanced Low in Spanish or French.