References

  • Antil, L. R., Jenkins, J. R., Wayne, S. K., & Vadasy, P. F. (1998). Cooperative learning: Prevalence, conceptualizations, and the relation between research and practice. American Educational Research Journal, 35, 419–454.
  • Ball, D. L., & Rowan, B. (2004). Introduction: Measuring instruction. The Elementary School Journal, 105(1), 3–10.
  • Bell, R. L., Matkins, J. J., & Gansneder, B. M. (2011). Impacts of contextual and explicit instruction on preservice elementary teachers' understandings of the nature of science. Journal of Research in Science Teaching, 48, 414–436.
  • Blank, R. K., Porter, A., & Smithson, J. (2001). New tools for analyzing teaching, curriculum and standards in mathematics and science: Results from Survey of Enacted Curriculum project. Final report. National Science Foundation/EHR/REC. Washington, DC: CCSSO.
  • Borko, H., Stecher, B. M., Martinez, F., Kuffner, K. L., Barnes, D., Arnold, S. C., … Gilbert, M. L. (2006). Using classroom artifacts to measure instructional practices in middle school science: A two-state field test (CSE Report No. 690). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
  • Brennan, R. L. (2001). Generalizability theory. New York: Springer-Verlag.
  • Clare, L., & Aschbacher, P. R. (2001). Exploring the technical quality of using assignments and student work as indicators of classroom practice. Educational Assessment, 7(1), 39–59.
  • College Board. (2009). Science College Board Standards for College Success. Available: http://professionals.collegeboard.com/profdownload/cbscs-science-standards-2009.pdf [August 2011].
  • Gerard, L. F., Spitulnik, M., & Linn, M. C. (2010). Teacher use of evidence to customize inquiry science instruction. Journal of Research in Science Teaching, 47, 1037–1063. DOI: 10.1002/tea.20367.
  • Glenn, J. (2000). Before it's too late: A report to the nation from the National Commission on Mathematics and Science Teaching for the 21st Century. Washington, DC: Department of Education. As of March 22, 2009: http://www2.ed.gov/inits/Math/glenn/index.html
  • Grossman, P., Loeb, S., Cohen, J., Hammerness, K., Wyckoff, J., Boyd, D., & Lankford, H. (2010). Measure for measure: The relationship between measures of instructional practice in middle school English Language Arts and teachers' value-added scores (NBER Working Paper No. 1601).
  • Hill, H. C. (2005). Content across communities: Validating measures of elementary mathematics instruction. Educational Policy, 19(3), 447–475.
  • Hill, H. C., Blunk, M., Charalambous, C., Lewis, J., Phelps, G. C., Sleep, L., & Ball, D. L. (2008). Mathematical knowledge for teaching and the mathematical quality of instruction: An exploratory study. Cognition and Instruction, 26(4), 430–511.
  • Jaeger, R. M. (1998). Evaluating the psychometric qualities of the National Board for Professional Teaching Standards' assessments: A methodological accounting. Journal of Personnel Evaluation in Education, 12(2), 189–210.
  • Kennedy, M. M. (1999). Approximations to indicators of student outcomes. Educational Evaluation and Policy Analysis, 21, 345–363.
  • Knapp, M. (1997). Between systemic reforms and the mathematics and science classroom: The dynamics of innovation, implementation, and professional learning. Review of Educational Research, 67, 227–266.
  • Laguarda, K. G. (1998). Assessing the SSIs' impacts on student achievement: An imperfect science. Menlo Park, CA: SRI International.
  • Le, V., Stecher, B. M., Lockwood, J. R., Hamilton, L. S., Robyn, A., Williams, V. L., … Klein, S. P. (2006). Improving mathematics and science education: A longitudinal investigation of the relationship between reform-oriented instruction and student achievement. Santa Monica, CA: RAND Corporation. As of May 31, 2011: http://www.rand.org/pubs/monographs/2006/RAND_MG480.pdf
  • Lee, O., Penfield, R., & Maerten-Rivera, J. (2009). Effects of fidelity of implementation on science achievement gains among English language learners. Journal of Research in Science Teaching, 46, 836–859.
  • Li, H. (2003). The resolution of some paradoxes related to reliability and validity. Journal of Educational and Behavioral Statistics, 28(2), 89–95.
  • Luykx, A., & Lee, O. (2007). Measuring instructional congruence in elementary science classrooms: Pedagogical and methodological components of a theoretical framework. Journal of Research in Science Teaching, 44, 424–447.
  • Marder, M., Walkington, C., Abraham, L., Allen, K., Arora, P., Daniels, M., … Walker, M. (2010). The UTeach Observation Protocol (UTOP) training guide (adapted for video observation ratings). Austin, TX: UTeach Natural Sciences, University of Texas at Austin.
  • Matsumura, L. C., Garnier, H., Pascal, J., & Valdés, R. (2002). Measuring instructional quality in accountability systems: Classroom assignments and student achievement. Educational Assessment, 8(3), 207–229.
  • Matsumura, L. C., Garnier, H. E., Slater, S. C., & Boston, M. D. (2008). Toward measuring instructional interactions “at-scale.” Educational Assessment, 13(4), 267–300.
  • Mayer, D. P. (1999). Measuring instructional practice: Can policymakers trust survey data? Educational Evaluation and Policy Analysis, 21, 29–45.
  • McCaffrey, D. F., Hamilton, L. S., Stecher, B. M., Klein, S. P., Bugliari, D., & Robyn, A. (2001). Interactions among instructional practices, curriculum, and student achievement: The case of standards-based high school mathematics. Journal for Research in Mathematics Education, 32(5), 493–517.
  • Moss, P. (1994). Can there be validity without reliability? Educational Researcher, 23(2), 5–12.
  • Moss, P. A., Sutherland, L. M., Haniford, L., Miller, R., Johnson, D., Geist, P. K., … Pecheone, R. L. (2004). Interrogating the generalizability of portfolio assessments of beginning teachers: A qualitative study. Education Policy Analysis Archives, 12(32). Retrieved [June 15, 2011] from http://epaa.asu.edu/ojs/article/view/187.
  • National Research Council. (1996). National science education standards. Washington, DC: National Academy Press.
  • National Research Council. (2006). Systems for state science assessment. M. R. Wilson & M. W. Bertenthal (Eds.). Board on Testing and Assessment, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academies Press.
  • National Research Council. (2007). Taking science to school: Learning and teaching science in grades K-8. Washington, DC: National Academies Press.
  • National Research Council. (2011). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. Available: http://www.nap.edu/catalog.php?record_id=13165#toc [July 20, 2011].
  • Pecheone, R., & Chung, R. (2006). Evidence in teacher education: The performance assessment for California teachers. Journal of Teacher Education, 57(1), 22–36.
  • Pianta, R. C., & Hamre, B. K. (2009). Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity. Educational Researcher, 38, 109–119.
  • Pianta, R. C., Hamre, B. K., Haynes, N. J., Mintz, S. L., & La Paro, K. M. (2009). Classroom Assessment Scoring System (CLASS), secondary manual. Charlottesville, VA: University of Virginia Center for Advanced Study of Teaching and Learning.
  • Resnick, L., Matsumura, L. C., & Junker, B. (2006). Measuring reading comprehension and mathematics instruction in urban middle schools: A pilot study of the instructional quality assessment (CSE Report No. 681). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
  • Rowan, B., Camburn, E., & Correnti, R. (2004). Using teacher logs to measure the enacted curriculum: A study of literacy teaching in third-grade classrooms. The Elementary School Journal, 105, 75–102.
  • Rowan, B., & Correnti, R. (2009). Studying reading instruction with teacher logs: Lessons from the study of instructional improvement. Educational Researcher, 38, 120–131.
  • Ruiz-Primo, M. A., Li, M., & Shavelson, R. J. (2002). Looking into students' science notebooks: What do teachers do with them? (CSE Report No. 562). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
  • Ruiz-Primo, M. A., Li, M., Tsai, S. -P., & Schneider, J. (2010). Testing one premise of scientific inquiry in science classrooms: Examining students' scientific explanations and student learning. Journal of Research in Science Teaching, 47, 583–608.
  • SAS Institute, Inc. (2002–2003). SAS 9.1 documentation. Cary, NC: SAS Institute, Inc.
  • Shavelson, R. J., & Webb, N. M. (1991). Generalizability theory: A primer. Thousand Oaks, CA: Sage Publications.
  • Shavelson, R. J., Webb, N. M., & Burstein, L. (1986). Measurement of teaching. In: M. Wittrock (Ed.), Handbook of research on teaching. New York, NY: Macmillan.
  • Silver, E., Mesa, V., Benken, B., Mairs, A., Morris, K., & Star, J. R. (2002). Characterizing teaching and assessing for understanding in middle grades mathematics: An examination of “best practice” portfolio submissions to NBPTS. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
  • Smithson, J. L., & Porter, A. C. (1994). Measuring classroom practice: Lessons learned from the efforts to describe the enacted curriculum—The Reform Up-Close study. Madison, WI: Consortium for Policy Research in Education.
  • Spillane, J. P., & Zeuli, J. S. (1999). Reform and teaching: Exploring patterns of practice in the context of national and state mathematics reforms. Educational Evaluation and Policy Analysis, 21, 1–27.
  • SPSS Inc. (2007). SPSS Base 16.0 user's guide. Chicago, IL: SPSS Inc.
  • Stecher, B. M., & Borko, H. (2002). Integrating findings from surveys and case studies: Examples from a study of standards-based educational reform. Journal of Education Policy, 17, 547–570.
  • Stecher, B., Le, V., Hamilton, L., Ryan, G., Robyn, A., & Lockwood, J. R. (2003). Using structured classroom vignettes to measure instructional practices in mathematics. Educational Evaluation and Policy Analysis, 28(2), 101–130.
  • Tateneni, K., Mels, G., Cudeck, R., & Browne, M. (2008). Comprehensive Exploratory Factor Analysis (CEFA) software version 3.02. As of May 31, 2011: http://faculty.psy.ohio-state.edu/browne/software.php
  • Von Secker, C. E., & Lissitz, R. W. (1999). Estimating the impact of instructional practices on student achievement in science. Journal of Research in Science Teaching, 36, 1110–1126.
  • Wilkerson, J. R., & Lang, W. S. (2003). Portfolios, the Pied Piper of teacher certification assessments: Legal and psychometric issues. Education Policy Analysis Archives, 11(45). Retrieved [July 10, 2011] from http://epaa.asu.edu/ojs/article/view/273.
  • Windschitl, M. (2001). The diffusion and appropriation of ideas in the science classroom: Developing a taxonomy of events occurring between groups of learners. Journal of Research in Science Teaching, 38, 17–42.
  • Wolfe-Quintero, K., & Brown, J. D. (1998). Teacher portfolios. TESOL Journal, 7(6), 24–27.