Teacher candidates' literacy in assessment.

Abstract

The present study investigated graduate and undergraduate teacher candidates' assessment literacy by identifying the extent to which assessment standards were met. Participants' teaching experience was also examined for its influence on assessment literacy. Results showed that graduate teacher candidates had higher assessment literacy than undergraduate teacher candidates, and that those with prior teaching experience demonstrated higher assessment literacy. Participants had the most difficulty communicating assessment results to others, such as parents, school personnel, and students.

Introduction

As Linn and Gronlund (2000) state, educational accountability means higher demands in P-12 classroom assessment, and the number of required assessments will increase in the years to come. Assessment and evaluation greatly affect teachers, students, parents, schools, educational reform, and teacher preparation programs, and are hotly debated issues in the educational field (Phye, 1997). The No Child Left Behind Act (NCLB) of 2001, signed into law in 2002, required state public schools to implement accountability systems; it mandates that states test students annually in grades 3 to 8 and document schools' progress statewide. [1] With this act's emphasis on accountability and assessment, standardized testing has increased, as has the demand for classroom assessment. Under this trend, teacher candidates are pressured to prepare to assess and evaluate their own students' learning, to improve instruction in their classrooms, and to interpret externally mandated assessment results. As this federal demand increases, one critical question is: How well prepared are teacher candidates to assess their pupils? To learn about teacher candidates' assessment literacy, an equally important question is: To what extent are the Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, 1990) being met? Researchers have advocated that classroom assessment should support instruction and enhance students' learning (Shepard, 2001). However, studies show that teachers consistently draw on a variety of factors in their assessment practices and consequently make erroneous decisions. Even more disturbing, most teachers lack effective assessment knowledge and skills; when evaluating student academic achievement, they exhibit misconceptions about assessment practices (Cizek, Fitzgerald, & Rachor, 1996; McMillan, 2001). In short, while many seem to understand assessment, more seem to misunderstand it.

Theoretical Background

Individuals hold multiple points of view on what assessment is. As Cizek (1997) states, at least four definitions of assessment can be found in the current literature. Assessment can refer to new formats for gathering information about student achievement (e.g., portfolio assessment); a new attitude toward gathering information (e.g., methods "kinder than" standardized testing); a new ethos of empowerment (e.g., information gathered to serve students and teachers); and a new process (e.g., diagnosing and providing alternative instruction for students with learning difficulties). Despite these varied definitions, one consistent theme in the assessment literature is the many roles assessment plays in the classroom. While one major role is to promote student learning (Shepard, 2001; Stiggins, 2002), teachers are often not effective in using assessment to do so.

Assessment Literacy

In a major joint effort to address concerns about classroom assessment and delineate teacher assessment literacy, the American Federation of Teachers, the National Council on Measurement in Education, and the National Education Association developed seven Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, 1990). These standards were intended to guide the preparation of preservice and inservice teachers as effective and skilled educators. [1] The standards cover skills and knowledge in: (1) choosing assessment methods appropriate for instructional decisions; (2) developing assessment methods for such decisions; (3) administering, scoring, and interpreting results of externally produced and teacher-produced assessments; (4) using assessment results in making decisions about individual students, instruction, curriculum development, and school improvement; (5) developing valid grading procedures using pupil assessments; (6) communicating assessment results to students, parents, lay audiences, and educators; and (7) recognizing unethical, illegal, and inappropriate assessment methods and uses of assessment information. Similarly, Stiggins (1995) described the importance of having clear standards to define teacher assessment literacy, thereby helping students attain higher academic achievement. [2] As he stated, "without a crystal clear vision of the meaning of academic success and without the ability to translate that vision into high-quality assessments at the classroom, building, and district levels ... we would remain unable to assist students in attaining higher levels of academic achievement" (p. 238). Although Stiggins (1995) detailed five standards of his own to define assessment literacy, similar in spirit to the seven standards above, they have not been widely cited or used in the literature. These standards are: (1) identifying clear purposes of assessment; (2) focusing on achievement targets; (3) selecting proper assessment methods; (4) sampling student achievement; and (5) avoiding bias and distortion.

Although these standards should be integral to teacher education programs to ensure preservice teachers' assessment literacy, few studies have examined how well teacher candidates meet them. Plake and Impara (1997) conducted a national survey that measured inservice teachers' competence in these seven areas. Teachers were found to have some general knowledge of administering assessments, but less knowledge of communicating assessment results to others. However, the number of teachers participating in the study was very small (e.g., only eight from New York State); thus, more in-depth studies are needed. Mertler (2005) defined assessment literacy as meeting the seven competence standards delineated by AFT, NCME, and NEA. He compared inservice and preservice teachers' assessment competence and the effect of classroom/teaching experience on assessment literacy. The two groups' assessment literacy differed statistically on Standards 1, 2, 3, 4, and 7, with inservice teachers outperforming preservice teachers on these standards. However, Mertler (2005) did not clarify whether the inservice teachers had taken assessment courses during their teacher preparation; they may have scored higher for this reason than the preservice teachers, who were taking an assessment course at the time of testing. Furthermore, the testing situation differed for the two groups: preservice teachers completed the assessment literacy questionnaire during their assessment course, while inservice teachers received the questionnaire by mail and/or electronically and thus had the opportunity to consult resources when answering. The lack of a controlled testing situation further complicated interpretation of the results.

Rationale and Research Questions

As indicated earlier, the seven standards of assessment competence were intended to guide preservice and inservice teachers in their preparation as educators. However, very few studies (Impara, Plake, & Fager, 1993; Mertler, 2005; Plake & Impara, 1997) have specifically examined inservice and/or preservice teachers'--teacher candidates'--knowledge of assessment against these standards. In addition, no study to date has examined whether taking assessment courses actually increases teacher candidates' assessment literacy and helps them meet these standards. Most important, information gathered on preservice teachers' knowledge of assessment before and after taking assessment courses could help educators who teach such courses make better instructional and curriculum decisions, since these standards guide the development of assessment courses in many teacher preparation programs (Gallagher, 1998). Therefore, the present researcher examined secondary teacher candidates' knowledge of classroom assessment before and after taking an assessment course in a teacher preparation program. Three questions were addressed: (1) To what extent were the seven standards of assessment met before and after taking an assessment course? (2) To what extent did undergraduate and graduate preservice teachers differ in their assessment literacy? and (3) To what extent did teaching experience influence assessment literacy?

Methods

Participants

The participants (25 undergraduate, 36 graduate) were teacher candidates in the Adolescent Education Program at an urban public college in New York City. With the approval of the college's Human Subjects Committee, teacher candidates were recruited during the first week of school in Fall 2004 and Spring 2005. Sixty-one teacher candidates volunteered to participate by completing one survey at the beginning and one at the end of each semester. All participants were preparing to be middle school (n = 9) or high school (n = 52) teachers, with concentrations in such subject areas as English, Mathematics, Social Studies, and Science (e.g., biology, chemistry, and physics).

Measures

The present study used two measures. First, a 35-item Assessment Literacy questionnaire developed by Plake and Impara (1997) measured teachers' knowledge of classroom assessment. These validated items were aligned with the Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, 1990), with five items measuring each of the seven standards. The second instrument (11 items), adapted from Impara, Plake, and Fager (1993), gathered background information on teacher candidates' assessment experiences and asked perception questions about their interest in learning about assessment and their attitudes toward testing. Dr. Plake granted permission to use these questionnaires.
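
To make the instrument's scoring scale concrete (totals out of 35, standard subscores out of 5), the sketch below shows one plausible way responses could be scored, written in Python. The sequential item-to-standard mapping is an assumption made purely for illustration; the instrument's actual item key is documented in Plake and Impara (1997).

    # Hypothetical scoring sketch for the 35-item questionnaire: one point
    # per correct answer, five items per standard. The sequential
    # item-to-standard mapping assumed here is for illustration only.
    def score_by_standard(responses, answer_key):
        """Return (total score, per-standard subscores) for one respondent."""
        assert len(responses) == len(answer_key) == 35
        correct = [int(r == k) for r, k in zip(responses, answer_key)]
        # Assumed order: items 1-5 -> Standard 1, items 6-10 -> Standard 2, ...
        subscores = {s + 1: sum(correct[s * 5:(s + 1) * 5]) for s in range(7)}
        return sum(correct), subscores

Under this scheme, a respondent's total ranges from 0 to 35 and each standard subscore from 0 to 5, matching the score scales reported in the Results.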

Procedures

The participants were informed about the study and their rights, and assured that their responses would remain confidential. Each testing session lasted about 40 minutes. Once written consent was obtained, the teacher candidates completed the Assessment Literacy questionnaire (i.e., pre-test) during the first week of the semester. The same questionnaire (i.e., post-test) was re-administered during the last two weeks of the semester.

Results

None of the undergraduate teacher candidates had teaching experience at the time they completed the questionnaires. Of the graduate teacher candidates, 25 indicated having some teaching experience (n = 16, less than 1 year; n = 1, 2 years; n = 5, 3-5 years; n = 3, 6-10 years). The majority of participants had never taken an assessment course before (n = 56, or 91.8%); only 5 (8.2%) indicated having taken one previously. To answer the research question on whether undergraduate and graduate teacher candidates differed in their assessment literacy, independent-samples t tests were computed on pre-test means (M = 17.52, undergraduate; M = 21.17, graduate) and post-test means (M = 19.48, undergraduate; M = 22.51, graduate). Although both groups' post-test means increased from their pre-test means, the two groups differed on both the pre-test and the post-test. The pre-test means differed statistically between the two groups, with a t value (59) of 4.22, a p value of .00, and an effect size of .23. The post-test means also differed statistically, with a t value (59) of 2.55, a p value of .01, and an effect size of .10. Effect sizes of .01, .06, and .14 were considered small, medium, and large, respectively (Cohen, 1992; Green & Salkind, 2003). Therefore, the effect size for the pre-test mean difference between the undergraduate and graduate teacher candidates was considered large, while the effect size for the post-test mean difference was considered medium.
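
The study does not state the effect-size formula used, but the reported values are consistent with eta squared computed from each t statistic and its degrees of freedom, the metric to which the .01/.06/.14 benchmarks apply. For the pre-test comparison:

    \eta^2 = \frac{t^2}{t^2 + df} = \frac{4.22^2}{4.22^2 + 59} = \frac{17.81}{76.81} \approx .23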

Although the majority of participants (all undergraduate and a few graduate teacher candidates) indicated having no teaching experience (n = 36, or 59%), some graduate teacher candidates indicated they had some (n = 25, or 41%). To answer the research question on whether teaching experience influenced assessment literacy, another set of independent-samples t tests was computed to compare mean differences between the two groups. On the pre-test means (M = 18.48, no experience; M = 21.44, teaching experience), the two groups differed significantly, with a t value (59) of -3.25, a p value of .00, and an effect size of .25, a large effect. On the post-test means (M = 20.18, no experience; M = 22.96, teaching experience), the two groups differed significantly, with a t value (59) of -2.29, a p value of .03, and an effect size of .08, a medium effect. To answer the main research question--To what extent were the seven standards of assessment met before and after taking an assessment course?--paired-samples t tests and effect sizes were computed on pre-test and post-test mean scores for each standard to identify whether the mean differences were statistically significant. Table 1 (see http://rapidintellect.com/AEQweb/fal2005.htm) presents the group means and standard deviations on pre-test and post-test scores for each standard, as well as t statistics and effect sizes. As Table 1 shows, at the beginning of the semester, teacher candidates as a group (N = 61) earned 19.73 points out of 35; at the end of the semester, they earned 21.46 points out of 35. The paired-samples t test showed a statistically significant difference, with a t value (60) of -3.36, a p value of .00, and an effect size of .16, a large effect. Specifically, at the beginning of the semester, preservice teachers scored lowest on Standard 6 (M = 1.87), communicating assessment results, and highest on Standard 3 (M = 3.13), administering, scoring, and interpreting assessments. At the end of the semester, teacher candidates still scored lowest on Standard 6 (M = 2.60), but highest on Standard 1 (M = 3.57), choosing assessment methods. Overall, teacher candidates' scores rose by the end of the semester, and some standards gained more points than others (e.g., Standards 1, 4, and 6). The mean differences on Standards 1, 4, and 6 were large, as indicated by their effect sizes (.21, .15, and .25, respectively).
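
For readers who wish to mirror these analyses, a minimal sketch follows, using scipy with simulated score vectors; the study's raw data are not published, so the arrays below are hypothetical stand-ins for the actual pre-test and post-test totals.

    # Minimal re-creation of the reported analyses with simulated data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    undergrad = rng.integers(10, 28, size=25)  # hypothetical totals (0-35), n = 25
    graduate = rng.integers(14, 30, size=36)   # hypothetical totals (0-35), n = 36

    def eta_squared(t, df):
        """Effect size from a t statistic: eta^2 = t^2 / (t^2 + df)."""
        return t ** 2 / (t ** 2 + df)

    # Independent-samples t test (e.g., undergraduate vs. graduate means).
    df_ind = len(undergrad) + len(graduate) - 2  # 59, as reported
    t_ind, p_ind = stats.ttest_ind(undergrad, graduate)
    print(f"t({df_ind}) = {t_ind:.2f}, p = {p_ind:.3f}, "
          f"eta^2 = {eta_squared(t_ind, df_ind):.2f}")

    # Paired-samples t test (pre-test vs. post-test for all N = 61 candidates).
    pre = rng.integers(10, 30, size=61)                       # hypothetical pre-test totals
    post = np.clip(pre + rng.integers(0, 6, size=61), 0, 35)  # hypothetical post-test totals
    t_rel, p_rel = stats.ttest_rel(pre, post)
    print(f"t({len(pre) - 1}) = {t_rel:.2f}, p = {p_rel:.3f}, "
          f"eta^2 = {eta_squared(t_rel, len(pre) - 1):.2f}")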

Discussion

The present researcher examined the assessment literacy of graduate and undergraduate secondary education teacher candidates and compared the assessment literacy of those with and without teaching experience. Results revealed that both graduate and undergraduate teacher candidates significantly increased their assessment literacy after taking an assessment course. However, graduate teacher candidates scored higher than undergraduate teacher candidates on both pre-tests and post-tests, even though the majority of participants indicated never having taken an assessment course before. Thus, further analysis examined whether those with teaching experience had higher assessment literacy than those without. Teacher candidates with some teaching experience (most had 5 or fewer years) had significantly better assessment literacy than those with none, a finding similar to Mertler's (2005), in which inservice teachers scored higher on the assessment literacy questions than preservice teachers did.

When examining the extent to which teacher candidates met the seven standards, many results of this study paralleled those of Mertler (2005) and Plake and Impara (1997). The present participants had the most difficulty with Standard 6 (M = 1.87, pre-test; M = 2.60, post-test), communicating assessment results. Plake and Impara (1997) also found that inservice teachers scored lowest (M = 2.70) on this standard. Participants in Mertler's (2005) study likewise did not score high on this standard (M = 2.27, preservice teachers; M = 2.48, inservice teachers), and those scores were very similar to the ones obtained in the present study and in Plake and Impara (1997). Similarly, teacher candidates in the present study scored relatively high (M = 3.13, pre-test; M = 3.45, post-test) on Standard 3 (administering, scoring, and interpreting assessment results), as did participants in Plake and Impara (1997) (M = 3.96) and Mertler (2005) (M = 3.24, preservice teachers; M = 3.86, inservice teachers). Despite a few differences among these studies in the degree of assessment literacy found for preservice and inservice teachers, the present study confirmed that communicating assessment results was the most difficult standard to meet. The importance of the present study, however, lies in demonstrating that teacher candidates did increase their assessment literacy significantly by the end of the course; on some standards, the gains over pre-course scores were statistically significant. Thus, an assessment course appears to have a substantial impact on teacher candidates' assessment literacy. By identifying the strengths and weaknesses of teacher candidates' assessment knowledge and skills prior to and after taking an assessment course, teacher educators can modify instruction and enhance the assessment literacy of teacher candidates.

Conclusions

Results from this study could be used to guide the further development and modification of assessment courses in teacher preparation programs and to motivate teacher candidates to become assessment literate in accountability-driven environments. Ultimately, providing rigorous assessment courses to teacher candidates can help their future students strengthen academic learning. As Stiggins (2002) indicates, classroom assessment practices need to be reformed so that assessment processes become integrated into instruction to promote student learning, support instructional decision-making, and provide teachers with feedback on their instructional effectiveness. The present study took an initial step toward understanding the assessment knowledge and skills of teacher candidates, and ultimately toward promoting and sustaining such knowledge and skills in P-12 classroom assessment practices. As the present study and prior studies indicate, one area needing attention is communicating assessment results and making instructional decisions accordingly. Thus, further research on classroom assessment should focus on strengthening preservice teachers' knowledge of accurately communicating students' assessment results. To that end, training programs should spend more time on interpreting assessment results at the informal classroom level as well as at the high-stakes state level (i.e., standardized tests). Another line of research should re-examine the standards developed by AFT, NCME, and NEA in 1990. With changing federal educational policies and assessment requirements, the relative weights of these seven standards should be re-prioritized according to new demands and foci in assessment, and the standards amended as necessary to ensure that teachers become more literate about assessment.

References

American Federation of Teachers, National Council on Measurement in Education, & National Education Association (AFT, NCME, & NEA). (1990). Standards for teacher competence in educational assessment of students. Washington, DC: Author.

Cizek, G. J. (1997). Learning, achievement, and assessment: Constructs at a crossroads. In G. D. Phye (Ed.), Handbook of classroom assessment: Learning, adjustment, and achievement (pp. 1-32). San Diego: Academic Press.

Cizek, G. J., Fitzgerald, S. M., & Rachor, R. E. (1996). Teachers' assessment practices: Preparation, isolation, and the kitchen sink. Educational Assessment, 3, 159-179.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.

Gallagher, J. D. (1998). Classroom assessment for teachers. Upper Saddle River, NJ: Prentice Hall.

Green, S. B., & Salkind, N. J. (2003). Using SPSS for Windows and Macintosh: Analyzing and understanding data (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Impara, J. C., Plake, B. S., & Fager, J. J. (1993). Teachers' assessment background and attitudes toward testing. Theory into Practice, 32, 113-117.

Linn, R. L., & Gronlund, N. E. (2000). Measurement and assessment in teaching (8th ed.). Upper Saddle River, NJ: Prentice-Hall.

McMillan, J. H. (2001). Secondary teachers' classroom assessment and grading practices. Educational Measurement: Issues and Practice, 20, 20-32.

Mertler, C. A. (2005). Secondary teachers' assessment literacy: Does classroom experience make a difference? American Secondary Education, 33, 76-92.

Phye, G. D. (1997). Classroom assessment: A multidimensional perspective. In G. D. Phye (Ed.), Handbook of classroom assessment: Learning, adjustment, and achievement (pp. 33-51). San Diego: Academic Press.

Plake, B. S., & Impara, J. C. (1997). Teacher assessment literacy: What do teachers know about assessment? In G. D. Phye (Ed.), Handbook of classroom assessment: Learning, adjustment, and achievement (pp. 53-68). San Diego: Academic Press.

Shepard, L. A. (2001). The role of classroom assessment in teaching and learning. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 1066-1101). Washington, DC: American Educational Research Association.

Stiggins, R. J. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 77, 238-246.

Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83, 758-765.

Endnote

[1] See the U.S. Department of Education website for more detailed information about NCLB's implications on accountability and assessment.

[2] These standards are the most current ones delineated by the professional organizations AFT, NCME, and NEA.

Peggy P. Chen, Hunter College, CUNY

Chen, Ph.D., is an assistant professor in the Department of Educational Foundations, teaching classroom assessment and evaluation, and educational psychology.