
Teacher candidates' literacy in assessment.



Abstract

The present study investigated graduate and undergraduate teacher candidates' assessment literacy by identifying the extent to which assessment standards were met. Participants' teaching experiences were also examined for their influence on level of assessment literacy. Results showed that graduate teacher candidates had higher assessment literacy than undergraduate teacher candidates, and those with prior teaching experience demonstrated higher assessment literacy. Participants were found to have the most difficulty with communicating the assessment results to others such as parents, school personnel, and students.

Introduction

As Linn and Gronlund (2000) state, educational accountability means higher demands in P-12 classroom assessment, and the number of required assessments will increase in the years to come. Assessment and evaluation greatly impact teachers, students, parents, schools, educational reform, and teacher preparation programs, and they are hotly debated issues in the educational field (Phye, 1997). The No Child Left Behind Act (NCLB) of 2001, signed into law in 2002, requires state public schools to implement accountability systems and mandates that states test students annually in grades 3 to 8 and document schools' progress statewide. [1] With this act's emphasis on accountability and assessment, there is an increase in standardized tests and a greater demand for classroom assessment as well. Given this trend, teacher candidates are now pressured to prepare to assess and evaluate their own students' learning, improve instruction in their classrooms, and interpret externally mandated assessment results. As this federal demand increases, one critical question is: How well prepared are teacher candidates to assess their pupils? To learn about teacher candidates' assessment literacy, an equally important question to raise is: To what extent are the Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, 1990) being met? Researchers have advocated that classroom assessment should support instruction and enhance students' learning (Shepard, 2001). However, studies show that teachers have consistently used a variety of factors in their assessment practices and consequently make erroneous decisions. Even more disturbing, most teachers lack effective assessment knowledge and skills; that is, when evaluating student academic achievement, teachers exhibit misconceptions about assessment practices (Cizek, Fitzgerald, & Rachor, 1996; McMillan, 2001). In short, while many seem to understand assessment, more seem to misunderstand it.

Theoretical Background

Individuals seem to have multiple points of view to describe assessment. As Cizek (1997) states, at least four definitions of assessment can be found in the current literature. Assessment can be referred to as a new format for gathering information about student achievement (e.g., portfolio assessment); a new attitude toward gathering information (e.g., methods "kinder than" standardized testing); a new ethos of empowerment (e.g., information gathered to serve students and teachers); and a new process (e.g., diagnosing and providing alternative instruction for students with learning difficulties). Despite these definitions, one consistent theme in the assessment literature is the many roles assessment plays in the classroom. While one major role is to promote student learning (Shepard, 2001; Stiggins, 2002), teachers are not effective in using assessment to do so.

Assessment Literacy

In a major joint effort to address concerns about classroom assessment and delineate teacher assessment literacy, the American Federation of Teachers, the National Council on Measurement in Education, and the National Education Association developed seven Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, 1990). These standards were intended to guide the preparation of preservice and inservice teachers as effective and skilled educators. [1] The standards cover skills and knowledge in: (1) choosing assessment methods appropriate for instructional decisions; (2) developing assessment methods for such decisions; (3) administering, scoring, and interpreting results of externally produced and teacher-produced assessment methods; (4) using assessment results in making decisions about individual students, instruction, curriculum development, and school improvement; (5) developing valid grading procedures using pupil assessments; (6) communicating assessment results to students, parents, lay audiences, and educators; and (7) recognizing unethical, illegal, and inappropriate assessment methods and uses of assessment information. Similarly, Stiggins (1995) described the importance of having clear standards to define teacher assessment literacy, thereby helping students attain higher academic achievement. [2] As he stated, "without a crystal clear vision of the meaning of academic success and without the ability to translate that vision into high-quality assessments at the classroom, building, and district levels ... we would remain unable to assist students in attaining higher levels of academic achievement" (p. 238). Although Stiggins (1995) detailed five standards to define the concept of assessment literacy, similar to the seven standards mentioned above, they have not been widely cited or used in the literature. These standards are: (1) identifying clear purposes of assessment; (2) focusing on achievement targets; (3) selecting proper assessment methods; (4) sampling student achievement; and (5) avoiding bias and distortion.

Although these standards should be integral to teacher education programs to ensure preservice teachers' assessment literacy, few studies have examined how the standards ascertain teacher candidates' competence in classroom assessment. Plake and Impara (1997) conducted a national survey that measured the competence levels of inservice teachers in these seven areas. Teachers were found to generally have some knowledge of administering assessments, but less knowledge of communicating assessment results to others. However, the number of teachers participating in the study was very small (e.g., only eight for New York State); thus, more in-depth studies are needed. Mertler (2005) indicated that assessment literacy means meeting the seven competence standards delineated by AFT, NCME, and NEA. He compared both inservice and preservice teachers' assessment competence and the effect of classroom/teaching experiences on assessment literacy. The assessment literacy of the two groups was found to differ statistically on Standards 1, 2, 3, 4, and 7; inservice teachers did better on these specific standards than preservice teachers. However, Mertler (2005) did not clarify whether the inservice teachers had taken assessment courses during their teacher preparation; thus, they may have scored higher for this reason than the preservice teachers, who were taking an assessment course at the time of testing. Furthermore, the testing situation differed for preservice and inservice teachers: preservice teachers completed an assessment literacy questionnaire during their assessment course, while inservice teachers received the questionnaire by mail and/or electronically. Thus, inservice teachers had the opportunity to consult resources to answer the questions. The lack of a controlled testing situation further complicated interpretation of the results.

Rationale and Research Questions

As indicated earlier, the seven standards of assessment competence were intended to guide preservice and inservice teachers in their preparation as educators. However, very few studies (Impara, Plake, & Fager, 1993; Mertler, 2005; Plake & Impara, 1997) have specifically examined inservice and/or preservice teachers'--teacher candidates'--knowledge of assessment to meet these standards. In addition, no study to date has examined whether taking assessment courses ensures that teacher candidates are increasing their assessment literacy and meeting these standards. Most important, information gathered on preservice teachers' knowledge of assessment before and after taking assessment courses could help educators who teach such courses to make better instructional and curriculum decisions, since these standards guide the development of assessment courses in many teacher preparation programs (Gallagher, 1998). Therefore, the present researcher examined secondary teacher candidates' knowledge of classroom assessment before and after taking assessment courses in a teacher preparation program. In the present study, three questions were answered: (1) To what extent were the seven standards of assessment met before and after taking an assessment course? (2) To what extent did undergraduate and graduate preservice teachers differ in their assessment literacy? and (3) To what extent does having teaching experience influence assessment literacy?

Methods

Participants The participants (25 undergraduate, 36 graduate) were teacher candidates in the Adolescent Education Program in an urban public college in New York City. With the approval of the college's Human Subjects Committee, teacher candidates were recruited during the first week of school in Fall 2004 and Spring 2005. Sixty-one teacher candidates volunteered to participate by completing one survey at the beginning and one at the end of each semester. All participants were preparing to be middle school (n = 9) or high school (n = 52) teachers, with concentrations in such subject areas as English, Mathematics, Social Studies, and Science (e.g., biology, chemistry, and physics).

Measures The present study used two measures. First, a 35-item Assessment Literacy questionnaire developed by Plake and Impara (1997) measured teachers' knowledge of classroom assessment. These validated items were aligned with the Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, 1990). With five items per standard, 35 items were designed to measure the seven standards. The second instrument (11 items), adapted from Impara, Plake, and Fager (1993), gathered background information on teacher candidates' assessment experiences and asked perception questions on their interests in learning about assessment and attitudes toward testing. Dr. Plake granted permission to use these questionnaires.

Procedures The participants were informed about this study and their rights, and assured that their responses would remain confidential. Each testing session lasted about 40 minutes. Once their written consent was obtained, the teacher candidates filled out the Assessment Literacy questionnaire (i.e., pre-test) during the first week of the semester. The same questionnaire (i.e., post-test) was re-administered during the last 2 weeks of the semester.

Results

None of the undergraduate teacher candidates had teaching experience at the time they completed the questionnaires. Of the graduate teacher candidates, 25 indicated having some teaching experience (n = 16 for less than 1 year; n = 1 for 2 years; n = 5 for 3-5 years; and n = 3 for 6-10 years). The majority of participants had never taken an assessment course before (n = 56 or 91.8%); only 5 (8.2%) indicated taking an assessment course previously. To answer the research question on whether undergraduate and graduate teacher candidates differed in their assessment literacy, independent-samples t tests were computed on pre-test means (M = 17.52, undergraduate; M = 21.17, graduate) and post-test means (M = 19.48, undergraduate; M = 22.51, graduate). Even though both groups' mean scores on the post-test increased from their pre-test means, the two groups differed on both the pre-test and the post-test. The pre-test means were statistically different between the two groups, with a t value (59) of 4.22, a p value of .00, and an effect size of .23. A statistical difference between the two groups was also found on the post-test means, with a t value (59) of 2.55, a p value of .01, and an effect size of .10. Effect sizes of .01, .06, and .14 were considered small, medium, and large, respectively (Cohen, 1992; Green & Salkind, 2003). Therefore, the effect size of the mean difference between the undergraduate and graduate teacher candidates on the pre-test was considered large, while the effect size of the post-test mean difference was considered medium.
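
For readers who want to see the mechanics behind comparisons like these, the sketch below computes an independent-samples t test and an eta-squared effect size in Python with SciPy. It is a minimal illustration only: the score vectors are invented stand-ins for 35-point Assessment Literacy totals, not the study's data, and treating the reported effect sizes as eta-squared values is an assumption suggested by the .01/.06/.14 benchmarks cited from Cohen (1992), not something the article states.

```python
# Minimal sketch: independent-samples t test plus an eta-squared effect size.
# The score vectors are hypothetical stand-ins, NOT the study's data.
import numpy as np
from scipy import stats

undergrad = np.array([16, 18, 17, 19, 15, 20, 18, 17])  # hypothetical 35-point totals
graduate = np.array([22, 20, 23, 19, 21, 24, 20, 22])   # hypothetical 35-point totals

t, p = stats.ttest_ind(undergrad, graduate)  # two independent groups
df = len(undergrad) + len(graduate) - 2

# Eta-squared from t and its degrees of freedom (assumed effect-size metric):
eta_sq = t**2 / (t**2 + df)

print(f"t({df}) = {t:.2f}, p = {p:.3f}, eta-squared = {eta_sq:.2f}")
```

Under that assumption, the reported pre-test comparison, t(59) = 4.22, works out to an eta-squared of about .23, which matches the value given above.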

Although the majority of participants (all undergraduate and a few graduate teacher candidates) indicated having no teaching experience (n = 36 or 59%), some graduate teacher candidates indicated they had some (n = 25 or 41%). To answer the research question on whether teaching experience influenced assessment literacy, another set of independent-samples t tests was computed to compare mean differences between the two groups. For the pre-test means (M = 18.48 for no experience; M = 21.44 for teaching experience), the two groups differed significantly, with a t value (59) of -3.25, a p value of .00, and an effect size of .25, which is a large effect size. For the post-test means (M = 20.18 for no experience; M = 22.96 for teaching experience), the two groups differed significantly, with a t value (59) of -2.29, a p value of .03, and an effect size of .08, which is a medium effect size. To answer the main research question--To what extent were the seven standards of assessment met before and after taking an assessment course?--paired-samples t tests and effect sizes were computed on pre-test and post-test mean scores for each standard to identify whether the mean differences were statistically significant. Table 1 (see http://rapidintellect.com/AEQweb/fal2005.htm) indicates the group means and standard deviations on pre-test and post-test scores for each standard, as well as t statistics and effect sizes. As shown in Table 1, at the beginning of the semester, teacher candidates as a group (N = 61) earned 19.73 points out of 35; at the end of the semester, they earned 21.46 points out of 35. The paired-samples t test showed a statistically significant difference, with a t value (60) of -3.36, a p value of .00, and an effect size of .16, which is a large effect size. Specifically, at the beginning of the semester, preservice teachers scored lowest on Standard 6 (M = 1.87), which was communicating assessment results, and highest on Standard 3 (M = 3.13), which was administering, scoring, and interpreting assessment results. At the end of the semester, teacher candidates still scored lowest on Standard 6 (M = 2.6), but highest on Standard 1 (M = 3.57), which was choosing assessment methods. Overall, teacher candidates' scores rose at the end of the semester, and some standards gained more points than others (e.g., Standards 1, 4, and 6). The mean differences on Standards 1, 4, and 6 were large, as indicated by their effect sizes (.21, .15, .25, respectively).
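
The per-standard pre/post comparisons follow the same logic with a dependent (paired) test. As a companion to the sketch above, the example below pairs each candidate's pre-test and post-test subscore for a single standard; again the numbers are hypothetical and eta-squared is an assumed effect-size metric.

```python
# Minimal sketch: paired-samples t test on pre/post subscores for one standard.
# Each value is a hypothetical 0-5 subscore (five items per standard), not study data.
import numpy as np
from scipy import stats

pre = np.array([2, 1, 3, 2, 2, 1, 3, 2, 2, 1])    # e.g., Standard 6 before the course
post = np.array([3, 2, 3, 3, 2, 2, 4, 3, 3, 2])   # the same candidates afterward

t, p = stats.ttest_rel(pre, post)  # paired (dependent) samples
df = len(pre) - 1
eta_sq = t**2 / (t**2 + df)        # assumed effect-size metric, as above

print(f"t({df}) = {t:.2f}, p = {p:.3f}, eta-squared = {eta_sq:.2f}")
```

If the same formula underlies the values reported here, the whole-group result, t(60) = -3.36, corresponds to an eta-squared of roughly .16, consistent with the effect size reported for the overall pre/post gain.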

Discussion

The present researcher examined the assessment literacy of graduate and undergraduate secondary education teacher candidates and compared the assessment literacy of those with and without teaching experience. Results revealed that both graduate and undergraduate teacher candidates significantly increased their assessment literacy after taking an assessment course. However, graduate teacher candidates scored higher than undergraduate teacher candidates on pre-tests and post-tests, even though the majority of participants indicated never having taken an assessment course before. Thus, further analysis was conducted to examine whether those with teaching experience had higher assessment literacy than those without. Findings indicated that teacher candidates with some teaching experience (the majority had 1-5 years) had significantly better assessment literacy than those with no teaching experience, similar to Mertler's (2005) research which found that inservice teachers scored higher on the assessment literacy questions than preservice teachers did.

When examining the extent to which teacher candidates met the seven standards, many results of this study paralleled those of both Mertler (2005) and Plake and Impara (1997). The present participants had the most difficulty with Standard 6 (M = 1.87, pre-test; M = 2.60, post-test), which is communicating assessment results. Plake and Impara (1997) also found that inservice teachers scored the lowest (M = 2.70) on this standard. Even though participants in Mertler's (2005) study likewise did not score high on this standard (M = 2.27, preservice teachers; M = 2.48, inservice teachers), the scores were very similar to those obtained in the present study as well as in Plake and Impara (1997). Similarly, teacher candidates in the present study scored higher (M = 3.13, pre-test; M = 3.45, post-test) on Standard 3 (administering, scoring, and interpreting assessment results), as did the teachers in Plake and Impara (1997) (M = 3.96) and Mertler (2005) (M = 3.24, preservice teachers; M = 3.86, inservice teachers). Despite a few differences among these studies in identifying the degree of assessment literacy for preservice and inservice teachers, the present study showed that communicating assessment results was the most difficult standard to meet. However, the importance of the present study lies in demonstrating that teacher candidates did increase significantly in their assessment literacy by the end of the course; also, for some standards, the scores gained were statistically significantly higher than before taking an assessment course. Thus, an assessment course seems to have a tremendous impact on teacher candidates' assessment literacy. By identifying the strengths and weaknesses of teacher candidates' assessment knowledge and skills prior to and after taking an assessment course, teacher educators can modify instruction and enhance the assessment literacy of teacher candidates.

Conclusions

Results from this study could be used to guide the further development and modification of assessment courses in teacher preparation programs and to motivate teacher candidates to become assessment-literate in accountability-driven environments. Ultimately, providing rigorous assessment courses to teacher candidates can help their future students strengthen academic learning. As Stiggins (2002) indicates, classroom assessment practices need to be reformed so that assessment processes can become integrated into instruction to promote student learning, support instructional decision-making, and provide feedback for teachers on their instructional effectiveness. The present study took an initial step toward understanding the assessment knowledge and skills of teacher candidates, and ultimately toward promoting and sustaining such knowledge and skills in P-12 classroom assessment practices. As the present study and prior studies indicated, one area of focus is communicating assessment results and making instructional decisions accordingly. Thus, in further research on classroom assessment, one focus should be on strengthening preservice teachers' knowledge of accurately communicating students' assessment results. To do so, training programs should spend more time on interpreting assessment results at the informal classroom level as well as the high-stakes state level (i.e., standardized tests). Another area of research should be re-examining the standards that were developed by AFT, NCME, and NEA in 1990. With changing federal educational policies and assessment requirements, the weights of these seven standards should be re-prioritized according to new demands and foci on assessment, and amended as necessary to ensure that teachers are more literate about assessment.

References

American Federation of Teachers, National Council on Measurement in Education, & National Education Association (AFT, NCME, & NEA). (1990). Standards for teacher competence in educational assessment of students. Washington, DC: Author.

Cizek, G. J. (1997). Learning, achievement, and assessment: Constructs at a crossroads. In G. D. Phye (Ed.), Handbook of classroom assessment: Learning, adjustment, and achievement (pp. 1-32). San Diego: Academic Press.

Cizek, G. J., Fitzgerald, S. M., & Rachor, R. E. (1996). Teachers' assessment practices: Preparation, isolation, and the kitchen sink. Educational Assessment, 3, 159-179.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.

Gallagher, J. D. (1998). Classroom assessment for teachers. Upper Saddle River, NJ: Prentice-Hall.

Green, S. B., & Salkind, N. J. (2003). Using SPSS for Windows and Macintosh: Analyzing and understanding data (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Impara, J. C., Plake, B. S., & Fager, J. J. (1993). Teachers' assessment background and attitudes toward testing. Theory into Practice, 32, 113-117.

Linn, R. L., & Gronlund, N. E. (2000). Measurement and assessment in teaching (8th ed.). Upper Saddle River, NJ: Prentice-Hall.

McMillan, J. H. (2001). Secondary teachers' classroom assessment and grading practices. Educational Measurement: Issues and Practice, 20, 20-32.

Mertler, C. A. (2005). Secondary teachers' assessment literacy: Does classroom experience make a difference? American Secondary Education, 33, 76-92.

Phye, G. D. (1997). Classroom assessment: A multidimensional perspective. In G. D. Phye (Ed.), Handbook of classroom assessment: Learning, adjustment, and achievement (pp. 33-51). San Diego: Academic Press.

Plake, B. S., & Impara, J. C. (1997). Teacher assessment literacy: What do teachers know about assessment? In G. D. Phye (Ed.), Handbook of classroom assessment: Learning, adjustment, and achievement (pp. 53-68). San Diego: Academic Press.

Shepard, L. A. (2001). The role of classroom assessment in teaching and learning. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 1066-1101). Washington, DC: American Educational Research Association.

Stiggins, R. J. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 238-246.

Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83, 758-765.

Endnotes

[1] See the U.S. Department of Education website for more detailed information about NCLB's implications on accountability and assessment.

[2] These standards are the most current ones delineated by the professional organizations AFT, NCME, and NEA.

Peggy P. Chen, Hunter College, CUNY

Chen, Ph.D., is an assistant professor in the Department of Educational Foundations teaching classroom assessment and evaluation, and educational psychology.
