Lessons learned from the first year of program reviews for ACEI national recognition of elementary programs. (Professional Standards/Teacher Education).
Because the Drafting Committee deliberately chose not to provide a matrix for the new Standards, there is no prescribed structure for responses. Hence, responses to the Standards vary from narrative to differing types of graphic organizers. Responses that relied on narrative focused more on input information (courses, activities, experiences) than on data, while responses that presented data in charts or tables allowed faculty to concentrate more on output data than on input. The fluidity of working without a fixed structure was difficult for many faculties, as well as for reviewers.
Perhaps that was why some reports neglected to address the Standards. This neglect occurred in two ways: 1) the Standards were not mentioned at all; or 2) the report crosswalked the ACEI Elementary Standards to the SEA or INTASC Standards, but the responses addressed the SEA or INTASC Standards rather than the ACEI Elementary Standards. While the crosswalks were valuable to reviewers, institutions seemed to expect reviewers to make the connections between the sets of standards. The burden of proof, however, rests with the program faculty, not the reviewers. Reviewers should not be placed in situations that require them to interpret linkages to standards; the institution must make those connections explicit in the report.
While responses to all 20 Standards noted candidates' knowledge and skills/abilities, fewer mentioned dispositions and impact on student learning. Reports that attempted to address all of the attributes seemed to have a harder time with the last two, dispositions and impact on student learning, both in documentation and in assessment or evaluation practices.
The major difference between NCATE 2000 and ACEI's Standards is the rubrics embedded in the NCATE 2000 Standards. Although the Drafting Committee did not generate rubrics, we have found that two sets of rubrics are needed: one for developing the program report against the Standards and one for reviewing the report document. The reviewing rubrics were developed from two perspectives: determining how the report addressed each Standard individually, and assessing holistically how the quality of the program measures up to one that ought to be "nationally recognized."
In other process areas, faculties did not report data, even when it was available to them, and the data that were reported were often not aggregated so that they could be interpreted and used for decision making and improvement. Often, assessments of candidates' abilities were limited to course grades and state licensure test scores. Many faculties have not moved from reporting what they do to reporting what candidates do. Few institutions have established rubrics (i.e., ways to assess candidates) that clearly delineate proficiency levels. Many reports contained pages and pages of candidates' evidence of performance (e.g., one journal entry from one candidate); such evidence was meaningless to a reviewer who lacked the context of the piece. Context would show the reviewer how the evidence compares to the proficiency levels defined in the rubrics program faculty use to monitor candidates' progression and competencies, and how it is relevant to the Standards.
Two final process issues involve pagination and time. Page numbers were often missing from report documents. Because many reports exceeded the 145-page limit established by NCATE, there seemed to be a "fear of rejection" based on page limits. Time will tell whether the longer documents contained too much material or a practical sampling of performance evidence. The lack of page numbers was problematic and cumbersome for both reviewers and faculty when feedback had to be written as: "Behind Tab 3, the fourth document, or six pages back...." Likewise, reviewers were thwarted when institutions failed to indicate how items listed in an appendix addressed a specific Standard.
Finally, faculties told us two things relevant to performance standards: 1) it takes at least a good year to prepare; and 2) an institution that can address the ACEI Standards will find addressing NCATE 2000 much easier.
Larger institutions seemed to have more problems with the performance orientation than smaller programs did. Electronic submissions that were not Web-based were problematic for reviewers because of differences in software and formats. Likewise, submissions on disks or CDs provided much more information than a program report requires; institutions did not limit their materials to what the specialty association needed. The numerous links were helpful, but unit-related documents, such as the Institutional Report, were not. A Web-based submission would be a better electronic option for transmitting program reports. Explanations, directions, and even cover sheet information were missing, making the non-Web-based submissions extremely challenging.
Program faculties had difficulty comprehending and interpreting the review critique they received from ACEI. The critique looked unfamiliar, especially compared to the critique under the old guidelines; it was much more detailed and much longer (5-8 pages, on average). The review addresses only standards "not met," focusing primarily on aggregated data, findings, interpretations, and the use of data feedback to improve the program, along with evidence supporting candidates' competencies against the Standards. Faculties who had not made the shift from input to output were perplexed and had difficulty interpreting the review. Likewise, faculties who did not have a clear plan for anchor points in candidate assessment had difficulty providing documentation for the Standards. Through 2004-2005, an assessment plan must provide information when aggregated data are missing or lacking.
The new Elementary Standards and the review process are proving to be a challenge to both the institutions and ACEI. What we have learned in the first year of reviewing programs will be used in future training sessions for both institutions and reviewers.
--Nancy Quisenberry and Catheryn Weitman, Professional Standards/Teacher Education Committee
Title Annotation: Association for Childhood Education International
Date: Dec 22, 2001