District self-reporting and teacher retention

Abstract

This article describes the data collection problems encountered during a longitudinal study of teacher retention in Florida. While many researchers experience delays or difficulties during data collection, this team's experiences carry serious implications for the validity of any study or state policy built on district-reported data. Cross-district and within-district factors are discussed.

Background

Our research team embarked on a study to identify the variables that affect teacher retention during teacher shortages. The study focuses on a group of new teachers hired during the 2000-2001 school year, and we took a longitudinal approach to track which of those teachers were retained in their school districts. As a team, we identified and defined the variables we wanted to track in order to develop a predictive model for districts to use in teacher recruitment, and we planned to draw archival data on these variables from our four partner school districts. All we needed to do, we thought, was ask the districts for the information, input the data, and watch the results roll out of SPSS.

That's where the plan went awry. Getting accurate data proved difficult. Our data collection plan was similar to other models (Ross, 2000): we outlined a method for collecting data, designed and assigned responsibility for managing the collection, and created a timeline for completion. We had planned for the data collection portion of our study to take about a month. It became a year-long process during which we encountered problems we had not anticipated--problems that could affect the validity of any policy or research based on self-reported data from school districts.

Cross-district

We identified several data collection problems common to the districts with which we worked. One was the inconsistency across districts in the definitions and categories used to enter teacher information into district databases. Each district had slightly different definitions of key terms such as "alternative certification" and "out-of-field certification." Two districts reported that teachers with temporary state certificates had received those certificates through a course-by-course option (taking only the courses required for certification) when, in fact, the teachers may have completed an approved teacher education program. Some districts coded teachers as teaching out-of-field when they taught limited English proficient (LEP) children without the necessary endorsement, while other districts did not. One district reported that its teachers went through an alternative certification program in a year when the district had no alternative certification program. Our experience confirms Doucette-Gates' (2000) finding from cross-agency data collection that "idiosyncratic vocabularies" pose problems.
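To make the recoding problem concrete, the sketch below shows the kind of crosswalk our team ultimately had to build by hand. It is an illustration only: the district labels, column names, and certification codes are hypothetical stand-ins, not the codes our partner districts actually used.

    import pandas as pd

    # Hypothetical crosswalk from each district's certification codes to the
    # study's common categories; the actual codes varied by district.
    CROSSWALK = {
        "district_a": {"ALT": "alternative certification",
                       "CBC": "course-by-course",
                       "APP": "approved program"},
        "district_b": {"AC": "alternative certification",
                       "CRS": "course-by-course",
                       "TRAD": "approved program"},
    }

    def recode_certification(teachers: pd.DataFrame, district: str) -> pd.DataFrame:
        """Map one district's certification codes onto the common study definitions."""
        recoded = teachers.copy()
        recoded["certification_path"] = recoded["cert_code"].map(CROSSWALK[district])
        # Codes with no agreed-upon meaning are left missing and flagged for
        # follow-up with the district rather than guessed at.
        unmatched = recoded["certification_path"].isna().sum()
        if unmatched:
            print(f"{district}: {unmatched} records with unrecognized certification codes")
        return recoded

Until definitions are agreed on across districts, any such mapping remains a local workaround rather than a solution.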

The number of paths by which one might become certified in Florida causes other problems as well. In Florida, teachers can become certified by completing a college of education program, completing an alternative certification program, taking courses for certification, and/or passing a state test. In addition, a number of out-of-state certified teachers are added annually to Florida's teaching ranks, and the question of how these teachers were certified becomes even more difficult to answer. We also discovered that some districts are able to issue their own emergency certification when an employee is not eligible for a state certificate by any other route. The plethora of paths to certification, of course, led to inconsistency across districts in how teacher preparation was coded.

Consistent with another study (Doucette-Gates, 2000), we found no uniformity in the databases and software the districts were using. School districts used different types of software to store data, and each program categorized and coded data differently, resulting in different types of reports. For instance, one district collected data by examining hardcopy personnel files. Two of the districts kept both electronic databases and hardcopies of personnel information. The fourth district also kept an electronic database, but used different software. In addition, districts choose how to report data to the state, which means there is no standard database shared by the state and its school districts. Some districts reported losses of electronic data, so they filed hard copies, which then had to be entered into the state's database.
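As a rough illustration of what consolidating such differently formatted exports involves, the sketch below reads hypothetical CSV and Excel exports into one common table. The file names, reader choices, and column names are assumptions made for illustration; each district actually delivered its data in its own format.

    import pandas as pd

    # Hypothetical district exports in different formats with different column
    # names, as described above; file names are placeholders.
    SOURCES = [
        ("district_a.csv", pd.read_csv, {"ssn": "teacher_id", "cert": "cert_code"}),
        ("district_b.xlsx", pd.read_excel, {"SSN": "teacher_id", "CertType": "cert_code"}),
    ]

    frames = []
    for path, reader, renames in SOURCES:
        export = reader(path)
        export = export.rename(columns=renames)
        export["source_district"] = path.rsplit(".", 1)[0]
        frames.append(export[["teacher_id", "cert_code", "source_district"]])

    # A single table with a shared schema, ready for recoding and analysis.
    combined = pd.concat(frames, ignore_index=True)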

Within-district

A variety of within-district factors also created problems in data collection.

First, in three of the districts in our study, no single department kept all the teacher data in the same physical place. For example, the largest district stored certification records in its Certification Department, which kept track of all the elements required for state certification, such as test scores and areas of concentration. But we found that another department, one we were told was on "the third floor," stored the actual transcripts. If we wanted to know whether a teacher had completed a state-approved program, for example, we had to look in that department. In one district, the coordinator of the beginning teacher program kept a set of notebooks on new teachers containing information that the school district's information services office, which creates the district's reports, could not account for at all. In fact, the beginning teacher program and the personnel office were in different buildings across town. Only the smallest district kept all teacher data within one department.

Not only was the data kept in different physical places, it was often kept in different electronic places as well. We found it was common for each department within a school district to use different software for coding, entering, and storing data, and these programs were often not connected to one another through the district's local area network. In districts where this was the case, the data we received on teachers varied depending upon which department was reporting it. As with the physical separation of data, this electronic separation meant we often received incomplete information because we had not asked the correct department for it.

Because different departments entered different data in different ways, communication between departments was vital to collecting accurate information. Too often, that communication appeared to be lacking.

The size of Florida's districts created other problems in data collection. We found that the more people who collected and input data, the more inaccuracies appeared in the data. One of the districts in our study, with approximately 6,000 students, is very small relative to other Florida districts (Florida Department of Education, 2002). The personnel department in that district also runs the beginning teacher program, so the one person at the district office with easy access to all of the teacher data was also the one who coded, handled, and entered the data into a single database. We found we could safely rely on that district's self-reported data. At the opposite end of the scale, the largest district in our study employs approximately 14,000 instructional staff members and another 12,000 clerical or support staff (Florida Department of Education, 2002). Its district offices are housed in a large, multi-story building, and several people in various offices were responsible for coding, handling, and entering data. This district, as did the next smaller district, presented us with data that was less reliable.

Finally, some of the districts in our study appeared to lack internal controls and checks on data entry. We encountered mysteriously erroneous data that no one at the district level had caught. Earlier in this paper, we noted that one district reported teachers as prepared through the district's alternative certification program when, in fact, the district had no alternative certification program that year. In another district, several incomplete records were included in our original data; these records contained only teacher names and social security numbers, with no data on certification, degree, or teaching position. The district deleted these records from its database only after our researcher pointed them out. No one else in the district seemed to have checked the final entries, so inaccuracies went undiscovered and uncorrected.
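A completeness check of the kind that appeared to be missing takes only a few lines. The field names below are hypothetical stand-ins for the certification, degree, and teaching position fields we requested; the point is simply that records carrying only a name and social security number can be flagged before they reach the state or a research team.

    import pandas as pd

    # Hypothetical required fields for a teacher record.
    REQUIRED_FIELDS = ["certification_path", "degree", "teaching_position"]

    def flag_incomplete_records(teachers: pd.DataFrame) -> pd.DataFrame:
        """Return records missing any required field so they can be reviewed rather than analyzed as-is."""
        incomplete = teachers[REQUIRED_FIELDS].isna().any(axis=1)
        return teachers[incomplete]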

Conclusion and Implications

Of course, as researchers, we knew it was ultimately our responsibility to make sure the data we collected was complete and accurate. Once we discovered the problems with the data the districts sent us, we sent one person out to all the districts to collect the data personally. While this was a labor-intensive and time-consuming process, it ensured that the data we collected and analyzed was accurate and reliable. When dealing with numerous agencies for data collection, it is better to plan carefully and collect quality data than to collect vast amounts for quantity's sake (Doucette-Gates, 2000). Beyond the amount of time it took one person to clean up the data and make it consistent, the problems with the accuracy of the data were worrisome for other reasons. We began to wonder about the validity of any study or state policy built on district-reported data. Researchers should view district self-reported data with caution unless it has been carefully reviewed; if our experience is any measure, such data may be inaccurate, incomplete, or both.

If the state of Florida, or any other state, hopes to get an accurate picture of who its teachers are and which methods of teacher preparation are most effective, then a more uniform system of identifying and reporting variables needs to be instituted. States and researchers need to be sure that the variables are clearly defined and that those definitions are relayed to the districts. Districts themselves also need to develop more uniform definitions so that the data collected by different departments takes the same form and means the same thing as the data collected by the state.

In addition, the number of paths to certification in Florida should be reduced. Right now, getting certified in Florida is a bewildering maze for teachers and school districts alike. We began to wonder how any researcher, media person, or politician could report that one teacher preparation method is superior to another when there are so many different ways, and so many different definitions of those ways, to get certified.

Finally, follow-up research is needed to track these variables as districts implement state-approved alternative certification programs. The State of Florida is in the process of standardizing alternative certification programs across districts, and research is needed to determine whether the data collected from these more standardized programs is more uniform and accurate. After spending nearly a year organizing the data we received from the districts, we are moving ahead with our study of the variables that affect teacher retention. Along the way, we have learned some valuable lessons about research in the real world of schools and school districts.

References

Broward County Public Schools. (2002). District overview. Retrieved November 25, 2002, from www.browardschools.com/about/overview.htm

Doucette-Gates, A. (2000). Capturing data: Negotiating cross-agency systems. Education and Treatment of Children, 23(1), 5-19. Retrieved February 13, 2001, from First Search database.

Ross, S. (2000). How to evaluate comprehensive school reform models: Getting better by design [Electronic version]. Arlington, VA: New American Schools Development Corp. (ERIC Document Reproduction Service No. ED447599)
Kathleen K. Huie, Florida Atlantic University
Deborah L. Earley, Florida Atlantic University
Mary G. Lieberman, Florida Atlantic University
John D. Morris, Florida Atlantic University
Robert E. Shockley, Florida Atlantic University
Eliah J. Watlington, Florida Atlantic University


Drs. Huie, Earley, Lieberman, Morris, Shockley, and Watlington are a research team from Florida Atlantic University's College of Education. The team's research areas include teacher preparation, recruitment, and retention--areas in which they are currently conducting longitudinal studies.