A quantitative review of functional analysis procedures in public school settings.
Functional behavioral assessments can consist of indirect, descriptive, and experimental procedures, such as a functional analysis. Although the research contains numerous examples demonstrating the effectiveness of functional analysis procedures, experimental conditions are often difficult to implement in classroom settings, and analog conditions often lack ecological validity. This has led some authors to recommend that additional research be conducted to evaluate assessment procedures that are appropriate for classroom settings. This article presents a quantitative review of functional analysis procedures implemented in school settings. Results support previous statements that the literature demonstrating functional analysis procedures in classroom settings is not adequate to guide practice. Practitioners require additional research demonstrating assessment procedures that are appropriate for classroom settings.
The 2004 Individuals with Disabilities Education Improvement Act (IDEIA) states that individualized education program (IEP) teams must address the behavioral needs of students with disabilities when their behavior impedes their learning or the learning of others. Although the reauthorization does not provide specific guidance on procedures, the act does state that IEP teams are required to address problem behavior using a behavior intervention plan based upon a functional behavioral assessment ("IDEIA," 2004). Despite multiple conceptual articles stating the importance of conducting accurate and meaningful functional behavioral assessments in school settings, few empirical studies are available to guide reliable and accurate practice (Asmus, Vollmer, & Borrero, 2002; Gresham, Watson, & Skinner, 2001; Schill, Kratochwill, & Elliott, 1998). The lack of clarity concerning how, when, and under what circumstances a functional behavioral assessment is appropriate and should be conducted has led to increased variation in assessment procedures across school districts and inconsistent definitions of functional behavioral assessments (Gresham et al., 2004; Kates-McElrath, Agnew, Axelrod, & Bloh, 2007; Killu, Weber, Derby, & Barretto, 2006; Weber, Killu, Derby, & Barretto, 2005).
What is a functional behavioral assessment?
Neef and Peterson (2007) state that a functional behavior assessment is designed to obtain information about the function, or purpose, a behavior serves for a person. By gathering data on environmental events associated with the occurrence and maintenance of problem behavior, interventions can be made more consistent with the function of the behavior, and thus more effective (Ervin, Ehrhardt, & Poling, 2001; Gresham et al., 2001; Vollmer & Northup, 1996). Functional behavioral assessment can encompass four phases: collecting data, developing hypotheses, formally testing those hypotheses, and developing interventions based upon the tested hypotheses (Ervin, Ehrhardt et al., 2001). Methods of collecting behavioral data can include both indirect and descriptive procedures. Examples of indirect data collection procedures include reviews of historical/archival records, structured interviews (e.g., the Functional Assessment Interview; O'Neill et al., 1997), and the administration and interpretation of behavior rating scales/checklists (e.g., the Motivational Assessment Scale; Durand & Crimmins, 1988). Descriptive procedures can include direct observations of the frequency, duration, and/or intensity of antecedents, behaviors, and consequences (Gresham et al., 2001). Information drawn from indirect and descriptive procedures allows practitioners to develop hypotheses about environmental variables associated with the occurrence of the problem behavior. Following the development of a behavioral hypothesis, practitioners might implement experimental procedures (e.g., a functional analysis) to test the hypothesis, as an intervention developed based upon an incorrect hypothesis has the potential to worsen behavior (Vollmer & Northup, 1996).
Although the literature often uses the terms functional behavior assessment and functional analysis synonymously, it should be clear that the two terms describe different activities (Cone, 1997; Kates-McElrath et al., 2007). The term functional behavioral assessment (or functional assessment) is typically a general term that describes the process of collecting environmental information to develop a hypothesis about the occurrence of problem behavior. Functional analysis, on the other hand, describes the process of systematically manipulating environmental events to test behavioral hypotheses (Cone, 1997). Even though functional behavioral assessments can provide information regarding the relationship between behavioral and environmental events, functional analysis procedures are often a necessary component to experimentally test and validate these relationships (Asmus et al., 2002; Gresham et al., 2001).
There are three general categories of reinforcement that maintain behavior: positive reinforcement, negative reinforcement, and sensory reinforcement (Carr, 1977). Functional analysis evaluates the function of problem behaviors across these three categories as a means of identifying which category or categories are maintaining the problem behavior of an individual (Asmus et al., 2002). The process involves exposing an individual to analog conditions related to potential categories of reinforcement and recording the occurrence of the problem behavior (Gresham et al., 2001). For behavior demonstrated in a school setting, the variables assessed typically include attention (positive reinforcement), escape from academic demands (negative reinforcement), and play (sensory reinforcement) (Iwata & Worsdell, 2005). During each experimental condition, experimenters reinforce an individual's problem behavior by presenting the specific reinforcer for that condition. The condition that shows the highest frequency/duration of problem behavior represents the primary maintaining variable. Through this process, practitioners are able to test behavioral hypotheses and identify maintaining variables (Iwata, Dorsey, Slifer, & Bauman, 1982/1994; Iwata & Worsdell, 2005).
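As a purely illustrative sketch (all session data are invented and do not come from any reviewed study), the decision logic described above — compare rates of problem behavior across conditions and treat the condition with the highest rate as the likely maintaining variable — can be expressed as:

```python
# Hypothetical sketch of functional analysis decision logic: the condition
# with the highest mean rate of problem behavior is taken as the likely
# maintaining variable. All numbers below are invented for illustration.
sessions = {
    "attention": [1.0, 1.5, 1.1],  # responses/min, positive reinforcement test
    "escape":    [3.0, 3.8, 3.4],  # negative reinforcement (demand) test
    "play":      [0.2, 0.4, 0.3],  # control condition
}

# Mean rate of problem behavior per condition, across replicated sessions
mean_rates = {cond: sum(rates) / len(rates) for cond, rates in sessions.items()}

# The condition with the highest mean rate is the hypothesized function
maintaining_condition = max(mean_rates, key=mean_rates.get)
print(maintaining_condition)  # escape
```

In practice this comparison is made by visual inspection of graphed session data rather than by a single summary statistic; the sketch only captures the underlying comparison across conditions.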
Implementing Functional Analysis Procedures
To demonstrate that changes in a behavior are due to specific variables, practitioners must be able to control extraneous variables in the environment during experimental procedures (Mace, Yankanich, & West, 1988). As a result, standard functional analysis procedures are often not practical for applied settings such as classrooms. When conducting a functional analysis within a classroom environment, it can be difficult to maintain procedural integrity because potential maintaining variables (e.g., peer attention) are difficult to control (Ellis & Magee, 1999, 2004). In addition, school personnel are often understandably reluctant to allow the implementation of functional analysis procedures within classroom settings, as such procedures may increase risks to teachers and peers (Desrochers, Hile, & Williams-Moseley, 1997) and may reduce instructional time (Ellingson, Miltenberger, Stricker, Galensky, & Garlinghouse, 2000). Given these inherent difficulties, practitioners often implement functional analysis procedures in analog conditions to control extraneous variables and improve the validity of the findings. Unfortunately, functional analysis procedures that present conditions in an analog setting can have limited ecological validity concerning the consequences present in the classroom where the behavior occurs (Conroy, Fox, Crain, & Jenkins, 1996).
By focusing on analog conditions, practitioners are often promoting the occurrence of problem behavior in settings that are significantly different from the natural setting where the behavior occurred. The motivational variables identified during analog conditions may not be consistent or relevant for behaviors that occur in the natural environment (Conroy et al., 1996; Mace, 1994; Repp & Karsh, 1994). Lack of agreement between results of functional analyses and the variables that maintain behavior in natural settings raises concerns about the ecological validity of using standard analog conditions to assess problem behavior (Conroy et al., 1996). Considering these limitations, numerous researchers have questioned the utility, relevance, and applicability of using analog functional analysis procedures in classroom settings (Conroy et al., 1996; Ellis & Magee, 2004; Martin, Gaffan, & Williams, 1999; Sturmey, 1995; Sugai, Sprague, & Horner, 1999).
Despite a lack of abundant research clearly demonstrating the appropriateness of conducting functional analyses in school-based settings, a number of articles attempt to summarize the existing literature and provide recommendations for practice and future research. These articles, however, do not adequately review the literature related specifically to studies evaluating functional analytic procedures within school settings. For instance, Ervin, Radford et al. (2001) conducted a review of articles that described the functional assessment of behavior in school settings, but they did not focus specifically on functional analysis procedures. Although a review presented by Hanley, Iwata, and McCord (2003) did examine the best practices and future directions of functional analysis procedures, their focus was not limited to the use of functional analysis procedures in school settings. Kates-McElrath et al. (2007) examined the use of functional analysis and functional behavior assessment procedures in school settings, but the focus of their review was to identify common procedural elements, not the implementation of functional analysis procedures. Finally, although Ellis and Magee (2004) provided a review of functional analysis procedures in classroom settings, their review focused on modified functional analysis procedures and alternative experimental methods that practitioners might implement in classroom settings. Although it is important to examine how modifications can improve implementation in classroom settings, additional quantitative information is necessary to evaluate the quality of the research available to guide practice in classroom settings.
The purpose of this review is to examine the literature related specifically to functional analysis procedures implemented within school settings and to provide researchers and school practitioners with a better understanding of the degree to which the literature supports the use of functional analysis in school settings.
To identify the greatest number of published studies that utilized functional analytic procedures, a computer-based and a manual review of the literature from 1992 to 2007 was conducted. To be included in the analysis, articles had to (a) be peer reviewed, (b) involve an experimental manipulation of consequences for inappropriate behavior (i.e., a functional analysis), and (c) involve experimental manipulations within a public school setting. Studies conducted in a home, university, clinic, residential setting, or privately funded setting for individuals with special needs were not included in our analysis. In the computer-based search, we simultaneously used two web-based search engines (PsycINFO and ERIC) to identify articles. We first conducted a search by entering the key terms "functional analysis" and "intervention," with "functional analysis" also listed as a subject term. In a second search of the two databases, "functional analysis" and "school" were entered as key terms. Eight undergraduate students and the first author conducted the manual search. In total, the undergraduates and the first author examined 13 peer-reviewed journals (see Appendix for a list of journals) that either had a clear focus on behavioral assessments and interventions in school settings or had been identified during the web-based search as having published at least one functional analysis article.
Appendix
Journals Reviewed
1. Behavior Analyst
2. Behavior Therapy
3. Journal of Applied Behavior Analysis
4. Journal of Early Intervention
5. Journal of Positive Behavior Interventions
6. Journal of School Psychology
7. Psychology in the Schools
8. School Psychology Quarterly
9. School Psychology Review
10. Behavioral Interventions
11. Education and Treatment of Children
12. Journal of Autism and Developmental Disorders
13. The Analysis of Verbal Behavior
Note: Undergraduate reviews, Journals 1 through 9; first author reviews, Journals 10 through 13.
Coding procedures for each article focused on four areas: location of assessment, participant demographics, assessment procedures, and results of the assessment. Each area comprised specific dimensions categorized by labels found in the articles and recorded for each student in the article. Coding for the location of assessment consisted of two labels, analog settings and classroom settings. We used the analog label for articles that conducted assessment procedures in an analog setting, an empty classroom, or a therapy room. We used the classroom label for articles that demonstrated assessment procedures in preschool, general education, or self-contained classroom settings. Coding focused on the actual location of the assessment procedures, rather than the student's educational placement. To focus this review on assessment procedures relevant to classroom settings, we separated articles into groups based upon the location of the assessment and then applied the following coding procedures.
Coding for student demographics consisted of two dimensions: the number of students in the study, and the psychological and/or mental classification of the students. Students' psychological and/or mental classification included seven labels: mental disability, developmental delay, behavioral or emotional delay, not disabled, genetic syndrome, hearing/vision impairment, and not specified. The 'mental disability' label represented students without another type of psychological or developmental condition (e.g., developmental delay, Down syndrome, etc.). The 'not disabled' label represented students that attended a general education classroom without a formal disability.
Coding for assessment procedures consisted of five dimensions: personnel who presented consequences, personnel who collected data, number of conditions manipulated during assessment procedures, length of sessions, and number of replications of the maintaining condition. The two personnel dimensions each included seven labels: peer, therapist, consultant, assistant, graduate student, teacher, or not specified. The 'consultant' label represented individuals described as an experimenter, professor, researcher, or consultant in the article.
Coding for assessment results consisted of three dimensions: conditions that maintained problem behavior, the average rate of problem behavior during conditions identified as maintaining the problem behavior, and application of assessment results. The maintaining conditions dimension consisted of six labels: alone, tangible, attention, escape, automatic, and other. Maintaining conditions were determined by reading assessment results or by visually examining assessment data presented within the reviewed articles. The average rate of problem behavior dimension included interval and frequency data collected during the condition hypothesized as maintaining problem behavior, as specified in the article for each student. Sixty-seven percent of the articles did not specify the rate of problem behavior, which required visual inspection of the assessment graphs to calculate the average rate of behavior across the condition. The application of assessment results dimension included five labels: no intervention information, non-individualized intervention, antecedent based intervention, punitive establishing operation, and individualized intervention with reinforcement.
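For readers less familiar with the two recording formats coded in the average-rate dimension, the following minimal sketch (with invented observation data, not taken from any reviewed article) shows how a percent-of-intervals measure and a frequency (rate) measure are typically computed:

```python
# Hypothetical sketch of the two data formats coded for average rate of
# problem behavior. All observation values are invented for illustration.

# Interval data: 1 = problem behavior observed during that interval.
intervals = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # ten hypothetical 10-s intervals
percent_intervals = sum(intervals) / len(intervals) * 100  # percent of intervals

# Frequency data: count of responses over an observation period.
response_count = 24       # responses observed (invented)
session_minutes = 10      # length of the observation session
rate_per_minute = response_count / session_minutes  # responses per minute
```

The percent-of-intervals figure for the data above works out to 60%, and the rate to 2.4 responses per minute; the review reports analogous summary values (e.g., mean percent of intervals, mean occurrences per minute) aggregated across students.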
To assess the reliability of the selection criteria, a trained graduate student applied the selection criteria to 23 articles: 13 articles that satisfied the criteria for inclusion and 10 articles that did not. The student received a written description of the selection criteria, along with examples, and then coded articles as either meeting or not meeting criteria. Inter-rater agreement was calculated as the number of agreements divided by the number of agreements plus disagreements, multiplied by 100%. Results indicated 100% agreement between the graduate student's selection codes and the first author's selections.
To check the reliability of the coding procedures, the same graduate student repeated coding procedures for 36% (n = 14) of the articles in the review. The first author provided a written description of the coding criteria, reviewed procedures with the student, and presented examples. The graduate student then independently recoded each dimension using the category labels described by the first author. Inter-rater agreement was calculated as the number of codes in agreement with the first author's codes, divided by the number of agreements plus disagreements, multiplied by 100%. Results indicated 95.3% (range = 86%-100%) agreement between the first author and the graduate student across each dimension. Disagreements occurred most often when the graduate student coded the ability level of the student and the mean rate of behavior, possibly due to vague descriptions of students' ability levels and the need to use visual inspection to code the average rate of behavior.
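The agreement formula used for both reliability checks (agreements divided by agreements plus disagreements, multiplied by 100%) can be sketched as follows; the function name and rater data are illustrative, not drawn from the study:

```python
def percent_agreement(codes_a, codes_b):
    """Point-by-point inter-rater agreement:
    agreements / (agreements + disagreements) * 100."""
    if len(codes_a) != len(codes_b):
        raise ValueError("raters must code the same set of items")
    agreements = sum(a == b for a, b in zip(codes_a, codes_b))
    return agreements / len(codes_a) * 100

# Hypothetical maintaining-condition codes from two raters for four articles
rater_1 = ["escape", "attention", "escape", "tangible"]
rater_2 = ["escape", "attention", "escape", "escape"]
print(percent_agreement(rater_1, rater_2))  # 75.0
```

Because every item is either an agreement or a disagreement, agreements plus disagreements equals the total number of items coded, which is why the function divides by the list length.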
Location of Assessment Procedures
Combined, the web-based and manual searches of the literature yielded 39 articles that demonstrated functional analysis procedures for 98 students. Nineteen articles (48 students) demonstrated assessment procedures in classroom settings, and 17 articles (50 students) demonstrated assessment procedures in an analog setting. Three articles presented assessment procedures in both settings, but the authors counted each student's data only once, based upon the location of the assessment. Of the assessments conducted in a classroom setting, 52.9% occurred in a self-contained classroom, 31.4% in a general education classroom, and 15.7% in a preschool classroom.
Participant Demographics and Assessment Procedures
When examining assessments implemented in classroom settings, most published articles involved a small sample (mode = 1 student, M = 2.18 students; Figure 1, top panel). Although 'not disabled' was the most frequent label used to identify students (34.5%), the majority of assessment procedures focused on students with a variety of disabilities (Figure 1, bottom panel). Assessment procedures typically involved the evaluation of four maintaining conditions (43.8%; Figure 2, top panel) using 10-min sessions (54.2%; Figure 2, bottom panel), across four to five replications (43.75%; Figure 3, top panel). During assessment procedures, teachers typically presented consequences (45.6%; Figure 3, bottom panel), but graduate students generally served as the data collectors (40%; Figure 4, top panel) using interval-recording procedures (83.3%; Figure 4, bottom panel).
[FIGURE 1 OMITTED]
[FIGURE 2 OMITTED]
[FIGURE 3 OMITTED]
[FIGURE 4 OMITTED]
Results of Assessment Procedures
Results of assessment procedures implemented in classroom settings indicated escape/demands as the condition most associated with higher rates of behavior (46.8%; Figure 5, top panel). When examining the application of assessment results, articles often did not list intervention information (41.6%) or did not describe individualized interventions (31.25%; Figure 5, bottom panel). When articles did present information on interventions, only 4.17% of the assessment procedures led to antecedent interventions, and only 22.92% led to the development of individualized interventions based upon reinforcement. For articles that used interval data, problem behavior most often occurred in 41-60% of the intervals during conditions hypothesized as maintaining problem behavior (Figure 6, top panel), with an average of 51.08% (SD = 20.14) of intervals across students. For articles that used frequency data, problem behavior most often occurred at between 2.0 and 3.9 occurrences per minute (Figure 6, bottom panel), with an average rate of 2.42 occurrences per minute during maintaining conditions (SD = 1.28). Across the 48 students, only two demonstrated inappropriate behavior during less than 10% of the intervals. In addition, only one student demonstrated aberrant behavior at a rate of less than once per minute.
[FIGURE 5 OMITTED]
[FIGURE 6 OMITTED]
Assessment Procedures in Analog Settings
After examining assessment procedures implemented in classroom settings, the authors coded assessment procedures implemented in analog conditions and compared the results to the previously mentioned findings. Assessment procedures implemented in analog settings typically focused more on children with developmental delays (Figure 1, top panel) that demonstrated higher rates of problem behavior, as measured by interval recording (Figure 6, top panel), and frequency recording procedures (Figure 6, bottom panel). In addition, assessment procedures in analog settings appeared to rely more on therapists to present conditions (Figure 3, bottom panel) and non-specified data collectors (Figure 4, top panel) using frequency counts of problem behavior (Figure 4, bottom panel). Findings indicated few differences across analog and classroom settings in the number of students evaluated, conditions that maintained problem behavior, and application of assessment results. In addition, the intensity of assessment conditions appeared similar across settings given minimal differences in length of assessment sessions, number of conditions, and number of replications per condition.
A functional behavioral assessment is a collection of procedures used to gather information associated with the occurrence of problem behavior, and can include indirect, descriptive, and experimental procedures such as a functional analysis (Gresham et al., 2001). Functional analyses can be an effective method of experimentally validating behavioral hypotheses, but they are often difficult to implement in classroom settings, given the need for experimental control (Lalli, Browder, Mace, & Brown, 1993). As a result, many authors have examined the research and called for additional assessment procedures that are more appropriate for classroom settings. To develop appropriate assessment procedures, it first appears necessary to evaluate the quality of the research available to guide practice in school settings. The purpose of this review was to evaluate the existing research regarding the implementation of functional analysis procedures in school settings, in an attempt to highlight areas that require investigation in future studies. Coding procedures focused on four areas: location of assessment procedures, participant demographics, assessment procedures, and assessment results. After coding the articles, the results highlighted four topics that require further discussion: (a) participants were not representative of general classroom students, (b) data collection procedures focused on high rates of behavior, (c) assessment conditions in classroom settings required resources similar to those in analog settings, and (d) articles typically did not provide information concerning the use of individualized interventions based on reinforcement.
Although the results of this review did indicate that study participants were most often students without disabilities (35.4%), 50% of the students had a label consistent with a low-incidence disability (e.g., pervasive developmental disability, Down syndrome, mental disability, etc.). The frequent use of children who have low-incidence disabilities could limit the relevance of using functional analysis procedures to evaluate the problem behavior of a typically developing student. In a general classroom setting, it is more likely that a student would have a classification of a significant emotional/behavioral disorder (7.4% of total disability enrollment) than autism/traumatic brain injury (2.8%) or multiple disabilities (2.1%) (Snyder, 2005). Students who have emotional disabilities are likely to have significantly higher cognitive skills than students with developmental disabilities, which may affect their interaction with environmental events and reinforcing variables. Additional examples are needed of functional analytic procedures that can be employed to evaluate behavioral hypotheses for typically developing students with significant behavioral/emotional problems, as opposed to studies continuing to focus functional analysis procedures on students with developmental delays.
Another finding that highlights the limited application of functional analysis procedures to school settings is the high rate of problem behavior demonstrated across participants within the studies. Most students with high rates of problem behavior received assessment procedures in analog conditions; although this may seem appropriate given the need for additional control and the impact on instruction, it does little to ensure that assessment results are relevant to variables in the classroom setting. Only 3 of the 48 students who received assessment procedures in a classroom setting demonstrated behaviors at a rate somewhat typical of classroom behaviors. Some might consider low rates of problem behavior insignificant, but high-intensity behaviors that occur at a low rate are more common in classroom settings and can have a significant impact on classroom activities (Sterling-Turner, Robinson, & Wilczynski, 2001). Students might engage in non-compliant behavior or verbal outbursts only once an hour, but the intensity of the behavior can be significant enough to disrupt the entire lesson, especially if it occurs at the beginning of the lesson. Although most students who demonstrated higher rates of problem behavior received functional analysis procedures in an analog setting, assessing high-intensity behaviors can still be challenging, given that experimental manipulation might promote undesirably high levels of problem behavior (Sturmey, 1995). School-based personnel need examples of procedures that they can use to evaluate a behavioral hypothesis for a problem behavior in a typical classroom setting.
An interesting aspect of this review that also requires consideration is the high degree of similarity between many of the assessment procedures implemented in classroom and analog settings. Findings indicate that the number of conditions implemented, the number of replications for each condition, and the length of each condition were similar across classroom and analog settings. In addition, assessment procedures implemented in classroom settings rarely employed teachers in the collection of data, relying instead on graduate students or other unspecified professionals. Because natural settings such as classrooms are typically difficult to control, recording behavior and implementing multiple assessment conditions with the same frequency and duration as analog settings would likely require significant resources (e.g., individuals skilled in conducting functional analyses) that are not typically available in school settings. Considering the difficulty of establishing experimental control over classrooms and the time required to conduct a functional analysis within a classroom environment, practitioners would benefit from additional research demonstrating classroom-friendly assessment procedures.
Of ultimate concern is the fact that many of the assessment procedures reviewed in this study did not present information concerning the development of individualized positive behavior interventions using reinforcement. Although the interventions that were reported typically involved reinforcement, most studies either did not present intervention results or evaluated intervention procedures by applying the same intervention across students. Given that the purpose of any assessment is to develop interventions that will improve an individual's quality of life, the current research appears insufficient for guiding the development of classroom interventions based upon the results of a functional analysis. Although intervention procedures can include a variety of strategies (e.g., selecting an appropriate replacement behavior, developing prompting procedures, changing reinforcement schedules, etc.), practitioners in school settings require examples of assessment procedures that lead directly to the implementation of intervention procedures in classroom settings. If the literature does not present sufficient examples, it is unlikely that practitioners will be able to translate results from a functional analysis into an effective intervention. Although researchers will likely continue to present research on the application of a variety of assessment procedures across multiple settings, they need to present more examples of how those results might translate into interventions in general education classroom environments.
Given these findings, it would appear appropriate to recommend that future research be conducted in classroom settings. Although standard functional analysis procedures may be appropriate in analog settings, they often have limited ecological and social validity. Practitioners require examples of assessment procedures that are relevant for classroom settings and lead to effective interventions. Although the literature has offered methods for modifying and altering experimental assessment procedures for implementation in classroom settings (Ellis & Magee, 2004), this research base remains limited and focused on participants with low-incidence disabilities engaging in high rates of aberrant behavior (Ervin, Radford et al., 2001). Other promising assessment procedures that deserve further attention include structural analysis, conditional probabilities, and contingency space analysis. These methodologies appear more applicable for the assessment of problem behavior in typical classroom settings, but they also require additional investigation to establish their accuracy and ability to verify behavioral hypotheses (Eckert, Martens, & DiGennaro, 2005; English & Anderson, 2006; Martens, DiGennaro, Reed, Szczech, & Rosenthal, 2008).
First, not every journal that contained relevant, school-based behavioral information was included in the manual search for articles. The manual review did, however, utilize a representative sample of journals, and both the PsycINFO and ERIC database searches likely covered all available relevant peer-reviewed journals. Second, studies not based in public school settings were not included in this review. While articles that present findings from private or university-based settings may be relevant, it is likely that these settings have additional resources and training not readily available to personnel in public school settings. Third, to make the review of materials manageable given the procedural variations present in the current literature, articles included in this review were limited to functional analysis procedures that manipulated consequence-based variables. Consequence-based manipulations were selected due to their relevance to the original functional analysis methodology demonstrated by Iwata et al. (1982/1994). For example, Moore, Edwards, Wilczynski, and Olmi (2001) attempted to evaluate the function of student problem behavior by manipulating the difficulty level of academic work administered to students and the level of attention. Because this study did not manipulate consequences for inappropriate behavior, it was not included in the analysis. A final limitation of this study is that it was necessary to place some data under the label of 'not specified' or 'other'. We attempted to be as accurate as possible in our labeling, but the lack of detailed explanations in some articles presented challenges during categorization.
Overall, the findings of this study support the conclusion that current demonstrations of functional analysis procedures in classroom settings are inadequate to guide implementation in practice. The existing research focuses on participants with low-incidence disabilities who demonstrate high levels of problem behavior. In addition, the assessment procedures typically require resources that are not common in classroom settings and place insufficient emphasis on the development of individualized interventions. Given these findings, practitioners need additional examples of assessment procedures that are not only relevant to classroom settings, but also likely to lead to effective interventions.
Asmus, J. M., Vollmer, T. R., & Borrero, J. C. (2002). Functional behavioral assessment: A school-based model. Education & Treatment of Children, 25, 67-90.
Carr, E. G. (1977). The motivation of self-injurious behavior: A review of some hypotheses. Psychological Bulletin, 84, 800-816.
Cone, J. D. (1997). Issues in functional analysis in behavioral assessment. Behaviour Research and Therapy, 35, 259-275.
Conroy, M., Fox, J., Crain, L., & Jenkins, A. (1996). Evaluating the social and ecological validity of analog assessment procedures for challenging behaviors in young children. Education & Treatment of Children, 19, 233-256.
Desrochers, M. N., Hile, M. G., & Williams-Moseley, T. L. (1997). Survey of functional assessment procedures used with individuals who display mental retardation and severe problem behaviors. American Journal on Mental Retardation, 101, 535-546.
Durand, V. M., & Crimmins, D. (1988). Motivational assessment scale. Topeka, KS: Monaco.
Eckert, T. L., Martens, B. K., & DiGennaro, F. D. (2005). Describing antecedent-behavior-consequence relations using conditional probabilities and the general operant contingency space: A preliminary investigation. School Psychology Review, 34, 520-528.
Ellingson, S. A., Miltenberger, R. G., Stricker, J., Galensky, T. L., & Garlinghouse, M. (2000). Functional assessment and intervention for challenging behaviors in the classroom by general classroom teachers. Journal of Positive Behavior Interventions, 2, 85-97.
Ellis, J., & Magee, S. (2004). Modifications to basic functional analysis procedures in school settings: A selective review. Behavioral Interventions, 19, 205-228.
Ellis, J., & Magee, S. K. (1999). Determination of environmental correlates of disruptive classroom behavior: Integration of functional analysis into public school assessment process. Education & Treatment of Children, 22, 291-316.
English, C. L., & Anderson, C. M. (2006). Evaluation of the treatment utility of the analog functional analysis and the structured descriptive assessment. Journal of Positive Behavior Interventions, 8, 212-229.
Ervin, R. A., Ehrhardt, K. E., & Poling, A. (2001). Functional assessment: Old wine in new bottles. School Psychology Review, 30, 173-179.
Ervin, R. A., Radford, P. M., Bertsch, K., Piper, A. L., Ehrhardt, K. E., & Poling, A. (2001). A descriptive analysis and critique of the empirical literature on school-based functional assessment. School Psychology Review, 30, 193-210.
Gresham, F., Watson, T. S., & Skinner, C. H. (2001). Functional behavioral assessment: Principles, procedures, and future directions. School Psychology Review, 30, 156-172.
Gresham, F. M., McIntyre, L. L., Olson-Tinker, H., Dolstra, L., McLaughlin, V., & Van, M. (2004). Relevance of functional behavioral assessment research for school-based interventions and positive behavioral support. Research in Developmental Disabilities, 25, 19-37.
Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36, 147-185.
Individuals with Disabilities Education Improvement Act of 2004, P.L. 108-446.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1982/1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197-209.
Iwata, B. A., & Worsdell, A. S. (2005). Implications of functional analysis methodology for the design of intervention programs. Exceptionality, 13, 25-34.
Kates-McElrath, K., Agnew, M., Axelrod, S., & Bloh, C. L. (2007). Identification of behavioral function in public schools and a clarification of terms. Behavioral Interventions, 22, 47-56.
Killu, K., Weber, K. P., Derby, K. M., & Barretto, A. (2006). Behavior intervention planning and implementation of positive behavioral support plans: An examination of states' adherence to standards for practice. Journal of Positive Behavior Interventions, 8, 195-200.
Lalli, J. S., Browder, D. M., Mace, F. C., & Brown, D. K. (1993). Teacher use of descriptive analysis data to implement interventions to decrease students' problem behaviors. Journal of Applied Behavior Analysis, 26, 227-238.
Mace, F. C. (1994). The significance and future of functional analysis methodologies. Journal of Applied Behavior Analysis, 27, 385-392.
Mace, F. C., Yankanich, M. A., & West, B. J. (1988). Toward a methodology of experimental analysis and treatment of aberrant classroom behaviors. Special Services in the Schools, 4, 71-88.
Martens, B. K., DiGennaro, F. D., Reed, D. D., Szczech, F. M., & Rosenthal, B. D. (2008). Contingency space analysis: An alternative method for identifying contingent relations from observational data. Journal of Applied Behavior Analysis, 41, 69.
Martin, N. T., Gaffan, E. A., & Williams, T. (1999). Experimental functional analyses for challenging behavior: A study of validity and reliability. Research in Developmental Disabilities, 20, 125-146.
Moore, J. W., Edwards, R. P., Wilczynski, S. M., & Olmi, D. J. (2001). Using antecedent manipulations to distinguish between task and social variables associated with problem behaviors exhibited by children of typical development. Behavior Modification, 25, 287-304.
Neef, N. A., & Peterson, S. M. (2007). Functional behavioral assessment. In J. O. Cooper, T. E. Heron, & W. L. Heward (Eds.), Applied behavior analysis (pp. 500-524). Columbus, OH: Pearson.
O'Neill, R. E., Horner, R. H., Albin, R. W., Sprague, J. R., Storey, K., & Newton, J. S. (1997). Functional assessment and program development for problem behavior: A practical handbook (2nd ed.). Pacific Grove, CA: Brooks/Cole Publishing Company.
Repp, A. C., & Karsh, K. G. (1994). Hypothesis-based interventions for tantrum behaviors of persons with developmental disabilities in school settings. Journal of Applied Behavior Analysis, 27, 21.
Schill, M. T., Kratochwill, T. R., & Elliott, S. N. (1998). Functional assessment in behavioral consultation: A treatment utility study. School Psychology Quarterly, 13, 116-140.
Snyder, T. D. (2005). Table 50. Children 3 to 21 years old served in federally supported programs for the disabled, by type of disability. Digest of Education Statistics, from http://nces.ed.gov/programs/digest/d05/tables/dt05_050.asp?referrer=report
Sterling-Turner, H. E., Robinson, S. L., & Wilczynski, S. M. (2001). Functional assessment of distracting and disruptive behaviors in the school setting. School Psychology Review, 30, 211-226.
Sturmey, P. (1995). Analog baselines: A critical review of the methodology. Research in Developmental Disabilities, 16, 269-284.
Sugai, G., Sprague, J. R., & Horner, R. H. (1999). Functional-assessment-based behavior support planning: Research to practice to research. Behavioral Disorders, 24, 253.
Vollmer, T. R., & Northup, J. (1996). Some implications of functional analysis for school psychology. School Psychology Quarterly, 11, 76-92.
Weber, K. P., Killu, K., Derby, K. M., & Barretto, A. (2005). The status of functional behavior assessment (FBA): Adherence to standard practice in FBA methodology. Psychology in the Schools, 42, 737-744.
Correspondence to Scott P. Ardoin, 325L Aderhold Hall, Athens, GA 30602; e-mail: firstname.lastname@example.org.
Mark D. Solnick
University of South Carolina
Scott P. Ardoin
University of Georgia