
Improving teachers' knowledge of functional assessment-based interventions: outcomes of a professional development series.

Abstract

This paper provides outcomes of a study examining the effectiveness of a year-long professional development training series designed to support in-service educators in learning the systematic approach to functional assessment-based interventions developed by Umbreit and colleagues (2007), an approach with demonstrated success when implemented with university supports. Forty-eight educators attended a 4-day, practice-based professional development series, with coaching and applied practice occurring between sessions. Participants completed pre- and post-training surveys to evaluate their perceived knowledge, confidence, and perceived usefulness of 15 concepts and strategies addressed in the training series, as well as their actual knowledge of a 10-item subset. Results indicated statistically significant improvements on each construct measured. Findings are discussed in light of noted limitations, with recommendations offered to improve subsequent professional development efforts to support in-service teachers and school personnel in learning how to design, implement, and evaluate functional assessment-based interventions.

Key words: functional assessment-based interventions, professional development, treatment integrity, social validity, school-based interventions


Functional assessment-based interventions (FABIs) are designed based on the reasons why challenging behaviors occur. Rather than using applied behavior analytic techniques simply to suppress undesirable behaviors, FABIs use a range of tools, including teacher, student, and parent interviews; direct observation (A-B-C) data; rating scales; and experimental analyses (functional analysis), to identify the antecedents (A) that set the stage for problem (target) behaviors (B) to occur and the consequences (C) that maintain the target behavior. Information from these multiple sources is used to determine the function(s) of the behavior, which include accessing or avoiding attention, tangibles/tasks, or sensory experiences. Then, an intervention is designed to teach the student a functionally equivalent (replacement) behavior, offering the student a new way of getting his or her needs met (although in some cases the replacement behavior does not serve the same function as the target behavior; see Umbreit, Ferro, Liaupsin, & Lane, 2007). Such interventions are valued given they respect the communicative intent of behavior, avoid punishment procedures, support skill building, and yield new behaviors that generalize and maintain over time (Cooper, Heron, & Heward, 2007).

In looking at the development of FABIs, functional assessment procedures were initially studied in clinical settings to support individuals with developmental disabilities who exhibited self-injurious behavior (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982). During the last two decades, functional assessment procedures have been applied in new ways. For example, FABIs have been conducted in authentic settings, including general education classrooms, self-contained classrooms, and alternative learning centers, and have been conducted for students at risk for school failure as well as students with attention difficulties, emotional and behavioral disorders (EBD), autism spectrum disorder, and severe disabilities (e.g., Gann, Ferro, Umbreit, & Liaupsin, 2014). These procedures have moved from highly controlled clinical settings into applied settings to assist students with a range of behaviors (e.g., disruption and engagement).

Although FABIs are mandated in the Individuals with Disabilities Education Improvement Act (IDEA, 2004), the research community has raised concerns that this mandate may not be warranted if FABIs do not meet the minimum criteria to establish them as an evidence-based practice. While some suggest FABIs do meet criteria for being an evidence-based practice (Gage, Lewis, & Stichter, 2012), others contend this mandate may be a bit hasty, suggesting the requisite evidence is not yet available to establish the efficacy of FABIs delivered by practitioners in authentic K-12 settings for students with and at risk for high-incidence disabilities or EBD (Fox, Conroy, & Heckaman, 1998; Lane, Kalberg, & Shepcaro, 2009; Sasso, Conroy, Stichter, & Fox, 2001).

One issue making it difficult to evaluate FABIs as a practice is the tremendous variability in how (a) data collected using traditional functional assessment tools are analyzed, (b) interventions are designed based on functional assessment results, (c) core quality indicators are addressed in interventions leading to legally defensible plans, and (d) generalization and maintenance are assessed (Benazzi, Horner, & Good, 2006; Goh & Bambara, 2012).

A Systematic Approach to Functional Assessment-based Interventions

Umbreit and colleagues (2007) designed a systematic approach to identifying the maintaining function(s) of target behaviors and developing interventions linked directly to those function(s). This systematic approach includes key features such as the Function Matrix, which assists practitioners in analyzing data from all functional assessment tools to determine the hypothesized function(s) of the target behavior.

A second unique feature of this systematic approach is the Function-Based Intervention Decision Model to guide intervention planning. This model involves two core questions: (1) Can the student perform the replacement behavior? and (2) Do antecedent conditions represent effective practices? Responses focus the intervention on one of three methods. Method 1: Teach the Replacement Behavior is used when the replacement behavior is not in the student's repertoire (acquisition deficit; Gresham & Elliott, 2008) or is not performed fluently, and when the classroom represents effective practices. Method 2: Improve the Environment is used when the student has the replacement behavior in his or her repertoire, yet the antecedent conditions preceding the target behavior may not offer the most effective conditions for learning to occur for this student (Lane, Weisenbach, Phillips, & Wehby, 2007). Method 3: Adjust the Contingencies is used when the replacement behavior is in the student's repertoire and antecedent conditions represent effective practices, with intervention efforts focused on shifting reinforcement rates: decreasing the rate of reinforcement for the target behavior and increasing the rate of reinforcement for the replacement behavior. Each method includes A-R-E components: teaching or modifying antecedents (A), reinforcing (R) the occurrence of the replacement behavior, and withholding reinforcement (extinction, E) of the target behavior. Once the intervention method is selected, the school-site team designs specific plan tactics.
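To make the decision logic concrete, the following is a minimal sketch of the two-question model in Python. The function name and boolean parameters are illustrative conventions, and the handling of the case in which both answers are "no" (treated here as a combination of Methods 1 and 2) reflects our reading of the model rather than material reproduced from Umbreit and colleagues (2007).

```python
def select_intervention_method(can_perform_replacement: bool,
                               antecedents_effective: bool) -> str:
    """Return the intervention method indicated by the two guiding questions.

    Q1: Can the student perform the replacement behavior?
    Q2: Do antecedent conditions represent effective practices?
    (Hypothetical naming; a sketch of the decision logic, not the
    authors' published materials.)
    """
    if can_perform_replacement and antecedents_effective:
        # Behavior is available and conditions are effective: shift
        # reinforcement away from the target and toward the replacement.
        return "Method 3: Adjust the Contingencies"
    if can_perform_replacement:
        # Behavior is in the repertoire, but antecedent conditions need work.
        return "Method 2: Improve the Environment"
    if antecedents_effective:
        # Acquisition or fluency deficit in an otherwise effective classroom.
        return "Method 1: Teach the Replacement Behavior"
    # Both answers are "no": on our reading, teaching the replacement
    # behavior and improving the environment are both indicated.
    return "Methods 1 and 2 combined"


print(select_intervention_method(False, True))
# Method 1: Teach the Replacement Behavior
```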

A third feature of this systematic approach is the inclusion of core intervention elements to draw valid inferences regarding intervention outcomes. As a practice, the model includes the previously described features as well as (a) use of an experimental design (e.g., withdrawal, multiple baseline) that allows for experimental control to be established; (b) assessment of treatment fidelity to ensure all procedures are in place; and (c) assessment of social validity to measure the social significance of the goals, social acceptability of the treatment procedures, and social importance of effects (Gast, 2010).

Several studies have tested this systematic approach to constructing FABIs. Lane, Bruhn, Crnobori, and Sewell (2009) conducted a systematic literature review using core quality indicators developed by Horner et al. (2005) as criteria to evaluate the extent to which this systematic approach to FABIs was an evidence-based practice (EBP). If the practice is evidence-based, it provides a systematic approach for meeting the mandate established in IDEA and offers practitioners a respectful support for students with challenging behavior. If so, the next step would be to design a professional development series to increase the capacity of practitioners to learn and implement this feasible approach to FABIs with fidelity.

In their review, Lane, Bruhn et al. (2009) found nine articles meeting inclusion criteria, involving students with diverse characteristics, including those with specific learning disabilities (e.g., Lane, Rogers et al., 2007), at risk for negative outcomes (e.g., Lane, Weisenbach et al., 2007), and with comorbid conditions (Stahr, Cushing, Lane, & Fox, 2006). Likewise, students ranged in age from kindergarten (Lane, Smither, Huseman, Guffey, & Fox, 2007) to high school (Turton, Umbreit, Liaupsin, & Bartley, 2007). While all studies were conducted in school settings, they ranged across inclusive schools, traditional general education schools, and self-contained schools. Six studies met ≥ 80% of specified core quality indicators. In evaluating these six studies, the practice fell short of the established criteria for defining it as evidence-based because the studies supported nine participants rather than the requisite 20. Yet, the remaining standards were met: (a) the practice was operationally defined; (b) the context and associated outcomes were defined; (c) treatment fidelity was established; (d) a functional relation between the practice and changes in dependent variables was evident; and (e) experimental effects were replicated across a minimum of five peer-reviewed studies by three different researchers in three unique locales. Given all standards except the requisite number of participants were met, it may be that additional studies using the same rigorous research design features are all that is needed to establish this systematic approach as an EBP. Since this review, other studies have been published using this model with students with challenging behaviors in authentic K-12 settings (e.g., Germer et al., 2011). We suggest this systematic approach to conducting FABIs is at least a promising practice--and perhaps now an EBP--yielding socially valid changes for school-age students with and at risk for behavior challenges. Yet, most studies were conducted by university researchers (e.g., pre-service teachers as research assistants). This raises questions as to whether this practice can be used by practitioners, independent of university support, with sufficient feasibility and required rigor to obtain desired outcomes.

Practitioner-led Functional Assessment-based Interventions

To date, there is initial evidence to suggest this practice can be accomplished with limited university support. For example, in one study, two general educators conducted FABIs for students identified through systematic behavior screening procedures as at risk of developing persistent behavioral challenges (Lane, Weisenbach, Little, Phillips, & Wehby, 2006). Teachers participated in a six-hour summer training to learn principles of applied behavior analysis and the FABI model. During the school year, while they were implementing the FABI procedures, teachers met with a university-based liaison for one hour per week for support in using the Function Matrix, designing the intervention using the Function-Based Intervention Decision Model, and developing the intervention with materials for monitoring treatment integrity. Teachers led the FABI with support from the liaison. Attention was given to the feasibility of data collection procedures given teachers served as interventionists and primary data collectors (with liaisons collecting interobserver agreement [IOA] data). Student outcomes were positive: one student, Marcus, improved his rate of engagement, and the second student, Julie, increased her level of work completion. As interventionists, teachers implemented the A-R-E intervention components with high rates of treatment integrity, as measured by teacher self-report and confirmed by liaisons. Teachers rated the procedures, goals, and outcomes favorably, indicating social validity of the FABI practice. In this study, teachers successfully assumed the lead role in the design, implementation, and evaluation of the FABI, obtaining positive student outcomes.

Lane, Barton-Arwood, Spencer, and Kalberg (2007) examined how to support in-service teachers in learning this specific FABI process. Four elementary teams attended a university training series that included (a) three 6-hour sessions and (b) 1-hour onsite meetings twice per month (28-30 hrs). Each team selected one student to support. Two cases were offered as illustrations, with both suggesting a functional relation between the A-R-E intervention package and changes in students' behaviors. Social validity ratings provided by students, teachers, and one parent suggested favorable perceptions. Findings offer initial evidence to suggest that with training and coaching support, in-service practitioners can implement these procedures with fidelity and yield positive student outcomes. Yet, these studies did not examine the extent to which participants learned the content taught during the training series.

Ample evidence suggests this systematic approach is effective when implemented with university support. We documented that pre-service teachers learned and applied this process during their preparation programs (Lane, Oakes, & Cox, 2011) and initial evidence shows this can be done by in-service teachers (Lane, Barton-Arwood et al., 2007). The next step is to determine if these results can be replicated with in-service teachers using a practice-based approach to professional development. In this study, we examined how the training series with coaching support improved practitioners' knowledge of, confidence in, and views on the usefulness of the strategies and concepts needed to design, implement, and evaluate FABIs. This practice-based professional development model is described in the method section.

We emphasize that the No Child Left Behind Act (NCLB, 2001) and IDEA both issued a call for schools to provide high-quality instruction for all students (including those with exceptionalities) in inclusive settings with the support of EBPs (Cook, Smith, & Tankersley, 2012). Functional assessment-based interventions are one such practice. While changes in student performance are an important behavioral marker of a successful professional development offering, knowledge acquisition is an important starting point (Bergstrom, 2008; Crone, Hawken, & Bergstrom, 2007; Kratochwill, Volpiansky, Clements, & Ball, 2007). This is particularly true for intensive supports such as FABIs that require knowledge and skill acquisition beyond the general classroom management principles typically offered in many teacher training programs (Brunsting, Sreckovic, & Lane, 2014).

Purpose

We investigated three questions: (a) Did participants demonstrate increased perceived and actual knowledge of core features of functional assessment-based interventions? (b) Did participants demonstrate increased perceived confidence in their ability to use the techniques taught? and (c) Did participants demonstrate increased perceived usefulness of the strategies taught? Based on the professional development model employed and the success of similar interventions conducted with university support, we hypothesized participants would demonstrate increased knowledge (perceived and actual), confidence, and perceived usefulness (Harris et al., 2012). We also anticipated these outcomes given that Barton-Arwood, Morrow, Lane, and Jolivette (2005) found improvements in perceived knowledge, confidence, and usefulness mean scores using a similar measure with 22 educators following a one-day workshop focused on teaching social skills and replacement behaviors, conducted as part of Project IMPROVE, a professional development project targeting positive behavior supports.

Method

Participants and Setting

Participants were 48 educators attending a professional development training series hosted by a district located in a large suburban area in a Midwestern state that provides special education services to surrounding partner districts. Educators were predominantly female (n = 29; 87.88% of those reporting) from eight partner school districts whose students with special educational needs were served through the special school district (see table 1). General and special educators constituted the majority of the group (n = 10; 21.74% and n = 16; 34.78%, respectively), with the remainder being related service providers and school administrators. The majority taught at the elementary level. Participants were highly educated, with 40 holding a Master's degree or higher. More than 97% were certified in their current assignment, and one was seeking board certification as a behavior analyst (Behavior Analyst Certification Board, n.d.).

The hosting district supported all special and technical education programming as well as multi-tiered systems of prevention training and coaching for 22 partner school districts, allowing students to attend neighborhood schools while receiving these centralized services. Collectively, there were 265 schools across all partner districts. During the school year of this study, student enrollment across all districts was 145,356, with 17.37% (n = 25,252) of students served through district programs. More than 1,600 students were served under the category of emotional disturbance (ED; IDEA, 2004), requiring that they have behavior intervention plans in place. Four lead coaches employed by the special school district supported schools with coaching during this series. They had expertise in school-wide Positive Behavior Interventions and Supports; Tier 2 and 3 systems development and EBPs; data-based decision making; and planning, implementing, and evaluating professional development. On average, coaches supported four teams (range: 3-9) at the training series and coached them through the application of each step via face-to-face on-site visits, phone conversations, and email.

Procedures

Sixty-nine educators representing eight partner districts registered to attend the training series. Prior to the first day of training, we secured necessary approvals from the university, the hosting district, and eight partner districts to invite attendees to participate in the current study evaluating participant learning. Participants registered for the training series as school-based teams. Each team supported one student during this series by applying each step of the FABI process with coaching support. Because FABIs are part of regular school practices for students demonstrating the need for tertiary supports, teams contacted parents for permission for their child to participate in the intervention according to district procedures.

We invited the 69 educators attending the training series on the first day to participate in the study. As part of the evaluation study, educators were asked to allow researchers to analyze for research purposes (a) the data collected over the course of the training process as they designed, implemented, and evaluated functional assessment-based interventions and (b) the pre- and post-training Knowledge, Confidence, and Use survey data completed to evaluate the overall learning process. They were informed that the same information would be analyzed and shared with the district (without using their names or their schools' names) so the district could evaluate the overall effectiveness of the professional development series. Attendance at the training series was not dependent on participation in the evaluation study; people who elected to attend the training series were invited, but not required, to participate in this evaluation study.

Forty-eight registered educators agreed to participate and completed the pre-training survey. Three participants attended only Day 1 of the training and, therefore, did not have the opportunity to complete the post-training measure. Of the remaining 45 participants, 39 completed the post-training measure.

The 45 participants constituted 19 school teams. After the school teams were consented, we provided consent materials for teams to send to parents of the students selected for the functional assessment-based intervention, in addition to following their district procedures. For parents to be invited to participate in the study, at least one member of the school team had to be a study participant. Of the 19 teams completing the training, parent materials (e.g., two copies of the consent form, a stamped return envelope, and an envelope to send the materials to the parents) were given to 15 teams. Parents were asked to allow researchers to analyze information collected about their child as the team designed and implemented the behavioral supports. Parents were asked to talk with their child (depending on maturity level) about their comfort with allowing his/her information to be used to help children and teachers in other schools. Consent forms were returned by parents of 11 students. One parent declined participation and one student left the school before the team completed the functional assessment, resulting in nine students completing the study. Although student outcomes are not the focus of the current study, we note students included eight elementary students (grades: first, n = 4; second, n = 1; third, n = 2; fourth, n = 1) and one middle school student (grade six). Eight were male and five received special education services.

Practice-based Professional Development

Members of our research team have been involved in pre- and in-service professional development for many years (see Harris et al., 2012; Lane, Barton-Arwood et al., 2007; Lane, Oakes et al., 2011). Our approach is influenced by the theoretical constructs associated with the methods and importance of practice-based professional development (Grossman & McDonald, 2008). In our work with FABI and previous work with Project WRITE--which focused on improving the writing skills of students with behavior challenges (Harris et al., 2012)--we have had reservations regarding traditional approaches to professional development that are relatively brief and top down (the "sage on the stage" model). These models do not enable teachers to engage actively in the practices they are learning with sufficient (if any) support during implementation (Ball & Forzani, 2009; Sargent & Hannum, 2009).

As in the work on Project WRITE, in this study we aligned our development model with research on effective professional development using a practice-based approach (Desimone, 2009; Sarama, Clements, Starkey, Klein, & Wakeley, 2008). Practice-based professional development focuses on practitioners' knowledge and application of skills regarding an effective educational practice (i.e., FABI). This is in contrast to other professional development approaches that focus solely on increases in teachers' knowledge of practices. Theoretical and empirical evidence suggests several critical features of professional development: "(a) collective participation of teachers within the same school with similar needs; (b) basing professional development around the characteristics, strengths, and needs of the students ...; (c) attention to content knowledge needs of teachers ...; (d) opportunities for active learning and practice of the new methods ...; (e) use of the materials and other artifacts during professional development ... identical to those ... used in the classroom; and (f) feedback on performance while learning" (Harris et al., 2012, p. 105).

Each feature was addressed in the professional development approach used in this study. First, we created a model in which practitioner teams from each school committed to participating in this extended professional development to learn how to conduct FABIs with the support of district-level coaches. Second, as part of this process, practitioners examined students' behavioral, social, and academic strengths as well as areas of need. They identified a keystone behavior that, if learned by the student, could facilitate access to instruction and support desirable behavior while still meeting the student's individual needs. Third, practitioners read published FABI cases serving as exemplars and learned each step of this multi-component intervention practice with support from the first and second authors. As well, they received on-site and web-based support from district-level coaches and their supervisors (the third and fourth authors), with planned activities during and after each training session. Coaches provided timely performance feedback, coupled with behavior-specific praise (Duchaine, Jolivette, & Fredrick, 2011). Materials and processes used in the training series were the same as those the district was adopting to support consistent district-wide implementation. This allowed an opportunity to learn and practice this FABI process (see table 2) using district materials while supporting a student at their school site. Finally, we have worked collaboratively with the district for several years, developing a respectful district-university partnership. Theoretical and empirical evidence suggests such partnerships improve implementation of evidence-based practices, a research objective of the current project (Gallimore, Ermeling, Saunders, & Goldenberg, 2009).

The four-day training series was designed to teach school-based teams how to design, implement, and evaluate functional assessment-based interventions to support students exhibiting challenging behavior for whom primary and secondary preventions (e.g., Check In/Check Out; Todd, Campbell, Meyer, & Horner, 2008) were insufficient. The training series was one professional development opportunity offered by the district for in-district and partner-district faculty and staff. This series was offered under the umbrella of PBIS as a tertiary (Tier 3) prevention. Concepts and strategies taught and applied in this training series were those of the systematic process for conducting FABIs published by Umbreit and colleagues (2007) and grounded in applied behavior analysis (see table 2). The five-step process was as follows: (a) Step 1: Determining which students need a FABI; (b) Step 2: Conducting the functional assessment and determining the hypothesized function; (c) Step 3: Collecting baseline data; (d) Step 4: Designing the intervention; and (e) Step 5: Testing the intervention. District coaches attended the training series with the teams they were coaching through the process. Along with the trainers, coaches supported completion of activities during training. Coaches also worked with their teams at the school sites to examine data, coach through the application of each step, model procedures, and provide feedback. Specific examples of coaching activities included: reviewing the operational definitions of the target and replacement behaviors; participating in team meetings to support interpretation of A-B-C data collected; modeling how to use the Function Matrix to determine the hypothesized function(s); providing additional instruction in data collection and graphing activities; demonstrating how to complete the Function-Based Intervention Decision Model to guide intervention planning; helping with the design of specific intervention tactics, with attention to ensuring the tactics were related to the maintaining function(s) of the target behaviors; and facilitating discussion of the challenges of completing each step within the context of the regular school day.

Step-by-step checklists were created to guide teams through each step of the FABI process. The checklists were also used by coaches to monitor teams' progress. After each step, teams completed the checklist and turned in all documents for their coaches to review.

Step 1: Determining which students needed a functional assessment-based intervention. In the first step, participants were provided with a referral checklist used to examine academic, behavioral, and social skill outcomes. Recommendations were made to use universal screening data to identify students in need of tertiary supports. Education records were examined to look for patterns of behavior, duration of difficulties, and any potential interventions that may have been implemented previously.

Step 2: Conducting the functional assessment and determining the hypothesized function. In the second step, functional assessment (FA) tools were used to identify the target behavior and determine the maintaining function(s) of this behavior. The process began with a systematic review of school records and informal observations in the classroom. Teams identified and operationally defined the target behavior during the teacher interview, followed by parent and student interviews to obtain information about the student's strengths and needs as well as information on potential functions of the target behavior. Next, three hours of direct observation (A-B-C recording) were conducted over at least three sessions. During the first professional development session, teams learned how to collect A-B-C data as well as the theoretical underpinnings of this approach. They then participated in guided practice with the trainers. Next, videos were used to practice collecting A-B-C data, focusing the observations on occurrences of the target behavior (B). Other topics included (a) determining when to conduct the observations (e.g., using the interview to guide this decision), (b) determining the optimal context for the intervention, (c) aligning schedules, and (d) preparing materials. The data from the FA were then organized into the Function Matrix. To supplement the interviews and observational data, rating scales were used to assess possible skill acquisition deficits from teacher and parent perspectives (e.g., Social Skills Improvement System-Rating Scales; Gresham & Elliott, 2008). A hypothesis statement of the function of the target behavior was written based on data summarized in the Function Matrix.
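To illustrate one way A-B-C records might be tallied into the Function Matrix, a minimal sketch in Python follows. The record structure and field names are hypothetical, not the district's forms; the matrix cells (access/avoid crossed with attention, tangibles/tasks, and sensory) follow the framework described above.

```python
from collections import Counter

# Hypothetical A-B-C records, each coded for whether the observed consequence
# provided access to, or allowed avoidance of, attention, tangibles/tasks,
# or sensory experiences (field names are illustrative only).
abc_records = [
    {"behavior": "calling out", "consequence": ("access", "attention")},
    {"behavior": "calling out", "consequence": ("access", "attention")},
    {"behavior": "calling out", "consequence": ("avoid", "tangibles/tasks")},
]

# Tally consequence codes into the six Function Matrix cells.
matrix = Counter(record["consequence"] for record in abc_records)

for (direction, category), count in matrix.most_common():
    print(f"{direction:>6} {category:<16} {count}")
# The most heavily supported cell(s), weighed alongside interview and
# rating-scale data, inform the written hypothesis statement.
```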

Step 3: Collecting baseline data. In the third step, participants were taught methods for data collection, provided with examples, and given opportunities to practice, with multiple checks for understanding. Teams first determined the dimension of the behavior to be measured, using a flow chart to guide the decision-making process. Once the dimension of the target behavior was determined, an appropriate method of measurement was selected, and reliability procedures were taught and practiced. For example, if a discrete behavior such as taking others' materials was targeted, teams might have selected event recording. Participants learned two methods of data collection (momentary time sampling and event recording), using video clips to practice these methods. First, the group reported and compared scores, and differences were resolved; for example, the target behavior may not have been clearly defined, or there may have been inconsistencies in recording procedures. Then, IOA reliability procedures were taught, and participants practiced with a team partner. Later, teams revisited the operational definitions of their student's identified target behavior to ensure the examples and non-examples provided sufficient detail for two observers to record the same occurrences of the behavior. Additionally, teams were taught to use timing devices such as MotivAiders[R].
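To illustrate the kind of IOA computation practiced in this step, a minimal sketch of interval-by-interval percent agreement for momentary time sampling data follows. This is one common percent-agreement formula; the specific reliability procedures taught in the series may have differed.

```python
def interval_ioa(observer_a: list, observer_b: list) -> float:
    """Interval-by-interval agreement: agreements / total intervals x 100."""
    if len(observer_a) != len(observer_b):
        raise ValueError("Both observers must score the same intervals.")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * agreements / len(observer_a)

# Example: two observers score engagement across 10 momentary time samples
# (1 = engaged at the moment the timer sounded, 0 = not engaged).
primary = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
secondary = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1]
print(f"IOA = {interval_ioa(primary, secondary):.1f}%")  # IOA = 90.0%
```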

Step 4: Designing the intervention. In the fourth step, the Function-Based Intervention Decision Model was used to focus intervention development. Functional assessment data were used to answer the two guiding questions previously noted. Then teams drafted A-R-E intervention tactics to address teaching or increasing the replacement behavior while meeting the hypothesized function. Teams included more ideas than would be used so the teachers implementing the intervention would have items to select that fit within their instructional style, schedule, and comfort level. Teams then constructed the treatment integrity form, numbering each tactic so it aligned with the intervention components. After session three, the teams met with the teachers (often with the coaches in attendance) and revised the intervention tactics according to teacher feedback. Once the team taught the intervention to the student and teacher and completed checks for understanding, social validity was assessed using rating scales (e.g., Intervention Rating Profile-15 and Children's Intervention Rating Profile; Witt & Elliott, 1985).
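A treatment integrity form with numbered tactics lends itself to a simple session-level summary: the percentage of components observed in place. The sketch below is a hedged illustration of that arithmetic; the form layout and scoring conventions used by the teams are assumptions, not reproduced from the district's materials.

```python
def session_integrity(components_observed: list) -> float:
    """Percentage of numbered intervention tactics in place for one session."""
    return 100.0 * sum(components_observed) / len(components_observed)

# Example: 7 of 8 numbered A-R-E tactics implemented during one session
# (1 = tactic observed in place, 0 = not observed).
session = [1, 1, 1, 0, 1, 1, 1, 1]
print(f"Treatment integrity: {session_integrity(session):.1f}%")  # 87.5%
```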

Step 5: Testing the intervention. The fifth step was to use an experimental design to test the intervention. When appropriate, teams used an ABAB withdrawal design for their learning case. Finally, the behavior intervention plan form was completed along with the final graph showing all phases. Teams addressed ethical issues throughout the training series, with emphasis in the last three sessions. Post-intervention social validity was assessed from the teacher's and student's perspectives at the end of all phases. An optional fifth session was held for representative team members to create presentations to share this process and outcomes with their school faculty. Graphs and data were finalized with support from the coaches so they were ready to be shared with teachers and parents.

For readers interested in replicating this professional development series, examples of the step checklists, professional development model methods, and team activities can be secured from the first and second authors.

Outcome Measure

To examine participant learning, pre- and post-training series surveys, adapted from the Project SKIL survey (Borthwick-Duffy, Lane, & Mahdavi, 2002) and also used in Project IMPROVE (Barton-Arwood et al., 2005), were administered to all participants who consented to participate in Focusing on Function. The survey included (a) participant ratings of perceived knowledge, confidence, and usefulness of Focusing on Function content (15 items), (b) participant report of actual knowledge of a subset of content items (10 items), and (c) participant demographic information. Participants rated 15 concepts and strategies constituting the process of designing, implementing, and evaluating FABIs taught in the professional development training series (see table 2 and the professional development section). They provided three ratings for each item: how knowledgeable they perceived themselves to be about each concept or strategy, how confident they were in their ability to use the concept or strategy in the FABI training process, and the perceived usefulness of each concept or strategy. Ratings were completed using a 4-point Likert-type scale ranging from 0 to 3 (e.g., 0 = I have no knowledge of this concept or strategy, 1 = I have some knowledge of this concept or strategy, 2 = I have more than average knowledge of this concept or strategy, and 3 = I have a substantial amount of knowledge about this concept or strategy). After rating these 15 items of perceived knowledge, confidence, and usefulness according to the Likert-type scale provided, participants hand wrote definitions for 10 of these concepts or strategies to ascertain an actual knowledge score. All participants rated the same 10 items, believed to represent the most salient strategies and concepts taught (e.g., performance deficit, functional assessment-based intervention, social validity, operational definitions of behavior, positive reinforcement, replacement behavior, A-B-C data collection, antecedent adjustment, extinction, treatment integrity). Only 10 items were included in this section of the survey to ensure the survey did not exceed 15 min of participant time. Participants' definitions were scored independently by the first and second authors (with the second rater's scores used to compute interrater reliability) using a similar Likert-type scale (e.g., 0 = no knowledge; 1 = partially accurate knowledge, but inaccurate information included; 2 = partially accurate knowledge, with no inaccurate information included; and 3 = complete answer, with all provided information correct). This same type of methodology has been used to examine knowledge acquired in other professional development training offerings (Barton-Arwood et al., 2005). Demographic information included gender; current assignment (general education teacher, special education teacher, related service provider, or administrator); grade levels taught or supported; highest degree obtained; certification status; and years of experience in current assignment. Nominal demographics were requested to protect confidentiality and increase the likelihood of participants providing these data. The pre-training survey was completed on the first day of training, following consenting. The post-training survey was completed on the fourth day of training, following lunch.

We computed coefficient alphas to assess scale reliability. Reliability estimates were .95, .94, and .94 for the perceived knowledge, confidence, and usefulness scales, respectively. Interrater reliability of scoring for actual knowledge scores was determined by computing Pearson correlation coefficients between first and second rater scores. The interrater reliability value for participants' actual knowledge was .98 (p < .0001) for pre-training series scores and .97 (p < .0001) for post-training series scores.
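For readers wishing to replicate these reliability analyses, a minimal sketch follows. It assumes a complete respondents-by-items score matrix with no missing data, a simplification relative to the survey data described above, and the data shown are randomly generated for illustration only.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative data: 39 respondents rating 15 items on the 0-3 scale.
rng = np.random.default_rng(seed=1)
ratings = rng.integers(0, 4, size=(39, 15)).astype(float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")

# Interrater reliability: Pearson r between two raters' definition scores.
rater_1 = rng.integers(0, 4, size=39).astype(float)
rater_2 = np.clip(rater_1 + rng.integers(-1, 2, size=39), 0, 3)
print(f"r = {np.corrcoef(rater_1, rater_2)[0, 1]:.2f}")
```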

Statistical Analysis

Prior to conducting data analyses, ratings for the 15 concepts or strategies were summed to create composite scores for each construct (i.e., perceived knowledge [15 items], perceived confidence [15 items], perceived usefulness [15 items], and actual knowledge [10 items]). Total scores for the 15-item composites ranged from 0 to 45, and scores for the 10-item composite ranged from 0 to 30, with higher scores indicating higher knowledge, confidence, or perceived usefulness. Mean scores were computed for pre- and post-training series scores. Pre-post scores for each construct were compared using dependent t-tests (alpha = .05) to determine if there were statistically significant differences in mean scores for these four constructs over the course of the training. Effect sizes were computed using the Hedges's g formula between pre- and post-survey mean scores for each construct to determine the magnitude of changes over time. Hedges's g was calculated using the Comprehensive Meta-Analysis Version 2 program (CMA-2; Borenstein, Hedges, Higgins, & Rothstein, 2000), offering the standardized difference between the pre- and post-training series group means. Hedges's g was calculated using the mean, variance (pooled standard deviation), and sample size for each measure. Effect sizes are frequently described according to their magnitude, with effect sizes of 1.0 or greater indicating high-magnitude effects.
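For reference, a standard formulation of Hedges's g for contrasting two sets of scores is shown below. This is the conventional small-sample-corrected standardized mean difference using a pooled standard deviation; it is consistent with, though not reproduced from, the CMA-2 computation described above.

```latex
g = J \times \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_{\text{pre}} - 1)\, s_{\text{pre}}^{2}
                               + (n_{\text{post}} - 1)\, s_{\text{post}}^{2}}
                              {n_{\text{pre}} + n_{\text{post}} - 2}},
\qquad
J = 1 - \frac{3}{4(n_{\text{pre}} + n_{\text{post}}) - 9}
```

Here J is the small-sample correction factor that distinguishes Hedges's g from Cohen's d; for samples of the size reported in this study, J is close to 1 and the two indices are nearly identical.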

Results

Results of dependent t-tests indicated statistically significant, high-magnitude differences between pre- and post-training series mean scores on all constructs measured (see table 3). Specifically, participants demonstrated statistically significant increases in levels of perceived knowledge, t(30) = 8.81, p < .0001 (effect size = 2.28); perceived confidence, t(26) = 11.43, p < .0001 (effect size = 2.55); and perceived usefulness, t(16) = 7.03, p < .0001 (effect size = 1.91), of the information pertaining to FABIs presented in the professional development training series. Furthermore, participants made significant improvements in their actual knowledge, t(36) = 14.01, p < .0001 (effect size = 2.64). Post-training series mean scores increased approximately two-fold on the perceived knowledge and perceived confidence constructs and approximately three-fold for actual knowledge. The smallest increase (although still substantial, as evidenced by an effect size of 1.91) was for perceived usefulness scores, which increased 1.5-fold. However, pre-training scores were highest for this construct relative to the others, suggesting the participants who provided information regarding perceived usefulness believed the concepts and strategies were useful at the onset of the training series. Yet, their perceptions still significantly increased following the training series.

Discussion

We conducted the present study to determine the extent to which a practice-based professional development model for using a systematic, team-based approach to designing, implementing, and evaluating FABIs was effective in developing in-service teachers' knowledge, confidence, and perceived usefulness of key concepts and strategies. Multiple studies have documented this systematic approach as an effective, efficient method of conducting FABIs when implemented by pre-service teachers as part of their preparation programs (Lane, Oakes, & Cox, 2011). Preliminary evidence also suggests this approach has been successful with in-service teachers (Lane, Barton-Arwood et al., 2007). Yet, Lane, Barton-Arwood et al. (2007) did not explicitly examine changes in teachers' perceived and actual knowledge, perceived confidence, or perceived usefulness of the core concepts and strategies constituting the professional development series. Given Barton-Arwood et al. (2005) documented impressive gains on similar constructs with a very brief, one-day professional development offering, we expected teachers would experience increases in these areas.

As expected, results indicated participants reported statistically significant increases not only in their perceived knowledge of the content covered during the training process, but also in their actual knowledge, as evidenced in the responses to open-ended items. In some cases, training programs rely exclusively on teacher self-report data, which raises issues concerning reliability. For example, some participants may tend to inflate either pre- or post-training series scores due to issues of social desirability. People may inflate their scores in an effort to please others (social desirability) and/or to portray to others that they are more familiar with the content than they are in reality (Barton-Arwood et al., 2005). To address this concern, we assessed participants' actual knowledge by asking them to explicitly define a subset (10 core concepts and strategies) of the perceived knowledge items. In looking at the mean scores for actual content knowledge, we recognize that while gains were substantial, increasing from a mean score of 6.46 (SD = 5.10) to 19.33 (SD = 4.47), mastery was not achieved. With a potential range of scores of 0-30, the 19.33 average suggests additional supports are needed to achieve a fuller understanding. However, representative samples from each of the nine student cases suggest some teachers were quite successful in applying the content learned to their practice case (Lane, Oakes, & Germer, 2014). For example, teams were able to operationally define target behaviors, collect A-B-C data, correctly complete a Function Matrix, and develop hypothesis statements. Thus, lower scores as measured using the Likert-type scales may reflect issues of terminology (e.g., not fully understanding the terminology used) and not problems with the application of the skill or concept. We note several teams did not complete the reintroduction of the intervention prior to the final training day, although they received instruction in how to do so during the training sequence. It may be that additional training time (e.g., five full days)--or perhaps full participation in the currently planned four days (a point we discuss in the limitations section)--will be necessary to support teaching to mastery and to allow teams the opportunity to fully, experimentally test the intervention during the scope of the training sequence. It may also be that school-site administrators must allocate resources to allow team members time to communicate for effective planning and data collection (Sugai & Horner, 2006). Given each team member was already managing a full schedule, creating extra time for the application of these new skills was challenging, according to teachers and coaches. If the use of this systematic approach is a valued resource for supporting students in the school building, time must be allotted to educators to sufficiently conduct the procedures.

Not surprisingly, participants' confidence in their ability to use or implement the concepts and strategies increased, as did their perception of the usefulness or relevance of the concepts and strategies in their teaching. Consistent with the findings of Barton-Arwood et al. (2005), post-training series mean scores for perceived confidence, although increased from pre-training series scores, were lower than the post-training perceived usefulness scores. Thus, although participants recognized the concepts and strategies as more useful at the end of the professional development training series, their level of confidence in implementation was not as strong. This may impact the degree to which they are willing to implement these new concepts and strategies (and possibly the success associated with implementation). Building fluency through coaching immediately after the completion of the training series may increase confidence in the use of the procedures. Also consistent with Barton-Arwood et al. were the lower-magnitude changes in perceived usefulness scores relative to the changes observed in perceived knowledge and confidence. Yet, the changes were still quite pronounced (effect size = 1.91), and pre-training series mean scores were quite high (M = 29.29) relative to perceived knowledge (M = 16.35) and perceived confidence (M = 14.92) mean scores.

Collectively, findings suggest this Focusing on Function training series was effective. Results indicated teachers improved in their perceived and actual content knowledge. Furthermore, they became more confident in their ability to implement what they learned and viewed the concepts and strategies as useful in their teaching. However, our enthusiasm regarding these findings is tempered in light of the following limitations.

Limitations and Future Directions

First, there are a number of concerns related to sample size. For example, of the 69 people attending the training series, only 48 (70%) elected to participate in the evaluation study. During the consenting process, a number of questions were raised as to how the data would be used and with whom data would be shared. It appeared some participants were concerned they would be making themselves vulnerable by revealing what they did not know at the onset of the training series. This may be due, in part, to the pressure many teachers feel in response to increased accountability efforts such as the NCLB Act and principal evaluations. Other teachers expressed concern about meeting IDEA mandates for writing behavior intervention plans. We noted some whole school teams declined participation. Principals might already be pressured to improve practices for students and may have interpreted study procedures as offering proof of inadequacies, despite the fact the information provided would be treated as confidential per study procedures. Data were not available to compare the characteristics of attendees who did and did not elect to participate; however, this would be important information in subsequent studies. It would also be of interest to explore the reasons why some attendees elected not to participate to better understand perceived concerns.

Because three people did not return after the first day of training, 39 of the remaining 45 individuals completed post-training series surveys, yielding an 87% completion rate. Given the relatively small scope of this study (small n) and the expectation that attrition would occur, we did not assess maintenance by administering a follow-up survey. The absence of maintenance data is a key limitation, as one goal of intervention and professional development inquiry is to determine if learning sustains over time (Cooper et al., 2007). We strongly recommend future inquiry assess short- and long-term maintenance to see how knowledge, confidence, and perceived usefulness scores shift over time as participants apply this systematic process over the course of the academic year with other students beyond the training setting.

A final concern related to sample size is missing item-level data, particularly with respect to usefulness items. In table 3, sample size is reported for each construct pre- and post-training series to illustrate the fluctuating sample size. For example, while 36 of the 39 completing participants completed all usefulness items following the training series, many participants did not complete all items on the pre-survey, which limited the number of cases included in the dependent t-tests and prevented us from conducting more detailed analyses (e.g., examining differential patterns of responding). Attrition and missing data are key concerns in any study given their impact on statistical analyses and the corresponding influence on achieving accurate answers to the research questions posed, particularly when working with small sample sizes. We recommend future studies explore methods for incentivizing participants to fully participate in series such as Focusing on Function, including completing all assessment items. In other studies we have used drawings for nominal gift cards (e.g., $100) to increase completion of study materials. While we do not necessarily advocate incentivizing the completion of any study activity that is being implemented as a regular school practice (e.g., implementation of the FABI), we have found incentives helpful for increasing the number of completed study-related materials.

A second limitation pertains to the lack of information collected on coaching activities. Resources did not permit dosage or quality data to be collected on the coaching activities occurring between each day of training. Although the homework tasks were specified for each team and their coaches, treatment integrity of these components was not monitored. We recommend future studies collect information on coaching tactics and dosage. Specifically, we suggest developing protocols to clearly define the coaching methods and then collecting treatment integrity data on the coaching process, each team member's participation in the coaching activities, and the quality of the coaching offered. Future studies should also compare coaching approaches, such as the use of onsite coaching with Bug-In-Ear technologies, video feedback, modeling, self-monitoring, and remote supports through technology.

Finally, the design, implementation, and evaluation of functional assessment-based interventions is clearly challenging and requires practice-based, continuum-based, high-quality professional development that includes features such as internal coaching and training (Bergstrom, 2008; Kratochwill et al., 2007). NCLB and IDEA both suggest improvement in student performance is a key behavioral marker of high-quality professional development. This standard is a marked shift from previous goals for professional development offerings that sought only to obtain improvements in participants' knowledge acquisition (Bergstrom, 2008; Kratochwill et al., 2007). We now need to know whether (a) participants learn the content, develop the skills, increase their confidence, and see the concepts and strategies learned as important, and (b) they implement the practices with sufficient fluency and fidelity to improve student performance. We strongly encourage subsequent inquiry to examine this next part: shifts in student performance. In taking on this next, important step, we recommend other research teams consider factors such as (a) participants' readiness to receive the training, (b) a plan to offer initial and ongoing training support, and (c) administrative support for ongoing professional development activities, as it is administrators who hold the key to resource allocations (Bergstrom, 2008; Kratochwill et al., 2007). We view these findings as offering important insight into changes in participants' knowledge, confidence, and use of the concepts and strategies taught. Yet, this is only a starting point. To achieve long-term changes in students' performance, professional development activities must be embedded within a systems-change view (Kratochwill et al., 2007).

Summary

Despite these limitations, we offer this study as modest evidence that the professional development training series offered as part of Focusing on Function: Designing, Implementing, and Evaluating Functional Assessment-Based Interventions was effective in supporting in-service educators in learning Umbreit and colleagues' (2007) systematic approach to functional assessment-based interventions. In this study we focused on participants' learning outcomes, using pre- and post-training surveys to evaluate their perceived knowledge, confidence, and usefulness of 15 concepts and strategies addressed in the training series as well as their actual knowledge of a 10-item subset. Results suggested that while mastery was not achieved, there were statistically significant improvements on each construct measured. We encourage other research teams to join us in this continuing line of inquiry--with an emphasis on how to support professional learning using a range of coaching supports, recognizing the multiple constraints placed on teacher-based teams as they learn this systematic approach to designing, implementing, and evaluating functional assessment-based interventions for students with behavioral challenges using a practice-based professional development model.

References

Ball, D. L., & Forzani, F. M. (2009). The work of teaching and the challenge for teacher education. Journal of Teacher Education, 60, 497-511.

Barton-Arwood, S., Morrow, L., Lane, K. L., & Jolivette, K. (2005). Project IMPROVE: Improving teachers' ability to address student social needs. Education and Treatment of Children, 28, 430-443.

Benazzi, L., Horner, R. H., & Good, R. H. (2006). Effects of behavior support team composition on the technical adequacy and contextual fit of behavior support plans. Journal of Special Education, 40, 160-170.

Bergstrom, M. K. (2008). Professional development in response to intervention: Implementation of a model in a rural region. Rural Special Education Quarterly, 27, 27-36.

Behavior Analyst Certification Board (n.d.) Becoming certified: Board Certified Behavior Analyst. Retrieved from http://www.bacb.com

Borenstein, M., Hedges, L., Higgins, J., & Rothstein, H. (2000). Comprehensive meta-analysis (Version 2) [Computer software and manual]. Englewood, NJ: Biostat.

Borthwick-Duffy, S., Lane, K. L., & Mahdavi, J. (2002). SKIL survey. Unpublished survey.

Brunsting, N. C., Sreckovic, M. A., & Lane, K. L. (2014). Special education teacher burnout: A synthesis of research from 1979 to 2013. Education and Treatment of Children, 681-711.

Cook, B. G., Smith, G. J., & Tankersley, M. (2012). Evidence-based practices in education. In K. R. Harris, T. Urdan, and S. Graham (Eds.), American Psychological Association Educational Psychology Handbook (Vol. 1, pp. 495-527). Washington, DC: American Psychological Association.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Pearson.

Crone, D. A., Hawken, L. S., & Bergstrom, M. K. (2007). A demonstration of training, implementing, and using functional behavioral assessment in 10 elementary and middle school settings. Journal of Positive Behavior Interventions, 9(1), 15-29.

Desimone, L. M. (2009). Improving impact studies of teachers' professional development: Toward better conceptualizations and measures. Educational Researcher, 38, 181-199.

Duchaine, E. L., Jolivette, K., & Fredrick, L. D. (2011). The effect of teacher coaching with performance feedback on behavior-specific praise in inclusion classrooms. Education and Treatment of Children, 34, 209-227.

Fox, J., Conroy, M., & Heckaman, K. (1998). Research in functional assessment of the challenging behaviors of students with emotional and behavioral disorders. Behavioral Disorders, 24, 26-33.

Gage, N. A., Lewis, T. J., & Stichter, J. P. (2012). Functional behavioral assessment-based interventions for students with or at risk for emotional and/or behavioral disorders in school: A hierarchical linear modeling meta-analysis. Behavioral Disorders, 37, 55-77.

Gallimore, R., Ermeling, B., Saunders, W. M., & Goldenberg, C. (2009). Moving the learning of teaching closer to practice: Teacher education implications of school-based inquiry teams. Elementary School Journal, 109, 537-553.

Gann, C. J., Ferro, J. B., Umbreit, J., & Liaupsin, C. J. (2014). Effects of a comprehensive function-based intervention applied across multiple educational settings. Remedial and Special Education, 35(1), 50-60. doi:10.1177/074193251350108

Gast, D. L. (2010). Single subject research methodology in behavioral sciences. New York, NY: Routledge.

Germer, K. A., Kaplan, L. M., Giroux, L. N., Markham, E. H., Ferris, G., Oakes, W. P., & Lane, K. L. (2011). A function-based intervention to increase a second-grade student's on-task behavior in a general education classroom. Beyond Behavior, 20, 19-30.

Goh, A. E., & Bambara, L. M. (2012). Individualized positive behavior support in school settings: A meta-analysis. Remedial and Special Education, 33, 271-286. doi:10.1177/0741932510383990

Gresham, F. M., & Elliott, S. N. (2008). Social skills improvement system: Rating scales. Bloomington, MN: Pearson Assessments.

Grossman, P., & McDonald, M. (2008). Back to the future: Directions for research in teaching and teacher education. American Educational Research Journal, 45, 184-205.

Harris, K. R., Lane, K. L., Graham, S., Driscoll, S. A., Sandmel, K., & Schatschneider, C. (2012). Practice-based professional development for self-regulated strategies instruction in writing: A randomized controlled study. Journal of Teacher Education, 63, 103-119.

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.

Individuals with Disabilities Education Improvement Act of 2004, Pub. L. No. 108-446, 20 U.S.C. [section] 1400 et seq.

Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1982). Toward a functional analysis of self-injury. Analysis and Intervention in Developmental Disabilities, 2, 3-20.

Kratochwill, T. R., Volpiansky, P., Clements, M., & Ball, C. (2007). Professional development in implementing and sustaining multitier prevention models: Implications for response to intervention. School Psychology Review, 36, 618-631.

Lane, K. L., Barton-Arwood, S. M., Spencer, J. L., & Kalberg, J. R. (2007). Teaching elementary school educators to design, implement, and evaluate functional assessment-based interventions: Successes and challenges. Preventing School Failure, 52(4), 35-46.

Lane, K. L., Bruhn, A. L., Crnobori, M. E., & Sewell, A. L. (2009). Designing functional assessment-based interventions using a systematic approach: A promising practice for supporting challenging behavior. Advances in Learning and Behavioral Disabilities, 22, 341-370.

Lane, K. L., Kalberg, J. R., & Shepcaro, J. C. (2009). An examination of the evidence base for function-based interventions for students with emotional and/or behavioral disorders attending middle and high schools. Exceptional Children, 75, 321-340.

Lane, K. L., Oakes, W. P., & Cox, M. (2011). Functional assessment-based interventions: A university-district partnership to promote learning and success. Beyond Behavior, 20, 3-18.

Lane, K. L., Oakes, W. P., & Germer, K. (2014). Outcomes of a practice-based professional-development training of functional assessment-based interventions: From knowledge acquisition to application. Manuscript in preparation.

Lane, K. L., Rogers, L. A., Parks, R. J., Weisenbach, J. L., Mau, A. C., Merwin, M. T., & Bergman, W. A. (2007). Function-based interventions for students who are nonresponsive to primary and secondary prevention efforts: Illustrations at the elementary and middle school levels. Journal of Emotional and Behavioral Disorders, 15, 169-183.

Lane, K. L., Smither, R., Huseman, R., Guffey, J., & Fox, J. (2007). A function-based intervention to decrease disruptive behavior and increase academic engagement. Journal of Early and Intensive Behavior Intervention, 3(4)-4(1), 348-364.

Lane, K. L., Weisenbach, J. L., Little, M. A., Phillips, A., & Wehby, J. (2006). Illustrations of function-based interventions implemented by general education teachers: Building capacity at the school site. Education and Treatment of Children, 29, 549-571.

Lane, K. L., Weisenbach, J. L., Phillips, A., & Wehby, J. (2007). Designing, implementing, and evaluating function-based interventions using a systematic, feasible approach. Behavioral Disorders, 32, 122-139.

No Child Left Behind (NCLB) Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

Sarama, J., Clements, D. H., Starkey, P., Klein, A., & Wakeley, A. (2008). Scaling up implementation of a pre-kindergarten mathematics curriculum: Teaching for understanding with trajectories and technologies. Journal of Research on Educational Effectiveness, 1, 89-119.

Sargent, T. C., & Hannum, E. (2009). Doing more with less: Teacher professional learning communities in resource-constrained primary schools in rural China. Journal of Teacher Education, 60, 258-276.

Sasso, G. M., Conroy, M. A., Stichter, J. P., & Fox, J. J. (2001). Slowing down the bandwagon: The misapplication of functional assessment for students with emotional or behavioral disorders. Behavioral Disorders, 26, 282-296.

Stahr, B., Cushing, D., Lane, K. L., & Fox, J. (2006). Efficacy of a function-based intervention in decreasing off-task behavior exhibited by a student with ADHD. Journal of Positive Behavior Interventions, 8, 201-211.

Sugai, G., & Horner, R. H. (2006). A promising approach for expanding and sustaining school-wide positive behavior support. School Psychology Review, 35, 245-259.

Todd, A. W., Campbell, A. L., Meyer, G. G., & Horner, R. H. (2008). The effects of a targeted intervention to reduce problem behavior: Elementary school implementation of Check In-Check Out. Journal of Positive Behavior Interventions, 10, 46-55.

Turton, A. M., Umbreit, J., Liaupsin, C. J., & Bartley, J. (2007). Function-based intervention for an adolescent with emotional and behavioral disorders in Bermuda: Moving across culture. Behavioral Disorders, 33, 23-32.

Umbreit, J., Ferro, J., Liaupsin, C., & Lane, K. L. (2007). Functional behavioral assessment and function-based intervention: An effective, practical approach. Upper Saddle River, NJ: Pearson.

Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom intervention strategies. In T. R. Kratochwill (Ed.), Advances in school psychology (Vol. 4, pp. 251-288). Mahwah, NJ: Erlbaum.

Kathleen Lynne Lane

University of Kansas

Wendy Peia Oakes

Arizona State University

Lisa Powers, Tricia Diebold

Special School District of St. Louis County

Kathryn Germer

Nashville, Tennessee

Eric A. Common

University of Kansas

Nelson Brunsting

University of North Carolina at Chapel Hill

Address correspondence to Kathleen Lynne Lane, PhD, Professor in the Department of Special Education, University of Kansas; Kathleen.Lane@ku.edu.
Table 1
Participant Characteristics

Variable                      Level                        Total N = 48
                                                           % (n)        M (SD)

Gender
                              Male                         12.12 (4)
                              Female                       87.88 (29)
Assignment
                              General Education Teacher    21.74 (10)
                              Special Education Teacher    34.78 (16)
                              Related Service Provider      6.52 (3)
                              Administrator                 4.35 (2)
Grade Levels Taught
                              Kindergarten                 47.83 (22)
                              First Grade                  43.48 (20)
                              Second Grade                 45.65 (21)
                              Third Grade                  52.17 (24)
                              Fourth Grade                 52.17 (24)
                              Fifth Grade                  47.83 (22)
                              Sixth Grade                  21.74 (10)
                              Seventh Grade                23.91 (11)
                              Eighth Grade                 15.22 (7)
                              Ninth Grade                   4.35 (2)
                              Tenth Grade                   6.52 (3)
                              Eleventh Grade                4.35 (2)
                              Twelfth Grade                 4.35 (2)
Highest Degree Obtained
                              Bachelor's Degree            13.04 (6)
                              Master's Degree              52.17 (24)
                              Master's + 30                21.74 (10)
                              Doctoral Degree               6.52 (3)
                              Educational Specialist        6.52 (3)
Certified for Current
  Assignment                                               97.83 (45)
Behavioral Certification                                    0.00 (0)
Seeking BCBA Certification                                  3.45 (1)
Years of Experience in
  Current Assignment                                                    8.21 (4.98)

Note. Percentages are based on the number of participants
who completed the item. BCBA refers to Board Certified
Behavior Analyst.

Table 2
Overview of Professional Development Series.

Session     Agenda Topics and Homework Activities

1           Welcome and Introductions

            * Overview of functional assessment-
            based interventions (FABI)

            * Illustrations at the elementary and
            middle school levels

            * Participant Consent: For teams with at least one
            consenting member, parent consent forms and
            instruction provided

            * Pre-training assessment

            Step 1: Determining which students need a FABI
            Referral Checklist

            Step 2: Conducting the functional assessment and
            determining the hypothesized function

            2.1 Review of educational records
                School Archival Records Search (SARS; Walker,
                Block-Pedego, Todis, & Severson, 1998)

            2.2 Interviews conducted
                Teacher interview identifies the target behavior,
                operationally defined with examples and non-examples
                Parent and Student interviews

            2.3 Rating scales completed
                SSiS (Gresham & Elliott, 2008) examines whether
                the target behavior may be due to an acquisition
                or performance deficit

            2.4 Direct observation of behavior
                A-B-C data collection practice and feedback
                with video clips

            2.5 Identify the function of the behavior
                Function Matrix for organizing functional assessment
                data to determine what the student is trying
                to access or avoid with the target
                behavior (attention, task/tangible, sensory)

            * Select a possible replacement behavior,
            operationally define the behavior with examples
            and non-examples

            * Coordinate and plan for application homework
            activities with coach's support

After       Work with coach to complete:

Session 1   * Records Review (15-30 min)

            * Interviews (Teacher, Parent, Student; 30 min each)

            * Direct observation A-B-C (3 hrs over one week;
            at least 8 instances of the target behavior observed)

            * Rating scales (teacher and parent completed; 20 min).
            Contact coach to score and interpret scales

            * Read the two published case illustrations presented
            as examples (Lane, Weisenbach, Little, Phillips,
            & Wehby, 2006)

2           Welcome

            * Review Steps 1 and 2

            2.5 Complete the Function Matrix and determine the
            function of the behavior (What is the student
            trying to get or avoid?)

            Write a hypothesis statement about the function
            of the behavior

            Finalize replacement behavior

            Step 3: Collecting baseline data

            Identify the dimension of the behavior
            and recording system

            Learn about and practice data collection
            procedures including IOA

            Step 4: Designing the intervention

            4.1 Select the intervention method using
            the Function-based Intervention
            Decision Model (Umbreit et al., 2007)

            * Coordinate and plan for application
            homework activities with coach's
            support

After       Work with coach to:
Session 2
            * Complete A-B-C data collection

            * Confirm hypothesis with teacher

            * Decide on measurement system and draft data
            collection procedures (materials,
            data collection sheet, observation times)

            * Establish reliability on the recording system
            with a partner (IOA training)

            * Collect at least 5 baseline data points with
            25% of those with IOA

            * Complete Decision Model and draft initial
            ideas for intervention

3           * Review and finalize Steps 1 and 2

            * Review Step 3 and examine and graph baseline
            data with IOA.

            Step 4: Designing the intervention using the
            Function-based Intervention Decision Model
            (Umbreit et al., 2007)

            4.1 Review the intervention method selected
            using the Decision Model, once confirmed
            with the teacher

            4.2 Develop intervention components A-R-E

            Antecedent adjustments (A), adjusting rates of
            reinforcement (R), and extinction of the target
            behavior (E) with specific tactics for each
            component

            Component tactics linked to the hypothesized
            function

            4.3 Components related to valid inference making

            Measuring treatment integrity and social
            validity of the intervention

            Step 5: Testing the intervention

            Experimental design, graphing and determining
            phase changes (stability, level, and trend of data)

            * Ethical considerations related to consent,
            confidentiality, choosing evidence-based
            practices, and avoiding punishment and
            harmful reinforcers

            * Coordinate and plan for application homework
            activities with coach's support

After       Work with coach to:
Session 3
            * Finalize intervention draft (A-R-E components)

            * Introduce the plan to the teacher and
            revise as needed

            * Polish treatment integrity form

            * Teach the intervention to the teacher and student

            * Assess social validity from the teacher
            and student perspectives

            * Implement the intervention; collect data and 25% IOA

            * Graph data, examine for level, trend, and stability

            * Withdraw the intervention for a minimum of
            three data points (with IOA)

4           * Brief review of Steps 1 through 4

            Step 5: Testing the intervention

            Review of procedures; examine participants' data

            * Ethical considerations (review)

            * Create a faculty presentation of Steps 1-5
            to share with school faculty

            * Post-training assessment (Knowledge, Confidence,
            and Use survey; 15 min)

            * Coordinate and plan for application homework
            activities with coach's support

After       Work with coach to:
Session 4
            * Complete all phases to test the intervention
            (withdrawal and reintroduction)

            * Collect and graph data for each phase with
            IOA for 25% of sessions in each phase

            * Assess treatment integrity of the intervention
            and absence of the intervention during withdrawal

            * Complete final materials for the student's file
            (Behavior Intervention Plan, graph of student
            performance, intervention components on the
            treatment integrity form)

            * Review the ethics checklist

            * Assess social validity from the teacher and
            student perspectives (post intervention)

5           * Review of all FABI steps
(Optional)
            * Focus on outcomes

            * Scoring the social validity measures

            * Treatment integrity (IOA)

            * Student outcomes (IOA)

            * Graphing student data

            * Coordinate and plan for application homework
            activities with coach's support

After       Share:
Session 5
            * Outcomes with teacher and parents (and student,
            as appropriate)

            * Presentation of process and outcomes with
            faculty (ensure confidentiality)

Note. A-B-C refers to Antecedents, Behavior, and Consequences.
Function Matrix (Umbreit, Ferro, Liaupsin, & Lane, 2007).
IOA refers to interobserver agreement.
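
For readers who wish to implement the interobserver agreement (IOA) checks referenced throughout Table 2 (e.g., collecting IOA for 25% of sessions in each phase), the sketch below illustrates one common computation: point-by-point percent agreement between two observers' interval records. It is a minimal illustration in Python; the function name, the sample data, and the 80% benchmark noted afterward are our own assumptions rather than procedures specified by Umbreit et al. (2007).

    # Minimal sketch: point-by-point interobserver agreement (IOA).
    # Assumes two observers scored the same sequence of intervals,
    # recording 1 (target behavior occurred) or 0 (did not occur).

    def percent_agreement(observer_a, observer_b):
        """Return point-by-point IOA as a percentage."""
        if len(observer_a) != len(observer_b):
            raise ValueError("Observers must score the same number of intervals.")
        agreements = sum(1 for a, b in zip(observer_a, observer_b) if a == b)
        return 100.0 * agreements / len(observer_a)

    # Example: two observers score ten 1-min intervals of a target behavior.
    primary = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
    secondary = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
    print(f"IOA = {percent_agreement(primary, secondary):.1f}%")  # prints 90.0%

An IOA of 80% or higher is often treated as acceptable in single-case research; the training series itself specifies the proportion of sessions to check (25%) rather than a particular agreement criterion.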

Table 3
Knowledge, Confidence, and Use Scores Over Time.

                    Time                                Significance Testing

                    Pre-Series       Post-Series        t Value           Effect Size
Construct           M (SD), N        M (SD), N          p Value           Hedges's g

Perceived           16.35 (10.14)    37.32 (7.84)       t(30) = 8.81         2.28
Knowledge           40               37                 p < .0001

Perceived           14.92 (9.44)     35.61 (6.28)       t(26) = 11.43        2.55
Confidence          37               36                 p < .0001

Perceived Use       29.29 (9.47)     41.78 (3.77)       t(16) = 7.03         1.91
                    21               36                 p < .0001

Actual              6.46 (5.10)      19.33 (4.47)       t(36) = 14.01        2.64
Knowledge           46               39                 p < .0001
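
The paired comparisons and effect sizes summarized in Table 3 can be approximated with standard statistical software. The Python sketch below runs a paired-samples t test on matched pre- and post-series scores and computes Hedges's g from the difference scores with a small-sample correction. The score values are hypothetical (participant-level data were not published), and because Hedges's g can be computed several ways, this formulation is a plausible reconstruction rather than the article's exact procedure (the reference list suggests effect sizes were obtained with Comprehensive Meta-Analysis; Borenstein, Hedges, Higgins, & Rothstein, 2000).

    import math
    from scipy import stats

    # Hypothetical matched pre/post scores for one construct; actual
    # participant-level data are not published, so these are illustrative.
    pre = [5, 8, 3, 10, 6, 4, 7, 9, 2, 6]
    post = [18, 21, 15, 24, 19, 16, 22, 23, 14, 20]

    # Paired-samples t test on the matched scores.
    t, p = stats.ttest_rel(post, pre)

    # Hedges's g for paired data: Cohen's d on the difference scores,
    # multiplied by the small-sample correction J = 1 - 3 / (4 * df - 1).
    # One common formulation; it may differ from the article's computation.
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_diff = sum(diffs) / n
    sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    g = (1 - 3 / (4 * (n - 1) - 1)) * (mean_diff / sd_diff)

    print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}, Hedges's g = {g:.2f}")

Note that the degrees of freedom in Table 3 are smaller than the Ns at either time point because the paired tests could include only participants with scores at both administrations.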