Effects of programmed errors of omission and commission during auditory-visual conditional discrimination training with typically developing children.

Treatment integrity has been described as the consistent and accurate implementation of a treatment protocol or intervention in the manner in which it was designed (Gresham, 1989). In other words, treatment integrity involves the precision with which the independent variable is implemented (Peterson, Homer, & Wonderlich, 1982). Precise measurement and accurate implementation of the independent variable increase the likelihood that the manipulation of this variable is responsible for changes in the dependent variable (i.e., internal validity; Fryling, Wallace, & Yassine, 2012). The science of applied behavior analysis is based on identifying functional relations between dependent and independent variables; thus, accurate implementation of independent variables is of critical importance (DiGennaro Reed & Codding, 2014).

The impact of treatment integrity on the effectiveness of behavioral interventions has begun to receive special attention in recent years (Fryling et al., 2012). Errors of integrity, specifically errors of omission (i.e., not implementing components of a protocol) and commission (i.e., implementing procedures not prescribed by a protocol), have been examined in the context of interventions for problem behavior (e.g., differential reinforcement of alternative behavior; Leon, Wilder, Majdalany, Myers, & Saini, 2014; St. Peter Pipkin, Vollmer, & Sloman, 2010; Vollmer, Roane, Ringdahl, & Marcus, 1999; Worsdell, Iwata, Hanley, Thompson, & Kahng, 2000) and have been shown to reduce the efficacy of intervention. Yet the effects of reduced treatment integrity on the efficacy and efficiency of skill-acquisition procedures have received less attention in the extant literature.

Studies that examined the effects of integrity errors on skill acquisition have evaluated errors of omission of controlling prompts (e.g., Grow et al., 2009; Holcombe, Wolery, & Snyder, 1994; Noell, Gresham, & Gansle, 2002), omission of reinforcement (e.g., Carroll, Kodak, & Fisher, 2013), commission of reinforcement (e.g., DiGennaro Reed, Reed, Baez, & Maguire, 2011), and combined omission and commission errors (e.g., Carroll et al., 2013; Hirst & DiGennaro Reed, 2015; Jenkins, Hirst, & DiGennaro Reed, 2015; Pence & St. Peter, 2015).

For example, Hirst and DiGennaro Reed (2015) investigated the effects of feedback accuracy on arbitrary matching and auditory-visual conditional discrimination (AVCD) tasks in a series of basic and translational studies conducted in analogue settings with undergraduate students and typically developing preschoolers, respectively. The authors manipulated the accuracy of feedback provided to participants by either committing an error of omission following a correct response (i.e., not providing feedback on accurate responses) or an error of commission following an incorrect response (i.e., saying, "Nice try," following incorrect responses). In Experiment 1, the authors assigned each undergraduate participant to one level of accuracy (i.e., 25%, 50%, 75%, or 100%) for a block of trials before moving on to 100% accuracy. None of the participants exposed to 25% or 50% accuracy acquired the skills prior to exposure to the 100% accuracy condition. Of these participants, some showed carryover effects in the form of slower acquisition, and six failed to master the skill. In Experiment 2, Hirst and DiGennaro Reed examined the same levels of feedback integrity with four preschool-aged children and AVCD tasks. All four participants acquired the skills taught with 100% accurate feedback. None of the participants mastered the skills associated with 25%, 50%, and 75% accuracy. With subsequent exposure to 100% accuracy, all participants acquired the AVCD tasks, and slower rates of acquisition suggested carryover effects from the earlier inaccurate feedback. Overall, results revealed that the combined effects of errors of commission (i.e., reinforcing an incorrect response) and omission (i.e., no reinforcement following a correct response) at various percentages hindered learning.
However, conclusions regarding the effect of specific levels of accuracy are hampered by differing levels of difficulty across the tasks; that is, one cannot determine whether acquisition was slowed because of integrity errors or difficulty of the skill.

Pence and St Peter (2015) evaluated the effects of programmed integrity errors on acquisition of mands with six children with developmental disabilities in two experiments. In Experiment 1, the authors provided an incorrect item following a mand on all (0% integrity), some (40% and 70% integrity), or none (100% integrity) of the trials. This type of integrity error could be conceptualized as a combined error of omission (i.e., not providing the correct target item) and commission (i.e., providing an incorrect item). One participant did not acquire any mands in the study, one participant acquired the mands in the 0% and 100% conditions, and one participant acquired the mands in all conditions except 0% integrity. In addition, for the two participants who did acquire mands in Experiment 1, the mand in the 100% integrity condition required the fewest sessions to mastery.

With the exception of the recent articles published by Hirst and DiGennaro Reed (2015) and Pence and St. Peter (2015), previous research examining the effects of decrements to integrity on skill acquisition (e.g., Carroll et al., 2013; DiGennaro Reed et al., 2011) investigated integrity levels between 0% (i.e., 100% errors) and 67% (i.e., 33% errors) compared to high-integrity instruction (i.e., 0% errors). Previous literature has shown that participants learn skills more efficiently without integrity errors, and that errors made on 50% or more of trials lead to poorer outcomes for learners. Few studies have explored the effects of lower decrements to integrity on skill acquisition. In some settings (e.g., classrooms), integrity at or around 80% may be considered "acceptable" (Cook et al., 2015); however, it is not yet known if this is truly an acceptable level of integrity, meaning that errors on 20% or fewer trials cause little-to-no disruption to learner outcomes. Therefore, evaluating lower percentages of errors of omission and commission (i.e., higher levels of integrity) on skill acquisition is warranted.

Although the literature on the effects of detriments to treatment integrity on skill acquisition has grown recently, we are not aware of any studies that compared the effects of isolated errors of omission of reinforcement to isolated errors of commission of reinforcement. Combined errors of omission and commission have been shown to lead to interventions that are less efficient or efficacious; however, further research is needed to determine whether one type of error may be more detrimental to skill acquisition. Researching the effects of integrity errors in isolation allows the field of behavior analysis to identify the impact of specific types of errors on learning.

A first step in making these comparisons involves examining the effects of different types of errors (i.e., omission and commission errors) with one specific component of instruction (i.e., reinforcement) on skill acquisition. Because these types of examinations may negatively impact student learning and may take time away from high-integrity instruction in other settings, translational studies (i.e., studies that begin by evaluating basic processes that have implications for clinical populations and interventions; Lerman, 2003) should be conducted initially. Thus, the current studies were conducted with typically developing children and instructionally irrelevant stimuli to mitigate concerns over potential long-term effects of exposure to integrity errors in the context of instructional activities. The results of translational studies can inform subsequent comparisons with clinically relevant populations (e.g., individuals diagnosed with autism spectrum disorder; ASD) and tasks.

To isolate the effects of integrity errors of reinforcement in the absence of prompting, we used trial-and-error instruction (Saunders & Spradlin, 1993). That is, we did not include prompts in any of the conditions in the current studies. Rather, instructors provided reinforcement following a proportion of incorrect responses (errors of commission condition) or omitted reinforcement following a proportion of correct responses (errors of omission condition). Although the current studies are translational, the effects of similar trial-and-error procedures on skill acquisition have been evaluated within the extant behavior-analytic literature (e.g., Kodak, Clements, & LeBlanc, 2013; Kodak et al., 2015; McGhan & Lerman, 2013; Saunders & Spradlin, 1993; Schilmoeller, Schilmoeller, Etzel, & LeBlanc, 1979) and may be used in clinical and educational contexts. For example, Saunders and Spradlin (1993) used a trial-and-error procedure to teach visual arbitrary matching to adults with intellectual disabilities, and one participant consistently learned this skill with only trial-and-error instruction. In addition, Kodak et al. (2013) evaluated whether children diagnosed with ASD could learn AVCD tasks in the absence of prompts with a condition that included praise and a preferred tangible following correct responding and progression to the next trial following an incorrect response (i.e., trial and error; no error correction). All three participants acquired AVCD skills with this trial-and-error arrangement. Trial-and-error procedures may also be used in the context of behavior-analytic and psychological assessments whereby reinforcement or feedback is provided for correct responses and incorrect responses are ignored in order to assess preexisting behavioral repertoires. For example, Kodak et al. (2015) used a trial-and-error procedure to assess behaviors that may be prerequisite skills for acquiring AVCD skills (e.g., visual matching, auditory discrimination) with six children with developmental disabilities. The authors also evaluated a trial-and-error procedure in the acquisition of AVCD skills with the same participants. Three of these participants learned the skills via trial-and-error instruction. Thus, although the current studies involve arbitrary stimuli and typically developing participants for whom one-on-one AVCD instruction is not part of their educational programming, the trial-and-error procedure has been evaluated and shown to be effective with more clinically relevant populations.

The purpose of the current translational studies was to systematically compare the efficacy and efficiency of instruction with programmed errors of omission or commission of reinforcement on acquisition of an AVCD task with two typically developing children. Another purpose of the study was to examine the effects of lower percentages of errors on skill acquisition than have been investigated in prior research.

Method

Subjects, Setting, and Materials

Two individuals participated in both experiments. Kyle was a 5-year-old boy and Cassie was an 11-year-old girl. Both participants attended public school, were typically developing with no reported difficulties with learning, and were native English speakers. Kyle had a history of engaging in noncompliant behavior and negative vocalizations in his classroom and childcare settings. Parents nominated their children to participate in this study. We obtained participant assent (verbal and written) at the outset of the study and at the beginning of each day's sessions using a developmentally appropriate informed assent procedure. Sessions occurred in private rooms in a university-based clinic. The rooms contained child-sized tables, chairs, and preferred edibles and activities. Kyle also participated in sessions conducted at the kitchen table in his home.

The experimenters presented Japanese characters printed on flashcards (9 cm x 8.5 cm for Kyle and 5 cm x 5 cm for Cassie; see Table 1 for English translations of characters). We obtained images to include in the study from an Internet search, and we used a translation program to confirm the appropriate translation for each character. Each condition included a unique set of four Japanese characters. We selected these targets because neither participant had previous exposure to the English translations of the stimuli and because the stimuli were not part of regular curricular learning at their schools. We attempted to equate targets across conditions based on the visual complexity of the stimuli (e.g., number of lines in the character, similar forms), the number of syllables in each word, and the initial sound.

Response Measurement, Interobserver Agreement, and Treatment Integrity

Our dependent variables included the efficacy and efficiency of instruction. Efficacy was determined by correct responding and meeting the mastery criterion. A condition was considered mastered when a participant responded correctly on 11 out of 12 trials for two consecutive sessions. Efficiency of instruction was determined by the total number of sessions and minutes to mastery. Sessions to mastery were counted in 12-trial blocks. Minutes to mastery comprised instructional time recorded with stopwatches; experimenters stopped timing during Kyle's breaks in Experiment 2. Observers collected paper-and-pencil data on the dependent variables during each trial of a 12-trial session. The experimenters scored a correct response when the participant pointed to or touched the target comparison stimulus (S+) within 5 s of the presentation of the auditory stimulus. An incorrect response was scored when the participant pointed to or touched any comparison stimulus (S-) other than the S+ within 5 s of the presentation of the auditory stimulus. Data collectors scored no response if the participant did not touch a comparison stimulus within 5 s of the presentation of the auditory stimulus. In addition, experimenters required the participants to scan the array before the trial began. Scanning was defined as the participant making eye contact with each stimulus in the array without looking away between stimuli.
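
The mastery rule described above lends itself to a simple computational check. The sketch below (the function name and data layout are illustrative, not from the study) applies the 11-of-12-correct, two-consecutive-sessions criterion to a list of per-session counts of correct trials:

```python
def is_mastered(session_scores, min_correct=11, consecutive=2):
    """Return True once `consecutive` sessions in a row each contain
    at least `min_correct` correct trials (out of 12)."""
    run = 0
    for correct in session_scores:
        run = run + 1 if correct >= min_correct else 0
        if run >= consecutive:
            return True
    return False

# Mastery is met only by two consecutive sessions at 11+/12 correct.
print(is_mastered([8, 10, 11, 12]))  # True
print(is_mastered([8, 11, 10, 11]))  # False
```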

A second observer independently collected data on participants' responding either within the session or from a video recording during 63% of Kyle's sessions and 89% of Cassie's sessions in Experiment 1 and 59% and 95% of Kyle's and Cassie's sessions in Experiment 2, respectively. We calculated trial-by-trial interobserver agreement by dividing the number of trials in which the observers recorded the same behavior by the total number of trials in a session, and converted the ratio to a percentage. Mean agreement for all dependent variables across conditions in Experiment 1 was 98% (range: 75%-100%) for Kyle and 99% (range: 83%-100%) for Cassie. In Experiment 2, mean agreement for all dependent variables across all conditions was 98% (range: 75%-100%) for Kyle and 97% (range: 75%-100%) for Cassie.
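
The trial-by-trial agreement calculation can be expressed directly. This is an illustrative sketch; the response codes (C = correct, I = incorrect, N = no response) are assumed shorthand rather than the study's actual data format:

```python
def trial_by_trial_ioa(record1, record2):
    """Trial-by-trial interobserver agreement: the percentage of
    trials on which both observers scored the same response."""
    if len(record1) != len(record2):
        raise ValueError("observer records must cover the same trials")
    agreements = sum(a == b for a, b in zip(record1, record2))
    return 100 * agreements / len(record1)

# Hypothetical 12-trial session in which observers disagree on one trial:
obs1 = ["C", "C", "I", "C", "N", "C", "C", "C", "I", "C", "C", "C"]
obs2 = ["C", "C", "I", "C", "N", "C", "C", "I", "I", "C", "C", "C"]
ioa = trial_by_trial_ioa(obs1, obs2)  # 11/12 trials agree, about 91.7%
```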

Observers collected treatment integrity data during baseline and experimental sessions throughout both experiments. An instance of treatment integrity was scored if the experimenter (a) placed the S+ in the position specified by the data sheet, (b) waited for the participant to scan the array or prompted the participant to scan the array if 5 s elapsed, (c) delivered the correct auditory stimulus, (d) waited up to 5 s for the participant to respond, (e) provided praise and a token following a correct response (if relevant), (f) withheld reinforcement following an incorrect response (if relevant), and (g) removed the stimuli from the table. We also recorded treatment integrity during programmed errors in two conditions. An error of omission was performed with integrity when the experimenter withheld reinforcement for a correct response; an error of commission was performed with integrity when an incorrect response produced praise and a token.

We calculated treatment integrity by dividing the number of trials implemented with integrity by the total number of trials in a session and converted the ratio to a percentage. In Experiment 1, treatment integrity data were collected for 63% of Kyle's sessions and 89% of Cassie's sessions. Mean treatment integrity was 99% (range: 91%-100%) for Kyle and 100% for Cassie. The observers recorded treatment integrity in Experiment 2 for 59% and 95% of Kyle's and Cassie's sessions, respectively. Mean treatment integrity was 99% (range: 91%-100%) for Kyle and 99% (range: 91%-100%) for Cassie. Only one error occurred in the high-integrity condition across all trials and experiments for both participants.

Experimental Design

We evaluated the effect of programmed errors on the acquisition of AVCD using an adapted alternating treatments design (Sindelar, Rosenberg, & Wilson, 1985) within a concurrent multiple baseline design across participants. The order of conditions was pseudorandom: one session of each of the four conditions occurred within a session block before the order of the next session block was randomized.
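
The blocked pseudorandom ordering can be sketched as follows; the function name, seed, and condition labels are illustrative:

```python
import random

def block_order(conditions, n_blocks, seed=0):
    """Pseudorandomly order conditions within each session block so
    that every condition runs once before the next block begins."""
    rng = random.Random(seed)
    order = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)
        order.extend(block)
    return order

conditions = ["high integrity", "omission", "commission", "control"]
schedule = block_order(conditions, n_blocks=3)
# Every consecutive group of four contains each condition exactly once.
```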

Experiment 1

The purpose of this experiment was to compare the effects of programmed integrity errors during AVCD training on skill acquisition for two typically developing individuals. Throughout the experiment, our goal was to make errors of omission and commission at (a) a consistent percentage of trials across conditions and stimuli, (b) a consistent percentage of errors across participants, and (c) an overall integrity level of approximately 80%. Based on the performance of the participants and our goal to match the integrity errors across conditions and participants, the obtained percentage of errors in the omission and commission conditions was 17% to 18%.

Integrity errors were randomly distributed across trials of the omission and commission conditions; refer to Table 2 for the percentage of trials on which a response contacted reinforcement across conditions. Error trials were determined prior to each session and were based on (a) our goal percentage of programmed errors for the study and (b) the obtained error percentage for sessions and stimuli. We staggered programmed errors so that they occurred throughout the session (e.g., Trials 1, 4, and 9) rather than in succession across trials (e.g., Trials 1, 2, and 3). If the experimenter was unable to make an integrity error on a trial because of the participant's response, the programmed integrity error was moved to the next trial with the same stimulus. If the same stimulus did not occur in the remainder of the session, an error was made on the stimulus with the lowest percentage of obtained errors. Evaluating the effects of treatment integrity near 80% on performance allowed for an examination of acquisition under what might be considered high, or "acceptable," integrity in a variety of settings (Cook et al., 2015), including educational settings (e.g., classrooms).
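
One way the staggered error trials could be selected is by rejection sampling, as in this minimal sketch (the authors do not specify their selection algorithm, so the procedure and seed here are assumptions):

```python
import random

def schedule_error_trials(n_trials, n_errors, seed=1):
    """Choose which trials receive a programmed integrity error,
    staggered so that no two error trials are consecutive."""
    rng = random.Random(seed)
    while True:  # redraw until the spacing constraint is satisfied
        trials = sorted(rng.sample(range(1, n_trials + 1), n_errors))
        if all(b - a > 1 for a, b in zip(trials, trials[1:])):
            return trials

# e.g., two error trials in a 12-trial session, never back to back
error_trials = schedule_error_trials(12, 2)
```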

Procedure

The experimenter conducted four to 10 sessions per day, 2 days per week. Each session consisted of 12 trials. Each condition included a unique set of four Japanese characters presented three times per session in a random order. Each stimulus was an S+ during three trials and an S- in the remainder of the trials. We pseudorandomly rotated the position of stimuli so each stimulus was in a different position in the array across trials, and the S+ was not placed in the same position on more than two consecutive trials. We also alternated the presentation of stimuli so that the same S+ was not presented on more than two consecutive trials.
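
The trial-sequencing constraints above (each stimulus serving as the S+ three times, with neither the same S+ nor the same S+ position repeated on more than two consecutive trials) could likewise be satisfied by rejection sampling. In this illustrative sketch, "rock" is a placeholder label; "man," "deer," and "twig" are translations reported in the study:

```python
import random

def build_trial_sequence(stimuli, reps=3, seed=2):
    """Assign an S+ (and its array position) to each trial so that every
    stimulus is the target `reps` times, the same S+ never serves as the
    target on more than two consecutive trials, and the S+ never occupies
    the same array position on more than two consecutive trials."""
    rng = random.Random(seed)
    n = len(stimuli) * reps
    while True:  # redraw until both run-length constraints hold
        targets = [s for s in stimuli for _ in range(reps)]
        rng.shuffle(targets)
        positions = [rng.randrange(len(stimuli)) for _ in range(n)]
        ok_targets = all(not (targets[i] == targets[i - 1] == targets[i - 2])
                         for i in range(2, n))
        ok_positions = all(not (positions[i] == positions[i - 1] == positions[i - 2])
                           for i in range(2, n))
        if ok_targets and ok_positions:
            return list(zip(targets, positions))

trials = build_trial_sequence(["man", "deer", "twig", "rock"])
```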

During each trial, the experimenter presented four stimuli in a horizontal array in front of the participant, waited for the participant to scan the array or prompted the participant to scan the array with a point and "Look" if 5 s elapsed without independent scanning, delivered the auditory stimulus (e.g., "Man"), and waited up to 5 s for a response. The experimenter conducted sessions until correct responding reached the mastery criterion in each condition (i.e., two consecutive sessions with correct responding on 11 or more trials). Following mastery, the remaining conditions with unmastered stimuli were rotated. The evaluation continued until the participant's responding reached the mastery criterion in all conditions other than control.

To identify preferred items for inclusion in the study, the participants vocally nominated food and tangible items (e.g., tablet, fish cracker, chocolate candy). The participants selected a group of items prior to each day's sessions based on available edibles and activities. If the participants did not vocally nominate items, or if they specifically requested options, the instructor presented several edibles and activities until the participants vocally requested preferred items to include in the session room. Kyle and Cassie had previous exposure to token economies; therefore, token training did not occur in the context of this study. The experimenters explained when tokens would be exchanged (i.e., after each session) and the back-up reinforcers for which tokens were exchanged (i.e., a small edible or an additional 15 s of break time). In addition, participants received a noncontingent (i.e., not based on performance) 3-min break following each session. During this 3-min break, participants were able to access any item or activity in the session room (e.g., play food, dinosaurs) plus edible items for which they exchanged tokens. The type and number of activities available in the session room varied based on participant requests. Kyle typically accessed play food, car ramps, building block sets and manuals, a tablet, and action figures with preference for these items shifting across sessions. Cassie consistently preferred access to a tablet during breaks.

Baseline. No programmed consequences were delivered for correct or incorrect responses. Sessions included mastered tasks (e.g., tacts of animals) interspersed every three trials, and the experimenter provided praise and a token following correct responses to mastered tasks. At the end of each session, the participant received a 3-min break and exchanged tokens.

High Integrity. If the participant responded correctly, the experimenter provided praise and a token. If the participant engaged in an incorrect response, the experimenter did not provide any prompts, removed the stimuli, and initiated the next trial. Thus, the experimenter implemented a trial-and-error procedure to evaluate the specific effect of errors with reinforcement delivery on skill acquisition. At the end of the session, the participant received a 3-min break and exchanged tokens. This condition did not include any programmed integrity errors.

Errors of Omission. The procedures and consequences were similar to those in the high-integrity condition with one exception. During trials with programmed integrity errors, the experimenter withheld praise and a token following a correct response. That is, the experimenter cleared the array and initiated the next trial following a correct response during error trials.

Errors of Commission. The procedures were similar to those described in the high-integrity condition with the exception that the experimenter provided reinforcement for an incorrect response during trials with programmed integrity errors. That is, the experimenter delivered praise and a token following an incorrect response during error trials.

Control. The procedures and consequences were similar to those in baseline except that mastered tasks were not interspersed. Thus, token boards were on the table, and tokens were available, but not provided, in the control condition in the integrity comparison. Participants accessed a 3-min break following each session.

Results and Discussion: Experiment 1

The top panel of Fig. 1 shows Kyle's skill acquisition data for Experiment 1. In the high-integrity condition, Kyle required seven sessions and 35 min of instruction to reach the mastery criterion. His responding reached the mastery criterion in eight sessions and 45 min of instruction in the commission condition. In contrast, Kyle required 18 sessions and 86 min of instruction to reach the mastery criterion in the omission condition. The experimenter made errors in 17% of the trials in both the omission and commission conditions. Kyle performed at or below chance level in the control condition. Refer to Table 3 for a summary of participants' results and the percentage of integrity errors across conditions. Although both types of errors led to slower acquisition compared to the high-integrity condition, errors of omission had a greater overall impact on Kyle's learning than did errors of commission.

The results for Cassie's integrity comparison are displayed in the bottom panel of Fig. 1. She met the mastery criterion in the high-integrity condition in four sessions with 15 min of instruction. Cassie required eight sessions to meet the mastery criterion in the omission and commission conditions with errors made during 18% of trials in both conditions. Cassie received 32 and 31 min of instruction in the omission and commission conditions, respectively (see Table 3). Thus, conditions with programmed integrity errors required double the number of sessions and instructional time to reach the mastery criterion compared to the high-integrity condition. Responding in the control condition remained at or below chance level. Errors of omission and commission appeared to affect Cassie's acquisition equally by making instruction less efficient. The results of Experiment 1 showed that 17% to 18% integrity errors impacted acquisition, although the specific effects of types of integrity errors differed somewhat across participants. The percentage of programmed integrity errors in Experiment 1 was lower than the percentages evaluated in prior studies (e.g., 33% of trials; Noell et al., 2002), yet detriments to learning were still observed. It remains unclear how slightly more frequent integrity errors of omission or commission affect skill acquisition and whether a particular type of integrity error may be more detrimental when more frequent errors occur.

The purpose of Experiment 2 was to evaluate the effects of increased percentages of programmed errors of omission and commission with the same typically developing participants from Experiment 1. Unlike in Experiment 1, we did not attempt to match the percentages of errors across participants or targets (see Table 4). In Experiment 2, we programmed errors on 30% of trials to compare our results to previous studies with similar error percentages (e.g., Carroll et al., 2013; DiGennaro Reed et al., 2011). Based on participant responding, our obtained programmed errors ranged from 20% to 30% across participants and conditions.

Experiment 2

Procedure

The experimenter conducted three to nine sessions per day, one to two days per week. Each session consisted of 12 trials during which we presented four new targets in sets of Japanese characters (see Table 1) in a pseudorandom order. We selected targets for inclusion in stimulus sets in the same manner as Experiment 1, and neither participant had previous exposure to the English translations of the targeted characters.

The experimenter conducted sessions until correct responding reached the mastery criterion in each condition (i.e., two consecutive sessions with correct responding on 11 or more trials). The evaluation continued until the participant's responding reached the mastery criterion in all conditions other than control, or until the participant met a discontinuation criterion of four times the number of sessions required to reach mastery in the high-integrity condition. Any conditions that met the discontinuation criterion (with the exception of control) were exposed to instruction using the high-integrity condition procedures until the participant's responding met the mastery criterion. The training procedures matched those used in Experiment 1.
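
The discontinuation rule can be stated as a one-line check; the function name is illustrative, and the example values come from Kyle's Experiment 2 results reported below:

```python
def should_discontinue(sessions_run, high_integrity_sessions):
    """Discontinue an error condition once it has run four times the
    number of sessions the high-integrity condition needed to master."""
    return sessions_run >= 4 * high_integrity_sessions

# Kyle's high-integrity condition took 11 sessions, so the error
# conditions were discontinued after 44 sessions without mastery.
print(should_discontinue(44, 11))  # True
```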

A variation in Kyle's sessions involved the use of differential reinforcement of alternative behavior (DRA) due to elevated levels of problem behavior observed throughout Experiment 1 and baseline sessions of Experiment 2. Kyle was taught to request a break when a laminated card flipped from red to green, indicating he behaved appropriately throughout the preceding trial. The DRA procedure was implemented with instructional materials and was used during all sessions and conditions throughout the integrity manipulation in Experiment 2.

Results and Discussion: Experiment 2

The results of Experiment 2 for Kyle and Cassie are displayed in Fig. 2; the top panel displays Kyle's data. He required 11 sessions and 49 min of instruction to master the targets in the high-integrity condition. The experimenter was not able to match the percentage of errors made in the omission and commission conditions due to Kyle's patterns of responding in Experiment 2. Thus, the experimenter conducted errors of omission on 20% of trials and errors of commission on 30% of trials. Despite the difference in the percentage of errors across conditions, both the omission and commission conditions met the discontinuation criterion because Kyle failed to meet the mastery criterion in either condition following 44 sessions and 170 min (omission) and 186 min (commission) of instruction. Following the introduction of high-integrity instruction in the error conditions, Kyle required slightly more high-integrity sessions (11) and instructional time (40 min) to master the targets in the omission condition compared to the commission condition (eight sessions in 32 min; see Table 3). Thus, in comparison to Kyle's responding in Experiment 1, higher percentages of both types of errors were detrimental to Kyle's acquisition. Nevertheless, targets that were initially exposed to errors of omission and commission were subsequently acquired following exposure to high-integrity instruction. Prior exposure to either type of integrity error did not appear to have a lasting effect on learning because Kyle acquired the targets after exposure to high-integrity instruction in approximately the same number of training sessions and amount of time as the targets that were exposed to high-integrity instruction in Experiment 2. Responding in the control condition remained at chance level.

An analysis of Kyle's error patterns during training conditions may help elucidate the failure to acquire targets during the omission and commission conditions in Experiment 2. We observed a position bias during sessions of the omission condition. Kyle selected the stimulus in the fourth position (i.e., rightmost position) of the four-stimulus array during 58% of trials across sessions. During the commission condition, Kyle consistently made errors to two of the four target stimuli. That is, he selected the character for deer in the presence of the auditory sample "twig" and selected the character for twig in the presence of the auditory sample "deer." These errors were reinforced during a proportion of trials due to the schedule of integrity errors in each commission session (see Table 4).

Cassie met the mastery criterion in the high-integrity condition in two sessions, the fewest number of sessions possible based on our mastery criterion (Fig. 2; bottom panel). Stimuli in the high-integrity condition were mastered in 7 min of instruction. Next, she met the mastery criterion in the commission condition in 14 sessions and 62 min. The targets in the omission condition were mastered in 16 sessions and 67 min of instruction. The experimenter made errors in 21% and 22% of the trials in the commission and omission conditions, respectively (see Table 3). Responding in the control condition remained near chance level. Compared to Cassie's performance in Experiment 1, the increased percentage of errors impacted her acquisition and the length of instructional time required to reach the mastery criterion in both the omission and commission conditions. Similar to her results in Experiment 1, the effects of reinforcement errors on learning during omission and commission conditions were similar.

General Discussion

We investigated the impact of errors of omission and commission of reinforcement on the efficacy and efficiency of instruction for AVCD skills with two typically developing children. In both experiments, decrements to treatment integrity in the form of errors of omission and commission of reinforcement affected the efficiency with which participants acquired conditional discriminations. In addition, integrity errors reduced the efficacy of the trial-and-error procedure in some of the conditions for Kyle.

In Experiment 1, integrity errors during 17% to 18% of trials conducted with Kyle and Cassie reduced the efficiency of intervention. Kyle acquired the targets in all conditions except the control condition, and commission errors did not appear to influence his learning. However, he required more than double the sessions to acquire the targets in the errors of omission condition compared to the high-integrity and errors of commission conditions. Thus, errors of omission (i.e., not receiving reinforcement following a correct response) on 18% of trials resulted in a slower rate of acquisition. Cassie acquired the conditional discriminations in the errors of omission and commission conditions in double the number of sessions required with high-integrity instruction; however, differential effects on acquisition according to the type of error were not observed.

The findings of Experiment 1 add to a growing body of literature on treatment integrity and skill acquisition by comparing the effects of errors of omission and commission. Although prior studies have examined the effects of errors of omission of prompts (e.g., Holcombe et al., 1994; Noell et al., 2002) or commission of reinforcement (DiGennaro Reed et al., 2011) on skill acquisition, the present investigation is the first to directly compare the outcomes of these types of errors of reinforcement delivery on learning. Our results suggest that even relatively small percentages of either type of error (i.e., 17% for Kyle and 18% for Cassie) can affect skill acquisition for some learners. Although Cassie's results showed that both types of errors reduced the efficiency of instruction, neither type was more detrimental than the other. In contrast, errors of omission resulted in less efficient acquisition for Kyle than did errors of commission in Experiment 1. However, these preliminary results should be replicated with additional participants to further evaluate the effects of each type of error on learning. Additional studies that compare specific types of treatment integrity errors can provide information regarding the types of errors that should be reduced during training with therapists, teachers, and caregivers.

A corollary finding for Kyle was that reductions in treatment integrity influenced the likelihood of problem behavior during instruction, and problem behavior varied depending on the types of integrity errors that occurred. Kyle engaged in more frequent problem behavior in the errors of omission and control conditions, although at lower frequencies in Experiment 2. These conditions were associated with the lowest amount of reinforcement. For example, chance-level responding in a four-stimulus array is approximately 25% correct, and participants are likely to respond at chance level in initial training sessions. Withholding reinforcement (errors of omission) on approximately 18% of trials left few reinforced responses per session in the errors of omission condition (i.e., approximately 7% of trials) until at least some conditional discriminations were acquired. In comparison, making errors of commission on 18% of trials produced a higher density of reinforcement because both chance-level correct responding and programmed errors (i.e., approximately 43% of trials in total) produced reinforcement. Participants who are more likely to engage in problem behavior under lean schedules of reinforcement may engage in more problem behavior following particular types of integrity errors (e.g., omission). Additional research on the effects of treatment integrity errors on problem behavior during skill acquisition may help elucidate the relationship between integrity errors, problem behavior, and learning.
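The reinforcement-density arithmetic above can be sketched in a few lines. This is an illustrative calculation only, not part of the study's protocol; the function name and structure are our own.

```python
# Illustrative sketch of obtained reinforcement density for a learner
# responding at chance in a four-stimulus array (25% correct) with
# integrity errors programmed on 18% of trials. Names are hypothetical.

def reinforcement_density(p_correct, error_rate, error_type):
    """Approximate proportion of trials ending in reinforcement.

    omission:   reinforcement is withheld after correct responses on
                `error_rate` of trials, so only the remaining correct
                responses are reinforced.
    commission: every correct response is reinforced, plus incorrect
                responses on `error_rate` of trials.
    """
    if error_type == "omission":
        return round(p_correct - error_rate, 2)
    elif error_type == "commission":
        return round(p_correct + error_rate, 2)
    raise ValueError(error_type)

print(reinforcement_density(0.25, 0.18, "omission"))    # -> 0.07 (~7% of trials)
print(reinforcement_density(0.25, 0.18, "commission"))  # -> 0.43 (~43% of trials)
```

These values match the densities reported above and make concrete why the omission condition presented a much leaner schedule early in training.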

In Experiment 2, we investigated the effects of increased errors across trials to compare participants' behavior under different levels of integrity. In Kyle's sessions, we were able to commit programmed treatment integrity errors on 20% of trials in the omission condition and 30% of trials in the commission condition based on his responding. The increased programmed errors in Experiment 2 impacted both the efficacy and the efficiency of instruction. In fact, Kyle did not acquire the conditional discriminations in either the errors of omission or the errors of commission condition despite receiving four times the number of sessions necessary to master targets in the high-integrity condition. Therefore, increasing the percentage of errors of omission and commission from 17% in Experiment 1 to 20% and 30% in Experiment 2, respectively, had a detrimental effect on Kyle's learning, and instruction was not efficacious. Nevertheless, Kyle did learn both sets of targets in the error conditions following exposure to high-integrity instruction. Furthermore, prior integrity errors did not appear to have long-term effects on Kyle's acquisition.

Kyle's results in Experiment 2 contradicted those of Hirst and DiGennaro Reed (2015) and Jenkins et al. (2015), who found delayed acquisition during high-integrity instruction following exposure to integrity errors for some participants. Because Kyle was exposed to fewer errors (20% and 30%) in comparison to participants in Jenkins et al. (50% and 100%), his results suggest that the long-term effects of treatment integrity errors may vary depending on the prevalence of integrity errors during early instruction. More research is needed to further examine the long-term effects of various percentages of treatment integrity errors on acquisition.

The results of our experiments add to the existing literature on treatment integrity and skill acquisition. Prior studies evaluating treatment integrity during skill acquisition investigated the effects of programmed errors during 100% (DiGennaro Reed et al., 2011; Jenkins et al., 2015), 67% (Carroll et al., 2013), 50% (DiGennaro Reed et al., 2011; Jenkins et al., 2015), 33% (Noell et al., 2002), and 25% (Hirst & DiGennaro Reed, 2015) of trials. However, we investigated smaller decrements to integrity. In fact, most of the errors in our evaluation approximated 80% integrity, which may be described as satisfactory or acceptable levels in a variety of educational settings (Cook et al., 2015). The current studies demonstrate that even "acceptable" levels of integrity may make an intervention less efficient or result in an ineffective procedure. In Experiment 1, the participants were exposed to approximately 82% integrity, and double the amount of instruction was necessary to acquire the conditional discriminations in the errors of omission (Kyle and Cassie) and errors of commission (Cassie only) conditions. Based on the results of our study, future studies should include a more fine-grained parametric analysis of small, incremental changes in treatment integrity to determine if there is a minimum level of acceptable integrity. In educational settings, it is likely that instructors will make errors (e.g., Carroll et al., 2013). Thus, it could be beneficial to identify the level at which errors of omission and commission have little-to-no detrimental effects on outcomes for learners.

Both participants required more sessions to acquire the conditional discriminations taught in the errors of omission and commission conditions in Experiment 2 compared to their own data in Experiment 1. It is possible that repeated exposure to integrity errors influenced outcomes across experiments rather than, or in addition to, the increases in the percentage of errors across experiments. Previous research shows that prior exposure to instructional conditions can influence the efficiency of instruction (Coon & Miguel, 2012; Hirst & DiGennaro Reed, 2015). In their analogue study with undergraduate students, Hirst and DiGennaro Reed (2015) found that a longer history with inaccurate feedback led to carryover effects for some participants when they were later exposed to accurate feedback. However, Coon and Miguel (2012) found that prior exposure to a particular teaching arrangement improved rather than reduced instructional efficiency. Nevertheless, repeated exposure to integrity errors across stimulus sets has not been evaluated in prior research. Researchers could examine the effect of prolonged exposure to integrity errors across stimulus sets on learning.

Several limitations of the current investigation could be addressed in future translational and applied studies comparing errors of omission and commission of reinforcement. Because the current experiments were translational, we included typically developing children and a task that was not relevant to their education. Although the trial-and-error procedure included in our study has been used to teach AVCD and other skills to typically developing children and individuals with ASD or developmental disabilities (e.g., Saunders & Spradlin, 1993; Schilmoeller et al., 1979), the implications of the current findings for other populations of interest (i.e., children with ASD and developmental disabilities) and other skills are unknown. Future research could compare the effects of errors of omission and commission with other learners and more relevant skills, although precautions (e.g., training stimuli in all conditions to mastery following the initial evaluation of integrity errors) could be taken to minimize any potential negative impact of these comparisons on learners' educational outcomes. In addition, replications with children of varying ages are warranted.

Another limitation of the current investigation is that we were unable to match the number of errors across conditions, stimuli, and participants. Unlike previous studies in which instructional stimuli were exposed to both errors of omission and commission (e.g., Hirst & DiGennaro Reed, 2015), targets in the omission and commission conditions were each exposed to only one type of error. Studies with combined errors were able to maintain specific error percentages (e.g., 50%, 75%) because an error could always be made on a programmed trial regardless of whether the participant responded correctly or incorrectly. Our isolated error conditions prevented this arrangement because our ability to make a programmed integrity error depended on the participant's response. The probability of being able to commit a programmed error of omission or commission changed as a participant acquired a conditional discrimination. For example, it was easier to make programmed errors of commission at the beginning of the comparison because Cassie and Kyle engaged in more incorrect responses. As they acquired these conditional discriminations, the likelihood that a programmed error of commission could be made decreased. The opposite was the case for errors of omission.
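The dependency described above can be illustrated with a small simulation. This sketch is ours, not part of the study's procedures: it simply counts, for a given accuracy level, the fraction of trials on which each type of programmed error could even be committed.

```python
import random

# Illustrative simulation (not from the study) of why isolated error
# conditions cannot hold error percentages constant: a commission error
# requires an incorrect response, and an omission error requires a
# correct one, so opportunities shift as accuracy improves.

def error_opportunities(p_correct, n_trials=1000, seed=0):
    rng = random.Random(seed)
    correct = sum(rng.random() < p_correct for _ in range(n_trials))
    return {
        "omission": correct / n_trials,            # trials where reinforcement could be withheld
        "commission": (n_trials - correct) / n_trials,  # trials where an error could be reinforced
    }

for p in (0.25, 0.50, 0.90):  # chance-level responding -> acquired
    print(p, error_opportunities(p))
```

As simulated accuracy rises from chance (0.25) toward mastery (0.90), commission opportunities shrink toward zero while omission opportunities grow, mirroring the pattern we encountered with Kyle and Cassie.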

This difficulty was especially apparent with Kyle's stimuli in the commission condition in Experiments 1 and 2 (Table 4). Once Kyle acquired a stimulus, he rarely engaged in incorrect responses to that stimulus; thus, we were unable to commit an error of commission with stimuli that were acquired. In an attempt to make programmed errors across a predetermined number of trials, these programmed errors of commission were then allocated to other stimuli that he had not yet acquired, which resulted in disparate percentages of errors across stimuli. The effects of this unequal error distribution on Kyle's acquisition are not known. One potential solution for similar comparisons in future studies is to replace any mastered targets with novel stimuli and then measure the total number of stimuli acquired within a predetermined number of instructional opportunities to evaluate the efficacy and efficiency of instruction with varying types and levels of treatment integrity errors.

The present investigation could be viewed as an initial step toward comparing the types and amounts of errors that impact learning. The relatively small percentage of isolated errors of omission and commission of reinforcement evaluated in our study slowed or hindered the acquisition of conditional discriminations with typically developing participants for whom no history of difficulties in skill acquisition was noted. The reduction in treatment efficacy and efficiency may be more pronounced in populations who require effective instructional strategies to address skill deficits.

DOI 10.1007/s40732-016-0211-2

Correspondence: Tiffany M. Kodak

kodak@uwm.edu

Acknowledgements We thank Patricia Zemantic, Shaji Haq, and Jacqueline Kammer for their assistance with data collection.

Compliance with Ethical Standards

Conflict of Interest All authors declare that they have no conflict of interest.

Ethical Approval "All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards."

References

Carroll, R. A., Kodak, T., & Fisher, W. W. (2013). An evaluation of programmed treatment-integrity errors during discrete-trial instruction. Journal of Applied Behavior Analysis, 46(2), 379-394. doi:10.1002/jaba.49.

Cook, J. E., Subramaniam, S., Brunson, L. Y., Larson, N. A., Poe, S. G., & St. Peter, C. C. (2015). Global measures of treatment integrity may mask important errors in discrete-trial training. Behavior Analysis in Practice. doi:10.1007/s40617-014-0039-7.

Coon, J. T., & Miguel, C. F. (2012). The role of increased exposure to transfer-of-stimulus-control procedures on the acquisition of intraverbal behavior. Journal of Applied Behavior Analysis, 45(4), 657-666.

DiGennaro Reed, F. D., & Codding, R. S. (2014). Advancements in procedural fidelity assessment and intervention: Introduction to the special issue. Journal of Behavioral Education, 23(1), 1-18. doi:10.1007/s10864-013-9191-3.

DiGennaro Reed, F. D., Reed, D. D., Baez, C. N., & Maguire, H. (2011). A parametric analysis of errors of commission during discrete-trial training. Journal of Applied Behavior Analysis, 44(3), 611-615.

Fryling, M. J., Wallace, M. D., & Yassine, J. N. (2012). Impact of treatment integrity on intervention effectiveness. Journal of Applied Behavior Analysis, 45(2), 449-453. doi:10.1901/jaba.2012.45-449.

Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18(1), 37-50.

Grow, L. L., Carr, J. E., Gunby, K. V., Charania, S. M., Gonsalves, L., Ktaech, I. A., & Kisamore, A. N. (2009). Deviations from prescribed prompting procedures: Implications for treatment integrity. Journal of Behavioral Education, 18(2), 142-156. doi:10.1007/s10864-009-9085-6.

Hirst, J. M., & DiGennaro Reed, F. D. (2015). An examination of the effects of feedback accuracy on academic task acquisition in analogue settings. Psychological Record, 65, 49-65. doi:10.1007/s40732-014-0087-y.

Holcombe, A., Wolery, M., & Snyder, E. (1994). Effects of two levels of procedural fidelity with constant time delay on children's learning. Journal of Behavioral Education, 4(1), 49-73. doi:10.1007/BF01560509.

Jenkins, S. R., Hirst, J. M., & DiGennaro Reed, F. D. (2015). The effects of discrete-trial training commission errors on learner outcomes: An extension. Journal of Behavioral Education, 24(2), 196-209. doi:10.1007/s10864-014-9215-7.

Kodak, T., Clements, A., Paden, A. R., LeBlanc, B., Mintz, J., & Toussaint, K. A. (2015). Examination of the relation between an assessment of skills and performance on auditory-visual conditional discriminations for children with autism spectrum disorder. Journal of Applied Behavior Analysis, 48(1), 52-70. doi:10.1002/jaba.160.

Kodak, T., Clements, A., & LeBlanc, B. (2013). A rapid assessment of instructional strategies to teach auditory-visual conditional discriminations to children with autism. Research in Autism Spectrum Disorders, 7(6), 801-807. doi:10.1016/j.rasd.2013.02.007.

Leon, Y., Wilder, D. A., Majdalany, L., Myers, K., & Saini, V. (2014). Errors of omission and commission during alternative reinforcement of compliance: The effects of varying levels of treatment integrity. Journal of Behavioral Education, 23(1), 19-33. doi:10.1007/s10864-013-9181-5.

Lerman, D. C. (2003). From the laboratory to community application: Translational research in behavior analysis. Journal of Applied Behavior Analysis, 36(4), 415-419. doi:10.1901/jaba.2003.36-415.

McGhan, A. C., & Lerman, D. C. (2013). An assessment of error-correction procedures for learners with autism. Journal of Applied Behavior Analysis, 46(3), 626-639. doi:10.1002/jaba.65.

Noell, G. H., Gresham, F. M., & Gansle, K. A. (2002). Does treatment integrity matter? A preliminary investigation of instructional implementation and mathematics performance. Journal of Behavioral Education, 11(1), 51-67. doi:10.1023/A:1014385321849.

Pence, S. T., & St Peter, C. C. (2015). Evaluation of treatment integrity errors on mand acquisition. Journal of Applied Behavior Analysis, 48(3), 575-589. doi:10.1002/jaba.238.

Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15(4), 477-492. doi:10.1901/jaba.1982.15-477.

Saunders, K. J., & Spradlin, J. E. (1993). Conditional discrimination in mentally retarded subjects: Programming acquisition and learning set. Journal of the Experimental Analysis of Behavior, 60(3), 571-585. doi:10.1901/jeab.1993.60-571.

Schilmoeller, G. L., Schilmoeller, K. J., Etzel, B. C., & LeBlanc, J. M. (1979). Conditional discrimination after errorless and trial-and-error training. Journal of the Experimental Analysis of Behavior, 31(3), 405-420. doi:10.1901/jeab.1979.31-405.

Sindelar, P. T., Rosenberg, M. S., & Wilson, R. J. (1985). An adapted alternating treatments design for instructional research. Education and Treatment of Children, 8(1), 67-76.

St. Peter Pipkin, C., Vollmer, T. R., & Sloman, K. N. (2010). Effects of treatment integrity failures during differential reinforcement of alternative behavior: A translational model. Journal of Applied Behavior Analysis, 43(1), 47-70. doi:10.1901/jaba.2010.43-47.

Vollmer, T. R., Roane, H. S., Ringdahl, J. E., & Marcus, B. A. (1999). Evaluating treatment challenges with differential reinforcement of alternative behavior. Journal of Applied Behavior Analysis, 32(1), 9-23. doi:10.1901/jaba.1999.32-9.

Worsdell, A. S., Iwata, B. A., Hanley, G. P., Thompson, R. H., & Kahng, S. (2000). Effects of continuous and intermittent reinforcement for problem behavior during functional communication training. Journal of Applied Behavior Analysis, 33(2), 167-179.

Samantha C. Bergmann [1], Tiffany M. Kodak [1], Brittany A. LeBlanc [1]

[1] University of Wisconsin-Milwaukee, 2441 E. Hartford Ave, Garland 238E, Milwaukee, WI 53211, USA

Caption: Fig. 1 Percentage correct responses during comparison of acquisition for integrity manipulations for Kyle (top panel) and Cassie (bottom panel) in Experiment 1

Caption: Fig. 2 Percentage correct responses during comparison of acquisition for integrity manipulations for Kyle (top panel) and Cassie (bottom panel) in Experiment 2. In the final phase in Kyle's graph, both omission and commission targets were exposed to high-integrity instruction. The control condition remained in baseline
Table 1 Stimuli assigned to each condition across
participants and experiments

Condition        Experiment 1: Stimuli     Experiment 2: Stimuli

                 Participant               Participant

                 Kyle         Cassie       Kyle         Cassie

Control          Big          Good         Leaf         Big
                 Hot          Father       Star         Hot
                 Sky          Love         Time         Sky
                 True         Water        Wind         True
High integrity   Chair        Cat          Dew          Chair
                 Cloud        Goat         Kite         Cloud
                 Life         Rabbit       Rock         Life
                 Meat         Tree         Sheep        Meat
Omission         Core         Bird         Mouse        Core
                 Fire         Corn         Nose         Fire
                 Hand         Flower       Pig          Hand
                 Train        Man          Sand         Train
Commission       Cold         Bear         Deer         Cold
                 Dog          Cow          Foot         Dog
                 Hair         Friend       Sun          Hair
                 Sad          Mother       Twig         Sad

Table 2 Percentage of correct and incorrect responses
reinforced in each condition in both experiments

Participant   Condition        Experiment 1

                               Correct      Incorrect
                               responses    responses

Kyle          High integrity   100          0
              Omission         83           0
              Commission       100          17
              Control          0            0
Cassie        High integrity   100          0
              Omission         82           0
              Commission       100          18
              Control          0            0

Participant   Condition        Experiment 2

                               Correct      Incorrect
                               responses    responses

Kyle          High integrity   100          0
              Omission         80           0
              Commission       100          30
              Control          0            0
Cassie        High integrity   100          0
              Omission         78           0
              Commission       100          21
              Control          0            0

Table 3 Summary of acquisition data for Experiments 1 and
2 with total error trials and overall percentage of
obtained errors

Participant    Condition        Experiment 1

                                Sessions-      Minutes-
                                to-mastery     to-mastery

Kyle           High integrity   7              35
               Omission         18             86
               Commission       8              45
Cassie         High integrity   4              15
               Omission         8              32
               Commission       8              31

Participant    Condition        Experiment 1         Experiment 2

                                Obtained integrity   Sessions-
                                errors (%)           to-mastery

Kyle           High integrity   N/A                  11
               Omission         17                   44(+11) (a)
               Commission       17                   44(+8) (a)
Cassie         High integrity   N/A                  2
               Omission         18                   16
               Commission       18                   14

Participant    Condition        Experiment 2

                                Minutes-       Obtained integrity
                                to-mastery     errors (%)

Kyle           High integrity   49             N/A
               Omission         170(+40) (b)   20
               Commission       186(+32) (b)   30
Cassie         High integrity   7              N/A
               Omission         67             22
               Commission       62             21

(a) The numbers in the parentheses represent the number
of high-integrity sessions required to acquire the targets.
(b) The numbers in the parentheses represent the additional
minutes of high-integrity instruction to acquire the targets

Table 4 Number of errors per stimulus and percentage of errors
per stimulus for omission and commission conditions

Participant   Condition    Experiment 1

                           Stimulus   Number            Obtained
                                      of error trials   integrity
                                                        errors (%)

Kyle          Omission     Core       10                19
                           Fire       10                19
                           Hand       7                 13
                           Train      10                19
              Commission   Cold       2                 8
                           Dog        4                 17
                           Hair       7                 29
                           Sad        3                 13
Cassie        Omission     Bird       4                 17
                           Corn       4                 17
                           Flower     4                 17
                           Man        5                 21
              Commission   Bear       5                 21
                           Cow        5                 21
                           Friend     4                 17
                           Mother     5                 21

Participant   Condition    Experiment 2

                           Stimulus   Number            Obtained
                                      of error trials   integrity
                                                        errors (%)

Kyle          Omission     Mouse      27                21
                           Nose       27                21
                           Pig        26                20
                           Sand       26                20
              Commission   Deer       68                52
                           Foot       16                12
                           Sun        9                 7
                           Twig       66                50
Cassie        Omission     Core       12                25
                           Fire       11                23
                           Hand       11                23
                           Train      10                21
              Commission   Cold       5                 12
                           Dog        12                29
                           Hair       8                 19
                           Sad        12                29

Note. Percentages are rounded to the nearest whole number.
COPYRIGHT 2017 Springer

 
Article Details
Title Annotation:ORIGINAL ARTICLE
Author:Bergmann, Samantha C.; Kodak, Tiffany M.; LeBlanc, Brittany A.
Publication:The Psychological Record
Article Type:Report
Date:Mar 1, 2017
Words:8242