
Recent innovations in small-N designs for research and practice in professional school counseling.

This article illustrates an innovative small-N research design that researchers and practitioners can use to investigate questions of interest in professional school counseling. The distributed criterion (DC) design integrates elements of three classic small-N research designs--the changing criterion, reversal, and multiple baseline. The DC design is well suited to situations in which students or school counselors must allocate, prioritize, and adjust time or effort to complete multiple tasks in response to changing situational demands. The article includes practical examples of how the DC design can be used by practitioners.


Professional school counselors, teachers, and students often address multiple social, psychological, and academic concerns. These issues can overlap and vary in intensity across a wide range of contexts. For example, a school counselor intervening with a disruptive and inattentive student might aim to reduce the frequency of aggressive, antisocial behavior toward peers in a specific context, such as the school cafeteria. Concurrently, the counselor and teacher might focus on increasing that student's on-task behavior in the classroom, ostensibly leading to enhanced academic performance. In such cases, professionals must prioritize and manage their intervention efforts and time. What, for example, are the primary and secondary foci of intervention for an individual student? After establishing these foci, school counselors must allocate resources accordingly to ensure intervention consistency or fidelity. Professionals also must monitor and evaluate intervention effectiveness with an eye toward determining when and how to shift intervention from one emphasis to other foci.


For nearly 50 years, researchers in counseling and related disciplines have used small-N (also known as single-subject) research designs to evaluate the efficacy of interventions designed to promote change over time in individuals. Small-N designs have proven useful for evaluating intervention effects in studies that (a) include one or a few students; (b) require ongoing, repeated measures of individual students' progress across time; and (c) apply interventions that seek to improve short-term and long-term outcomes. In fact, small-N designs offer viable alternatives for demonstrating empirically the impact of interventions in studies that do not lend themselves to large-N, true- and quasi-experimental group research designs (Cowan, Hennessey, Vierstra, & Rumrill, 2004). However, with notable exceptions, such as the multiple baseline, small-N designs usually target a single behavior or dependent variable. Consequently, researchers would benefit from designs, and practitioners would benefit from approaches, that accommodate how students change across time when faced with shifting conditions and multiple tasks.

The scientific status of research on individual change emerged in the 1960s, when investigators developed numerous small-N designs, particularly the reversal, multiple baseline, and changing criterion (CC). Concurrently, applied behavior analysis emerged as a behavior change technology and as a methodology for evaluating experimental control of interventions that promote change in individuals over time (see, e.g., Baer, Wolf, & Risley's 1968 seminal article in the inaugural issue of the Journal of Applied Behavior Analysis). As Hartmann and Hall (1976) noted, "The development of experimental designs to demonstrate control in individual case studies has been a crucial factor in bringing about scientific status to the study of individuals" (p. 527).

Although some investigators have utilized small-N designs, Foster, Watson, Meeks, and Young (2002) advised practitioners and researchers in professional school counseling to adopt and increase their use of these designs, citing sound rationales: the need to (a) document the type of work that school counselors perform; (b) demonstrate if and how school counseling interventions produce desirable outcomes; (c) demonstrate pragmatically and empirically the efficacy of interventions; (d) communicate clearly and accountably to other professionals, the public, and consumers; and (e) promote professionalism within and outside the field of school counseling. Most recently, in his "Editor's Top Ten Wish List," Lapan (2005) acknowledged the utility of small-N designs but lamented the "very small handful of studies" (p. iii) submitted to Professional School Counseling.

Few new small-N designs have emerged since the fertile period, four decades ago, when design innovations flourished (McDougall, Hawkins, Brady, & Jenkins, in press). Two decades ago, Kazdin (1982) concluded, "Few variations of the changing criterion design have been developed" (p. 159), and no CC design innovations emerged in the two decades after Kazdin reached this conclusion (McDougall, Smith, Black, & Rumrill, 2005). Recently, the first author developed, named, and applied two innovations of the classic CC design: the range-bound changing criterion design (McDougall, 2005) and the distributed criterion design (McDougall, in press). We believe that the latter design offers researchers a viable option for investigating multifaceted questions of interest in school counseling and related fields. Thus, the primary purpose of this article is to introduce the distributed criterion (DC) design to the field of professional school counseling. First, however, we review the classic CC because it constitutes a core element of the DC design and because, we believe, the CC is an elegant, underutilized design and strategy.


The Classic Changing Criterion Design

The classic CC research design was first described by Sidman (1960), named by Hall (1971), and illustrated by Weis and Hall (1971). Although it has been used less frequently than multiple baseline and reversal designs, researchers have applied the CC design in small-N studies that targeted behaviors of children and teenagers. For example, Flood and Wilder (2004) increased the amount of time that a boy with separation anxiety disorder was able to be away from his mother; Gorski and Westbrook (2002) increased a teenager's compliance to a medical regimen; and Hall and Fox (1977) improved children's math productivity. Researchers also have used the CC design to foster better outcomes for adults. Changing criterion research designs have shown that adult participants can reduce their cigarette smoking (Weis & Hall) and coffee drinking (Foxx & Rubinoff, 1979) and increase their leisure reading (Skinner, Skinner, & Armstrong, 2000).

The classic CC design is most appropriate for evaluating intervention effects that aim to change (i.e., accelerate or decelerate) one target behavior of one research participant, in a systematic, stepwise fashion. The CC is useful, both as an intervention strategy and as a research design, when the nature of the target behavior and the corresponding intervention dictate a series of small systematic changes, rather than large-scale or "all-at-once" change.

Figure 1 depicts hypothetical data from a CC intervention designed to increase, in a sequential way, the number of correct answers that a student (Maile) wrote, for previously mastered, single-digit multiplication facts (e.g., 9 x 8), during daily, 1-minute, warm-up periods in math. The teacher designed this activity to promote automaticity (high rates of errorless response). However, Maile typically drew pictures, which she wanted to show her teacher and other adults, instead of writing answers during math warm-ups. In fact, baseline data in Figure 1 indicate that Maile completed zero or one problem per warm-up, even though she was capable academically. Thus, the teacher consulted with the school counselor to design an effective intervention. Initially, together they considered using an activity reinforcer (i.e., 1 minute of drawing time contingent upon Maile completing a specific number of problems). However, the counselor and teacher decided, instead, to capitalize on Maile's preference for immediate feedback and recognition.


The intervention used goal setting, self-graphing, and a CC approach. First, the educators helped Maile set an explicit, immediately attainable performance criterion. This initial criterion was based on Maile's most recent performance (i.e., how many problems she completed typically during baseline). Second, the adults showed Maile how to mark on a graph, immediately after each warm-up, the number of problems she had completed. As depicted in the first intervention phase, Maile met the initial performance criterion of answering at least one problem, and she did so for three consecutive days. Then the educators helped Maile "up the ante" to at least two correct responses per warm-up for the second phase of the intervention. Maile continued to meet the "changing criterion" during subsequent intervention phases with one exception (see Day 13 in Figure 1).
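The stepwise logic of this intervention lends itself to a short sketch. The code below is a minimal, hypothetical illustration of one CC rule (raise the criterion by one after three consecutive days of meeting it); the function name and the daily scores are our own, not data or procedures from the article.

```python
# Minimal sketch of the changing criterion (CC) logic described above.
# The advancement rule and the daily scores are hypothetical illustrations.

def advance_criterion(scores, start_criterion, step=1, days_to_advance=3):
    """Raise the performance criterion by `step` each time the student
    meets it for `days_to_advance` consecutive days; return the history
    as (day, criterion, score, met) tuples."""
    criterion, streak, history = start_criterion, 0, []
    for day, score in enumerate(scores, start=1):
        met = score >= criterion
        history.append((day, criterion, score, met))
        streak = streak + 1 if met else 0
        if streak == days_to_advance:  # "up the ante"
            criterion += step
            streak = 0
    return history

# Hypothetical warm-up scores in which the student tracks each new criterion.
for day, criterion, score, met in advance_criterion([1, 1, 1, 2, 2, 2, 3, 3, 3],
                                                    start_criterion=1):
    print(f"Day {day}: criterion={criterion}, completed={score}, met={met}")
```

In practice, of course, the educators rather than an algorithm judge when a student is ready for the criterion to change; the sketch only makes the stepwise rule explicit.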

In conclusion, professionals can utilize the CC strategy when it helps students achieve long-term goals via a series of short-term objectives. Indeed, for some situations, a series of repeated, minor improvements is more likely to produce meaningful, long-term outcomes compared to strategies that demand, but are unlikely to achieve, large immediate changes.


The Distributed Criterion Design

The DC is a very recent innovation in small-N research. This approach is particularly suited to empirical investigations of performance across time in which individuals allocate time to multiple tasks in ways that mesh with changing environmental demands (McDougall, in press). The DC is a combined design that utilizes elements of the classic CC, reversal, and multiple baseline designs. Like the CC, the DC typically includes a baseline phase followed by a series of intervention phases, each of which has a distinctive performance criterion. In contrast to the CC design, however, the DC is most useful for evaluating interventions that aim to change concurrently--gradually or abruptly in two directions--a single behavior across multiple contexts, or multiple related behaviors in a single context. Table 1 summarizes key similarities and differences between the DC and classic CC research designs.

Applying the Distributed Criterion Design in Counseling Research and Practice: Case 1

In our first hypothetical example, a school counselor and a classroom teacher used a multicomponent intervention to increase dyad and group play, and decrease isolate activities, of a withdrawn and socially inept third grader, Kainoa, during recess periods. These two professionals collaborated after they saw that Kainoa (a) usually stood at least 50 meters away from classmates while he watched them play in groups (e.g., soccer) or in dyads (e.g., ball toss); (b) appeared to be "daydreaming" some of the time; and (c) placed his face too close to classmates' faces during those infrequent times when he approached peers and spoke to them. Kainoa's school counselor helped him set explicit performance criteria to increase his dyad play and group play and to decrease his isolate activity (goal setting). Additionally, Kainoa's teacher showed him how to use the alarm on his watch to remind himself how much time he was supposed to spend in isolate activities, dyad play, and group play, while on the playground (audio-cued self-monitoring). Finally, the counselor used role playing, modeling and imitation, practice, and feedback to help Kainoa and his peers learn where to stand, what to say, and how to begin and continue playing in dyads and groups (social skills training).

How to distribute one total criterion across multiple activities. The criterion for Kainoa's total engaged time within and across all intervention phases was fixed at a mean of 30 minutes per day, while criteria for each of three particular activities varied. That is, the total criterion of 30 minutes (the duration of recess each day) was distributed across three types of activities (isolate, dyad, and group) in a manner consistent with the duration of daily recess periods, the multitasking nature of the target behaviors, and the long-term intervention goal of having Kainoa play with peers instead of isolating himself during recess. Engagement or performance criteria for particular activities were shifted in accordance with Kainoa's readiness and need to devote varying amounts of time to isolate activity, dyad play, and group play.

The counselor changed intervention phases (i.e., shifted performance criteria) based on Kainoa's mastery of short-term objectives (STOs). STOs corresponded logically and sequentially to each of the intervention phase labels (i.e., performance criteria) in Figure 2. As depicted in Figure 2, the label for the first intervention phase (20'-10'-0') reflects the performance criterion for Kainoa's initial STO (i.e., reduce isolate activity to 20 minutes; increase dyad play to 10 minutes; and keep, for the time being, group play at 0 minutes). See Table 2 for further details.
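The DC's defining constraint can be pictured as a small data structure: each phase reallocates the same fixed total across activities. The sketch below uses the phase values from the labels described above; the Python representation itself is our illustration, not part of the design.

```python
# Sketch of the DC's defining constraint for Case 1: each intervention phase
# redistributes a fixed 30-minute recess total across three activities.
# Phase values follow the phase labels described for Figure 2.

TOTAL_MINUTES = 30  # duration of each daily recess period

phases = [
    {"isolate": 20, "dyad": 10, "group": 0},   # phase 1: 20'-10'-0'
    {"isolate": 10, "dyad": 20, "group": 0},   # phase 2: 10'-20'-0'
    {"isolate": 10, "dyad": 10, "group": 10},  # phase 3: 10'-10'-10'
    {"isolate": 0,  "dyad": 15, "group": 15},  # phase 4: 0'-15'-15'
]

for i, phase in enumerate(phases, start=1):
    # Individual criteria shift, but every phase distributes the same total.
    assert sum(phase.values()) == TOTAL_MINUTES
    print(f"Phase {i}: {phase}")
```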


Demonstrating experimental control. Baseline (pre-intervention) data in Figure 2 indicate that Kainoa engaged in isolate activities on the playground for 20, 30, and 30 minutes during days 1, 2, and 3, respectively. However, Kainoa reduced his isolate activities during the first phase of the intervention to an average of 20 minutes per day, and he did so in a way that matched exactly the pre-stipulated daily performance criterion of 20 minutes. During the subsequent intervention phases, Kainoa continued to meet--very precisely, consistently, and punctually--the pre-stipulated performance criteria for each intervention phase. During the fourth and final intervention phase, Kainoa no longer engaged in isolate activities (mean = 0 minutes). Thus, the intervention appears to demonstrate strong experimental control over the duration of Kainoa's isolate activities. Likewise, the intervention appears to show strong control over the duration of Kainoa's dyad play and group play, as evidenced by systematic changes (typically increases) that occurred in conjunction with the precise days when changes in intervention phases were instituted. For example, during each of the three recess periods depicted in the fourth and final intervention phase, the duration of Kainoa's dyad play was 15 minutes, as was his group play.

Overall, the patterns of play behavior graphed in Figure 2 (i.e., repeated instances, or replications, of systematic increases and decreases in accordance with changing criteria across multiple intervention phases) suggest that the intervention exerted strong experimental control over Kainoa's behaviors during recess. Only during day 17 did Kainoa's performance fail to adhere precisely to pre-stipulated criteria.

Considerations in Using the Distributed Criterion Research Design

One outstanding feature of the DC is its extensive yet elegant capacity to demonstrate experimental control. Four design elements contribute to this valuable feature. First, the design provides numerous opportunities for researchers to replicate experimental control. Second, the design requires both sequential (i.e., across adjacent phases) and concurrent (i.e., across behaviors or contexts) changes in the target behavior as a function of intervention. Third, the design requires bidirectional changes in target behaviors (i.e., both increases and decreases). Fourth, like the classic CC, the DC allows users and consumers to evaluate intervention efficacy (i.e., experimental control) based, in large part, on the degree to which the target behavior conforms to performance criteria stipulated for various intervention phases.

Sequential and concurrent replications. Consider the extensive number of replications derived from sequential and concurrent comparisons of graphed data, from the various phases, in Figure 2. Sequentially derived comparisons involve changes in the target behavior across adjacent phases for one type of activity. For isolate activity, we first evaluate intervention impact by comparing performance from the initial baseline to the initial intervention phase. Three additional "adjacent-phase" comparisons are then available for isolate activity: (a) initial intervention phase labeled 20-10-0 vs. second intervention phase labeled 10-20-0; (b) second intervention phase labeled 10-20-0 vs. third intervention phase labeled 10-10-10; and (c) third intervention phase labeled 10-10-10 vs. fourth intervention phase labeled 0-15-15. Further inspection of Figure 2 reveals that additional comparisons of data from adjacent phases are available for dyad play (n = 4 comparisons) and for group play (n = 4 comparisons).
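The bookkeeping behind this count of sequential replications can be made explicit in a few lines. The phase labels below follow the hypothetical Figure 2 example (written without minute marks); the enumeration is our illustration of the arithmetic, not a procedure from the design literature.

```python
# Enumerating the sequential (adjacent-phase) comparisons available in the
# Case 1 example: a baseline plus four intervention phases yields four
# adjacent phase pairs for each of the three recess activities.

phase_labels = ["baseline", "20-10-0", "10-20-0", "10-10-10", "0-15-15"]
activities = ["isolate", "dyad", "group"]

comparisons = [
    (activity, phase_labels[i], phase_labels[i + 1])
    for activity in activities
    for i in range(len(phase_labels) - 1)
]
print(len(comparisons))  # 3 activities x 4 adjacent pairs = 12 comparisons
```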

In addition to numerous sequential comparisons, the DC also permits and requires investigators to evaluate concurrent changes in the target behavior across contexts or behaviors. After baseline, the DC design called for a series of intervention phases in which performance criteria changed concurrently for multiple types of recess activities. Rather than staggering changes in the intervention conditions (i.e., the performance criteria) across multiple behaviors or contexts, as in a classic multiple baseline design, investigators changed concurrently the performance criteria for two or three recess activities. At such points, the case for experimental control depends largely upon the degree to which the participant's behavior changes in accordance with these multiple, concurrent changes in performance criteria. For example, in Figure 2, examining vertically the data for days 14 and 15 (i.e., the last phase change), we ask: To what extent did Kainoa's behavior conform to concurrently implemented changes in performance criteria when the criteria for (a) isolate activity decreased from 10 to 0 minutes, (b) dyad play increased from 10 to 15 minutes, and (c) group play increased from 10 to 15 minutes? Did Kainoa's behavior conform to changes in performance criteria for one, two, or all three of the recess activities?

The case for experimental control is robust when actual performance conforms very closely and very punctually to each and every concurrent change in performance criteria. In this first hypothetical case, the intervention appears to be very effective because the data in Figure 2 illustrate how the intervention demonstrated strong control over the target behavior. We refer readers to Kazdin (1982) for additional explanations and illustrations on using visual inspection criteria to analyze graphed data to evaluate experimental control of interventions over target behaviors.

A unique advantage of the DC. As illustrated in Figure 2, the DC accommodates bidirectional changes in performance criteria and temporary reversions of performance criteria to prior levels. These options afford school counseling investigators additional opportunities, if necessary, to demonstrate experimental control. Notably, investigators can use these options without encountering most of the ethical and practical concerns inherent in designs such as the reversal, which temporarily removes intervention conditions, or the classic CC, which sometimes temporarily shifts the criterion in the direction opposite the overall intervention goal. In the classic CC design, when investigators temporarily revert performance criteria in the opposite-of-usual direction, the criterion change always runs counter to the desirable direction stipulated by the intervention. For example, a CC approach might seek initially to reduce, in a stepwise fashion, the number of cigarettes smoked each day from 20 per day, to 18 per day, to 16 per day, and so forth. At some point, the investigator may revert temporarily to a higher criterion in order to establish a more convincing case for experimental control. If smoking behavior conforms to this temporary higher criterion, experimental control is strengthened, particularly if, during subsequent intervention phases, smoking conforms again to stepwise reductions in the criterion.

Unfortunately, such temporary reversions in the criterion are always in a direction that is opposite of the overall goal of the intervention. Indeed, in this example, raising the criterion for number of cigarettes smoked presents ethical concerns and conflicts with the intervention goal of reducing smoking. However, in a DC, changes in performance criteria in the opposite-of-usual direction are rarely, if ever, counter to the intervention goal. The reason for this is that the participant reallocates time devoted previously to one task (e.g., dyad play) to other useful tasks (e.g., group play), while the overall criterion remains constant (i.e., 30 minutes per recess period). Thus, the DC allows investigators to bolster experimental control by demonstrating bidirectional changes--both increases and decreases--in target behaviors.
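The contrast between the two kinds of reversion can be sketched numerically. The CC values mirror the smoking example above, and the DC values mirror a shift of recess time from dyad play back toward other play; both are hypothetical illustrations.

```python
# Contrast between reversions in the two designs, as discussed above.

# CC: the temporary reversion (16 -> 18 cigarettes/day) runs counter to
# the overall reduction goal.
cc_criteria = [20, 18, 16, 18, 16, 14]

# DC: a "reversion" merely reallocates time between useful tasks; the fixed
# 30-minute total is preserved, so no change runs counter to the overall goal.
dc_before = {"isolate": 10, "dyad": 20, "group": 0}
dc_after = {"isolate": 10, "dyad": 10, "group": 10}  # dyad time shifted to group

assert sum(dc_before.values()) == sum(dc_after.values()) == 30
print("DC reversion preserves the fixed 30-minute total.")
```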

Applying the Distributed Criterion Design in Counseling Research and Practice: Case 2

Our second hypothetical case illustrates how to apply the DC for an individual with anger management and self-regulation issues. More specifically, this case illustrates how practitioners might use the DC, as a strategy, to prioritize and manage time when faced with numerous multitasking demands. In this case, teachers have referred Jontuna, a fifth-grade boy, to the school counselor because of three concerns--frequent fights, a short temper, and hostile attitudes toward peers. The counselor's caseload and Jontuna's academic schedule limit direct counseling time to two 30-minute sessions per week. The counselor develops an anger management intervention to address each of the three concerns voiced by Jontuna's teachers.

Due to frequent fights and serious concerns about the welfare of Jontuna and his peers, the counselor immediately targets this aggressive behavior. In fact, the counselor and Jontuna initially devote all of their counseling time (i.e., 60 minutes per week) to address fighting. Information from Jontuna's teachers indicates that he fights with peers about three times per week and that these fights occur on the playground. Therefore, the counselor establishes a goal of reducing the number of fights from three per week to zero. The intervention teaches Jontuna to use effective conflict resolution skills, including perspective taking, problem solving, and nonviolent communication skills (cf. Resolving Conflict Creatively Program; DeJong, 1999).

As the frequency of Jontuna's fights decreases and reaches zero occurrences per week, Jontuna and the counselor reduce time devoted to the first goal and initiate an intervention for the second goal. That is, the counselor recognizes that Jontuna still experiences frequent and intense bouts of anger. Therefore, the counselor reduces from 60 to 15 minutes the amount of counseling time devoted to the first goal (i.e., to reduce fights) and begins to devote 45 minutes to Jontuna's second goal (i.e., to regulate angry feelings). The intervention helps Jontuna recognize physical cues associated with intense anger, including muscle tension, elevated heart rate, and rapid breathing. In time, Jontuna learns via systematic desensitization how to regulate his temper with progressive muscle relaxation and deep breathing (cf., Rimm & Masters, 1987).

Next, the school counselor reduces time allocated to the second goal because observations indicate that Jontuna has nearly mastered temper regulation skills. Essentially, Jontuna needs only "brush-up" sessions to maintain these skills (10 minutes per week). Moreover, the counselor reduces further the amount of time allocated previously to the first goal. Consequently, the counselor now devotes nearly all time during weekly counseling sessions to the third goal (i.e., to improve hostile attitudes toward peers). The counselor recognizes that distorted beliefs underlie Jontuna's anger and aggressive behaviors. More specifically, Jontuna attributes hostile intent to peers, particularly during playground activities and games. The counselor initiates a cognitive restructuring program (Beck, 1997; Sacco & Beck, 1995). This program helps Jontuna to recognize some of his irrational beliefs, formulate more rational beliefs, and replace antisocial cognitions with prosocial cognitions. In addition, Jontuna becomes more adept at taking on peers' perspectives. In time, Jontuna's attitudes toward peers become less hostile.
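The reallocation of counseling time across these three goals can be sketched the same way as the recess example. The 5-minute figure for the first goal in the final block is an illustrative assumption; the narrative specifies only that this time is "reduced further."

```python
# Sketch of how the counselor's fixed 60 weekly minutes are redistributed
# across Jontuna's three goals as mastery progresses. Values are illustrative
# readings of the case narrative, and the 5-minute figure is assumed.

WEEKLY_MINUTES = 60  # two 30-minute sessions per week

allocation_phases = [
    {"fighting": 60, "anger": 0,  "attitudes": 0},   # all time on goal 1
    {"fighting": 15, "anger": 45, "attitudes": 0},   # shift toward goal 2
    {"fighting": 5,  "anger": 10, "attitudes": 45},  # mostly goal 3 (5' assumed)
]

for i, phase in enumerate(allocation_phases, start=1):
    assert sum(phase.values()) == WEEKLY_MINUTES  # the fixed weekly total
    print(f"Block {i}: {phase}")
```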

This second hypothetical case study illustrates how professional school counselors can use the DC, as a strategy, to plan, prioritize, and reallocate their efforts and time. The DC strategy acknowledges that many counselors must multitask their duties and overcome scheduling challenges on a daily basis. That is, this strategy encourages counselors to be "planful" and purposeful when they allocate their professional time and skills. The strategy also accommodates counselors' needs to manage unplanned events that arise and which merit immediate attention. Additionally, as a research design, the DC could be applied to evaluate the effectiveness of counseling time and efforts directed toward Jontuna. Indeed, in this case, a researcher might apply the DC design to evaluate how improvements in the three specified problem areas correspond to changes in how counseling time is allocated.

Summary of the Distributed Criterion Research Design

The DC, a combined research design that incorporates elements of the multiple baseline, reversal, and CC designs, permits researchers and practitioners to evaluate experimental control via numerous yet elegant replications. We believe that one major advantage of the DC is that it permits researchers to investigate questions of interest in multitasking contexts--"real-world" challenges that confront students and counselors on a daily basis. During the 1960s, when small-N designs emerged, most designs were suited to investigations that targeted single discrete behaviors. Over the past four decades, small-N designs have become more sophisticated. The recent introduction of the DC expands the range of options available to researchers. The design accommodates circumstances that require individuals to juggle multiple, concurrent, and overlapping tasks.

Oftentimes, school counselors and students must perform under conditions that limit or fix the amount of time and resources they can allocate to tasks. They must prioritize, allocate, adjust, and readjust their efforts to perform these tasks in efficient ways, particularly when circumstances dictate that tasks change, abate, or emerge. The DC design is an innovative small-N design that researchers can apply to evaluate experimental control in intervention studies that target performance in ever-changing, multitasking contexts. Likewise, the DC strategy is a management and planning tool for practitioners, such as counselors, who must distribute and redistribute their time and their ongoing efforts to improve not only specific multiple behaviors, but also the whole person.


By virtue of its many design requirements, the DC is more complex than more versatile small-N research designs, especially the multiple baseline. Recall that the DC incorporates elements of the CC, multiple baseline, and reversal designs. Consequently, in order to maximize experimental control and minimize threats to internal validity, researchers must adapt guidelines from the three aforementioned designs based on nuances that arise when they integrate these three individual designs into the one combined design.


We hope that this article expands awareness, understanding, and use of the DC design among researchers and practitioners in professional school counseling. The DC expands the number and type of credible small-N designs for evaluating intervention efficacy in school counseling and other disciplines that promote individuals' improvement over time. The design is useful for investigating the impact of goal setting, self-management, and other interventions that incorporate student input. We believe that when researchers and practitioners apply the DC in professional school counseling, they will further advance the scientific status of small-N research and the professional status of their discipline. Consequently, we recommend that researchers and practitioners consult definitive sources on small-N research designs, as well as resources that describe effective strategies, interventions, and programs (cf. Alberto & Troutman, 1999; Barlow & Hersen, 1984; Johnston & Pennypacker, 1993; Kazdin, 1982; Kottler, 2001; Kratochwill, 1978; McDougall, 1998; McDougall, in press; Poling & Fuqua, 1986; Tawney & Gast, 1984; Watson & Tharp, 2002).


References

Alberto, P. A., & Troutman, A. C. (1999). Applied behavior analysis for teachers (5th ed.). Columbus, OH: Merrill.

Baer, D. M., Wolf, M. M., & Risley, T. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.

Barlow, D. H., & Hersen, M. (1984). Single-case experimental design: Strategies for studying behavior change (2nd ed.). New York: Pergamon.

Beck, A. T. (1997). The past and future of cognitive therapy. Journal of Psychotherapy Practice and Research, 6, 276-284.

Cowan, R. J., Hennessey, M. L., Vierstra, C. V., & Rumrill, P. D. (2004). Small-N designs in rehabilitation research. Journal of Vocational Rehabilitation, 20, 203-211.

DeJong, W. (1999). Building the peace: The Resolving Conflict Creatively Program. Washington, DC: National Institute of Justice.

Flood, W. A., & Wilder, D. A. (2004). The use of differential reinforcement and fading to increase time away from a caregiver in a child with separation anxiety disorder. Education and Treatment of Children, 27, 1-8.

Foster, L. H., Watson, T. S., Meeks, C., & Young, J. S. (2002). Single-subject research design for school counselors: Becoming an applied researcher. Professional School Counseling, 6, 146-154.

Foxx, R. M., & Rubinoff, A. (1979). Behavioral treatment of caffeinism: Reducing excessive coffee drinking. Journal of Applied Behavior Analysis, 12, 335-344.

Glynn, E. J., Thomas, J. D., & Shee, S. M. (1973). Behavioral self-control of on-task behavior in an elementary classroom. Journal of Applied Behavior Analysis, 6, 105-113.

Gorski, J. A. B., & Westbrook, A. C. (2002). Use of differential reinforcement to treat medical non-compliance in a pediatric patient with leukocyte adhesion deficiency. Pediatric Rehabilitation, 5, 29-35.

Hall, R.V. (1971). Managing behavior: Behavior modification, the measurement of behavior. Lawrence, KS: H & H Enterprises.

Hall, R. V., & Fox, R. G. (1977). Changing criterion designs: An alternative applied behavior analysis procedure. In B. C. Etzel, J. M. LeBlanc, & D. M. Baer (Eds.), New developments in behavioral research: Theory, method, and application (pp. 151-166). Hillsdale, NJ: Erlbaum.

Hartmann, D. P., & Hall, R. V. (1976). The changing criterion design. Journal of Applied Behavior Analysis, 9, 527-532.

Johnston, J. M., & Pennypacker, H. S. (1993). Strategies and tactics of behavioral research (2nd ed.). Hillsdale, NJ: Erlbaum.

Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press.

Kottler, J. A. (2001). Making changes last. Philadelphia: Brunner-Routledge.

Kratochwill, T. R. (1978). Single subject research. New York: Academic Press.

Lapan, R. T. (2005). An editor's top ten wish list. Professional School Counseling, 8(5), ii-iv.

McDougall, D. (1998). Research on self-management techniques used by students with disabilities in general education settings: A descriptive review. Remedial and Special Education, 19, 310-320.

McDougall, D. (2005). The range-bound changing criterion design. Behavioral Interventions, 20, 129-137.

McDougall, D. (in press). The distributed criterion design. Journal of Behavioral Education.

McDougall, D., Hawkins, J., Brady, M. P., & Jenkins, A. (in press). Recent innovations in the changing criterion design: Implications for research and practice in special education. Journal of Special Education.

McDougall, D., Smith, G., Black, R., & Rumrill, P. (2005). Recent innovations in small-N designs for rehabilitation research: An extension of Cowan, Hennessey, Vierstra, and Rumrill. Journal of Vocational Rehabilitation, 23, 197-205.

Poling, A., & Fuqua, R. W. (1986). Research methods in applied behavior analysis: Issues and advances. New York: Plenum.

Rimm, D. C., & Masters, J. C. (1987). Behavior therapy: Techniques and empirical findings. New York: Academic Press.

Sacco, W. A., & Beck, A. T. (1995). Cognitive theory and therapy. In E. E. Beckham & W. Leber (Eds.), Handbook of depression (pp. 329-351). New York: Guilford.

Sidman, M. (1960). Tactics of scientific research. New York: Basic.

Skinner, C. H., Skinner, A. L., & Armstrong, K. J. (2000). Analysis of a client-staff-developed shaping program designed to enhance reading persistence in an adult diagnosed with schizophrenia. Psychiatric Rehabilitation Journal, 24, 52-57.

Tawney, J., & Gast, D. (1984). Single subject research in special education. Columbus, OH: Merrill.

Watson, D. L., & Tharp, R. G. (2002). Self-directed behavior: Self-modification for personal development. Belmont, CA: Wadsworth.

Weis, L., & Hall, R. V. (1971). Modification of cigarette smoking through avoidance of punishment. In R. V. Hall (Ed.), Managing behavior: Behavior modification applications in school and home (pp. 77-102). Lawrence, KS: H & H Enterprises.

Dennis McDougall and Douglas Smith are associate professors of special education at the University of Hawaii, Honolulu. E-mail:
Table 1. Changing Criterion Design vs. Distributed Criterion Design

Number of target behaviors or contexts
   Changing criterion: N = 1, in one context.
   Distributed criterion: N > 1, or one target behavior is performed in > 1 context.

Design typically applied as--
   Changing criterion: Single design, sometimes with one phase that temporarily reverts the performance criterion.
   Distributed criterion: Combined design with changing criterion, multiple baseline, and reversal features.

Number and duration of baseline phases
   Changing criterion: One, usually brief.
   Distributed criterion: > 1, usually brief.

How criteria are applied
   Changing criterion: One criterion at a fixed value across all sessions within an intervention phase. The criterion shifts in step-wise manner across successive phases, for a single target behavior, in a single context, in one direction--either increases or decreases, but not in both directions.
   Distributed criterion: Overall criterion is constant across all sessions in all, or nearly all, intervention phases. Criteria are distributed--across multiple individual behaviors or contexts across various intervention phases--in two directions, increase and decrease.

Amenable to interventions that--
   Changing criterion: Shape behavior via small changes in performance criteria. Use differential reinforcement of higher/lower rates of behavior, goal setting, and behavioral self-management (Glynn, Thomas, & Shee, 1973).
   Distributed criterion: Change behaviors via small or large changes in performance criteria. Multitask, prioritize, and reallocate time based on schedule demands and due dates. Use goal setting and behavioral self-management (McDougall, in press).

Table 2. Short-Term Objectives Correspond to Performance Criteria for
Types of Recess Activities During Successive Intervention Phases

Performance criteria = number of minutes per day the student is
expected to engage in each activity.

Short-Term   Intervention   Phase               Isolate               Dyad                  Group
Objective    Phase          Label               Activity              Play                  Play

None         --             Baseline            None                  None                  None
1            1              20(I)-10(D)-0(G)    Reduce to 20'/day     Start at 10'/day      0'/day
2            2              10(I)-20(D)-0(G)    Reduce to 10'/day     Increase to 20'/day   0'/day
3            3              10(I)-10(D)-10(G)   Maintain at 10'/day   Reduce to 10'/day     Start at 10'/day
4            4              0(I)-15(D)-15(G)    Reduce to 0'/day      Increase to 15'/day   Increase to 15'/day

Note. In the "Phase Label" column, (a) the letters in parentheses (I,
D, G) identify each of three types of recess activities--isolate
activity, dyad play, and group play, respectively; and (b) the
corresponding numbers specify the performance criterion, in minutes,
for each activity within various phases. During baseline, no explicit
performance standards existed and the student engaged habitually in
isolate activities.
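The defining property of the DC design in Table 2 is that the overall criterion (30 minutes of daily recess) stays constant while the per-activity criteria are redistributed across phases. As an informal illustration only (the data structure and names below are hypothetical, not from the article), a short script can confirm that each phase's criteria sum to the same overall total:

```python
# Illustrative sketch of Table 2's distributed criterion (DC) phases:
# the overall criterion stays constant while per-activity criteria shift.

# Minutes per day the student is expected to engage in each recess
# activity, keyed by the phase labels used in Table 2.
phases = {
    "20(I)-10(D)-0(G)":  {"isolate": 20, "dyad": 10, "group": 0},
    "10(I)-20(D)-0(G)":  {"isolate": 10, "dyad": 20, "group": 0},
    "10(I)-10(D)-10(G)": {"isolate": 10, "dyad": 10, "group": 10},
    "0(I)-15(D)-15(G)":  {"isolate": 0,  "dyad": 15, "group": 15},
}

OVERALL_CRITERION = 30  # total recess minutes per day, constant across phases

def criteria_are_distributed(phases, overall):
    """True if every phase's per-activity criteria sum to the overall criterion."""
    return all(sum(minutes.values()) == overall for minutes in phases.values())

print(criteria_are_distributed(phases, OVERALL_CRITERION))  # prints True
```

The check captures what distinguishes the DC design from a simple changing criterion: individual criteria move in both directions across phases, but their distribution always exhausts the same fixed resource (here, recess time).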
COPYRIGHT 2006 American School Counselor Association
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2006, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.

Article Details
Author: Smith, Douglas
Publication: Professional School Counseling
Geographic Code: 1USA
Date: Jun 1, 2006

