
Single-subject research design for school counselors: becoming an applied researcher.

During the past decade, there has been considerable discussion of both the need for outcome research documenting the work of school counselors and the scarcity of research delineating that work (Baker, 2000; Fairchild, 1993; Green & Keys, 2001; Myrick, 1990; Paisley & McMahon, 2001; Rhyne-Winkler & Wooten, 1996; Vacc & Rhyne-Winkler, 1993; Whiston & Sexton, 1998). Outcome research supporting the work of school counselors is increasingly in demand as the public wants to know how public education funds are spent on school counseling services and whether those services are effective (Burnham & Jackson, 2000; Looney, 1998; Lusky & Hayes, 2001; Rhyne-Winkler & Wooten). The addition of new school counseling positions and the continuation of school counseling programs in public and private education have prompted efforts to justify funding and retention of school counselors as necessary components of the educational system (Whiston & Sexton). Funding of school counseling programs is of particular concern because public education frequently must contend with decreased federal, state, and local funding, which translates into reductions in programs, services, and personnel (Lenhardt & Young, 2001; Otwell & Mullis, 1997). Empirical research can not only help justify funding but also serve as a tool for program evaluation to establish and maintain an effective school counseling program. Enhancement of the professional viability of school counselors is another benefit of empirical research (Borders & Drury, 1992; Fairchild; Lenhardt & Young; Paisley & McMahon). A clear need exists for research on school counselors and their interventions, but it is unclear who is responsible for conducting this research. It has been suggested that school counselors themselves should accept the challenge to provide the needed accountability data (Johnson, 2000; Otwell & Mullis; Paisley & McMahon). Moreover, Lenhardt and Young proposed that advocacy, public relations, and marketing could strengthen the profession, but only if they are grounded in accountability efforts that include measuring student outcomes and conducting action research.

Throughout the history of the school counseling profession, the school counselor has provided a wide range of services and interventions, evolving from the early vocational emphasis to a mental health model in the middle years and, finally, to the current emphasis on comprehensive developmental guidance programs (Gysbers & Henderson, 2000). School counselors deliver many different interventions throughout a school day but often do not maintain adequate records documenting their impact on students (Borders, 2002; Kuranz, 2002; Whiston, 2002). School counselors are being encouraged to describe and define their work through the use of outcome data (Borders; Sears & Granello, 2002). Whiston further suggested that without sufficient evidence or documentation of the positive effects of school counselors' interventions, the profession is in jeopardy.

The lack of outcome research regarding school counselors and school counseling programs is well documented; a perusal of the extant professional literature indicates that school counselors have been negligent in evaluating, documenting, and communicating evidence of their effectiveness (Green & Keys, 2001; Lusky & Hayes, 2001; Myrick, 1997; Paisley & Borders, 1995; Rhyne-Winkler & Wooten, 1996; Whiston & Sexton, 1998). School counselors have been slow to accept responsibility for researching their skills and the effectiveness of school counseling programs (Myrick, 1990, 1997; Paisley & McMahon, 2001). Reasons for this lack of research vary from insufficient knowledge of assessment practices to ethical considerations regarding confidentiality. Time constraints, lack of funds, and lack of confidence to conduct a thorough assessment are other possible reasons (Fairchild, 1993; Fairchild & Zins, 1986; Myrick, 1997). It has been suggested, however, that the most significant barrier to school counselors conducting outcome research is unfamiliarity with research strategies and methods (Fairchild; Fairchild & Seeley, 1995; Lusky & Hayes; Myrick, 1997; Whiston & Sexton).

School counselors must be able and willing to provide evidence of their effectiveness to students, teachers, administrators, and parents (Fairchild, 1993). Learning to conduct outcome research provides a valuable tool for examining and documenting school counselors' effectiveness with students (Borders & Drury, 1992; Gysbers & Henderson, 2000). School counseling programs focus on student mastery of competencies and are designed to facilitate student success. An examination of school counseling interventions may help answer the school counselor's question: "Do I make a difference in the lives of the children with whom I work?" Myrick (1990) likewise questioned whether the school counselor's intervention makes a difference. The application of basic operant research principles can provide the school counselor with a systematic way to demonstrate how interventions affect students (Rhyne-Winkler & Wooten, 1996). A simple, easy-to-use research method focuses on one student and analyzes the resulting information to assist both the counselor and the student. School counselors can readily take advantage of the single-subject research paradigm to conduct outcome research and answer questions regarding the effectiveness of their interventions.

Single-subject research design has been described as any research involving one subject, or one group that is treated as a single entity (Hittleman & Simon, 1997). Using repeated observations, the effect of an intervention can be established. The researcher (school counselor) repeatedly measures behavior during baseline and treatment conditions for an individual student or single group in order to determine whether an intervention is effective in helping the student or students become successful in the classroom. Borders and Drury (1992) described a single-case study as focusing on one student in order to document counseling interventions and the student's progress toward target behaviors. Outcome research, as defined by Fairchild (1993), involves gathering information about behavioral changes in a student that occur after an intervention by the school counselor. The use of single-subject outcome research is increasing in other applied fields (e.g., special education, school psychology, child clinical psychology, pediatric psychology) for many reasons, including the low cost of gathering data, the absence of complicated statistics, and the ability to individualize interventions for each child (Polaha & Allen, 1999). Single-subject research design has also been called the "best kept secret" in counseling and assists the counselor in developing techniques that are effective for individuals (Lundervold & Belwood, 2000). Single-subject research designs are gaining recognition because of their emphasis on the individual student rather than on the group assessments typically used to measure student success (Gresham, 1998; Myrick, 1990).

Barlow and Hersen (1984) noted that one of the advantages of single-case research designs is that they are the best designs for isolating the cause of behavior change and for determining which treatment procedures result in the most effective and efficient behavior change. Other advantages include (a) recognizing individuality among subjects and (b) evaluating the effects of a treatment immediately, thereby allowing for adjustments in treatment techniques that more readily isolate cause and effect and improve outcomes. Several researchers contend that group research designs can be misleading and that single-case research reveals more accurate findings concerning change in individuals, findings that are often obscured by group results (Gresham, 1998; Lundervold & Belwood, 2000; Myrick, 1997). It is through single-case studies that school counselors can measure the effectiveness of interventions with students.

Gresham (1998) compared single-case and group design methodologies to illustrate the strengths and weaknesses of each approach. First, the single-case study provides the researcher with a clear view of the unique effects of a counseling intervention on a student. As with any research design, threats to the validity of a study arise from uncontrolled variables influencing the experiment. In group designs, random assignment of participants to either a control group or an experimental group is used to minimize most threats to validity. Single-subject research, by contrast, reduces threats to validity by comparing the individual student's behavior under baseline and treatment conditions, so that the student serves as his or her own control. Second, single-subject research methods can provide much more information about an individual child than a large group design. The school counselor, as a researcher, may be interested in a single student in order to determine the effectiveness of interventions on problematic behavior. Single-subject research can show change or its absence within each individual, rather than examining the group as a whole. In counseling, the ultimate barometer of success is whether an individual's behavior changed (i.e., single-subject designs), not whether the average behavior of a group improved (i.e., group designs).

It has been noted that one reason school counselors do not conduct outcome research is their lack of familiarity with research methods (Fairchild, 1993; Fairchild & Seeley, 1995; Fairchild & Zins, 1986; Myrick, 1990, 1997). Therefore, this article attempts to help school counselors understand outcome research by describing, in a tutorial format, relatively simple procedures for evaluating their interventions. It offers the school counselor an easy-to-use guide to understanding and conducting single-case research. Four single-subject research designs are presented, each addressing typical situations in which school counselors may be called upon to collaborate, consult, or intervene. Also included are examples of how to plan and implement a single-case research study in a typical classroom setting. Planning and implementing outcome research similar to these examples will help school counselors document their accountability efforts and provide evidence of the effectiveness of their interventions with students.

A-B DESIGN

Description

The simplest method for determining whether a student's behavior has changed because of an intervention is the A-B research design, which allows for comparison of behavior before and after treatment. The baseline period is referred to as "A." This is the period in which no intervention or treatment is applied to the student. Data collection occurs during this time to measure the frequency, intensity, and/or duration of the student's problematic behaviors (e.g., how many minutes a student is out of his seat during a 20-minute observation). The period when the intervention is applied to the student is referred to as the "B" phase. Although data collected during the baseline and treatment periods are compared to assess improvement in some aspect of the student's functioning, the A-B research design does not demonstrate causation or a functional relationship between the intervention and any changes in behavior that occur. In other words, one can demonstrate that a change in behavior has occurred but cannot "prove" that the counselor's intervention is responsible for the observed change, because other extraneous conditions may have influenced the student's behavior (e.g., a change in the teacher's responses to the behavior, a second treatment that was unintentionally applied, recovery from an illness). Without replicating the intervention a second time, the counselor cannot determine whether the behavior changed solely because of the intervention or whether another condition affected the outcome (Miltenberger, 1997; Polaha & Allen, 1999). Nevertheless, the practicality and usefulness of this design make it invaluable for the counselor in documenting change in a student's behavior.

A-B designs are most appropriate in situations where a return to the baseline condition is unethical, infeasible, or undesired by the teacher and/or counselor. For example, if a student were exhibiting severe aggressive behaviors such as biting, hitting, or kicking others, it would be unethical to withdraw an effective treatment simply to assure oneself that the intervention was effective. Similarly, it would be infeasible to remove an academic strategy that a student has learned and incorporated into his or her academic repertoire (e.g., phonics). In treating behaviors that disrupt the learning environment (e.g., talking out, being out of seat), the teacher or counselor might find withdrawing treatment undesirable.

APPLICATION OF A-B DESIGN

One example in which an A-B design would prove especially useful is the case of a school counselor who is approached by a teacher about a student who is hitting classmates. Because hitting others is a highly aggressive behavior, there may be an immediate need to implement treatment to ensure the safety of the student, his or her classmates, and the teacher. In such a case, baseline data could be collected during a class period in which the 50-minute class is divided into two 20-minute observation periods separated by a 10-minute break. Another viable alternative would be to use archival data the teacher has previously collected and to start treatment immediately. A typical treatment may involve providing some type of positive reinforcement for the absence of aggressive behavior (a procedure called "differential reinforcement of other behavior") at predetermined intervals (typically every 5 or 10 minutes). The length of the intervals should gradually increase as the student demonstrates the ability to go an entire interval without displaying aggressive behavior. Figure 1 presents an A-B design in which aggressive behavior was decreased using a differential reinforcement procedure.

[Figure 1 omitted]

After implementing treatment, the teacher or counselor should continue to collect data during the "B" phase. Because there is no withdrawal of treatment (i.e., no return to the baseline phase), one cannot conclude that the treatment alone produced the changes in behavior; other extraneous variables may have affected the outcome. Follow-up or maintenance data may also be collected at a later time to assess the durability of treatment effects.
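For counselors who keep their observation counts electronically, a brief script can summarize and graph A-B data in the same form as Figure 1. The following Python sketch is offered only as an illustration and is not part of the procedure described above; the incident counts, phase lengths, and labels are hypothetical, and the graphing step assumes the freely available matplotlib library.

# A minimal sketch, with hypothetical data, of summarizing and graphing A-B data.
# Each value is the number of aggressive incidents observed in one 20-minute period.
from statistics import mean

import matplotlib.pyplot as plt

baseline = [6, 7, 5, 8, 6]            # "A" phase: no intervention in place
treatment = [4, 3, 2, 1, 1, 0, 0, 1]  # "B" phase: differential reinforcement in place

print(f"Baseline mean:  {mean(baseline):.1f} incidents per observation")
print(f"Treatment mean: {mean(treatment):.1f} incidents per observation")

# Plot both phases on one line graph with a dashed phase-change line,
# the conventional format for single-subject data.
sessions = range(1, len(baseline) + len(treatment) + 1)
plt.plot(sessions, baseline + treatment, marker="o")
plt.axvline(len(baseline) + 0.5, linestyle="--", color="gray")
plt.xlabel("Observation session")
plt.ylabel("Aggressive incidents")
plt.title("A-B design: baseline (A) and differential reinforcement (B)")
plt.show()

Inspecting the resulting graph for a clear drop in level between the two phases mirrors the visual analysis a counselor would otherwise perform by hand on graph paper.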

A-B-A-B DESIGN

Description

The A-B-A-B research design is an extension of the A-B design in which the baseline and treatment periods are repeated. After the first treatment phase ("B"), the treatment is removed, resulting in a return to baseline ("A"). This second baseline is then followed by another treatment phase ("B"). For the school counselor, as for other applied researchers, the most important ethical consideration is whether removal of the treatment would cause harm to the student or to others in the immediate environment.

Consideration must also be given to the limitations surrounding A-B-A-B research designs. Barlow and Hersen (1984) listed several important conditions to address before applying designs that involve withdrawal of interventions. First, adequate staffing and parental cooperation must be secured. Second, the safety of the target student and of other students must be ensured before implementing a design such as the A-B-A-B. Barlow and Hersen also cited a well-organized and brief timeline, along with minimal environmental distractions, as important considerations when conducting single-case research designs that require removal of interventions. Miltenberger (1997) noted ethical concerns about removing interventions when doing so would result in unsafe behavior by the student (e.g., self-injurious behavior). More practically for school counselors and teachers, it may be frustrating to remove an intervention that appears successful; Heppner, Kivlighan, and Wampold (1999) and Barlow and Hersen raised this issue as a limitation of designs such as the A-B-A-B.

Nevertheless, the strength of this design is its ability to demonstrate a functional relationship between the intervention and a positive outcome (e.g., behavior change) for the student. The A-B-A-B design is superior to the A-B design because it controls for many threats to internal validity and allows one to make more confident statements about the functional relationship between the intervention and the behavior. A-B-A-B designs have been used to demonstrate intervention effects on problems as diverse as school avoidance and separation anxiety (Hagopian & Slifer, 1993), on-task and on-schedule behaviors in students with autism (Bryan & Gast, 2000), disruptive behaviors in regular education students (De Martini-Scully, Bray, & Kehle, 2000), and ADHD-related behaviors (McGoey & DuPaul, 2000), as well as the effects of teacher questioning and pauses on child talk (Orsborn, Patrick, Dixon, & Moore, 1995). Despite its limitations, the A-B-A-B design's ability to demonstrate a functional relationship makes it a powerful means of providing evidence of effective interventions.

APPLICATION OF A-B-A-B DESIGN

An A-B-A-B design may be especially useful in the case of a student who has difficulty staying seated. First, baseline data are collected to document the amount of time the student is out of his or her seat. A treatment plan is then designed according to the function of the problem behavior and implemented over several days. For instance, some students leave their seats to avoid schoolwork that is either boring or too difficult; others do so to gain the attention of the teacher, classmates, or both. When intervening with students whose out-of-seat behavior functions to avoid work that is too difficult, the treatment would typically involve three components: (a) modifying (i.e., decreasing) the complexity of the assignment so that it is commensurate with the student's skill level, (b) providing direct skill instruction, and (c) providing brief breaks contingent upon completing the modified assignment (Skinner, 1998). During the treatment phase, data on out-of-seat behavior continue to be collected. During the return to baseline, or withdrawal phase (A), the treatment is removed and data collection continues. Returning to baseline makes it easier to see whether the changes occurred because of the treatment or because of some other extraneous variable: if the treatment is responsible for the changes, the behavior is expected to return to levels seen in the first baseline phase. This expectation is extremely important in determining the advisability or necessity of returning to baseline. To demonstrate the effectiveness of the treatment once more, the treatment is then implemented again in the final "B" phase. Figure 2 illustrates an A-B-A-B design used to reduce out-of-seat behavior.

[Figure 2 omitted]
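A script similar to the A-B example can be extended to the four phases of an A-B-A-B design. The sketch below, again with invented numbers and phase labels of our own (A1, B1, A2, B2), simply reports the mean minutes of out-of-seat behavior in each phase so the counselor can see whether the behavior worsens when treatment is withdrawn and improves when it is reinstated.

# A minimal sketch, with hypothetical data, of summarizing an A-B-A-B design.
# Each value is the number of minutes out of seat during a 20-minute observation.
from statistics import mean

phases = {
    "A1 (baseline)":   [12, 14, 13, 15],
    "B1 (treatment)":  [6, 4, 3, 3],
    "A2 (withdrawal)": [11, 13, 12, 14],
    "B2 (treatment)":  [4, 3, 2, 2],
}

for label, observations in phases.items():
    print(f"{label:15} mean = {mean(observations):4.1f} minutes out of seat")

# A rise in A2 followed by another drop in B2 supports a functional relationship
# between the intervention and the behavior change.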

MULTIPLE-BASELINE DESIGN

Description

A multiple-baseline design is essentially a series of A-B designs that are replicated in one of three ways: (a) with the same individual across different behaviors; (b) with the same individual across different settings; or (c) with the same behavior across different individuals. A multiple-baseline design strengthens the hypothesis that an intervention caused a behavior change but does not allow for statements of causality (Polaha & Allen, 1999). The multiple-baseline design may be helpful to the school counselor when a target behavior is exhibited by multiple students (e.g., several students in the same classroom), when more than one problematic behavior is exhibited by an individual student (e.g., talking loudly in class, not completing work, and physical aggression), or when a behavior is demonstrated by a student in more than one setting, such as in the classroom and on the playground (Miltenberger, 1997).

APPLICATION OF MULTIPLE-BASELINE DESIGN

A multiple-baseline design would be appropriate, for example, for a student exhibiting several disruptive behaviors in the classroom (e.g., inappropriate vocalizations, bothering others, being out of seat) when the counselor does not want to intervene with all behaviors at the same time because of the excessive time demands this might place on the teacher. Although there are different methods for selecting the first behavior to target in a multiple-baseline format, we advocate selecting the behavior that seems to be a "keystone" behavior for all of the others (Barnett, Bauer, Ehrhardt, Lentz, & Stollar, 1996). In the example above, being out of seat is the keystone behavior because the student bothers others and calls out to the teacher only when out of his or her seat. Thus, out-of-seat behavior would be selected as the first target of intervention. A typical treatment might involve the teacher briefly (i.e., for 5 seconds) observing the student at random times, about six times every hour, and awarding points for in-seat behavior. These points could then be exchanged for access to a grab-bag type of reinforcement system (Friman & Jones, 1998).

One advantage of using a multiple-baseline design in this type of situation is that the intervention effects may generalize across behaviors, thus obviating the need to intervene separately with each of the behaviors. That is, as out-of-seat behavior decreases, concurrent decreases may also be observed in bothering others and calling out, thus eliminating the need for intervention for those behaviors (Sulzer-Azaroff & Mayer, 1991).

Another example of when a multiple-baseline design could be useful to a school counselor is when several teachers are having similar problems with the same student. For instance, a junior high student may be exhibiting disruptive behaviors in English, math, PE, and the cafeteria. In this situation, a multiple-baseline-across-settings design would be most effective for guiding and evaluating the intervention. After collecting baseline data and establishing stability in each of the settings, treatment would be implemented in one setting while baseline data continue to be collected in the others. A typical treatment might involve providing access to desirable activities (e.g., computer time, going to the gym, playing a game) for 5 to 10 minutes at the end of each class period, contingent upon the absence of any disruptive behavior. Treatment would then be implemented sequentially in each of the remaining settings as necessary. Figure 3 illustrates how this example appears in a multiple-baseline design.

[Figure 3 omitted]
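The staggered record keeping of a multiple-baseline-across-settings study can also be handled with a short script. In the hypothetical sketch below, treatment is assumed to begin at a different session in each setting; everything before that session is baseline. The settings, session counts, and incident values are invented for illustration.

# A minimal sketch, with hypothetical data, of a multiple-baseline-across-settings
# record. "start" is the session at which treatment was introduced in that setting.
from statistics import mean

data = {
    "English": {"start": 6,  "incidents": [5, 6, 5, 7, 6, 2, 1, 1, 0, 1, 0, 1]},
    "Math":    {"start": 8,  "incidents": [4, 5, 4, 6, 5, 5, 6, 2, 1, 1, 0, 0]},
    "PE":      {"start": 10, "incidents": [7, 6, 8, 7, 6, 7, 8, 7, 6, 3, 2, 1]},
}

for setting, record in data.items():
    start = record["start"]
    observations = record["incidents"]
    baseline, treatment = observations[:start - 1], observations[start - 1:]
    print(f"{setting:7} baseline mean = {mean(baseline):.1f}, "
          f"treatment mean = {mean(treatment):.1f} "
          f"(treatment began at session {start})")

# If each setting improves only after treatment is introduced there, and not before,
# the staggered pattern strengthens the case that the intervention is responsible.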

CHANGING-CRITERION DESIGN

Description

The changing-criterion design is simply an A-B design replicated in such a way that the intervention remains the same but the criterion for success is changed sequentially. A changing-criterion design is especially appropriate when, for example, the goal is to increase the amount of completed classwork for a student who has adequate skills to do the work but is choosing not to do it (often referred to as a performance deficit because the student possesses the skills but is not performing the task; Skinner, 1998).

If, during baseline, a student were completing only 30% of his or her classwork, it would probably be unrealistic to design an intervention that immediately required 100% completion to earn a reinforcer. Therefore, a changing-criterion design would be implemented in which the percentage of work required is increased over a period of time. It is very important to set realistic goals for the student to attain: one would not raise the percentage to a level so high that the student could not be successful, nor set the goal so low that low levels of performance are inadvertently reinforced. Each criterion phase should also be long enough to allow for stable responding (Hayes, Barlow, & Nelson-Gray, 1999; Kazdin, 1982).

APPLICATION OF CHANGING-CRITERION DESIGN

In using a changing-criterion design to address work completion, baseline data would first be collected to determine the percentage of work the student is presently completing. Treatment would begin by ascertaining whether the student possesses the academic skills to do the work. This process determines whether the problem is a skill deficit (a lack of requisite academic skills) or a performance deficit (inadequate reinforcement available for work completion, sometimes referred to as a motivational problem). Assume that the student has all of the necessary academic skills to perform the work but is not completing it. The next step of treatment would involve designing a positive reinforcement program in which reinforcers are offered based on the percentage of work completed.

If the student is completing only 30% of his or her classwork, the initial criterion might be set at 45% to 50% so that success is attainable. After the student meets the criterion for 3 consecutive days, the criterion would be reset based on the average percentage of work completed during those 3 days. For example, if the student completed an average of 57% of the assigned work, the new criterion might be raised to 65%. This process would continue until the student is completing an acceptable percentage of work, probably in the range of 90% to 100%. Figure 4 illustrates a changing-criterion design for increasing classwork completion.

[Figure 4 omitted]
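The criterion-resetting rule described above can also be tracked with a few lines of code. The sketch below reflects only one reading of that rule (raise the criterion to the average of the last 3 successful days plus a modest increment, capped at 100%); the daily percentages, the 8-point increment, and the 90% end point are hypothetical values chosen to mirror the example in the text.

# A minimal sketch, with hypothetical data, of a changing-criterion record for
# classwork completion. The reset rule is an assumption based on the example above.
from statistics import mean

criterion = 45          # initial goal, a modest step up from a 30% baseline
increment = 8           # amount added to the 3-day average at each reset
target = 90             # stop raising the criterion once it reaches this level

daily_percent_complete = [48, 50, 52, 55, 58, 57, 60, 66, 70, 72, 78, 83, 88, 92, 95]

streak = []             # percentages from consecutive days that met the criterion
for day, percent in enumerate(daily_percent_complete, start=1):
    met = percent >= criterion
    print(f"Day {day:2}: completed {percent}% (criterion {criterion}%) "
          f"{'met' if met else 'not met'}")
    if met:
        streak.append(percent)
    else:
        streak = []
    if len(streak) == 3 and criterion < target:
        criterion = min(round(mean(streak)) + increment, 100)
        streak = []
        print(f"        criterion reset to {criterion}%")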

SUMMARY

The four applied single-subject research designs presented here provide an overview of the most common types of single-subject designs that can be used by a school counselor. The counselor must first be intentional in determining the design best suited to the targeted behaviors, setting, students, and teachers involved. In addition to selecting the most appropriate design for a particular situation, the school counselor must seek information and suggestions from parents, as well as their permission, before any interventions are attempted. School counselors are bound by both legal requirements and ethical guidelines concerning minor children, and these must be recognized. Not every design is applicable to every situation: some behavior change projects call for an A-B design while others are better suited to an A-B-A-B design, and the designs are as varied as the situations and problematic behaviors encountered in schools. Although a counselor may never encounter a situation requiring all of these designs, it is important to be aware and knowledgeable of all the tools available for demonstrating effectiveness and documenting student behavior change. Although this tutorial is not intended to be a thorough guide to single-subject research, it may remove the stigma sometimes attached to conducting applied research and give school counselors viable methods for conducting outcome research that documents the impact of their interventions.

Implications for Professional School Counselors

The issues of school counselor effectiveness and accountability have become increasingly important as school reform initiatives have challenged school counselors to link their programs and interventions to student success and academic achievement. Calls have arisen for professional school counselors to take responsibility for demonstrating the effectiveness of their interventions to students, teachers, administrators, and parents (Dahir, 2001; Fairchild, 1993; Green & Keys, 2001; Johnson, 2000). The school counseling profession began nearly 100 years ago and has undergone radical changes in the focus of its programs. Shifts have also taken place in the role and function of school counselors, moving the professional school counselor from a position-focused service provider to a comprehensive, developmental-guidance specialist (Gysbers & Henderson, 2000; Myrick, 1997). With the professional identity of school counselors likely to continue evolving, empirical evidence is necessary to validate the role that school counselors play in the education of students. This demands that school counselors accept the challenge to answer empirically the question: "Do I make a difference in the lives of the children with whom I work?" The use of single-subject research design methods such as those presented here will allow school counselors to respond with sound empirical data demonstrating positive student outcomes.

References

Baker, S. B. (2000). School counseling for the twenty-first century (3rd ed.). New York: Merrill/Prentice Hall.

Barlow, D. H., & Hersen, M. (1984). Single-case experimental designs: Strategies for studying behavior change (2nd ed.). New York: Pergamon.

Barnett, D. W., Bauer, A. M., Ehrhardt, K. E., Lentz, F. E., & Stollar, S. A. (1996). Keystone targets for change: Planning for widespread positive consequences. School Psychology Quarterly, 11, 95-117.

Borders, L. D. (2002). School counseling in the 21st century: Personal and professional reflections. Professional School Counseling, 5, 180-185.

Borders, L. D., & Drury, S. M. (1992). Comprehensive school counseling programs: A review for policymakers and practitioners. Journal of Counseling and Development, 70, 487-498.

Bryan, L. C., & Gast, D. L. (2000). Teaching on-task and on-schedule behaviors to high-functioning children with autism via picture activity schedules. Journal of Autism and Developmental Disorders, 30, 553-567.

Burnham, J. J., & Jackson, C. M. (2000). School counselor roles: Discrepancies between actual practice and existing models. Professional School Counseling, 4, 41-49.

Dahir, C. A. (2001). The national standards for school counseling programs: Development and implementation. Professional School Counseling, 4, 320-327.

De Martini-Scully, D., Bray, M., & Kehle, T. J. (2000). A packaged intervention to reduce disruptive behaviors in general education students. Psychology in the Schools, 37, 149-156.

Fairchild, T. N. (1993). Accountability practices of school counselors: 1990 national survey. The School Counselor, 40, 363-374.

Fairchild, T. N., & Seeley, T. J. (1995). Accountability strategies for school counselors: A baker's dozen. The School Counselor, 42, 377-392.

Fairchild, T. N., & Zins, J. E. (1986). Accountability practices of school counselors: A national survey. Journal of Counseling and Development, 65, 196-199.

Friman, P. C., & Jones, K. M. (1998). Elimination disorders in children. In T. S. Watson & F. M. Gresham (Eds.), Handbook of child behavior therapy (pp. 239-260). New York: Plenum.

Green, A., & Keys, S. (2001). Expanding the developmental school-counseling paradigm: Meeting the needs of the 21st century student. Professional School Counseling, 5, 84-95.

Gresham, F. M. (1998). Designs for evaluating behavior change: Conceptual principles of single-case methodology. In T. S. Watson & F. M. Gresham (Eds.), Handbook of child behavior therapy (pp. 23-40). New York: Plenum.

Gysbers, N. C., & Henderson, P. (2000). Developing and managing your school guidance program (3rd ed.). Alexandria, VA: American Counseling Association.

Hagopian, L. P., & Slifer, K. J. (1993). Treatment of separation anxiety disorder with graduated exposure and reinforcement targeting school attendance: A controlled case study. Journal of Anxiety Disorders, 7, 271-280.

Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O. (1999). The scientist practitioner: Research and accountability in the age of managed care. Boston: Allyn & Bacon.

Heppner, P. P., Kivlighan, D. J., Jr., & Wampold, B. E. (1999). Research design in counseling (2nd ed.). Belmont, CA: Wadsworth.

Hittleman, D. R., & Simon, A. J. (1997). Reading and evaluation procedure sections. In K. M. Davis (Ed.), Interpreting educational research: An introduction for consumers of research (pp. 169-221). Upper Saddle River, NJ: Prentice-Hall.

Johnson, L. S. (2000). Promoting professional identity in an era of educational reform. Professional School Counseling, 4, 31-40.

Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press.

Kuranz, M. (2002). Cultivating student potential. Professional School Counseling, 5, 172-179.

Lenhardt, A. M. C., & Young, P. A. (2001). Proactive strategies for advancing elementary school counseling programs: A blueprint for the new millennium. Professional School Counseling, 4, 187-194.

Looney, J. J. (1998). Let's get real about accountability! Thrust for Educational Leadership, 27, 8-9.

Lundervold, D. A., & Belwood, M. F. (2000). The best kept secret in counseling: Single-case (N = 1) experimental designs. Journal of Counseling and Development, 78, 92-102.

Lusky, M. B., & Hayes, R. L. (2001). Collaborative consultation and program evaluation. Journal of Counseling and Development, 79, 26-35.

McGoey, K. E., & DuPaul, G. J. (2000). Token reinforcement and response cost procedures: Reducing the disruptive behavior of preschool children with attention-deficit/hyperactivity disorder. School Psychology Quarterly, 15, 330-343.

Miltenberger, R. G. (1997). Behavior modification: Principles and procedures. Pacific Grove, CA: Brooks/Cole.

Myrick, R. D. (1990). Retrospective measurement: An accountability tool. Elementary School Guidance and Counseling, 25, 21-29.

Myrick, R. D. (1997). Developmental guidance and counseling: A practical approach (3rd ed.). Minneapolis, MN: Educational Media.

Orsborn, E., Patrick, H., Dixon, R., & Moore, D.W. (1995). The effects of reducing teacher questions and increasing pauses on child talk during morning news. Journal of Behavioral Education, 5, 347-357.

Otwell, P. S., & Mullis, F. (1997). Academic achievement and counselor accountability. Elementary School Guidance and Counseling, 31, 343-348.

Paisley, P. O., & Borders, L. D. (1995). School counseling: An evolving specialty. Journal of Counseling and Development, 74, 150-154.

Paisley, P. O., & McMahon, G. (2001). School counseling for the 21st century: Challenges and opportunities. Professional School Counseling, 5, 106-115.

Polaha, J. A., & Allen, K. D. (1999). A tutorial for understanding and evaluating single-subject methodology. Proven Practice, 1, 73-77.

Rhyne-Winkler, M. C., & Wooten, H. R. (1996). The school counselor portfolio: Professional development and accountability. The School Counselor, 44, 146-150.

Sears, S. J., & Granello, D. H. (2002). School counseling now and in the future: A reaction. Professional School Counseling, 5, 164-171.

Skinner, C. H. (1998). Preventing academic skills deficits. In T. S. Watson & F. M. Gresham (Eds.), Handbook of child behavior therapy (pp. 61-82). New York: Plenum.

Sulzer-Azaroff, B., & Mayer, G. R. (1991). Behavior analysis for lasting change. Belmont, CA: Wadsworth.

Vacc, N. A., & Rhyne-Winkler, M. C. (1993). Evaluation and accountability of counseling services: Possible implications for a midsize school district. The School Counselor, 40, 260-266.

Whiston, S. C. (2002). Response to the past, present, and future of school counseling: Raising some issues. Professional School Counseling, 5, 148-155.

Whiston, S. C., & Sexton, T. L. (1998). A review of school counseling outcome research: Implications for practice. Journal of Counseling and Development, 76, 412-426.

Linda H. Foster, NCC, NCSC, LPC, is a doctoral candidate, e-mail: fosterlh@bellsouth.net; T. Steuart Watson, Ph.D., is a professor; Caroline Meeks is a graduate student; and J. Scott Young, Ph.D., is an associate professor. All are with Mississippi State University, Mississippi State.
