
Special issue editor's introduction: using single-case research designs to demonstrate evidence for counseling practices.

The contemporary sociopolitical and economic climates have resulted in a necessity for counselors across settings to provide measurable outcomes that justify the type and amount of services rendered to clients (Lenz, 2013; Morgan & Morgan, 2009; Sanderson, 2003). When answering this call to accountability, some counselors have found themselves stymied by a set of assumptions wherein (a) such research is for people wearing lab coats, (b) predictive models are insufficient for making causal inferences, (c) between-groups treatment comparisons are not feasible, (d) the required sample size to complete outcome studies is often unrealistic, and (e) the practical preparation for completing research and program evaluation was insufficient during graduate preparation. Others may contend that formal research and appraisal practices are insufficient for capturing the humanistic nature of developments associated with treatment gains (Hansen, 2006, 2012). Despite these suppositions, the emphasis on implementing practices that support accountability among counselors, clients, funding sources, and stakeholders has remained. Therefore, a prudent task for counselors across settings is to use evaluative strategies that meet requirements for reporting outcomes while also honoring the uniqueness of each counseling relationship and the holistic development of all clients.

Single-case research designs (SCRDs) represent a practical strategy for making inferences about the efficacy of an intervention, establishing evidentiary support for counseling practices, and giving voice to counseling activities with small or understudied populations. At a basic level, SCRDs constitute a category of experimental techniques for evaluating causal relationships between interventions and dependent variables in which individual participants function as the unit of analysis (Lundervold & Belwood, 2000; O'Neill, McDonnell, Billingsley, & Jenson, 2010; Sharpley, 2007). Several authors have suggested that the term single-case research design is a bit of a misnomer when considering that a single case can be represented by an individual client, family, or group (Morgan & Morgan, 2009; O'Neill et al., 2010). When implementing this approach, participants serve as their own comparison: scores associated with a dependent variable during and after an intervention are contrasted with those collected prior to the introduction of the independent variable (Egel & Barthold, 2010; Rubin & Bellamy, 2012). As a result, the analysis of SCRD data yields information about individual outcomes associated with the experience of an intervention. A common practice is for counselors to inspect these data within individual participants (see Lenz, 2013); however, there is an emerging trend of aggregating results across cases to promote the inspection of participant characteristics and study qualities that may moderate outcomes (see Bowman-Perrott et al., 2013; Campbell & Herzinger, 2010). In both scenarios, the data yielded from SCRDs can support counseling professionals as they attempt to monitor the progress of individual clients, complete program evaluations, support evidence-based practice, and report activities to stakeholders.

Almost 2 decades ago, Chambless et al. (1996, 1998) presented benchmark criteria for evaluating the evidentiary support of therapeutic interventions. As a result, experimental between-groups research designs and SCRDs were galvanized within the social sciences as methods for contributing to the evidence-based treatment literature. Wampold, Lichtenberg, and Waehler (2002) expanded this framework by including deliberations related to the degree of specificity, the utility of multiple types of comparison groups, treatment component analysis, broad assessment, methods for consolidating results across studies, and the viability of interventions from one community to the next. Although these endorsements promoted two separate and unique methodologies for establishing evidentiary support for interventions, several authors have argued that between-groups designs, especially randomized controlled trials with an emphasis on null hypothesis testing, should be regarded as the gold standard of research methodologies (Balshem et al., 2011; Schulz, Altman, & Moher, 2010). The generalizability, transparency of methodology, and inferences related to causality inherent in well-designed between-groups research programs have provided a great wealth of knowledge related to best practices and have historically moved the counseling profession forward. However, despite these strengths, some have expressed skepticism regarding the goodness of fit and practicality of this methodology for counseling professionals (Foster, 2010; Lenz, 2013; Lundervold & Belwood, 2000; Morgan & Morgan, 2009; Sharpley, 2007).

Limitations of Between-Groups Designs in Counseling Settings

Most counselors will implement evaluative or research strategies for stakeholder reporting, fidelity monitoring for accreditation bodies, or to assess whether an intervention warrants continued application with a particular subset of clients. With these considerations in mind, the complex nature of between-groups designs may be in excess of what is needed or achievable within some counseling settings. Specifically, the characteristics of between-groups designs related to sample size, cost and logistics, types of comparison groups implemented, data analysis, and type of data yielded may have inherently limited utility.

Sample Size

Perhaps the most glaring challenge for some counselors when attempting to implement a between-groups design is the requirement of accessing a modest, but preferably large, sample size. Many counselors will not have access to a sufficient number of individuals who can function as participants within their research endeavor, given the nature of the setting within which they work. For example, a counselor working in a multisystemic therapy treatment setting may only have a caseload of eight to 10 families at a time, each of whom receives services for roughly 16 to 30 weeks, thus yielding only eight to 10 identifiable clients for a considerable amount of time. This may also be the case for those working with a population that has a low incidence of diagnosis, such as individuals with selective mutism or childhood-onset schizophrenia. In either case, counselors may find it challenging to garner a sample size that provides sufficient statistical power to evaluate null hypotheses or provide robust results that generalize to the target population.

Cost and Logistics

Implementing a between-groups design can be a complex endeavor that requires the coordination of several financial, temporal, and personnel resources. Oftentimes, research protocols require the use of standardized assessments and several qualified professionals to administer and score them, which requires an extensive amount of time and compensation. As an example, Cohen, Mannarino, and Iyengar (2011) compared trauma-focused cognitive behavior therapy with child-centered therapy among 124 boys and girls who were experiencing posttraumatic stress disorder (PTSD) symptoms related to witnessing intimate partner violence. To complete pretest-posttest evaluations, each participant completed several proprietary structured interviews, intelligence tests, and self-report measures; treatment was provided by three practitioners who received specialized training, supervision from two supervisors, and fidelity checks by two project coordinators. Although the data yielded from this evaluation provide useful information related to treating PTSD symptoms, the cost and logistics are often beyond what is feasible for most master's-level counselors, regardless of the setting.

Type of Comparison

Whereas some counseling or educational settings may be well suited for evaluating differences between individuals receiving a particular intervention and those within a comparison group, not all settings are conducive to this evaluative strategy. For example, many rehabilitative, school, or community-based treatment approaches have distinctive service packages for clients that are predicated on diagnostic category rather than on clients choosing the amount and type of services they receive from a list of options. Furthermore, in some settings, assigning participants to an alternative treatment or minimal treatment condition that lacks evidentiary support may be an ethical concern that places counselors at risk for litigious action.

Data Analysis

Some of the parametric statistical procedures associated with between-groups designs may be beyond the developmental level of master's-level counselors and discourage meaningful analyses of participant data. This may be especially true when considering the requirements for conducting statistical power analysis, computing reliability metrics, and interpreting findings in the context of practical significance estimations as depicted in the Publication Manual of the American Psychological Association, Sixth Edition (American Psychological Association, 2009). These and other analyses often rely on access to statistical packages that can be expensive. Furthermore, even open-source options such as GNU PSPP Statistical Analysis Software or R software may require training or familiarity with writing syntax to complete even basic computations.

Type of Data Yielded

Primary analyses completed in between-groups designs are typically based on finding meaningful differences between two or more levels of an intervention across at least two measurement intervals. This type of gain score data is well suited for illustrating the broad-level effectiveness of a treatment within a population sample and as a criterion for identifying moderating variables that predict changes within the group; however, researchers may be left with little indication of individual participants' responses to treatment. As a consequence, data yielded from between-groups designs often obscure who among the participants may respond best to a particular intervention and under which circumstances efficacy is maximized.

The artificial, narrowly focused, and inaccessible nature of large between-groups methodologies such as randomized controlled trials brings into question the status of this modality as the gold standard for counseling researchers. Too often, master's-level practitioners shy away from completing evaluative activities that may support the clinical decision making of their peers because the concept of research itself is nested in a methodology that requires finding large samples, negotiating costs and logistics, identifying a comparison group, and completing complex data analyses that may not yield the type of data that support intervention on an individual level. With skepticism toward the practicality of this modality mounting, counselors may presently be in the foothills of a golden age for implementing alternative methods such as SCRDs for evaluating the efficacy of interventions.

SCRDs as a Practical Alternative for Counselors

When counselors use SCRDs, they are implementing a scientifically rigorous, yet flexible approach for estimating the benefit of interventions that can be evaluated across counseling settings. Rather than considering between-groups methodology or SCRDs as competing designs in which one is better than another, a more prudent perspective may be to regard them as complementary for creating a more representative depiction of a treatment effect over time. SCRDs have several distinguishing characteristics that function as inherent strengths for counseling professionals, including characteristics related to their required sample size, ability to make causal inferences, applicability for special populations, systematic nature of evaluation, and type of understanding available from analyses.

Minimal Sample Size Required

The minimal sample size required for implementing an SCRD is one, but most investigations will typically be completed with at least three participants as a safeguard against attrition. Although the analysis among SCRDs occurs at the level of a singular case, having several participants within an evaluation promotes the inclusion of diverse client characteristics that may lead to an increased understanding of which interventions work for whom and under what circumstances. This prospect is promising to most counselors, who may receive referrals a few at a time or need to report to stakeholders a therapeutic trajectory related to a novel treatment approach for an infrequent diagnosis. For example, a partial hospitalization program for eating disorders may only admit three to five clients per month, or a school counselor may be requesting allocation of time to conduct a support group for students who are dealing with the separation of their parents. In each instance, the counselor will likely be required to provide a third party, such as an insurance company or administrator, with documentation of efficacy that justifies the type and amount of services delivered.

Self as Control

Unlike participants in between-groups designs, participants in SCRDs serve as their own comparison condition. This ability to demonstrate functional relationships between no treatment, treatment, and alternative treatments underlies the causal nature of well-designed SCRDs and their utility for evaluating the efficacy of counseling practices. For example, a client receiving an innovative exposure therapy protocol for the treatment of PTSD may be asked to monitor the frequency and intensity of arousal symptoms daily for a week prior to the beginning of treatment. Comparison of these ratings with those collected during and after treatment provides the scenario to begin inspecting cause-and-effect relationships between the intervention and symptoms. As these results are replicated within and across clients, each serving as their own control, more robust inferences about the functional nature of the innovative exposure therapy protocol may be supported.
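To make this comparison concrete, consider the following minimal sketch, written in Python with hypothetical daily ratings on a 0 to 10 scale; the values are illustrative assumptions rather than data from any study cited here.

```python
# A minimal sketch of the self-as-control logic: one client's week of baseline
# (phase A) arousal-symptom ratings is contrasted with ratings collected during
# the exposure protocol (phase B). All values are hypothetical.
from statistics import mean

baseline = [8, 7, 8, 9, 8, 7, 8]            # phase A: daily pre-treatment ratings
treatment = [7, 7, 6, 6, 5, 5, 4, 4, 3, 3]  # phase B: ratings during treatment

# The client's own baseline serves as the comparison condition.
print(f"Baseline mean:  {mean(baseline):.1f}")   # 7.9
print(f"Treatment mean: {mean(treatment):.1f}")  # 5.0
print(f"Level change:   {mean(treatment) - mean(baseline):+.1f}")
```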

Flexibility and Responsiveness

SCRDs are practical to implement across counseling settings for a number of reasons. Foremost, the use of SCRDs allows for the evaluation of interventions with diverse, unique, or rural populations that may traditionally be difficult to assess given marginal access to clients. Furthermore, SCRDs are intentionally adaptive and flexible so that people who may not respond to a treatment may have an opportunity to do so. In most between-groups designs, all participants receive a standard dosage of treatment (e.g., 10 sessions of acceptance and commitment therapy), whereas SCRDs are not limited by the confines of standard protocols and can be responsive to client needs by accommodating the amount or type of interventions, all of which will be accounted for across documented phase changes. To illustrate, Schottelkorb and Ray (2010) reported that one participant within their single-case effectiveness study was switched from a reading mentoring condition to a therapeutic one following a teacher's concern regarding the student's persistent severity of off-task behavior, a trend that was confirmed through the inspection of visual data. Finally, although criteria have been proposed that identify standards for rigor within SCRDs (Chambless & Hollon, 1998; Kratochwill et al., 2010), several researchers have stressed that standards are best conceptualized in the context of treatment settings and the ethical obligations of counselors not to withhold treatment from clients with an identified need (Kennedy, 2005). To illustrate, Kratochwill et al. (2010) suggested that a minimum of five measurements is required across all phases of treatment to establish internal validity within SCRDs; however, the fact remains that across many settings, counselors may be compelled by ethical imperatives or the policies of their setting to start an intervention before a stable baseline or five assessments can be administered.

Ease of Data Analysis

Unlike between-groups designs that rely on parametric statistics grounded in matrix algebra, the analyses used to determine the magnitude of a treatment effect within SCRDs can often require little more than a graphical representation of the data, a pencil, and a straightedge ruler (Parker, Vannest, & Davis, 2011). Furthermore, whether implementing visual analysis or another methodology such as those based on nonoverlap of data between phases, only modest training is required, and results tend to be highly replicable within each approach, despite some evidence that separate metrics may yield differential results across approaches (Brossart, Parker, Olson, & Mahadevan, 2006; Lenz, 2013). This latter point speaks to the responsibility of counselors using SCRDs to implement an analytic procedure that is prudent given the characteristics of their data, rather than to select strategies that may inflate their findings or are not indicated for the data at hand.
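As an illustration, one of the nonoverlap techniques reviewed by Parker et al. (2011) is the percentage of nonoverlapping data (PND). The following minimal sketch, reusing the hypothetical ratings from the earlier example, shows how little computation such a metric requires; the function and data are illustrative only, and metric selection should follow the cautions noted above.

```python
# A minimal sketch of the percentage of nonoverlapping data (PND): the share of
# treatment-phase points that fall beyond the most extreme baseline point in
# the therapeutic direction. Data are hypothetical.

def pnd(baseline, treatment, lower_is_better=True):
    """Percentage of treatment points that do not overlap the baseline range."""
    if lower_is_better:
        best_baseline = min(baseline)
        nonoverlap = [t for t in treatment if t < best_baseline]
    else:
        best_baseline = max(baseline)
        nonoverlap = [t for t in treatment if t > best_baseline]
    return 100.0 * len(nonoverlap) / len(treatment)

baseline = [8, 7, 8, 9, 8, 7, 8]
treatment = [7, 7, 6, 6, 5, 5, 4, 4, 3, 3]
print(f"PND = {pnd(baseline, treatment):.0f}%")  # 80%: 8 of 10 points below 7
```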

Type of Data Yielded From Analyses

The type of data yielded from an SCRD is characteristically practical for counseling professionals. Because data are collected in a systematic way over time, visual depictions of client data illustrate each client's unique experience with an intervention. When paired with an in-depth understanding of individual clients or participants, an understanding is possible not just about the clients for whom an intervention works, but also about those for whom it is not efficacious. These characteristics also support identifying the course of an intervention and what dosage of intervention is required for a therapeutic effect. For example, if a counselor is evaluating a 16-session treatment protocol, but client data depict that most individuals in a series of SCRDs tended to have data stable within the therapeutic range by Session 8, it may be plausible to infer that the longer protocol is not warranted to achieve the intended effect. From a practical perspective, findings such as these could provide support for a counselor to provide an eight-session intervention to twice as many clients with a similar effect as could be expected from the 16-session approach. The ability of counselors to evaluate interventions using SCRDs is bolstered by this capacity to answer the fundamental questions of what interventions work, for whom, and under what circumstances.
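The dosage inference described above can be framed as a simple computation. The following sketch, with a hypothetical therapeutic cutoff and 16 hypothetical weekly ratings, identifies the first session after which a client's scores remain within the therapeutic range.

```python
# A minimal sketch of a dosage question: from which session onward do all
# ratings stay at or below a therapeutic cutoff? Cutoff and data are hypothetical.

def sessions_to_stability(scores, cutoff=4):
    """Return the first session from which every rating stays at or below cutoff."""
    for session in range(1, len(scores) + 1):
        if all(s <= cutoff for s in scores[session - 1:]):
            return session
    return None  # scores never stabilized within the therapeutic range

ratings = [8, 7, 7, 6, 5, 5, 5, 4, 3, 3, 4, 3, 3, 4, 3, 3]  # 16 weekly sessions
print(f"Stable within therapeutic range from Session {sessions_to_stability(ratings)}")
```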

Demonstrating Evidentiary Support Using SCRDs

Consumers of published SCRDs typically agree on at least two things: not all SCRDs are created equal, nor do the data yielded from their findings warrant the same degree of trustworthiness. In an effort to standardize the degree of rigor required for SCRDs to depict a functional relationship between an intervention and an outcome, several experts have presented standards for demonstrating evidentiary support when using SCRDs that are helpful for researchers to consider (Chambless et al., 1996, 1998; Kratochwill et al., 2010; Wampold et al., 2002). Chambless et al. (1996, 1998) indicated that SCRDs can be used to classify therapeutic interventions as "well-established," "probably efficacious," or not demonstrating efficacy (Chambless et al., 1998, p. 4). Within their framework, an intervention meets criteria as well established through the implementation of a large series of SCRDs (N > 9) that use good experimental designs and compare the intervention of interest with an alternative treatment. An intervention can be regarded as probably efficacious following the implementation of a small series of SCRD experiments (N > 3). In both scenarios, SCRDs should be completed using treatment manuals with a clearly described population sample across at least two individual investigators or investigative teams. Later, Chambless and Hollon (1998) advocated for the establishment of a stable data trend prior to implementing an intervention and the use of A-B-A-B or multiple baseline designs with at least three clinically relevant outcomes as integral to determining the degree to which an intervention should be regarded as efficacious.
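The series-size thresholds in the Chambless et al. framework can be summarized schematically. The following sketch is one interpretation of the thresholds stated above, not an official instrument; it omits the treatment manual, sample description, and multiple-team requirements, which must also be met.

```python
# A minimal sketch of the Chambless et al. (1996, 1998) series-size thresholds
# described above. Other criteria (treatment manuals, clearly described samples,
# at least two investigative teams) also apply and are not modeled here.

def chambless_series_classification(n_scrds, good_design, alternative_comparison):
    """Classify an intervention by the size and quality of its SCRD series."""
    if good_design and alternative_comparison and n_scrds > 9:
        return "well-established"
    if good_design and n_scrds > 3:
        return "probably efficacious"
    return "efficacy not demonstrated"

print(chambless_series_classification(10, True, True))  # well-established
print(chambless_series_classification(4, True, False))  # probably efficacious
```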

More recently, Kratochwill et al. (2010, 2013) provided a greater degree of specificity in their description of criteria for designs that result in the designation that an SCRD "meets design standards," "meets design standards with reservations," or "does not meet design standards" (Kratochwill et al., 2013, p. 27). The standards presented by Kratochwill et al. are grounded in maximizing the internal validity of an SCRD and demonstrating efficacy through multiple replications of results with a participant. Within this framework, researchers attempting to meet design standards are required to use an experimental protocol; systematically collect data over time using more than one assessor; include at least three attempts to replicate intervention effects with individual participants (e.g., use of an A-B-A-B-A-B design); and collect a minimum of three, but preferably five, data points within each intervention phase (Kratochwill et al., 2010, 2013). Furthermore, Kratochwill et al. detailed general guidelines for completing visual analysis and estimating the magnitude of a treatment effect across study phases using quantitative metrics.
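As a complement, the per-phase data-point thresholds from Kratochwill et al. (2010, 2013) can be checked mechanically. The following sketch is not an official What Works Clearinghouse tool; it tests only the three-point minimum and five-point preference described above against hypothetical phase data, leaving the remaining standards aside.

```python
# A minimal sketch that rates hypothetical A-B-A-B phase data against the
# per-phase data-point thresholds of Kratochwill et al. (2010, 2013): at least
# three, preferably five, points per phase. Other standards are not modeled.

def rate_phases(phases):
    """phases: dict mapping phase labels (e.g., 'A1', 'B1') to lists of scores."""
    counts = {label: len(data) for label, data in phases.items()}
    if all(n >= 5 for n in counts.values()):
        return counts, "meets the per-phase data-point standard"
    if all(n >= 3 for n in counts.values()):
        return counts, "meets the per-phase standard with reservations"
    return counts, "does not meet the per-phase data-point standard"

design = {"A1": [8, 7, 8, 9, 8], "B1": [6, 5, 5, 4, 4],
          "A2": [7, 7, 8], "B2": [5, 4, 4, 3, 3]}
counts, verdict = rate_phases(design)
print(counts, "->", verdict)  # A2 has only 3 points -> with reservations
```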

Both sets of guidelines, presented by Chambless et al. (1996, 1998) and Kratochwill et al. (2010, 2013), represent tremendous advances for researchers through the operationalization of SCRD technology and should be considered a benchmark to which counseling researchers can refer. With this noted, meeting the rigor inherent within these standards may be problematic for many master's-level counseling professionals for a variety of ethical and practical reasons. Most counselors work in applied settings where they are legally bound by the ethical principles of their state licensing boards and the American Counseling Association's Code of Ethics (American Counseling Association, 2014), in which standards for service provision and characteristics of the treatment setting may not permit the establishment of a stable baseline, the use of multiple observers, or the repeated withdrawal and reintroduction of therapeutic interventions.

Consider the following two examples that challenge the practicality of adhering to rigorous standards for SCRDs that favor internal validity over external validity: (a) a licensed professional counselor working in a community-based mental health setting provides a screening for someone who has been the victim of kidnapping along the Mexican-American border and is presently experiencing the symptoms of PTSD, having received no immediate treatment after the occurrence, and (b) a registered play therapist completes an initial consultation with a child who has recently lost both of his/her parents in an automobile accident and is experiencing the deleterious psychological effects of processing the loss while also accommodating to a new living arrangement with a relative. In both instances, the client represents a special population of interest about whom little is understood with regard to the response to standard treatment protocols. Furthermore, given the complex nature of the hypothetical symptoms, stakeholders such as supervisors, relatives, and third-party payers may eventually request documentation regarding the course and efficacy of the counselor's intervention. SCRDs offer a practical option for documenting progress with the client over time; however, the long-term establishment of baseline symptoms prior to an intervention may not be ethically prudent. Furthermore, if a counselor is using interventions such as cognitive processing therapy (Resick & Schnicke, 1993) or child-centered play therapy (Ray, 2011) to treat the client, the counselor's withdrawal from the counseling relationship would be neither ethical nor expected. Finally, an assessment of outcomes at many counseling sites may include the frequency of overt behaviors obtained through self-monitoring, but also likely includes self-reported data along one or more psychological constructs. In such instances, multiple raters for a single dimension of client experience and the presence of at least three unrelated dependent variables may not be guaranteed.

These examples are presented to illustrate that, oftentimes, practicing counselors will be compelled to find a balance between standards for internal validity that demonstrate a functional relationship between an intervention and an outcome and an approach to measuring outcomes within conditions that make sense clinically. SCRDs can be a form of participatory action research that promotes sustainable livelihoods and community development, thereby allowing counselors to do justice to their clients, who often present in vulnerable states because of the severity of their symptoms and related impairments in functioning. The ability of SCRDs to support causal inferences when strong methodology is implemented, their reliance on few participants, their emphasis on responsiveness to treatment, their ability to describe the experience of special populations, the type of data they yield, and the accessibility of their evaluation strategies make them an obvious choice for counseling professionals.

In the absence of SCRD standards that are specific to counselors, the choice of how to balance variables that influence internal and external validity so as to maximize the practical nature of evaluation should be informed by the general recommendations depicted by the authors of articles included within this special issue, as well as by the resources they have cited. In general, counselors should select SCRD methodologies that are most appropriate for their setting, treatment modality, target outcomes, and allotted interval for intervention. Preliminary evidentiary support can occur through the replication of findings within same-person (intraparticipant) and between-participants (interparticipant) research. In either case, providing rich descriptions of a participant's experience with the intervention, estimating treatment efficacy through visual analysis and quantitative metrics of the treatment effect, and emphasizing clinically practical outcomes are judicious. With these strategies in mind, SCRDs may provide the testament of evidence-supported practice in counseling that strengthens and broadens counselors' professional scope.

Rationale for This Special Issue

This special issue of the Journal of Counseling & Development (JCD) is intended to support the use and dissemination of SCRDs by counseling professionals. Such an endeavor is warranted given that, despite calls by previous scholars (Lundervold & Belwood, 2000; Ray, Barrio Minton, Schottelkorb, & Brown, 2010; Sharpley, 2007), training in SCRDs and the availability of exemplar studies depicting their implementation are typically underrepresented in counselor preparation programs, continuing education outlets such as professional conferences, and counseling journals. Therefore, this special issue is intended to provide practical support for master's-level counselors and doctoral-level counselor educators who are interested in using SCRD theory and methods not only to contribute to the knowledge base available within JCD, but also to stimulate the use and reporting of counseling outcomes by practitioners and scholars alike.

Readers will find methodological articles that address strategies for SCRD design (Ray, 2015), analysis (Vannest & Ninci, 2015), and reporting (Hott, Limberg, Ohrt, & Schmit, 2015) that are intended to depict the process of completing an SCRD from inception to disseminating findings to stakeholders. Also included within this special issue are eight exemplar studies that depict the use of SCRDs to evaluate the efficacy of counseling and counselor preparation interventions with individuals across the life span. Four articles investigate the efficacy of interventions with school-aged children and adolescents using Adlerian play therapy (Meany-Walen, Kottman, Bullis, & Taylor, 2015), child-centered play therapy (Ware Balch & Ray, 2015), and nature-based (Swank, Shin, Cabrita, Cheung, & Rivers, 2015) and peer-monitoring modalities (Smith, Evans-McCleon, Urbanski, & Justice, 2015). Two articles depict the utility of counseling interventions within the rehabilitative experience of individuals who are incarcerated, using narrative therapy (Ikonomopoulos, Smith, & Schmidt, 2015) and cognitive behavior therapy (Cox, Lenz, & James, 2015). Finally, two articles report the findings of SCRDs used to evaluate training practices intended to increase relational development and multicultural competency (Swan, Schottelkorb, & Lancaster, 2015) and client-counselor attunement (Schomaker & Ricard, 2015) among counseling students.

DOI: 10.1002/jcad.12036

References

American Counseling Association. (2014). ACA code of ethics. Alexandria, VA: Author.

American Psychological Association. (2009). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.

Balshem, H., Helfand, M., Schünemann, H., Oxman, A., Kunz, R., Brozek, J., Vist, G., ... Guyatt, G. (2011). GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology, 64, 401-406. doi:10.1016/j.jclinepi.2010.07.015

Bowman-Perrott, L., Davis, H., Vannest, K., Williams, L., Greenwood, C., & Parker, R. (2013). Academic benefits of peer tutoring: A meta-analytic review of single-case research. School Psychology Review, 42, 39-55.

Brossart, D. F., Parker, R. I., Olson, E. A., & Mahadevan, L. (2006). The relationship between visual analysis and five statistical analyses in a simple AB single-case research design. Behavior Modification, 30, 531-563. doi:10.1177/0145445503261167

Campbell, J. M., & Herzinger, C. V. (2010). Statistics and single subject methodology. In D. L. Gast & J. R. Ledford (Eds.), Single subject research methodology in behavioral sciences (pp. 417-457). New York, NY: Routledge.

Chambless, D. L., Baker, M., Baucom, D. H., Beutler, L. E., Calhoun, K. S., Crits-Christoph, P., ... Woody, S. R. (1998). Update on empirically validated therapies, II. The Clinical Psychologist, 51, 3-16.

Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7-18.

Chambless, D. L., Sanderson, W. C., Shoham, V., Johnson, S. B., Pope, K. S., Crits-Christoph, P., ... McCurry, S. (1996). An update on empirically validated therapies. The Clinical Psychologist, 49, 5-18.

Cohen, J. A., Mannarino, A. P., & Iyengar, S. (2011). Community treatment of posttraumatic stress disorder for children exposed to intimate partner violence: A randomized controlled trial. Archives of Pediatrics & Adolescent Medicine, 165, 16-21.

Cox, R. M., Lenz, A. S., & James, R. K. (2015). A pilot evaluation of the ARRAY program with offenders with mental illness. Journal of Counseling & Development, 93, 471-480.

Egel, A. L., & Barthold, C. H. (2010). Single subject design and analysis. In G. R. Hancock & R. O. Mueller (Eds.), The reviewer's guide to quantitative methods in the social sciences (pp. 357-370). New York, NY: Routledge.

Foster, L. (2010). A best kept secret: Single-subject research design in counseling. Counseling Outcome Research and Evaluation, 1, 30-39. doi:10.1177/2150137810387130

GNU PSPP Statistical Analysis Software (Version 0.8.4) [Computer software]. Boston, MA: Free Software Foundation, Inc.

Hansen, J. T. (2006). Is the best practices movement consistent with the values of the counseling profession? A critical analysis of best practices ideology. Counseling and Values, 50, 154-156.

Hansen, J. T. (2012). Extending the humanistic vision: Toward a humanities foundation for the counseling profession. Journal of Humanistic Counseling, 51, 133-144.

Hott, B. L., Limberg, D., Ohrt, J. H., & Schmit, M. K. (2015). Reporting results of single-case studies. Journal of Counseling & Development, 93, 412-417.

Ikonomopoulos, J., Smith, R. L., & Schmidt, C. (2015). Integrating narrative therapy within rehabilitative programming for incarcerated adolescents. Journal of Counseling & Development, 93, 460-470.

Kennedy, C. (2005). Single-case designs for educational research. Boston, MA: Allyn & Bacon.

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34, 26-38. doi:10.1177/0741932512452794

Lenz, A. S. (2013). Calculating effect size in single-case research: A comparison of nonoverlap methods. Measurement and Evaluation in Counseling and Development, 46, 64-73. doi:10.1177/0748175612456401

Lundervold, D. A., & Belwood, M. F. (2000). The best kept secret in counseling: Single-case (N = 1) experimental designs. Journal of Counseling & Development, 78, 92-102.

Meany-Walen, K. K., Kottman, T., Bullis, Q., & Taylor, D. D. (2015). Effects of Adlerian play therapy on children's externalizing behavior. Journal of Counseling & Development, 93, 418-428.

Morgan, D. L., & Morgan, R. K. (2009). Single-case research methods for the behavioral and health sciences. Thousand Oaks, CA: Sage.

O'Neill, R. E., McDonnell, J. J., Billingsley, F. E., & Jenson, W. R. (2010). Single case research designs in educational and community settings. Upper Saddle River, NJ: Pearson.

Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect size in single-case research: A review of nine nonoverlap techniques. Behavior Modification, 35, 303-322. doi:10.1177/0145445511399147

R [Computer software]. Vienna, Austria: R Foundation for Statistical Computing.

Ray, D. C. (2011). Advanced play therapy: Essential conditions, knowledge, and skills for child practice. New York, NY: Routledge.

Ray, D. C. (2015). Single-case research design and analysis: Counseling applications. Journal of Counseling & Development, 93, 394-402.

Ray, D., Barrio Minton, C., Schottelkorb, A., & Brown, A. (2010). Single-case design in child counseling research: Implications for counselor education. Counselor Education and Supervision, 49, 193-208.

Resick, P. A., & Schnicke, M. (1993). Cognitive processing therapy for rape victims: A treatment manual. London, England: Sage.

Rubin, A., & Bellamy, J. (2012). Practitioner's guide to using research for evidence-based practice (2nd ed.). Hoboken, NJ: Wiley.

Sanderson, W. C. (2003). Why empirically supported treatments are important. Behavior Modification, 27, 290-299. doi:10.1177/0145445503027003002

Schomaker, S. A., & Ricard, R. J. (2015). Effect of a mindfulness-based intervention on counselor-client attunement. Journal of Counseling & Development, 93, 491-498.

Schottelkorb, A. A., & Ray, D. C. (2010). ADHD symptom reduction in elementary students: A single-case effectiveness design. Professional School Counseling, 13, 11-22.

Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMC Medicine, 8, 18-26. doi:10.1186/1741-7015-8-18

Sharpley, C. F. (2007). So why aren't counselors reporting n = 1 research designs? Journal of Counseling & Development, 85, 349-356.

Smith, H. M., Evans-McCleon, T. N., Urbanski, B., & Justice, C. (2015). Check-in/check-out intervention with peer-monitoring for a student with emotional-behavioral difficulties. Journal of Counseling & Development, 93, 451-459.

Swan, K. L., Schottelkorb, A. A., & Lancaster, S. (2015). Relationship conditions and multicultural competence for counselors of children and adolescents. Journal of Counseling & Development, 93, 481-490.

Swank, J. M., Shin, S. M., Cabrita, C., Cheung, C., & Rivers, B. (2015). Initial investigation of nature-based, child-centered play therapy: A single-case design. Journal of Counseling & Development, 93, 440-450.

Vannest, K. J., & Ninci, J. (2015). Evaluating intervention effects in single-case research designs. Journal of Counseling & Development, 93, 403-411.

Wampold, B. E., Lichtenberg, J. W., & Waehler, C. A. (2002). Principles of empirically supported interventions in counseling psychology. The Counseling Psychologist, 30, 197-217. doi:10.1177/0011000002302001

Ware Balch, J., & Ray, D. C. (2015). Emotional assets of children with autism spectrum disorder: A single-case therapeutic outcome experiment. Journal of Counseling & Development, 93, 429-439.

Received 01/27/15

Revised 02/08/15

Accepted 02/09/15

A. Stephen Lenz, Guest Editor

A. Stephen Lenz, Department of Counseling and Educational Psychology, Texas A&M University-Corpus Christi. Correspondence concerning this article should be addressed to A. Stephen Lenz, Department of Counseling and Educational Psychology, Texas A&M University-Corpus Christi, 6300 Ocean Drive, Early Childhood Development Center, Room 152, Corpus Christi, TX 78412 (e-mail: stephen.lenz@tamucc.edu).
