
From good intentions to evidence-based: paving the right road.

One of the great mysteries of correctional practice is how a program or practice moves from a single facility to become a standard part of the corrections world. In the world of social science, this is known as the "diffusion of innovations": the movement of an idea or practice from one setting across a variety of settings until it achieves widespread adoption. (1) Of course, the difference between the diffusion of a fad and the adoption of a productive correctional practice is important.

A definition of evidence-based practice was introduced in this column in the December 2006 issue of Corrections Today. (2) The authors offered that definition in terms of communicating program evaluation results; this column takes a step back and proposes some steps in that process to allow readers to locate themselves on the road to proven practice. It will examine the process by which individual research and evaluation studies move a practice from good intention to the ranks of evidence-based practice; or, just as important, why some good intentions wind up in the "doesn't work" category.

Most ideas start from the observations of an individual or group: corrections professionals, academics who study corrections, or others concerned with issues affecting correctional populations and their families. Some ideas are purely observation-based, others are grounded in faith, some in academic theory and a good many are simply well-intentioned. At a minimum, they are ideas that someone thinks will produce a desired outcome in a correctional environment.

Corrections professionals are very curious about what is working in different systems, and they often seek out and share information with one another. When something being tried in one setting appears successful, others may adopt the practice if they think it will fit their system. From these exchanges of ideas are born practice-based knowledge and operations, which are often evaluated in terms of whether they make sense to those who conduct them or to funding sources, such as legislators, who like the idea. One practice-based program that made intuitive sense was Scared Straight, which brought children who were engaging in, or at risk for, criminal behavior into prisons in hopes of deterring future criminal activity. The program was well intended but, in fact, proved a consistent failure in evaluation studies.

Agency researchers or academics want to evaluate programs to see if they really work, or worse, to determine whether they cause unanticipated harm. Program evaluation as a behavioral and social science tool in criminal justice really took off in the 1970s as a way to determine whether the large sums of money being spent through Law Enforcement Assistance Administration (LEAA) programs were actually accomplishing what the government wanted. In the early days of criminal justice evaluation, evaluators often had to rely on pre- and post-test designs to see whether knowledge or behaviors had changed from pre-intervention to some period following the intervention. At best, evaluators were generally able to conduct quasi-experimental designs with some sort of comparison or control group, often because of issues raised by attorneys and advocacy groups about rights to treatment and due process (procedural justice).
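
To make the distinction concrete, consider the following sketch (written in Python with hypothetical scores; the data and variable names are illustrative, not drawn from any actual evaluation). It contrasts a bare pre- and post-test change with a quasi-experimental estimate that nets out the change observed in a comparison group.

    # Hypothetical risk scores for a treatment group and a
    # non-randomized comparison group (illustration only).
    treatment_pre = [62, 70, 55, 68, 74]    # before the intervention
    treatment_post = [50, 61, 48, 57, 66]   # after the intervention
    comparison_pre = [64, 69, 58, 71, 73]
    comparison_post = [60, 66, 55, 68, 70]

    def mean(xs):
        return sum(xs) / len(xs)

    # A bare pre- and post-test design looks only at change within
    # the treated group.
    pre_post_change = mean(treatment_post) - mean(treatment_pre)

    # A quasi-experimental design subtracts the change seen in the
    # comparison group, guarding against crediting the program for
    # improvement that would have happened anyway.
    adjusted = pre_post_change - (mean(comparison_post) - mean(comparison_pre))

    print("Pre/post change (treatment only): %+.1f" % pre_post_change)
    print("Change net of comparison group:   %+.1f" % adjusted)

The second estimate is exactly what a comparison or control group buys the evaluator.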

During the next two decades, the science of evaluation changed rapidly, and the research community began to develop an idea of what constitutes a design strong enough to produce an "evidence base." The random assignment of individuals to treatment or control groups has long been a hallmark of strong research design. As legislatures and foundations began to ask how effective the programs they funded actually were, the importance of such designs grew. Some funding agencies began to demand that any program they funded randomly assign participants to treatment or control groups; failure to promise to do so could mean not being considered for funding.
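
A minimal sketch of what such a requirement amounts to in practice, using a hypothetical participant roster (the identifiers and the even split are illustrative assumptions):

    import random

    # Randomly assign a hypothetical roster to treatment or control.
    participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

    rng = random.Random(42)   # fixed seed so the assignment can be audited
    roster = participants[:]
    rng.shuffle(roster)

    half = len(roster) // 2
    treatment_group = roster[:half]
    control_group = roster[half:]

    print("Treatment:", sorted(treatment_group))
    print("Control:  ", sorted(control_group))

Because assignment is random, later differences in outcomes between the groups can be attributed to the program rather than to how participants were selected.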

The question arose, however, whether one study was really enough to call something a best practice in the criminal justice world. Thus, in the 1990s, the Office of Juvenile Justice and Delinquency Prevention funded the development of the Blueprints for Violence Prevention (3) at the University of Colorado to determine the strength of evidence among delinquency and violence prevention programs. Programs must satisfy three criteria for consideration: evidence of a strong deterrent effect, sustained effects beyond the treatment period and replication across multiple sites. Criteria such as cost-effectiveness and consideration of mediating influences may also be assessed. Programs that satisfy two of the criteria may be placed in the "promising practice" category, often because replication studies are lacking; those that satisfy all three move toward the "model program" category. The reader is encouraged to visit the University of Colorado Web site for more detailed information.
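
As a rough illustration of that tiering logic, the sketch below scores a hypothetical program against the three criteria. The criteria names paraphrase this column's description, and the simple count-based rule is a simplification for illustration, not the Blueprints review process itself.

    # Simplified tiering logic (illustrative; not the actual
    # Blueprints methodology).
    CRITERIA = ("strong_deterrent_effect",
                "sustained_effect",
                "multi_site_replication")

    def classify(program):
        met = sum(1 for c in CRITERIA if program.get(c, False))
        if met == len(CRITERIA):
            return "model program"
        if met == len(CRITERIA) - 1:
            return "promising practice"
        return "insufficient evidence"

    # A program with strong, sustained effects but no replication
    # studies yet lands in the "promising practice" tier.
    example = {"strong_deterrent_effect": True,
               "sustained_effect": True,
               "multi_site_replication": False}

    print(classify(example))   # -> promising practice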

During the 1990s, the science of "meta-analysis" was also developing as a statistical method for analyzing trends in evaluation data across multiple studies. This allowed the initiation of systematic reviews of evidence that look across the results of various studies in a statistically meaningful way. In the biomedical world, the Cochrane Collaboration (4) began subjecting medical studies to meta-analysis to determine whether practice-based procedures were truly effective when examined broadly. On the criminal justice side, the Campbell Collaboration (5) evolved to subject a range of justice- and corrections-related programs to the same test. The Centers for Disease Control and Prevention also developed the Community Guide (6) process to test public health interventions, including violence prevention and even juvenile sentencing procedures. To date, only a relatively small number of corrections-related practices have been subject to the rigorous analyses of the Campbell Collaboration, while the Blueprints process has examined nearly 600 programs.
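
At its core, a meta-analysis pools effect sizes from several studies, weighting each study by its precision so that larger, tighter studies count for more. The sketch below shows a minimal fixed-effect, inverse-variance pooling with hypothetical study data; real systematic reviews of the Cochrane and Campbell variety involve far more, from study screening to tests of heterogeneity.

    import math

    # Hypothetical effect sizes (negative = reduced recidivism) and
    # their variances from three studies of the same program.
    studies = [
        {"effect": -0.20, "variance": 0.010},
        {"effect": -0.35, "variance": 0.025},
        {"effect":  0.05, "variance": 0.015},   # no apparent effect
    ]

    # Weight each study by the inverse of its variance and pool.
    weights = [1.0 / s["variance"] for s in studies]
    pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))

    print("Pooled effect: %.3f (95%% CI %.3f to %.3f)"
          % (pooled, pooled - 1.96 * se, pooled + 1.96 * se))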

In the biomedical field, some cherished practices of physicians have not withstood the scrutiny of the Cochrane Collaboration or the Community Guide process. As one might expect, this does not always sit well with a practice community that has faith in how it does things. There are disagreements between those "in the trenches" watching individual results and those in the "ivory tower" who crunch the numbers about whose methods are most appropriate. Even as the research community moves toward consensus about what it takes to move something into the evidence-based, best- or world's-best-practice categories, researchers see that the practice community does not always agree with the assessments.

However, there are pressures to adopt the systematic review methodology for public policy decision-making and funding. This column has provided a cursory overview of current thinking on establishing a practice as evidence-based, and readers are encouraged to explore the various schemes further at the Web sites listed in the endnotes below. For practical purposes, the message is to be aware of what a particular funding organization or legislator means by "evidence-based." Language, even scientific language, is not always as specific as it should be.

This is an era of strong competition for scarce public resources for corrections-related programs. Knowing what it takes to move an idea, no matter how well intended, from promising practice to model program is a good starting framework for innovation. Being evidence- or science-based is the new watchword for correctional programming. Just because something works in Rhode Island does not mean it will work in California. The days of "knowing what I'm doing" are gone; it takes evidence, not good intentions. Replication, sustained impact across multiple sites and repeated evaluations are the new requirements. It is important to share innovative practices with friends and colleagues, as this will help with replication studies.

ENDNOTES

(1) Rogers, E.M. 2003. Diffusion of innovations, fifth edition. New York: Simon & Schuster.

(2) Smith, L.G. and R.H. Potter. 2006. Communicating evaluation findings from offender programs. Corrections Today, 68(7): 98-99, 101.

(3) Information on the Center for the Study and Prevention of Violence's Blueprints for Violence Prevention is available at www.colorado.edu/cspv/blueprints. Accessed on Feb. 6, 2007.

(4) Information on the Cochrane Collaboration is available at www.cochrane.org. Accessed on Feb. 6, 2007.

(5) Information on the Campbell Collaboration is available at www.campbellcollaboration.org. Accessed on Feb. 6, 2007.

(6) The CDC/National Center for Health Marketing's Guide to Community Preventive Services is available at www.thecommunityguide.org. Accessed on Feb. 6, 2007.

R. Hugh Potter is a senior health scientist at the Centers for Disease Control and Prevention, U.S. Department of Health and Human Services.
