

The intent of this article is to provide a broad overview of the current status of implementation science in relation to the successful conduct of character education programs. This article reviews some of the major empirical findings and then discusses some of the critical issues involved in translating important principles identified in implementation science into effective practice. As such, this article is primarily intended for researchers and consultants who may work with various organizations interested in conducting character education initiatives. The presentation is divided into several major sections: a brief history of implementation, a definition of implementation, an explanation of its major elements and why it is so important, a discussion of some of the multiple factors that can enhance or impede effective implementation, and a framework that illustrates the multiple steps that should be followed to increase the chances that a program will be put to a fair test in a new setting. Findings from different types of character development interventions are presented to illustrate various points, and several critical issues relevant to achieving a successful collaborative relationship between researchers/consultants and program providers are emphasized.


Implementation practices were first used over a hundred years ago in the U.S. when Cooperative Extension Service agents began working with farmers to help them apply scientifically based research practices to their own land (Rogers, 2003). Extension agents were using what was learned from agricultural research conducted at various land grant universities to help farmers increase their plant yields, protect the health of their soil and fields, and deal successfully with potentially harmful pests. Except for a few scattered reports, however, it was not until the early 1970s that systematic research attention was devoted to implementation in other fields. Now, implementation science has become an established research area and is applicable to many endeavors in the fields of health and medicine, education, the social sciences, public policy, and community development. Moreover, implementation applies to all types of services and interventions offered in these fields to populations of all ages.

Early work in agriculture confirmed that, if farmers would effectively implement science-based practices, their farms would become more productive and their farming practices more sustainable. The same holds true today in the fields mentioned above: if local organizations implement new evidence-based programs or interventions effectively, their programs and services will improve, and their new procedures will be more sustainable. However, as the ensuing discussion makes clear, effectively implementing character education is not as straightforward as following quantitative scientific findings regarding such things as soil and seed quality and the appropriate levels of insecticide application. Implementation science has uncovered a complex set of empirical findings that interact with many personal, situational, and administrative factors that must be successfully negotiated along the way toward effective implementation. Furthermore, it appears that collaborative working relationships between researchers/consultants and practitioners are essential for the successful planning and execution of character education programs.


Implementation can be defined as "efforts designed to get evidence-based programs or practices of known dimensions into use via effective change strategies" (Damschroder & Hagedorn, 2011, p. 195). So, we are talking about introducing an evidence-based program (one that has been carefully evaluated and found to be successful in achieving certain goals) into a new setting. Implementation refers to the ways this new program is eventually put into practice and delivered to participants. In other words, implementation has to do with what a program looks like in reality rather than what a program is conceived to be in theory or on the drawing board. This distinction is important because there can often be a dramatic difference between the theory or conceptualization of a program and what happens when it is applied in a given situation. There can be many reasons for this "disconnect" between theory and practice, as suggested by the issues discussed later in this article. Implementation research confirms Robert Burns' famous line that the best-laid plans of mice and men often go awry. Another important aspect of the above definition of implementation is that it requires "effective change strategies." Good implementation does not occur spontaneously or naturally; it requires careful planning, execution, and sustained attention.


Extensive research has confirmed that the level of implementation achieved has a strong influence on program outcomes. Many reports on the outcomes of character development programs illustrate that as the level of program implementation improves, so do outcomes, sometimes to a substantial degree (Durlak & DuPre, 2008; Humphrey, 2013; Lendrum & Humphrey, 2012). Simply put, if one implements a well-designed program with fidelity to its theory of change and logic model, then the chances of successful program outcomes will be increased.

For example, one large-scale meta-analysis involving over 200 interventions compared youth who had participated in better-implemented school-based social and emotional learning programs with those involved in less well-implemented programs (Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011). The former group demonstrated academic gains that were twice as large as those of students in the latter group; they also showed reductions in conduct problems that were nearly twice as large, and reductions in emotional distress (i.e., depression and anxiety) that were more than twice as large. In this case, the students participating in less well-implemented programs did receive some benefit, but other research has indicated that poorly implemented programs might not yield any significant benefits for participating youth (Battistich, Schaps, Watson, Solomon, & Lewis, 2000). In their meta-analysis of school-based conflict resolution programs, Garrard and Lipsey (2007) found that the mean effects for interventions that were successfully implemented were three times the magnitude of those for interventions in which major implementation problems occurred (0.39 vs. 0.13, respectively). In sum, with few exceptions, better-implemented programs produce more benefits for their participants, whereas poorly implemented programs tend to yield only modest or no significant benefits.

Therefore, it is extremely costly to ignore implementation. Not only will participating youth be short-changed, but all the time, effort, and resources devoted to the program will be poorly spent. This problematic situation might lead the leaders of the organization or the front-line providers of the program to conclude erroneously that the program is not worthwhile, when, had the program been better implemented, it could have produced much better results.

Research has indicated several complex issues that arise with respect to implementation. Three of the most prominent involve: (a) the multiple components comprising implementation, (b) the many ecological factors that affect implementation, and (c) the major steps that need to be accomplished to achieve effective implementation. These issues are now briefly discussed.


Research has identified eight major components of implementation (Durlak & DuPre, 2008). They are briefly defined in Table 1 and consist of fidelity, dosage, quality of delivery, adaptation, participant responsiveness or engagement, program differentiation, monitoring of control conditions, and, finally, program reach. Program differentiation refers to the extent to which a new innovation departs from existing practices. This component is also related to the careful monitoring of control conditions to assure that a new program is substantially different from what youth in a control condition may be receiving. For example, some research on schoolwide character education programs has found that the failure to find significant effects for the intervention was likely due to the fact that several activities occurring in the control schools were closely related to the newly introduced program (Cook et al., 1999). In other words, there was not enough of a difference between the experimental and control schools to lead to positive findings for the new program.

Each of the eight implementation components exists on a continuum (e.g., think of a range between 0 and 100%). Dosage (how much of the program is implemented) can be complete (100%, indicating the total program was conducted) or it can fall below this level (as it often does). In some settings, planned programs were never conducted at all due to unforeseen administrative, financial, or personnel issues (Elias, 2010).

Furthermore, the eight components interact with each other. For example, reducing dosage will affect fidelity and the other components. We are still learning about which components might be more important in different situations, so it is good practice to monitor how well the different components are being conducted. For example, both fidelity and dosage may be high, but if quality of delivery and participant responsiveness are low (i.e., how well the various program features are delivered, and to what extent youth are interested in and engaged with the program) program outcomes are at risk.

Adaptation refers to any changes made to the planned program. At first, adaptation was considered a negative influence because it affected a program's fidelity (the degree to which the active ingredients of an intervention are effectively delivered, that is, those elements that are crucial to producing intended effects). Research has now clarified two important principles regarding adaptation: (a) most programs experience some degree of intentional or spontaneous adaptation when they are delivered in new settings, and (b) some adaptations can positively affect program outcomes (Durlak & DuPre, 2008; Ringwalt et al., 2003).

The important considerations are what types of adaptation occur, if they are intentional, agreed upon, and part of the systematic implementation process (see below), and whether the adaptations undermine, complement, or are unrelated to a program's active ingredients. For example, adaptations may involve changing some of the program's language and activities or exercises, or when and how often it is delivered. If these adaptations are successfully negotiated as part of the coordinated implementation plan, they can result in a program that is better suited to a particular locale and population, which should increase its chances of success. In sum, while some programs can be implemented with no or negligible adaptations depending on their program fit, in many cases, the goal is to strike a careful and effective balance between fidelity and adaptation. This is probably one of the most important aspects of a successful implementation process (Bopp, Saunders & Lattimore, 2013; Ghate, 2016).


So far, research and practice have identified over 20 factors that affect program implementation, many of which are briefly described in Table 2. None of these factors is an all-or-none variable; each can be thought of as existing in degrees (e.g., low, medium, or high). These factors also range across several ecological levels. They can relate to broad conditions (such as the level of political or administrative pressure, or available funding); to characteristics of the program (e.g., its complexity); to the host organization (e.g., its work climate, leadership, and decision-making and communication practices); to the front-line program providers (e.g., their various attitudes and beliefs); and, finally, to the quality of the professional development services that are offered. These ecological factors are best integrated into the discussion of the major steps in the implementation process.


However, before beginning this discussion it is important to stress the limitations in implementation science, some of which have been suggested above. For example, we do not know which of the eight components of implementation are the most important in each situation, or how these components interact to yield the best outcomes. The situation is similar for the over 20 ecological factors that may impede or promote implementation. Finally, researchers have yet to identify exactly what level of implementation is needed to achieve each program's intended goals. It is clear that implementation does not have to be perfect (indeed, it almost never is) but how good it has to be to produce maximum benefits has yet to be discovered for different programs and populations. There is likely to be an implementation threshold that varies depending on the particular program. That is, achieving a certain level of implementation probably leads to desired gains for participants; going beyond this level is unlikely to lead to substantially more benefit.

Because we still have much to learn, the best general strategy for implementation seems to rest upon developing a true collaborative relationship between researchers or consultants and the organizations and staff willing to try out new programs (Fixsen et al., 2005; Ghate, 2016). This collaborative model recognizes the contributions that all stakeholders can make to the important decisions regarding the planning, execution, evaluation, and, hopefully, continual improvement of interventions as needed. As a result, researchers must be careful about insisting on "doing it their way only," for fear of alienating practitioners, jeopardizing a potentially successful program adaptation, or presuming solid knowledge when some humility is more appropriate.


Before discussing the steps that research has indicated are important in implementation, it is important to emphasize the differing orientations and values often held by researchers and practitioners. Chen (2010) has stressed that whereas researchers typically prize internal and external validity, practitioners are much more likely to concentrate on what Chen calls viable validity, that is, practitioners' (or other stakeholders') "views and experience regarding whether an intervention program is practical, affordable, suitable, evaluable, and helpful in the real-world" (p. 207). Chen stresses that convincing practitioners that a proposed intervention has viable validity is essential to its eventual success. Therefore, successful implementation is often determined by how well the different perspectives and values of researchers and practitioners are reconciled through collaborative planning and decision-making.

A synthesis of the literature indicated a consensus that implementing a program effectively entails at least 14 different steps, which have been organized into what has been called the Quality Implementation Framework (Meyers, Durlak, & Wandersman, 2012). These steps are summarized in Table 3, and at each step decisions and actions need to occur. The steps are generally sequential, although implementation is a dynamic process: some steps can be skipped in some settings because they are already resolved or satisfied, whereas others need to be revisited depending on local circumstances. For example, providers' interest or enthusiasm for a program may wane over time, so it is best to revisit this issue when needed. In general, however, the Quality Implementation Framework implies that effective implementation should be approached systematically as a temporal series of linked steps and actions that should be effectively addressed to enhance the likelihood of quality implementation. In other words, implementation requires careful planning, monitoring, and coordination. Current thinking is that the failure to achieve effective implementation can, in many cases, be linked to not successfully accomplishing these major steps.

The temporal process of implementation can be divided into four major phases that involve (1) assessing program fit, (2) creating a plan and structure, (3) ongoing monitoring and evaluation, and (4) reflection and decisions about future applications.

The importance of considering the entire process of implementation at the outset cannot be overemphasized. In fact, 10 of the 14 steps involved in implementation should be accomplished before the program begins! This effort matters because a failure to accomplish each step successfully threatens the level of implementation that is eventually attained, which, in turn, affects program outcomes. Several of these steps relate to viable validity, so it is important for researchers or consultants to understand and address practitioners' concerns or issues throughout the process of implementation. A few of the steps are discussed below.

Importance of Program Fit. In general, the first phase of implementation usually consists of eight steps that involve determining how well the program will fit into the new local setting. The major issues to be answered in these eight steps include assessing how well or to what extent: (a) the program fulfills organizational needs, (b) there is genuine staff buy-in for the program, (c) staff hold realistic expectations for what the program can achieve, (d) the organization has the requisite readiness and capacity (i.e., resources, staff, time, space, funding, and skills) to deliver the program, and (e) professional development services can be obtained to prepare staff for program delivery. For example, is the program a good match for what the organization needs, can it be integrated into the organization's operational procedures, is it practical and affordable, and how can practitioners do what is required in order to see if the program really helps?

Better program fit typically leads to better outcomes. A good fit can be achieved by changing some aspects of the program (adaptation), by changing the organization that will deliver the program, or, sometimes, by doing a little of both. In some cases, decisions might be made to cancel or postpone implementation if it seems that the organization does not possess sufficient readiness or capacity, or if there are major disagreements about going ahead with the program or about its possible worth to the organization (that is, practitioners are not convinced of the program's viable validity). Another concern is administrator pressure to go ahead without sufficient staff buy-in, which can not only undermine effective implementation but also jeopardize the prospects of sustaining a useful intervention over time (Wanless & Domitrovich, 2015).

Professional Development Services Are Essential. Professional development services refer to pre-program training of front-line providers (and sometimes administrators), and to continuing technical assistance (coaching or consultation) after the program begins and until its trial period ends. Research indicates that both are necessary for effective implementation because pre-program training and preparation cannot always anticipate all the practical and personal problems that inevitably appear when something new is attempted, particularly if that something new is complicated and requires the mastery and application of new skills (Durlak & DuPre, 2008; Fixsen et al., 2005).

Therefore, there are some evidence-based programs that should not be implemented because of the lack of available professional development services. On the positive side, however, a small but growing number of professional groups are willing and able to offer professional development services for organizations wishing to offer new programs (e.g., CASEL, the Centers for Disease Control and Prevention, the Sociometrics Corporation, and the Yale Center for Emotional Intelligence; see also Wandersman, Imm, Chinman, & Kaftarian, 2000). To increase the number of personnel available to assist organizations, several agencies and universities have offered courses or training workshops in implementation (Chambers, Proctor, Brownson, & Straus, 2017). Some groups involved in character education have established online resources to support their training and consultation services (Stern, Harding, Holzer, & Elbertson, 2015). Technology may be one way to overcome some of the barriers limiting the application of professional development services. Professional development services require money and time, but they are indispensable for effective program implementation.

The Necessity of Effective Leadership. Another factor that cannot be overemphasized is that effective implementation cannot be achieved without continual, strong leadership at the organizational level. The exact nature of a leader's activities, and who should serve in this role (sometimes more than one person), varies depending on the circumstances. The collective wisdom to date, drawn from both research and practice, suggests that the major tasks of a leader involve: (a) supporting all those involved in program implementation, (b) rejuvenating lagging motivation and commitment when necessary, (c) recognizing jobs well done, (d) solving practical problems either on one's own or by delegating to others, and (e) being willing to "run interference" so that untoward political or administrative pressures and directives do not derail program efforts (Damschroder & Hagedorn, 2011; Fixsen et al., 2005). An effective leader is able to rise to the occasion so that what needs to get done is done well and without undue delay. Staff can often tell whether a leader is truly behind a new program and on their side.


It is a mistake to believe that the responsibility for effective program implementation lies solely in the hands of front-line providers. Implementation is everybody's business, and multiple stakeholders need to work collaboratively to attain mutual goals. Sadly, one reason that some attempts at implementation are unsuccessful is that the multiple stakeholders do not have a positive history of close and effective collaboration. Each stakeholder group has one or more key roles to play. For example, funders and policy makers must realize that time and resources are needed to achieve good program implementation; some complex programs may need three to four years before a desirable level of implementation is attained. Administrators usually have to make some operational and organizational changes to accommodate program implementation (e.g., changing staff routines and tasks, freeing up staff time, and possibly providing incentives for taking on new jobs). Trainers and consultants must be available to offer successful professional development services so that staff members are adequately prepared for their new roles. Front-line providers must be willing and committed to devote the needed time and energy to new programs. Even the potential beneficiaries of the program (youth and their families) may have a role to play in offering input regarding what is most needed and which specific activities and procedures would be best received and most useful. Indeed, giving youth meaningful roles in the planning and delivery of new programs is one way to foster positive youth development. Before offering concluding comments, it is important to address a frequently neglected issue.

What About Existing Programs?

Earlier, it was mentioned that the field of implementation science has focused on the conduct of evidence-based programs that are brought into new settings. Yet there are many existing programs serving young people that could benefit from an implementation perspective. For the sake of discussion, let's simply call these "practice programs," referring to offerings that attend to various aspects of character development but have developed primarily independently of research findings. These practice programs deserve recognition and assistance. For example, there are a host of community-based and out-of-school-time programs that serve hundreds of thousands of youth internationally (e.g., Boys & Girls Clubs, 4-H, and the Boy Scouts and Girl Scouts). Evaluations have indicated the value of these practice programs (e.g., DuBois, Portillo, Rhodes, Silverthorn, & Valentine, 2011; Lerner et al., 2005; National Afterschool Association & American Institutes for Research, 2016). However, these organizations have chapters and affiliates that have probably made some adaptations based on the different cultural and social contexts in which they operate. It is logical to believe that local chapters could benefit from feedback regarding their implementation of organizational practices and how they might improve their services, but the central offices of each organization rarely have the resources to help all their affiliates, and many other practice programs are not a formal part of any larger organization. Once again, finances and the lack of sufficient numbers of available consultants limit the ability of many practice programs to receive objective advice and feedback on implementation and continual program improvement.

Studying the implementation of existing practice programs has another benefit: researchers and consultants can learn from practitioners about the latter's successful efforts. Practitioners have developed good ideas and good programs largely on their own, and it is important that we identify when this has occurred and learn from them.


This article has attempted a broad overview of the issues and factors involved in effective program implementation. If we are to learn about the circumstances in which character development programs are successful, then we must put them to a fair and adequate test, and this requires, first, that the program be effectively implemented. Yet the complexity of implementation should be recognized. There are multiple components of implementation, many ecological factors that can make efforts more or less successful, and several steps that must be negotiated for a program to be effectively conducted in each cultural and social context. Although much more understanding is needed regarding these complex issues, it is clear that effective implementation is a fundamental aspect of the delivery of new and well-established programs that increases the chances of achieving success. Hopefully, multiple stakeholders can learn to collaborate successfully in order to advance our understanding of implementation, which, in the long run, will improve services for youth.


Battistich, V., Schaps, E., Watson, M., Solomon, D., & Lewis, C. (2000). Effects of the child development project on students' drug use and other problem behaviors. The Journal of Primary Prevention, 21, 75-99.

Bopp, M., Saunders, R. P., & Lattimore, D. (2013). The tug-of-war: Fidelity versus adaptation throughout the health promotion program life cycle. Journal of Primary Prevention, 34, 193-207. doi:10.1007/s10935-013-0299-y

Chambers, D. A., Proctor, E. K., Brownson, R. C., & Straus, S. E. (2017). Mapping training needs for dissemination and implementation research: Lessons from a synthesis of existing D&I research training programs. Translational Behavioral Medicine, 7(3), 593-601. doi:10.1007/s13142-016-0399-3

Chen, H. T. (2010). The bottom-up approach to integrative validity: A new perspective for program evaluation. Evaluation and Program Planning, 33, 205-214. doi:10.1016/j.evalprogplan.2009.10.002

Cook, T. C., Habib, F., Phillips, M., Settersten, R. A., Shagle, S. C., & Degirmencioglu, S. D. (1999). Comer's school development program in Prince George's County, Maryland: A theory-based evaluation. American Educational Research Journal, 36, 543-597.

Damschroder, L. J., & Hagedorn, H. J. (2011). A guiding framework and approach for implementation research in substance use disorders treatment. Psychology of Addictive Behaviors, 25, 194-205. doi:10.1037/a0022284

Domitrovich, C. E., Bradshaw, C. P., Poduska, J. M., Hoagwood, K., Buckley, J. A., Olin, S.,... Ialongo, N. S. (2008). Maximizing the implementation quality of evidence-based preventive interventions in schools: A conceptual framework. Advances in School Mental Health Promotion, 1, 6-28.

DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 57-91. doi:10.1177/1529100611414806

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350.

Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students' social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82, 405-433. doi:10.1111/j.1467-8624.2010.01564.x

Elias, M. (2010). Sustainability of social-emotional learning and related programs: Lessons from a field study. International Journal of Emotional Education, 2, 17-33.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.

Garrard, W. M., & Lipsey, M. W. (2007). Conflict resolution education and antisocial behavior in U.S. schools: A meta-analysis. Conflict Resolution Quarterly, 25, 9-38. doi:10.1002/crq

Ghate, D. (2016). From programs to systems: Deploying implementation science and practice for sustained real world effectiveness in services for children and families. Journal of Clinical Child and Adolescent Psychology, 45, 812-826. doi:10.1080/15374416.2015.1077449

Humphrey, N. (2013). Social and emotional learning: A critical appraisal. Thousand Oaks, CA: SAGE.

Lendrum, A., & Humphrey, N. (2012). The importance of studying the implementation of interventions in school settings. Oxford Review of Education, 38, 635-652.

Lerner, R. M., Lerner, J. V., Almerigi, J. B., Theokas, C., Phelps, E., Gestsdottir, S.,... & van Eye, A. (2005). Positive youth development, participation in community youth development programs, and community contributions of fifth-grade adolescents: Findings from the first wave of the 4-H study of positive youth development. Journal of Early Adolescence, 25(1), 17-71. doi:10.1177/0272431604272461

Meyers, D. C., Durlak, J. A., & Wandersman, A. (2012). The Quality Implementation Framework: A synthesis of critical steps in the implementation process. American Journal of Community Psychology, 50, 462-480. doi:10.1007/s10464-012-9522-x

National Afterschool Association & American Institutes for Research. (2016). Building on a strong foundation. McLean, VA: National After-School Association.

Ringwalt, C. L., Ennett, S., Johnson, R., Rohrbach, L. A., Simons-Rudolph, A., Vincus, A.,... Thorne, J. (2003). Factors associated with fidelity to substance use prevention curriculum guides in the nation's middle schools. Health Education & Behavior, 30, 375-391.

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: Free Press.

Stern, R. S., Harding, T. B., Holzer, A. A., & Elbertson, N. A. (2015). Current and potential uses of technology to enhance SEL: What's new and what's next? In J. A. Durlak, C. E. Domitrovich, R. P. Weissberg, & T. P. Gullotta (Eds.), Handbook of social and emotional learning (pp. 516-531). New York, NY: Guilford.

Wandersman, A., Imm, P., Chinman, M., & Kaftarian, S. (2000). Getting to outcomes: A results-based approach to accountability. Evaluation and Program Planning, 23, 389-395.

Wanless, S. B., & Domitrovich, C. E. (Eds.). (2015). Readiness to implement social-emotional learning interventions [Special issue]. Prevention Science, 16(8).

Joseph A. Durlak

Loyola University Chicago

* Correspondence concerning this article should be addressed to: Joseph A. Durlak,

Table 1
Definitions of the Eight Major Components of Program Implementation

* Fidelity: to what degree have the major components of the program
been delivered?
* Dosage: how much of the program is delivered?
* Quality of delivery: how well or competently is the program conducted?
* Adaptation: what changes, if any, are made to the original program?
* Participant responsiveness or engagement: to what degree does the
program attract participants' attention and actively involve them in
the intervention?
* Program differentiation: in what ways is the program unique compared
to other interventions?
* Monitoring of control conditions: in what ways might the control
condition mirror or overlap with critical parts of the new program?
* Program reach: how much of the eligible population participated in
the intervention?

Source: Drawn from Durlak and DuPre (2008).

Table 2
Examples of Factors That Can Affect Program Implementation

I.    At the Broad Community Level
      A. Scientific theory and research
      B. Political or administrative pressures
      C. Availability of funding
II.   Characteristics of Those Delivering the Program
      A. Perceived need and relevance of the program
      B. Perceived benefits of innovation
      C. Self-efficacy and confidence in executing the program
      D. Possession of sufficient skills necessary for implementation
III.  Characteristics of the Program Being Conducted
      A. How compatible is it with the organization's mission,
      priorities, and values
      B. Adaptability: what modifications are possible to fit local
      needs and preferences
IV.   Factors Relevant to the Organization: Organizational Capacity
      A. General Organizational Factors
       1. Positive work climate
       2. Openness to change
       3. Shared vision, consensus, and staff buy-in
      B. Specific Practices and Processes
       1. Shared decision-making and effective collaboration
       2. Coordination and partnership with other agencies as needed
       3. Frequent and open communication
       4. Procedures conducive to strategic planning
      C. Specific Staffing Considerations
       1. Effective leadership
       2. Program champions who can maintain support and problem-solve
       arising difficulties
       3. Effective management and supervision
V.    Factors Related to Professional Development Services
       1. Successful training of implementers
       2. Ongoing technical assistance to maintain staff motivation
       and skills

Note: Discussion of these factors is available in Damschroder and
Hagedorn (2011), Domitrovich et al. (2008), Durlak and DuPre (2008),
and Fixsen, Naoom, Blase, Friedman, and Wallace (2005).

Table 3
Major Steps in the Process of Implementation

Phase One: Initial Considerations Regarding the Host Setting
* Self-Assessment Strategies
  1. Conducting a needs and resources assessment
  2. Conducting a fit assessment
  3. Conducting a capacity/readiness assessment
* Decisions about adaptation
  4. Possibility for adaptation
* Capacity-Building Strategies
  5. Obtaining explicit buy-in from critical stakeholders and
  fostering a supportive community/organizational climate
  6. Building general/organizational capacity
  7. Staff recruitment/maintenance
  8. Effective preinnovation staff training
Phase Two: Creating a Structure for Implementation
* Structural Features for Implementation
  9. Creating implementation teams
  10. Developing an implementation plan
Phase Three: Ongoing Structure Once Implementation Begins
* Ongoing Implementation Support Strategies
  11. Technical assistance/coaching/supervision
  12. Process evaluation
  13. Supportive feedback mechanism
Phase Four: Improving Future Applications
 14. Learning from experience
COPYRIGHT 2017 Information Age Publishing, Inc.

Article Details
Author: Durlak, Joseph A.
Publication: Journal of Character Education
Article Type: Report
Geographic Code: 1USA
Date: Jul 1, 2017
