
Measuring Constructs of the Consolidated Framework for Implementation Research in the Context of Increasing Colorectal Cancer Screening in Federally Qualified Health Centers.

Narrowing the gap between research and practice in health care is a national priority that has fueled development of a large number of theories and models to explain variability in implementation outcomes (Woolf 2008; Brownson et al. 2010; Tabak et al. 2012). One in particular, the Consolidated Framework for Implementation Research (CFIR), attempts to synthesize constructs across existing theories into a typology that can advance our understanding of implementation across a range of settings and types of interventions (Damschroder et al. 2009). The CFIR identifies 39 constructs and subconstructs within five major domains: Intervention Characteristics (e.g., relative advantage), Outer Setting (e.g., external policies and incentives), Inner Setting (e.g., implementation climate, readiness for implementation), Characteristics of Individuals (e.g., beliefs about the intervention), and the Process of Implementation (e.g., reflecting and evaluating). Its comprehensiveness, while likely capturing the complexity of the implementation process, also makes it challenging to use. Indeed, the authors encourage researchers to assess each construct for salience in any given study, acknowledging that attempts to use the full model would "quickly mire evaluations" given the large number of constructs (Damschroder et al. 2009). While parsimony has appeal from a methodologic perspective, reality is complex, and studies that explore relationships between CFIR constructs can lead to a better understanding of the framework and better guidance for its application.

Much of the prior research using CFIR has been qualitative or mixed methods (Robins et al. 2013; Forman et al. 2014; Gould et al. 2014; Kalkan et al. 2014; Luck et al. 2014; Ramsey et al. 2014; Kirk et al. 2016; Liang et al. 2016). Relatively few researchers have used CFIR quantitatively, particularly in a comprehensive way that identifies factors within each of the five domains (Acosta et al. 2013; Kirk et al. 2016). More commonly, studies focus on a single CFIR domain (Midboe et al. 2011; Ditty et al. 2014; Kirk et al. 2016). An important next step in this line of research is to examine the influence of CFIR domains and constructs in a more comprehensive fashion. Moving the field in this direction requires development of a set of parsimonious, yet reliable and valid measures.

The Cancer Prevention and Control Research Network (CPCRN) is a national network of academic, public health, and community partners who work together to reduce the burden of cancer through dissemination and implementation research (Fernandez et al. 2014; Ribisl et al. 2017). Members have expertise in cancer prevention and control, behavioral and social sciences in public health, and implementation science. The CPCRN developed a federally qualified health center (FQHC) work group to describe and identify factors that influenced implementation of evidence-based approaches (EBAs) to promoting colorectal cancer (CRC) screening in FQHCs. FQHCs serve as the safety net of primary care for vulnerable populations in the United States (Health Resources and Services Administration, 2014). Measuring theory-informed factors that may influence implementation of evidence-based approaches to CRC screening, such as client reminders and provider assessment and feedback, has the potential to inform efforts to increase CRC screening rates among the high-risk populations served by FQHCs (Sabatino et al. 2012).

A parsimonious and psychometrically sound set of measures informed by CFIR could be useful during three phases of the Implementation Process: (1) in the formative or planning phase such measures could pinpoint aspects of the implementation context that could be strengthened to increase the likelihood of successful implementation, (2) during initial implementation such measures could identify barriers and challenges that could be addressed to improve implementation quality, and (3) such measures could be used during a maintenance phase to ensure continued quality implementation and improvement. Additionally, such measures could inform future change efforts by helping to explain why implementation went well or did not. The purpose of this study was to describe how we operationalized selected constructs from CFIR domains and to present the psychometric properties of these measures within the context of implementing evidence-based approaches for promoting CRC screening in FQHCs. The research reported here was conducted by the CPCRN FQHC work group and is one of the first studies to quantitatively operationalize selected CFIR constructs from all five domains as an initial step in identifying factors that influence implementation of evidence-based interventions.


Measure Development

Our first step in developing measures was to select priority CFIR constructs. We employed three principles to guide the selection process: (1) operationalize at least one construct from each of the five domains of CFIR, (2) keep the measures brief enough to be used in busy practice settings, and (3) consider constructs that assessed modifiable factors that were of relevance to FQHCs. We selected 16 of 39 CFIR constructs and subconstructs (see Table 1) through a series of in-person and telephone discussions among work group members, all with expertise in implementation science and/or cancer prevention and control.

Our second step was to select and adapt measures. Each participating CPCRN center took the lead in identifying candidate measures for a domain and/or multiple constructs and then we reviewed them collectively for face validity (i.e., subjective assessment of whether items appear to measure the concept they purport to measure), discussing whether the candidate items seemed to align with the constructs as defined in the seminal CFIR paper (Damschroder et al. 2009). We drew heavily from a survey that included the Practice Adaptive Reserve (PAR) Scale (Jaen et al. 2010; Nutting et al. 2010) that had been fielded to better understand organizational capacity to implement an evidence-based intervention for cancer screening among Asian/Pacific Islanders (Sohng et al. 2013). First, we identified items with face validity that matched CFIR construct definitions. For constructs that did not have well-matched items, we conducted literature searches to identify relevant measures by going through the reference list of the seminal CFIR publication and searching electronic databases for peer-reviewed articles published after 2001. We then achieved consensus through extensive discussions within the work group to select items for each construct that fit the CFIR definitions (i.e., face validity), had been used in health-related settings and were relevant to FQHC practice, and had evidence of reliability and validity. We also asked leaders from individual FQHCs and state Primary Care Associations to review measures for face validity, importance, and changeability within the FQHC context.

In describing how to use CFIR, Damschroder and colleagues (Damschroder et al. 2009) discuss the level at which each construct should be measured. In our study, this meant considering whether constructs should be assessed at the individual or the clinic level. We addressed this, in part, by developing two web-based survey instruments. The first was developed for all levels of clinic staff [Staff Survey], and the second focused on clinic-level characteristics [Clinic Characteristics Survey]. Additional complexity arose from our interest in understanding factors influencing implementation of multiple evidence-based strategies recommended by the Guide to Community Preventive Services (Sabatino et al. 2012). Because a number of the CFIR constructs are specific to the particular intervention being implemented, when a respondent indicated that multiple evidence-based approaches were being implemented at their site, we populated the intervention-specific items using an algorithm that prioritized provider prompts first, followed by client reminders. Table 1 indicates whether a particular item was general or specific to an evidence-based strategy.
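The prioritization rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' survey logic, and the function and category names are hypothetical:

```python
# Hypothetical sketch of the item-population rule: when a respondent
# reports multiple evidence-based approaches (EBAs), intervention-specific
# survey items reference provider prompts first, then client reminders.
PRIORITY = ["provider prompts", "client reminders"]

def select_reference_eba(reported_ebas):
    """Return the EBA that intervention-specific items should reference."""
    for eba in PRIORITY:
        if eba in reported_ebas:
            return eba
    # no prioritized EBA reported: fall back to the first one listed
    return reported_ebas[0]

print(select_reference_eba(["client reminders", "provider prompts"]))
# -> provider prompts
```

In practice this branching would live in the survey software's display logic, but the ordering of `PRIORITY` captures the substance of the algorithm.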

Description of the Measures

The final CFIR measures consisted of 69 items operationalizing 16 constructs and subconstructs of CFIR (see Table 1). Table 1 defines our selected constructs by domain, lists the items, indicates whether they were tailored to a specific intervention, and identifies the original source for each measure. Each is described briefly below.

The first CFIR domain is Intervention Characteristics. Interventions typically need to be adapted to fit within the context of a specific organization (Damschroder et al. 2009). CFIR identifies eight constructs within this domain, and we assessed two of these: relative advantage, using one item from Scott et al. (2008), and complexity, using four items from Pankratz, Hallfors, and Cho (2002). In CFIR, compatibility is listed under implementation climate. We chose to include compatibility as part of Intervention Characteristics given its history as a dimension of the intervention in the classical conceptualization in Rogers's Diffusion of Innovations (Rogers 2003). Our measure used two items from a longer scale by Pankratz and colleagues that combined compatibility with relative advantage (Pankratz, Hallfors, and Cho 2002).

The second CFIR domain is Outer Setting. The Outer Setting can both facilitate implementation and create challenges (Damschroder et al. 2009). We assessed external policies and incentives through five items adapted from an index developed by Simon, Rundall, and Shortell (2007). We split the index into two measures, one for reporting requirements and one for recognition. We assessed patient needs and resources through five items adapted from McMenamin et al. (2010). We did not assess cosmopolitanism or peer pressure.

The third and most complex CFIR domain is Inner Setting, comprising five constructs and nine subconstructs. We used 38 items from the Practice Adaptive Reserve Scale (Jaen et al. 2010; Nutting et al. 2010), supplemented with items from Helfrich et al. (2009), Lehman, Greener, and Simpson (2002), and Weiner et al. (2011), to create an Inner Setting construct that covered available resources, culture including stress and effort, implementation climate, learning climate, and readiness for implementation (Fernandez et al. 2018). For the current analyses, we constructed a second-order factor, which we labeled Inner Setting and use here for parsimony, given our objective of including measures from each of the five CFIR domains and to aid in the examination of construct validity.

The fourth domain is Characteristics of Individuals, with five constructs. We assessed knowledge and beliefs about the intervention, operationalized as appeal of the intervention. We also assessed openness toward new interventions as an other personal attribute. We did not assess self-efficacy, stage of change, or identification with the organization. We used three items adapted from Aarons's (2004, 2005) Evidence-Based Practice Attitude Scale (EBPAS) to assess appeal and three items to assess openness to innovation. The EBPAS was developed to assess mental health care providers' attitudes toward the implementation of EBAs. The original scale comprised four subscales: intuitive appeal of the EBA, openness to new practices, willingness to adopt new practices, and perceived divergence between usual practice and research-based practice.

The fifth CFIR domain is the Process of Implementation, which is conceptualized as four steps: planning, engaging, executing, and reflecting and evaluating. Engaging includes four subconstructs, and we assessed one of these: engaging champions, using three items, one of which we constructed with good face validity and two of which we modified from Damschroder et al. (2009, 2011) for the FQHC environment. We created a single item with good face validity to assess the executing construct. Two items were modified from the Practice Adaptive Reserve Scale to gauge reflecting and evaluating (Jaen et al. 2010; Sohng et al. 2013). Finally, two items based on Helfrich and colleagues' context-related subscale examining goal setting and tracking and communicating performance were used to assess the degree to which goals were clearly communicated, acted upon, and relayed back to staff, and the alignment of that feedback with goals (Helfrich et al. 2009). The latter construct, goals and feedback, belongs to the Inner Setting according to CFIR, but we moved it to this domain because the items were process oriented.

All but two of the CFIR constructs were measured via the Staff Survey answered by up to 10 clinical staff in different roles from each FQHC clinic. Two Outer Setting constructs, both related to external policies and incentives (i.e., reporting requirements and external rewards and recognition), were measured only via the Clinic Characteristics Survey answered by a leader from each FQHC clinic. The Clinic Characteristics Survey also assessed general characteristics of the FQHC, such as number of patients and use of electronic health records.

Data Collection Procedures

Recruitment. CPCRN centers from seven states (CA, CO, GA, MO, SC, TX, and WA) recruited FQHC clinics to participate using two strategies: (1) partnering with the state Primary Care Associations (PCAs) to email member FQHCs to encourage them to participate in this study and (2) inviting individual FQHCs directly through emails, telephone calls, or in-person meetings. Three centers (GA, TX, and CO) used both strategies; two (WA and SC) used only the first strategy, and two (CA and MO) used only the second strategy. One PCA (SC) also directly recruited participants at a meeting of community health center staff members. Once an FQHC clinic agreed to participate, an individual (usually a member of the management team) from each clinic was designated as the main contact. The main contact sent out an introductory email with a link to the online survey to eligible staff members, encouraging their participation.

Sample. One individual representing the clinic, typically a Chief Executive Officer or Medical Director, responded to the Clinic Characteristics Survey. A maximum of 10 staff from each clinic were allowed to complete the Staff Survey, with a limit of three providers (physicians, nurse practitioners, and physician assistants), three nurses or quality improvement staff, and four medical assistants. In all states except WA and CO, only one clinic per FQHC system participated in the survey.

Survey Administration. Surveys were administered between January and May 2013. Reminder emails were sent to potential participants up to three times postinvitation. Incentives were offered either to individuals completing the survey ($25 gift card) or to FQHCs ($250). All study procedures were approved by the Institutional Review Boards of the Coordinating Center and the participating centers (Fernandez et al. 2018). The surveys were programmed into an online survey administration system, Qualtrics.

Data Analyses

Only clinics that responded to both the Clinic Characteristics Survey and the Staff Survey were included in the analyses reported here. We used confirmatory factor analysis (CFA) to assess measures of CFIR constructs with more than three items: complexity and patient needs and resources. Because clinic staff were nested within FQHC clinics, we used Mplus (Muthén and Muthén 2012) to estimate the models accounting for the nested data structure. In each CFA model, items were forced to load on a single factor and, when necessary, post hoc modifications were made to improve model fit (e.g., allowing covariance between certain items). We used the following fit indices and rules of thumb to determine individual CFA model fit: the comparative fit index (CFI) and the Tucker-Lewis index (TLI) as incremental fit indices, equal to or greater than 0.90; the standardized root mean square residual (SRMR) as an index of absolute fit, equal to or less than 0.08; and the root mean square error of approximation (RMSEA) as a parsimony-corrected fit index, equal to or less than 0.08 (Schreiber et al. 2006). We then computed Cronbach's alpha for these constructs to assess interitem consistency. For constructs with only two items, we computed Spearman-Brown reliability estimates, as recommended by Eisinga, Grotenhuis, and Pelzer (2013).
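For reference, the two reliability statistics can be computed directly from an item-response matrix. The sketch below is illustrative only, not the study's analysis code; the formulas follow the standard definition of Cronbach's alpha and the two-item Spearman-Brown estimate recommended by Eisinga, Grotenhuis, and Pelzer (2013). The example data are fabricated:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def spearman_brown_two_items(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman-Brown reliability for a two-item scale: 2r / (1 + r),
    where r is the Pearson correlation between the two items."""
    r = np.corrcoef(x, y)[0, 1]
    return 2 * r / (1 + r)

# Fabricated check: three perfectly consistent items give alpha = 1
x = np.array([1.0, 2.0, 3.0, 4.0])
dup = np.column_stack([x, x, x])
print(round(float(cronbach_alpha(dup)), 6))          # -> 1.0
print(round(float(spearman_brown_two_items(x, x)), 6))  # -> 1.0
```

In the study's data, these coefficients would be computed per construct on the Staff Survey item responses before aggregation.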

To determine whether constructs were appropriately analyzed at the individual or clinic level, we computed ICC(1), ICC(2), and r_wG(J) statistics for clinics with two or more respondents to the Staff Survey (Klein and Kozlowski 2000a,b; Lance, Butts, and Michels 2006).
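The agreement and reliability indices can likewise be expressed compactly. The sketch below is an illustration under the standard definitions, ICC(1) and ICC(2) from a one-way random-effects ANOVA and r_wG(J) with a uniform null distribution; it is not the study's code, and the data are fabricated:

```python
import numpy as np

def icc_1_2(groups):
    """ICC(1) and ICC(2) from a one-way ANOVA over a list of per-clinic
    rating arrays. Uses mean group size (assumes roughly balanced groups)."""
    k_bar = np.mean([len(g) for g in groups])
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ms_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - len(groups))
    icc1 = (ms_between - ms_within) / (ms_between + (k_bar - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2

def rwg_j(ratings: np.ndarray, n_options: int) -> float:
    """r_wG(J) for one clinic: (n_raters, J_items) matrix, with uniform
    null-distribution variance (A^2 - 1) / 12 for A response options."""
    j = ratings.shape[1]
    ratio = ratings.var(axis=0, ddof=1).mean() / ((n_options ** 2 - 1) / 12)
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

# Fabricated check: all variance between clinics -> ICCs of 1;
# identical raters -> r_wG(J) of 1.
groups = [np.array([1.0, 1.0, 1.0]), np.array([5.0, 5.0, 5.0])]
icc1, icc2 = icc_1_2(groups)
print(float(icc1), float(icc2))            # -> 1.0 1.0
print(rwg_j(np.ones((4, 3)), n_options=5))  # -> 1.0
```

Thresholds such as ICC(2) above .70 or r_wG(J) above 0.70, as used in the text, are conventions applied to these quantities.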

We assessed construct validity by examining correlations between constructs at the clinic level, and to the extent possible, assessing whether associations fit the predicted pattern (Furr and Bacharach 2013). Convergent validity was assessed by examining whether factors expected to correlate with one another actually did, and discriminant validity was documented by confirming that theoretically distinct constructs were not correlated too highly. SAS 9.3 was used for these and other descriptive analyses.


Description of Participants, Clinics, and Selected CFIR Constructs

A total of 277 individual clinic staff from 59 FQHC clinics responded to the Staff Survey, and 59 clinic leaders responded to the Clinic Characteristics Survey. The characteristics of participants and FQHC clinics are shown in Table 2. Approximately 12 percent of staff respondents were providers, with higher percentages of medical assistants (36.1 percent) and nurses or clinical directors (35.4 percent). On average, the clinics served 13,485 (SD 15,334) patients in 2012.

Structural Validity

Table 3 shows the results of separate CFAs for three of the constructs. Complexity and patient needs and resources each achieved good model fit, as assessed by the CFI, TLI, SRMR, and RMSEA fit statistics. Covariance between two items was allowed for complexity to improve model fit. The second-order factor, which attempted to combine five Inner Setting constructs into one measure, did not fit well, with both CFI and TLI < .90.


As shown in Table 4, interitem consistency was good for all five measures with three or more items (complexity: .72; patient needs and resources: .86; individual knowledge and beliefs: .75; engaging champions: .84; openness: .88). For constructs with only two items (compatibility, reflecting and evaluating, goals and feedback), Spearman-Brown coefficients ranged from 0.60 to 0.70. The constructs assessed through the Clinic Characteristics Survey (n = 59), reporting requirements and recognition, also demonstrated acceptable reliability (alpha = 0.69; Spearman-Brown coefficient = 0.64).

Inter-Rater Reliability and Agreement

Inter-rater reliability and inter-rater agreement statistics were calculated to assess whether computing clinic-level means from the individual-level data was appropriate, and if so, for which constructs. Results are presented in Table 4. The ICC(1) values for most constructs are above 0.10 and significant, with the exception of the three Intervention Characteristics constructs and openness to innovation (Klein and Kozlowski 2000b; Furr and Bacharach 2013). In contrast, the ICC(2) values are generally below .70, suggesting the group scores may not be reliable, most likely due to the relatively small numbers of respondents per clinic by design. The r_wG(J) indexes are all above 0.70 except reflecting and evaluating (r_wG(J) = 0.693), which indicates a relatively high degree of agreement among individual raters (Klein and Kozlowski 2000b).

Construct Validity

Discriminant validity for all of the constructs is evident in Table 5, which shows correlations among CFIR constructs at the clinic level (individual-level correlations are available from the first author). None of the correlations exceed .75. Our assessment of convergent validity should be considered exploratory given that we did not hypothesize expected relationships between constructs a priori, nor is CFIR explicit about these relationships. Table 5 shows modest correlations among a large number of constructs, both within and across domains. Notable exceptions include complexity, reporting requirements, and openness, each with one or no significant correlations.


This paper describes psychometric properties of measures within each domain of the CFIR within the context of FQHCs and colorectal cancer screening. Discriminant validity was good for all of the measures, and internal consistency was adequate for most. CFA documented reasonable structural validity for complexity and patient needs and resources, but not for the global measure of Inner Setting. Convergent validity was more challenging to assess given that CFIR does not specify hypothesized relationships between constructs other than that they are expected to predict implementation. Therefore, our construct validity findings should be considered exploratory. We also examined ICCs to help inform the appropriate level of analysis in future studies. Results were most consistent within the process domain suggesting these constructs should be measured at the clinic level. Table S1 summarizes psychometric testing of measures in our study relative to the analyses and results reported by the original source.

A closer examination of each of the CFIR domains offers some useful insights for future measurement work, including convergent and predictive validity. We assessed three constructs related to Intervention Characteristics: relative advantage, compatibility, and complexity. All three have been studied extensively within the context of Diffusion of Innovations (Rogers 2003). Our relative advantage measure, assessed with one item, was correlated with constructs within each of the other domains except for Outer Setting, perhaps because the Outer Setting constructs we assessed were more general and did not reference a specific EBA. In addition to documenting that relative advantage predicted adoption in the study from which we obtained the measure, Scott et al. (2008) noted correlations between relative advantage and other measures of Intervention Characteristics, including compatibility and complexity. We observed a significant correlation with the former but not the latter.

Our complexity measure, composed of four items similar to the original measure, had good reliability and structural validity (Pankratz, Hallfors, and Cho 2002). In the original study, complexity predicted adoption, and specific items were correlated with items assessing relative advantage/compatibility (a combined measure) and observability (Pankratz, Hallfors, and Cho 2002). Interestingly, our measure of complexity was not associated with any of the other CFIR domains. In contrast, compatibility was significantly correlated with all of the other CFIR constructs except for external policies and incentives.

We assessed two constructs from the Outer Setting domain: external policies and incentives and patient needs and resources. In theory, these should both be clinic-level constructs because clinics are often required to engage in certain best practices and receive recognition as an entity. As such, we measured them through the Clinic Characteristics Survey. One might speculate that external policies and incentives would increase the appeal of an intervention. We found that reporting requirements were not related to any of the other CFIR measures and that recognition was significantly correlated with appeal. The original study by Simon, Rundall, and Shortell (2007) found external incentives to predict adoption, but did not examine associations with other predictors.

We assessed patient needs and resources with a patient-centeredness index that had been used by McMenamin et al. (2010). The items ask about the clinic and clinic staff in general and how responsive they are to patient needs. The items had good structural validity in our study. The intraclass correlations suggested good agreement among clinic staff when asked about responsiveness to patient needs. Significant correlations were observed between clinic-level report of responsiveness to patient needs and constructs within each of the CFIR domains. This finding is consistent with a recently published systematic review of measures affecting implementation outcomes (Chaudoir, Dugan, and Barr 2013).

To facilitate an examination of all five of the CFIR domains, the Inner Setting measure in this study was conceptualized as one overarching construct. Internal consistency was high, and two of the indicators of intraclass correlation and inter-rater reliability suggested the appropriateness of aggregating measures to the clinic level. CFA, however, suggested it may not be appropriate to combine as many constructs as we attempted to combine here. Indeed, a complementary analysis of constructs within the Inner Setting domain yielded better psychometric results (Fernandez et al. 2018). Nevertheless, the correlations of the Inner Setting construct with constructs in each of the other domains may serve as the basis for hypothesizing causal relationships that can be tested in future studies. The Inner Setting was associated with the patient needs and resources construct and with all but one of the constructs in the process domain. Prior studies have similarly documented an association between the Inner Setting domain and other CFIR domains and constructs (Alexander and Hearld 2011; English et al. 2011; Acosta et al. 2013). Damschroder and colleagues note that "the line between inner and outer setting is not always clear" and depends on the context of the implementation. This ambiguity may partially explain the correlations observed between constructs in our study. Nevertheless, we observed good discriminant validity between constructs, providing evidence that although related, they are distinct. Because of the importance of the Inner Setting and its relationship with implementation outcomes (Novins et al. 2013; Beidas et al. 2014; Ditty et al. 2014), it is a key construct to understand, measure, and consider when conducting implementation studies.

We assessed two constructs from the Characteristics of Individuals domain: knowledge and beliefs about the intervention, operationalized as appeal, and other personal attributes, which we limited to openness to innovation. Our measures, adapted from Aarons's EBPAS (Aarons 2004), had relatively high internal consistency. Our psychometric analyses confirmed the internal reliability of two subscales within the EBPAS, which has been tested with mental health professionals (Aarons et al. 2010) as well as physicians (Melas et al. 2012). We observed good discriminant validity for this operationalization of appeal and openness. Interestingly, the two constructs were not associated with one another. This was surprising given the source of the items and that Aarons has found a strong positive association between the openness and appeal subscales of the EBPAS (Aarons 2004). Prior studies using the EBPAS had found little variance attributable to cluster membership (e.g., mental health practice or hospital) (Aarons 2004; Aarons et al. 2010; Melas et al. 2012). In contrast, we found significant variance at the clinic level for appeal, although not for openness.

Of our four Implementation Process measures, internal consistency was high for engaging champions, and moderate for reflecting and evaluating, and goals and feedback. All four of the process measures had two indicators that suggested that aggregating to the clinic level was appropriate. Relatively little psychometric research has been performed on Implementation Process measures.


Our study is limited by its cross-sectional nature, which precludes modeling associations among constructs over time, even though implementation is a process that unfolds over time. For example, associations may vary in strength depending on whether an EBA has been recently adopted or whether it is fully implemented and successfully sustained. Our study was also challenged by the selection of a category of EBAs rather than one single EBA. This was a purposeful decision, as an FQHC could be considered as engaging in best practice with any of the EBAs. While the majority of respondents referenced provider prompts (a common EBA), pooling responses across a range of EBAs assumes the CFIR constructs operate similarly across those EBAs. This is a reasonable assumption, but one that has not yet been tested.

We also had varying numbers of respondents per clinic, with seven clinics represented by only one respondent for the Staff Survey. Thus, some of our clinic-level measures were from aggregated responses in some clinics and from single respondents in other clinics. Unit of analysis added another layer of complexity, with individuals within clinics, and clinics within FQHCs. Some of the CPCRNs recruited just one clinic per FQHC, while others recruited multiple clinics per FQHC. Lastly, we chose to present correlations at the clinic level as implementation occurs at that level for the majority of the EBAs we were interested in (i.e., provider reminders, structural barriers, client reminders). However, more nuanced and multilevel analyses that allow constructs to be assessed at different levels would more accurately model reality.


The most effective implementation efforts are likely those that acknowledge the complexity of implementation and address multiple domains. For example, a review by Williams et al. (2011), which used CFIR to identify domains addressed by programs achieving high rates of alcohol screening and/or brief intervention, found that focusing implementation strategies on the Inner Setting, Outer Setting, and Process of Implementation domains was associated with the best implementation outcomes. Measurement studies such as the one presented here, along with future studies that examine not only the relationships between constructs but also their relative influence on implementation outcomes, are necessary to more fully understand predictors of implementation and to identify targets, strategies, and associated causal mechanisms for improving implementation (Kirk et al. 2016). While initial validation results from our study are promising, additional work is needed. A logical next step is to test for predictive validity by assessing whether the CFIR constructs are associated with implementation of an EBA for promoting colorectal cancer screening. This type of research, focused within the cancer screening context and other contexts, will aid in identifying a smaller set of influential constructs. Cross-validation studies are also needed to confirm the factor structures we observed and to assess psychometric properties of the measures in a range of samples, settings, and EBA contexts. This may eventually lead to more parsimonious measures with greater utility for use in busy clinical settings. Additionally, studies using path analysis to test relationships between constructs are warranted for theoretical advancement of CFIR and similar theories and models.
Our study makes an important contribution to this line of future research as one of the first to develop quantitative tools that assess constructs across CFIR domains, providing initial validation of the measures and identifying challenges in operationalizing a comprehensive implementation model. Future work will identify which constructs are most commonly associated with implementation and identify intervention targets for improving implementation within the context of FQHCs and cancer control, and beyond.


Joint Acknowledgment/Disclosure Statement: All authors had grant funding associated with this research through the Centers for Disease Control and Prevention and National Cancer Institute-funded Cancer Prevention and Control Research Network through cooperative agreements: U48DP001911, U48DP001949, U48DP001936, U48DP0010909, U48DP001938, U48DP001934, U48DP001944, and U48/DP005000-01S2.

Disclosure: None.

Disclaimer: None.


Aarons, G. A. 2004. "Mental Health Provider Attitudes Toward Adoption of Evidence-Based Practice: The Evidence-Based Practice Attitude Scale (EBPAS)." Mental Health Services Research 6 (2): 61-74.

--. 2005. "Measuring Provider Attitudes Toward Evidence-Based Practice: Consideration of Organizational Context and Individual Differences." Child and Adolescent Psychiatric Clinics of North America 14 (2): 255-71, viii.

Aarons, G. A., C. Glisson, K. Hoagwood, K. Kelleher, J. Landsverk, and G. Cafri. 2010. "Psychometric Properties and U.S. National Norms of the Evidence-Based Practice Attitude Scale (EBPAS)." Psychological Assessment 22 (2): 356-65.

Acosta, J., M. Chinman, P. Ebener, P. S. Malone, S. Paddock, A. Phillips, P. Scales, and M. E. Slaughter. 2013. "An Intervention to Improve Program Implementation: Findings from a Two-Year Cluster Randomized Trial of Assets-Getting to Outcomes." Implementation Science 8: 87.

Alexander, J. A., and L. R. Hearld. 2011. "The Science of Quality Improvement Implementation: Developing Capacity to Make a Difference." Medical Care 49 (Suppl): S6-20.

Beidas, R. S., J. Edmunds, M. Ditty, J. Watkins, L. Walsh, S. Marcus, and P. Kendall. 2014. "Are Inner Context Factors Related to Implementation Outcomes in Cognitive-Behavioral Therapy for Youth Anxiety?" Administration and Policy in Mental Health 41 (6): 788-99.

Brownson, R. C., E. A. Baker, T. L. Leet, K. N. Gillespie, and W. R. True. 2010. Evidence-Based Public Health. New York: Oxford University Press.

Chaudoir, S. R., A. G. Dugan, and C. H. Barr. 2013. "Measuring Factors Affecting Implementation of Health Innovations: A Systematic Review of Structural, Organizational, Provider, Patient, and Innovation Level Measures." Implementation Science 8: 22.

Damschroder, L. J., D. C. Aron, R. E. Keith, S. R. Kirsh, J. A. Alexander, and J. C. Lowery. 2009. "Fostering Implementation of Health Services Research Findings into Practice: A Consolidated Framework for Advancing Implementation Science." Implementation Science 4: 50.

Damschroder, L. J., D. E. Goodrich, C. H. Robinson, C. E. Fletcher, and J. C. Lowery. 2011. "A Systematic Exploration of Differences in Contextual Factors Related to Implementing the MOVE! Weight Management Program in VA: A Mixed Methods Study." BMC Health Services Research 11: 248.

Ditty, M. S., S. J. Landes, A. Doyle, and R. S. Beidas. 2014. "It Takes a Village: A Mixed Method Analysis of Inner Setting Variables and Dialectical Behavior Therapy Implementation." Administration and Policy in Mental Health 42: 672-81.

Eisinga, R., M. Grotenhuis, and B. Pelzer. 2013. "The Reliability of a Two-Item Scale: Pearson, Cronbach, or Spearman-Brown?" International Journal of Public Health 58 (4): 637-42.

English, M., J. Nzinga, P. Mbindyo, P. Ayieko, G. Irimu, and L. Mbaabu. 2011. "Explaining the Effects of a Multifaceted Intervention to Improve Inpatient Care in Rural Kenyan Hospitals-Interpretation Based on Retrospective Examination of Data From Participant Observation, Quantitative and Qualitative Studies." Implementation Science 6: 124.

Fernandez, M., C. Melvin, J. Leeman, K. M. Ribisl, J. D. Allen, M. C. Kegler, R. Bastani, M. G. Ory, B. C. Risendal, P. A. Hannon, M. W. Kreuter, and J. R. Hebert. 2014. "The Cancer Prevention and Control Research Network: An Interactive Systems Approach to Advancing Cancer Control Implementation Research and Practice." Cancer Epidemiology, Biomarkers & Prevention 23 (11): 2512-21.

Fernandez, M., T. Walker, B. Weiner, W. A. Calo, S. Liang, B. Risendal, D. B. Friedman, S. P. Tu, R. S. Williams, S. Jacobs, A. K. Herrmann, and M. C. Kegler. 2018. "Developing Measures to Assess Constructs From the Inner Setting Domain of the Consolidated Framework for Implementation Research." Implementation Science 13: 52.

Forman, J., M. Harrod, C. Robinson, A. Annis-Emeott, J. Ott, D. Saffar, S. Krein, and C. L. Greenstone. 2014. "First Things First: Foundational Requirements for a Medical Home in an Academic Medical Center." Journal of General Internal Medicine 9: 9.

Furr, M., and V. Bacharach. 2013. Psychometrics: An Introduction. Thousand Oaks: Sage Publications.

Gould, N. J., F. Lorencatto, S. J. Stanworth, S. Michie, M. E. Prior, L. Glidewell, J. M. Grimshaw, and J. J. Francis. 2014. "Application of Theory to Enhance Audit and Feedback Interventions to Increase the Uptake of Evidence-Based Transfusion Practice: An Intervention Development Protocol." Implementation Science 9: 92.

Health Resources and Services Administration. 2014. "What is a Health Center?" [accessed on April 2, 2015]. Available at

Helfrich, C. D., Y. F. Li, N. D. Sharp, and A. E. Sales. 2009. "Organizational Readiness to Change Assessment (ORCA): Development of an Instrument Based on the Promoting Action on Research in Health Services (PARIHS) Framework." Implementation Science 4: 38.

Jaen, C. R., B. F. Crabtree, R. F. Palmer, R. L. Ferrer, P. A. Nutting, W. L. Miller, E. E. Stewart, R. Wood, M. Davila, and K. C. Stange. 2010. "Methods for Evaluating Practice Change Toward a Patient-Centered Medical Home." Annals of Family Medicine 8 (Suppl 1): S9-20; S92.

Kalkan, A., K. Roback, E. Hallert, and P. Carlsson. 2014. "Factors Influencing Rheumatologists' Prescription of Biological Treatment in Rheumatoid Arthritis: An Interview Study." Implementation Science 9 (1): 153.

Kirk, M. A., C. Kelley, N. Yankey, S. A. Birken, B. Abadie, and L. Damschroder. 2016. "A Systematic Review of the Use of the Consolidated Framework for Implementation Research." Implementation Science 11: 72.

Klein, K. J., and S. W. Kozlowski. 2000a. "From Micro to Meso: Critical Steps in Conceptualizing and Conducting Multilevel Research." Organizational Research Methods 3 (3): 211-36.

--, and --. 2000b. Multilevel Theory, Research, and Methods in Organizations: Foundations, Extensions, and New Directions. San Francisco: Jossey-Bass, Inc.

Lance, C. E., M. M. Butts, and L. C. Michels. 2006. "The Sources of Four Commonly Reported Cutoff Criteria: What Did They Really Say?" Organizational Research Methods 9: 202-20.

Lehman, W. E., J. M. Greener, and D. D. Simpson. 2002. "Assessing Organizational Readiness for Change." Journal of Substance Abuse Treatment 22 (4): 197-209.

Liang, S., M. C. Kegler, M. Cotter, P. Emily, D. Beasley, A. Hermstad, R. Morton, J. Martinez, and K. Riehman. 2016. "Integrating Evidence-Based Practices for Increasing Cancer Screenings in Safety Net Health Systems: A Multiple Case Study Using the Consolidated Framework for Implementation Research." Implementation Science 11: 109.

Luck, J., C. Bowman, L. York, A. Midboe, T. Taylor, R. Gale, and S. Asch. 2014. "Multimethod Evaluation of the VA's Peer-to-Peer Toolkit for Patient-Centered Medical Home Implementation." Journal of General Internal Medicine 29 (Suppl 2): S572-8.

McMenamin, S. B., N. M. Bellows, H. A. Halpin, D. R. Rittenhouse, L. P. Casalino, and S. M. Shortell. 2010. "Adoption of Policies to Treat Tobacco Dependence in U.S. Medical Groups." American Journal of Preventive Medicine 39 (5): 449-56.

Melas, C. D., L. A. Zampetakis, A. Dimopoulou, and V. Moustakis. 2012. "Evaluating the Properties of the Evidence-Based Practice Attitude Scale (EBPAS) in Health Care." Psychological Assessment 24 (4): 867-76.

Midboe, A. M., M. A. Cucciare, J. A. Trafton, N. Ketroser, and J. F. Chardos. 2011. "Implementing Motivational Interviewing in Primary Care: The Role of Provider Characteristics." Translational Behavioral Medicine 1 (4): 588-94.

Muthen, L. K., and B. O. Muthen. 2012. Mplus User's Guide. 7th Edition. Los Angeles, CA: Muthen & Muthen.

Novins, D. K., A. E. Green, R. K. Legha, and G. A. Aarons. 2013. "Dissemination and Implementation of Evidence-Based Practices for Child and Adolescent Mental Health: A Systematic Review." Journal of the American Academy of Child and Adolescent Psychiatry 52 (10): 1009-25.e18.

Nutting, P. A., B. F. Crabtree, E. E. Stewart, W. L. Miller, R. F. Palmer, K. C. Stange, and C. R. Jaen. 2010. "Effect of Facilitation on Practice Outcomes in the National Demonstration Project Model of the Patient-Centered Medical Home." Annals of Family Medicine 8 (Suppl 1): S33-44; S92.

Pankratz, M., D. Hallfors, and H. Cho. 2002. "Measuring Perceptions of Innovation Adoption: The Diffusion of a Federal Drug Prevention Policy." Health Education Research 17 (3): 315-26.

Ramsey, A., S. Lord, J. Torrey, L. Marsch, and M. Lardiere. 2014. "Paving the Way to Successful Implementation: Identifying Key Barriers to Use of Technology-Based Therapeutic Tools for Behavioral Health Care." Journal of Behavioral Health Services and Research 43: 54-70.

Ribisl, K. M., M. E. Fernandez, D. B. Friedman, P. A. Hannon, J. Leeman, A. Moore, L. Olson, M. Ory, B. Risendal, L. Sheble, V. M. Taylor, R. S. Williams, and B. J. Weiner. 2017. "Impact of the Cancer Prevention and Control Research Network: Accelerating the Translation of Research into Practice." American Journal of Preventive Medicine 52 (3S3): S233-S40.

Robins, L. S., J. E. Jackson, B. B. Green, D. Korngiebel, R. W. Force, and L. M. Baldwin. 2013. "Barriers and Facilitators to Evidence-Based Blood Pressure Control in Community Practice." Journal of the American Board of Family Medicine 26 (5): 539-57.

Rogers, E. M. 2003. Diffusion of Innovations. New York: Free Press.

Sabatino, S. A., B. Lawrence, R. Elder, S. L. Mercer, K. M. Wilson, B. DeVinney, S. Melillo, M. Carvalho, S. Taplin, R. Bastani, B. K. Rimer, S. W. Vernon, C. L. Melvin, V. Taylor, M. Fernandez, K. Glanz, Community Preventive Services Task Force, and Community Preventive Services Task. 2012. "Effectiveness of Interventions to Increase Screening for Breast, Cervical, and Colorectal Cancers: Nine Updated Systematic Reviews for the Guide to Community Preventive Services." American Journal of Preventive Medicine 43 (1): 97-118.

Schreiber, J. B., F. K. Stage, J. King, A. Nora, and E. A. Barlow. 2006. "Reporting Structural Equation Modeling and Confirmatory Factor Analysis Results: A Review." Journal of Educational Research 99 (6): 323-37.

Scott, S. D., R. C. Plotnikoff, N. Karunamuni, R. Bize, and W. Rodgers. 2008. "Factors Influencing the Adoption of an Innovation: An Examination of the Uptake of the Canadian Heart Health Kit (HHK)." Implementation Science 3: 41.

Simon, J. S., T. G. Rundall, and S. M. Shortell. 2007. "Adoption of Order Entry with Decision Support for Chronic Care by Physician Organizations." Journal of the American Medical Informatics Association 14 (4): 432-9.

Sohng, H. Y., A. Kuniyuki, J. Edelson, R. C. Weir, H. Song, and S. P. Tu. 2013. "Capability for Change at Community Health Centers Serving Asian Pacific Islanders: An Exploratory Study of a Cancer Screening Evidence-Based Intervention." Asian Pacific Journal of Cancer Prevention 14 (12): 7451-7.

Tabak, R. G., E. C. Khoong, D. A. Chambers, and R. C. Brownson. 2012. "Bridging Research and Practice: Models for Dissemination and Implementation Research." American Journal of Preventive Medicine 43 (3): 337-50.

Weiner, B. J., C. M. Belden, D. M. Bergmire, and M. Johnston. 2011. "The Meaning and Measurement of Implementation Climate." Implementation Science 6: 78.

Williams, E. C., M. L. Johnson, G. T. Lapham, R. M. Caldeiro, L. Chew, G. S. Fletcher, K. A. McCormick, W. G. Weppner, and K. A. Bradley. 2011. "Strategies to Implement Alcohol Screening and Brief Intervention in Primary Care Settings: A Structured Literature Review." Psychology of Addictive Behaviors 25 (2): 206-14.

Woolf, S. H. 2008. "The Meaning of Translational Research and Why It Matters." Journal of the American Medical Association 299 (2): 211-3.


Additional supporting information may be found online in the Supporting Information section at the end of the article.

Appendix SA1: Author Matrix.

Table S1: Summary of Psychometric Tests and Selected Results by CFIR Construct, Current Study, and Original Source.

Michelle C. Kegler, Shuting Liang, Bryan J. Weiner, Shin Ping Tu, Daniela B. Friedman, Beth A. Glenn, Alison K. Herrmann, Betsy Risendal, and Maria E. Fernandez

Address correspondence to Michelle C. Kegler, Dr.P.H., M.P.H., Department of Behavioral Sciences and Health Education, Emory Prevention Research Center, Rollins School of Public Health, Emory University, 1518 Clifton Road NE, Atlanta, GA 30033; e-mail: Shuting Liang, M.P.H., is with the Department of Behavioral Sciences and Health Education, Emory Prevention Research Center, Rollins School of Public Health, Emory University, Atlanta, GA. Bryan J. Weiner, Ph.D., is with the Departments of Global Health and Health Services, University of Washington, Seattle, WA. Shin Ping Tu, M.D., is with the General Internal Medicine, University of California Davis, Sacramento, CA. Daniela B. Friedman, Ph.D., is with the Department of Health Promotion, Education, and Behavior and the Statewide Cancer Prevention and Control Program, Arnold School of Public Health, University of South Carolina, Columbia, SC. Beth A. Glenn, Ph.D., and Alison K. Herrmann, Ph.D., are with the UCLA Kaiser Permanente Center for Health Equity, Fielding School of Public Health & Jonsson Comprehensive Cancer Center, University of California Los Angeles, Los Angeles, CA. Betsy Risendal, Ph.D., is with the Department of Community and Behavioral Health, Colorado School of Public Health, University of Colorado Comprehensive Cancer Center, Aurora, CO. Maria E. Fernandez, Ph.D., is with the School of Public Health, University of Texas Health Science Center at Houston, Houston, TX.

DOI: 10.1111/1475-6773.13035
Table 1: CFIR Constructs Measured, Definitions, Items, and Sources by Domain

I. Intervention Characteristics

Relative Advantage (*)
Definition: Stakeholders' perception of the advantage of implementing the intervention versus an alternative.
Items (1): Using <EBA> is more effective than our prior practices for increasing colorectal cancer screening rates.
Source: Scott et al. (2008)

Complexity (*)
Definition: Perceived difficulty of implementation, reflected by duration, scope, radicalness, disruptiveness, centrality, and intricacy and number of steps required to implement.
Items (4):
- It is/was difficult to train providers and staff to implement <EBA>.
- Overall, I believe that it is/was complicated to implement <EBA>.
- I believe that using <EBA> (has) required my clinic to make substantial changes to our previous practice.
- <EBA> (has) required more work than can be done with current funding.
Source: Pankratz, Hallfors, and Cho (2002)

Compatibility (*)
Definition: The degree of tangible fit between meaning and values attached to the intervention by involved individuals, how those align with individuals' own norms, values, and perceived risks and needs, and how the intervention fits with existing workflows and systems.
Items (2):
- Using <EBA> to increase colorectal cancer screening rates is compatible with current activities/practices in the clinic.
- I think that using <EBA> to increase colorectal cancer screening fits well with the way I like to work.
Source: Pankratz, Hallfors, and Cho (2002)

II. Outer Setting

External Policies and Incentives ([dagger])
Definition: A broad construct that includes external strategies to spread interventions, including policy and regulations (government or other central entity), external mandates, recommendations and guidelines, pay-for-performance, collaboratives, and public or benchmark reporting. Operationalized as "Reporting Requirements" and "External Rewards & Recognition."
Items (5):
Reporting requirements: Is your Health Center required to report any of the following to an outside organization (e.g., HRSA, CMS, NCQS, others)?
a. Results of CRC screening Quality Improvement projects
b. Outcome data for CRC screening
c. HEDIS data on CRC screening
External rewards and recognition: Does your Health Center receive any of the following rewards for scoring well on colorectal cancer screening quality measurements?
a. Public recognition
b. Any other reward
Source: Simon, Rundall, and Shortell (2007)

Patient Needs and Resources
Definition: The extent to which patient needs, as well as barriers and facilitators to meet those needs, are accurately known and prioritized by the organization.
Items (5):
- This clinic does a good job assessing patient needs and expectations.
- Clinic staff promptly resolves patient complaints.
- Patients' complaints are studied to identify patterns and prevent the same problems from recurring.
- This clinic uses data on patient expectations and/or satisfaction when developing new ...
- This clinic uses data from patients to improve ...
Source: McMenamin et al. (2010)

III. Inner Setting

Second-Order Inner Setting Construct
Constructs: Culture; Implementation Climate (*); Learning Climate; Leadership Engagement; Available Resources (*)
Items (38): See Fernandez et al. (2018) for details.
Source: Helfrich et al. (2009), Lehman, Greener, and Simpson (2002), Weiner et al. (2011)

IV. Characteristics of Individuals

Knowledge and Beliefs about the Intervention (*)
Definition: Individuals' attitudes toward and value placed on the intervention as well as familiarity with facts, truths, and principles related to the intervention. Operationalized as "Appeal of the Intervention."
Items (3): Appeal:
- The <EBA> "made sense" to me.
- <EBA> were being used by colleagues who were happy with it.
- I felt I had enough training to use <EBA>.
Source: Aarons (2004, 2005)

Other Personal Attributes
Definition: Openness to innovation.
Items (3): Openness:
- I am willing to try new programs even if I have to follow a manual.
- I am willing to use new and different types of programs developed by researchers.
- I would try a new program even if it is very different from what I am used to doing.
Source: Aarons (2004, 2005)

V. Process

Engaging: Champions (*)
Definition: Attracting and involving appropriate individuals in the implementation, especially champions: individuals who dedicate themselves to supporting, marketing, and "driving through" an implementation, overcoming indifference or resistance that the intervention may provoke in an organization.
Items (3):
- Some of our staff (i.e., managers, supervisors, other staff) have become program champions, actively supporting and promoting <EBA> beyond what is required.
- Clinic staff takes an active interest in programmatic-related problems and successes.
- Managers actively support implementation of <EBA>.
Source: One item was created; others were modified from ... et al. (2011)

Executing
Definition: Carrying out or accomplishing the implementation according to plan.
Items (1): Our clinic consistently implements programs that are aligned with our mission and strategic plan.
Source: Created

Reflecting and Evaluating
Definition: Quantitative and qualitative feedback about the progress and quality of implementation accompanied with regular personal and team debriefing about progress and experience.
Items (2):
- Throughout the clinic there is frequent and good communication about how different changes are going.
- We use data to guide our clinic (e.g., performance reviews, assessments).
Source: Jaen et al. (2010), Sohng et al. (2013)

Goals and Feedback (*)
Definition: The degree to which goals are clearly communicated, acted upon, and fed back to staff, and alignment of that feedback with goals.
Items (2):
- Clinic leaders establish clear goals for <EBA> to increase colorectal cancer screening.
- Clinic leaders hold staff members accountable for achieving results of <EBA>.
Source: Helfrich et al. (2009)

(*) Constructs with evidence-based approaches (EBA)-specific items.
([dagger]) Only external policies and incentives was assessed in the Clinic Survey (at the clinic level); the rest of the constructs were assessed in the Staff Survey.

Table 2: Characteristics of Participants and Federally Qualified Health Center Clinics

Characteristics of Participants (n = 277)            Frequency  Percentage (*)
Female                                                 200        79.7
Highest level of education completed
  High school graduate or GED or less                   11         4.4
  Technical school diploma or associate degree         113        45.0
  College graduate                                      35        13.9
  Graduate degree or medical school                     92        36.7
Hispanic                                                92        36.7
Race
  White                                                158        57.0
  Black/African American                                21         9.8
  Asian                                                 30        10.8
  American Indian/Alaska Native                          6         2.2
  Other                                                 35        12.6
Roles
  Nurse Practitioner/Physician Assistant/Physician      33        11.9
  Quality Improvement/Operations/Clinic Managers        46        16.6
  Nurse, Clinical/Nursing Director                      98        35.4
  Medical/Clinical Assistants                          100        36.1
Years worked at the clinic
  Less than a year                                      29        11.6
  Less than 5 years                                    149        59.4
  5 to 10 years                                         46        18.3
  More than 10 years                                    27        10.8
Hours per week worked at the clinic
  ≤20                                                   19         6.4
  20-39                                                 31        12.4
  40                                                   149        59.4
  >40                                                   55        21.9

                                                     Mean (Range)   SD
Age                                                  41.3 (21-70)
Years worked at the clinic                            4.8 (0-29)     5.2
Hours per week worked at the clinic                  40.0 (4-85)    10.6

Characteristics of Participating Clinics             Frequency  Percentage
State
  California                                             5         8.5
  Colorado                                               8        13.6
  Georgia                                                5         8.5
  Missouri                                               1         1.7
  South Carolina                                         8        13.6
  Texas                                                 15
  Washington                                            17        28.8
Used Electronic Health Systems (N = 50)                 46        92.0

                                                          Mean     SD
Total number of patients served at the clinic in 2012     13,485
Total number of patient encounters at the clinic in 2012  32,863   37,371

Note. (*) Denominators vary.

Table 3: Confirmatory Factor Analysis Results for Selected CFIR Constructs by Domain

Constructs by                      Number of      Number of  Mean               Standard   Standardized       Fit Statistics
Domain                             Items          Responses  ([double dagger])  Deviation  Loadings (Range)   CFI    TLI    SRMR   RMSEA

I. Intervention Characteristics
  Complexity (*)                   4              216        2.84               0.60       0.429-0.978        1.000  1.040  0.013  0.000
II. Outer Setting
  Patient needs and resources      5              271        3.85               0.83       0.615-0.815        0.998  0.993  0.013  0.037
III. Inner Setting (*, [dagger])   5 ([dagger])   216-277    3.53               0.63       0.515-0.984        0.823  0.089  0.084  0.071

Notes. (*) Covariance was allowed between some items.
([dagger]) Inner Setting is a second-order factor that consists of five constructs: available resources, implementation climate, culture, leadership, learning climate.
([double dagger]) Response options: strongly disagree = 1, disagree = 2, neither agree nor disagree = 3, agree = 4, strongly agree = 5.
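The fit statistics reported in Table 3 (CFI, TLI, RMSEA) can all be derived from the chi-square values of the fitted and baseline (independence) models using their standard textbook definitions. The sketch below is illustrative only; the chi-square inputs are hypothetical, not taken from the study:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Standard SEM fit indices from model and baseline chi-square values.

    chi2_m, df_m -- chi-square and degrees of freedom of the fitted model
    chi2_b, df_b -- chi-square and df of the baseline (independence) model
    n            -- sample size
    """
    # CFI: 1 minus the ratio of the non-centrality estimates.
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, 0.0)
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)

    # TLI: compares chi-square/df ratios; unlike CFI it is not bounded
    # above by 1, which is why a table can report a TLI such as 1.040.
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

    # RMSEA: misfit per degree of freedom, scaled by sample size.
    rmsea = math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Hypothetical inputs (not the study's): a 4-item model fit on n = 216.
cfi, tli, rmsea = fit_indices(chi2_m=4.2, df_m=4, chi2_b=310.0, df_b=10, n=216)
```

A model whose chi-square barely exceeds its degrees of freedom, as in this hypothetical call, yields CFI near 1 and RMSEA near 0, the pattern Table 3 shows for Complexity.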

Table 4: Reliability, Inter-rater Reliability, and Agreement

                                 Number    Cronbach's alpha/
Construct Name by                of        Spearman-Brown          ICC(1) (p          ICC    RWG(J) (Number of
Domain                           Items     Reliability Coefficient value of F test)   (2)    Clinics in Analysis)

I. Intervention Characteristics
Relative advantage               1         N/A                     0.102 (0.0534)     0.305  0.780 (45)
Complexity                       4         0.72                    0.024 (0.7981)     0.087  0.888 (45)
Compatibility                    1         0.62                    0.064 (0.0983)     0.236  0.819 (46)
II. Outer Setting
Patient needs and resources      5         0.86                    0.134 (0.0120)     0.415  0.890 (52)
III. Inner Setting               38        0.87                    0.209 (<.0001)     0.553  0.937 (52)
IV. Individual Characteristics
Knowledge and beliefs            3         0.75                    0.189 (0.0012)     0.472  0.860 (46)
Other personal attributes        3         0.88                    0.053 (0.1247)     0.205  0.934 (52)
V. Process
Engaging                         3         0.84                    0.161 (0.0061)     0.425  0.818 (46)
Executing                        1         N/A                     0.117 (0.0122)     0.377  0.708 (52)
Reflecting and evaluating        2         0.65                    0.171 (0.0009)     0.486  0.693 (52)
Goals and feedback               2         0.68                    0.135 (0.0155)     0.375  0.732 (46)
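The statistics in Table 4 follow standard definitions: Cronbach's alpha for internal consistency, ICC(1) and ICC(2) from a one-way ANOVA for inter-rater reliability, and rwg(J) for within-clinic agreement against a uniform null distribution. The sketch below is an illustration of those formulas in NumPy, not code from the study; the function names are mine, and the ICC helper assumes roughly balanced clinic sizes (it uses the average group size for k).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items). alpha = k/(k-1) * (1 - sum item var / total var)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def icc_one_way(scores, groups):
    """ICC(1) and ICC(2) from a one-way ANOVA; groups are clinic IDs."""
    scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)
    labels = np.unique(groups)
    n, g = len(scores), len(labels)
    grand = scores.mean()
    ss_between = sum(len(scores[groups == l]) * (scores[groups == l].mean() - grand) ** 2
                     for l in labels)
    ss_within = sum(((scores[groups == l] - scores[groups == l].mean()) ** 2).sum()
                    for l in labels)
    msb, msw = ss_between / (g - 1), ss_within / (n - g)
    k = n / g  # average raters per clinic (assumes near-balanced groups)
    icc1 = (msb - msw) / (msb + (k - 1) * msw)  # reliability of a single rater
    icc2 = (msb - msw) / msb                    # reliability of the clinic mean
    return icc1, icc2

def rwg_j(item_scores, n_options=5):
    """rwg(J) for one clinic; item_scores: (n_raters, J items), 5-point scale by default."""
    item_scores = np.asarray(item_scores, dtype=float)
    J = item_scores.shape[1]
    s2 = item_scores.var(axis=0, ddof=1).mean()   # mean observed item variance
    sigma_eu = (n_options ** 2 - 1) / 12.0        # expected variance under uniform null
    ratio = 1 - s2 / sigma_eu
    return (J * ratio) / (J * ratio + s2 / sigma_eu)
```

Reported values like RWG(J) = 0.937 for the Inner Setting factor would then be averages of this per-clinic statistic across the clinics included in the analysis.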

Table 5: Clinic-level Correlations of CFIR Constructs

                                  a      b       c      d      e      f        g        h        i      j        k         l        m

I. Intervention Characteristics
a. Relative advantage            1.00   0.09    0.49** -0.07   0.29   0.14     0.23     0.48**   0.09   0.47**   0.38**    0.18     0.40**
b. Complexity                           1.00    0.12    0.22   0.15   0.002    0.05     0.23     0.12   0.02    -0.0002   -0.12     0.20
c. Compatibility                                1.00   -0.04   0.25   0.35**   0.48**   0.42**   0.29*  0.45**   0.48**    0.41**   0.52***
II. Outer Setting
d. Reporting                                           1.00    0.04  -0.06    -0.04    -0.07    -0.17  -0.15    -0.16     -0.04    -0.08
e. Recognition                                                 1.00   0.07     0.20     0.34*    0.15   0.13     0.25      0.25     0.23
f. Patient needs and resources                                        1.00     0.64***  0.37**   0.23   0.40**   0.37***   0.68***  0.20
III. Inner Setting
g. Second-order factor                                                         1.00     0.57***  0.09   0.49**   0.60***   0.71***  0.20
IV. Individual Characteristics
h. Appeal                                                                               1.00     0.12   0.66***  0.35**    0.44**   0.64***
i. Openness                                                                                      1.00   0.25    -0.11      0.13     0.26
V. Process Of Implementation
j. Engaging                                                                                             1.00     0.53***   0.51***  0.62***
k. Executing                                                                                                     1.00      0.62***  0.27*
l. Reflecting and evaluating                                                                                               1.00     0.31*
m. Goals and feedback                                                                                                               1.00

(*) p < .05.
(**) p < .01.
(***) p < .001.
N = 59.
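The significance stars in Table 5 flag correlations that differ from zero; with N = 59 clinics, the conventional test is a t statistic with n - 2 = 57 degrees of freedom (the paper does not state its exact test, so the threshold below is the standard two-tailed convention, not taken from the study). A minimal NumPy sketch:

```python
import numpy as np

def pearson_with_t(x, y):
    """Pearson r and the t statistic for testing r != 0 (df = n - 2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    n = len(x)
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    return r, t

# With df = 57 (N = 59 clinics), |t| > 2.00 corresponds roughly to p < .05 two-tailed.
```

Correlations around 0.26 or larger clear that bar at N = 59, which matches the pattern of starred entries in the table.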
COPYRIGHT 2018 Health Research and Educational Trust

Article Details
Author:Kegler, Michelle C.; Liang, Shuting; Weiner, Bryan J.; Tu, Shin Ping; Friedman, Daniela B.; Glenn, B
Publication:Health Services Research
Date:Dec 1, 2018
