
Ecological analysis of the first generation of community clinical oncology programs.

Pfeffer, J., and G. R. Salancik. The External Control of Organizations: A Resource Dependence Perspective. New York: Harper & Row Publishers, Inc., 1978.

Rogers, D. L., and D. A. Whetten. Interorganizational Coordination: Theory, Research, and Implementation. Ames: Iowa State University, 1982.

Schopler, J. H. "Interorganizational Groups: Origins, Structure, and Outcomes." Academy of Management Review 12, no. 4 (October 1987): 702-13.

Objective. An ecological framework is proposed for assessing factors important to consider in allocating funds to promote sound performance of interorganizational programs.

Data Source/Study Setting. This framework is used to examine the first generation of Community Clinical Oncology Programs (CCOPs) funded by the National Cancer Institute (NCI) from 1983 to 1986 to coordinate clinical research activity at the local level. The research reported is based on secondary data collected for the Community Cancer Care Evaluation at the Fred Hutchinson Cancer Research Center.

Study Design. A repeated measures design was used to analyze differences in the level and patterns of CCOP productivity, measured as the number of patients enrolled on NCI-approved Phase III trials. The predictive dimensions include (1) measures of environmental inputs (population density, organizational dominance, professional support, NCI funding); (2) measures of organizational inputs (number of hospitals, number of staff, number of physicians, NCI experience, clinical research experience); and (3) structural measures (functional specialization, administrative concentration). Predicted relationships were assessed using general linear models procedures.

Data Collection/Extraction Methods. Data obtained from NCI files were supplemented by interviews with NCI personnel and published statistics.

Principal Findings. Funding level, clinical research experience, and number of staff are the most important predictors of patient enrollment. Clinical research experience has a positive relationship with patient enrollment and a negative association with changes in enrollment. The reversal is explained by the influence of the CCOPs that had the greatest amount of clinical research experience at the beginning of the program.

Conclusions. The ecological approach provides a useful framework for understanding factors that should be considered in funding interorganizational programs and promoting their development. Most importantly, results suggest that a somewhat different approach is needed to initiate programs rather than to expand existing programs.

Keywords. Interorganizational relations, organizational ecology, external funding, community oncology programs, open systems

Promoting effective interorganizational collaboration has become increasingly important in the broad realm of health services. Public demands for more effective, holistic approaches to health care and pressures for cost containment come at a time when health services activities have become both more complex and more expensive. Frequently, health policy goals related to service delivery, treatment, prevention, and research can be accomplished only by the joint action of diverse professional groups in multiple organizations. Funding incentives and technical supports are often used to stimulate development of the interorganizational relationships necessary to attain these goals. Yet, even when health services professionals are firmly committed to common goals, coordinating activities across organizational boundaries can be problematic. Competition for resources, turf concerns, unequal power, varied levels of investment, and other conflicting interests may impede coordination when organizational commitments are involved. Unfortunately, interorganizational endeavors unable to resolve problems of coordination may be short-lived and have little impact, or their continuing struggles may result in limited productivity and inefficient use of resources. Because interorganizational coordination can be so essential to the attainment of current policy goals but so difficult to attain, decision makers need guidance in determining (1) the factors that should be considered in funding interorganizational programs; and (2) the supports needed to promote the development of successful interorganizational programs.

The organizational literature provides only limited guidance in addressing these questions. As Kaluzny et al. (1990, 86) note, the interorganizational dependencies characteristic of the current health environment "challenge existing organization theory." The examination of interorganizational relationships has evolved from an early focus on the boundary roles of individuals who represent their organizations in transactions with other organizations (Adams 1976; Aldrich and Herker 1977) to the more recent conceptualization of boundary spanning systems (Aldrich and Whetten 1981). Theoretical models tend to explain variations in the outcomes of joint organizational activity either by emphasizing organizational characteristics that contribute to an organization's stability or dominance in interorganizational relationships (e.g., Adams 1976; Benson 1975; Cooke 1977; Pfeffer and Salancik 1978), or by considering the ways in which environmental forces shape relationships among organizations (e.g., Aldrich 1979; Aldrich and Whetten 1981). Critiques of existing approaches to the study of interorganizational relationships suggest that both the organizational and the environmental perspectives are necessary if we are to understand why interorganizational endeavors meet with varied success (e.g., Hernes 1976; Rogers and Whetten 1982; Schopler 1987).

The present analysis is based on an ecological framework that is compatible with findings from previous research. The framework extends earlier work by enabling us to examine both environmental and organizational factors, as well as to consider their interaction. Hypotheses generated from this ecological framework will be tested to identify factors that were important to the development and outcomes of the first generation of Community Clinical Oncology Programs (CCOPs), a nationwide interorganizational program funded by the National Cancer Institute (NCI).


The first CCOPs are an appropriate focus for examining ways in which to promote interorganizational coordination, since they are part of NCI's ongoing "network strategy" for enhancing the quality of cancer care delivered in community settings. This network strategy originated in the early 1970s with the head and neck cancer demonstration projects that brought about changes in the delivery of cancer treatment through the introduction of new treatment strategies and patient guidelines (Fennell and Warnecke 1988). Over the past two decades, NCI has initiated a number of community programs to promote the more rapid diffusion of state-of-the-art practices related to cancer treatment and cancer control. Because success in changing local practices ultimately depends on the ability of local physicians and institutions to coordinate their activities, the continuing challenge that NCI confronts in designing community programs lies in finding more effective ways to support local coordination.

The CCOPs are based on the premise that participation of community physicians in clinical trials research will stimulate the highest quality of patient care while it expands the existing network of state-of-the-art practice. In 1983, NCI funded 62 CCOPs with a primary emphasis on improving cancer care at the local level by supporting the participation of community physicians in clinical trials research. In clinical trials, eligible patients are randomly assigned to either the best current treatment protocol or a new, potentially better, treatment protocol (Frelick et al. 1984). This first generation of CCOPs provided an opportunity for supporting and coordinating research activities related to cancer treatment in local organizations such as hospitals and physician group practices. Through these community programs, local physicians were given access to new cancer interventions through NCI-approved treatment protocols that previously had been available only in major cancer centers and research-oriented medical centers. NCI provided scientific, financial, and data management support to physicians to facilitate their participation. A comprehensive evaluation of the effectiveness of the CCOP I programs, conducted by the Statistical Analysis and Quality Control Center at the Fred Hutchinson Cancer Research Center in Seattle, is available in the final report to NCI (Feigl and Diehr 1987).

In 1987, NCI initiated a second phase, the CCOP II. The focus of this second generation of CCOPs has been expanded to include cancer control research, and results are being evaluated by a research team at the Cecil G. Sheps Center for Health Services Research at the University of North Carolina at Chapel Hill in collaboration with the Survey Research Laboratory of the University of Illinois (Kaluzny, Ricketts, Warnecke, et al. 1989). The initial CCOPs, which are examined in this secondary analysis, provide a convenient organizational population for applying an ecological framework to help decision makers identify (1) the environmental and organizational conditions associated with productive interorganizational relationships, and (2) the external supports needed to promote these relationships.


This ecological framework draws on two perspectives that apply ecological principles to the study of formal organizations: the population ecology approach and the ecosystem approach. The population ecologists explain organizational change by examining the nature and distribution of resources in the organizational environment (e.g., Aldrich 1979; Hannan and Freeman 1977). This framework draws on the population ecologists' delineation of the conditions that contribute to coordination and restrain competition when groups of organizations come together to accomplish a specific purpose within a limited time frame (Aldrich 1979; Aldrich and Whetten 1981). These include such factors as a small number of organizational members, a concentration of leadership, similarity of values and attitudes, and a low level of environmental competition. The ecosystem approach to the study of organizations, which contributes most heavily to this framework, was developed by Bidwell and Kasarda in their ecological theory of organizational structuring and was applied to their longitudinal analysis of the public school system in one state (Bidwell and Kasarda 1985, 1987; Kasarda and Bidwell 1984). This approach can be regarded as a specialized extension of population ecology that provides a more balanced view of the interplay of external and internal forces that shape the way organizations, or groups of organizations, adapt to constantly changing but restrictive environments.

From an ecosystem perspective, any group of organizations that comes together to accomplish some function for society can be conceptualized as an open system composed of member organizations and their environments. An open systems view of interorganizational systems dependent on external funding as a key source of resources is modeled in Figure 1. This model directly addresses the question of how evolving structure and outputs are influenced by and affect changing external and internal conditions over time.

The environment provides resources necessary for carrying out the key function, but the availability of these resources is constrained by regulations and the extent of political support in the environment. These environmental inputs may be very accessible and more than adequate, or they may be difficult to acquire because of scarcity and competition. The external inputs interact with the inputs of the member organizations, which can be characterized by the number and composition of participants and their level of expertise. All of these inputs are transformed through the structuring process and influence both the structural form and output of the interorganizational system. The success or productivity of the interorganizational system in carrying out its key function constitutes feedback to both the internal system and the environment that, in turn, affects future inputs. Systems that are successful in performing the function for which they have been funded are likely to attract more resources and to become more effective in organizing their activities. Systems that have problems with performance will have more difficulty in acquiring and using resources effectively.


In an ecological conception of an interorganizational system that depends on external funding, the unit of analysis shifts from the choices and characteristics of individual members and their parent organizations to the properties of the collective unit. Although the commitment and efforts of individual members are regarded as important contributions, they are not considered in measuring system characteristics. The critical outputs in these systems are defined in terms of productivity. The predictive dimensions include environmental inputs, inputs of member organizations, and structural indicators. Structure is considered both as an outcome and as a predictor of the output. A description of these conceptual dimensions provides the basis for hypotheses that both address relationships among the predictive dimensions and predict the nature and pattern of outputs.

Outputs. The output of externally funded interorganizational systems is the productivity of the key function the system has been funded to perform. These systems typically operate within a limited time frame and are highly dependent on external funding. The funding agency specifies the function and stipulates measures of productivity and other requirements. While productivity is most important to system survival, compliance with other funding requirements -- such as reporting deadlines and site visits -- has important implications for system development over time. Both the level and nature of productivity and the timeliness and extent of compliance constitute feedback to the funding agency. Systems that fail to meet minimal requirements are likely to have their funding decreased or discontinued. Thus, systems that expend excessive resources in developing a structure for their work together or in adjusting to internal and external changes may not survive.

Environmental Inputs. The resource-supplying environment provides the funding, a supply of specialized labor with appropriate skills and information, facilities, equipment, and the "raw materials" (such as patients) that are necessary to sustain the interorganizational system. The institutional environment regulates the composition and timing of these resources through the operation of competitive forces and under institutional rules and regulations. Boundary-spanning programs that rely on external funding in health settings tend to be highly protected and highly regulated, but their environments vary in the extent of competition for labor and patients and in the degree of regulation (Scott and Lammers 1985). Depending on how active professional and regulatory associations are, local environments also may differ in the amount of support, legitimacy, and expertise that is available to interorganizational programs.

Inputs from Member Organizations. The key properties characterizing the inputs from member organizations are the size and composition of the workforce, the experience with interorganizational coordination, and the technology necessary to perform the key function. Size has an important influence on the structuring process within an organization (Kasarda 1974). Although larger size is associated with greater adaptability, a small number of members facilitates coordination and constrains competition (Aldrich 1979). Further, increasing size requires greater administrative intensity to maintain coordinated activity and is associated with greater complexity (Freeman and Hannan 1975). The composition of interorganizational systems refers to the mix of organizational components as well as individual members. A heterogeneous composition can increase the complexity of coordination but may be necessary to perform the key function. Prior experience with coordinating interorganizational activity is considered important because it represents the presence of a social technology that is critical to carrying out joint organizational activity (Greenwald 1980). Finally, the degree of skill related to the key function is critical to organizational performance. Programs with a higher level of technology from their inception require less start-up time and can more easily meet funding requirements.

Structure. The structuring process results in both a hierarchical and functional division of labor. The administrative hierarchy can be described in terms of administrative concentration. The extent of functional differentiation can be characterized by the specialization of the subunits.

Administrative concentration represents a measure of the consolidation of power within the system and is assumed to be a critical property of interorganizational systems. When power is highly concentrated in a few administrators, the coordination of resources, personnel, and environmental transactions is facilitated and the system has the capacity to act as a unit. In this conception, the ratio of administrative positions to other personnel in the system (cf. Hawley's Measure of Power) provides a measure of concentration of power (Aiken 1970; Aiken and Alford 1970; Aldrich 1979; Hawley 1963). A low ratio of administrative personnel to the total labor force indicates a greater concentration of power and is associated with more rapid formation, greater adaptability, and greater productivity. A high ratio of administrative personnel to the total workforce may represent a diversion of resources from the production function to coordination and is likely to be associated with lower productivity (Bidwell and Kasarda 1975).

Specialization refers to the extent of differentiation in the division of labor (Bidwell and Kasarda 1985). One measure of functional specialization is the number of different position categories in the interorganizational system. Interorganizational systems that develop a highly differentiated structure have multiple classifications. Systems with a simpler structure may have only one or two classifications. A greater level of functional differentiation is required when inputs are heterogeneous. Greater differentiation is also associated with production of more diverse outputs.
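The two structural measures described above can be sketched in a few lines. The following is a minimal illustration assuming a hypothetical staff roster; the tuple layout, position titles, and counts are invented for illustration and are not the study's actual data format.

```python
def structural_measures(roster):
    """Compute the two structural measures from a hypothetical roster:
    a list of (name, position_category, is_administrative) tuples."""
    # Functional specialization: number of distinct position categories.
    specialization = len({category for _, category, _ in roster})
    # Administrative concentration: ratio of administrative personnel to
    # nonadministrative personnel (a low ratio = more concentrated power).
    n_admin = sum(1 for _, _, is_admin in roster if is_admin)
    n_other = len(roster) - n_admin
    concentration = n_admin / n_other if n_other else float("inf")
    return specialization, concentration

roster = [
    ("A", "principal investigator", True),
    ("B", "data manager", False),
    ("C", "data manager", False),
    ("D", "research nurse", False),
    ("E", "research nurse", False),
]
spec, conc = structural_measures(roster)
print(spec, round(conc, 2))  # 3 position categories; ratio 1/4 = 0.25
```

A CCOP with many position categories would count as highly specialized, while the 1:4 ratio here would indicate relatively concentrated administration.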

Hypotheses. The general hypothesis tested in this study is that variation in outputs among the CCOPs is significantly associated with differing levels of environmental inputs, organizational inputs, and measures of structure. For each of the models developed, specific statistical hypotheses are tested to assess the significance of the contribution of each of the inputs and structural measures to variations in outputs among the CCOPs.


Research Design and Data Sources. The first generation of CCOPs was funded to carry out the key function of increasing community participation in NCI clinical trials. All of the CCOPs were subject to similar regulations and were bounded by the three-year funding period. The CCOPs were selected for their ability to enroll at least 50 patients per year to Phase III clinical trials and for their asserted potential for changing local physicians' patterns of practice. Despite these similarities, system constraints and resources differed initially and as the CCOPs developed.

The research reported is based on data that were collected during the Community Cancer Care Evaluation conducted at the Statistical Analysis and Quality Control Center of the Fred Hutchinson Cancer Center under contract to NCI (Feigl and Diehr 1987). A repeated measures design was used to analyze differences in the level and patterns of CCOP productivity. The series of within group comparisons that were carried out enhances the validity of statistical conclusions (Cook and Campbell 1979). Further, following the approach described by Yin (1984), case study design was used to examine organizational factors in more depth. Predicted relationships were assessed using general linear models procedures. Variations in the measures of the explanatory variables were occurring naturally and were assumed to be independent of each other. Models based on the conceptual framework were built in a stepwise fashion, assessing collinearity at each step.

Data were obtained from NCI files for each of the 62 CCOPs funded in 1983. Three CCOPs that failed to receive funding all three years were deleted because of inadequate information. Thus, the research reported is based on the population of 59 CCOPs that were funded from 1983 to 1986. All of the major steps in the analyses were also completed for the full data set of 62 CCOPs and, again, for 57 CCOPs (with the three failures and two other CCOPs that specialized in pediatrics deleted). Similar results were obtained from the analyses of all three data sets. The individual CCOP files maintained by the Community Oncology Program of the Division of Cancer Prevention and Control at NCI were the main source for the output measure (the number of patients on protocol) and for the explanatory variables. NCI also provided data on various environmental and organizational properties of each site, such as the list of counties and the number of members of the American Society of Clinical Oncologists, Inc. in each service area. Further, data from CCOP records and reports were supplemented by interviews with NCI personnel and by published statistics on hospital beds (American Hospital Association 1983) and population density (U.S. Bureau of the Census 1983).

Measurement of Variables. All of the variables identified in the conceptual framework were represented by either continuous or categorical measures. The major measure of the productivity of CCOP output was the number of patients enrolled on NCI-approved Phase III clinical research protocols for each of the three years of funding. Patient enrollment was based on reports from the CCOPs that were validated through comparisons with research base reports. Because it was important to know the impact of the CCOPs on existing efforts related to clinical research, a measure of change in patient enrollment was also computed by comparing the actual number of patients enrolled each year with baseline estimates of the number of patients enrolled on research protocols during the year prior to the CCOP program.

The environmental inputs included measures of population density, organizational dominance, professional support, and NCI funding. Population density, the population per square mile of the service area, is an indicator of the concentration of resources and potential patients. Organizational dominance, a ratio of the number of beds in hospitals affiliated with the CCOPs to total hospital beds in the service area, is an indirect measure of competition. It is assumed that hospitals affiliated with the CCOPs that have a large proportion of the total hospital beds in the service area will face less competition for patients, labor, and other resources. Professional support, a ratio of the number of CCOP physicians to members of the American Society of Clinical Oncologists, Inc. (ASCO), the professional oncology association, is a general measure of the professional involvement of oncologists in the service area of CCOPs. A high ratio for support suggests a greater pool of skill and a higher level of involvement in cancer research. NCI funding is the annual dollar amount NCI provided the CCOPs for direct costs each of the three years.
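The three ratio-based environmental measures are simple to operationalize. Below is a sketch using a dictionary describing a hypothetical service area; all field names and the example numbers are assumptions made for illustration, not values from the study.

```python
def environmental_inputs(service_area):
    """Compute the three ratio-based environmental measures for a
    hypothetical CCOP service area (illustrative field names)."""
    # Population density: persons per square mile of the service area.
    density = service_area["population"] / service_area["square_miles"]
    # Organizational dominance: CCOP-affiliated beds / total beds.
    dominance = (service_area["ccop_hospital_beds"]
                 / service_area["total_hospital_beds"])
    # Professional support: CCOP physicians / ASCO members in the area.
    support = service_area["ccop_physicians"] / service_area["asco_members"]
    return density, dominance, support

area = {
    "population": 1_200_000, "square_miles": 4_000,
    "ccop_hospital_beds": 900, "total_hospital_beds": 1_500,
    "ccop_physicians": 12, "asco_members": 20,
}
d, dom, sup = environmental_inputs(area)
print(round(d), round(dom, 2), round(sup, 2))  # 300 persons/sq. mi., 0.6, 0.6
```

Under the framework, this hypothetical CCOP (dominance 0.6, support 0.6) would be read as facing modest competition and drawing on a fairly involved professional community.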

Organizational variables included measures describing the inputs from member organizations and the structure of the CCOPs. The size of organizational membership was measured in terms of the numbers of hospitals affiliated with each CCOP at the end of each year of funding. Measures of individual CCOP membership (number of physicians and number of support staff) were also obtained for each year. Relevant technology was measured in terms of NCI experience and clinical research experience. The NCI experience of each CCOP was a categorical measure of prior experience with two prior NCI community programs, the Community Hospital Oncology Program and the Cooperative Group Outreach Program. The measure of clinical research experience was based on CCOP estimates of the number of patients enrolled on clinical trials in the year prior to funding. Structure was measured in terms of functional specialization (number of different position categories each year) and administrative concentration (ratio of administrative personnel to nonadministrative personnel each year, with a low ratio indicating more concentrated administration).

Analysis. The process of building models to explain the relationship between CCOP outputs and predictive dimensions was a sequential procedure based on descriptive analyses of the dependent and independent variables and an assessment of the relationships between each of the independent and dependent variables. The next phase of analysis focused on assessing the general model to determine if the explanatory variables specified in the theoretical model explained differences in outcomes. For each time period, the general model was:

CCOP Output = Overall Mean + Indicators of CCOP Inputs + Measures of Structure + Error

Stepwise procedures were used to identify the set of independent variables with continuous measures that provided the most parsimonious and adequate explanation of variance for each of the outcomes for each year of funding (Kleinbaum and Kupper 1978). Following model development, the categorical variable, coordinating technology, was added to the model for each year and analysis of covariance procedures were used to assess model improvement. In assessing each of the models identified, parameter values were examined for their direction and contribution (Neter, Wasserman, and Kutner 1985; Kleinbaum and Kupper 1978). Further, regression assumptions were checked for all of the models presented (Belsley, Kuh, and Welsch 1980).
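The stepwise procedure can be approximated as greedy forward selection on adjusted R². The sketch below runs on synthetic data whose variable names, coefficients, and distributions are invented to mimic the setting; it is a rough stand-in for the cited procedures, not the study's actual analysis code.

```python
import numpy as np

def adj_r2(y, X):
    """Adjusted R^2 for an OLS fit of y on X (X includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape  # p counts the intercept column
    r2 = 1 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
    return 1 - (1 - r2) * (n - 1) / (n - p)

def forward_select(y, candidates):
    """Greedy forward selection: add the candidate that most improves
    adjusted R^2, stopping when no addition helps."""
    selected, pool, current = [], dict(candidates), -np.inf
    while pool:
        scores = {name: adj_r2(y, np.column_stack(
                      [np.ones(len(y))]
                      + [candidates[s] for s in selected] + [col]))
                  for name, col in pool.items()}
        best = max(scores, key=scores.get)
        if scores[best] <= current:
            break
        current = scores[best]
        selected.append(best)
        del pool[best]
    return selected, current

# Synthetic data mimicking the setting: enrollment driven by funding,
# baseline research experience, and staff; "density" is pure noise.
rng = np.random.default_rng(0)
n = 59
funding = rng.normal(300, 60, n)
baseline = rng.normal(40, 15, n)
staff = rng.normal(5, 2, n)
density = rng.normal(500, 200, n)
enrollment = (10 + 0.15 * funding + 0.8 * baseline + 4 * staff
              + rng.normal(0, 5, n))

selected, score = forward_select(enrollment, {
    "funding": funding, "baseline": baseline,
    "staff": staff, "density": density,
})
print(selected)
```

With data generated this way, the three true predictors enter the model; a pure-noise variable like density may or may not survive the adjusted-R² penalty, which is one reason the text also checks parameter direction and regression assumptions after selection.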

The CCOP Profile

All of the results presented are based on the 59 CCOPs that were funded from 1983 to 1986. Examination of the CCOP reports of the number of patients enrolled on protocols indicates that mean enrollment for each of the three years of funding was considerably greater than baseline estimates (baseline enrollment: M = 39 patients on protocol, s.d. = 43.8; year 1 enrollment: M = 75 patients, s.d. = 41.5; year 2 enrollment: M = 83 patients, s.d. = 36.3; year 3 enrollment: M = 81 patients, s.d. = 40.5). The variability in amount of patient enrollment was high for the baseline period and for all three years of funding. Further, reports of patient enrollment tended to be positively skewed with the median lower than the mean, a pattern indicating that a few CCOPs had very high patient enrollment.

The trend toward declining patient enrollment in the third year was investigated by reviewing progress reports of the individual CCOPs. Comments in the progress reports of the 31 CCOPs with declining patient enrollment frequently relate lower accrual in the third year to lack of research protocols appropriate for their patient population. During this period, several of the major research bases phased out clinical treatment protocols that had previously accounted for a substantial portion of CCOP patient enrollment.

The within group analysis compared patient enrollment for each year to baseline enrollment. Results of the matched pair t-tests used to evaluate differences were highly significant (p < .001 for each comparison) and were supported by the results of the Wilcoxon Signed Ranks test at the same level of significance. Thus, the number of patients CCOPs placed on protocol for each year of funding was significantly different from baseline estimates of enrollment.
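The matched-pair comparison can be illustrated with a hand-computed t statistic. The per-CCOP counts below are hypothetical (the actual analysis used all 59 CCOPs), but they show how year-versus-baseline differences feed the test.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(after, before):
    """Matched-pair t statistic: mean difference over its standard error."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

baseline = [20, 35, 50, 15, 60, 40, 25, 55]   # hypothetical per-CCOP counts
year1 = [45, 70, 90, 40, 110, 75, 55, 95]
t = paired_t(year1, baseline)
print(round(t, 1))  # t ≈ 11.7; with n - 1 = 7 df, p < .001
```

Pairing each CCOP with itself removes the large between-site variability noted above, which is why both the t-test and its nonparametric Wilcoxon counterpart are so decisive here.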

All of the measures of the independent variables are described in terms of central tendency and variability in Table 1. The amount of dispersion is extensive for most of the environmental inputs (i.e., density, professional support, and, to some extent, organizational dominance) and raises questions about the reliability of the measures.

For density, the extreme variability in persons per square mile in the service area is related to wide differences in the way individual CCOPs define their service areas. Variability in professional support is associated with very low membership in the American Society of Clinical Oncology in many areas of the country. Variability in organizational dominance was clarified by examining the extent to which CCOP service areas were contiguous or overlapping: the patterns observed tend to support the measurement of this construct. A wide range in the dollars the CCOPs received from NCI was also observed, but the distribution for NCI funding each year tends to be close to normal with only a slight positive skew in the third year. The average level of NCI funding increased by about 16 percent from the first to second year and was 29 percent higher in the third year than in the first.

The description of the organizational inputs and indicators of structure indicates some variability in cross-sectional comparisons, but limited variability across years. Composition of the CCOPs remains relatively stable across the years. The only increases noted are for number of hospitals, physicians, and support staff, which were identified as the best indicators of CCOP composition. Over half of the CCOPs had some prior experience in NCI programs, and four-fifths had some clinical research experience as indicated by their baseline level of patient accrual. The two measures of structure, specialization and administrative concentration, show only slight changes across years, some of which appear related to more rigorous reporting requirements in the third year.

Associations of each of the input and structural measures with the outputs -- patient enrollment and change in enrollment -- are presented in Table 2. These findings are based on Pearson correlations, supported by the results of simple regression models and analysis of variance procedures.

All of the input and structural measures, except population density, have positive relationships with patient enrollment by the third year. NCI funding is the only environmental input that has a significant relationship with patient enrollment, with r increasing from .6 to .7 and p < .001 for all three years. Most of the organizational measures of size and experience tend to have positive, significant associations with patient enrollment, although relationships with both measures of experience decline over the three years. The positive relationships of the structural measures with patient enrollment are relatively weak.

The relationships of environmental inputs and organizational inputs to change in patient enrollment are similar to those observed for actual enrollment, with one exception: clinical research experience, measured by baseline enrollment, reverses direction and has a highly significant negative relationship with change in enrollment. The other significant associations with change in enrollment are for NCI funding in the second year, organizational dominance for the last two years, physicians and hospitals for the last two years, and staff for all three years. The measures of organizational structure -- specialization and administrative concentration -- have very weak, insignificant associations with change in patient enrollment.

This statistical profile of the CCOP outputs, environmental and organizational inputs, and organizational structure indicates that the data were adequate to test hypotheses and to build models that provided an ecological explanation of variations in CCOP development and output. The associations established among the outputs, inputs, and structural measures tended to support theoretical predictions and provide guidance for model building.

Predictive Models of Patient Enrollment on Clinical Research Protocols

The findings from the cross-sectional analysis of patient enrollment provide some support for the theoretical prediction that the output, patient enrollment on clinical research protocols, is a joint function of the environmental and organizational properties of the CCOPs. The consistent predictors of patient enrollment each year, shown in Table 3, are NCI funding, clinical research experience (as measured by baseline enrollment), and the number of support staff.

Together these three variables explain a considerable proportion of the variance in patient accrual in the first year (adj. R² = .72). In the second year, organizational dominance also appears in the model, but the proportion of variance explained by this model (adj. R² = .64) is slightly less. Then in the third year, with only the first-year inputs (NCI funding, clinical research experience, and support staff) in the model, the proportion of variance explained is further reduced (adj. R² = .54). All of these input measures have a positive relationship with patient enrollment.

Examination of the inputs associated with patient enrollment indicates that NCI funding is a highly significant predictor of enrollment and becomes increasingly significant for models of patient enrollment in the last two years of the program. Clinical research experience, measured by the estimate of patient enrollment when the CCOPs were initially funded, is also a highly significant predictor of enrollment in the first two years, but is less significant in the model for the third year. Model results for each year indicate not only that baseline enrollment helps predict later enrollment, but also that the inclusion of baseline enrollment in models predicting enrollment controls for prior clinical research experience in estimating the effects of the other factors. The number of support staff is a significant, but less important, predictor of enrollment, and its significance declines slightly over the three years. The coefficient estimates for funding and staff remain relatively the same over the three years, but for clinical research experience (baseline enrollment), the estimates decline by two-thirds for the final year. Organizational dominance is a significant predictor only in the second year. The positive sign for the coefficient of organizational dominance, an indirect measure of competition, suggests that patient enrollment increases as CCOPs become more dominant and face less environmental competition. Overall, models of patient enrollment provide some support for theoretical predictions, but suggest that structural measures play a minor and insignificant role in explaining variability in accrual. Specialization appears only in unrefined models and has a negative relationship with enrollment.
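The kind of general linear model summarized in Table 3 amounts to an ordinary-least-squares fit with an adjusted R². A minimal sketch follows; the predictor names mirror the article's three consistent inputs, but the values and coefficients are simulated, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 59  # the number of funded CCOPs in the study

# Hypothetical values for the three consistent predictors (all simulated):
funding = rng.uniform(100, 500, n)   # NCI funding, $1,000s
baseline = rng.uniform(0, 100, n)    # baseline enrollment (research experience)
staff = rng.integers(1, 10, n)       # number of support staff

# Simulated first-year enrollment driven by all three inputs plus noise.
y = 0.8 * funding + 0.6 * baseline + 5.0 * staff + rng.normal(0, 40, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), funding, baseline, staff])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# R-squared, then the adjusted R-squared of the kind reported in Table 3.
resid = y - X @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
p = X.shape[1] - 1                   # number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(f"slopes = {beta[1:].round(2)}, adj. R^2 = {adj_r2:.2f}")
```

Including baseline enrollment in `X` is what the article means by "controlling for" prior experience: the remaining coefficients are estimated net of whatever baseline enrollment already explains.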

Predictive Models of Change in Patient Enrollment on Clinical Research Protocols

The important predictors of change from baseline estimates of patient enrollment on clinical research protocols are the same variables that predict the actual amount of enrollment each year. Table 4 summarizes the results for the final, refined models of change in enrollment for the 59 CCOPs.

For the first year of funding, one input from the environment, NCI funding, and two organizational inputs, clinical research experience and the number of support staff, explain about 40 percent (adj. R² = .39) of the total variance in change in enrollment. These same variables explain over 60 percent of the variance in the final models for both the second year (adj. R² = .63) and the third year (adj. R² = .61). NCI funding has a stable, positive association with change in enrollment across years; the positive association with number of support staff declines slightly from year 2 to year 3; and the negative relationship between clinical research experience and change in enrollment almost doubles by the third year of funding.

The finding that clinical research experience, measured by baseline estimates of enrollment, is a negative predictor of change in enrollment and a positive predictor of the actual patient enrollment each year may appear inconsistent. The results of an analysis using a categorical measure of three levels of baseline enrollment are presented in Table 5 and help clarify the different roles of baseline enrollment in predicting later patient enrollment and change in enrollment.

The three groups created by this classification are: low baseline enrollment (14 or fewer patients); medium baseline enrollment (more than 14 and fewer than 43 patients); and high baseline enrollment (43 or more patients). When CCOPs are categorized as low, medium, and high by baseline enrollment, the mean amount of patient enrollment at the end of years 1, 2, and 3 is positively associated with the level of baseline enrollment. Based on the results of F-tests, differences among means for patient enrollment for the three baseline enrollment groups are significant for each year (year 1, p < .001; year 2, p = .003; year 3, p = .05). The mean change in enrollment for the three baseline groups for years 1, 2, and 3, in contrast to the findings for actual enrollment, has a negative association with level of baseline enrollment. The results of F-tests used to evaluate differences in amount of change for the three groups are also significant (year 1, p = .003; year 2, p < .001; year 3, p < .001).
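The F-tests comparing the three baseline groups are one-way analyses of variance. A minimal hand-rolled version looks like the sketch below; the group data are simulated, and only the grouping scheme (three baseline strata) follows the article:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated first-year enrollment for programs stratified by baseline
# enrollment (the article's cutpoints are 14 and 43 patients; these
# group means are hypothetical).
low = rng.normal(30, 10, 20)     # low-baseline group
medium = rng.normal(55, 10, 20)  # medium-baseline group
high = rng.normal(90, 10, 19)    # high-baseline group
groups = [low, medium, high]

# One-way ANOVA F statistic: between-group vs. within-group mean squares.
all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()
k, n = len(groups), len(all_obs)
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))

print(f"group means = {[round(g.mean(), 1) for g in groups]}, F = {f_stat:.1f}")
```

A large F means the spread among the three group means is big relative to the spread within groups, which is exactly the pattern the article reports for enrollment across baseline strata.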

This analysis was undertaken to determine where the action in the correlation was taking place, since the observed pattern of results could be viewed as evidence of regression to the mean. If the finding represents a regression effect, the medium baseline enrollment group would be expected to have the least amount of change in years 1, 2, and 3 (Cook and Campbell 1979, 52-53). This does not appear to be the case. The mean change in the medium baseline enrollment group is significantly greater than the changes observed for the high baseline group for all three years. Further, statistical regression effects are more generally a function of the degree of correlation. The correlations reported in Table 2 between both patient enrollment and clinical research experience (baseline enrollment) and change in enrollment and clinical research experience, although not perfect, are quite high. In contrast, the correlation between second-year patient enrollment in the CCOPs and third-year change in enrollment (r = -.13) after the program was already in place indicates almost no relationship. Further, a comparison of patient enrollment patterns for the CCOPs and 15 controls (unfunded CCOPs) indicates that patient enrollment and change were not related for the controls. Correlations are close to zero for the relationship between year 1 enrollment and years 2 and 3 change for the controls (year 2 change and patient enrollment, year 1, r = .07; year 3 change and patient enrollment, year 1, r = -.02). These findings tend to discount statistical regression as a competing hypothesis (Campbell and Stanley 1963, 11). If regression to the mean were operating, a strong negative correlation between patient enrollment and amount of change would be expected for the controls and for the CCOP third-year change and second-year enrollment.
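The regression artifact the author rules out can be demonstrated directly: when a stable quantity is measured twice with independent noise, initial level and subsequent change are negatively correlated even though nothing really changed. A small simulation (all values hypothetical) shows the pattern that would have appeared if regression to the mean were driving the CCOP results:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# A stable true score observed twice with independent measurement noise.
true_score = rng.normal(50, 15, n)
t1 = true_score + rng.normal(0, 10, n)
t2 = true_score + rng.normal(0, 10, n)

# Under pure regression to the mean, initial level and subsequent change
# are negatively correlated even with no real change in the true score.
change = t2 - t1
r_artifact = np.corrcoef(t1, change)[0, 1]

print(f"corr(level, change) under regression to the mean: {r_artifact:.2f}")
```

The near-zero correlations the article reports for the unfunded controls (r = .07 and r = -.02) are the opposite of this signature, which is what licenses discounting statistical regression as a rival explanation.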

Inspection of the means for patient enrollment indicates that as baseline level goes from low to high, mean patient enrollment is consistently higher for each year. The opposite is true for the means for change in enrollment for the three groups (except for a slight reversal between the low and medium groups for the first year). It appears that the low means for amount of change in patient enrollment in the high-baseline group are the main source of significance.

Therefore, it appears that the program has the least effect on the group of CCOPs that have the most experience with placing patients on clinical research protocols at the beginning of funding and the greatest impact on the CCOPs that started with the least clinical research experience. Because the less experienced CCOPs make substantial gains in the number of patients they enroll on protocols, the average enrollment for each of the three groups is much closer at the end of the three years than at the beginning of the program. The results of this analysis support the inclusion of baseline enrollment as a measure of clinical research experience in the models predicting change as well as models predicting annual enrollment. Findings indicate that the difference scores used to measure change in patient enrollment from baseline estimates do not account completely for the effects of clinical research experience in models predicting change. In addition, the results of an alternative approach to modeling change in patient enrollment, with baseline enrollment excluded from all models, illustrate the effects of ignoring clinical research experience. With this latter approach to modeling change, the significant predictors of change are different, and the proportion of variability in change in patient enrollment explained decreases substantially, to between 15 and 34 percent.


Discussion

The 59 CCOPs that are the focus of this study represent a rich source of data for examining the way external and internal factors influence the outcomes of interorganizational systems that rely on external funds. Because the population of funded CCOPs was selected through a competitive process, these programs may be somewhat superior to those of other groups of community physicians engaged in clinical trials research. In this respect, they are similar to other joint ventures that evolve in response to external opportunities and compete for funding. The environmental and organizational conditions that affect the organization of activity and outcomes are also broadly characteristic of other federally funded research programs in health services that require the coordination of diverse professionals located in a range of organizations.

Consistent Predictors. The simplicity of the models predicting CCOP productivity raises some doubts about the apparent complexity of these interorganizational systems. A substantial portion of the variation in protocol output and changes in output was explained by only three inputs: the amount of NCI funding, the number of support staff, and the level of clinical research experience. On the surface, these findings are neither surprising nor profound. Systems that have more funds to support their joint activity, a greater supply of labor, and the necessary skills to carry out the key function designated by the funding agency can be expected to be more successful. A closer examination of findings points to the shortcomings of this prima facie interpretation. Funding, staffing, and past performance tend to be the most consistent predictors of productivity, but they are not the only factors associated with patient enrollment, and their contributions vary over the three years. An obvious limitation of the predictive models is their failure to clarify the way inputs are shaped and modified by differences in organizational structure. Within these constraints, a more reflective consideration of the results of modeling the CCOPs as open systems yields some useful insights.

NCI funding, clinical research experience, and support staff appear to have consistent relationships with patient enrollment and change in enrollment, but their importance and roles vary. Clinical research experience, as measured by patient enrollment prior to the program, seems most important to the actual level of productivity in the first year and then, as all of the CCOPs begin to acquire this experience, the level of external funding plays a more important role in predicting the numbers of patients on treatment protocols. NCI funding has a positive relationship with both patient enrollment and change in enrollment, but the association with change is weaker than with actual enrollment. In practical terms, when all other factors are equal, one additional protocol patient can be expected for each $1,000 increase in NCI funding. The highly significant nature of this relationship emphasizes the critical effect of NCI's annual funding decisions in determining the level of CCOP productivity, particularly after the first year.
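The $1,000-per-patient reading above is simply a rescaling of the regression slope. A trivial sketch, using the article's approximate round figure of one patient per $1,000 as an assumed slope rather than an exact estimate:

```python
# Approximate marginal effect reported in the article: about one additional
# protocol patient per $1,000 of NCI funding, other factors held equal.
# (A hypothetical round number, used here only to show the arithmetic.)
slope_per_1000 = 1.0

def predicted_gain(extra_funding_dollars: float) -> float:
    """Expected extra enrollment for a funding increase, all else equal."""
    return slope_per_1000 * (extra_funding_dollars / 1000)

print(predicted_gain(25_000))  # a $25,000 increase -> about 25 more patients
```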

The declining but positive relationship of clinical research experience with the level of patient enrollment and its increasingly negative relationship with change in enrollment appears to be explained by the influence of CCOPs that had the greatest amount of clinical research experience at the beginning of the program. The CCOPs with high baseline enrollment placed a greater number of patients on protocol than CCOPs with low and medium levels of baseline enrollment, but their productivity dropped steadily over the three years of funding. In contrast, the less experienced CCOPs made modest, but steady gains in patient enrollment.

The decrease in the productivity of CCOPs with high baseline enrollment is somewhat eclipsed by their generally impressive record of patient enrollment over the three years. Each year, these CCOPs enrolled far more patients on protocol than NCI required. Their high productivity counterbalanced the less impressive outcomes of the CCOPs that were below the minimum of 50 patients. While the contribution of these high producers to the overall success of the CCOPs cannot be discounted, their outcomes do not represent an expansion of protocol treatment at the local level and their tendency toward declining productivity over time raises concerns about cost efficiency. NCI support for organizations that have already achieved high productivity before becoming CCOPs does not appear to be effective in maintaining the prior level of effort of these CCOPs. This implies that funding incentives may be somewhat more useful in initiating joint endeavors than as mechanisms for expanding ongoing activity.

The size of support staff makes only a modest contribution to the explanation of variance, but it appears to be a somewhat more important predictor of change in patient enrollment than of actual enrollment. The differences in these relationships may reflect greater stability of staff in the highly productive CCOPs. Further, the finding that size of support staff had the most positive association with CCOP enrollment and change in enrollment during the first year of the program suggests that adequate staff to manage data and carry out other supportive activities may be most critical during the start-up phase.

Other Predictors. Further support for the general open systems model is provided by predictors that play more minor roles. Three other inputs (organizational dominance, professional support, physicians) have a positive influence on outcomes when they appear, and one structural measure (specialization) has a negative relationship to both the number of patients placed on protocol and change in patient accrual. Despite their minimal contributions, the pattern in which these predictors appear is of some interest.

Although the ratio used to indicate professional support was a crude measure, the positive relationship with outcomes in the first year may suggest that physicians affiliated with ASCO have greater access to information during the start-up phase than physicians who are not affiliated with this national oncology organization. The positive relationship of organizational dominance with both accrual and change in accrual, in the second, and sometimes third year, suggests that CCOPs that face less competition may find it easier to expand. Organizational dominance might not have been a predictor in the first year because the major focus during the start-up period is on establishing effective internal relationships. The negative effect of specialization on outcomes, which was counter to expectations, may relate to the routine nature of enrolling patients on protocols. Perhaps a high degree of differentiation is not required and may be counterproductive in attempts to streamline protocol activity and increase output. Requirements for structural differentiation are likely to change for the second generation of CCOPs, because the CCOP II has the added function of cancer control research, which necessitates the involvement of a greater range of professionals.

Theoretical Support. The models of CCOP productivity and change in productivity support the usefulness of an ecological approach and the importance of considering both environmental and organizational factors. The results of the descriptive analysis and the models predicting CCOP outputs provide considerable support for the general hypothesis. The CCOPs are significantly different with respect to selected measures of the level and pattern of their outputs. A substantial proportion of this difference is explained by the joint effects of environmental and organizational inputs and the structural form of the CCOPs both initially and over the three-year funding period. Although some of the inputs played a less significant role than anticipated, their lack of influence may have been due to the relatively imprecise measurement of some important constructs. Structural indicators that are capable of capturing the complex, but elusive, form of these interorganizational systems are also needed.

The increasing importance of the level of funding in predicting change in productivity, when baseline performance is controlled, suggests the need for external supports to stimulate this type of interorganizational activity. Particularly for the less experienced programs, the amount of funding NCI provides appears to be a critical determinant of increased productivity. The contribution of staff size to productivity also suggests the importance of having an adequate base of support to maintain and expand clinical research activity. A clearer understanding of the mediating effects of structure might have been obtained with a more complete specification of the theoretical model.

Policy Considerations. Because the level of funding, technical expertise, and the size of the support staff played such an important role in increasing productivity, policymakers need to consider the adequacy of these factors in making funding decisions. Funding cannot have the desired effect unless interorganizational systems have the support staff and technology necessary to carry out required activities. Funding may provide an attractive opportunity for community physicians who are committed to improving local care, but it does not appear sufficient by itself to see these new systems through the early stages of development. Technical consultation may be needed when organizations lack adequate expertise.

Because the intent of the CCOPs program is to promote the expansion of coordinated cancer activities at the local level, more attention may need to be given to the extent of involvement of the individual members and the investment of member organizations. In many cases the activities of these boundary-spanning systems may be somewhat extraneous to the key functions of organizational members, such as hospitals, and a relatively low-level priority of some of the individual physician members. Programs where a few physicians are responsible for placing most of the patients on protocol may have complied with minimal requirements but have done little to increase local coordination. A few active members may represent a core of experience that can provide community leadership, but dependence on them tends to be an unstable foundation for future coordination since the loss of one or two members may wipe out most of the available expertise.

The uneven performance of the CCOPs points to some important funding issues. Funding inexperienced systems seems to produce more of a change in productivity but is more costly. Programs that have already demonstrated their productivity during the baseline period continue to produce desired outcomes at a substantially higher level than the less experienced programs, but their level of effort has a tendency to decline once they are assured of funding. Further, funding alone does not promote productivity when other resources, such as appropriate protocols, are not available.

A more stringent application of minimum requirements would weed out CCOPs that were not meeting standards and might reduce unit costs to NCI. These CCOPs may, however, be the very programs that need more external support to enable them to carry out protocol treatment and upgrade the quality of cancer care in their communities. Withdrawing funding from CCOPs struggling to develop could run counter to the overall aim of improving cancer care at the local level. A more balanced strategy would address both the uneven, and often declining, performance of CCOPs with high levels of patient enrollment and the needs of the CCOPs that are not complying.

Incremental standards of performance coupled with funding increases might be a more effective approach to promoting expansion and containing unnecessary costs. Raising expectations of productivity for CCOPs that are already productive could be a condition of increased funding and might serve as an incentive for expansion. If this approach did not serve to stimulate increased productivity, funding could be limited to less experienced programs. This strategy would probably be more costly during the start-up period but would be in keeping with the intent to improve local cancer care, not simply to maintain the existing level of effort.

Because CCOPs continued to receive funding even when they did not comply with minimal standards, feedback from NCI to noncomplying programs was distorted. The use of variable productivity requirements, based on an assessment of the system's potential capacity, might make it possible to set more realistic, achievable expectations for the less experienced, less productive CCOPs, and would make it possible to convey expectations more clearly. This strategy, however, would make it critical to evaluate the capacity for productivity more reliably and implies a need for more aggressive outreach to help faltering CCOPs procure appropriate research protocols, expand their pool of eligible patients, and develop data management systems. When resources such as appropriate protocols are not available, standards of performance need to be adjusted.

Implications for Future Research. The more complete specification of the inputs and structural measures that predict the development and outcomes of interorganizational systems is an important focus for future research. Many of the problems encountered in measurement tended to evolve from inconsistencies in defining the scope of the service areas and from variations in reporting requirements. Future study would be enhanced by the application of uniform criteria for defining the service areas and by providing programs with a more standardized format to use in their annual reports of activities, personnel, and relationships. In addition, requiring more detailed reports of organizational characteristics at the beginning of the program would make it possible to expand models with baseline measures of inputs and structure so that the effect of internal changes on productivity could be more fully assessed.

The further specification of organizational inputs raises conceptual questions related to the overlap between the internal and external inputs. In interorganizational programs, organizational members such as hospitals tend to have a relatively limited involvement and are not only a part of this system, but also of the institutional and resource-supplying environment. Given their dual role, the organizational components may be in competition with each other, as well as with other organizations in the environment, and this may affect the availability of such critical resources as the pool of eligible patients and research protocols for treatment and cancer control. Recognizing this duality would imply that models would include measures of internal competition among components, as well as environmental competition, and some measure of protocol and patient availability.


Conclusion

This ecological approach provided a useful framework for identifying the factors that decision makers should consider in funding interorganizational programs and supporting their development. Perhaps most importantly, the results suggest that initiating local coordination requires a somewhat different approach than expanding ongoing activity. Although the generalizability of results is limited to interorganizational programs of similar composition and functions, the insights gained from this analysis have been applied to making decisions about funding and supports that will enhance the second generation of CCOPs.


Acknowledgments

The author expresses deep appreciation to Dr. Arnold Kaluzny and to two anonymous referees for their helpful comments on earlier versions of this article.


References

Adams, J. S. "The Structure and Dynamics of Behavior in Organizational Boundary Roles." In Handbook of Industrial and Organizational Psychology. Edited by M. D. Dunnette. Chicago: Rand McNally, 1976.

Aiken, M. "The Distribution of Community Power: Structural Bases and Social Consequences." In The Structure of Community Power. Edited by M. Aiken and P. E. Mott. New York: Random House, 1970.

Aiken, M., and R. R. Alford. "Community Structure and Innovation: The Case of Urban Renewal." American Sociological Review 35, no. 4 (August 1970): 650-65.

Aldrich, H. Organizations and Environments. Englewood Cliffs, NJ: Prentice Hall, 1979.

Aldrich, H., and D. A. Whetten. "Organization-Sets, Action-Sets, and Networks: Making the Most of Simplicity." In Handbook of Organizational Design. Vol. 1. Edited by P. Nystrom and W. Starbuck. Oxford, England: Oxford University Press, 1981.

American Hospital Association. American Hospital Association Guide to the Health Care Field. Chicago: AHA, 1983.

Belsley, D. A., E. Kuh, and R. E. Welsch. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley & Sons, Inc., 1980.

Benson, J. K. "The Interorganizational Network as a Political Economy." Administrative Science Quarterly 20 (June 1975): 229-49.

Bidwell, C. E., and J. D. Kasarda. The Organization and Its Ecosystem: A Theory of Structuring in Organizations. Greenwich, CT: JAI Press, Inc., 1985.

-----. "School District Organization and Student Achievement." American Sociological Review 40 (February 1975): 55-70.

-----. Structuring in Organizations: Ecosystem Theory Evaluated. Greenwich, CT: JAI Press, Inc., 1987.

Campbell, D. T., and J. C. Stanley. Experimental and Quasi-Experimental Designs for Research. Boston, MA: Houghton Mifflin Co., 1963.

Cook, T. D., and D. T. Campbell. Quasi-Experimentation Design and Analysis Issues for Field Settings. Boston, MA: Houghton Mifflin Co., 1979.

Cook, K. S. "Exchange and Power in Networks of Interorganizational Relations." The Sociological Quarterly 18 (Winter 1977): 62-82.

Feigl, P., and P. Diehr. Executive Summary: Community Cancer Care Evaluation (CCCE) Final Report. NCI Contract no. 1-CN-35009. Seattle, WA: Statistical Analysis and Quality Control Center, Fred Hutchinson Cancer Research Center, 1987.

Fennell, M. L., and R. B. Warnecke. The Diffusion of Medical Innovations. An Applied Network Analysis. New York: Plenum Press, 1988.

Freeman, J., and M. T. Hannan. "Growth and Decline Processes in Organizations." American Sociological Review 40 (April 1975): 215-28.

Frelick, R. W., J. W. Yates, W. Dunlap, and R. Foster. "Cancer Control Activities of Community Clinical Oncology Program (CCOP) Applicants." In Advances in Cancer Control: Epidemiology and Research. Edited by P. F. Engstrom, P. N. Anderson, and L. E. Mortenson. New York: Alan R. Liss, Inc., 1984.

Greenwald, H. P. Social Problems in Cancer Control. Cambridge, MA: Ballinger Publishing Co., 1980.

Hannan, M. T., and J. Freeman. "The Population Ecology of Organizations." American Journal of Sociology 82, no. 5 (March 1977): 929-64.

Hawley, A. H. "Community Power and Urban Renewal Success." American Journal of Sociology 48 (January 1963): 422-31.

Hernes, G. "Structural Change in Social Processes." American Journal of Sociology 82, no. 3 (November 1976): 513-47.

Kaluzny, A. D., J. P. Morrissey, and M. M. McKinney. "Emerging Organizational Networks: The Case of the Community Clinical Oncology Program." In Innovations in Health Care Delivery. Edited by S. S. Mick and Associates. San Francisco: Jossey-Bass Inc., Publishers, 1990.

Kaluzny, A. D., T. Ricketts, III, R. Warnecke, L. Ford, J. Morrissey, D. Gillings, E. Sondik, H. Ozer, H. Goldman, and J. Goldman. "Evaluating Organizational Design to Assure Technology Transfer: The Case of the Community Clinical Oncology Program." Journal of the National Cancer Institute 81, no. 22 (15 November 1989): 1717-25.

Kasarda, J. D. "The Structural Implications of Social System Size: A Three-Level Analysis." American Sociological Review 39, no. 1 (February 1974): 19-22.

Kasarda, J. D., and C. E. Bidwell. "A Human Ecological Theory of Organizational Structuring." In Sociological Human Ecology. Edited by M. Micklin and H. M. Choldin. Boulder, CO: Westview Press Inc., 1984.

Kleinbaum, D. G., and L. L. Kupper. Applied Regression Analysis and Other Multivariate Methods. Boston, MA: Duxbury, 1978.

Neter, J. W., W. Wasserman, and M. H. Kutner. Applied Linear Statistical Models. Homewood, IL: Richard D. Irwin, 1985.
COPYRIGHT 1993 Health Research and Educational Trust

Article Details
Author: Schopler, Janice H.
Publication: Health Services Research
Date: Apr 1, 1993
