Greater than the Sum of its Parts: A Qualitative Study of the Role of the Coordinating Center in Facilitating Coordinated Collaborative Science.


In recent years, biomedical research has become increasingly collaborative (Falk-Krzesinski et al., 2011; Wuchty, Jones, & Uzzi, 2007). Today's large research challenges such as global climate change and the early detection of cancer can only be addressed in large, multi-site, multi-disciplinary collaborative efforts, as they require the input of scientists from disciplines as disparate as epidemiology, ecology, sociology, clinical medicine, molecular biology, population genetics, and veterinary medicine. The development of information and communication technologies (ICTs) has allowed scientists to work together in larger numbers, on increasingly complex problems, over ever greater distances. Such large collaborative projects bring together scientists from different labs, different disciplines, and different institutions, and must forge these disparate elements into a functioning whole. Yet this collaboration comes at a cost. Coordinating large numbers of dispersed researchers working on such complex questions across geographic and institutional boundaries requires a substantial commitment of time and resources (Cummings & Kiesler, 2007). This administrative burden often falls on the lead Principal Investigator (PI) and his/her staff.

In the field of cancer epidemiology, multi-site research projects are increasingly employing coordinating centers (CCs) as a tool to ease that administrative burden by offloading it onto a group with substantial experience in the coordination of such projects (Rolland, Smith, & Potter, 2011). A CC is a central body tasked with coordination and operations management of a multi-site research project. We call this type of collaborative science "Coordinated Collaborative Science," defined as collaborative research done with the support of a CC. While other types of collaborative science may use similar facilitation techniques or experience similar challenges, Coordinated Collaborative Science concentrates much of that facilitation work in the CC itself and, thus, represents a unique perspective on facilitation.

A CC is generally formed to support a specific project, such as a consortium tackling a problem that can only be addressed by employing a networked structure. Seminara et al. (2007) define networks in epidemiology as "groups of scientists from multiple institutions who cooperate in research efforts involving, but not limited to, the conduct, analysis, and synthesis of information from multiple population studies" (p. 1). Such networks can be built and/or funded in a variety of ways; however, in Coordinated Collaborative Science, the research centers and the CC are generally funded as individual components of the network by separate Requests for Application (RFAs) or, occasionally, by contracts. The CC does not usually have an official pre-existing connection to any of the research centers.

We know very little about either how such networks function or how best to facilitate them. In fact, there is no definition of what facilitation means in the context of Coordinated Collaborative Science. CCs receive very little guidance as to how to go about their tasks beyond the vague, high-level expectations laid out in the funding agency's RFA. Few CCs write about their work, leaving new CC PIs and managers to devise their practices anew without evidence of efficiency or efficacy. The National Institutes of Health (NIH) spends millions of dollars each year supporting such networks and their CCs, yet little research has been done on how CCs work, how to structure them, or precisely which aspects of the research project should be allocated to the CC. The research presented here seeks to rectify that deficiency by investigating and documenting the work practices of two CCs currently involved in Coordinated Collaborative Science. To that end, we have identified areas of the collaborative process that are enhanced by the work of the CC. The areas on which CC members chose to focus, along with their tools and techniques, are the result of collective decades of experience coordinating multi-site projects. As such, they represent crucial sources of knowledge, which, in turn, could be used to improve the process of collaboration in other networked-science projects. Though limited by its focus on just two CCs at one institution, this research represents a crucial first step toward defining the work of CCs and what constitutes facilitation in Coordinated Collaborative Science.

What We Know about CCs

In the mid-1970s, the National Heart, Lung, and Blood Institute (NHLBI) began a project called Coordinating Center Models Project (CCMP) in an attempt to better understand CCs in clinical trials (Symposium on Coordinating Clinical Trials, 1978). At that time, clinical trials were still a fairly new method of doing research and large amounts of money were being spent to coordinate those trials. Yet very little was known about what made a good CC or how to run a CC most effectively. To address these issues, a CCMP research team was designated, made up of scientists who were interested in the design and implementation of clinical trials. Their approach consisted of a survey of those involved in six NHLBI-funded clinical trials, as well as interviews with key staff members. The results were reported at a conference in 1978 and published soon after (Symposium on Coordinating Clinical Trials, 1978).

One of the key findings of the CCMP was that it was not possible to identify a common set of activities across the CCs (Symposium on Coordinating Clinical Trials, 1978). The research group concluded that there was no one model of a CC. They apparently did not consider the possibility that the great variation in activities and attitudes stemmed from the fact that CCs represented a new organizational model with no existing blueprint and that CC leaders were creating policies and procedures in reaction to the events around them. Perhaps the variation could be traced to the lack of standards both for running a CC and for communicating among CC leaders.

Soon after the CCMP report was published, investigators from several clinical trials published articles about their CCs. These were not empirical studies but, rather, reports written by the CC and clinical-trial leadership detailing how their own CC worked, including a list of the activities for which the CC was responsible, as well as assessments of issues or problems and particularly interesting solutions that were devised for working in a clinical trial. Although the articles described vastly different levels of detail about what a CC should do, all stressed that the primary responsibility was to ensure the quality of the science. Blumenstein, James, Lind, and Mitchell (1995) stated that the CC's primary mission is "to assure the validity of study findings that eventually will be disseminated in publications and public presentation" (p. 4). Going into slightly more detail, Mowery and Williams (1979) wrote that monitoring the implementation of, and adherence to, the protocol is the primary responsibility of the CC. Rifkind (1980) added the delivery of results to the community in a timely and high-quality manner.

The specific responsibilities listed by these authors vary widely, ranging in level of detail from "statistical and content methodological support" (Bangdiwala, de Paula, Ramiro, & Munoz, 2003, p. 61) to "ordering study medications" (Meinert, Heinz, & Forman, 1983, p. 356). Some articles divided responsibilities into categories, most of which are common in theme, if not in a specific label. These categories include: (1) statistical coordination and management; (2) study coordination; and (3) administrative and secretarial support. The first category of responsibilities involves data, including data management and analysis, monitoring data collection, and performing quality assurance (see, for example: Blumenstein et al., 1995; Bangdiwala et al., 2003; Meinert et al., 1983; Curb et al., 1983; Margitic, Morgan, Sager, & Furberg, 1995; Greene, Hart, & Wagner, 2005; Lachin, 1980; Berge, 1980; and Winget et al., 2005). The second category involves coordinating studies, including developing protocols and forms, monitoring adherence to the protocol or performance monitoring, developing computer systems, training staff, documenting and archiving study information, communications, adhering to institutional policies, reporting, allocating CC resources, and preparing manuscripts. Administrative and secretarial support included functions such as fiscal management, meeting and site visit organization, budget preparation and management, securing equipment rentals, and personnel management, as well as general secretarial support (Bangdiwala et al., 2003; Meinert et al., 1983; Curb et al., 1983). These last two categories were sometimes conflated into one, but the described duties were consistent.

One overarching theme raised in some of the papers is the difficulty of staffing a CC. CCs are expected to have on-staff expertise in a wide range of activities, including administration, statistics, federal regulations, human subjects, technology, and organizational development. At the same time, the CC's organizational structure is expected to evolve over the course of the project in response to changes in the work, while minimizing costs. At a workshop at the CCMP kickoff in 1977, the group reported:
One major managerial problem has to do with the establishment of a
large, well-trained staff and whether personnel should be retained or
transferred out once a study is terminated. Many university-based
coordinating centers are locked into the cycle of maintaining these
staff positions and have invested much time and effort in staff
training in order to fulfill their function. Frequently the only way
personnel can be retained is to proceed directly into another study.
Since this option is not always available, there is a clear danger in
creating too large a coordinating center within a university setting.
(Meinert, 1977, p. 265)

This staffing difficulty is even more challenging given the current financial climate and budget cuts at NIH. Finding funding to support the infrastructure of a CC, as opposed to funding a CC for a specific project, is, in our experience, virtually impossible. This situation leaves CCs with the dilemma of losing experienced staff and institutional memory or continuously taking on new projects, not necessarily on anything like an optimal schedule.

Curb et al. (1983) and Blumenstein et al. (1995) noted that one of the major problems of running a CC is the time crunch inherent in such a project. Once funded, CCs are expected to get the project up and running quickly, with little attention paid to the set-up phase. These papers argued that more time spent on securing agreement on organizational issues such as data-sharing agreements, authorship policies, and communication, as well as scientific issues such as common data, survey forms, and required technologies, would have made the project run more smoothly and, thus, produce better science more quickly (Curb et al., 1983). CC managers also noted that more time for close-out and staff time to support manuscript writing at the end of the projects would have, similarly, led to even stronger outcomes for the project (Blumenstein et al., 1995).

There is a great variety in the organizational models followed by the different CCs described in the literature. Blumenstein et al. (1995) described several different models of clinical trials and several different models of CCs, although no discernible pattern for matching these was described. Curb et al. (1983) noted that "[t]he role of a coordinating center in a multicenter clinical trial varies with the particular design and organization of each trial" (p. 171). Their implication is that the organizational structure of the CC must also be a consequence of the trial it supports. Curb also asserts that responsibilities, and, therefore, the staffing makeup, of the CC must shift as the trial progresses through its phases.

Thus, the literature on CCs is lacking a comprehensive model of what different kinds of CCs look like, how they are formed, how they should be managed, or even what impact they have on the projects they are coordinating. Furthermore, the projects being coordinated are structured in many different ways, with little understanding of what types of CCs might work best for these different types of projects. In short, we know very little about how either CCs or the projects they coordinate actually function.


Methods

The findings presented reflect research on two consortia, known here as the Biomarker Network and the Screening Network. (The network and participant names are pseudonyms.) Their CCs are housed at the Fred Hutchinson Cancer Research Center (FHCRC) in Seattle, WA, and are run by a group at FHCRC that specializes in the management of multi-site research projects, the Science Facilitation Team (SFT). Thus, the two CCs share many staff and PIs, making them ideal to explore the work required to support consortia with different scientific objectives.

The Biomarker Network has been in operation for approximately 12 years and has, as its overarching scientific objective, the discovery and validation of biomarkers for cancer diagnosis and prognosis. The aim of this program is to establish the efficacy and reliability of such markers for use in clinical practice. The Biomarker Network has many research sites and affiliate members around the world.

The Screening Network is a relatively new project, having been funded approximately four months before fieldwork began (Fall 2012). It seeks to improve cancer screening in the United States by developing a deeper understanding of the process and by searching for ways to personalize screening recommendations based on risk profiles. The specific aim of the Screening Network is the creation of a repository of screening information across populations at seven research centers in order to understand the impact of screening. Three of these research centers are focused on breast cancer, three on colorectal cancer, and one on cervical cancer.

For this qualitative, interview-based study, we interviewed 17 consortium members, including nine CC staff and PIs, two funding-agency representatives, three Biomarker Network PIs, and three Screening Network PIs. The interviews were semi-structured, with questions focused on the work of the consortium and the CC. Interviews were digitally recorded and transcribed, then coded using qualitative-analysis software, according to interview questions and themes, following a grounded-theory approach (Charmaz, 2009). We also conducted 95 hours of observations of meetings of the SFT over the course of seven months and attended three of the larger, in-person meetings of the consortia themselves.

This research was approved by the Institutional Review Board of the Fred Hutchinson Cancer Research Center. Written consent was obtained from all participants.

[In this paper, data from participant interviews are noted by the participant's name and transcript line number in parentheses (e.g., (Martha, 382)).]


Coordinated Collaborative Science

The CCs under study were charged with facilitating coordinated collaborative science. As the name implies, the employment of a CC as a tool to facilitate the network's scientific objectives is a defining characteristic of coordinated collaborative science. Per the RFAs, the CC's primary responsibilities revolve around the operational and logistical coordination of the collaborative activities and data management and data analysis for collaborative projects. CC staff and PIs are expected to organize all network meetings, guide all the collaborative activities to ensure the production of high-quality data, create systems to manage the project's data, and perform statistical analyses on those data (Biomarker Network RFA, Screening Network RFA). The CC also plays a role in generally helping the group of diverse sites work together as a network. However, as will be shown below, that role is not always well defined or even agreed upon.

The research centers are the grantees charged with performing the scientific work as proposed in their grant applications. The precise nature of the work of each research center varies, from recruiting patients to extracting data from databases, but is all done in service of the overarching scientific objectives as defined in the RFA. In addition to their scientific work, the research center PIs are expected to participate in the collaborative activities of the consortium. These activities include attendance at meetings, contribution to discussions about the scientific direction of the consortium, active involvement in Working Groups that make decisions about scientific implementation, and participation in resource (e.g., biosample or data) sharing in compliance with consortium policies (Biomarker Network RFA, Screening Network RFA).

The funding agency representatives in a consortium, highly respected scientists in their own right, are there to represent the funding agency's interests; the aim is to ensure that the work proceeds as expected by the original proponents of the project. Funding agency representatives answer questions about the agency's expectations and policies, in addition to giving input on the scientific direction. Like the research center PIs, the funding agency scientists are expected to attend all meetings and contribute to the discussions on achieving scientific goals (Screening Network RFA). They also participate in working groups, as appropriate. They work very closely with the CC to track the progress of the consortium, generally through participation in frequent conference calls between the National Cancer Institute (NCI) and the CC.

Both consortia in this study are funded as cooperative agreements, a specific type of NIH funding in which the funding agency representatives have "significant scientific and administrative input" into the operations of the network (Biomarker Network RFA). The funding agency representatives are not permitted to give direct instructions to the grantees, either to the CC or the research center PIs, on how to do their work, but are expected to give suggestions and guidance to ensure the project is meeting the funding agency expectations (Rebecca, 63).

A Typology of Work

In developing a typology to describe the facilitation of collaborative work by a CC, we began with the categories of CC work presented in Rolland et al. (2011), which documented the work of one specific CC, the Asia Cohort Consortium Coordinating Center, and included four types of activities: collaboration development; operations management; statistical and data management; and communications infrastructure and tool development. Our review of the literature on CCs, primarily papers from individual CCs, produced a list of activities that fit into the Rolland et al. (2011) categories. We then noted that the categories of work in the respective RFAs focused on two main areas of responsibility: facilitating network activities and work that involved data (i.e., data management and statistical analyses). Reconsidering our data and the types of work described by participants, as well as the types of work we observed, we developed the typology described below. We chose to fold the Rolland et al. (2011) category of "communications infrastructure and tool development" into "operational work" because the staff, skills, and overall objectives involved in both were largely the same. Though the RFAs do not mention "collaboration development" as a responsibility of the CC, participants mentioned the work they did to negotiate the activities of the consortium frequently enough that it necessitated its own category.

The observed CCs engaged in a wide variety of complex tasks while facilitating collaboration. Some of these tasks were consistent across projects, such as organizing conference calls and meetings, whereas others were more closely tied to the scientific objectives of the specific program.

We have divided these tasks into four areas of responsibility:

1. Structural work;

2. Collaboration development work;

3. Operational work;

4. Data work.

We briefly describe the first three types of work here, then delve more deeply into the fourth, as it is in data work that the experience and expertise of the CCs play out most explicitly and most clearly show the deep and lasting impact of the work of a CC.

Structural work

Structural work consists of those activities that shape the official rules of the project and dictate the organizational structure of the consortium, once funded and initiated. Most of the structural work is done by the funding agency in the development of the RFA, which specifies the scientific objectives of the project, the governance structure (i.e., required committees and how the scientific direction will be set), and the overall responsibilities of the grantees. Although this work is predominantly in the realm of the funder, the CC may need to participate if changes take place during the funding cycle or in the development of the RFA to re-fund a consortium. The structural work of a consortium--and its impact--is also discussed in a related paper published separately.

One example of the CC's involvement in developing the structure of its consortium is evident in the Biomarker Network CC's influence on the RFA for the Biomarker Network's third funding cycle. Toward the end of its second funding cycle, the Biomarker Network CC suggested the introduction of "team project" requirements for each organ-specific working group as a way to increase the amount of collaborative science taking place within the consortium. Some in the CC felt that not enough collaboration was happening within the biomarker-discovery labs, which was holding back the entire Biomarker Network. Adam, a Biomarker Network CC PI, reported, "'[t]eam projects' is a concept we proposed after the [first] two cycles...because we saw [that, for] the individual biomarker-discovery lab, most of them just do not have [the] ability or capacity to move the biomarker to validation. So we thought maybe they needed some help. And so if we have team projects, as a team they can pool resources together, pool expertise together, can recruit the sample quicker and they can identify [some] of the most important questions" (Adam, 275). These team projects are still getting off the ground, but have already led to greater collaboration among the discovery labs, which the CC hopes will result in more biomarkers to validate (Adam, 345). Adding more responsibilities to the project requirements in the RFA is engaging in structural work.

Collaboration-development work

Collaboration-development work is defined here as the extra work scientists participating in a collaborative project do to meld the disparate groups of individuals and institutions into a functioning whole, or, in the words of many of our participants, to make the consortium "greater than the sum of its parts." This work includes participating in committees and working groups, negotiating roles and responsibilities of consortium participants, creating meeting agendas, reviewing consortium documents such as governance manuals, and aligning human-subjects applications across projects and institutions. Such work takes a great deal of time, yet was rarely accounted for in the time commitment that research sites allocated in their grant proposals. Participants noted that they often participated in several committees or working groups in each consortium, each with a monthly conference call and associated work. They also noted that these groups rarely had defined objectives and could simply waste time if not well led.

The prioritization of work in the face of limited resources falls into the category of collaboration-development work. One of the processes developed by the Biomarker Network CC was a system to evaluate proposed collaborative projects. During the first grant period, the Biomarker Network CC realized that they did not have the resources to coordinate all of the studies being proposed by research center PIs. Accordingly, the CC PIs rated each project based on criteria such as scientific impact and required resources and then ranked them. At first, funding agency representatives and the Executive Committee were very resistant to this approach, thinking that the CC had overstepped its bounds; indeed, they rejected the idea. However, the CC presented their rationale and methods to the NCI and Executive Committee at their next site visit and the visitors were quickly convinced that this was the right approach.
So our proposal to NCI is we help them to identify [the best proposals]
because we had so many team projects and the NCI thought we should
coordinate all. And we said no, no, that's not possible. So we offered
to read those [submitted] team projects and identify which ones we
think are the good ones, good in the sense that their prospective
collection does not have bias and it's more likely to be very useful by
the end. And so we will rank them as higher priority and we propose
that we coordinate those. So at first they were not happy. They wanted
us to [coordinate] all [the proposals]. They had a site visit, I think
in year one, and that was one important question. So the NCI project
director and the two chairs of the Biomarker Network [visited us for
our site visit]. We presented our thinking, and we [told] them here are
our rankings. And so after our presentation, they had a closed
discussion. And so after that then it's yeah, we'll do it the way you
guys say it. And they never raised that issue again....Because our
criteria are clear. If it is approved, the study design principle is a
prospective collected, and those are high quality ones that we ranked
high (Adam, 371).

The CC's experience with study coordination and scientific expertise allowed them to make a rational, evidence-based case for which studies should receive access to the CC's limited resources. Furthermore, they had done so using criteria that were objective and drawn from the scientific objectives of the Biomarker Network. The effect of this action by the CC was twofold. First, by creating an objective system of scoring based on scientific merit, the CC eliminated some of the political issues around evaluating the projects; e.g., scientists are not immune to the pressures of supporting a project because it is proposed by a powerful colleague. Second, by providing leadership in the area of project prioritization, the CC saved the Biomarker Network from wasting a substantial amount of time: had the CC not done this, all the work of devising criteria, scoring each project on those criteria, and ranking them would have taken considerable time in future Steering Committee meetings.
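The shape of this prioritization process (scoring each proposal against explicit criteria and ranking the results) can be sketched as follows. This is purely an illustration: the criteria names, weights, and scores below are our own inventions, not the Biomarker Network CC's actual rubric.

```python
# Hypothetical sketch of criteria-based proposal ranking.
# All criteria, weights, and scores are invented for illustration.
def rank_proposals(proposals, criteria):
    """Rank proposals by total weighted score, highest first."""
    def total_score(proposal):
        return sum(weight * proposal["scores"][name]
                   for name, weight in criteria.items())
    return sorted(proposals, key=total_score, reverse=True)

# Example criteria: impact and prospective collection raise a proposal's
# rank; heavy resource demands lower it (negative weight).
criteria = {
    "scientific_impact": 2.0,
    "prospective_collection": 1.5,
    "resources_required": -1.0,
}
proposals = [
    {"id": "A", "scores": {"scientific_impact": 4,
                           "prospective_collection": 5,
                           "resources_required": 2}},
    {"id": "B", "scores": {"scientific_impact": 5,
                           "prospective_collection": 2,
                           "resources_required": 5}},
]
ranked = rank_proposals(proposals, criteria)  # "A" outranks "B"
```

The point of such a scheme is not the arithmetic but its transparency: because every score traces back to a stated criterion, the ranking can be defended to a steering committee, as the CC did at its site visit.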

The Screening Network, on the other hand, struggled in the area of collaboration-development work as a result of differences of opinion over roles and responsibilities, compounded by disagreements over the scientific goals of the project. These disagreements resulted in much effort being devoted to discussion of the overarching purpose of the collaboration and how to work together. These conflicts are discussed in greater detail in a related paper published separately.

In general, the observed CCs took the lead on all collaboration-development work, organizing committees and working groups, scheduling conference calls and tracking their work, coordinating the writing of any governance documents, and creating a central human-subjects document that could then be altered by participating research centers. This leadership and the work done by the CC on behalf of the research sites not only centralized coordination, ensuring greater alignment among tasks, but aimed to reduce the amount of time that research-center PIs needed to spend on it.

There is another, less tangible benefit of the CC's leadership of the collaboration work. Because they had been working with consortia for many years, CC PIs and staff were able to guide the groups toward overall policies that had proven beneficial in the past. Furthermore, because the CC personnel had a high-level overview of the consortia and what each research center was doing, they were better able to ensure that specific policies worked for the majority of participants. Finally, as a neutral party, the CC was in a position to negotiate differences among participating research centers and to ensure that the achievement of objectives remains the consortium's highest priority.

Operational work

The operational work of the CCs comprises the administrative and technological tasks done in support of the group's scientific objectives. The aim is to help the group's diverse and varying tasks function in a coordinated fashion, e.g., each CC organized conference calls so the groups could get together and draw up plans for data collection, harmonization, and analysis. Operational tasks include building the project's website, developing and administering email listservs and other communications, organizing meetings and conference calls, and tracking the consortium's publications. Although these tasks are not considered "scientific work," their performance by CC PIs and staff allows research center PIs to spend less time thinking about, and dealing with, project administration and more time working on science.

The CC group's previous experience with coordinating collaborative research meant they were able to start quickly. They had existing contracts with conference call providers, had systems in place for scheduling conference calls, and had computer programmers on staff. In fact, the Biomarker Network CC had spent substantial amounts of time developing these systems and was able to put them into use rapidly when awarded the grant to manage the Screening Network CC.

Whereas operational activities, in general, require little scientific knowledge to complete, they have a profound impact on the group's ability to achieve its scientific goals. Anyone who has ever spent time organizing a conference call involving dozens of participants across multiple time zones understands how much work it really entails. When that effort is multiplied by any number of committees and sub-groups, it can become almost a full-time job in a large, complex consortium. We are unable to quantify the precise amount of time spent on operational work; however, the Biomarker Network consortium had one full-time (100% FTE) project coordinator engaged only on this aspect. The Screening Network started with a project coordinator devoting a smaller amount of time but, by the end of our observations, was hiring a full-time coordinator. Additionally, the project managers of both projects spent substantial amounts of time on operational work, as did the computer-programming staff.

Data work

Both CCs engaged in substantial amounts of data work, the focus of which is the generation of the highest-possible-quality data for collaborative projects. Again, the range of activities in this category is wide and varies based on the scientific objectives of the particular project. In the Biomarker Network, standardized protocols had to be developed in each biomarker trial to ensure uniform collection of data and samples, whereas, in the Screening Network, common data elements (CDEs) had to be extracted from existing databases by participating research sites. (CDEs are standardized definitions of data to be collected or shared [National Cancer Institute, 2014].) Each of these goals required the CC PIs and staff to draw upon their expertise to ensure the collection of the correct data.

It is in this area of data work that the CC's experience and expertise played out most explicitly, with the greatest impact on the consortium's progress toward its scientific objectives. The CC team has learned important lessons from each study they have coordinated, lessons that have then been incorporated into improved processes for subsequent studies. Specific examples of data work that the Biomarker Network CC team developed and improved in the light of their previous experience include: a) the establishment of common data elements and data-entry forms; and b) the creation of eligibility-criteria flowcharts.

Common Data Element (CDE) Development:

Although the research center PIs writing the protocols and leading the studies were responsible for defining the aims of a validation study, they relied heavily on the expertise of the CC both in leading the conversations to discern precisely which data should be collected and in deciding how to represent those data in the data-entry system. The actual data needs of a validation study vary with the proposed clinical purpose of the marker. When possible and appropriate, the CC tried to standardize the data collected from each study into common data elements. For example, the CC might use a CDE to standardize the way data on smoking are collected, requesting "Cigarettes Per Day" and "Years Smoked" from each study participant. Because these CDEs had been used in the past, in both Biomarker Network studies and other studies run by the Science Facilitation Team, they had been vetted and shown to be well behaved and useful (Kieran, 185). The CC had compiled a list of standardized CDEs, which allowed for the more rapid development of protocols, in that a PI could review the list and select those most applicable to the study (Kieran, 181). If new CDEs were necessary for a new study, they were created and could also be incorporated into future projects.
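To make the mechanism concrete, a reusable CDE can be thought of as a standardized field definition plus a vetted validation rule that travels from study to study. The following sketch is ours, not the Biomarker Network's actual schema; the variable names, prompts, and plausibility ranges are illustrative assumptions, though the two prompts echo the smoking example above.

```python
# Hypothetical sketch of a reusable common data element (CDE):
# a standardized definition plus a sanity check, reusable across studies.
# Field names and ranges are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class CDE:
    name: str                           # standardized variable name
    prompt: str                         # wording shown on the data-entry form
    validate: Callable[[float], bool]   # vetted plausibility check

# A small library of vetted CDEs, analogous to the CC's compiled list.
CDE_LIBRARY = {
    "cigarettes_per_day": CDE("cigarettes_per_day", "Cigarettes Per Day",
                              lambda v: 0 <= v <= 200),
    "years_smoked": CDE("years_smoked", "Years Smoked",
                        lambda v: 0 <= v <= 100),
}

def build_form(selected: list[str]) -> list[CDE]:
    """A new study selects applicable CDEs rather than redefining them."""
    return [CDE_LIBRARY[name] for name in selected]

form = build_form(["cigarettes_per_day", "years_smoked"])
```

Because each definition carries its prompt and validation rule with it, a new protocol inherits years of vetting simply by selecting from the library, which is the time saving the CC's CDE list provided.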

The information provided in the protocol was used by the CC to develop the CDEs, but conversations were still required to ensure that the right data were collected. When asked about the process of developing the data-collection protocol, Edith, a CC staff member, described a meandering, iterative process of working with the study PIs to nail down precisely which data they wanted to collect. She noted that the process was time-consuming because, in practice, multiple conversations were needed to ensure that everyone was talking about the same thing.
I could walk you through [the process] but it's really more like
wandering around in the forest. It's an iterative process....So
when someone proposes a project they usually say, "We want to collect
these kinds of information about the patients that are supplying these
samples or the patients that are being analyzed in some way." And they
can be fairly general. And so we will talk to them and say, "Okay,
let's try to come up with a specific list of all the data points that
will collect this information that you want." Sometimes we use data
lists from other projects and adapt them. And we'll send them either an
Excel sheet or a Word document that is more precise. Then they'll say,
"Oh yeah, well we really didn't mean that, that, that. We meant this,
this, this. And this is what this other study did and the way they
collected it, but that's not the way we think about it so we want it
phrased differently." (Edith, 174).

As this makes obvious, PIs, left to their own devices, could find it difficult to express precisely what data they wanted for their research. One of the ways in which the CC added value to the data work of the consortium was by leading this process of tightly specifying the data to ensure that their collection would be rigorous and focused. If the data collected are not exactly what the researchers need in order to confirm the validity of the biomarker, the entire study will be much less valuable, perhaps fatally flawed. Previous experience with CDEs helped avoid this outcome.

When asked for a specific example of a time when she experienced that disconnect between what a research center PI thought s/he wanted and what s/he actually wanted, Edith described this incident:
A project that we're working on currently, one of the forms is
collecting information on lung nodules, and we have never collected
that kind of information before. So, we're collecting information from
either CT scans or MRIs. And there are a lot of technical data points
that have to do with running a CT machine or an MRI machine that we
don't necessarily know what they really mean. But it's obvious to the
clinicians who do it all the time. And so we've had a lot of back and
forth about how best to organize that information and exactly what
information is needed. And we finally realized that what we really
wanted was not information for every CT scan, but information on every
nodule, whether it was a CT scan or an MRI. And then we'd follow that
nodule and follow up, and that was a huge difference. And so just
working that out took a lot. So, you start with what they give you and
you try to figure it out but then you have to go back to them and say,
"Well, I think this means this, and it would look this way in our
system. Tell us what needs to change." (Edith, 271)

Here, we see how, by an iterative process, Edith discovered that the required data centered not on a CT scan or MRI, but on each visualized nodule and what was known about it and done to it. These required two fundamentally different data structures, a conclusion obvious, perhaps, only to those who specialize in thinking about data collection.
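The difference between those two data structures can be sketched briefly. In this illustration (field names and values are our own, hypothetical, not the project's schema), a scan-centered structure records imaging events, while a nodule-centered structure makes the nodule the unit of follow-up, regardless of which machine imaged it:

```python
# Hypothetical sketch of the two data structures contrasted in the text.
# Structure 1: one record per imaging event (CT or MRI).
scan_records = [
    {"patient": "P01", "modality": "CT",  "date": "2014-01-05"},
    {"patient": "P01", "modality": "MRI", "date": "2014-04-10"},
]

# Structure 2: one record per nodule, with its observations over time.
# The nodule, not the scan, is the unit that gets followed up.
nodule_records = {
    ("P01", "nodule-1"): [
        {"modality": "CT",  "date": "2014-01-05", "diameter_mm": 6},
        {"modality": "MRI", "date": "2014-04-10", "diameter_mm": 8},
    ],
}

# Only the nodule-centered structure lets one ask directly how a given
# nodule changed between visits:
history = nodule_records[("P01", "nodule-1")]
growth = history[-1]["diameter_mm"] - history[0]["diameter_mm"]
```

The scan-centered structure can answer "what scans were done?" but not, without reshaping, "what happened to this nodule?", which is why getting the unit of analysis right before building the data-entry system mattered so much.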

Creation of Eligibility-criteria Flowcharts:

A second example of experience and expertise being used to improve systems and processes is the CC's work on developing eligibility-criteria flowcharts. When designing a clinical validation study, it is crucial to have precisely defined and scientifically appropriate criteria to determine who is eligible to be enrolled. There were some early Biomarker Network studies where the eligibility criteria encoded in the protocol and, subsequently, in the data-entry forms, proved to be in error--either eligible participants were not enrolled or ineligible participants were. Subsequently, the CC developed a detailed process to ensure that all parties were deeply familiar with the criteria for eligibility and that this understanding was precisely encoded into the protocol (Edith, 249). The process was designed to ensure both: (a) that the PIs themselves were clear on the implications of the eligibility criteria they had proposed; and (b) that there were no misunderstandings in terminology or intention as the CC interpreted what the PI had proposed (Edith, 184). Edith described her goal in developing the eligibility-criteria flowcharts as explicitly documenting who would be included vs. excluded in such a way that the logic contained in the flowchart could be easily programmed into the data-entry system, all with the goal of ensuring that the proper participants had been recruited.
My goal in an eligibility flowchart is to combine in one document all
the online phrasing of each data point that's required to determine
exclusion and inclusion. And also the place in the database where the
programmer can find where that data point will be stored. And also
the--so you found this data point, it's got this value for this
person, what do you do with that? And so the idea is for each, to
create a point where you start off with a data point. You describe
everything about it, and you have arrows that point to the options
depending on the value of the data point (Edith, 222).

In essence, Edith's work on the eligibility flowcharts acted as a bridge between the data work of identifying the eligibility criteria and the operational work of building the data-entry system.

The development of the flowcharts required iterative conversations among the PI of the protocol, Edith, other CC staff, and the project statisticians at the CC, who were called in to evaluate the eligibility criteria and calculate the number of participants likely to be recruited under the proposed rules at the proposed sites. From these conversations, the CDE specialist created a flowchart that made explicit the data that determined eligibility. For example, the first data point used to determine eligibility might be age, say, excluding any patients under 60. Next, a check on the patient's previous diagnosis of cancer might exclude more patients, possibly recruiting only those with no previous cancer. Such detailed attention to eligibility decisions allowed the PI to adjust the recruitment strategy and choose the study sites before, rather than after, the study began, saving time and money. There might still be adjustments once the project was underway, but they would likely be less dramatic as a result of these steps. This work resulted in fewer ineligible participants being enrolled and fewer eligible participants being missed.
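The flowchart logic described above--each data point, where it lives, and an arrow for each possible value--is exactly the kind of decision chain that can be handed to a programmer. A minimal sketch, using the two example criteria from the text (exclude patients under 60; recruit only those with no previous cancer) with otherwise hypothetical field names of our own:

```python
# Sketch of eligibility-flowchart logic encoded for a data-entry system.
# The two checks mirror the examples in the text; field names are ours.

def eligibility(record: dict) -> tuple[bool, str]:
    """Walk the flowchart in order; return (eligible, reason), echoing
    the flowchart's goal of documenting exactly why someone is in or out."""
    if record["age"] < 60:
        return False, "excluded: age under 60"
    if record["previous_cancer_diagnosis"]:
        return False, "excluded: previous cancer diagnosis"
    return True, "eligible"

candidates = [
    {"id": "A", "age": 55, "previous_cancer_diagnosis": False},
    {"id": "B", "age": 67, "previous_cancer_diagnosis": True},
    {"id": "C", "age": 72, "previous_cancer_diagnosis": False},
]
enrolled = [c["id"] for c in candidates if eligibility(c)[0]]
```

Running such a screen against expected site populations before enrollment opens is what let the statisticians estimate likely recruitment yield, and what let the data-entry system reject ineligible participants automatically.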

The aim of the CC data work described here was to ensure that the study sites generated high-quality data that could then be sent to the CC for analysis. By using their experience with previous studies to improve data collection in subsequent studies, the CC took advantage of the skills and knowledge, both individual and collective, which had been developed over more than a decade of study coordination. This focus resulted in studies that operated more smoothly because the routine challenges of designing a study, e.g., data collection and eligibility, have already been addressed and codified.

In interviews with both internal and external Biomarker Network participants, this data work was noted as critical to the success of the project. When asked about the role of the Biomarker Network CC, Karen, a CC staff member, noted first that it was to ensure high-quality data for the validation studies (Karen, 38). Several other members of the CC also stressed that high-quality data are the top priority for all the data work that they do. The focus on quality stems not just from a desire to do their jobs well, but also from an understanding that only high-quality data will securely underpin the group's scientific objectives. If the data were suspect, the Biomarker Network would lose the ability to make claims about the quality of a biomarker, as described by Tamara, a Biomarker Network staff member:
I think it's a process of educating folks that if you're trying to
figure out a usable biomarker, it's imperative that your samples are
uniform and are of the highest quality. So it benefits you to follow
these protocols and I think it's educating the people to think in a
bigger picture. This is going to be better science if we all do it in a
standardized way that is of quality. And then ultimately we will have
better outcomes because you won't have some crazy data set....And
so then we'll know, gosh, that biomarker failed and I'm pretty
comfortable that it failed because my samples were of quality or, wow,
that biomarker had awesome results and I'm really confident with my
data because my samples were really good quality. (Tamara, 370)

Tamara noted that it was the CC's responsibility to make sure that validation-study sites understood why compliance with the protocol was so crucial, underscoring the importance of the communications work done by the CC.

In addition to facilitating consortium-level work, the Biomarker Network CC was charged with developing novel statistical methods for biomarker science. When the Biomarker Network began, little was understood about how to ensure that biomarker validation studies were reliable. As James, a Biomarker Network research center PI, noted, "the science of biomarkers is complicated....Say you have a blood test or a urine test that you think finds a cancer early--one would like to think that there is a very simple design of a study that will confirm that. Actually, it is extraordinarily complex" (James, 35). The CC has made major contributions to the field of biomarker science by creating study design and clinical validation criteria for biomarker discovery and development (Pepe, Feng, Janes, Bossuyt, & Potter, 2008; Pepe et al., 2001).

Thomas, an NCI representative, described the importance of the work of the CC, noting the lasting impact of the CC's statistical work not only on the Biomarker Network but on the field of biomarker science overall.
For example [CC statistician] is so well-known in the area of screening
and early detection for her statistical research. [Adam], again, very
well-known in the field, so they come up with some creative ideas and
one of the creative ideas that you can think about was their
publication on five-phase criteria for biomarker discovery and
development. What should drive the study design? So they talk about
clinical endpoints, then what sort of specimens are needed for that,
to meet their clinical goal. So the PRoBE design expands on five-phase
criteria to elaborate on the requirements of the biomarker validation,
depending on the organ sites you deal with. So I think those are the
unique contributions that CC has made to the research within their
coordinating center and this has been partially because we have leaders
in the field of statistical design at Fred Hutch. So those were
something that they did for the larger community but they also
conducted studies within their center and that are very useful for
everything we do within Biomarker Network. (Thomas, 109)

In addition to the obvious benefits to the Biomarker Network of developing stronger and more reliable methods for validation studies, the CC's work on statistical methods and study design has had the added benefit of boosting the entire field of biomarker science.

Thomas further described the substantial impact of the CC data work on the Biomarker Network, noting that he wished they had more funding for the CC to expand their services from work on trans-Biomarker Network projects to the individual projects of the research centers.
Honestly, I don't want to brag about it but [CC staff] are so
well-appreciated by members of the [Biomarker Network] that some of the
members started asking whether [the CC] can advise individual members
on their statistical study design. That was not possible because of the
funding restrictions and also the funding limitation. But [CC] agreed
that on a case by case basis they will help individual investigators if
the study is likely to lead to a large validation study. (Thomas, 153)

The biostatisticians of the CC have developed such a reputation for elevating the quality of studies that Thomas of the NCI wished they could also be involved in the statistical work of the research centers' individual studies, especially in the realm of designing stronger studies. Well-designed studies result in more valid conclusions; even null studies produce new knowledge. Unfortunately, the resources of the CC are limited, such that they are able to coordinate only four to five trans-Biomarker Network validation studies at a given time.

The Screening Network CC (essentially the same group), on the other hand, struggled with data work. During the period of observation, the majority of data-related work done by the Screening Network CC was focused on securing agreement from the Screening Network research centers about which data elements to send to the data repository and in what form. As they worked toward that objective, the CC tried to use its extensive knowledge of cancer-related data elements to steer the group toward choosing data that would result in the best analyses. The CC's experience in data collection within the Biomarker Network had given it a deep understanding of the potential pitfalls in collecting and harmonizing such data; however, owing to organizational issues with roles and responsibilities, detailed in a companion paper, the CC had difficulty getting the research center PIs to agree on which data to collect and, paradoxically, struggled to bring its own experience and knowledge of data-collection procedures to bear for the benefit of the Screening Network (Edith, 315; Nigel, 509).


Each CC provided valuable services and a unique perspective on its project, facilitating the collaborative work that could drive the consortium toward its scientific goals. Because of its experience in coordinating consortia, the CC was able to help the groups create processes and policies that were effective and supported the science. This, then, is the essence of facilitation in Coordinated Collaborative Science: moving a consortium toward its scientific objectives through the application of expertise in the following areas.

1. Objectivity and Big-Picture Thinking: One of the great advantages a consortium gains from the addition of an independent CC is a neutral third party with a high-level view of the entire research program. The CC occupies a unique position in that it is simultaneously a grantee and a scientific contributor, yet it is not a research site in the consortium. As such, the CC enjoys a level of camaraderie with the other grantees and can speak their language, but also has a direct relationship with the funding agency. As a neutral facilitator, the CC can use its position to keep the consortium focused on the overarching goals and scientific objectives of the project without getting distracted by its own agenda. This was the first major advantage of an independent CC that research center PI participants identified in interviews.

2. Leadership: A strong CC can provide leadership in an environment that is generally devoid of it. The cooperative agreement structure is such that it is led by a Steering Committee, which may be made up of dozens of research center PIs, CC PIs, and funding agency scientific staff. What this means, in practice, is that everyone is in charge and no one is in charge; progress is dependent on one or more people stepping up and taking leadership roles, which may or may not happen. When the CC has strong PIs who are well-versed in leadership of consortia and understand what it takes for a consortium to flourish, the consortium as a whole benefits. The Biomarker Network CC PIs, among other roles, took leadership in their assessment of proposed team projects.

3. Development of Governance Policies and Operating Procedures: As with leadership, the expertise of the CC team comes into play when the consortium is deciding upon governance policies and operating procedures, as well as organizing meetings and running conference calls. Although these activities seem relatively straightforward and are thought of as primarily administrative tasks, the decisions that are made and encoded into the consortium's practices have a lasting scientific impact. For example, a governance structure that allows a small Steering Committee to decide which collaborative projects move forward without input from the rest of the consortium's members can: (a) result in substandard projects being approved for political, rather than scientific, reasons; (b) dissuade non-SC members from submitting projects; and (c) damage feelings of community and consortium-focused efforts. Meetings and conference calls can quickly go from productive and organized to chaotic and frustrating without a strong facilitator. It is difficult, if not impossible, to accomplish scientific progress in that kind of chaos.

4. Data Development and Project Management: CC PIs and staff were not just experts in the biostatistical methods needed to run the appropriate analyses; they were also experts in the conversations and processes required to produce the right data to accomplish a study's goals. As described in the sections on CDE development and eligibility-criteria flowchart creation, Edith and the CC team were not experts in CT scans and MRIs but, rather, experts in the work needed to collect the right data. A CC cannot possibly have expertise in every area of biomedicine or in the establishment of CDEs for every disorder. Although the scientific knowledge they do have proves very useful, their skill in leading conversations toward the collection of appropriate data, as evidenced by both these examples, may be even more important to the outcome of a validation study.

5. Centralizing and Offloading Work: A well-run CC saves participating research center PIs and staff substantial amounts of time by offloading administrative tasks from the research sites onto the staff of the CC. This process allows research site PIs to spend more time doing science and less time on organizational and administrative tasks. In general, the research site PIs whom we interviewed were committed to an average of 10% FTE on the consortium under discussion. This time commitment encompassed not only their responsibilities for their independent projects at their local sites, but also their responsibilities to the consortium. Considering an average work week of 40 hours (a marked underestimate for most working scientists), the 10% commitment gives them four hours per week to meet their obligations to this project. Clearly, any work the CC takes on frees up some of that limited time. The CC's focus on producing high-quality data likewise saves time and effort for the PIs: the upfront effort that the CC puts into establishing data structures and data-collection instruments saves the project PIs from having to do or redo considerable amounts of work. Finally, the contribution of the CC can allow research sites to spend less of their grant funding on administrative and organizational aspects of the project. Although we were unable to measure this saving directly, it was mentioned by several participants.

And yet, from the examples given here of the struggles of the Screening Network, it is clear that the experience and expertise of the CC are not enough. We have to ask why, precisely, the CC was not able to apply its established, vetted, and proven systems and processes to ensure strong facilitation of the Screening Network. We present one answer to this question in our companion paper, which describes major differences in the RFAs that initiated the Screening Network and the Biomarker Network. Simply put, when the CC is not able to engage fully as facilitator, for whatever reason, the consortium suffers. We have also seen the cost of a weak CC in the setting of other consortia.

Furthermore, the skill set of the Science Facilitation Team at FHCRC is unique and developed over decades of experience in managing collaborative research. We must ask how other CCs can develop similar skills without needing to first invest decades of work.


CCs such as the one described in this paper are powerful, underused tools that facilitate Coordinated Collaborative Science, tools that show great promise in helping groups of researchers working on pressing problems to make greater progress. By applying collective decades of experience and expertise in the facilitation of collaborative work, the CC PIs and staff were able to provide the consortium with a neutral, third-party view of the project, keeping it on track toward its scientific objectives and providing leadership when needed. The CCs also helped the consortia avoid some of the pitfalls of collaborative research that are well-documented in the literature on team science. By doing these things, the CCs saved research site personnel time, effort, and money.

Yet groups such as the Science Facilitation Team discussed here are rare, primarily because of the difficulties in developing and sustaining such an organization under the project-based funding model of scientific research. This is as true today as when the 1978 Coordinating Center Models Project report was written. It is extremely challenging for an organization to maintain the systems, personnel, and knowledge base required to facilitate collaborative science at this level without consistent funding. We call on the National Institutes of Health to begin considering such groups essential components of all collaborative projects, and especially of large collaborative research, funding them as infrastructure rather than as an administrative component of individual projects. There are precedents for such a move, exemplified by the Supercomputer Centers funded by the National Science Foundation. Our research begins to support the hypothesis that a project's coordination and facilitation have as deep and lasting an impact on its scientific progress as its computing facilities.

Furthermore, we call for more research on CCs, team science initiatives, and consortia to develop guidance for new CCs as they develop their own systems and processes to facilitate Coordinated Collaborative Science. As mentioned earlier, few resources exist to help in this area, but we believe the development of templates and sample governance manuals could greatly decrease the time and effort required to guide a consortium through its initial start-up phase.

This research on CCs and their facilitation of collaborative research is a beginning. We need to develop a deeper understanding of this facilitation work and seek ways to better document the processes and procedures that the CCs described here use in such a way that their knowledge can be transferred to other groups that facilitate collaborative research. We also need ways to train the various consortium participants--funding agency representatives, research-center PIs and CC personnel themselves--in what facilitation entails.

Authors' Note

This manuscript draws upon the work of Dr. Rolland's dissertation. We would like to thank our participants for their generosity with their time and expertise. This work was supported by the National Cancer Institute at the National Institutes of Health (grant number R03CA150036) and by the Fred Hutchinson Cancer Research Center.

Betsy Rolland, PhD, MLIS, MPH University of Wisconsin-Madison Carbone Cancer Center 800 University Bay Drive, 210-19 Madison, WI 53705 (608) 262-8314 Email:

Other affiliations: Public Health Sciences Division, Fred Hutchinson Cancer Research Center; Human Centered Design & Engineering, University of Washington

Charlotte P. Lee, PhD

Associate Professor, Human Centered Design & Engineering University of Washington

John D. Potter, MBBS, PhD

Member and Senior Advisor

Public Health Sciences Division, Fred Hutchinson Cancer Research Center; Department of Epidemiology, School of Public Health, University of Washington; Centre for Public Health Research, Massey University


Bangdiwala, S. I., de Paula, C. S., Ramiro, L. S., & Munoz, S. R. (2003) Coordination of international multicenter studies: Governance and administrative structure. Salud Publica de Mexico, 45(1), 58-66. doi:10.1590/S0036-36342003000100008

Berge, K. C. (1980). Perceptions of the coordinating center: As viewed by an advisory board. Controlled Clinical Trials, 1(2), 143-146. doi:10.1016/0197-2456(80)90019-7

Blumenstein, B. A., James, K. E., Lind, B. K., & Mitchell, H. E. (1995). Functions and organization of coordinating centers for multicenter studies. Controlled Clinical Trials, 16(Suppl. 2), 4-29. doi:10.1016/0197-2456(95)00092-U

Charmaz, K. (2009). Constructing grounded theory: A practical guide through qualitative analysis. Los Angeles, CA: Sage.

Cummings, J., & Kiesler, S. (2007). Coordination costs and project outcomes in multi-university collaborations. Research Policy, 36(10), 1620-1634. doi:10.1016/j.respol.2007.09.001

Curb, J. D., Ford, C., Hawkins, C. M., Smith, E. O., Zimbaldi, N., Carter, B., & Cooper, C. (1983). A coordinating center in a clinical trial: The Hypertension Detection and Followup Program. Controlled Clinical Trials, 4(3), 171-186. doi:10.1016/0197-2456(83)90001-6

Falk-Krzesinski, H. J., Contractor, N., Fiore, S. M., Hall, K. L., Kane, C., Keyton, J.,...Trochim, W. (2011). Mapping a research agenda for the science of team science. Research Evaluation, 20(2), 145-158. doi:10.3152/095820211X12941371876580

Greene, S. M., Hart, G., & Wagner, E. H. (2005). Measuring and improving performance in multicenter research consortia. Journal of the National Cancer Institute Monographs, 35, 263. doi:10.1093/jncimonographs/lgi034

Lachin, J. M. (1980). Perceptions of the coordinating center: Foreword. Controlled Clinical Trials, 1(2), 125-126. doi:10.1016/0197-2456(80)90015-X

Margitic, S. E., Morgan, T. M., Sager, M. A., & Furberg, C. D. (1995). Lessons learned from a prospective meta-analysis. Journal of the American Geriatrics Society, 43(4), 435-439. doi:10.1111/j.1532-5415.1995.tb05820.x

Meinert, C. L., & Coordinating Center Models Project. (1977). Proceedings of the Fourth Annual Meeting of Personnel Involved in Coordinating Collaborative Clinical Trials, Chapel Hill, NC, May 19-20, 1977. Baltimore, MD: University of Maryland Coordinating Center Models Project.

Meinert, C. L., Heinz, E., & Forman, S. (1983). Role and methods of the coordinating center. Controlled Clinical Trials, 4(4), 355-375. doi:10.1016/0197-2456(83)90022-3

Mowery, R. L., & Williams, O. D. (1979). Aspects of clinic monitoring in large-scale multiclinic trials. Clinical Pharmacology and Therapeutics, 25(5 Pt. 2), 717-719. doi:10.1002/cpt1979255part2717

National Cancer Institute. (2013, May 2). Introduction to CDEs. Retrieved June 3, 2014, from monDataElements-IntroductiontoCDEs

Pepe, M. S., Etzioni, R., Feng, Z., Potter, J. D., Thompson, M. L., Thornquist, M.,...Yasui, Y. (2001). Phases of biomarker development for early detection of cancer. Journal of the National Cancer Institute, 93(14), 1054-1061. doi:10.1093/jnci/93.14.1054

Pepe, M. S., Feng, Z., Janes, H., Bossuyt, P. M., & Potter, J. D. (2008). Pivotal evaluation of the accuracy of a biomarker used for classification or prediction: Standards for study design. Journal of the National Cancer Institute, 100(20), 1432-1438. doi:10.1093/jnci/djn326

Rifkind, B. M. (1980). Perceptions of the coordinating center: As viewed by a project officer. Controlled Clinical Trials, 1(2), 137-142. doi:10.1016/0197-2456(80)90018-5

Rolland, B., Smith, B. R., & Potter, J. D. (2011). Coordinating centers in cancer epidemiology research: The Asia Cohort Consortium coordinating center. Cancer Epidemiology, Biomarkers & Prevention, 20(10), 2115-2119. doi:10.1158/1055-9965.EPI-11-0391

Seminara, D., Khoury, M. J., O'Brien, T. R., Manolio, T., Gwinn, M. L., Little, J.,...Network of Investigators Networks. (2007). The emergence of networks in human genome epidemiology: Challenges and opportunities. Epidemiology, 18(1), 1-8. doi:10.1097/01.ede.0000249540.17855.b7

Symposium on Coordinating Clinical Trials. (1978, May). Proceedings of the fifth annual Symposium on Coordinating Clinical Trials. Baltimore, MD; Springfield, VA: University of Maryland, School of Medicine, Department of Epidemiology and Preventive Medicine, Section of Biometry. Retrieved from the National Technical Information Center.

Winget, M., Kincaid, H., Lin, P., Li, L., Kelly, S., & Thornquist, M. (2005). A web-based system for managing and co-ordinating multiple multisite studies. Clinical Trials, 2(1), 42-49. doi:10.1191/1740774505cn62oa

Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036-1039. doi:10.1126/science.1136099


Please Note: Illustration(s) are not available due to copyright restrictions.
COPYRIGHT 2017 Society of Research Administrators, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Title Annotation: Coordinating Center (evaluation)
Author: Rolland, Betsy; Lee, Charlotte P.; Potter, John D.
Publication: Journal of Research Administration
Article Type: Report
Date: Mar 22, 2017
