
RAD Research as a Framework for Writing Center Inquiry: Survey and Interview Data on Writing Center Administrators' Beliefs about Research and Research Practices

Abstract

This article is a follow-up to "Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009" (2012), which demonstrated that although the trend was improving, The Writing Center Journal (WCJ) had published few studies (less than 6% of the total articles) that would be classified as replicable, aggregable, and data-supported (to use Richard Haswell's [2005] terms). To better understand this critical issue, the authors set out to identify the challenges writing center administrators (WCAs) face in conducting RAD research. The present study shares findings from a survey of 133 WCAs and follow-up interviews with 15 selectively sampled WCAs from that initial survey group.


In the last decade, researchers in writing center studies have expressed a renewed interest in data-supported practices and empirical research, as demonstrated in calls by Paula Gillespie (2002), Neal Lerner (2009), and Isabelle Thompson, Alyson Whyte, David Shannon, Amanda Muse, Kristen Miller, Milla Chappell, & Abby Whigham (2009). Despite these trends, the concept of "research" has a highly contested and convoluted history. To give a sense of this, we turn to Alice Gillam's (2002b) introduction to Writing Center Research: Extending the Conversation, where Gillam argues that we need "more explicit talk about what we mean by research, what should count as research, and how to conduct research" (p. xv). She poses three questions that deserve further investigation by members of our field:

* What counts as 'good' or worthwhile research?

* By what criteria do we make such judgments?

* What role has research played in defining our professional identity? (2002a, p. 3).

Other scholars, including Jeanette Harris (2001) and Paula Gillespie (2002), question the kinds of research we conduct and our reliance on anecdote (Gillespie, 2002, p. 39). Despite these calls, a gap remains in examining and understanding the specific challenges inherent in defining and conducting research in a writing center context. With this article, we seek to fill this gap by further investigating the application of Richard Haswell's (2005) RAD framework to writing center research.

To initially address the above questions, we examined the role of The Writing Center Journal (WCJ) in publishing research articles. Our study, published in 2012, consisted of a systematic analysis of all WCJ articles from 1980 until 2009. We examined articles using Haswell's RAD research framework as an analytical lens to determine if the published research articles were replicable (systematic enough and descriptive enough to be replicated), aggregable (able to be built upon and extended), and data-supported (presents clear evidence in support of claims).

Of the 270 articles analyzed, we found that 91 contained some form of data-driven research, primarily using human subjects, but less than 16% of those 91 studies would be considered RAD research (of all articles published, only 5.5% were RAD). We did, however, see promising trends of more RAD-leaning research as time passed. While we felt that we answered the question "What is the status of research in the field, as it is reflected in WCJ?" we were left with two overarching questions: "Why do we have so little RAD research?" and "What barriers prevent writing center administrators from conducting more RAD research?"

Haswell's RAD Research Framework

Haswell defines RAD scholarship as "a best effort inquiry into the actualities of a situation, inquiry that is explicitly enough systematized in sampling, execution, and analysis to be replicated; exactly enough circumscribed to be extended; and factually enough supported to be verified" (p. 201). He argues that the RAD framework works with existing research paradigms, including "feminist, empirical, ethnomethodological, contextual, action, liberatory, or critical" approaches, provided they are replicable, aggregable, and data-supported (p. 202). In essence, RAD identifies research qualities that may help writing center administrators build a base of evidence-supported best practices and establish a tradition of research that both builds knowledge and further legitimizes the field. Because RAD qualities can be applied to many types of research, RAD allows us to steer clear of false binaries, such as debates about the efficacy of qualitative versus quantitative research that have taken place in the broader field of rhetoric and composition (see Johanek, 2000) or debates concerning the theoretical underpinnings of research orientations (such as the empirical versus critical research debates between Marilyn Cooper and Davida Charney appearing in the 1996-1997 issues of College Composition and Communication). We want to stress that by promoting RAD research in writing centers, we are offering another framework for discussing and engaging in research--not devaluing other paradigms and frameworks that have served and will continue to serve the WC community. We are suggesting that RAD research gives WCAs a new framework in which to work, one that is valued in fields beyond our own.

In the present article, we seek to answer the calls that Gillespie, Harris, Gillam, and others have made by extending our understanding of RAD research via an examination of WCAs' beliefs and research practices. Using survey data from 133 WCAs and interview data from 15 of those administrators, we describe how WCAs conceive of writing center research, address areas of difficulty, define important terms, understand research practices, and theorize RAD inquiry. We conclude by articulating why RAD research provides a particularly fruitful model for writing center inquiry.

Methodology

We define WCAs as those in an administrative role in a writing center; the term includes writing center directors, writing center associate/assistant directors, and graduate administrators. (1) We also include those working as part of a larger unit, such as a learning center, who are in charge of writing consultations. We note that not all WCAs identify themselves as WC researchers and not all WC researchers identify as WCAs.

With this study, we sought to better understand WCAs' research practices and the barriers to their research. Because we knew from our first study that the field does not publish substantial numbers of RAD studies, we sought to further understand what has prevented us from doing so and whether RAD may be of value to the WC community. The following questions informed our study of WCAs' practices (2) and shaped our data collection and analysis:

1. What are WCAs' beliefs about research generally and RAD (replicable, aggregable, and data-supported) research specifically?

2. What do WCAs believe about qualitative and quantitative data?

3. In what kinds of research practices and with what methods do WCAs currently engage?

4. What do WCAs see as the relationship between RAD research, assessment, and program-based reporting for an external audience, such as university administrators?

5. What other challenges do WCAs face with regard to RAD research?

Survey

The survey represented the first stage of our study; it gave us a broad picture of the kinds of research practices, training, and barriers that may prevent WCAs from conducting RAD research. After gaining IRB approval for our study, we used an online survey program (Qualtrics) to administer the instrument. To recruit participants, we posted a short invitation during the fall 2011 semester that described our study and asked for WCA participants. We posted this invitation to three listservs that WCAs commonly use: WCenter, WPA-L, and the Michigan Writing Centers Association listserv (the organization of our home state, which may have introduced some regional bias). Two weeks later, we posted a reminder about our survey to those same listservs. We left the survey open for one month. A copy of the survey questions can be found in Appendix I. A total of 133 WCAs took our survey; 99 of them completed all questions. We did not exclude incomplete surveys but rather used all information WCAs provided; accordingly, we report the number of participants who answered each question.

Interviews

At the end of our survey, respondents had the opportunity to include their name and email address if they were willing to be interviewed. Approximately 60% of our respondents indicated they would be open to interviews. From this list, we selected individuals who represented the diversity of the writing center community, including different institutional settings, geographic locations, positions, and views on research. Of the 20 WCAs whom we selected, 15 agreed to be interviewed. Interviews were conducted in summer 2012 using two voice-over-IP programs (Elluminate and Skype) for audio recording. Interviews lasted approximately 45-60 minutes each and were transcribed professionally. A copy of the interview questions can be found in Appendix I. Interviewees also were asked questions specific to their survey responses.

Analysis

For the quantitative survey data, we calculated descriptive statistics using tools within Qualtrics as well as the Statistical Package for the Social Sciences (SPSS) 20. To code our qualitative responses (survey and interview), we used a multi-layer coding process adapted from Saldaña (2009). This process was as follows: Initially, we read through the qualitative data and independently coded it for emergent themes. We then met to discuss our themes and developed a tentative list of codes based on our independent readings. Next, we re-read the data with our new list and re-coded independently. We met again to discuss our responses and finalize our analysis. Through this work, a series of six themes emerged, two of which--the politics of research and the practices of research--are the focus of this article.
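
For readers curious about what the quantitative step involves in practice, the brief sketch below illustrates descriptive statistics for Likert-scale survey items. It is written in Python with the pandas library (rather than the Qualtrics and SPSS tools we used), and the item names and responses are hypothetical placeholders, not data from this study.

    # A minimal sketch of descriptive statistics for 5-point Likert items
    # (1 = Strongly Disagree ... 5 = Strongly Agree). Item names and
    # responses are invented for illustration only.
    import pandas as pd

    survey = pd.DataFrame({
        "empirical_research_important": [5, 4, 4, 3, 5, 4],
        "familiar_with_RAD": [2, 3, 1, 2, 4, 3],
    })

    # Per-item response count, mean, and standard deviation
    print(survey.describe().loc[["count", "mean", "std"]])

    # Distribution of responses for a single item
    print(survey["familiar_with_RAD"].value_counts().sort_index())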

Study Limitations

While we endeavored to gain a representative sample of WCAs, it is likely that some self-selection bias is present in our results. First, the study was limited to WCA respondents who subscribe to one of the three distribution listservs. We recognize that there are likely groups of WCAs whom we could not reach, such as those less connected to the writing center community, those who do not regularly read or subscribe to any listserv, or those who were too busy to respond. Additional self-selection bias was present in terms of who agreed to be interviewed. While we have no way of knowing how those who did not participate would have responded to our questions, we are pleased with our overall sample and believe it is large enough and diverse enough to represent various WCA voices. Additionally, as is the case with much exploratory research, we identified areas where refinement of our questions would have yielded better results. For example, we wish we had asked more about the specific research questions WCAs were pursuing and how they analyze and use data in their centers.

Participant Demographics

Participant demographics are particularly important because we wanted to investigate and understand research that was representative of the broader field of writing center studies--institutional settings, geographic locations, institutional placement, position status of the WCA, and so forth. We present these demographics to show the representativeness of our sample in terms of WCA background, geography, and institutional demographics. WCAs represented a wide range of geographic regions within the USA, including the Midwest (38.2%), Northeast (22.8%), South (27.9%), and West (11.1%); we also had one participant from Europe. Because the survey was distributed to the Michigan Writing Centers Association, Michigan was more represented than any other state (21 participants), followed by New York (12) and Texas (8). A wide range of institutions was represented in the survey, including community colleges (10%), four-year private (22%) and four-year public (21%) colleges, doctoral private (8%) and doctoral public (33%) universities, and other types of schools (6%).

Forty-seven (35%) WCAs were part of an English Department; 55 (41%) were independent; 31 (23%) were part of a larger unit, such as an academic skills center; and 2 (1%) were located in high schools. Sixty-eight percent of respondents were writing center directors; 11% were associate/assistant directors; and 21% were directors of other kinds of centers, tutor educators, and graduate students in administrative roles.

WCAs reported a range of 200-50,000 tutorials per year with an average of 4,625 tutorials. (3) Writing centers employed between two and 90 consultants, with a mean of 24 consultants. Writing centers engaged in a number of professional development practices: 52% required a peer tutoring course, 28% required monthly tutor professional development meetings, 60% required reading a WC tutoring manual, 33% assigned peer mentors, and 65% required new tutors to shadow seasoned consultants, among many other practices.

Most writing center administrators were trained within rhetoric/composition programs (44%) or English literature programs (24%); others came from creative writing (5%), education (10%), linguistics (7%), and other fields (10%). Of the 98 respondents who shared their degree status, 46 had Ph.D.s, nine were pursuing a doctorate, and one held a doctorate in pharmacy. Seven additional administrators had done some Ph.D. work but had not completed those degrees. Three had education specialist credentials. Twenty-three had earned M.A.s or M.F.A.s, one was pursuing an M.A., and one held a B.A.

Our interview participants included six with Ph.D.s and nine with M.A. or A.B.D. status. Of those, all six Ph.D.s were tenure-line faculty members, one M.A. was a contract faculty member, and the remaining eight were administrative professionals. We interviewed three individuals from community colleges; one from a technological institute; one from a specialized college; two from branch campuses; and eight from various universities, including research and teaching colleges. We also attempted to represent regional diversity among our participants, with three interviewees from the South, four from the Midwest, one from the Mid-Atlantic, and three from the West.

Results

The results are organized by research question, and we address themes and findings that emerged from both survey and interview data.

What are WCAs' beliefs about research generally and RAD research specifically?

Our survey reveals a number of competing definitions of "writing center research" among our participants. Table 1 below describes survey participants' responses to the question "How do you define writing center research?" Note that many of the 98 participants who answered this open-ended question provided more than one definition in their description, yielding 157 distinct responses.

Nearly all survey participants suggested that "writing center research" was based in the writing center. But the similarities end there. According to the responses, writing center research might be theoretical, based in assessment, secondary/source-based, methodologically sound, and/or evidence-based. These definitions, as we discovered in our interviews, represent the wide range of educational and professional backgrounds and institutional placements of WCAs. Table 1 provides a breakdown of the different responses from WCAs.

Despite the diversity of descriptions of what research means, participants expressed broad agreement that empirical research is important to writing centers (mean 4.1 out of 5); research was also acknowledged as useful for reporting to administrators/stakeholders (mean 4.43 out of 5).

Our interview participants' responses reveal how definitions of research may be influenced by educational background, position, and institutional placement within the university. On one end of the spectrum is Nathan, an Associate Professor and WC director at a private doctoral institution in the Northeast, who suggests that the writing center should be the site of research: "One of the reasons I was hired was to conceive of the writing center as a research site, as a site for academic and intellectual work around the teaching of writing in writing center contexts." He goes on to describe his mentoring of Ph.D. student tutors and his own research practices, which include both localized research on university populations as well as broader work within writing centers.

On the other end of the spectrum is Nan, who works in a staff position as a WC coordinator within a learning studio at a Midwestern private 4-year university. Her research practices consist mainly of "sharing new articles to come out--Chronicle of Higher Education or The New York Times"--and "grab[bing] an article here, a chapter there." She strongly believes that writing center research is rooted in practice, not in data (which she sees only as necessary to report to administrators and secure funding). In talking about the role of empirical research in her tutor education courses, she argues that empirical research takes away from and distances us from actual practice, which "waters down the work." (4)

While WCAs were generally supportive of data-supported research, survey respondents were much less familiar with RAD research (mean 2.52 out of 5). Those who were aware of RAD research expressed their belief that it is important for "expanding the field's knowledge of research-supported practices" (mean 4.70 out of 5).

Our interview participants revealed a complex relationship with RAD research, with some strongly in favor, some having never heard of the term, and some dismissive of it. Table 2 shows the diversity of views on RAD research from our interview participants.

These results, we think, represent some of the challenges inherent in understanding RAD research in writing centers, given that 26% of interviewees had never heard of it and another 26% dismissed it as impossible or not useful for writing centers. For example, Bonita says she is familiar with RAD research as "quantitative" and "empirical" and suggests that it has limited value for writing centers, as the field's emphasis is on the qualitative (more about this conflict in the next section). Dara suggests that we should be doing it, but we aren't because "we hold up our fingers like a cross and say 'get away' ... and it's not valid in this paradigm." Carrie argues that we need RAD research, especially replication, because "it's one of the things we are missing." These understandings of RAD research are influenced by participants' (mis)understandings of the roles data play within writing centers and how those data are collected and used. While attitudes towards data aren't the only factor affecting the implementation and support of a RAD research framework, it is clear from such results that these beliefs contribute substantially, and it is to these attitudes that we now turn.

What do WCAs believe about qualitative and quantitative data?

Another area concerning definitions and politics of writing centers is the kinds of data we collect and our relationship to those data, especially the perceived differences and uses of qualitative and quantitative data. Understanding WCAs' relationship with data has substantial ramifications for how the field engages in RAD research because many WCAs continue to associate RAD research solely with quantitative work (as Bonita discusses, above). As such, we set out to investigate this emergent theme in our interviews, where we again discovered two dominant views.

View #1: Qualitative research is more representative of WC work (4 or 26% of interviewees). In our interviews, some participants strongly favored qualitative research (a dominant theme in the broader field of writing studies, as described by Johanek, 2000). This view privileges qualitative research and suggests that, at best, quantitative research is not very useful for understanding the "real work" of the WC and, at worst, according to one interviewee, that it is "meaningless for writing centers." Alice, a WC director at a Midwest two-year branch campus, opined:
   I know that the numbers really don't represent what a good job
   we're doing, so I'm aware that the numbers are sort of this set of
   information I give the chancellor and the dean because they don't
   know that it doesn't mean anything. It makes them happy and it
   makes me happy, even though I know it's sort of meaningless. But
   what's really important is qualitative data.... The thing about
   qualitative data is that it tells you the stories about the people,
   and that's where our business is: helping people.


Alice sees quantitative data as disconnected from students and their stories. In other parts of her interview, she describes being in charge of her program's assessment data and using students' stories to help mitigate the problems she sees with quantitative data. She proclaims, "Here's my numbers; here's what they mean, but let me tell you some stories along the way." While this view was not dominant in our interview data, it likely represents a substantial portion of WCAs.

View #2: Qualitative and quantitative data work together to provide a more complete picture (11 or 73% of interviewees). The second view recognizes the value of both qualitative and quantitative data in developing quality writing center research. Dara, a WC director at a private four-year college in the mid-Atlantic region, concedes:
   I favor mixed methods.... When you're trying to show that something
   is the case or is not the case and your answer is a "yes-no" type
   of question, the quantitative data is far more convincing and far
   more useful. But that said, for a lot of the questions that I ask,
   I don't know the answer so I need qualitative data. I need the
   things that people are saying.... I need to find out what questions
   to ask before I ask a "yes-no" question.


In analyzing these respondents' ideas about quantitative and qualitative research, it is important to note that institutional position likely plays a role in their views. Whereas Alice occupies a severely underfunded staff position in a learning center where she is compelled to justify her existence, Dara is a tenure-line faculty member in a small, private liberal arts college who receives support for her work and research.

We should also note a difference between viewpoints and practice. Only 33% of our survey respondents indicated that they were confident in calculating statistics (responding with either "agree" or "strongly agree"), which likely shapes how respondents view quantitative research. Furthermore, while 76% of interviewees were supportive of quantitative research, fewer than half indicated that they were engaging in quantitative research beyond calculating basic descriptive writing center statistics.

What kinds of research practices and methods do WCAs currently engage in?

We now move from views to research practices and examine the kinds of data that the WCAs collect and use. Table 3 offers a breakdown of the data that survey participants report collecting. Every respondent collected more than one type of data; most collected three to four kinds of data for their writing center. Surveys and session observations were employed most often, but many other data collection methods were present.

All interview participants but one, who was new to the field, reported collecting data on their centers. The purposes, rationales for selection, and uses of that data collection, however, were quite varied. Fourteen (93%) of the interview participants who collected data indicated that one reason they did so was external reporting/assessment, such as tracking students, number of tutorials, etc. However, nine interviewees (60%) also collected data to understand specific aspects of their center and to improve their center's work, through session report evaluations, faculty surveys, tutoring observations, etc. Seven (46%), mostly full-time tenure-line faculty or seasoned staff, were involved in additional projects and/or in translating these data into conference presentations (three of the seven) and publications (four of the seven).

What do WCAs see as the relationship between RAD research, assessment, and program-based reporting for an external audience, such as university administrators?

Because so much writing center data collection is rooted in assessment, we wanted to understand the relationship between assessment, RAD research, and program-based reporting. We asked our survey and interview participants about how they viewed and used the data they collected in their centers. Their responses fell into three categories: external, internal, and reciprocal.

External views of data, which represent 57.3% of survey responses, indicate that data was collected mainly for an external audience of stakeholders, largely for the requirement of keeping the center funded and open. This was the case for Alice, who claimed, "Most of my research is about how to prove that we exist and we deserve more funding. That's the goal of the research." Internal research, which only one (1%) survey respondent indicated having the luxury of doing, was done solely to improve the writing center. The remainder of the survey participants (35.4%) fell into the reciprocal research category, where research was done in a balancing act between internal and external reasons. (5) Cody, a writing center director working at a branch campus of a western public doctoral research university, reported, "I think that there are, of course, overlaps, and often the data that I gather from my research are used in some sort of fashion for unit assessment and it's reported to upper administration, things like that." What these categories and numbers suggest is that while we are collecting a lot of data, over half of WC administrators see that data only in terms of how it might be described to external stakeholders or upper administrators, not necessarily as data that can be used by the field to better understand its practices and to develop more data-supported best practices. This is an important point and one we'll return to in our discussion.

Our interviewees described this relationship in more detail. Dara, for example, noted the similarities between assessment and research: "Research is assessment; assessment is research." Patricia, a staff director working in a southern doctoral branch campus writing center, revealed that her view of assessment and research connected to audience:
   When I think of assessment, I think 'Okay, let's take a look at
   what we're doing here, and let's take a look at how things we're
   doing could be better'.... When I think of research, I think of
   something bigger than university or program level assessment.
   Research is breaking outside of the university, connecting with
   other writing centers that are doing whatever it is that we're
   doing in the field of composition. So what we do in our assessment
   can connect with that I think, but I don't think that's what we are
   doing.


What other challenges are present for RAD research? The Uniqueness Factor

A final issue--and one that has direct bearing on the concept of RAD research and that stems, in part, from views on data and assessment--is what we call the "uniqueness" factor. One clear consensus emerged about the data WCAs collect: they are unique to individual WC contexts. A majority of our interview participants emphasized the uniqueness of each writing center and context, the uniqueness of each individual student served, and the lack of generalizability of those contexts. Six of our fifteen participants felt that their data and contexts were so unique that their data would not be useful to anyone else (this was especially true of qualitative data). Katrina, a WCA staff director at a community college in the Midwest, explained, "You know, every situation is so different and every conversation, it's so different from the next one ... it's all about doing that kind of [qualitative] research."

In some cases, this uniqueness factor led those WCAs who were familiar with the concept of RAD to question directly whether members of the field can or should do RAD research. In the following exchange, Kelly, a community college staff director located in the South, examined this issue:

Kelly: I've read a lot of Richard Haswell and a lot of everybody else ... there's been a lot of discussion about how you do [RAD] with writing because how do you measure it? It depends on where the writer is, and that's one of the intangibles, so it's so complicated when we're looking at writing and maybe even more complicated when you look at writing center work.... Doesn't replicable mean that you have to have the same kind of site? Like I said, I don't think that you're about to do it in writing center work 'cause everybody is still unique and they come from so many different places.

Researcher: So, you're saying that because writing centers are so different it makes it hard to do RAD research?

Kelly: I think that, yes, and I think [it depends] on who the writer is when [s/he comes] to see you. So, yeah, it makes the difference in what they need. How do you make that replicable?

In this exchange, Kelly seems to suggest that the need to adapt tutoring strategies to different kinds of learners makes each session unique and that, because of this, the complexity of the writing center as a research site disallows replicability or aggregability.

This view was not shared by all WCA interviewees, however. Larry, a first-year interim director with a Ph.D. in English who serves at a southern community college, indicates that one barrier to research in writing centers is this "uniqueness" problem. In his view, the field has neither given serious thought to replication nor seriously considered what RAD research looks like. He explains, "I don't know that we've got the best understanding of what replication means or RAD.... We have a model of replication coming from the sciences, and I think we need to re-theorize what replication means for us." Larry recognizes that the field has not theorized replication and that other fields' definitions may need to be adapted to WC research.

Nathan likewise identifies the "uniqueness" factor as a problem when he discusses the relationship among assessment, program-based reporting, and research. He argues that program-based assessment and assessment research are always seen as local and are important to the local context. He continues by saying:
   But as I said with RAD research, I mean, if [assessment is] done in
   ways that make it replicable or generalizable, [these practices]
   then move over into research and they're applicable for lots of
   folks. Ideally, I think it's been a problem in writing center work
   that there's been lots of local research done that never gets
   published beyond the local institution, and really, it shouldn't
   because it wasn't done very well, but at the same time, it creates
   a dearth of knowledge for what goes on in writing centers....
   What's the impetus for doing the research? Is it to justify the
   existence of such program? Probably for more funds, to grow, or to
   share knowledge--or all three, right? It doesn't have to be just
   one.


As a result, Nathan sees this "uniqueness" factor as creating a "dearth of knowledge" in the field. He also suggests that while a lot of local assessment is done, WCAs don't have the skills, knowledge, and/or time to turn it into research for the broader community.

Discussion--RAD Research as a Framework for Writing Center Inquiry

We close our results section with Nathan's quote because he raises a number of critical issues about the usefulness of our data and about bringing replicability and aggregation into our data-supported research (or bringing the R&A into the D). This final section examines issues surrounding our findings and offers suggestions for moving forward by examining replicability, aggregability, and data-supported research as concepts for writing centers.

Before discussing RAD research specifically, we want to address the concept of "research" as a whole and acknowledge some problematic terms surrounding it. The widely divergent definitions of research in our study present a clear challenge to research in the field. The term "research" simply isn't sufficient to describe the variety of our work; it means so many different things that it creates confusion and masks understanding. We recommend that researchers in the field adopt more specific terms, such as "theoretical research" and "RAD research," to describe different approaches, rather than relying on the broader "writing center research" umbrella.

Furthermore, responses to the term "empirical" demonstrate a substantial amount of confusion and concern; some participants equate empirical with quantitative (and likewise, equate quantitative with RAD research, neither of which is accurate). Traditional definitions of empirical research, however, refer to any research based on systematic direct observation or experience--this can be experimental, case-study, observational, and so on. These distinctions must be taught and reiterated regularly if we are to avoid this misunderstanding.

Understanding Data-Supported Research and Moving Beyond "Uniqueness"

The principle of data-supported research is one of the three areas that Haswell's RAD framework considers. Supporting one's practices with data is something that most WCAs in our study have had to do for the sake of survival. We'd like to revisit the kinds of data that WCAs collect and their views of that data--specifically, beliefs surrounding qualitative and quantitative data. Our results suggest that while WCAs are collecting copious amounts of data, over half of WCAs see that data only in terms of its use for university stakeholders, not necessarily as information that the field can use to better understand writing centers and to develop best practices. While each institution and individual writing center is certainly unique, we also argue that a great deal of similarity exists in the practices and procedures of the centers our WCA participants administer. Despite some of our participants' emphases on the differences rather than the similarities, the interview process revealed quite similar discussions and activities taking place in diverse writing centers. Each interview participant, for example, mentioned using observations in tutor education. We believe we have much shared practice across centers, and additional RAD research in a variety of centers may aid in our understanding of how to best engage in these shared practices.

Furthermore, in response to the "uniqueness factor" we found in the study, we suggest a mindset shift toward thinking more broadly about our data, a suggestion we also made in "Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009" (2012). After conducting this research, we have found that many writing center practices do not actually differ from site to site as much as is broadly believed and that these similarities are a jumping-off point for RAD inquiry. We propose, therefore, that we might see our data as falling into two overlapping categories:

Unique Assessment Data: Some of the data writing centers collect is unique to their particular circumstances and is collected solely for the purposes of assessment and for external stakeholders (e.g., How many students did we serve this year versus last year? How many nursing students came to our center?). In most cases, this kind of data is not necessarily useful for answering broader questions in the field but rather is useful to WCAs for the day-to-day operation of writing centers and for maintaining funding.

Reconceptualizing "Local" Data: Other kinds of data WCAs collect, especially data surrounding the efficacy of our practices, can be rooted in broader questions the field needs to address across writing center sites. We'd like to suggest that there are a number of critical issues around which WCAs are already collecting data and around which data from different centers might be collected and leveraged to build more evidence-based practices using a RAD framework. We argue that one way the field can foster more RAD research is to shift how we see the data we collect. We need to ask not only "How is this useful to my center?" but also "How could these data be useful to other centers?" In order to make this shift, however, we also have to ensure that our data are collected using the best practices of research and to understand how our individual data might be shared with other centers.

The act of tutoring--of working one-with-one with writers, of dealing with individual writing and writer-based challenges--is at the core of all of our practices. Writing centers generally share the need to document and assess the efficacy of the center, to reach out and educate others about the work of the writing center, to prepare tutors, and to anticipate and respond to diversity in students and their texts. While the diversity of students might differ on various campuses, enough of us are encountering English language learners, graduate students, developing writers, and various ethnic groups that data collected on these local student populations would be of interest to those examining similar populations on other campuses. While the lengths of tutorials and types of texts we encounter might change, the acts of tutoring, of establishing rapport, of focusing on the tutee's needs, are essentially the same. While tutor education programs may differ, many WCAs are using tutor education courses and/or observations, tutorial shadowing, professional development meetings, and so on. If the above holds true, then the data we collect on these shared practices would be of use to WCAs in many institutional settings.

Furthermore, there are at least two approaches to collecting writing center data. One is deductive, in which interested WCAs identify overarching questions and build RAD data collection procedures into their center's work; the other is inductive, in which WCAs begin with the data their centers already collect and ask what broader questions those data might answer. Both of these approaches can be used to envision our data more broadly and to consider how such data might be used to answer questions of relevance to the field as a whole.

One more important note about our data--as Nathan mentions in his quotation above--not all data are created equal. The present study did not examine the data that participants collected; rather, it examined their discussions of those data. In order to work within a RAD framework, we also need to consider the education and support necessary to encourage best practices, such as sampling, bias avoidance, and ethical research practices. We need to foster more methodological discussions in the WC community about best practices for data collection. When we make the move to reassess our data as part of a RAD framework, we also need to treat them with care: to clearly state the research question or problem at hand, to present an appropriate methodology in clear and explicit terms, to use systematic approaches to data collection and analysis, and to understand research as a conversation with others.

Creating Replicable Writing Center Research

In our interviews, we found confusion about what replication entails and questions concerning its value. Furthermore, our survey and interview results demonstrate that WCAs often collect the same kinds of data--which could lead to a series of replication studies. In order to use these data in a RAD manner, however, researchers in the field need to better articulate and understand the principle of replicability.

Replicability refers to the degree to which a study's methodology is described in a manner that another researcher could use to replicate the study's design, given reasonable contextual differences. In describing a trend of problematic, unclear methodology sections, Smagorinsky (2008) writes, "I do think that I ought to be able to reconstruct a study's design based on how an author explains it. In most cases, unfortunately, authors are far too nebulous in their account" (p. 394). Our 2012 WCJ study echoed Smagorinsky's observations: we found fuzzy methodology sections that could not be replicated. Two areas critical for replicability and aggregability--participant selection and limitations/future work--earned the lowest scores on our RAD rubric. Thus, in terms of replicability, we see two challenges: the reporting of results (as evidenced by our 2012 study) and the confusion over what replicability means (as evidenced by this study).

We'd like to return to our 2012 discussion about the importance of reporting results in a manner that is clear, precise, and replicable. Even if other WCAs choose not to replicate a particular study and report the results to the broader community, the standard of reporting for RAD research should be such that readers are able to clearly understand a method in its entirety. We'd also like to refer readers to Peter Smagorinsky's 2008 article for an extended discussion of how to write a high-quality, replicable methods section.

The other piece of the replicability aspect of RAD research is the replication of research itself. The importance of replication should not be understated: it allows researchers to re-create the circumstances under which study results were obtained and understood in relation to the research question, which in turn lets us know whether findings are context-dependent or can move across contexts and be of more general use. The field shouldn't see replicability as requiring identical contexts, however; replication in a writing center context might mean replicating practices and studies to see how well they function across multiple sites. Replicability does not require that a study be experimental or that all conditions of the original study be reproduced; this isn't reasonable for WC contexts and doesn't fit the situated research that we often do. But we can look to others' methods, instruments, and approaches, try them in our centers, and see if they work as previously reported--and since this is how writing centers already operate as a whole, this version of replication fits the WC context. If, for example, several different writing centers conduct the same study and learn the same thing by replicating each other's work in their unique settings, we can say with some certainty that the concept under study applies to writing centers more broadly. For writing centers specifically, the concept of replication is critical as we continue to build our understanding of evidence-supported best practices.

How do we engage more effectively in replication research? Multi-institutional research is one clear way of engaging in replication studies: groups of WCAs ask the same research questions, design the same instruments, and examine results collectively. Multi-institutional research has numerous other benefits, including sharing resources, drawing upon the different strengths of various research team members, and learning from each other. We may also begin to consider replication simply by examining what kinds of "reconceptualized local data" are collected by multiple institutions and asking questions of those datasets. Meta-analysis--a method for quantitatively synthesizing the results of multiple studies--is not yet something the field is able to do because we lack the sheer amount of replication data such an approach requires. In the future, however, meta-analysis may be another way to use replication studies and move toward aggregable research.
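
To make concrete what such an aggregation might eventually involve, the sketch below shows a basic fixed-effect, inverse-variance meta-analysis in Python. The effect sizes and standard errors are hypothetical placeholders, not findings from any writing center study; the point is only that once multiple centers report comparable effects, pooling them is methodologically straightforward.

    # A minimal fixed-effect meta-analysis sketch; the effect sizes and
    # standard errors below are invented for illustration only.
    effects = [0.42, 0.31, 0.55]      # hypothetical per-center effect sizes
    std_errors = [0.10, 0.15, 0.12]   # hypothetical per-center standard errors

    # Weight each center's result by the inverse of its variance, so more
    # precise studies count for more in the pooled estimate.
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")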

Replication can help us overcome the "uniqueness factor" expressed by the WCAs in the study. If and when research can be replicated across broad contexts, we move toward a more evidence-supported understanding. A final note about replicability: replication studies need a voice and a venue in our field's publications, and this work needs to be valued in the same way other kinds of research are valued.

Aggregability in a Writing Center Context

The principle of aggregability builds on replication research: it goes a step beyond replication by allowing researchers to extend and build upon previous research findings. Participants in our study saw their data in isolated, context-bound ways, and most did not consider aggregability in their discussions of research. Haswell suggests that aggregation is tied specifically to the advancement of research (p. 201); without aggregation, we are left with a set of isolated studies that do not necessarily enter into conversation with one another or build upon one another (a finding we reported in our 2012 article). Aggregation implies a continual conversation about research in which the field engages. This means we must identify the issues that are deserving of our research time and engage with those questions through studies that build upon, question, and/or challenge previous research. Literature reviews of previous research, while not RAD in nature, help solidify what the field "knows" at present and help identify gaps for future researchers to address, thus encouraging aggregable work.

Like replication studies, studies that aggregate are almost completely lacking in our field's publications, as are the methodological mechanisms through which we might aggregate our work. In our 2012 WCJ study, for example, the sample's "limitations and future work" scores were the lowest in our rubric, with most studies providing little to no mention of how the work should be extended. Like replicability, the concept of aggregation is critically important to writing center research because it allows us to build a body of tested, generalizable knowledge over time--knowledge that allows us to say with some certainty that our tutoring strategies, professional development, and pedagogical strategies work.

Ultimately, WCA researchers must acknowledge that aggregability and replication do not necessarily imply that all researchers agree. It is simply important to know whether a study can be successfully replicated or aggregated; if the results differ, the question of why they differ is an important part of aggregable research.

Conclusion

We believe that RAD research represents a useful framework for building evidence-supported practices because it can be effectively leveraged for writing center research and is of value to writing centers in a number of ways. First, it allows us to provide evidence that tests the efficacy of our practices, using strategies and techniques that are understood in diverse fields and by diverse stakeholders. It allows us to develop and refine writing center pedagogy (including building upon each other's work, which our first study found the field has thus far largely failed to do). It encourages us to test our lore and assumptions about long-standing tutoring practices, an issue taken up by Thompson, Whyte, Shannon, Muse, Miller, Chappell, & Whigham in 2009. Finally, a RAD framework helps us to legitimize writing centers as sites of inquiry in ways that external audiences can understand, specifically by allowing us to produce evidence about the efficacy of writing centers that external audiences can value.

This article has examined the views and practices of 133 WCAs to better understand barriers to and challenges with RAD research. WCA participants demonstrated a wide variety of beliefs about how writing center research is defined, how research is conducted, and about their relationships to research at the institutional level. Through this work, we discovered that while WCAs collect and use many different kinds of data in their local settings, this data is rarely shared with others. We argue that RAD research can present a useful framework both for sharing institutional data and for promoting best practices in research. We close this piece with the words of Carrie, one of our participants at a private, four-year southern institution, who says,
   If we have a replicable study that [was] happening in each of our
   centers, even 15% of our centers across the US, if we were able to
   do that in such a way that we could aggregate that data, we would
   have some compelling information.... If we could find a way to get
   that data, I think we could find much more support for what we're
   doing from the administration as well as from other faculty
   members.


As Carrie suggests, we need to embrace RAD research as a methodology for writing center studies. In other words, we need to recognize what data can be useful to others and use replication and aggregation techniques of data collection and analysis to extrapolate local findings to other settings and to develop multi-institutional projects. In doing this, we also need to have serious conversations about how to share our data and how to theorize replicability and aggregability, so we can build more research-supported practices for the important work of writing centers.

Appendix I: Survey and Interview Questions

Writing Center Survey Questions

Please note that in the interest of space, we've only included the questions, not the close-ended response categories. If you would like a complete copy of the survey, including the response categories, please contact the authors.

1. What is your role in the writing center? (select response)

2. Which classification best fits your institution? (select response)

3. In what geographic location is your writing center located? (select response)

4. Please describe the nature of your writing center (e.g. part of an English or Writing Department, Independent, Part of Academic Skills Center)? (select response)

5. How many student tutorials do you typically serve in a year? (numeric answer)

6. How many consultants do you typically employ? (numeric answer)

7. In which of the following ways are your consultants trained for employment in your writing center? (select response)

8. How do you define "writing center research"? (open ended)

9. What do you think are the most important features of writing center research? (open ended)

10. Which of the following statements describe your relationship to writing center research? (select response)

11. What do you see as the relationship between empirical research, assessment, and program-based reporting for an external audience, such as university administrators? (Open-ended)

12. Please respond to the following statements, using the following responses: Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree.

a. Empirical research is important to Writing Centers.

b. Research is useful to me only for reporting purposes.

c. I conduct empirical research frequently.

d. I wish I knew more about empirical research methods.

e. I am confident in calculating statistics.

f. When I am confused about research, I seek out help from colleagues.

g. I believe that we have enough evidence-supported best practices in writing center scholarship.

h. I don't see why we need more research on writing centers.

i. I am familiar with the concept of RAD Research.

j. I wish I had more formal research training.

13. On a scale of 1-10, how important do you believe it is to conduct research on writing centers for the purposes of expanding the field's knowledge of research-supported practices?

14. On a scale of 1-10, how important do you believe it is to conduct research on writing centers for the purposes of reporting to administrators/stakeholders?

15. If you conduct any kinds of primary data gathering for your center (for research or assessment purposes), can you please describe what you collect and how it is used? Primary data gathering can include: surveys, interviews, observations, ethnography, tutor intake forms, etc. (Open-ended)

16. Do you typically seek Institutional Review Board (IRB) approval for any research you conduct? Why or why not? (Open-ended)

17. What is your highest level of education?

18. What is your degree field? (e.g., rhetoric and composition, literature, secondary education) (open ended)

19. Have you ever completed coursework in research methods/ methodology? If so, how many courses have you taken? (Answers: No, Yes--1 course, Yes--2 or 3 courses, Yes--more than four courses)

20. Which of the following software packages, if any, have you employed in your own research? (select response)

21. Is there anything you wish you had been taught in graduate school that would have better prepared you for research and/or assessment? (open-ended)

22. Have you published in the field of writing center studies? (select response)

23. Have you published outside of the field of writing center studies? (select response)

24. If you have published research articles on writing centers, what motivated you to do so? (Open-ended)

25. Do you have anything else you'd like to discuss concerning writing center research? (Open-ended)

Writing Center Administrator Interview Script

Opening Question: Can you tell us a bit about your institution and writing center?

1. How do your research practices relate to your work in the writing center?

2. As we've been coding the survey data, we found the word "empirical" seemed to be a loaded word. What is your reaction to this term?

Follow-up: What is the place of empirical research in writing centers?

3. Are you familiar with the concept of Replicable, Aggregable, and Data supported (RAD) research?

If no, explain and move on.

If yes, ask: What do you see as the value of RAD research in writing centers?

4. What do you see as the relationship between research and assessment and/or program-based reporting?

Follow-up from Q3 and program goals, if necessary.

5. Some of our respondents indicated that writing centers could learn more from qualitative data than quantitative data. Do you agree or disagree? Why or why not?

Follow-up: How do you define qualitative research?

Follow-up: How do you define quantitative research?

6. One of the things we are interested in is the role of sponsorship and support of writing center research.

A) What support resources, such as funds, release time, and mentors/collaborators, have been available for your research at your home institution?

Follow-up: Have you sought or received any of this support?

B) What kinds of support are available for research in writing center studies?

Follow-up: Have you sought or received any of this support?

C) What kinds of disciplinary support are available for your writing center work?

Follow-up: Have you sought or received any of this support?

7. What kinds of training, if any, have you received in research methods (methods including qualitative or quantitative research techniques, statistics, etc.)?

Follow-up: In what context--professional coursework, professional seminars, or on the job training--have you honed these methods?

Follow-up: Have you worked to increase your knowledge in research methods in any other ways (such as partnering with those in research-focused disciplines, etc.)?

8. Do you seek Institutional Review Board (IRB) approval for your research? Why or why not?

9. What do you see as the greatest barriers to writing centers conducting more RAD-based research?

10. What can we, as a field, do to better support writing center research?

11. Is there anything else you want to add about writing center research?

Acknowledgment

Special thanks to the International Writing Center Association for grant funding and to audiences at IWCA 2012, MAWCA 2013, and a WCJ Live session for their feedback, comments, and suggestions. We also would like to express our gratitude to two anonymous reviewers, Danielle Cordaro, and the late Linda Bergmann for their feedback. We thank our survey and interview participants for their time and insights.

References

Charney, D. (1996). Empiricism is not a four-letter word. College Composition and Communication, 47(4), 567-593.

Charney, D. (1997). Paradigm and punish. College Composition and Communication, 48(4), 526-565.

Cooper, M. (1997). Distinguishing critical and post-positivist research. College Composition and Communication, 48(4), 556-561.

Driscoll, D. L., & Wynn Perdue, S. (2012). Theory, lore, and more: An analysis of RAD research in The Writing Center Journal, 1980-2009. Writing Center Journal, 32(2), 11-39.

Gillam, A. (2002a). The call to research: Early representations of writing center research. In P. Gillespie, A. Gillam, L. Falls Brown, & B. Stay (Eds.), Writing center research: Extending the conversation (pp. 3-21). Mahwah, NJ: Routledge.

Gillam, A. (2002b). Introduction. In P. Gillespie, A. Gillam, L. Falls Brown, & B. Stay (Eds.), Writing center research: Extending the conversation (pp. xxv-xxix). Mahwah, NJ: Routledge.

Gillespie, P. (2002). Beyond the house of lore: WCenter as research site. In P. Gillespie, A. Gillam, L. Falls Brown, & B. Stay (Eds.), Writing center research: Extending the conversation (pp. 39-51). Mahwah, NJ: Routledge.

Harris, J. (2001). Review: Reaffirming, reflecting, reforming: Writing center scholarship comes of age. College English, 63(5), 662-668.

Haswell, R. (2005). NCTE/CCCC's recent war on scholarship. Written Communication 22(2), 198-223.

Johanek, C. (2000). Composing research: A contextualist paradigm for rhetoric and composition. Logan, UT: Utah State University Press.

Lerner, N. (2009). The idea of a writing laboratory. Carbondale, IL: Southern Illinois University Press.

Saldaña, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage.

Smagorinsky, P. (2008). The method section as conceptual epicenter in constructing social science research reports. Written Communication, 25(3), 389-411.

Thompson, I., Whyte, A., Shannon, D., Muse, A., Miller, K., Chappell, M., & Whigham, A. (2009). Examining our lore: A survey of students' and tutors' satisfaction with writing center conferences. Writing Center Journal, 29(1), 78-105.

(1) If we had launched this study after the publication of Anne Ellen Geller & Harry Denny's "Of Ladybugs, Low Status, and Loving the Job: Writing Center Professionals Navigating Their Careers," WCJ 33.1 (2013), we probably would have used the term writing center professional (WCP) rather than writing center administrator (WCA) to avoid the stigma of the term administrator, although the term "professional" is not without its own problems. The language used in the study should not be construed as a preference for the term "administrator," which we continue to use because it reflects the language used in our IRB and with our participants. For more discussion of such issues, see Melissa Ianetta, Linda Bergmann, Lauren Fitzgerald, Carol Peterson Haviland, Lisa Lebduska, & Mary Wislocki's "Polylog: Are Writing Center Directors Writing Program Administrators?" Composition Studies, 34.2 (2006).

(2) Note that this list represents a portion of the research questions in our broader study; the remaining questions are explored in "Centering RAD Research: An Exploration of Conditions that Influence Writing Center Administrators' Data-Supported Practices," in preparation.

(3) Note that some of these numbers represent tutorials for a college system including multiple branch campuses, as some of our WCA participants directed multiple sites.

(4) Additionally, it should be mentioned that Nathan's position is, in part, research-based and that he holds a Ph.D., while Nan's position is solely administrative and she holds an M.A.

(5) Note that the last four survey responses in this category did not indicate a clear relationship.

Dr. Dana Lynn Driscoll is an Associate Professor of Writing and Rhetoric at Oakland University. There, she teaches courses in first-year writing, peer tutoring, research methods, writing studies, and global rhetoric. She also directs the embedded writing specialist program, in which writing center tutors are placed in basic writing courses for tutorial support. Her research interests include writing transfer, research methodologies, writing assessment, and writing centers. She has published in The Writing Center Journal, Across the Disciplines, WPA: Writing Program Administration, Assessing Writing, Teaching and Learning Inquiry, and Composition Forum. She is a co-principal investigator on a multi-institutional research project, the "Writing Transfer Project," which seeks to understand transfer of learning and metacognition in diverse settings. Her co-authored work "Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009" won the IWCA's 2012 Outstanding Article of the Year Award. She also serves as a CCCC Executive Committee member.

Before assuming the Oakland University Writing Center helm, Sherry Wynn Perdue earned degrees in English/American Studies, edited such diverse publications as Re-Visions: Journal of the Women's Studies Program at Michigan State University and The Oil Pipeline Monitor, and taught rhetoric and composition. Currently, she is the book review editor of The Writing Lab Newsletter; managing editor of The Oakland Journal; member of the IWCA, ECWCA, and MiWCA boards; and an AP English Language consultant. Her research, published in WCJ, WLN, Perspectives on Undergraduate Research and Mentoring, and Educational Libraries, addresses writing center research methodologies, institutional support for dissertation writers, and undergraduate research. Renovating her mid-century home, walking her standard poodle Max, and embarrassing her teenage daughter occupy her fleeting spare time.
Table 1: Defining Writing Center Research

Definition of Research                                     Responses

"Research" is based at the writing center                  86
"Research" is evidence-based and methodologically sound    28
Research is assessment                                     18
Research is secondary/article-based                        13
Research is developing and applying theories               12

Table 2: Interviewee Reports on Understanding and Use of RAD Research

Interviewee Report                                            n    %

Never heard of RAD research                                   4   26%

Never heard of RAD research but discuss research practices
  in RAD terms                                                1    6%

Heard of RAD but do not believe it is useful for writing
  centers (all equate RAD only with quantitative work)        2   13%

Heard of RAD but believe it is impossible for writing
  centers to do                                               2   13%

Heard of RAD and believe it is necessary                      4   26%

Heard of RAD, believe it is necessary, and are conducting
  RAD research                                                2   13%

Table 3: Data Collected by Survey Respondents

                                            Responses
Data Collected by Survey Respondents        (out of 100 WCAs)

Surveys                                     81

Session Observations                        78

Session Evaluations                         68

Intake Forms                                57

Interviews                                  51

Case Studies                                30

Textual Analysis                            26

Focus Groups                                5

Quantitative Analysis using TutorTrac or
  Institutional Research                    4

Portfolios                                  2

Pre/Post-tests (experimental design)        1