Lessons from Data: Avoiding Lore Bias in Research Paradigms

Abstract

Based on a three-year empirical study of tutor- and writer-posed questions in writing center conference dialogues, this article does not report findings; instead, it offers a meta-analysis challenging prevailing research frameworks rooted in lore. The data defied the neat categories suggested by current taxonomies; the coding difficulties revealed not just a flawed taxonomy but also a prescriptive research framework based in a long-outdated Directive/Non-Directive (D/ND) lore paradigm. Using this research as an exemplar, the article demonstrates how unexamined paradigms limit researchers and their findings. Principles are presented that researchers--and research consumers--can use to discover and disrupt lore bias in research design, including triangulating with other disciplines and shifting research motivation from proving best practices to investigating outcomes. Finally, the article investigates how other disciplines with longer research traditions avoid constricting paradigms.


Over a three-year period beginning in 2011, our writing center conducted IRB-approved empirical research on the role of tutor-posed and writer-posed questions in writing center dialogues. Using a corpus comprising three linguistically accurate transcripts and 25 glossed transcripts, we (1) painstakingly identified and coded writers' and tutors' questions using a question taxonomy based on Arthur C. Graesser & Natalie K. Person (1994) and adapted by Isabelle Thompson & Jo Mackiewicz (2013). We also identified and coded the cognitive moves revealed in both writers' and tutors' resultant answers using an answer taxonomy, the revised Bloom's taxonomy proposed by Iowa State's Center for Excellence in Learning and Teaching (Heer, 2012). (2) While we intend future articles to discuss our findings, especially those significantly informing our internal practice and staff development, our research plays a different role in this article. In a typical research write-up, the data reveals the plot; that is, researchers start with a question that leads them to collect data. Researchers then handle the data, analyze it, and interpret it to answer their research questions. But for the purposes of this article, which is a metacognitive reflection on our research process, Data plays a different role. In a way, this is the story of how Data handled us.

For us, Data became a character. In the process of negotiating coding schemes and trying to stuff Data into supposedly tested taxonomies--basically, in trying to manage Data--our Data ended up taking on a willful persona. Too many times, Data just wouldn't do what we asked. So rather than serving as a plotline in traditional results-and-discussion style, Data in this article serves as a character, always intriguing, but also always cantankerous and recalcitrant. This Data character is a stubborn SOB, not to be mastered or controlled; worse, it's ignorant--it clearly hasn't read the academic literature. This Data doesn't listen; it shouts. Loudly. With attitude. At first, listening to our Data was as objectionable as listening to the noisy blather of a political talk show. But once we really began to hear Data, we became aware of another steady presence: Lore. If Data became embodied, Lore was more of a specter. And if we thought Data was difficult to handle, Lore was worse: insidious, sneaky, controlling--not just one voice in our heads, the voice in our heads.

In recounting the unfolding relationship between Data and Lore, I present our journey not in the order of our realizations. Because of its a priori influence, I'll start with Lore's narrative as told through both theory and research--how did it get to be the voice in our heads? Next, I present the way Data disrupted Lore, using data snippets so readers can see how it challenged our assumptions. These lessons from Data function both as the impetus for an alternate paradigm for questions-related research and as a strategy for locating lore bias in other research paradigms. (3) Finally, I explore how departing from lore-based paradigms opens transformational terrain, not only for research on questions but also for research on other signature pedagogies associated with writing centers.

The Voice in Our Heads: Lore 1.0

For as long as questions have been considered a fundamental pedagogy of writing centers, our scholarly discussion of them has remained almost exclusively situated within the binary directive/non-directive (D/ND) paradigm (4), often to the detriment of useful alternate theoretical framings such as scaffolding (Thompson, 2009; Thompson & Mackiewicz, 2013; Nordlof, 2014; Mackiewicz & Thompson, 2015). Although Lore has historically vilified the D end of the continuum while glorifying the ND end, recent writing center scholarship complicates our "Grand Narrative" (Grutsch McKinney, 2013) by asserting that conferences yield more effective outcomes when tutors move within the entire continuum. Nevertheless, it is difficult to find writing center theory or research on questions that does not reference the D/ND continuum. Rather than challenging the entire framework, writing center studies and its research predominantly feature polarized, prescriptive recommendations about questions: Open-ended "real" questions are nondirective and good; test or leading questions are bad, directive at best and dishonest at worst, but definitely bad.

The D/ND paradigm--Lore--shapes question research in subtle ways, including through a number of reductive over-simplifications that influence both best practice recommendations and research taxonomies.

For instance, when considered as a whole, writing center scholarship features a limited definition of what counts as a question. Rather than considering illocutionary purposes, our field typically defines questions by linguistic and grammatical markers, so only utterances marked by an interrogative are identified (see especially precedent-setting research including Ashton-Jones, 1988; Johnson, 1993; Blau, Hall, & Sparks, 2002). Questions are judged open-ended when they include the classic grammatical interrogative marker "wh-" [Where are you going for dinner? Who is your favorite author?]. Questions with yes/no or either/or interrogative markers are typically categorized as closed [Do you prefer green tea or black tea? Are you coming with me?]. Tag questions are also defined by linguistic markers [You're coming, aren't you?]. By defining questions so narrowly, we count as questions utterances that function not as elicitory devices but rather as commands [Would you like to start reading your draft aloud?]. We also fail to count as questions those utterances that do function as elicitory devices but that are framed linguistically as non-interrogatives [Tell me more about X.].
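
To make these failure modes concrete, consider a toy, marker-based classifier of the kind this narrow definition implies. The sketch below (in Python) is purely illustrative; the patterns and names are our own stand-ins, not any published coding scheme.

    import re

    # Classify an utterance solely by its linguistic markers, as the
    # narrow definition implies. Purely illustrative patterns.
    WH_WORDS = r"^(who|what|when|where|why|which|how)\b"

    def classify_by_marker(utterance: str) -> str:
        u = utterance.strip().lower()
        if not u.endswith("?"):
            return "not a question"   # misses elicitory non-interrogatives
        if re.match(WH_WORDS, u):
            return "open-ended"
        if re.search(r",\s*(aren't|isn't|don't|won't|right)(\s+\w+)?\?$", u):
            return "tag"
        return "closed"               # yes/no, either/or

    # Both failure modes named above:
    print(classify_by_marker("Would you like to start reading your draft aloud?"))
    # "closed": counted as a question even though it functions as a command
    print(classify_by_marker("Tell me more about X."))
    # "not a question": missed even though it functions as an elicitory device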

Starting with too narrow a definition for questions leads to developing taxonomies that are too simplistic. For instance, collectively our literature reports the following not-mutually-exclusive question categories: rhetorical, tag, leading, open-ended, test, yes/no, either/or, and closed. Some gloss questions even further, suggesting two main categories: "real" questions for which the questioner truly doesn't know the answer and "dishonest" questions, that is, questions for which the questioner knows the answer but tests students to see whether they also know it. This excessive simplicity reduces our research taxonomy to a binary with accompanying judgment: Questions are only either good or bad. Bad questions include leading and testing--basically any prompt that can be considered directive or "dishonest." Good questions--open-ended ones--are also known as genuine, "real," and non-directive.

These simplistic definitions and taxonomies have suited our scholarly purposes over the years because most of our literature seems designed to make how-to recommendations. Although these recommendations feature much certainty about what tutors ought to do, they rest on the woefully unexamined assumption that our "shoulds" actually prompt student learning.

Starting an inquiry with "Are questions good or bad?" can't help but yield a moralistic literature rife with battle lines drawn along the D/ND continuum. Early combatants decry questions as directive to the point of oppression; J.T. Dillon (1978), a scholar from the education field, suggests that questions function not to promote inquiry but rather to suppress thought and threaten affect. A decade later, writing center scholar Evelyn Ashton-Jones (1988) similarly indicts questions as overly directive, suggesting a heuristic for asking "right" rather than "wrong" questions. In a stinging critique of questions using empirical evidence from recorded writing conferences, JoAnn B. Johnson (1993) discusses the negative effects of tutor questions, concluding that lore-based confidence in Socratic questioning is misplaced. Not only are questions both intrusive and directive, they "may derail the student's train of thought" (p. 34). In fact, they have "so many negative qualities ... it would seem logical to avoid asking questions as much as possible" (pp. 38-39). Taken together, these scholars line up on the D side of the D/ND binary; that is, many if not all questions are too directive to be of pedagogical use in teaching and learning.

But another camp of scholars favors questions, especially non-directive ones. Offering equally judgmental and prescriptive appeals, Donald McAndrew & Thomas Reigstad privilege open-ended and probe-and-prompt questions over closed or yes/no questions because the former signal "collaborative tutoring" (nondirective or student-centered tutoring) while the latter lead to less desirable "teacher-centered tutoring" (2001, pp. 25-26). A perusal of popular tutoring manuals reveals that all at least mention the D/ND binary when discussing questions as a signature pedagogy (McAndrew & Reigstad, 2001; Rafoth, 2005; Gillespie & Lerner, 2007; Ryan & Zimmerelli, 2009; Murphy & Sherwood, 2011). While some complicate the question taxonomy by considering functional purpose, this body of work prescribes best practice sans evidence: Good questions are recommended primarily because they are non-directive. Thus speaks Lore.

The Updated Voice in Our Heads: Lore 2.0

In the years since Linda Shamoon & Deborah Burns (1995) and Jeff Brooks (1991) firmly anchored the binary ends of the D/ND continuum, Lore 1.0 has gotten a five-minute makeover, courtesy, in part, of empirical research. Meet Lore 2.0, which speaks a smidgen less judgmentally about directiveness but still remains firmly rooted in the old paradigm. For instance, in Rebecca Day Babcock, Kellye Manning, Travis Rogers, & Courtney Goff's (2012) survey of qualitative studies on questions conducted from 1983 to 2006, every study engages the paradigm explicitly, leading the authors to confirm a lore-based taxonomy, revealing no additional question categories and no additional complexity in the definition of questions. Further, in accurately glossing this corpus of qualitative research over three pages, the authors' synopsis reiterates much evaluative language found in the original studies, citing questions as "good, harmful, dishonest, frustrating, controlling, deficient, and real" (pp. 42-45). While this corpus may persuade us to embrace directive questions, at no point does the original research or this meta-analysis challenge the D/ND as the prevailing paradigm for question research.

Lore 2.0 also prevails in our field's other notable meta-analysis of empirical research, Researching the Writing Center: Towards an Evidence-Based Practice (Babcock & Thonus, 2012). In a section devoted to questions, the authors gloss the overall research context for questions in writing center practice with the usual taxonomy, and they go on to review four empirical studies of questions. Esther N. Goody confirms that questions function as behavioral control, Terese Thonus confirms the value of closed questions, Teri Sinclair Haas suggests only two types of questions, "real v. teacher," and David C. Fletcher reports tutors using only closed questions (cited in Babcock & Thonus, 2012, pp. 50-51). If we read generously, Goody may be suggesting that questions functioning as commands should not actually be classified as questions. Otherwise, along with the reductive definition and taxonomy, the D/ND paradigm prevails. While the authors avoid prescriptions immediately following their synopsis of question research, the volume contains several sections on directive strategies that reinforce the D/ND paradigm and offer best practice recommendations such as emphasizing directive strategies with multilingual writers (p. 105) and favoring directive statements over questions when questions aren't necessary (p. 142).

Three writing center research studies not included in the preceding volumes deserve special attention for ushering in a more tolerant Lore 2.0. In her case study micro-analyzing the dialogue of a single conference, Thompson (2009) challenges the notion that tutorial success depends greatly on ND strategies, arguing that "directiveness alone does not negatively influence students' perceptions of conference success," especially once a tutor has properly scaffolded the writer's motivation (2009, p. 447). Asserting that directiveness is explicitly helpful and perhaps unavoidable in tutor-student exchanges, Thompson explains, "When students are motivationally ready ... tutors can be productively directive" (p. 447). Lore 2.0 emerges again in Thompson & Mackiewicz's (2013) study of questions, in which scaffolding provides a strong rationale for identifying and using directive (leading) questions. This research represents a noteworthy departure from previous work for a number of reasons. First, the study features a broader definition of questions, introducing the concept of questions as negotiated and fluid rather than fixed by linguistic markers. Second, the study codes questions using a richer taxonomy, one adapted from Graesser & Person (1994). Finally, the researchers resist to a greater extent an evaluative stance, concluding that "it is not possible to describe a 'good' question outside of the context in which it occurs, and even in context, the effects of questions are difficult to determine" (p. 61).

The same authors make similar claims about the value of directivity in their new volume on tutor talk (Mackiewicz & Thompson, 2015), which features the lens of scaffolding to explain both cognitive and affective growth. Although scaffolding could potentially supplant the D/ND influence altogether in their work, frequent mentions of the old paradigm suggest that while the field may embrace Lore's facelift, we're not ready to replace it altogether. For instance, in the book's fifth chapter, on instructional strategies, the first subject heading calls directiveness "worrisome but necessary" (p. 83), suggesting not only that we still need to apologize for directiveness but that the D/ND paradigm persists in Lore 2.0. In the research design, Lore-based question categories serve to illuminate scaffolding moves. For example, the reductive question categories "leading" and "closed" are used as exclusive identifiers for "pumping" and "prompting" scaffolding moves, and throughout one section, question types are glossed as either open or closed (pp. 35-36). Of course, this ambitious research volume has much to offer our collective understandings around scaffolding; nevertheless, it's disappointing that at times scaffolding emerges as a concept to redeem, rather than replace, directivity.

Lore's Challenger: Data

Although we started our research firmly under Lore's spell, in retrospect it's hard to remember just when and how we gradually came to hear Data's distinctive counterpoint. The undergraduate researchers on our team were the first to notice many of the anomalies Data presented to us. "Roberta, Data broke the taxonomy again!" became their frequent refrain. Perhaps because they were not as jaded by Lore--less respectful of existing taxonomies, less awed by star researchers--the undergraduate researchers were very receptive to Data's rebel message. Data won them over first. Then, as a chorus, they reached me too. For as loud a character as Data seemed to us later, at first we could only begin perceiving its message by muzzling the endless loop of interference Lore played in our heads. As a parallel move in our text, then, I am now silencing Lore to let Data speak. I realize that this silencing risks reinforcing a data-lore binary, which I so wish to avoid. But as much as we might hope they often speak as one, we need to give Data and Lore room to disagree.

In this section, I present four key lessons we learned as we listened to Data. I deeply hope these principles serve as strategies for researchers, and readers of research, to identify and eliminate lore bias in research on other writing center pedagogies and practices. More specifically, I believe these lessons offer the beginnings of a new paradigmatic approach to question research by proposing a new definition, a new taxonomy, and a new focus on outcomes. Although I embellish our lessons with after-the-fact understandings informed by scholarship outside our discipline, I also reveal some of our inner conversation while we were in the process of grappling with Data. And of course I also include representative samples of our actual Data, which, you will see, speaks eloquently for itself.

Lesson 1: Triangulate Key Theoretical Concepts with Those from Other Disciplines

As broken taxonomies sent us scurrying to other disciplines for new explanatory concepts, we found research outside writing center studies that supplied several new lenses for question research. In the edited volume titled Why Do You Ask?: The Function of Questions in Institutional Discourse, for example, researchers from domains as varied as medicine, law, and education effectively move research beyond what we call the D/ND paradigm and what they describe as "the overly simple claim that questioning in institutional settings is equivalent to interactional control" (Ehrlich & Freed, 2010, p. 12). In addition to introducing a number of new question types, including epistemic stance, (5) hypothetical and reflective, (6) and disruptive (7) questions, these research studies offered insights on definitional issues that fit better with Data than did holdovers from Lore 2.0. For instance, we resisted classifying as questions those utterances that didn't yield substantive answers. Rhetorical questions [Would you like to read your draft aloud?] functioned primarily as commands; their linguistic framing served as a politeness device rather than an elicitory one. Similarly, tag questions [I'm thinking this is your thesis sentence, aren't you?] were typically used to convey or confirm information or to hedge a direct statement in combination with a test question [This is your thesis--you see that, right?]. Eliminating these utterances from the definition of questions doesn't mean they are not worthy of study; rather, it means that conflating them with elicitory devices creates an avoidable research confound. After looking at writers' answers in our Data (see Lesson 3), we moved from defining questions by linguistic markers to defining them by discursive, illocutionary function, thus excluding those questions designed to state information or prompt a behavior rather than to elicit an answer.

Triangulating with non-Lore-entrenched research prompted both the exclusion of non-elicitory utterances and the inclusion of utterances that clearly function to elicit response, regardless of linguistic framing. In our data, for instance, we found several types of "non-questions" operating as elicitory devices, including imperatives [Tell me more about X], declaratives [I wonder how X connects to Y.], and an abundant supply of what the scholars in Why Do You Ask? identify as DIUs, "designedly incomplete utterances" (Koshik, 2010, pp. 164-168). In this Data excerpt, the tutor (we call him Tony) offers a declarative processual suggestion followed by a DIU that elicits an extensive and complex verbal response from the writer (we call her Carolyn).

T: No, believe, but it's ... I think if you kind of home in on something.

C: I mean ... Here ... Here's another implication. It's one of the things that's recent ... that's happened recently in Occupy Wall Street, right?

T: Yeah.

C: That didn't have a leader.

T: No, it didn't. It still doesn't ...

C: No ... No ... And part of ... And yet it was successful. It went global. And it had a ... It had a ... It had a ... an initiator. You know, a magazine in Canada, I can't remember the name of it [mumbles] Ad Busters. You know, they were talking about it, and it sort of went out and that was it. And, you know, there wasn't anybody that you would say [mumbles] talk to the leader of the Occupy Wall Street ...

T: A similar thing in the Middle East lately, with all the ...

C: Right ... right ... right. So, what ... and so there's a comparison. You know, that ... um ... that it isn't a blueprint of any kind. That ... that it's coming up against something and saying, you know, that's not going to work I'll do something else. And as an individual or [mumbles]. But it's still a collective action.

Note how Data significantly troubles the practice of defining questions by interrogative marker. The initial declarative suggestion to "home in on something" functions as an elicitory device akin to "What would you like to home in on?" So does the utterance count as a question? And what do we do with Tony's DIU: "A similar thing in the Middle East lately, with all the ..."? As she "answers" Tony's DIU, Carolyn verbalizes an entirely new theme and articulates how it connects to her existing themes; in other words, she creates new knowledge. Even the most complex taxonomy in Lore-based question research would omit these utterances from consideration. Yet the writer's response demonstrates how Tony's declaratives and DIU elicited more deep thinking than we saw emerging in many linguistically framed open-ended questions, the supposed gold standard of questions.

Although we adjusted our taxonomy to exclude non-elicitory devices framed linguistically as questions (such as rhetorical and tag) and to add elicitory devices not framed as questions (such as DIUs and imperatives), we did exclude from our analysis other question types suggested by scholars featured in Why Do You Ask? We did so mostly because we didn't see many of them being posed. Yet we believe these question types hold promise for both research and practice. Hypothetical questions such as "If you could write about one passion, what would it be?" might serve to connect writers to intrinsic motivation, for example, and reflective questions such as "How could you apply the strategy we just used in future writing?" show promise for prompting metacognitive moves in both writers and tutors. In triangulating with other question research where these types proved significant, we became curious about why our Data doesn't mention them much. We think it's because Lore has spoken unchallenged for so long into our practitioner psyches that tutors don't even imagine asking such un-Lore-like questions. They are simply beyond our worldview. Although this speculation scares us, it also illuminates the role Lore plays with Data. Lore doesn't just influence how we perceive our Data character; it genetically alters what Data is produced. Triangulating with cross-disciplinary research not only challenged our too-narrow conceptual notions around questions, but it also helped us become curious about why our Data sometimes manifested an entirely different DNA than other researchers' data. (8)

Lesson 2: Apply Exploratory Rather Than Prescriptive Lenses

Along with speculation about what was absent, listening to the anomalies present in Data fueled true inquiry. Of the many paradigms for research--descriptive, exploratory, and predictive, to name a few (Wollman, 2012)--prescriptive research (also called normative) best matches the prevailing paradigm in writing center scholarship. Instead of asking exploratory or descriptive research questions such as "What happens cognitively and affectively for writers when tutors ask questions in conference dialogue?," we tend to ask evaluative research questions such as "What are the best questions for tutors to ask?" If past researchers who wanted to study "questions" had been less concerned with prescribing which ones to use, someone might have noticed sooner that writers' answers are equally fascinating and revealing elements of conference dialogue. Yet to date, there appears to be no other study whose method includes the same attention to coding writers' answers as to coding practitioners' questions. Why does our field's research continually place practitioners at the center of our gaze? Perhaps it's because we are so fearful about the effectiveness of writing center teaching and learning that we use research primarily to justify our practices. In the search for a set of not-just-promising but best practices, much of our literature reads as a litany of "shoulds." But by jumping too quickly to evaluation, we miss the opportunity to describe, to explain, to explore, to predict--we miss all the complications that tell us we don't really know what works and why. An exploratory paradigm offers us the opportunity to suspend judgment, avoid causal leaps (practice A always yields outcome X), and linger in an attitude of wonder. But these valuable perspectives are lost when research functions merely to prescribe practice.

This self-conscious search to affirm best practices about our questions has not just kept us from looking deeply at writers' answers; it has also kept us from being curious about writers' questions. Educational research suggests that students ask as many as 250 times more questions in a one-to-one tutorial than in a classroom setting (Graesser & Person, 1994), a suggestion that cries out for investigation. While our own Data suggests that tutors ask most of the questions, nevertheless, writers also ask them frequently. We know of no research analyzing writers' questions and tutors' answers in dialogic sequence, nor, despite current research interests in learning outcomes for tutors, do we know of any research linking writers' questions to cognitive outcomes for tutors. The lack of curiosity about writer-posed questions appears to be tied to the D/ND paradigm because that paradigm teaches us to be curious in certain ways about certain questions--not in all ways about all questions.

In our writer-posed question data, 75% of writers' questions asked tutors to evaluate; yet only 25% of tutor responses were evaluative in nature, a glaring mismatch that warrants investigation. For example, in one exchange between the tutor (we call her Desiree) and the writer (we call her Lydia), we see the following writer-posed question.

D: Wow, that was really interesting! How do you feel about it?

L: I like it but I don't know if my thesis is all that great, what do you think?

D: Well what do you think?

Desiree completely avoids Lydia's question, a request for evaluation, by using a decidedly Rogerian deflection. Our tutors, says Data, employ an impressive variety of strategies for avoiding an evaluative role, and they demonstrate such skill in doing so that we became very curious about what this avoidance means and how both tutors and writers perceive it. Although our speculations fuel a future study, our point here is that this is the kind of rich phenomenon we will continue to miss if we fail to move our research beyond a prescriptive purpose.

Lesson 3: Cultivate Multiple Perspectives During Data Analysis

The Desiree/Lydia disconnect was one of many that moved us to create a taxonomy that fits not only how questions function for tutors but also how they function for writers. Regardless of tutor intent, our Data taught us that writers, not tutors, often ultimately decide how tutors' questions function. Failing to acknowledge the writers' perspective in interpreting questions results in seriously flawed taxonomies; yet the writers' perspective remains missing from existing research. In classifying how questions function from the point of view of the questioner, Thompson & Mackiewicz (2013) do not classify answers with the same depth of analysis they apply to questions, perhaps because Graesser & Person (1994), whose taxonomy Thompson & Mackiewicz adapted, made the same omission. The original taxonomy (Graesser & Person, 1994) offers several categories tied to questioners' goals, including to fill a knowledge deficit, to establish common ground, to achieve social coordination during tutorials, and to exert conversational control; Thompson & Mackiewicz (2013) add a fifth: to lead (scaffold) student learning. But failing to consider how writers answer questions yields a taxonomy that privileges the practitioner, effectively silencing writers and ignoring their agency in negotiating dialogue.

We weren't the first to notice the misalignment of question intent and answer outcome. Just as Graesser & Person's question research was designed to inform machine-based tutoring, Pomerantz (2005) undertook question research while attempting to design a machine-based replacement for reference librarians in academic libraries. Using a social constructionist lens, Pomerantz suggests that questions should not be classified based on the intent of the question alone; rather, classifications must include an analysis of both the question and the answer. Why? If a questioner poses a knowledge-gap, deep-thinking question like, "What overall idea do you want to convey to your reader?" but the student answers with "I'm so tired of this paper," the question did not truly function as a knowledge-gap question because the answer didn't actually supply the requested knowledge. Extending Pomerantz's assertion, psychologists Gillian L. Roberts & Janet Beavin Bavelas (1996) microanalyze dialogue in the helping professions by considering questions across the context of an entire dialogue. Rather than limiting analysis to an IR, IRE, or IRF (9) sequence--even the extended one Thompson & Mackiewicz mention (2013, p. 57)--Roberts & Bavelas use principles of discourse-level linguistic analysis to recommend studying questions in the whole context of negotiated meaning, which they call an "utterance-reaction-confirmation dialogic sequence," a sequence that may emerge over the course of an entire dialogue.

Consider the following dialogic sequence in which our Data reveals that the writer's response clearly misses the tutor's intent. Later in the session between Desiree and Lydia, Desiree poses a question to scaffold for Lydia several cognitive moves located on Bloom's revised taxonomy (Heer, 2012): to analyze what's missing from her thesis, to evaluate what information would add clarity for the audience, and to create a new thesis sentence that addresses the clarity problem. In the answer, however, Lydia states a need for thinking time.

D: How do you think you could change your thesis to make that more clear?

L: I'll have to think about that a little bit.

Given that the question didn't actually function to create new content, how does one code said question? At this point in the dialogue, the question is a "failed" question because the answer doesn't confirm the dialogic sequence. Desiree prompts content; Lydia responds with process. Is a failed question coded according to intent or according to its actual function, in this case, to prompt an insightful metacognitive realization about the value of thinking time to the writer's process?

But the dialogic sequence continues. In response to Lydia's "failure" to create new content, Desiree doesn't respond with the extra time Lydia indirectly requests or with recursively posing questions that require less complex cognitive moves. Instead, Desiree gives up, using an imperative to nominate a new topic for conversation, a nomination that also fails.

D: Okay, let's move on to Y [another area of the paper Lydia mentioned as a concern].

L: The thing is, I don't know if X [still talking about X, not Y] fits into my main idea. X is about ... (long explanation) ... and I find it interesting that her eyes are closed and I think that says something about the whole "male view."

Ignoring Desiree's most recent utterance entirely, Lydia seems now to be in the process of talking out an answer to the original question. Although Lydia can't yet revise and has not yet addressed the notion of clarity, she is still clearly engaging the thesis idea. Lydia begins by explaining a hunch she has barely mentioned in her text, an idea that gathers steam as she explains it over the next few turns. Although Desiree did not scaffold the complex thinking called for in the original question and was prepared to abandon it, perhaps in an effort to save the writer's face, Lydia takes the lead in providing her own scaffolding by navigating Bloom's revised scale: retreating from the most complex move, create content, to a simpler one, recall content. Desiree then follows Lydia's lead, posing a question that ups the ante, asking Lydia to move from recall to analysis.

D: I think you're right X is an interesting point; can you think of any connection to the rest of your paper?

L: The only thing I could think of is maybe it would fit in with the part about Z because [explains].

So while the original question was intended to prompt a cognitive move at the top of Bloom's scale--create new knowledge--Lydia and Desiree collaboratively build effective scaffolding, moving first through less complex recall to more complex analysis all in support of Lydia's success in creating new knowledge.

In deciding how to code Desiree's initial question, we followed their dialogic exchange to consider Desiree's initial intent, Lydia's answer, Desiree's revised intent, Lydia's refusal, and their subsequent negotiation. We eventually revised our taxonomy (10) to include coding intent from tutors' perspectives and eventual outcome from writers' perspectives, and we would recommend this step to other researchers. But our point here is to resist stuffing Data into one-dimensional taxonomies and to remain open to the other perspectives Data poses. We speculate that the existing D/ND paradigm, that is Lore, teaches us to focus almost exclusively on the practitioner. Some in our field feel obliged to prove that our interventions serve a value-added function; some also believe there are "best" ways for our interventions to achieve these valuable results. But just this one exchange clearly demonstrates all that might be learned if only we switched perspectives.
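
For researchers who manage coding in software, the two-perspective record we ended up with can be sketched in a few lines. The following minimal sketch (in Python) uses hypothetical field names and category labels that loosely echo Appendices A and B; it is not our actual codebook.

    from dataclasses import dataclass, field

    @dataclass
    class CodedPrompt:
        """One elicitory prompt, coded from both perspectives."""
        utterance: str                      # the tutor's elicitory prompt
        tutor_intent: str                   # function from the questioner's view
        writer_outcome: str | None = None   # cognitive move the answer eventually reveals
        arc: list[str] = field(default_factory=list)  # later turns in the dialogic sequence

        def resolved(self) -> bool:
            # A prompt is fully codable only after the dialogic
            # sequence confirms how it actually functioned.
            return self.writer_outcome is not None

    # Desiree's initial question, coded by intent alone, looks like a
    # deep-thinking "create" prompt ...
    prompt = CodedPrompt(
        utterance="How do you think you could change your thesis to make that more clear?",
        tutor_intent="deep thinking: create (concepts)",
    )

    # ... but the outcome code stays open until the arc resolves. Lydia's
    # eventual analysis-level answer, several turns later, supplies it.
    prompt.arc.append("I'll have to think about that a little bit.")
    prompt.arc.append("The only thing I could think of is maybe it would fit in with Z ...")
    prompt.writer_outcome = "deep thinking: analyze (concepts)"
    assert prompt.resolved()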

Lesson 4: Frame Research in Terms of Learning Rather Than Practice

Moving away from prescriptive research and one-dimensional data analysis will free researchers to move toward identifying learning outcomes, an area of research in which writing centers have lagged behind others in higher education. Our Data taught us that to truly understand learning in writing center conferences, we needed to look for ways conference dialogues illuminate not just practices but outcomes. For instance, if we look at questions just in terms of practice, both of these questions--"What is your name?" and "What are the implications of the ideas in your conclusion?"--would be classified as open-ended and knowledge-deficit questions in the Graesser & Person (1994) taxonomy. But it seemed wrong to us to classify these very different questions the same way. Based on Rex Heer's (2012) revised Bloom's scale, the former prompts the recall of a well-known fact (name), while the latter prompts writers to create new concepts (implications). In the most sophisticated taxonomies in use in our field, these questions would be coded based on their function for the practitioner, that is, to fill a knowledge gap. (11) Yet, depending on how writers engage the questions, the functional cognitive outcomes for writers are miles apart. When we initially tried to code these questions identically, Data legitimately pitched a fit.

Since learning outcomes play a significant role in K-12 assessment, education researchers Christopher H. Tienken, Stephanie Goldberg, & Dominic DiRocco (2010) initiated rare outcomes-based research on questions. To test teachers' assumptions that using questions equates with effective teaching because questions automatically prompt intellectual growth, these researchers propose a simplified taxonomy based on Bloom. They divide questions into two categories: productive or "higher order" questions prompting "analysis, synthesis, evaluation" (questions with multiple possible answers) and reproductive or "lower order" questions prompting "recall, comprehension, application" (questions with one right answer) (p. 29). This scarce but important research confirms the value of designing taxonomies with outcomes in mind.
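
Their two-category split is simple enough to state as a lookup. The sketch below (in Python) is our illustrative rendering of Tienken, Goldberg, & DiRocco's categories; the function name and structure are hypothetical.

    # Tienken, Goldberg, & DiRocco's (2010) Bloom-based split.
    REPRODUCTIVE = {"recall", "comprehension", "application"}  # one right answer
    PRODUCTIVE = {"analysis", "synthesis", "evaluation"}       # multiple possible answers

    def classify_by_cognitive_level(bloom_level: str) -> str:
        """Classify a question by the cognitive level of the answer it prompts."""
        level = bloom_level.lower()
        if level in PRODUCTIVE:
            return "productive (higher order)"
        if level in REPRODUCTIVE:
            return "reproductive (lower order)"
        raise ValueError(f"unrecognized level: {bloom_level}")

    # "What is your name?" prompts recall; "What are the implications of the
    # ideas in your conclusion?" can prompt synthesis. Identical by questioner
    # intent, miles apart by outcome.
    print(classify_by_cognitive_level("recall"))     # reproductive (lower order)
    print(classify_by_cognitive_level("synthesis"))  # productive (higher order)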

To enact an outcomes focus in our study, we primarily analyzed "deep-reasoning" questions, a distinction first suggested by Graesser & Person's taxonomy but lost in writing center adaptations. (12) While deep-thinking questions ideally function to promote sophisticated cognitive moves at the upper end of Bloom's taxonomy (evaluate, create), Data taught us that they can just as easily prompt no cognitive growth whatsoever. What if Lydia had accepted Desiree's prompt to switch to an easier cognitive task rather than scaffolding her own higher-order cognitive moves? If Data hadn't taught us to look for outcomes across the entire dialogic arc, we might have simply accepted Desiree's deep-thinking question as it functioned for the tutor, to fill a knowledge gap, and we might have missed Lydia's very significant moves to scaffold her own cognitive growth. And we might have missed the fact that, although they sometimes have difficulty doing so, both tutors and writers regularly navigate the higher-order cognitive moves suggested by the revised Bloom's taxonomy, an outcome we feel sure would be greatly valued by our institutions.

The Data-Lore Relationship: Implications for Future Research Frameworks

Decades ago, disciplines with longer research traditions began acknowledging that without careful attention to challenging their existing paradigms, research might serve not to create new knowledge but instead to reify Lore. (13) In the so-called objective sciences, Thomas S. Kuhn (1962) warned that paradigms color every aspect of the research process, including the kinds of questions researchers ask and their ability to interpret results in new ways, potentially yielding research that merely confirms a deeply held worldview of what is "normal." Social scientists, too, acknowledge how schema can function as sources of bias, regularizing their ways of knowing just as Grutsch McKinney (2013) claims narrative does for writing center studies.

To avoid confirmation bias, researchers in these disciplines with longer research traditions have adopted practices we could borrow. For instance, researchers in the sciences and social sciences are wary about the generalizability of research that is context-specific. The results of question research featuring transcripts from a single writing center, for example, likely reveal much about the ethos and tutor education of that particular center, but we should be skeptical about generalizing those findings across writing centers writ large. As Dana Driscoll & Sherry Wynn Perdue (2014) assert, the "aggregable" in RAD (14) must mean more than merely compiling researchers' findings from individual writing centers. To enhance reliability and validity in health fields, researchers include participants in a variety of contexts, both national and international. Scientists increasingly publish data sets along with their research results, giving other researchers the ability to triangulate data across studies. Although the National Institutes of Health wrote policies regarding ethical standards of data sharing in 2003, in the more than a decade since, few cross-institutional research studies have been conducted by writing center researchers, none featuring questions or conference dialogues. In terms of both facilitating our field's research agenda and expanding our paradigms, how powerful would it be if writing center researchers considered transcripts from a cross-institutional, perhaps even international, corpus housed in an institutional repository?

In forging expanded paradigms, theoretical innovation may model strategies researchers can use for moving beyond Lore. In his thoughtful search for a theory to explain learning in writing center sessions, John Nordlof (2014) exposes the D/ND paradigm as a pseudo theory, a set of practices that lacks the explanatory power true theory provides. By masquerading as theory, "these stances have become proxies for theoretical models" (2014, p. 48), meaning that we have settled for Lore to stand in place of broadly applicable principles for explaining learning. Nordlof completes the transition to a post-D/ND paradigm by exposing untested assumptions embedded in the old paradigm and by following the theoretical reasoning of the new framework to its logical end: Scaffolding renders the D/ND framework entirely irrelevant. If growth is the desired outcome, practitioners need not ask themselves whether they are being directive; rather, they are free to focus solely on steps to scaffold growth (2014, p. 59). If our Data could read Nordlof's powerful argument for a post-D/ND era in writing center scholarship, it would shout "Huzzah!"

In some ways, the relationship between Data and Lore in our field is much akin to the one between Desiree and Lydia. (15) Desiree spoke with Lore's voice, posing the prescribed open-ended, deep-thinking question to scaffold cognitive growth without being directive. When Lydia failed the cognitive task, Desiree used lore-prescribed politeness strategies to rescue Lydia and change the subject. But Lydia would have none of it. With Data's voice, Lydia responded from a different framework altogether, speaking the unexpected with responses like "I need time to think about X," and "No, I don't want to talk about Y, I want to keep talking about X." And Lydia posed her own awkward question, "Do you like my thesis?" It's as if the two were operating in alternate realities. We might have been tempted to call Desiree aside to ask her how she could have missed the opportunities Lydia presented. But how many times did we ourselves almost miss what Data was saying because we expected it to say something aligned with Lore?

This article began by portraying Data as a brash rebel and Lore as a, well, what exactly? A tyrant? A seducer? Even if we're not quite settled on a role for Lore, we know one thing: Lore isn't a villain. As a community of practice, we value our Lore for good reasons. We need the stories we tell ourselves and each other, and we need them to help explain our work to others. Not only can Lore serve these emic and etic purposes, it can also stir our curiosity as researchers. But because it's part of our culture, the internalized rules that constantly whisper truth to us, Lore will continue to constrain Data until researchers take deliberate steps to complicate Lore's pre-scripted frameworks. When Data convinced our research team that the D/ND continuum had bullied us long enough, we realized Lore couldn't bully us unless we allowed it to. Data can change Lore--if we hear it.

Lore is not just our story--it's our identity. But as individuals our identities become intricately layered as we mature through life's passages. Like Lerner (2014), we're not so sure our field has donned the mantle of maturity that befits us at middle age. Perhaps this new age of RAD research in our field serves as an opportunity to enrich our collective identity, but it will only do so if we build a research tradition with paradigm-busting practices in mind. Just as writer Lydia opens new territory for practitioner Desiree, and just as Nordlof opens post-Lore territory in our theory, Data can lend a powerful hand in fostering that maturity. Our own Lore-entrenched worldview was exposed as we triangulated with other disciplines, cultivated exploratory inquiry, analyzed data from multiple perspectives, and focused on learning outcomes. There may be yet other ways for unfettered Data to shift unexamined frameworks in our field. As RAD research comes of age in writing center studies, let's pause to listen to what we hope will become increasingly field-aggregated, increasingly complex, increasingly recalcitrant but always interesting: Let's listen deeply to Data. If we listen deeply enough, it will teach us not only how to research but also how to teach.
APPENDIX A: Revised Bloom's Taxonomy (Heer, 2012)
Adapted for Deep Thinking Questions (16)

Type of Knowledge

Cognitive Process

                     FACT                      CONCEPT
                     (Know of)                 (Know that)

RECALL                   We didn't use these categories because we
(Retrieve relevant     determined they didn't meet our criteria for
prior knowledge)                      deep thinking.

UNDERSTAND               We didn't use these categories because we
(Construct meaning     determined they didn't meet our criteria for
from facts)                           deep thinking.

APPLY                Idea X applies to this    This example really
(Apply concept or    example.                  demonstrates X
process)                                       concept.

ANALYZE              X happened in             Y may have partially
(See relationship    relationship to Y.        caused X.
between parts)

EVALUATE             X and Y are the           I think example X
(Make criterion-     relevant facts for my     adequately
based judgments)     conclusion.               demonstrates point Y.

CREATE               Taken together, A and     The reason I think A
(Use concepts or     B really mean C.          and B mean C is
processes to make                              because X and Y.
new knowledge)

                     PROCESS                   META
                     (Know how)                (Know that you know)

RECALL                   We didn't use these categories because we
(Retrieve relevant     determined they didn't meet our criteria for
prior knowledge)                      deep thinking.

UNDERSTAND               We didn't use these categories because we
(Construct meaning     determined they didn't meet our criteria for
from facts)                           deep thinking.

APPLY                I will add examples to    I know when I should
(Apply concept or    show that I understand    add examples to
process)             X.                        demonstrate that I can
                                               apply theory.

ANALYZE              To analyze causes, I      I don't have enough
(See relationship    listed these possible     strategies for
between parts)       causal factors.           analyzing deeply.

EVALUATE             My conclusion would be    I lack good strategies
(Make criterion-     better if it didn't       for creating strong
based judgments)     repeat what I already     conclusions.
                     said.

CREATE               When I revise my          I need to try strategy
(Use concepts or     conclusion, I plan to     Q next time I draft a
processes to make    add C as a new            conclusion.
new knowledge)       interpretation.

APPENDIX B: Elicitory Prompts (Questions) Taxonomy

PROMPT FUNCTIONS

Social Coordination
Conversational Control
  Test Question
  Non-Interrogatives
Knowledge Gap
  Common Ground
  Recall
    Facts, Concepts, Processes, Metacognition
  Deep Thinking
    Apply
      Facts, Concepts, Processes, Metacognition
    Analyze
      Facts, Concepts, Processes, Metacognition
    Evaluate
      Facts, Concepts, Processes, Metacognition
    Create
      Facts, Concepts, Processes, Metacognition

PROMPT FRAMES

Interrogative [Who, What, When, Where, Why]
Closed [Do, Will, Could]
Hypothetical [What if]
Imperative [Tell me X.]
Implied [I wonder what X means]
Open Ended [Any interrogative w/ pause; Incomplete Utterance (DIU)]
Choice [Is it X or Y]
Tag [You mean X, don't you?]

PROMPT RESPONSES

Postpone answer
Ignore
Answer with an elicitory device
Answer a different prompt
Answer with cognitive move
  Apply [Facts, Concepts, Processes, Metacognition]
  Analyze [Facts, Concepts, Processes, Metacognition]
  Evaluate [Facts, Concepts, Processes, Metacognition]
  Create [Facts, Concepts, Processes, Metacognition]


Acknowledgement

For partial support on the original research, I thank the International Writing Centers Association for a research grant and research assistants Kellie Cams and Pippa Hemsley. For manuscript support, I thank Kelly Helms, Mary Wislocki, Paula Gillespie, and one other anonymous WCJ reviewer. For professional development during the summer WCJ virtual writing retreat and many times before and after, I extend deep appreciation to the editors of WCJ.

References

Ashton-Jones, E. (1988). Asking the right questions: A heuristic for tutors. Writing Center Journal, 9(1), 29-36.

Babcock, R. D., Manning, K., Rogers, T., & Goff, C. (2012). A synthesis of qualitative studies of writing center tutoring, 1983-2006. New York, NY: Peter Lang.

Babcock, R. D., & Thonus, T. (2012). Researching the writing center: Towards an evidence-based practice. New York, NY: Peter Lang.

Blau, S., Hall, J., & Sparks, S. (2002). Guilt-free tutoring: Rethinking how we tutor non-native-English-speaking students. Writing Center Journal, 23(1), 23-44.

Brooks, J. (1991). Minimalist tutoring: Making the student do all the work. Writing Lab Newsletter, 15(6), 1-4.

Dillon, J. T. (1978). Using questions to depress student thought. The School Review, 87(1), 50-63.

Driscoll, D. L., & Perdue, S. W. (2014). RAD research as a framework for writing center inquiry. Writing Center Journal, 34(1), 105-133.

Ehrlich, S., & Freed, A. (2010). The function of questions in institutional discourse: An introduction. In A. Freed & S. Ehrlich (Eds.), Why do you ask? The function of questions in institutional discourse (pp. 3-19). New York, NY: Oxford University Press.

Gillespie, P., & Lerner, N. (2007). Longman guide to peer tutoring (2nd ed.). New York, NY: Longman.

Graesser, A. C., & Person, N. K. (1994). Question asking during tutoring. American Educational Research Journal, 31(1), 104-137.

Grutsch McKinney, J. (2013). Peripheral visions for writing centers. Logan, UT: Utah State University Press.

Heer, R. (2012). A model of learning objectives. Retrieved April 27, 2014, from http://www.celt.iastate.edu/teaching/RevisedBlooms1.html

Holmes, J., & Chiles, T. (2010). "Is that right?" Questions and questioning as control devices in the workplace. In A. F. Freed & S. Ehrlich (Eds.), Why do you ask? The function of questions in institutional discourse (pp. 187-210). New York, NY: Oxford University Press.

Johnson, J. B. (1993). Re-evaluation of the question as a teaching tool. In T. Flynn & M. King (Eds.), Dynamics of the writing conference: Social and cognitive interaction (pp. 24-40). Urbana, IL: National Council of Teachers of English.

Koshik, I. (2010). Questions that convey information in teacher-student conferences. In A. F. Freed & S. Ehrlich (Eds.), Why do you ask? The function of questions in institutional discourse (pp. 159-186). New York, NY: Oxford University Press.

Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.

Lerner, N. (2014). The unpromising present of writing center studies: Author and citation patterns in The Writing Center Journal, 1980 to 2009. Writing Center Journal, 34(1), 67-102.

Mackiewicz, J., & Thompson, I. K. (2015). Talk about writing: The tutoring strategies of experienced writing center tutors. New York, NY: Routledge.

McAndrew, D. A., & Reigstad, T. J. (2001). Tutoring writing: A practical guide for conferences. Portsmouth, NH: Heinemann.

Murphy, C., & Sherwood, S. (2011). The St. Martin's sourcebook for writing tutors (4th ed.). Boston, MA: Bedford/St. Martin's.

Nordlof, J. (2014). Vygotsky, scaffolding, and the role of theory in writing center work. Writing Center Journal, 34(1), 45-66.

Pomerantz, J. (2005). A linguistic analysis of question taxonomies. Journal of the American Society for Information Science and Technology, 56(7), 715-728. Retrieved from http://doi.org/10.1002/asi.20162

Rafoth, B. (2005). A tutor's guide: Helping writers one to one (2nd ed.). Portsmouth, NH: Heinemann.

Raymond, G. (2010). Grammar and social relations: Alternative forms of yes/no-type initiating actions in health visitor interactions. In A. F. Freed & S. Ehrlich (Eds.), Why do you ask? The function of questions in institutional discourse (pp. 87-107). New York, NY: Oxford University Press.

Roberts, G. L., & Bavelas, J. B. (1996). The communicative dictionary: A collaborative theory of meaning. In J. Stewart (Ed.), Beyond the symbol model (pp. 135-160). Albany, NY: SUNY.

Ryan, L., & Zimmerelli, L. (2009). The Bedford guide for writing tutors (5th ed.). Boston, MA: Bedford/St. Martin's.

Sarangi, S. (2010). The spatial and temporal dimensions of reflective questions in genetic counseling. In A. Freed & S. Ehrlich (Eds.), Why do you ask? The function of questions in institutional discourse (pp. 235-255). New York, NY: Oxford University Press.

Shamoon, L. K., & Burns, D. H. (1995). A critique of pure tutoring. Writing Center Journal, 15(2), 134-151.

Speer, S. A. (2010). Pursuing views and testing commitments: Hypothetical questions in the psychiatric assessment of transsexual patients. In A. F. Freed & S. Ehrlich (Eds.), Why do you ask? The function of questions in institutional discourse (pp. 133-158). New York, NY: Oxford University Press.

Stokoe, E., & Edwards, D. (2010). Asking ostensibly silly questions in police-suspect interrogations. In A. Freed & S. Ehrlich (Eds.), Why do you ask? The function of questions in institutional discourse (pp. 108-132). New York, NY: Oxford University Press.

Thompson, I. (2009). Scaffolding in the writing center: A microanalysis of an experienced tutor's verbal and nonverbal tutoring strategies. Written Communication, 26(4), 417-453. Retrieved from http://doi.org/10.1177/0741088309342364

Thompson, I., & Mackiewicz, J. (2013). Questioning in writing center conferences. Writing Center Journal, 33(2), 37-70.

Tienken, C. H., Goldberg, S., & DiRocco, D. (2010). Questioning the questions. Education Digest: Essential Readings Condensed for Quick Review, 75(9), 28-32.

Wollman, L. F. (2012). Research paradigms. Retrieved from http://zimmer.csufresno.edu/~donnah/Research%20Paradigms.ppt

(1) Unless referring to the entire field, the "we" referenced throughout this manuscript is primarily me and my now-alumna research partner, Michelle Wallace. Although Michelle was not involved in authoring this article, her intellectual labor deeply informs it.

(2) See Appendix A for a version of this taxonomy adapted to writing center dialogue.

(3) I define "paradigm" as a set of assumptions that influences research methods and theoretical approaches as strongly as the flat-earth paradigm did in its day. Although "paradigm" is the preferred vocabulary in the sciences, I use equivalent terminology interchangeably, including "worldview," "framework," and "schema."

(4) Not all paradigms anchor binaries along a continuum, but the D/ND paradigm does.

(5) For more on epistemic stance questions, see research studying so-called "silly questions" (Stokoe & Edwards, 2010) and yes/no declarative and yes/no interrogative questions (Raymond, 2010) to reveal how both questioners and answerers perceive where knowledge resides.

(6) For more on hypothetical and reflective questions, see research studying how psychiatrists use hypothetical questions as a way to test transsexuals' commitment to gender reassignment surgery (Speer, 2010), and how medical practitioners use both hypothetical and reflective questions to test patients' readiness for genetic testing (Sarangi, 2010).

(7) For more on disruptive questions, see research studying questions asked by what Janet Holmes & Tina Chiles (2010) call "non-primary" speakers to reveal how the institutional political landscape constrains how questioners and answerers do questioning (see also Ehrlich & Freed, 2010; Sarangi, 2010).

(8) Neal Lerner offers a sobering, evidence-based investigation on the lack of "biodiversity" in writing center studies, a field he finds disappointingly insular (2014, p. 68).

(9) IR stands for Initiation, Response; IRE adds Evaluation; IRF stands for Initiation, Response, Follow-up.

(10) See Appendix B for what is probably the twelfth revision of our initial taxonomy. Although it's the one we eventually used, we would likely revise it more before using it again.

(11) Mackiewicz & Thompson (2015) extend the knowledge-gap classification to questions writers ask. However, outcomes--that is, how tutors actually responded--were, to our knowledge, not considered in the process of classifying.

(12) Note that Graesser & Person's (1994) question taxonomy, developed for the purpose of automating tutoring in STEM fields, has, with slight adaptation, played a substantial role in writing center research on questions. We find it remarkable and disappointing that less useful aspects of their taxonomy were forwarded, while what we considered the most profound aspect, the "deep-reasoning" distinction, was not.

(13) I interchangeably use "lore bias" and "confirmation bias." Conceptually, both mean that data function primarily to confirm what researchers assume they will find.

(14) As proposed by Haswell (2005), RAD research is defined as replicable, aggregable, and data-supported.

(15) Special thanks to Mary Wislocki for calling this parallel to my attention.

(16) Taxonomy contains hypothetical but representative examples as used in our questions research.

Once a peer tutor, Roberta D. Kjesrud now directs the writing center portion of the Research-Writing Studio in Western Washington University's Learning Commons. She has published in The Writing Center Journal, Writing Center Perspectives, and The OWL Construction and Maintenance Guide, and she has served as president of both the Pacific Northwest Writing Centers Association and the International Writing Centers Association. In addition to her abiding interest in writing conference outcomes, Roberta's scholarly interests center around integrating academic literacies, studio-based learning pedagogies, and the effects of space on learning.
Figure 1: Taxonomy of Current Empirical Research on
Questions in Writing Center Dialogue

                   DIRECTIVE (Bad)            NON-DIRECTIVE (Good)
                   QUESTIONS                  QUESTIONS

Question type      Tag, Leading, Test,        Open-ended
                   Rhetorical, Closed

Evaluative         Controlling,               Genuine, Real,
language           Non-collaborative,         Student-centered,
associated         Teacher-centered,          Collaborative
                   Testing, Depress thought,
                   Threatening, Wrong,
                   Pose cognitive burden,
                   Intrusive, Dishonest