
Tell me what you really think: Lessons from negative student feedback.

Abstract

This study investigates negative feedback received via exit surveys completed by students who participated in over 11,000 writing center consultations. Although the feedback was overwhelmingly positive, the researchers mined the negative comments for patterns and note the value of negative feedback in helping writing centers assess and improve their practice. A key finding of the study is the prevalence of negative comments regarding what the researchers term "non-directive non-productivity": students' perception that some consultations guided by principles of non-directivity improve neither their product nor their process.

It's common for writing centers to incorporate some system for generating and processing student feedback. However, negative or critical feedback is rare, in part because what we do works so well. Students are overwhelmingly complimentary about the help they receive at the writing center (for example, Bromley, Northway, & Schonberg, 2013). Because negative feedback is so rare, it is commonly used for in-house training and quality control but seldom for identifying and addressing larger issues with writing center practice and theory; rarely does writing center studies research focus on negative feedback (Morrison & Nadeau, 2003). Fortunately for this line of inquiry, the writing center at The University of Texas at Austin is one of the largest collegiate writing centers in the country. We employ more than 80 part-time consultants during the long semesters. In the 2013-2014 academic year, our center conducted more than 11,000 consultations. Like many writing centers, we ask our student writers to complete a short exit survey designed to gauge our performance and consultee satisfaction. Over the years, we have collected a wealth of student feedback. While predictably the overwhelming majority (over 99%) of student feedback has been positive, the very size of our dataset has enabled us to make meaningful observations about negative student feedback. Parsing thousands of student-client responses, we discovered patterns in the rare negative feedback that illuminate student concerns about consultations.

One of the key patterns that we discovered in this dataset of negative comments is what we call non-directive non-productivity (NDNP). We define NDNP as well as our general methodology in greater detail below. It is beyond the scope of this paper to argue the efficacy of non-directive consultation practices; we are simply observing a pattern in student feedback. It is sufficient for our purposes now to define this term as describing a perception students may have that a consultation guided by the principles of non-directivity has improved neither their writing product nor their process. Concerns about non-productivity represent our most substantial subset of negative student feedback, and our writers' most consistent complaints cannot be readily addressed using wholly conventional ideas about the directive/non-directive continuum. As Roberta D. Kjesrud (2015) points out, even accumulated RAD investigations of tutor questioning far too often replicate a reductive and unhelpful directive/non-directive paradigm. This slippage between students' needs and the writing center lore that informs much of consultation practice may help us to better understand the need for more outcomes-based models for consulting, such as those espoused by Kjesrud (2015).

In our data set, NDNP expresses itself most succinctly in comments like, "This was a waste of time." Based on our findings, it appears that emphasizing non-directivity without simultaneously emphasizing productivity in consultations may leave some students disappointed. These findings affect the writing center community in both theoretical and practical ways. First, our current study of client feedback interrogates the principle of non-directivity and suggests a need for explicit objectives in writing center consultations, similar to those called for by James H. Bell (2000). This study encourages directors to prepare consultants in ways that empower them to address student concerns for productivity while still maintaining a focus on non-directivity. We also anticipate future studies on negative feedback that inform writing center practices writ large, not just in-house training. Attention to student concerns may also yield practical implications for how writing center directors elicit all kinds of feedback from students.

Negative Feedback: An Underused Resource

For decades, writing centers have used positive student feedback to justify their existence and the value of non-directive, non-evaluative practice. For example, Muriel Harris (1995) published a blanket defense of writing center practice titled "Talking in the Middle: Why Writers Need Writing Tutors." To support her argument for the value of "writing tutors," she largely drew on examples of positive student feedback from exit surveys (p. 30). While re-evaluating why we collect so much student data at our writing center, we questioned the assumption that we primarily need student feedback to provide justification to the University administration for our existing programs. In addition to being an important part of our annual report to the administration, positive student feedback was used to encourage our consultants. Every week, we sent a batch of positive comments from the exit survey to our consultants. Typical examples of positive student feedback include the following:
The Writing Center was extremely helpful! I highly recommend that other students come here for help!

Tom L. is an excellent consultant!

I don't feel intimidated or feel stupid for coming here at all!!!:)

My consultant (blonde hair, blue eyes, thin and tall) was really helpful!! Also, I don't remember his name but he is soo cute:)

My session was extremely helpful. Not only do I feel like my paper will be a lot better now, but also my writing in general.


These positive comments were great for morale and reinforcing the lesson that writing center practice really works, but we started to wonder why we weren't using this robust feedback dataset for something more substantial. In short, we realized that we had an underutilized resource at our fingertips.

At the time, we were using a version of the exit survey developed by our director emeritus in 2010. The survey consisted of ten statements (the content of which we will describe in detail later on), each accompanied by an eight-point Likert scale. Between September 2010 and May 2014, we collected 41,897 individual surveys. In order to narrow our search, we focused on the statement that received the greatest number of negative responses: "The consultant helped me meet my needs as a writer." Of the 41,897 responses to this statement, we found 357 negative responses. By comparison, "The consultant made me feel welcome at the writing center" received only 103 negative responses.

Out of the 357 negative responses to "The consultant helped me meet my needs as a writer," only 103 included written comments. To reiterate, of our nearly 42,000 responses, only 103 included negative written comments. That's roughly 0.25% of survey responses. The positive feedback vastly outweighed the negative, but a small number of negative comments could be indicative of broader discontent. In other words, negative comments are like cockroaches: if you actually see one, you can bet there are a lot more where that came from. With such a large dataset to work from, we were able to populate a satisfactory pool of negative comments and discern meaningful trends.

First we will examine the value of looking at negative feedback in writing center studies in general; then we will discuss our findings and some of our preliminary interpretations of those findings. We found that negative feedback can be classified in three ways: (1) suggestions for administrators regarding policies, unprofessional behavior, or consultant expertise; (2) perceptions of emotional distance; and (3) complaints about productivity stemming from non-directive practice and consultants ignoring writers' goals (NDNP). The category percentages do not necessarily add up to 100 because students often indicated two or more concerns in a single comment, which we double-coded; for example, as describing both non-productivity and unprofessional behavior. The key point we wish to emphasize is that these writers were invested enough in the feedback process to actually register complaints about our practice that forced us, among other things, to address issues related to NDNP.

Reflections on Negative Feedback

Collecting data from students and consultants is, itself, not an uncontroversial step. Bell & Frost (2012) have pointed out that "academic support units' discomfort with quantitative assessment, and the complexity of a comprehensive isolation of the factors that make writing centers work" have contributed to this unease with assessment data (p. 17). Writing centers have had a long tradition of being "data-shy," often because of a lack of resources and training in data collection. Nonetheless, as the discipline of writing center studies has blossomed, attitudes towards collecting feedback have changed. Collecting responses from writers, consultants, and other stakeholders has become far more integral to writing center practice since the early days when Neal Lerner (1997) had to justify the use of quantified methods (for instance, see Schendel & Macauley, 2012). Even though writing center studies has begun to embrace the practice of collecting and assessing data, there is still a lack of consensus about what constitutes reliable and valid data.

Wildly different methods of collecting feedback have prompted Jo Ann Griffin, Daniel Keller, Iswari P. Pandey, Anne-Marie Pedersen, & Carolyn Skinner (2006) to complain that "the focus on the local context in writing center discourse, [ignores] the benefits of the national perspective," in spite of the fact that "institutions often come in similar shapes and forms, resulting in similar sets of choices for directors" (pp. 6-7). There is no national standard for how writing centers collect feedback, despite the fact that we inhabit similar institutional positions and encounter similar challenges. It is rare that even two centers share the same feedback procedures. The data we collect is relentlessly localized.

Miriam Gofine (2012) determined that overall student feedback has only limited research validity in terms of outcomes. A writing tutor herself, Gofine provides an excellent review of literature on writing center feedback. Among a spectrum of feedback ambiguities, Gofine points to the example of Bredtmann, Crede, & Otten (2013), who discovered that while there was no change in the grades students received on assignments after visiting the writing center, all of these students indicated "strong student satisfaction with writing center tutorials" (Gofine, 2012, p. 44). The problem of overly positive student feedback has called into question the validity of data derived from student satisfaction surveys and the efficacy of unexamined survey design. Students who use our services, especially those surveyed immediately after a consultation, feel a powerful "glow" from engaging in one-on-one discussion about their writing. Linda Ringer Leff tells the story of one batch of feedback: "On one item, every one of the nearly two hundred students circled the highest possible Likert scale number. The evaluation consultant insisted on throwing out those data" (as cited in Bell, 2000, p. 9). The overwhelmingly positive responses were considered suspicious and the outside researcher was skeptical that such a result could be valid. When Leff related this story at an IWCA conference, she records that the audience was "shocked" (as cited in Bell, 2000, p. 9). Should we take the overwhelmingly positive feedback that writing centers receive as indicative of the quality of our services, or is such feedback potentially suspect? Questioning the reliability and validity of positive feedback has inspired us to focus on the inverse question: What do we make of the minuscule amount of negative feedback we receive? 
In a sense, the elusiveness of negative comments suggests a general student reluctance to criticize, indicating to us that negative comments may represent the squeakiest tip of the iceberg, to mix metaphors. Negative feedback may be rare, partly because survey design can prime students to provide mostly positive responses, but it should be taken seriously precisely for that reason. Against a backdrop of positive feedback, negative feedback can be most illuminating.

One characteristic of this negative feedback worth noting is that some comments are positive even when they are negative. When students give our services a low rating, their comments are frequently apologetic. These students are not embittered or hostile; they hedge their negative feedback. Roughly a third of the negative comments indicated that the writer considered the negative experience an exception to the quality of consultation they typically receive. Many of these students seem almost apologetic about giving negative feedback. Said one student, "i [sic] have gotten a lot of good help here before but today i [sic] was really confused and it just made me need more help but the first person i [sic] saw for help on my paper did great." Even the more acrimonious comments referenced students' previous good experiences: "If I hadn't been here before and had really good consultants, I probably wouldn't come back or recommend it." Another said, "I will come back to the writing center only because I had [a] positive experience the first time." These and other comments like them indicate that, in spite of isolated bad experiences, repeat visitors want to express their overall satisfaction.

In addition to suggesting that negative experiences were the exception to the rule, students also hedged their negative comments by making suggestions for improvement. They might suggest, as one student did, "If UWC had a special person to help with different types of essay assignments I would think it will be more affective [sic]." Or "If the beginner consultant's [sic] could get better training," as another student opined, "I think they would be better prepared." Many of these suggestions were directed towards the administration of the writing center rather than to individual consultants. Comments about clerical practices or writing center policy, along with training to improve professional behavior and expertise, make up our first category of comments. Again, across all categories of criticism, student complaints are hedged with expressions of general satisfaction and a desire to improve the writing center.

Although they hate to complain, students' complaints tend to fall into one of three categories: administrative issues, unmet emotional needs, or consultation non-productivity (what we have previously abbreviated as NDNP). The remainder of this paper will focus on student concerns regarding NDNP, because they represent the most substantial subset of negative feedback and have rich implications for writing center theory and training practices.

Non-Directive Non-Productivity

In our urgency to focus on developing, as Stephen M. North (1984) says, "better writers, not better writing" (p. 438), we may lose sight of the fact that learning to improve writing improves writers. Our feedback data suggest that students who come in with expectations for improving their projects may feel stymied by consultations characterized by what we have termed "non-directive non-productivity."

NDNP was the largest category in our database of student complaints, comprising 38.8% of negative student comments (40/103). These comments concern a lack of concrete consultation objectives and dissatisfaction with the work accomplished during the consultation. Students at the end of a consultation may feel like there is nothing to show for their work, not even material changes in their writing. "I left the writing center with no notes, [no] thesis, and no better understanding of my argument," wrote one student. Comments like these reinforce the importance of each writer developing a clear plan of action by the end of the consultation (Neaves, 2011). Pushing students to articulate such a plan in writing may feel overly directive to some, and such an approach may not fit the needs of every student. But our findings suggest that clearly establishing some kind of "take-home" message is an important part of meeting student expectations of productivity.

The fact that some students are leaving the writing center without a clear revision strategy or plan of action suggests that consultants may be falling into the trap of equating non-directivity with indirect or vague feedback. Oftentimes, concerns about non-productive consultations are directly related to the types of questions consultants ask. Rebecca Day Babcock, Kellye Manning, Travis Rogers, Courtney Goff, & Amanda McCain (2012) devote a section of their Synthesis of Qualitative Studies of Writing Center Tutoring, 1983-2006 to "Questioning." The authors focus on the dangers of "closed-ended questions," which attempt to elicit specific responses from writers, as compared to open-ended questions "to which only the tutee has the answer" (p. 43). The latter are clearly preferable, but Babcock et al. overlook the danger of questions that aspire to be open-ended but fail because they leave the writer with little potential for a productive, generative answer. Though the authors admit that "Sometimes questions became frustrating" (p. 44), they fail to address the potentially legitimate concerns about productivity that might cause such frustration. By asking students excessively open-ended questions ("So what do you think about your paper?"), the consultant makes space for an open answer but risks leaving the writer stymied.

Our exit survey results reveal that writers are often dissatisfied with excessively open-ended questions. Such indirect methods in one consultation led to a lengthy comment:
I asked the Instructor [sic] very specific question regarding what he thought the specific parts of the poem meant, so that I could understand it better. However i [sic] received, quite literarlly [sic], every single time the question 'what do you think.' In the end i [sic] felt that coming to the writing center only resulted in being subjected to that one question, and it was as good as doing the assignment on my own, without any help.


Of course the consultant wasn't an instructor and wasn't in any position to just tell the student what the "poem meant," but this comment may indicate a missed opportunity to, at the very least, address the how and why of the non-directive approach. This comment reveals a serious breakdown in communication between a consultant who is trying to remain non-directive and a writer who is seriously concerned about productivity. Rather than focusing on the writing process, or even a given product, this writer was concerned with not "doing the assignment on my own." In this case, the assignment was some form of literary analysis, and the writer was trying to get the consultant to do the interpretive work. According to the writer, the consultant repeatedly fell back on a question that forced them to take ownership of the interpretive work but did little else to improve the product or process of writing. That question, according to the student, was a painfully reiterated "What do you think?"

Even in the absence of vague questions, many students misinterpret consultant non-directivity as a willful reluctance to be helpful. One comment pointed out, "I felt as if the consultant was very concerned with not telling me what to do which inhibited her to give me actual feedback." Another comment echoes this concern: "I felt like my consultant was just waiting for me to come to an answer on my own and I don't feel like I received very much direct assistance on questions that I had." As these comments reveal, preoccupation with not being directive can lead to extremes of non-productivity.

NDNP has emotional implications as well. Excessive non-directivity can lead to feelings of isolation or even hostility when students feel abandoned or sense that consultants are withholding valuable information, playing keep-away with their expertise. One student wrote that the "consultant made it seem like I was going through the paper on my own." Another wrote that "it was almost as if I had to do most of my editing. ... I could have done that in my own time. I needed another pair of eyes and suggestions." Other students may not feel entirely alone, but they may be frustrated by the feeling that the consultant is in some way an expert but is purposefully withholding that expertise. One student writes that the low quality of feedback could "be provided by any classmate and really defeats the purpose of scheduling an appointment at the Writing Center." If students feel that the consultant is purposefully withholding when they give, as one comment indicates, "abstract suggestions and deliberately ignor[e] mistakes," they may be frustrated with what they perceive as an emotionally insensitive I-know-something-you-don't-know game.

Some of those who criticize non-directivity as non-productive may not be aware of the larger aims of the writing center and our philosophies, but even among those who see non-directive policy as a good thing in itself, we find frustration with a non-productive interpretation of it. One such comment begins, "I know you all cannot write my paper for me, but [...] I wish I could have had someone tell me this is how I would put it in order, so [sic] I have something to go by." Another student wrote, "I understand the concept of non-evaluative, but that doesn't mean non-critical approach. [...] I know they aren't going to edit for me, of course not, but what I do expect is advice." The comment starts off by commending the typical practice of the writing center, but the frustration settles in as the comment continues: "I couldn't get another point of view just 'how do you feel' on everything, and it was a huge waste of time."

In order to help writers feel that the consultation wasn't a "waste of time," it's worth considering what kind of feedback consultants can give while still empowering students in their writing process. We at The University of Texas often train our consultants in the magic phrase of non-directive peer tutoring: "As a reader." This might be one corrective to a non-productive session. As consultants feel confident expressing their individual and perhaps quirky responses, and students take those suggestions with the requisite grain of salt, both can feel more comfortable with "actual feedback." Another possibility might be to provide a range of options from which students can choose (e.g., "This idea doesn't seem to fit: you can cut it altogether, provide a clear connection to this section, or refigure your thesis to accommodate this idea").

We recognize that our sample of comments looked at a very specific group--students who left the consultation with an overall negative feeling, which was indicated on the end-of-consultation Likert scale. It may be possible to build an even bigger corpus of negative student comments by including those students who, generally, indicate satisfaction but still have concerns about writing center policy and professionalization, emotional responsiveness, or productivity. Serendipitous skimming of our database of more than 11,000 responses revealed that negative comments can occur even when students indicate an overall positive experience at the writing center. Further studies may expand to include negative comments made in conjunction with positive Likert-scale responses; however, we did not determine a formal methodology for finding and assessing such data. Other researchers might employ natural language software to seek negative words and phrases hidden among the responses that indicated a generally positive response to the tutoring session. For instance, just this last year, one student who indicated that she "strongly agreed" with all positive statements in the exit survey nevertheless said, "I wish I could have had a little more help with how exactly to organize my paper. I know you all cannot write my paper for me, but my cover letter had a lot of topics to cover and I wish I could have had someone tell me this is how I would put it in order. So I have something to go by." Such comments indicate a concern with NDNP but weren't captured in our original sample, where we focused only on those comments that accompanied negative responses on the Likert scale.
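A crude version of such a screen can be built with simple keyword matching, well short of full natural language processing. The sketch below is purely illustrative: the cue words, the record format, and the treatment of Likert responses of 6 or higher (on an eight-point scale) as "positive" are assumptions for the example, not part of our actual instrument or methodology.

```python
# Illustrative sketch: surface comments that contain negative cue words
# even when the accompanying Likert response is positive. The cue list,
# record format, and positivity threshold are assumptions for this example.

NEGATIVE_CUES = {"waste", "unhelpful", "confused", "disappointed",
                 "frustrat", "wish", "however"}

def flag_comment(comment: str) -> bool:
    """Return True if the comment contains any negative cue substring."""
    text = comment.lower()
    return any(cue in text for cue in NEGATIVE_CUES)

def screen(responses):
    """Yield comments from nominally positive responses (Likert >= 6)
    that nonetheless contain negative cues and merit a human read."""
    for r in responses:
        if r["likert"] >= 6 and flag_comment(r["comment"]):
            yield r["comment"]

responses = [
    {"likert": 8, "comment": "Extremely helpful session!"},
    {"likert": 7, "comment": "Great overall, but I wish I could have had "
                             "more help organizing my paper."},
]
flagged = list(screen(responses))
print(flagged)  # only the second comment is flagged for human review
```

A real study would still want human coders to verify every flagged comment; keyword matching is noisy (a comment like "this was not a waste of time" would be falsely flagged), so such a screen serves only to narrow the pool for manual coding.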

Implications

Structural and institutional changes can affirm the value of negative comments in the writing center. Changes about when we ask for advice and how we solicit feedback can empower both students and consultants to voice their concerns with the assurance that their feedback (positive and negative) is valued. Currently, we at The University of Texas at Austin have implemented a simple mechanism by which consultants can provide immediate feedback to administrators about sessions that feel unproductive or raise other concerns. Like many writing centers, we require our consultants to fill out an exit report on what occurred during the consultation. We have added an option to this report that allows consultants to voice concerns directly to administrators. Once provided with this information, administrators can serve as conduits for channeling both student and consultant concerns about individual consultations. At times this has even included individual meetings between administrators and "problematic" students, in which the administrators articulate the policies and practices of the writing center. Whether or not similar methods would prove equally effective for all writing centers, administrators should recognize that student feedback can and should have a role in directing the continued growth of individual consultants and the center as a whole.

While responding to negative feedback in our training is crucial, the most important implication of our study is that writing center administrators need to find creative ways to solicit and respond to negative feedback. Psychologist Norbert Schwarz (1999) observed that in self-reports about behavior and attitudes, the wording, format, and context of questions shape the answers. Taking our cue from Schwarz, we revised questions on our exit poll to improve clarity and solicit more helpful feedback about productivity. First, we made our exit poll more comprehensible. There were some statements on the survey, such as "I left with a better sense of how I can make this a writing project, and how to make ones like it more effective," that were difficult for even us to parse. If you ask confusing questions, you will not get clear responses. While this may seem painfully obvious, we found we had not been regularly reviewing the content of our exit poll. Thousands of student writers had been taking the same poll for years, but we had not considered that the exit poll needed regular revisiting and revising until we began attempting to interpret student feedback. Related to this, we made sure to ask questions to which we actually wanted the answers. Since we had discovered a lingering concern about productivity, we now directly inquire whether student writers' goals were met. In doing so, we hope to collect more candid feedback and encourage students to voice more than just praise. We hope that over the next academic year this question will allow us to assess how effective our changes in training and policy have been in addressing student concerns about non-productivity.

Finally, we began to include more open and less leading questions to evaluate consultant performance and the experience of the writing center as a whole. After realizing the benefits of receiving negative feedback, we revised our exit poll to be more inviting of all perspectives. For instance, our previous exit survey mentioned that we would share writer feedback with our consultants: "Feel free to comment on any aspect of your consultation below. We share your remarks with our consultants to help them do a better job!" Students might not "feel free" to be forthright if they feel as though they are being asked to make a judgment on the person with whom they just met. Of course, successful consultations include a variety of factors from entering the door to leaving, none of which are apparent in this framework. Now we invite writers to "Please comment on any aspect of your experience at the University Writing Center. Tell us what you found helpful or where we might improve." The benefit of this change is that it makes the feedback impersonal and directly solicits constructive criticism. When an exit poll is clear and specific and welcomes a range of feedback, writers are more likely to be forthright with their opinions.

Additionally, we have reassessed when we ask for feedback. Bell's (2000) article, "When Hard Questions Are Asked: Evaluating Writing Centers," finds that the longer the delay between a consultation and the survey (immediately after, two weeks after, or two months after), the less effusively positive the feedback. We wonder whether delaying initial administration of our survey would garner a more robust range of feedback.

It can be tempting to solicit only positive feedback. It makes our consultants happy, it impresses our administrators, and, frankly, we like it. However, we need to get more comfortable with negative feedback as we seek to discover areas for improvement and set a baseline for future longitudinal research that can provide a more holistic record of our performance. All feedback should be valued, even when it is negative and causes us to reassess our most basic assumptions.

Acknowledgment

We are deeply indebted to Vicente Lozano for collecting and maintaining years of data at the University Writing Center at the University of Texas, and to the UWC's Alice Batt, whose assistance on early versions of this research was invaluable. The reviewers and editors at WCJ provided thorough and thoughtful insight at all stages of the revision process. We are also, of course, beholden to the generations of UWC tutors and student writers who supplied the feedback at the root of our research.

References

Babcock, R. D., Manning, K., Rogers, T., Goff, C., & McCain, A. (2012). A synthesis of qualitative studies of writing center tutoring, 1983-2006. New York: Peter Lang.

Bell, D. C., & Frost, A. (2012). Critical inquiry and writing centers: A methodology of assessment. Learning Assistance Review, 17(1), 15-26.

Bell, J. H. (2000). When hard questions are asked: Evaluating writing centers. Writing Center Journal, 21(1), 7-28.

Bredtmann, J., Crede, C. J., & Otten, S. (2013). Methods for evaluating educational programs: Does writing center participation affect student achievement? Evaluation and Program Planning, 36(1), 115-123.

Bromley, P., Northway, K., & Schonberg, E. (2013). How important is the local, really?: A cross-institutional quantitative assessment of frequently asked questions in writing center exit surveys. Writing Center Journal, 33(1), 13-37.

Brooks, J. (1995). Minimalist tutoring: Making the student do all the work. In C. Murphy & S. Sherwood (Eds.), The St. Martin's sourcebook for writing consultants (3rd ed., 2008, pp. 168-173). New York: St. Martin's.

Gofine, M. (2012). How are we doing? A review of assessments within writing centers. Writing Center Journal, 32(1), 39-49.

Griffin, J. A., Keller, D., Pandey, I. P., Pedersen, A. M., & Skinner, C. (2006). Local practices, national consequences: Surveying and (re)constructing writing center identities. Writing Center Journal, 26(2), 3-21.

Harris, M. (1995). Talking in the middle: Why writers need writing tutors. College English, 57(1), 27-42.

Kjesrud, R. D. (2015). Lessons from data: Avoiding lore bias in research paradigms. Writing Center Journal, 34(2), 33-58.

Lerner, N. (1997). Counting beans and making beans count. Writing Lab Newsletter, 22(1), 1-3.

Morrison, J. B., & Nadeau, J. P. (2003). How was your session at the writing center?: Pre- and post-grade student evaluations. Writing Center Journal, 23(2), 25-42.

Neaves, J. (2011). Meaningful assessment for improving writing center consultations (Master's thesis). Western Carolina University, Cullowhee, NC.

North, S. M. (1984). The idea of a writing center. College English, 46(5), 433-446.

Schendel, E., & Macauley, W. J. (2012). Building writing center assessments that matter. Logan, UT: Utah State University Press.

Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54(2), 93-105.

About the Authors

Mary Hedengren was a Graduate Assistant Program Coordinator at the University Writing Center (UWC) at The University of Texas at the time of this research and then served as its first post-doc and Graduate Writing Coordinator. Currently, she teaches at the University of Houston-Clear Lake, where her research interests include disciplinarity and emerging writing identities.

Martin Lockerd worked as a Graduate Assistant Program Coordinator at the University Writing Center (UWC) at The University of Texas for two years. He is an Assistant Professor of English at Schreiner University, where he teaches literature and composition. His research interests include literary Modernism, digital humanities, and writing center administration.

Graph 1: Trends in Negative Feedback (data from pie chart): non-productivity of session, 44%; administrative complaints, 36%; non-emotionally responsive tutor, 20%.
COPYRIGHT 2017 University of Oklahoma

Article Details
Author: Hedengren, Mary; Lockerd, Martin
Publication: Writing Center Journal
Article Type: Report
Date: Mar 22, 2017
Words: 5,718