Writing center assessment: why and a little how.

Why should writing centers embrace rather than simply comply with external mandates for assessment? As all of us know, writing center directors are already overwhelmed with duties, and any free time needs to be spent on improving our services and training our tutors, not facing the "math anxiety" brought about by collecting and analyzing assessment data. Even more important, many of us may equate externally mandated assessment with external accountability to conservative institutions not particularly supportive of our process-based pedagogy. My purposes are to argue that writing centers should move beyond mere compliance with externally mandated assessment and to describe a very general plan for beginning to expand our assessment efforts. To fulfill our daily responsibilities, writing center directors spend most of our time being concerned about the services offered in our centers--from tutoring students ourselves, to handling complaints from faculty members or students, to training tutors. Routine assessment allows us to move beyond our daily concerns so that we can consider our services from a more global perspective and better plan improvements or justify what is currently done.

At least four benefits of externally mandated assessment for writing centers are apparent:

* Externally mandated assessment can make our effectiveness visible to administrators and, hence, increase our power and prestige on campus.

* Assessment involves our centers in a constant process of data collection and analysis and, hence, can enhance writing center research.

* The on-going collection and analysis of data increases the opportunities for reflective practice and brings reflection to the forefront of daily activities.

* Routine assessment is the intelligent, professional, and ethical thing to do.

These four benefits are not mutually exclusive: they all derive from the same set of activities and from each other.

I begin discussion of externally mandated assessment for writing centers with a description of each benefit and then include a brief history of student outcomes assessment, as mandated by accrediting agencies. Most of this article discusses how to develop an assessment plan. It describes the current assessment measures used most often by writing centers--use counts and satisfaction surveys, both of which provide important information about how our services are received by the students and faculty we support. I argue that, although use counts and satisfaction surveys are important and should be continued, writing centers also need to develop measures of student learning. Throughout, I include examples from Auburn University's English Center, the writing center I direct. (1)

Power, Research, Reflective Practice, and Professionalism

The four mutually reinforcing benefits of our participation in externally mandated assessment are the opportunities to make our effectiveness visible on campus, to expand our research agendas, to immerse ourselves in reflective practice as a daily habit, and to honor our professional obligations to our funders and to ourselves.

The Opportunity to Make Our Effectiveness Visible: In the special issue of The Writing Center Journal devoted to discussion of the future for writing centers at the beginning of the 21st century, Joan Mullin and Albert C. DeCiccio, editors at that time, asked several well-known writing center researchers about our continuing viability in the academy and the research community. Responses to the questions indicated that assuming we had ever been viable on our campuses (Brannon and North) or in our research (Kail) may be a mistake. Two other more recent articles published in Composition Studies point out that writing centers are marginalized not only among other entities on campus but also within composition studies generally (Lerner "Punishment"; Rohan). It is not difficult to find discussions of the low status and the lack of power writing centers have in their home institutions, in composition studies, and in the academy at large.

In her article in the millennial issue of The Writing Center Journal, appropriately titled "Preparing to Sit at the Head Table," Muriel Harris hopes that writing centers can become "recognized campus leaders whose vision of how learning environments should be structured has come to dominate educational thinking" (13). To achieve Harris's goal, writing centers must point to the value of our services and our effectiveness. Externally mandated assessment gives us that opportunity. If we avoid assessment altogether or allow our services to be assessed along with the writing programs we support, our effectiveness becomes invisible. To make our effectiveness known, we need to conduct meaningful assessments that focus on writing center services.

Telling the story of his 25-year involvement with assessment of composition programs, Edward M. White points to the practical benefits of taking control of assessment and the perceptions that come from assessment: Perceptions "determine resources ... for everything from new duplicators to faculty positions" ("The Opening" 306). In an earlier article, White equates gaining control over assessment with defining "what is valued in education" ("Power" 9), hence influencing institutional goals while defining programmatic ones. Also emphasizing the power of assessment, Brian Huot equates assessment with "progressive social action" because of its power "to disrupt existing social order and class systems" (7). White and Huot both clearly believe that the power from institutional and societal "naming" deriving from assessment extends far beyond its potential to improve student learning--in itself a good justification for conducting assessments. Assessment can lead writing centers to gain the power to define ourselves and to extend and revise our services. This power to choose derives not from the fear that if we do not take charge of our own assessment, someone will do it for us (Mullin) but from the confidence that our services are important to the university community and that writing centers are worth even larger investments of revenue and attention.

I learned early in my experience as a writing center director the power-building benefit of assessment. Although the English Department at Auburn University established a writing center in the late 1960s, few funds had been invested, and the English Center, as it came to be called, was staffed by graduate teaching assistants, who were not trained and barely supervised. When Auburn University, like most other universities, instituted a large English core required for graduation, use of the English Center increased, and the Department Head decided that the graduate teaching assistants needed more supervision. At that time--almost 10 years ago--faculty members in English were not willing to invest Department funds in developing student support services. The Department Head finally convinced the faculty to try a "pilot" year and allowed me to direct the Center. It was clear that I had to demonstrate the worth of the English Center services to my colleagues. The best way I knew to justify our existence was to collect usage data and conduct student and faculty satisfaction surveys. The assessment publicized the large increase in use during the one-year pilot and the high levels of satisfaction from both faculty and students. The result was that the English Center became a permanent unit in the English Department and has been allowed to continue developing student support services in reading and writing.

I presented the usage data and results of the surveys in internal grant proposals and conversations with the Dean and the Provost. As a result of its reputation, the Center received additional funding--expanding in size to take in the classroom next door, receiving new carpet and furniture, and becoming a focus of technological innovation. As use of English Center services increased to full capacity, I have been allowed great latitude in deciding how English Center services will be delivered--one-to-one tutorials in the Center, not in dorms and not exclusively through the Internet--and to whom--primarily undergraduates, with special concern for underprepared native-English speakers. Although I was hesitant at first and resentful at having to conduct an assessment while the established programs in English did not, the benefits have been substantial: the English Center now "sit[s] at the head table" in the English Department (Harris) along with the high-status undergraduate major and graduate student programs. Moreover, all of the programs in English are now required to conduct routine assessments. Our regional accrediting agency, SACS (Southern Association of Colleges and Schools), expects assessment data from all units on campus to be available, and Auburn has established a unit, the Office of Institutional Research and Assessment, to ensure compliance.

The Opportunity to Enhance Our Research: Besides being an area of research itself, assessment brings to the forefront important topics and questions for investigation (Huot). Our assessment plans are likely derived from research and, hence, will return to inform future research. In fact, it is sometimes difficult to distinguish assessment from research. Published in Assessing Writing, Teresa Thonus's recent study of writing center tutorials exemplifies this connection between assessment and research. The purpose of the study was to determine the characteristics of a "successful" writing center tutorial so that other tutorials might be evaluated according to the presence or absence of these characteristics. Taping tutorial sessions with six native and six non-native speakers of English, Thonus identified some linguistic features of the tutorials using conversational analysis techniques. Then, she interviewed both the tutor and the student in each tutorial to determine the tutorial's "success." In later interviews, Thonus asked the tutors and students to identify behaviors that they thought contributed to the success of the tutorials. Based on the conversational analysis and the responses in the interviews, Thonus was able to identify 10 attributes that appeared "necessary but not sufficient conditions for the success of tutorials in [her writing center] context" ("Tutor" 126).

Along with providing information useful for evaluating future tutorials, Thonus's study increases our knowledge about what goes on in tutorials and how students respond to tutors' behaviors. The data collection methods she uses to identify the attributes--taping sessions and interviewing--are common in writing center research. In fact, Thonus has published a similar study, also from her dissertation research, in The Writing Center Journal ("Triangulation"). Interestingly, Thonus does not use direct measures of student learning in her study. Instead her view of "successful" tutorials equates with student and tutor satisfaction. Having to settle for satisfaction as an outcome equivalent to success in tutorials demonstrates the importance of developing measures of student learning to push forward both assessment planning and research in writing centers.

The Opportunity to Increase Reflective Practice: The natural ebb and flow of data collection and analysis required for assessment foregrounds reflective practice (Schon), what Brian Huot refers to as a "two-way movement that can also be called dialectic" (168). According to Stephen D. Brookfield, reflective practice requires teachers to be researchers rather than interpreters or implementers of research conducted by others: "Through continuous investigation and monitoring of their own efforts," teachers, and by implication writing center directors and tutors, can develop their own "contextually sensitive theories of practice rather than importing them from the outside" (215). Unlike researchers gathering large amounts of data for quantitative analysis, reflective practitioners focus their data collection and analysis--their testing of the assumptions of their practice--primarily on individuals or small groups of students. With our primary service consisting of one-to-one tutorials, this emphasis on individual "cases" makes reflective practice particularly relevant and appropriate for writing centers.

Our assessment plan can act as a frame to give shape to the "messy problems" of daily practice and to encourage focus on certain activities. It can also provide a process for reflection. Hence, learning through observing and reflecting on practice becomes an organized building process, rather than simply trial and error (Hillocks). Although I am responsible for collecting the use counts and for administering the surveys, the English Center tutors all participate in determining questions to ask and interpreting survey responses. During our first few years of operation, when our growth was very rapid, the tutors requested weekly reports on use, a cause for celebration but also a chance to determine which times during the semester are the busiest.

The Opportunity to Fulfill Our Professional Responsibilities: In order to achieve professional status, writing centers have to assume the responsibilities required of other units on campus and in the academic community generally. We have to show that our services are effective through data collection and analysis rather than simply through anecdotes, even though anecdotes offer great stories of our successes. We need to demonstrate to ourselves and to others that the funds we receive are well spent. With the freedom of choice that comes from the power to develop and direct our own services comes the responsibility to constantly question and improve those services. Assessment is beneficial for writing centers because it leads us to assume the professional and ethical behaviors important not just for writing centers but for all of higher education.

The next section provides a brief history of the accelerating demand for assessment and the limited response of writing centers so far. It focuses on student outcomes assessment, the current form of assessment mandated by most accrediting agencies. Student outcomes assessment emphasizes student learning and development rather than relying on faculty credentials or campus resources as indicators of academic excellence (Jacobi, Astin, and Ayala; Palomba and Banta). For writing centers, this type of assessment requires us to demonstrate that students who use our services improve as writers.

Externally Mandated Assessment and Writing Centers: A Brief History

Student outcomes assessment allows units and the university as a whole to document effectiveness through a systematic process of setting goals and objectives (intended outcomes) and then measuring the attainment of those goals and objectives (actual outcomes). It is prescribed by accrediting agencies to provide results used "for continuous improvement" (WEAVE 2), but it is often associated with external accountability rather than internal improvement. Student outcomes assessment is the most recent version of large-scale externally mandated assessments based on learning objectives. These objectives-based assessments began in the early 1900s with educational testing research by E. L. Thorndike. One of the earliest and probably the best known and most innovative objectives-based assessments was designed for the 1930s Eight-Year Study by Ralph Tyler and his associates to evaluate the effectiveness of a new curriculum for the Depression-era students crowding public schools (Gredler; Worthen, Sanders, and Fitzpatrick). The mandate for student outcomes assessment came during the 1980s, growing out of the Reagan administration's push to hold educational institutions accountable for student learning and federal dollars. It continues today as "No Child Left Behind" and other accountability legislation edges upward from elementary and secondary schools to colleges and universities.

Along with the extensive time and energy student outcomes assessment will take from the writing center's focus on working one to one with students and the "math anxiety" some of us may have to face, writing center directors may be further put off by the conservative tradition that spawned this accountability. Joan Mullin quotes one objection to creating writing center accrediting teams that could be extended to assessment generally: It "seems ... like a sell-out to institutional practices from which we wish to remove ourselves" (8). The Reagan administration vowed to clean up the mess left by the "1960s radicals" (Brittenham) by clarifying and tightening federal control of education. In contrast, as discussed in several writing center histories (Boquet; Carino), writing centers perceive ourselves as birthed by the "1960s radicals." During our "glorious past" (Brittenham 534) shared with composition programs, writing centers assumed the role of providing access to students traditionally unable to attend college. The perception of writing centers as nurturing, personally empowering, and concerned with fostering individual development remains today (Carino; Grimm; Summerfield).

Cleaning up the "mess" left by the "1960s radicals" required the Reagan administration to undertake two related conservative initiatives. One was to define and objectify educational goals, to return education to the past "core of common studies" in the "culture and civilization of which [the Reagan administration assumed] [students were] members" (Sims 46). The first initiative allowed simplistic and skewed, but easily objectified, curricula such as that associated with "cultural literacy" to come into vogue. The second initiative was the demand for accountability from educational institutions. Armed with A Nation at Risk, the alarming report about the sorry state of elementary and secondary schools that recommended large-scale assessments for accountability, and To Strengthen Quality in Higher Education, extending the discussion to colleges and universities, Secretary of Education William Bennett pushed through some new criteria for regional accrediting agencies. These criteria required accrediting agencies to evaluate instructional effectiveness. Large core curricula were also required by most regional accrediting agencies. If they could not comply with the mandate for large-scale assessments, colleges and universities were threatened with the loss of federal funds (Nichols; Sims). From this brief history, it is easy to see how the positive notion of "quality enhancement" as the primary goal of externally mandated assessment stated in the Virginia Commonwealth Assessment Plan, WEAVE, today can be confused with the negative notion of "mandated accountability."

In the late 1970s and early 1980s, before student outcomes assessment was mandated, a scattering of writing center researchers issued calls to evaluate student learning as a measure of writing center effectiveness. (2) In a 1977-78 survey of 120 writing center directors (with 56 responding), Mary Lamb found that assessment measures consisted primarily of use counts, satisfaction surveys, and pre- and post-grammar tests. She suggested that the exclusive use of these measures "reveal[s] a limited self-definition, which may endanger the centers' continued existence" (70). In a 1979 article, Nancy McCracken also criticized writing centers for relying almost exclusively on use counts, course grades for students using the center, and anecdotal responses from faculty members. She described pre-term and post-term error analyses of writing samples to demonstrate her writing center's effectiveness and "justify the lab's existence" (1). In a 1982 article, Janice Neuleib agreed with McCracken's suggestion for pre- and post-tests of specific skills, such as proofreading, but also recommended the collection and scoring of two writing samples, one collected during a student's first visit to the center and the second collected during the student's last visit of the semester.

Although discussions of assessment continue to appear in writing center journals, only a few writing centers appear to have taken up the challenge to develop measures of student learning. Although we may doubt the validity of tests of skills in isolation from text production and we resist the notion of the writing center as a "grammar fix it" shop, we should take on the spirit, if not the practice, of these pioneers and go beyond our current reliance on use counts and satisfaction surveys for assessment. As James Bell points out, use counts and satisfaction surveys are "time honored methods," which are "necessary, but not sufficient [for assessment], for quantity does not necessarily equal quality" ("When" 9).

Beginning an Assessment Plan

The Virginia Commonwealth Assessment Plan is intended to improve (formative assessment, an internal purpose), to prove (summative assessment, an external purpose), and to inform (clarify what is occurring in a unit). Numerous sources describe six characteristics of assessment, primarily as it is used for program improvement:

* Pragmatic, intending to be formative and, hence, improve conditions for student learning as well as summative and, hence, justify a program or service (Allen; Bell "When"; Palomba and Banta; Program-Based; WEAVE).

* Systematic, orderly, and replicable (Allen; Bell "When"; Program-Based; WEAVE).

* Faculty-designed and led (Allen; Huba and Freed; Program-Based).

* Multiply measured and sourced (Allen; Bell "When"; Huba and Freed; Program-Based; WEAVE).

* Mission-driven (Huba and Freed; Program-Based).

* On-going and cumulative (Huba and Freed; Program-Based; WEAVE).

Most discussions of writing assessment are concerned primarily with the evaluation of programs, particularly composition programs. Since we do not award course credit, writing centers are not "programs" but "educational support units" (WEAVE). Our assessment plans are more like those of units responsible for supplemental instruction and other forms of academic support than those for English department programs (See Simpson; Upcraft and Schuh). As with other support units, our assessment focus is to determine how our activities contribute to the accomplishment of the mission of our university and, like other units on campus, writing centers should be assessed as much as possible according to "the outcomes experienced by those [we] serve" (WEAVE 2). At Auburn, the English Center, along with the other educational support units, is concerned with the University goal of increasing retention.

Table 1 shows a procedure writing center directors might follow in developing a student outcomes assessment plan (adapted from WEAVE and Program-Based). In making suggestions for an assessment plan for writing centers, I will be concerned with the first three steps and will summarize advice about conducting use counts and designing and administering satisfaction surveys before speculating about how to incorporate student outcomes into our assessment plan.

Developing an assessment plan begins with a writing center's mission statement and, hence, requires consideration of the writing center's identity, goals, and aspirations. Here is the first paragraph from the mission statement of Auburn University's English Center:
   The primary goal of the English Center (EC) is to offer tutorial
   services to students enrolled in English core courses at Auburn
   University. The consultants in the EC help students learn all
   aspects of the composing process, from exploring ideas to
   developing strategies for proofreading the final document, and
   assist students in developing critical reading skills. A secondary
   goal of the EC is to provide support for students from any course
   at Auburn University in which writing and reading are required.


Based on its mission statement, the English Center is responsible for offering tutoring services to students. Therefore, use counts and satisfaction surveys focusing on those tutoring services are appropriate and necessary for assessment. However, the English Center's mission statement also refers to a complex learning outcome--the development of expertise in the composing processes. This goal can be assessed by measuring enhanced efficiency and effectiveness in the strategies students use in composing and improved quality of their written products.

The goals, objectives, or intended educational outcomes derived from the mission statement control the assessment process (see Huba and Freed). As Table 1 shows, three types are possible: student outcome statements, use statements, and satisfaction statements. These goals, objectives, or intended educational outcomes imply the need for multiple measures and sources: evaluations of student learning, use counts, and satisfaction surveys, respectively. At present, according to discussions in The Writing Center Journal and The Writing Lab Newsletter, writing centers depend heavily on use counts and satisfaction surveys to demonstrate effectiveness. Because these important but incomplete assessment measures have been discussed in our journals and other assessment research, I will present only brief summaries here before moving to measures of student learning, which are more difficult to develop.

Use Counts:

Use counts allow us to calculate the number of students who used our writing center's services during a term, the courses students were from, the number of tutorials held with a single student, the number of tutorials provided for students from a particular teacher, the purpose of the tutorials, and any other aspect of writing center services that can be singled out numerically (Bird; Kalikoff). A count of repeated users seems particularly interesting as an indication not only of use but, by implication, satisfaction.

The meaning of terms such as "goals," "objectives," and "intended educational outcomes" can vary somewhat across discussions of assessment. My use of "intended educational outcome" reflects the requirements of Auburn's Office of Institutional Research and Assessment. For example, the English Center assessment plan has the following "intended educational outcome" regarding use counts: "The number of one-to-one consulting sessions conducted each academic year will remain steady." The "means of assessment and criteria for success" related to this outcome are as follows:
   Records of student use of the English Center's one-to-one
   consulting sessions will be kept by the Coordinator. At least 3000
   and no fewer than 2000 consulting sessions should be conducted each
   academic year.


Use counts are important for summative assessment, but they do not evaluate the quality of the services the students received.
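
To make the record keeping concrete, here is a minimal sketch of how a session log might be tallied into the use counts described above and checked against a "remain steady" band of 2,000 to 3,000 sessions. The file name, column names, and band are illustrative assumptions for this sketch, not the English Center's actual database or criteria.

    # A minimal sketch: tally a session log into basic use counts.
    # Hypothetical setup: a CSV named "sessions.csv" with one row per
    # tutorial and columns "student_id" and "course".
    import csv
    from collections import Counter

    def summarize_use(log_path="sessions.csv", low=2000, high=3000):
        sessions_per_student = Counter()
        sessions_per_course = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                sessions_per_student[row["student_id"]] += 1
                sessions_per_course[row["course"]] += 1

        total = sum(sessions_per_student.values())
        repeat_users = sum(1 for n in sessions_per_student.values() if n > 1)
        print(f"Total sessions: {total}")
        print(f"Unique students: {len(sessions_per_student)}")
        print(f"Repeat users: {repeat_users}")
        print(f"Sessions by course: {dict(sessions_per_course)}")
        # Illustrative "remain steady" band of 2,000-3,000 sessions per year.
        print("Within band" if low <= total <= high else "Outside band")

    summarize_use()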

Satisfaction Surveys:

Satisfaction surveys can determine the attitudes of users toward writing center services (Kalikoff; Kiedaisch and Dinitz; Leff; Palomba and Banta; Program-Based; WEAVE). Sent by mail, administered through telephone calls or emails, or given out in the center or in classes, these surveys consist of brief questionnaires aimed at a target group that has used writing center services. Target groups may include students, faculty members, alumni, and even writing center tutors. The items used on the satisfaction surveys should relate to the administrative decisions under the writing center director's control and should include a general assessment of the benefits of the writing center. These items may be developed through the assistance of focus group interviews with targeted users (Ball State; Gredler; Program-Based). Satisfaction surveys may be conducted immediately after a tutoring session or at later dates (Bell "When"), including after grades are received (Morrison and Nadeau). Satisfaction may be correlated with other variables, such as number of visits (Carino and Enders).

In the English Center, we administer satisfaction surveys to students and to the English Department faculty during late spring each year. We do not administer satisfaction surveys to students after each session because we found, as James H. Bell suggests ("When"), that the results were too positive to be useful or believable. We are currently moving from paper forms, administered once a year to students in randomly selected freshman composition and world literature classes, to a yearly email survey of all students who have used our services, with responses collected in a website database where the results are calculated. Because of convenience and the much smaller population, we will continue to measure faculty satisfaction with a yearly paper survey. Appendix A contains the paper forms of both surveys.

Our assessment plan has the following "intended educational outcome" about faculty and student satisfaction: "Users will be satisfied with the English Center services." The "means of assessment and criteria for success" are as follows (a brief tallying sketch appears after the list):

* The Coordinator of the English Center will conduct an annual survey of at least 20 sections of students enrolled in ENGL 1100, 1120, 2200, and 2210. At least 80% of the students who use the English Center services will agree that the services are effective. No fewer than 70% of student users will rate their satisfaction with the consulting services as average or above.

* The Coordinator of the English Center will conduct an annual survey of all English Department teaching faculty. At least 80% of the faculty responding to the survey will agree that English Center services are effective for their students. No fewer than 70% of faculty responding to the survey will rate the effectiveness of English Center services as average or above.
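
As a brief illustration of how these criteria might be tallied, the sketch below computes the two percentages from coded survey responses. The response format and the keys "effective" and "rating" are hypothetical; the actual survey items appear in Appendix A.

    # A brief sketch: check the two satisfaction criteria from coded responses.
    # Hypothetical coding: each response is a dict with keys "effective"
    # ("agree"/"disagree") and "rating" (1 = low to 5 = high).
    def satisfaction_summary(responses, agree_target=0.80, rating_target=0.70):
        n = len(responses)
        agree = sum(1 for r in responses if r["effective"] == "agree") / n
        average_or_above = sum(1 for r in responses if r["rating"] >= 3) / n
        return {
            "percent_agree_effective": round(agree * 100, 1),
            "percent_average_or_above": round(average_or_above * 100, 1),
            "criteria_met": agree >= agree_target and average_or_above >= rating_target,
        }

    # Made-up responses for illustration only.
    print(satisfaction_summary([
        {"effective": "agree", "rating": 4},
        {"effective": "agree", "rating": 3},
        {"effective": "disagree", "rating": 2},
        {"effective": "agree", "rating": 5},
    ]))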

Although they are a necessary part of any assessment plan for writing centers, satisfaction surveys, like use counts, do not provide much direct information about student learning.

Comparisons of Users and Students Who Did Not Use Writing Center Services:

Institutional data and the data we collect can allow us to correlate the characteristics of the students who have used writing center services with the same characteristics of those students who have not used the center's services or with the general population. These measures of student use have not been discussed very often in our journals. Only a few published studies (Lerner "Counting"; Magee; Newman) compared the grades in composition of students using tutoring services with those of students who did not use the services (See also Lerner "Writing Center Assessment"). Both studies used SAT scores to control for the effects of students' entering abilities. In a later study further examining the conclusions from one of the two comparison studies, however, Neal Lerner found that SAT scores did not correlate with the grades in composition received by the students he selected ("Choosing"). However, with a large enough sample size, such assessment can provide important information. In fact, at Auburn, the Provost requested these comparisons as indicators of writing center effectiveness.

In the English Center, we have collected data to compare the "profile" of freshmen writing center users to the general freshmen "profile." The measures used to develop the profiles were academic indicators identified in well-known studies of student retention (Astin, What; Tinto). From a total of 3,709 freshmen enrolled at Auburn for Fall Semester 2003, 791 (21%) used the English Center's services. (This use rate is higher than that of the other academic support services on campus.) As shown on Table 2, students who used the English Center services entered with lower converted overall ACT/SAT scores and lower scores on the ACT/SAT verbal sections than those who did not use the English Center services. However, at the end of the Fall Semester, students who used the English Center services had higher grades in composition and higher overall GPAs than those who did not use English Center services. Although these findings do not indicate that English Center use caused higher grades in composition, they do point to a correlation between English Center use and higher academic achievement overall and in composition specifically. At Auburn, we plan to compute these correlations every three years, relying on our Office of Institutional Research and Assessment for assistance.
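
A comparison of this kind reduces to computing group means for users and non-users on each academic indicator. The following sketch shows the general shape of such a computation; the column names and the tiny example roster are invented, and a real comparison would draw on institutional records for the full freshman class.

    # A sketch of the user vs. non-user comparison in the spirit of Table 2.
    # Hypothetical columns: "used_center" (bool), "act_sat_composite",
    # "act_sat_verbal", "comp_grade", "gpa".
    import pandas as pd

    def profile_comparison(df):
        measures = ["act_sat_composite", "act_sat_verbal", "comp_grade", "gpa"]
        # Mean of each academic indicator, split by writing center use.
        return df.groupby("used_center")[measures].mean().round(2)

    # Tiny invented roster for illustration only.
    roster = pd.DataFrame({
        "used_center":       [True, True, False, False],
        "act_sat_composite": [22, 24, 26, 25],
        "act_sat_verbal":    [21, 23, 27, 24],
        "comp_grade":        [3.0, 3.3, 2.7, 3.0],
        "gpa":               [2.9, 3.2, 2.8, 3.0],
    })
    print(profile_comparison(roster))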

In a study published in The Writing Center Journal, Beth Rapp Young and Barbara A. Fritzsche tested 206 students--61 of whom were writing center users--to determine tendencies toward procrastination. They asked the students to select a major writing assignment on which they could be tracked and collected additional data about the students, including their responses to a Writing Behaviors Assessment, on which students reported prewriting, writing, and revision behaviors for the assignment, as well as the students' grades on the selected assignments, GPAs, and course grades. Young and Fritzsche found that 38% of the 206 students procrastinated on the selected major assignment but that students who used the writing center or received feedback from other sources started their writing earlier and were more satisfied with their writing behaviors than those who did not receive feedback. Although these findings do not show that writing center use causes less procrastination, they point to a correlation between writing center use and starting writing assignments early. The researchers suggest that these findings indicate the effectiveness of their writing center. This project was funded by an IWCA Research Grant, showing the close connection between assessment and research.

Adding Student Outcomes to Our Assessment Plans:

As previously stated, since the 1970s, we have been challenged by some writing center colleagues to include measures of student learning in our assessment plans. Without measuring student learning, our effectiveness is invisible, buried in the assessments of the writing programs we serve. Further, assessments of student learning are the most closely related to research--incorporating previous research, providing topics and questions, and leading to further research--and they add to our common store of writing center knowledge. Student outcomes also define the boundaries of our practice and provide the substance for daily reflection. Although the potential benefits of assessing student learning seem clear, the development and implementation of an assessment plan for writing centers can be difficult. This section is highly speculative in its discussion of a well-known assessment framework and a methodology that might provide a conceptual means for developing measures of student learning.

As Lerner points out ("Writing Center Assessment"), Alexander Astin's talent development model provides a useful framework to measure student learning for writing centers because of the focus on cognitive growth rather than simply on the achievement of some minimum competency. In the talent development model, assessment focuses "on changes or improvements in students' development from entry to exit" (Jacobi, Astin, and Ayala iv) (see also Astin "Assessment"; Assessment). Because outcomes are highly dependent on the entering competences of students, it is impossible to determine whether single-shot outcomes reflect the impact of a program or service. Rather than a single test, entry and exit measures (pre- and post-tests), with some definable experiences between the two, are important. Astin refers to the talent development model as an I-E-O model for assessment. In this model, "I" refers to "inputs," the characteristics of students when they enter a class or begin a learning experience; "E" refers to "environment," the experiences provided by the educational treatment; and "O" refers to "outcomes," the characteristics of students after the educational treatment. Astin's model encourages pre- and post-test comparisons based on identifiable educational experiences (see also Simpson).
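
One simple way to operationalize the I-E-O idea numerically, offered here only as an illustrative sketch rather than as Astin's prescribed analysis, is to fit the exit measure (O) as a function of the entry measure (I) and an environment variable (E) such as the number of tutorials attended, so that the environment weight describes its association with the outcome after adjusting for entering ability. All of the numbers below are invented.

    # A numerical sketch of the I-E-O logic with made-up data.
    import numpy as np

    entry_score = np.array([55.0, 60.0, 62.0, 70.0, 72.0, 80.0])   # I: entry (pre-test) measure
    sessions    = np.array([ 5.0,  1.0,  4.0,  0.0,  3.0,  1.0])   # E: tutorials attended
    exit_score  = np.array([68.0, 64.0, 72.0, 74.0, 80.0, 84.0])   # O: exit (post-test) measure

    # Least-squares fit: exit = b0 + b1*entry + b2*sessions.
    X = np.column_stack([np.ones_like(entry_score), entry_score, sessions])
    b0, b1, b2 = np.linalg.lstsq(X, exit_score, rcond=None)[0]
    print(f"intercept={b0:.2f}, entry weight={b1:.2f}, sessions weight={b2:.2f}")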

Catherine A. Palomba and Trudy W. Banta, well-known assessment experts, argue for the importance of emphasizing progress rather than relying exclusively on single-shot outcomes. As they say, "assessing outcomes implies a finality; assessing progress suggests there is time and opportunity to improve" (Assessment Essentials, 5; see also Banta, "Summary"). I would add that assessing progress shows whether or not improvement has occurred. The talent development model can lead to direct assessment of student learning as measured by performance. Unlike multiple-choice tests measuring discrete factual knowledge, performance assessment is concerned with "finding out if students use their knowledge effectively to reason and solve problems" (Huba and Freed 13).

To the talent development model with its concern for assessing growth in performance, we can add research from the 1980s that incorporates pre- and post-tests of writing quality and that considers the development of expert composing processes. The rest of this section will discuss some applications of that research.

Pre- and Post-Tests of Writing Quality:

In an extensive review of research about the relationship between writing center use and writing improvement, Casey Jones describes several empirical studies that use quantitative measures to compare the quality of written products and other outcomes. Some of the studies that Jones reviews are experimental-control comparisons of the performances of writing center users with those of other students on campus. Other studies in Jones' review use pre- and post-tests to determine growth in writing ability. For example, Jones describes a 1985 study comparing pre- and post-essays of students who failed composition courses and were assigned to the writing center to improve their skills (David and Bubloz, described in Jones). In another study described in Jones' review, grades given by the same panel of instructors before and after students participated in writing center tutorials were compared (Bennett, described in Jones). Both studies showed that students who used writing center services produced better products. Luke Niiler's 2003 and 2005 articles published in The Writing Lab Newsletter also used a pre- and post-test method for assessment. In these fairly recent studies, arguing for "the statistical analysis of writing center outcomes" ("The Numbers" 6), Niiler first collected clean copies of essays that students wanted to revise for higher grades, and then after the students had used writing center services to revise their drafts, he again collected clean copies. According to the trained tutors (in the 2003 study) and trained faculty members (in the 2005 study) who rated the drafts, the writing improved significantly in each category rated.
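
The statistical core of a pre- and post-test design of this kind is a paired comparison of the ratings given to each student's earlier and later drafts. The sketch below, with invented scores, shows such a comparison; it is not a reproduction of Niiler's analysis.

    # A sketch of a paired pre/post comparison of rated drafts.
    # Invented rater scores (1-5 scale) for each student's pre-revision
    # draft and post-tutorial draft.
    from scipy import stats

    pre_scores  = [2.5, 3.0, 2.0, 3.5, 2.5, 3.0, 2.0, 2.5]
    post_scores = [3.0, 3.5, 3.0, 3.5, 3.0, 3.5, 2.5, 3.0]

    mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
    result = stats.ttest_rel(post_scores, pre_scores)   # paired comparison
    print(f"mean gain = {mean_gain:.2f} points, "
          f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")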

James H. Bell's analysis of students' revisions across drafts paired with the strategies that tutors of these students used in conferences offers another type of pre- and post-comparison ("Research"). In two related studies, Bell audiotaped conferences and collected drafts reviewed in these conferences and final versions submitted for grading. He classified the changes made as "Surface" or "Text-Based" according to a taxonomy developed by Lester Faigley and Stephen Witte. He also described the tutoring roles according to T. J. Reigstad's "typology of tutoring" (11). In the first study, Bell reviewed audiotaped conferences conducted by peer tutors and found that most of the revisions showing up in later drafts were made during the conferences. Hence, with so much editing done during the conferences, he could not assess the effectiveness of writing center conferences in teaching students to become better writers on their own. In the second study, Bell reviewed conferences conducted by a professional tutor. He found that, although peer tutors were more likely to edit students' drafts, the professional tutor was more likely to teach the students how to make the changes themselves. The professional tutor also made many more macrolevel suggestions, while peer tutors were more concerned with microlevel changes. Based on the revisions the students made to the drafts after the writing center conferences had ended, Bell concluded that the conferences with the professional tutor taught the students new writing strategies they were able to apply to improve drafts after they left the writing center. Hence, the results of the second study demonstrated the effectiveness of Bell's writing center. Interestingly, Bell's two studies are described in an article classified as a "research report," once again pointing to the overlap between research and assessment.

Development of Expert Composing Processes:

Along with considerations of writing quality, writing center student outcomes assessments may also evaluate changes in the skills and strategies that students use as they move from novice to expert writers (or not). These assessments incorporate qualitative as well as quantitative measures of development. As Lester Faigley and his associates point out in their 1985 book about writing assessment at the University of Texas, researchers have described composing strategies important for expert writing. In a more recent discussion of composing process research, Paul Prior identifies several ways of eliciting writers' accounts of their composing processes. Although not without problems, one method, retrospective accounts of composing, may allow us to glimpse the strategies students employ while writing particular texts.

Appendix B provides a table adapted from Faigley and his associates' review of expert and novice composing behaviors. It is commonly agreed that expert knowledge is more likely to be organized around general concepts and principles rather than random facts. Experts are likely to perceive patterns among pieces of information and recall relevant information more quickly than novices. Therefore, they are able to spend more time on nonroutine issues ("Cognitive Science"). If our writing center services are effective, we can expect our student users to develop more expert--focused and flexible--composing behaviors over time. The behaviors identified in Appendix B should change accordingly. (See Nancy Sommers and Laura Saltz for a discussion of novice-expert writing growth among 400 Harvard freshmen.) Faigley and his associates describe three instruments for prompting retrospective accounts of composing. These instruments use retrospective accounts to identify development of more expert composing strategies across a term or longer. They are the Process Log, the Self-Evaluation Questionnaire, and Pre-Term and Post-Term Interviews. To provide data about development, these instruments can be used throughout a particular time period with a randomly selected group of students.

The Process Log is a set of questions that students respond to at different times during the composing process. Each question relates to the knowledge of certain composing processes. For example, before a student begins drafting an assignment, we might ask questions that tap into the student's previous experience with the topic and the type of writing (How much do you know about the topic? Have you ever written a paper like this one before?) and his typical strategies for composing (How will you begin writing? Will you make an outline? Just start writing?). After the student has completed a first draft, we might ask him to reflect on the changes that occurred in his initial impressions (Have your ideas about the topic changed since you started the assignment?). After the paper is turned in for grading, we might ask the student to reflect back on the changes he made to the draft and to his usual writing process in completing this task.

An addendum to the Process Log, the Self-Evaluation Questionnaire is given after the students have completed a particular writing assignment. It asks them to reflect on the task just completed (What are the most successful things in your paper? What parts of the composing process were easier than in the past? What parts were more difficult?). Pre-Term and Post-Term interviews allow a comparison of attitudes and knowledge about composing at the beginning of the term or an academic year with those at the end. The same questions about knowledge of good writing (What is good writing?) and about the procedures typically used to compose effectively (What do good writers do when they write?) can be asked both times. The questions asked in the three process instruments are likely to increase students' awareness of their composing processes and, hence, encourage reflection and enhance writing development. These process assessment tools are learning strategies as well.

The use of qualitative as well as quantitative methods of data collection not only makes assessment appear a little more friendly to those of us without training in statistics but also broadens the range of what can be assessed as development (Simpson). For example, retrospective accounts focus on the cognitive growth of individuals rather than the statistical comparisons of pre- and post-tests. Although quantitative measures can provide "big picture" views of writing center effectiveness, qualitative measures can allow us to focus on cases. In addition, although quantitative measures are important for many kinds of assessment, both performance measures and the talent development framework allow for qualitative measures. Citing Michael Patton, M. Lee Upcraft and John H. Schuh describe the benefits of qualitative sampling methods as providing "focus in depth" (56). They describe 15 different sampling approaches to identify "rich cases" through "purposeful sampling" (Patton in Upcraft and Schuh). Three of these sampling approaches seem particularly relevant for writing center assessment (a brief selection sketch follows the list):

* Homogeneous sampling brings together a small group of similar students. These students may be interviewed in focus groups.

* Typical case sampling leads to the development of individual "profiles" for a few students who most frequently use writing center services.

* Critical case sampling leads to the development of "profiles" for the most underprepared or difficult students. As Upcraft and Schuh point out, this sampling method is based on the assumption that "if it happens there, it can happen anywhere" (57).
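
As a rough illustration of the last two approaches, the sketch below selects typical cases (the most frequent users) and critical cases (frequent users flagged as underprepared) from a set of student records. The record fields are hypothetical, not drawn from the English Center's database.

    # A sketch of typical-case and critical-case selection from student
    # records. Hypothetical fields: "id", "visits", "underprepared" (bool).
    def purposeful_samples(students, k=5):
        by_visits = sorted(students, key=lambda s: s["visits"], reverse=True)
        typical = by_visits[:k]                                      # most frequent users
        critical = [s for s in by_visits if s["underprepared"]][:k]  # underprepared frequent users
        return typical, critical

    # Invented records for illustration only.
    records = [
        {"id": "A", "visits": 12, "underprepared": True},
        {"id": "B", "visits": 9, "underprepared": False},
        {"id": "C", "visits": 7, "underprepared": True},
        {"id": "D", "visits": 2, "underprepared": False},
    ]
    typical, critical = purposeful_samples(records, k=2)
    print([s["id"] for s in typical], [s["id"] for s in critical])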

Selecting a few cases for assessment, writing center directors can put together portfolios of student responses for assessment of growth toward increased expertise in writing (See Black; Gredler; White "Portfolios"). As Kathleen Yancey and Liz Hamp-Lyons point out in their separate articles, portfolios represent the "third" generation in writing assessment. Although they are not without their limitations (and Hamp-Lyons speculates about the characteristics of "fourth generation" writing assessment), portfolios have become assessment staples for writing programs, and they may include more than students' drafts and reflections. For example, Cathie Scott and Carolyn Plumb describe portfolio assessment of a writing-intensive program in their engineering college. The portfolios they describe contain writing produced for engineering courses and for the workplace; summaries of interviews with students about their attitudes toward writing and the assistance they have received; background data from student records such as SAT scores, course grades, and GPAs; process logs; entry and exit essays describing the composing process and assessing their writing abilities; and syllabi from their courses. This information provides "thick description" of each student case.

In the English Center, several tutors and I have been experimenting to develop measures of student learning. Recently, we developed a brief survey based on research by Faigley and his associates to identify changes in the composing processes of students who used our services. (See Appendix C for the survey items.) We administered the survey to students enrolled in the first freshman composition course at the beginning of Fall Semester 2004 and to students enrolled in the second freshman composition course at the end of Spring Semester 2005. Problems with the survey and its administration precluded usable assessment data, but we learned some important things from this "pilot." (3)

According to their responses, most students who used the English Center, whether they were at the beginning of their freshman year or at the end, reported that they spent more time planning than proofreading. Of 47 usable responses to this item across both administrations of the survey, 33 students rated proofreading as the least time-consuming, and 21 rated planning as the most time-consuming. This finding was not what we predicted. In addition, when questioned about the audience and purpose for their essays, students typically indicated that they were writing for a "general audience" or the "teacher" and in many cases gave a one-word response for their purpose or a very general phrase, such as "to solve the problem of teenage drinking." No change in specificity occurred across the year, again not the finding we were expecting. We plan to revise the survey and conduct another pilot after we learn more about what goes on in our tutorials.

Our future plans combine assessment with a large-scale research project the tutors and I are planning. The research project along with the information we collect routinely from students should provide data sufficient to construct portfolios for some students who use our services frequently. As part of the larger research study, using data collection techniques similar to those in Thonus's studies, we plan to videotape as many consulting sessions as we can without disrupting use of the Center, scan or copy drafts and notes students bring to the session or develop there, and administer satisfaction surveys to both the tutor and the student at the end of each session. We plan to videotape our routine users several times during the year, and we can retrieve descriptive information about the sessions not videotaped from a recently developed, elaborate database of information about all the sessions conducted in the Center. Because many of our frequent users are underprepared, we will use both typical case sampling and critical case sampling as described by Upcraft and Schuh. At the end of the academic year, we intend to code the videotapes according to the level of sophistication in conversations about composing. We have begun to develop a coding scheme based on the questions used in Faigley and his associates' Process Log and Self-Evaluation Questionnaire along with traces of expertise identified in Appendix B. We also intend to evaluate changes in drafts according to the "Surface" and "Text-Based" distinctions that Bell used in his study.

If we find that students who use our services frequently or students who are the most underprepared for freshman composition increase their expertise in the composing process or improve the quality of their writing, we will be able to begin an assessment plan based on student outcomes.

Conclusion

In this essay, I have attempted to describe some benefits of externally mandated assessment beyond the requirement for accountability and to outline a tentative plan for writing center assessment. My purpose has been to discuss one conceptual framework, not to offer a template for assessment or exclude the possibility of other equally promising conceptual frameworks. This essay intends to provide an impetus for developing assessment plans to measure student learning in writing centers. By taking up the challenge to develop student outcomes assessment plans, we--and those we serve--can reap the benefits.

Even though he is concerned that testing for assessment is "a cynical manipulation of the public desire to see better writing in schools at little cost" ("The Changing," 111), Ed White praises the scholarship about assessment as "creative and varied" and adds that "it has become impossible to be an informed teacher of writing in the twenty-first century and remain uninformed about writing assessment" ("The Changing," 110). Further, as White, William D. Lutz, and Sandra Kamusikiri say, assessment "helps determine what programs are approved and offered, who receives opportunity, who gains power and privilege, and who is successful" (1). Assessment determines the predominant values in society by identifying those who should be rewarded (See also White "Power"). It is also an important influence on disciplinary change and formation.

Extending the possible influence of assessment on curriculum development and social change even further, John Trimbur believes that writing is the common culture conservative politicians have searched for in their focus on "cultural literacy." Pointing out that external assessors seem content that students simply "appreciate" literature while at the same time requiring demonstration of writing skills, Trimbur argues that writing is a powerful tool for social harmony:
   [L]iteracy--particularly the ability to write--is being called on
   to provide a common means of communication in a divided culture, to
   promote national economic recovery, and to explain the success and
   failure of individuals in a class society. (48)


For Trimbur, the power of writing to shape ideology is reflected in the need to measure it.

Externally mandated assessment is a professional responsibility for writing center directors. This requirement for accountability can also become an impetus for change, a vehicle for testing established practices and conducting meaningful research, and a means for gaining as well as using power. Assessment can bring opportunities as well as accountability for writing centers.

Appendix A: Student and Faculty Satisfaction Surveys

[ILLUSTRATION OMITTED]
Appendix B: Expert vs. Novice Composing Behaviors

General

* Expert: more knowledge of composing strategies and of the content of the composing task; less apprehension about writing

* Novice: not aware of the importance of an effective composing process; little knowledge of composing strategies; more apprehension about writing

Planning and Setting Goals

* Expert: spends time planning and setting goals; develops plans related to the rhetorical situation; revises plans during composing; can develop a variety of different plans, top down as well as bottom up

* Novice: often does not plan, just begins writing; generates goals based on the topic

Generating Content

* Expert: generates more content than needed and then prunes; uses both goal-directed and spontaneously associated memory search

* Novice: has difficulty generating enough content; relies primarily on spontaneously associated memory search

Organizing

* Expert: organizes according to the subject matter and audience

* Novice: organizes in order of retrieval

Drafting

* Expert: writes the first draft straight through

* Novice: stops frequently to check for sentence-level errors

Revising

* Expert: makes changes related to content and structure; does not proofread until content and structure are determined; makes fewer revisions because more time is spent planning

* Novice: makes more changes related to sentence-level concerns

Adapted from Faigley and his associates.


Appendix C: Student Survey Questions

[ILLUSTRATION OMITTED]

Acknowledgments

I sincerely appreciate the assistance of Mary Alm, writing center director at University of North Carolina at Asheville; Maury Maryanow, Troy State University-Montgomery; James Groccia, Director, Biggio Center for the Enhancement of Teaching and Learning at Auburn University; and Donald Cunningham, Joyce Rothschild, and Elizabeth Smith, all of whom are experienced with designing student outcomes assessment plans for technical and professional communication at Auburn University. I also appreciate the suggestions provided by the editors and the anonymous reviewers representing The Writing Center Journal.

WORKS CITED

Allen, Jo. "The Impact of Student Learning Outcomes on Technical and Professional Communication Programs." Technical Communication Quarterly 13 (2004): 93-108.

Astin, Alexander W. Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. New York: American Council on Education, 1991.

--. "Assessment, Value-Added, and Educational Excellence." Student Outcomes Assessment: What Institutions Stand to Gain. New Directions for Higher Education 59 (1987).

Ed. Diane F. Halpern. San Francisco: Jossey-Bass. 89-109.

--. What Matters in College: Four Critical Years Revisited. San Francisco: Jossey-Bass, 1993.

Ball State University Assessment. 22 November 1999. 3 January 2003. http://web.edu/IRA/AA/WB/foreword.htm.

Banta, Trudy W. "Summary and Conclusion: Are We Making a Difference?" Making a Difference: Outcomes of a Decade of Assessment in Higher Education. Ed. Trudy W. Banta and Associates. San Francisco: Jossey-Bass, 1993. 357-76.

Bell, James H. "Research Report: Better Writers: Writing Center Tutoring and the Revision of Rough Drafts." Journal of College Reading and Learning 33 (2002): 5-16.

--. "When Hard Questions Are Asked: Evaluating Writing Centers." Writing Center Journal 21.1 (2000): 7-28.

Bird, Penny C. "Program Assessment and Reporting: Counting, Analyzing, and Developing." The Writing Center Resource Manual. Ed. Bobbie Bayliss Silk. Emmitsburg, MD: NWCA Press, 1998. (Section III.6)

Black, Lendley C. "Portfolio Assessment." Making a Difference: Outcomes of a Decade of Assessment in Higher Education. Ed. Trudy W. Banta and Associates. San Francisco: Jossey-Bass, 1993. 139-50.

Boquet, Elizabeth H. "'Our Little Secret': A History of Writing Centers, Pre- to Post-Open Admissions." College Composition and Communication 50 (1999): 463-82.

Brannon, Lil, and Stephen M. North. "The Uses of the Margins." Writing Center Journal 20.2 (2000): 7-12.

Brittenham, Rebecca. "You Say You Want a Revolution? 'Happenings' and the Legacy of the 1960s for Composition Studies." Journal of Advanced Composition 21 (2001): 521-54.

Brookfield, Stephen D. Becoming a Critically Reflective Teacher. San Francisco: Jossey Bass, 1995.

Carino, Peter. "Open Admissions and the Construction of Writing Center History: A Tale of Three Models." Writing Center Journal 17.1 (1996): 30-48.

--, and Doug Enders. "Does Frequency of Visits to the Writing Center Increase Student Satisfaction? A Statistical Correlation Study--or Story." Writing Center Journal 22.1 (2001): 83-103.

"Cognitive Science, Expert-Novice Research, and Performance." Theory Into Practice 36 (1997): 240-47.

Faigley, Lester, Roger D. Cherry, David Jolliffe, and Anna Skinner. Assessing Writers' Knowledge and Processes of Composing. Norwood, NJ: Ablex, 1985.

Gredler, M. E. Program Evaluation. Englewood Cliffs, NJ: Merrill, 1996.

Grimm, Nancy. "The Regulatory Role of the Writing Center: Coming to Terms with a Loss of Innocence." Writing Center Journal 17.1 (1996): 5-29.

Hamp-Lyons, Liz. "The Scope of Writing Assessment." Assessing Writing 8 (2002): 5-16.

Harris, Muriel. "Preparing to Sit at the Head Table: Maintaining Writing Center Viability in the Twenty-First Century." Writing Center Journal 20.2 (2000): 13-22.

Hillocks, George. Teaching Writing as Reflective Practice. New York: Teachers College Press, 1995.

Huba, Mary E., and Jann E. Freed. Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Boston: Allyn and Bacon, 2000.

Huot, Brian. (Re)Articulating Writing Assessment for Teaching and Learning. Logan: Utah State UP, 2002.

Jacobi, Maryann, Alexander Astin, and Frank Ayala, Jr. College Student Outcomes Assessment: A Talent Development Perspective. ASHE-ERIC Higher Education Report No. 7. Washington, DC: Association for the Study of Higher Education, 1987.

Jones, Casey. "The Relationship between Writing Centers and Improvement in Writing Ability: An Assessment of the Literature." Education 122.1 (2001): 3-18.

Kail, Harvey. "Writing Center Work: An Ongoing Challenge." Writing Center Journal 20.2 (2000): 25-29.

Kalikoff, Beth. "From Coercion to Collaboration: A Mosaic Approach to Writing Center Assessment." Writing Lab Newsletter 26.1 (2001): 5-7.

Kiedaisch, Jean, and Sue Dinitz. "Learning More from the Students." Writing Center Journal 12.1 (1991): 90-100.

Lamb, Mary. "Evaluation Procedures for Writing Centers: Defining Ourselves through Accountability." Improving Writing Skills. New Directions for College Learning Assistance No. 3. Eds. Thom Hawkins and Phyllis Brooks. San Francisco: Jossey-Bass, 1981. 69-82.

Leff, Linda Ringer. "'Authentic Assessment in the Writing Center': Too Open to Interpretation." Writing Lab Newsletter 21.5 (1997): 12-14.

Lerner, Neal. "Choosing Beans Wisely." Writing Lab Newsletter 26.1 (2001): 1-5.

--. "Counting Beans and Making Beans Count." Writing Lab Newsletter 22.1 (1997): 1-3.

--. "Punishment and Possibility: Representing Writing Centers, 1939-1970." Composition Studies 31.2 (2003): 53-72.

--. "Writing Center Assessment: Searching for the 'Proof' of Our Effectiveness." The Center Will Hold: Critical Perspectives on Writing Center Scholarship. Ed. Michael A. Pemberton and Joyce Kinkead. Logan: Utah State UP, 2003. 58-73.

Magee, Craig. "A Writing Center's First Statistical Snapshot." Writing Lab Newsletter 24.10 (2000): 14-16.

McCracken, Nancy. "Evaluation/Accountability for the Writing Lab." Writing Lab Newsletter 3.6 (1979):1-2.

Morrison, Julie Bauer, and Jean-Paul Nadeau. "How Was Your Session at the Writing Center? Pre- and Post-Grade Student Evaluations." Writing Center Journal 23.2 (2003): 25-44.

Mullin, Joan. "NWCA News from Joan Mullin, President." Writing Lab Newsletter 21.7 (1997): 8, 16.

--, and Albert C. DeCiccio. "From the Editors." Writing Center Journal 20.2 (2000): 5-6.

National Commission on Excellence in Education. A Nation at Risk: The Imperative for Educational Reform. An Open Letter to the American People. A Report to the Nation and the Secretary of Education. GPO: Department of Education, April 1983.

National Commission on Higher Education Issues. To Strengthen Quality in Higher Education: Summary Recommendations of the National Commission on Higher Education Issues. Washington, DC: The Commission, 1982.

Neuleib, Janice. "Evaluating a Writing Lab." Tutoring Writing: A Sourcebook for Writing Labs. Ed. Muriel Harris. Glenview, IL: Scott, Foresman, and Company, 1982. 227-32.

Newmann, Stephen. "Demonstrating Effectiveness." Writing Lab Newsletter 23.8 (1999): 8-9.

Nichols, James O. Institutional Effectiveness and Outcomes Assessment Implementation on Campus: A Practitioner's Handbook. New York: Agathon Press, 1989.

Niiler, Luke. "The Numbers Speak: A Pre-Test of Writing Center Outcomes Using Statistical Analysis." Writing Lab Newsletter 27.7 (2003): 6-9.

--. "'The Numbers Speak' Again: A Continued Statistical Analysis of Writing Center Outcomes." Writing Lab Newsletter 29.5 (2005): 13-15.

Palomba, Catherine A., and Trudy W. Banta. Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass, 1999.

Prior, Paul. "Tracing Process: How Texts Come Into Being." What Writing Does and How It Does It: An Introduction to Analyzing Texts and Textual Practices. Eds. Charles Bazerman and Paul Prior. Mahwah, NJ: Lawrence Erlbaum, 2004. 167-199.

Program-Based Review and Assessment: Tools and Techniques for Program Improvement. University of Massachusetts Amherst. Fall 2001. 3 January 2003. http://www.umass.edu/oapa/assessment/program_based.pdf.

Rohan, Liz. "Hostesses of Literacy: Librarians, Writing Teachers, Writing Centers, and a Historical Quest for Ethos." Composition Studies 30.2 (2000): 61-77.

Schon, Donald A. Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions. San Francisco: Jossey-Bass, 1987.

Scott, Cathie, and Carolyn Plumb. "Using Portfolios to Evaluate Service Courses as Part of an Engineering Writing Program." Technical Communication Quarterly 8 (1999): 337-50.

Simpson, Michele L. "Program Evaluation Studies: Strategic Learning Delivery Model Suggestions." Journal of Developmental Education 26.2 (2002): 2-4, 6, 8, 10, 39.

Sims, Serbrenia J. Student Outcomes Assessment: A Historical Review and Guide to Program Development. Westport, CT: Greenwood Press, 1992.

Sommers, Nancy, and Laura Saltz. "The Novice as Expert: Writing the Freshman Year." College Composition and Communication 56 (2004): 124-49.

Summerfield, Judith. "Writing Centers: A Long View." The Writing Center Journal 8.2 (1988): 3-9. Rpt. in The Allyn and Bacon Guide to Writing Center Theory and Practice. Ed. Robert W. Barnett and Jacob S. Blumner. Boston: Allyn and Bacon, 2001. 22-28.

Thonus, Terese. "Triangulation in the Writing Center: Tutor, Tutee, and Instructor Perceptions of the Tutor's Role." Writing Center Journal 21.1 (2001): 59-82.

--. "Tutor and Student Assessments of Academic Writing Tutorials: What is Success?" Assessing Writing 8 (2002): 110-34.

Tinto, Vincent. Leaving College: Rethinking the Causes and Cures of Student Attrition. Chicago: U of Chicago P, 1993.

Trimbur, John. "Response: Why Do We Test Writing?" White, Lutz, and Kamusikiri 45-48.

Upcraft, M. Lee, and John H. Schuh. Assessment in Student Affairs: A Guide for Practitioners. San Francisco: Jossey-Bass, 1996.

WEAVE: A Quality Enhancement Guide for Academic Programs and Administrative and Educational Support Units. Virginia Commonwealth University. April 2002. 3 January 2003. http://www.vcu.edu/quality/pdfs/WEAVE Manual2002.pdf.

White, Edward M. "The Changing Face of Writing Assessment." Composition Studies 32.1 (Spring 2004): 110-16.

--. "The Opening of the Modern Era of Writing Assessment: A Narrative." College English. 63 (2001): 306-20.

--. "Power and Agenda Setting in Writing Assessment." White, Lutz, and Kamusikiri 9-24.

--. "Portfolios as an Assessment Concept." New Directions in Portfolio Assessment: Reflective Practice, Critical Theory, and Large-Scale Scoring. Eds. Laurel Black, Donald A. Daiker, Jeffrey Sommers, and Gail Stygall. Portsmouth, NH: Heinemann, 1994. 25-39.

--, William D. Lutz, and Sandra Kamusikiri, eds. Assessment of Writing: Politics, Policies, Practices. New York: MLA, 1996.

--, William D. Lutz, and Sandra Kamusikiri. "Introduction." White, Lutz, and Kamusikiri 1-8.

Worthen, Blaine R., James R. Sanders, and Jody L. Fitzpatrick. Program Evaluation: Alternative Approaches and Practical Guidelines. 2nd ed. New York: Longman, 1997.

Yancey, Kathleen Blake. "Looking Back as We Look Forward: Historicizing Writing Assessment." College Composition and Communication 50 (1999): 483-503.

Young, Beth Rapp, and Barbara A. Fritzsche. "Writing Center Users Procrastinate Less: The Relationship between Individual Differences in Procrastination, Peer Feedback, and Student Writing Success." Writing Center Journal 23.1 (2002): 45-58.

NOTES

(1) Although I have been experimenting for some time to develop measures of student learning, I have not been entirely successful, and the plan I suggest is highly speculative. It provides one possible conceptual framework for assessment, not the only possible conceptual framework. It is not a template for practice.

(2) See Lerner, "Writing Center Assessment," for a different discussion of these articles.

(3) First, the form is not as clear and usable as it should be. Students often did not fill out the questions on the back, and they responded so inconsistently to the items that they must not have read or understood them. Second, the survey was not administered correctly, probably because filling it out delayed students from beginning their tutorials. Even though the survey clearly should be administered before the conference, the work-study student who directed the sign-in procedure had been administering it either before or after the tutoring session--whichever was more convenient. She also had not enlisted enough students to provide a reasonable sample given our heavy student use: only 45 surveys were completed in the fall and even fewer, 21, in the spring.
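Note 3 turns on what counts as a "reasonable sample." As a rough, hypothetical illustration only--the semester visit total of 2,000, the 95% confidence level, and the 5% margin of error below are assumed figures, not data from the English Center--the number of completed surveys needed to estimate a satisfaction rating within a chosen margin of error can be worked out in Python with a standard sample-size formula:

# Illustrative only: the semester visit total (2,000) and the 5% margin of
# error are assumed values, not figures from the English Center.
import math

def needed_surveys(population, margin=0.05, z=1.96, p=0.5):
    """Completed surveys needed to estimate a proportion within `margin`
    at the confidence level implied by `z` (1.96 for 95%)."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)          # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite population correction

print(needed_surveys(2000))               # about 323 surveys at a 5% margin
print(needed_surveys(2000, margin=0.10))  # about 92 surveys at a 10% margin

Under these assumed figures, even a generous 10% margin of error calls for roughly 92 completed surveys, well above the 45 collected in the fall.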

Isabelle Thompson is an Alumni Professor of English and the Coordinator of the English Center at Auburn University. She has published articles primarily in technical communication journals.
Table 1: Developing an Assessment Plan

Step 1. Prepare a mission statement for the writing center based on the services the center provides and aspires to provide. Consider the mission statement for the university and the mission statement for the unit that supervises the writing center.
Questions to ask:
* What is the center's primary function?
* What educational theory or theories inform practice in the center?
* What does the university expect the center to do?
* What does the supervisory unit expect the center to do?

Step 2. Develop goals, objectives, or intended educational outcomes for the center. These may include student outcomes statements (learning gains the users are expected to make), use statements (the number of students the center should serve and other counts of productivity), and satisfaction statements (ratings of satisfaction with the center's services).
Questions to ask:
* In what ways should the students who use the center's services develop as writers?
* What services does the center provide?
* How many students should the center be held accountable for reaching each term?
* What are the characteristics of the target users?
* What is a realistic level of satisfaction that can be expected from users?
* What services are most important for users to be satisfied with?

Step 3. Determine appropriate assessment methods for the writing center: outcomes measures of student learning and development, counts relating to the use of the center's services, and satisfaction surveys of the center's services.
Questions to ask:
* How will the intended outcomes suggested by the objectives be measured?
* What data will be collected? From whom?
* What are the expected findings?

Step 4. Conduct the assessment of the writing center's services.
Questions to ask:
* Who is responsible for designing and conducting the assessment?
* When should the assessment be conducted?
* Who will receive the results?
* When are the results due?

Step 5. Analyze the results of the assessment and draw conclusions about the results in terms of outcomes and the current strengths and weaknesses of the writing center.
Questions to ask:
* How effective are the services for increasing student development as writers?
* What else was discovered from the assessment?
* How are these findings supported by the data collected?

Step 6. Use the results to bring about improvements in the center's services. Use the results to demonstrate the effectiveness of the center in increasing students' development as writers.
Questions to ask:
* What are the accomplishments--strengths--of the center?
* What changes in procedures or administrative structures are suggested by the results?
* Which operations need to be improved?

Adapted from:

Program-based Review and Assessment: Tools and Techniques for
Program Improvement. University of Massachusetts-Amherst, Fall 2001;
and WEAVE: A Quality Enhancement Guide for Academic Programs and
Administrative and Educational Support Units. Virginia Commonwealth
University, April 2002.
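Directors who want to keep the plan's elements in a form that can be updated and reported each term might record the six steps in Table 1 as a simple structured file. The Python sketch below is purely illustrative; the field names and sample entries are invented for the example rather than drawn from any particular center's plan:

# Hypothetical sketch of an assessment plan record following Table 1's six steps.
# Field names and sample entries are illustrative, not the article's instrument.
assessment_plan = {
    # Step 1: mission statement
    "mission": "Support student writers across the curriculum.",
    # Step 2: goals, objectives, or intended outcomes
    "outcomes": {
        "student_learning": ["Users revise for content and structure before editing"],
        "use": ["Serve a target number of student visits each term"],
        "satisfaction": ["Most users rate their sessions as helpful"],
    },
    # Step 3: assessment methods
    "measures": ["pre/post writing samples", "visit counts", "satisfaction surveys"],
    # Step 4: who conducts the assessment, and when
    "administration": {"responsible": "director", "schedule": "each semester"},
    # Step 5: results and conclusions, filled in after each cycle
    "results": {},
    # Step 6: improvements suggested by the results
    "actions": [],
}

# Before a report is due, list the sections that still lack content.
missing = [section for section, value in assessment_plan.items() if not value]
print("Sections still to be completed:", missing or "none")

A record like this makes it easy to see, well before a report is due, which parts of the plan still lack data.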

Table 2: Entrance Scores and Fall Semester GPAs and Composition
Grades for Freshmen Who Used the English Center Services and Those
Who Did Not

                                    Used English    Did Not Use     Significance
                                       Center      English Center       (p)

SAT/ACT overall scores (converted)      23.31           24.65           .000
SAT Verbal scores                      528.85          561.39           .000
ACT Verbal scores                       22.93           24.56           .000
Fall Semester GPAs                       2.92            2.66           .000
Fall Semester Composition Grades         3.00            2.90           .004
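Table 2 reports group means with significance levels. The article does not specify which statistical test produced these values; as a minimal sketch assuming raw student records and an independent-samples (Welch's) t-test--one common choice for comparing two group means--the same kind of comparison could be computed in Python as follows:

# Illustrative only: the GPA values and the choice of Welch's t-test are
# assumptions for the sketch, not the article's data or method.
from scipy.stats import ttest_ind

records = [
    (True, 3.1), (True, 2.8), (True, 3.4), (True, 3.0),     # used the center
    (False, 2.5), (False, 2.9), (False, 2.4), (False, 2.7), # did not use the center
]

users = [gpa for used, gpa in records if used]
non_users = [gpa for used, gpa in records if not used]

t_stat, p_value = ttest_ind(users, non_users, equal_var=False)  # Welch's t-test

print(f"Used English Center mean GPA:  {sum(users) / len(users):.2f}")
print(f"Did not use mean GPA:          {sum(non_users) / len(non_users):.2f}")
print(f"Significance (p-value):        {p_value:.3f}")

The GPA values here are invented; with real records, the printed p-value is what a reader would compare against a significance threshold such as .05.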