
Taming Big Data through Agile Approaches to Instructor Training and Assessment: Managing Ongoing Professional Development in Large First-Year Writing Programs.

As an increasing number of writing programs incorporate databases, course and learning management software, and/or digital submission of writing into their routines, and as instructors digitally respond to student writing, emerging practices previously more commonly associated with information technology--practices such as big data and agile (1)--may help administrators to ensure that both instructors and students have more productive experiences in their writing programs. Indeed, data-driven approaches to teaching and assessment and agile approaches to change were named as two of the six "key trends... that are likely to enter mainstream use" within the next five years in the NMC Horizon Report: 2014 Higher Education Edition. These two trends, along with online, hybrid, and collaborative learning (also discussed in the same report), promise change for many parts of higher education. Yet, as the US Department of Education's 2012 Enhancing Teaching and Learning through Educational Data Mining and Learning Analytics recommends:
Educators need to experience having student data that tell them
something useful and actionable about teaching and learning. This means
that instructors must have near-real-time access to easy-to-understand
visual representations of student learning data at a level of detail
that can inform their instructional decisions.... The kinds of data
provided to instructors need to be truly helpful in making
instructional decisions, and instructors will need to come to these
learning data with a different mind-set than that engendered by data
systems geared to serving purposes of accountability. (46)

The report also discusses areas throughout higher education where these tools can make a difference. While writing programs and writing instruction are not mentioned specifically in these reports, there are clearly places in our work where these technologies and processes can make significant impacts. The useful and actionable data referred to above enable us to explore and potentially answer such questions as the following:

* What do students do when faced with a particular rhetorical challenge in a writing course?

* How do new instructors compare to their more experienced counterparts when it comes to responding to student writing?

* How do students respond to commentary that seems too brief? Too lengthy? Too vague? And what are the key indicators of the students' responses?

At its most elemental, then, the term big data describes datasets too large and complex to handle by conventional methods. (2) While the sheer number of artifacts (student assignments, comments on those assignments, grades) associated with writing programs certainly doesn't approach the millions of data points accumulated in other enterprises, examining even several hundred or several thousand documents on a regular basis has been beyond the abilities of WPAs until recently. The term agile refers to a rapid and occasionally chaotic type of project management most commonly associated with software development. Although the principles behind agile management are expanding beyond software, the idea of rapid response is not something ordinarily associated with higher education or writing instruction. Moreover, big data and agile methods are heavily intertwined with technology and with quantitative data use--items that until recently were not a typical part of many WPAs' daily consciousness. Integrating both, however, into a program's organizational framework enables WPAs to conduct effective and accelerated instructor evaluation (or assessment) so that managed change can occur within the current semester, in addition to being phased in over subsequent semesters. For example, big data has been specifically referenced by Moxley in the context of understanding student learning. Likewise, the University of Georgia's <emma>, an electronic markup and documentation management application, has been used for nearly a decade now as both "essay processor and database" (Barrett et al. 37) and enabled a recent study of over 5,000 citations in student essays. Langbehn, McIntyre, and Moxley discuss the importance of real-time study and response (a.k.a., agile) in understanding what modifications might be needed in a writing program curriculum. In our first-year writing program, both big data and agile have played increasingly integral roles for the past several years.
In this article, I'll discuss how the program has leveraged big data and agile practices, along with more traditional methods, to improve our professional development program for our graduate instructors by gaining a better understanding of the behaviors of its various stakeholders.

With the help of both qualitative and quantitative data, I break down traditional timelines as I work with instructors to support classroom/pedagogical and assessment practices. My associate director and I do so in many ways--through frequent meetings among instructor groups, more formal workshops, and individual meetings with instructors. In addition, I use our in-house software, the Raider Writer application, as well as commercial text and data mining software to help identify topics of discussion, areas of strength and weakness in instructors' responses to students, and other items that help ensure that the program meets the needs of both the first-year students and our instructors. As Shirley K. Rose suggests, those of us in composition studies "must understand our scholarly role as organizers employing strategic rhetoric to engage our institutions and communities in effecting change and to reflect on our actions" (230); in this first-year writing program, engagement and reflection may start with a single instructor or single student's assignment and strategically scale up to comprehensive and longitudinal program views.


In their 2011 WPA: Writing Program Administration article, Taggart and Lowery note that "teacher preparation should not end when the [graduate teacher preparation] course is over; rather, TAs' professional development as teachers continues until they graduate" (106). Indeed, much of the discussion for the last two decades regarding TA preparation for teaching first-year writing has focused on the particulars of the initial preparation course--the common element in nearly every program charged with training graduate students in the teaching of first-year writing--or other reflective or mentoring exercises that become part of the first or second semester of teaching. Until recently, far less attention has been paid to what WPAs can do to conduct meaningful and ongoing training, mentoring, and evaluating of these instructors beyond the initial semester, especially those practices that can be implemented before a graduate instructor graduates or moves on to another course. Estrem and Reid's "Writing Pedagogy Education" notes that scholarship on writing pedagogy education (WPE) should incorporate a more "systematic research" approach to complement the "thoughtful lore" that encompasses much of the field and examine training "well beyond the TA seminar" (224). Specifically, they argue for more long-term analysis informed by data "of the ways in which preservice, inservice, and continuing service WPE interact and reinforce one another" (238-39). Obermark, Brewer, and Halasek's "Moving from the One and Done to a Culture of Collaboration: Revising Professional Development for TAs" calls for an ongoing professional development program, one that responds to changing needs of graduate instructors as they move through their various teaching assignments.

One such area that requires further study is how WPAs effectively conduct timely and ongoing evaluation of our graduate instructors. Diana Ashe notes that
While our classes are taught by an assemblage that changes radically
each semester, we cannot pretend to make many claims about the
consistency of the quality of our teachers. This is not to say that we
do not have wonderful and dedicated teachers; it would seem from all of
the available, anecdotal evidence that the contrary is true. The
problem here is clear: we can have only anecdotal evidence to rely upon
while we depend on a heavily contingent workforce. (156-67)

Ashe calls for diachronic forms of evaluation and assessment to supplement the more synchronic forms, such as student evaluations or classroom observations. Both synchronic tools have their limitations in programs where a significant number of instructors average only one or two semesters of teaching a particular course. Depending on the size of the program and the number of administrators available to observe, instructors may be observed only once or twice annually with no allowance for follow-ups. Student evaluations are of limited use for professional development for a couple of reasons. First, in many programs, evaluations aren't available for viewing and analysis until several weeks into the following semester. Furthermore, many questions on traditional evaluation forms focus more on what is known as "satisfaction-based assessment" (Allen 97) in which students are asked their opinions about such topics as the ability of the instructor to "stimulate student learning" or to move "beyond presenting the information in the text." Occasionally, a question might ask students about their engagement or participation in the form of the number of hours they spend weekly on a course, but few if any of the questions on the forms ever require students to engage in a discussion of what they actually learned. In short, while student evaluation information regarding satisfaction and participation can provide important data, information from the evaluations doesn't help us close the loop and revise or refine our training and assessment practices.

Clearly, if WPAs want to provide ongoing training to their instructors, we need more current information to do so. One of the most immediate information sources comes from both instructor and student response to current curricular requirements. For example, in discussing the use of their My Reviewers peer review application, Langbehn, McIntyre, and Moxley note that
By analyzing in real-time how students and teachers are responding to a
curriculum, WPAs and teachers can work with one another to fine-tune
assignments and crowdsource effective teacher feedback, thus improving
program assessment results. Most significantly, because of the
immediacy enabled by a digital system, they can accomplish this in real
time as WPAs trace teacher responses and grades, as indicators of
strengths and challenges, on a day-to-day basis. (n. pag.)

How WPAs choose to view our training and assessment of instructors is a critical part of many programs that predominantly rely on a workforce composed of graduate students and contingent faculty. (3) While many may still object to any labeling of our work as managerial (and often fail to do more than stereotype and discount that term), Strickland's call for an "operative approach" (120) to program direction opens other possibilities. If operative management practices can serve as catalysts for action and leave "nothing off the table" (121), why not create professional development activities commensurate with the dynamic nature and large scale of first-year writing (or any writing program and many general education programs)? Given the ongoing nature of assessment now required by accrediting agencies, many of our programs need a different approach to instructor training and assessment, one that can accommodate a variety of training and assessment strategies, people, and timeframes--in short, programs need to become agile.


Agile methods value many of the same things WPAs do: individuals, interactions, collaboration, and productive response to change. While created for software design projects, the philosophical elements of agile are being increasingly adopted for other situations that contain dynamic, time-sensitive, and unpredictable elements. (4) In academic settings, agile and other time-sensitive methods are gaining favor as tools for instructional and assessment programs and redesigns. Bradley discusses a model of assessment that, although less comprehensive, is carried out in a cycle of a few months to a year and creates opportunities to close the loop; he notes that "[w]ithin the next year, of our 26 departments, 15 had closed the loop for at least one assessment task and 6 additional departments had put structures in place for reviewing assessment data and responding to them" (11). Schwieger and Surendran describe their rationale for revising a single course using agile: "[agile's] test and evaluate principle, [sic] provides the underlying motivation behind its use.... The process of identifying small goals, collecting and processing data about the progress towards those goals, and then evaluating the progress and acting upon the evaluation results has been found to be beneficial in multiple capacities" (4). In "Agile Learning Design," Groves et al. describe how IBM came to adopt elements of agile for its learning design of employee training. (5) They conclude that "[w]ith agile, our emphasis is on enabling employees to learn immediately and leveraging their experiences to drive improvements into the continuously improving overall learning experience" (51). This immediacy is a critical component of professional development.
For example, if an instructor has misunderstood an element of an assignment, having the resources available for her to review a key concept or to examine other instructors' responses to that same assignment may make the difference in the quality of her evaluation and feedback to her students. In the case of most large writing programs, both the students and the instructors will benefit from immediate opportunities to learn. Thus all involved have much to gain from administrative and instructional processes that can respond quickly to change.


Our hybrid writing program has incorporated web-based, data-driven applications for well over a decade. In addition to distributing instruction between the classroom and the online environment, administrators distributed the responsibilities for teaching students between the brick-and-mortar classroom and online evaluation of student writing. Much of the early discussion regarding the program (Kemp; Wasley) dealt with the apparent benefits and liabilities of the distributed evaluation system. Originally, program administrators focused on creating and supporting interfaces for the distributed grading process. Thus, the initial phase of the model placed a premium on collecting data--whether that be student response to assignments or instructor commentary on those assignments. Far less attention was paid to how or if that data would be used to improve the user experience.

The user experience (instructor or student) demands more attention. In many large programs, instructors often seem isolated--they teach their own courses, grade their own students, and know very little beyond anecdotal evidence about what is happening in other sections of the courses they teach. Initially, our program of distributed assessment did little to remedy this. Some instructors were able to focus primarily on classroom teaching and face-to-face interactions with students, and others on responding to student assignments online, but because all assignments were pushed into a single course pool, little significant conversation occurred between classroom instructors and online graders about specific elements of assignments. Both types of instructors reported a continued sense of isolation, especially those whose teaching assignment was conducted primarily online.

Over the past seven years, though, my associate director and I have worked to remedy this. Set up in groups at the beginning of the semester, four to six instructors collaborate to instruct 140-230 students during a semester. The variance in number of students and size of the groups is determined by the instructors' teaching/grading load. Some work for us for as little as 0.125 FTE, most work for either 0.25 or 0.50 FTE, and some work the equivalent of a full-time 4/4 load. Most appointments are split in some way between classroom instruction and grading, unless the instructor is a first-year MA or a PhD student who hasn't met SACS coursework requirements to be a teacher of record or has specifically requested that her entire appointment be as a grader. When at all possible, instructors of varying experience levels are grouped together to create a built-in mentoring system and knowledge base about the program.

To complement this knowledge base, I've made a conscious and determined effort to provide instructors with data that facilitates communication among them and enables a degree of self-assessment in addition to the more traditional, administratively initiated discussions. In retrospect, this evolution speaks to the first five of Marschall's adapted tenets. The program often delivers new features in software, as well as in the overall professional development program, and upgrades those features frequently as administrators develop improvements. While I am sensitive to the potential downside of introducing a new feature during a semester, especially with new users involved, administrators work closely with those instructors and students to bring elements online that enhance their experience. Additionally, I enable instructors to play to their strengths or needs: Those who thrive in the classroom often take on more sections, but commensurately less grading, and those who need to live or work away from campus take on grading-only assignments. Given the flexible nature of the program's teaching assignments, instructors who wish to increase their competency in a particular area of their teaching can also do so. However, with this increased dispersion comes a greater need for community and communication. Nowhere is this more apparent than in discussion of student assignments and instructor responses to assignments.

For over 30 years, scholars have examined the nature and role of commentary on student assignments. Since Sommers and Brannon and Knoblauch initiated the contemporary discussion in 1982, studies have ranged from the large (Lunsford and Connors' "Teachers' Rhetorical Comments on Student Papers" and Smith's "The Genre of the End Comment") to the fine-grained (Batt's "The Rhetoric of the End Comment"). Regardless of the size of the study, all echo the thoughts of Rick Straub:
More than the general principles we voice or the theoretical approach
we take into the class, it is what we value in student writing, how we
communicate those values, and what we say individually on student texts
that carry the most weight in writing instruction. It is how we receive
and respond to the words students put on the page that speaks loudest
in our teaching. (246)

Despite the importance of the comment, both Batt and Smith remind readers that specifics of the written comment generally aren't taught in teacher-training programs, and Smith notes that instructors "rarely share their comments with each other" (249). She continues with some observations about the temporality of the comments, noting that "[e]nd comments are not preserved in one location for perusal by members of the community. Teachers rarely read their comments more than once or twice, since comments are widely dispersed shortly after they are written" (249). The transient nature of these critical comments has previously made their study difficult, as first Lunsford and Connors, and later Lunsford and Lunsford, found as they described their attempts to collect commented-upon papers. When considering the comment in a larger context--that in which new TAs learn to teach--the importance of repeated opportunities to discuss and reflect on these comments becomes even more critical. (6) Reid, Estrem, and Belcheir, in discussing their "newly intensified understanding of the pedagogy learning process as lengthy, initially partial, and recursive" (59), recommend that programs must provide "regular, formal, directed pedagogy education" beyond the first year to have any substantial impact (61). They also recommend reflective practice throughout a TA's time in the program.

The ability to store and analyze instructor commentary on student assignments, particularly in the context of those assignments, would hold significant promise for furthering the study of instructor commentary even if such analysis could only occur at the conclusion of a semester's teaching. However, the ability of text mining software to enable reflective analysis at various points (weekly or after particular assignments have been evaluated) and of various magnitudes (for a single instructor, for a single grading group, or for a particular demographic of students or instructors) during a semester can help create a more immediate impact for instructors and students alike. Still an emerging field, text mining draws upon fields as diverse as library and information science, computational linguistics, statistics, artificial intelligence, and machine learning in a quest to analyze and process semistructured and unstructured text data. Text mining brings together qualitative and quantitative methodologies to discover knowledge in a corpus of texts; it is a largely exploratory process with the goal of extracting meaning from texts. In the case of such software packages as QDA Miner, modules such as WordStat, the text mining component, provide quantitative complements to the main QDA Miner application, which enables content analysis of a corpus of texts. What is key here is that the technology enables textual analysis by WPAs and instructors on scales heretofore impossible in a timeframe capable of improving the experiences of both current students and instructors. (7)


Our Raider Writer software, like any LMS such as BlackBoard or D2L, records every grade assigned by an instructor to every assignment, along with the commentary provided for the student. The data is exported into Excel or to QDA Miner and can then be filtered by course, course section, grading workgroup section, assignment type, individual assignment, or individual instructor. These components are examined individually, in combination, and in the context of the student writing to which they respond, using both descriptive statistics and rhetorical and content analysis. Since both instructors and administrators have access to qualitative and quantitative data concerning their commentary and grades, (8) assessment is initiated by both parties throughout a semester. For example, each semester an instructor works in the program, her comments for that semester are stored and remain accessible to her. She can review these at any time to remind herself of effective strategies for responding to particular assignments or compare any changes in the way she responds to an assignment. For instructors in their first or second year in the program, this element can help resolve questions for themselves and other group members about any number of aspects of responding to student work by breaking down traditional temporal boundaries.
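The export-and-filter workflow described above can be sketched in a few lines of Python. The field names and values below are hypothetical stand-ins, not the actual schema of a Raider Writer export; the point is simply that once grades and comments live in structured records, any of the slices named above (course, section, assignment, instructor) is a one-line filter.

```python
# A minimal sketch of filtering an exported grade book. Field names and
# values are illustrative, not the actual Raider Writer export format.
records = [
    {"instructor_id": 15329, "section": "001", "assignment": "rhet_analysis_draft", "grade": 67},
    {"instructor_id": 15329, "section": "002", "assignment": "rhet_analysis_draft", "grade": 70},
    {"instructor_id": 18436, "section": "001", "assignment": "rhet_analysis_draft", "grade": 48},
    {"instructor_id": 18445, "section": "003", "assignment": "narrative_draft",     "grade": 72},
]

def filter_records(records, **criteria):
    """Return the records matching every field/value pair in criteria."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# One instructor's grades on one assignment -- the kind of mid-semester
# slice an administrator or the instructor herself might review.
subset = filter_records(records, instructor_id=15329,
                        assignment="rhet_analysis_draft")
print([r["grade"] for r in subset])
```

The same `filter_records` call works for any of the other views mentioned above (a grading workgroup, an assignment type, a single section) simply by changing the keyword arguments.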

Access to this data enables instructors to become, in some respects, the self-organizing teams that are a principal component of agile methods. All instructors in a grading group have access to the assignments, comments, and grades of the students in their courses. (9) All instructors are encouraged to review their own and other group members' average grades for each assignment, as well as recent comments made by their group members on assignments. Classroom instructors review comments and grades for their sections, either in the context of class preparation each week or at individual students' requests. These reviews often prompt an instructor to contact an administrator to discuss a range of questions, which may include requests to evaluate the effectiveness of their own comments, to explain the difference in strategies between their work and that of their colleagues, or to discuss differences in numerical grades assigned on work that is apparently similar in quality. In addition, though, through our web interface, administrators can evaluate comments and grades on assignments after an instructor has graded only a few of each assignment and offer suggestions that can be applied while the instructor grades the rest of her assigned drafts or before any brief follow-up is scheduled (Marschall's 6th tenet). One of the principal advantages of these intervention strategies is that the instructors have time to read, reflect, apply, and ask further questions at their convenience and/or while they are actually grading assignments. This process helps promote a sustainable environment, whereby instructors can continue their responding and assessing of student work without undue interruption (Marschall's 8th tenet). In exceptional cases, I can even return a commented assignment to an instructor for revision or expansion of their commentary so that improvement may occur for that instructor and that student within the semester.


While evaluating individual comments allows a snapshot view of an instructor's work, I often want a larger field view of, say, 30 instructors' responses to a particular assignment or comments by one or more instructors on preliminary and final drafts of particular assignments. Elsewhere (Lang and Baehr 2012), I've discussed how text and data mining software enable program administrators to examine hundreds or thousands of documents from a variety of perspectives. I can view our students' and instructors' writing in ways that help communicate trends, strengths, and weaknesses in their work to both populations. I can look for patterns in grade distribution, on-time or late submissions of assignments, and number of assignments submitted--in a single semester or over several years--to prepare our instructors to assist students more effectively.

The following example uses 1,088 preliminary drafts of rhetorical analyses that were evaluated by instructors in either their first or second year in the program. This assignment was selected because it was the first time during the semester that instructors were asked to comment on a longer draft, and many new instructors had requested rapid feedback from us on their comments. Thus, I wanted to explore behaviors related to both commenting and assigning a numeric grade: what, specifically, the first- and second-year instructors were commenting on and whether there appeared to be a reasonable correlation between the comments and the numeric grades assigned. Figure 1 shows the grade distribution for each of the 24 instructors under consideration.

Several items become evident after examining this grid:

* The grades of 67 and 70 were the most frequently assigned, the former 53 times and the latter 48.

* Four graders (15329, 18436, 18445 and 18450) assigned more grades in the range 0-50 than the others.

* Mid-B to A grades were assigned approximately 12% of the time; second-year instructors (identifiable by year through their IDs) tended to assign those more frequently.

Although all three of these observations could be explored in more detail, one stood out. The number of grades in the high-D range likely indicated that students were not writing a competent rhetorical analysis. If so, the comments written by instructors on those drafts would be a critical component of whether or not those students could successfully revise those drafts and complete the course with the target grade of C or better. Another possibility is that instructors were struggling with their own comments on the drafts and assigning grades accordingly. The 53 essays that were assigned a 67 were selected for examination, and the instructor comments were filtered for closer study. Figure 2, below, shows the initial set of results--instructors inserted 506 comments in the 53 essays, ranging from single words to detailed holistic comments. Clicking on any of these comments and then clicking on the window behind it enables me to see the comment in context.
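The grade-frequency tally behind observations like these is straightforward to reproduce. The grade list below is illustrative only, constructed to echo the two frequencies reported from figure 1 rather than drawn from the program's actual data.

```python
from collections import Counter

# Illustrative grades only; the real distribution came from 1,088 drafts
# exported from Raider Writer. The 53 and 48 counts mirror the two most
# frequently assigned grades noted in the discussion of figure 1.
grades = [67] * 53 + [70] * 48 + [88, 91, 45, 38, 72, 72]

counts = Counter(grades)
most_common = counts.most_common(2)
print(most_common)  # the two most frequently assigned grades, with counts
```

Running the same tally per instructor ID (rather than over the pooled list) is what surfaces outliers such as the four graders who assigned more grades in the 0-50 range.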

In order to examine the instructors' commenting behavior more carefully, I ran the WordStat module of the software on these comments. I first examine word frequencies in the comments, represented in figure 3.

From this screen, I can see that instructors most frequently used the terms audience, rhetorical, and purpose in their comments--over 20% of the comments written to students included one or more of those terms. Other words on this list would make it appear that instructors are most often discussing structural issues with students. However, to understand more of what is happening, I need to examine phrases and words in context. I am first able to pull the most frequently used phrases (see fig. 4). While these phrases give me more evidence that instructors are discussing higher-level concerns with students, it's difficult to tell from them how much instruction is going on to help students with a text that they will be revising. One further step is required--to examine these words in context.
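The "share of comments containing one or more key terms" figure that WordStat reports can be approximated with a few lines of code. The toy corpus below is invented for illustration; the actual analysis ran over the 506 comments described above.

```python
import re

# Toy comment corpus, invented for illustration; the real analysis used
# the 506 comments inserted on the 53 drafts graded at 67.
comments = [
    "Think about your audience and purpose here.",
    "This is a rhetorical analysis, not a summary.",
    "Work on paragraph transitions.",
    "Your purpose statement needs sharpening.",
]

KEY_TERMS = {"audience", "rhetorical", "purpose"}

def mentions_key_term(comment):
    """True if the comment contains at least one of the key terms."""
    words = set(re.findall(r"[a-z]+", comment.lower()))
    return bool(words & KEY_TERMS)

share = sum(mentions_key_term(c) for c in comments) / len(comments)
print(f"{share:.0%} of comments mention audience, rhetorical, or purpose")
```

Tokenizing on whole words (rather than substring matching) matters here: it keeps a term like "purposeful" from being counted as "purpose" and inflating the share.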

After selecting a particular term (rhetorical, in this case) and determining the amount of surrounding text to view, I can scroll through the various comments (see fig. 5) and see the highlighted word in context at the bottom of the page. I can quickly sample these and determine how instructors are using the term, as well as what type of instruction they are providing for the student.
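The keyword-in-context view described here is a classic concordance, and a minimal version is easy to sketch. The comments below are invented examples; the window width is an arbitrary choice, roughly mirroring the "amount of surrounding text to view" setting in the software.

```python
import re

def kwic(texts, term, width=30):
    """Keyword-in-context: return (left, match, right) windows for each
    occurrence of term, width characters of context on each side."""
    hits = []
    for text in texts:
        for m in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            left = text[max(0, m.start() - width):m.start()]
            right = text[m.end():m.end() + width]
            hits.append((left, m.group(), right))
    return hits

# Invented comments standing in for the filtered instructor commentary.
comments = [
    "Expand your discussion of the rhetorical choices the author makes.",
    "A rhetorical analysis examines how, not just what, a text argues.",
]

for left, match, right in kwic(comments, "rhetorical"):
    print(f"...{left}[{match}]{right}...")
```

Scrolling such a concordance is what lets an administrator quickly sample how a term is actually being used, rather than inferring usage from raw frequency counts alone.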

I notice several trends that call for intervention. In the three comments quoted here, I see that all of them focus on either evaluation of a portion (or all) of the text and/or offer some suggestions for revision of the draft. The first two, in particular, exemplify Sommers' finding that "comments are not anchored in the particulars of the students' texts, but rather are a series of vague directives that are not text-specific" (153). This type of comment is one I see repeatedly in the work of new instructors. For example, one instructor writes,
I think you have a strong thesis here, but I'm not sure the second half
of the sentence contributes much to your argument. You might want to
think about incorporating that part of your thesis earlier in your
introduction. Also, while your introduction does what it needs to do, I
would suggest adding a more thorough discussion of audience and purpose
to help situate your reader in the rhetorical context of your essay.

In addition to its generality, as a comment on a preliminary draft to a first-year student, it assumes a vocabulary more suited to an advanced writer and offers confusing if not contradictory advice. There's nothing here that a student can directly connect to her own text; given the more abstract vocabulary used, this instructor herself appears to have little sense of audience. If, as Straub maintains, "during the time the student reads a set of comments, the image of the teacher that comes off the page becomes the teacher for that student and has an immediate impact on how those comments come to mean" (235), the student would lack a compelling image or inspiration to revise the text in question.

Another comment in this set demonstrates similar issues:
Overall, I think you are headed in the right direction with this essay,
as I can tell you understand the basic premise of a rhetorical
analysis. However, there are a couple things you can work on as you
begin to revise this paper. First, you need to add more analysis for
this to be an effective rhetorical analysis, as you are 300 words short
of the word limit. There are a few places, which I mention above, where
you can expand upon your discussion of the rhetorical choices that
Malcolm X uses. This will make for a more effective analysis and will
also allow you to meet the word limit. Second, work on making your
introduction and conclusion grab the reader's attention. There is also
some room for you to expand upon your discussion of audience and
purpose in your introduction and carry that throughout your draft.
Finally, you may need to be more specific about your rhetorical choices
or incorporate another one into your analysis to be able to be specific.

Like the first comment, this one provides the student with mixed messages and vague advice for revision. Particularly problematic are the placement of and emphasis on meeting a word count. It's one of the few concrete pieces of advice given--but if the student is having trouble finding enough to write about, telling her to "expand upon your discussion" isn't going to point her in a useful direction. (10) The last sentence simply doesn't make sense. The student receiving this comment still has no clear direction or rationale (other than meeting the word count) for revising her draft.

A third comment has more potential for success. The instructor highlights a specific problem (too much summary at the expense of analysis) and talks with the student about why the paper evolved in that direction. This instructor also gives the student some specific instruction (reread, identify, highlight) to get her started on her revisions.
Your writing is relatively clear throughout the essay. The biggest
problem I am seeing here is an abundance of summary rather than
analysis throughout your body paragraphs. Part of the problem is your
stated purpose, which sets up a summary of the text rather than an
analysis of the rhetorical choices (see below). Once you are working
with a purpose that focuses on "A Homemade Education" rather than the
historical figure of Malcolm X, you'll want to rework your rhetorical
choices and evidence so that you are analyzing specific choices made in
the text rather than the author's biography. In other words, you want
to focus on "how" and "why" X writes the essay, rather than simply
relating "what" is being said. You mention allusion, word choice, and
personal experience/anecdote in the essay. I would suggest that you
reread the text and highlight as many examples of literary allusion and
a specific type (formal/informal/violent/ humorous/etc.) of word choice
throughout the text. You can then have a better idea of how these
choices are being used and connect them to your revised purpose.

From reading these and the other 100+ comments that incorporate the term rhetorical, as well as examining the drafts to which these comments responded in order to determine their contexts, I was able to rough out a plan for our next workshop, one in which instructors would consider the issues revealed in these comments. The workshop following this mining session incorporated a generous sample of these instructors' comments: I mined comments from assignments that had scored between a 55 and an 88 and also pulled the corresponding drafts from the database. Instructors began the workshop with a discussion of their own revision practices and of the types of commentary from instructors or peers they found helpful. We then turned to a few of the comments, presented without context, and I asked the instructors to infer what the issues were in each draft the comments addressed. We then turned to the drafts themselves, discussed them alongside the comments in context, and had instructors crowdsource revised comments for those drafts. Post-workshop, I asked instructors to email me once they had responded to three to five submissions of the next revision assignment; after they did so, I accessed their commentary and gave them brief feedback.
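The selection step in this mining session can be sketched in a few lines of code. The sketch below is illustrative only: the comment records, field names, and sample texts are hypothetical, not the program's actual database schema, but the logic--keep only comments that mention a keyword and that respond to drafts in a given score range--is the one described above.

```python
# Hypothetical sketch of keyword-plus-score-range comment mining.
# Each record is a dict with (illustrative) "score" and "text" fields.

def mine_comments(comments, keyword, low, high):
    """Return comments containing `keyword` on drafts scored in [low, high]."""
    return [
        c for c in comments
        if low <= c["score"] <= high and keyword.lower() in c["text"].lower()
    ]

# Invented sample data for demonstration.
sample = [
    {"score": 72, "text": "Expand your rhetorical analysis of audience."},
    {"score": 91, "text": "Strong rhetorical awareness throughout."},
    {"score": 60, "text": "Add a clearer thesis statement."},
]

# Comments mentioning "rhetorical" on drafts scored between 55 and 88.
hits = mine_comments(sample, "rhetorical", 55, 88)
print(len(hits))
```

A WPA could then pull the drafts matching those comment records to supply the workshop's in-context discussion.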

Readers can see a final tenet of agile applied here--reflect at regular intervals and adjust accordingly in a meaningful timeframe that benefits current instructors and students. While it's clear from the three sample comments that the drafts required substantial revision, the comments themselves fall short as instruction. The instructors have been trained to identify key concepts named in all iterations of the WPA Outcomes Statement, but their comments aren't helping students bridge the gap from abstraction to application of those concepts. This gap also helps explain why it's not evident to students why their drafts received the scores they did. It's critical that our new instructors spend time analyzing and revising comments on these preliminary drafts while the process of grading them is still fresh in their minds. They will be working with students for the next several weeks on revision assignments leading to the final draft submission, so the work I do with them individually and as a group will be applied immediately, to both their and their students' benefit.


Another way in which I can study instructor behaviors and gain more insight into how both the classroom instructors and the graders, known as Document Instructors (DIs), interact with students about their writing is by examining the number of times our classroom instructors change the grades assigned to their students by the graders in their workgroups. As instructors of record, classroom instructors have the final authority to change a grade--and the frequency of, or reasons for, their doing so can point to issues I need to address in instructor training or to curricular concerns. As the term progresses, trends in how many grades are amended by instructors can signal several potential points for discussion. First, if a classroom instructor is changing a significant number of her students' grades across multiple assignments and cites reasons such as "Student Grade Appeal" or "Raising a DI Grade," I need to find out why. One of the first things to look at is whether a particular grader is having her grades changed by one classroom instructor--which could signal a conflict in the interpretation of an assignment--or whether multiple graders are involved--which could suggest that the classroom instructor has a different understanding of an assignment than her graders. I would also examine the range of assignments listed--are grades being changed on a particular assignment, or are they being changed for all assignments submitted? The answers to these questions would determine whether I needed to work with the classroom instructor, with one grader, or with the entire grading workgroup to rectify the situation. If I see a widespread trend across groups, I would also look at how an assignment is written and presented to students and instructors. Regardless of cause, conversations can occur within days or weeks, resolving the problem so that both our students and instructors gain the knowledge about writing that they need.
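The triage described in this paragraph amounts to tallying grade changes along two axes: by (classroom instructor, grader) pair and by assignment. The sketch below is a hypothetical illustration of that tallying, with invented record fields and names; it is not drawn from the program's actual software.

```python
# Hypothetical sketch of grade-change triage: tally amended grades by
# (instructor, grader) pair and by assignment. Field names are invented.
from collections import Counter

def summarize_changes(changes):
    """`changes`: list of dicts with "instructor", "grader", "assignment"."""
    return {
        "by_pair": Counter((c["instructor"], c["grader"]) for c in changes),
        "by_assignment": Counter(c["assignment"] for c in changes),
    }

# Invented sample of grade-change records.
changes = [
    {"instructor": "A", "grader": "G1", "assignment": "rhetorical analysis"},
    {"instructor": "A", "grader": "G1", "assignment": "rhetorical analysis"},
    {"instructor": "A", "grader": "G2", "assignment": "literature review"},
]

summary = summarize_changes(changes)
# A concentration on one (instructor, grader) pair suggests a localized
# disagreement; changes spread across graders suggest the instructor reads
# the assignment differently than the whole workgroup does.
print(summary["by_pair"].most_common(1))
```

In practice, a WPA would read these tallies as prompts for conversation, not verdicts: the counts locate where to ask questions, and the follow-up conversations supply the why.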
Via agile strategies, programs have the potential to leverage their technology and data to connect more quickly with all stakeholders.


The above describes several iterations of a single exploratory text mining initiative. Given the recursive and complex nature of the factors influencing student and instructor activity on a single assignment, this drilling down could move in any number of directions: examining other grades or grade ranges, or examining other components of student performance--attendance, class participation, or commentary on prior scaffolded assignments. The work of instructors could also be examined through different lenses--perhaps by searching for particular terms related to the rhetorical knowledge, critical thinking, processes, and conventions outcomes included in all iterations of the WPA Outcomes Statement to determine which of these concepts are in fact being effectively communicated to students and which require a different type of commentary than new instructors currently provide.
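The term-searching lens suggested above can be sketched as a simple frequency count across a corpus of comments. The term list and sample comments below are illustrative assumptions, not the actual WPA Outcomes vocabulary list or program data.

```python
# Hypothetical sketch: count how often outcomes-related terms appear across
# a corpus of instructor comments. Terms and comments are illustrative.
OUTCOME_TERMS = ["rhetorical", "audience", "genre", "process", "conventions"]

def term_counts(comments, terms=OUTCOME_TERMS):
    """Map each term to the number of comments mentioning it."""
    lowered = [c.lower() for c in comments]
    return {t: sum(t in c for c in lowered) for t in terms}

# Invented sample comments.
comments = [
    "Consider your audience and purpose in the introduction.",
    "Your rhetorical choices need more analysis.",
    "Follow MLA conventions for citations.",
]
print(term_counts(comments))
```

Terms that rarely appear in comments, or that appear only as unanchored abstractions, mark the concepts whose classroom communication deserves a closer look.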

These tools help us develop a "culture of using data for making instructional decisions" (Bienkowski, Feng, and Means 46) throughout our program as administrators work closely with instructors to understand and assess instruction as well as address issues that surface each year as we continue to train each group of new instructors. In this more flexible environment, I analyze information that is scalable from the individual instructor to the program as a whole, with similar information from past semesters available for comparison. Having this data at my fingertips enables me, in consultation with my associate director and instructors, to make mid-course corrections literally mid-course. In doing so, we are mindful to use this data responsibly. For example, in the above discussion concerning the changing of grades, following up with the involved instructors is paramount, especially with the instructor making the frequent changes. Understanding the motivation for such decisions is as critical for future training as the decisions themselves.

The data also provides valuable insights into the process of writing instruction at the programmatic level, information that can then be incorporated into instructor training. I can learn much about such behaviors as how instructors respond to students in their comments, what they tend to prioritize in discussions, and how they interpret a rubric. By examining instructor comments, I can also learn which assignments seem most difficult for our instructors to understand and teach. Big data contributes to this knowledge in the following way: while I might anecdotally hear a few instructors individually discuss their particular issues or understanding of an assignment, or read a few sentences about their experiences in a piece of reflective writing, that information is self-reported. In looking at data--the actual commentary on hundreds of students' assignments--I can see what instructors are actually telling students about their drafts, whether they are able to provide a balance of formative and summative comments to students as appropriate, and whether any beneficial or problematic trends are endemic across the entire cohort or isolated in a few members.

For example, for several years, I found that a significant number of our instructors had trouble with both our literature review and rhetorical analysis assignments; consistently, those instructors' comments on preliminary drafts of these assignments fell short in some way: not showing a full understanding of how to discuss a literature review, providing the students incorrect advice, or pointing out problems in the text without providing instruction to help students improve their drafts. In response, my associate director and I have revised our custom text, swapping out chapters deemed less effective by both instructors and students and adding custom-written material that speaks to the needs of our user groups. We've also provided far more literature review samples for instructors to use in the classroom and spent more time discussing this assignment in workshops. More recently, we've added additional requirements to our training; prior to teaching or grading a course for the first time, new instructors must write one of the major assignments for that course, along with a reflection on the process of doing so. Those assignments become part of the workshop series for the next semester. This process of doing, reflecting, and retooling is a key component of agile--especially because it takes place in a timeframe that can actually help the instructor during a semester--not six months or a year later, when they've moved on to other teaching assignments or have graduated.


Because it is our responsibility as administrators to create an optimal experience for our students and instructors during their time in the first-year writing program, agile research and assessment is a critical component of our program. With over 2,600 students each semester and 30 to 40 new instructors each year, I now have a way to quickly identify problems and work toward solutions that benefit both those instructors and students as well as those who follow. I can respond to Langbehn, McIntyre, and Moxley's call for real-time response and have the ability to follow and reflect on the effects of our actions as Rose recommends. While data-driven learning and assessment is predicted to impact higher education even more noticeably in the next three to five years, it will only do so if WPAs can incorporate use of the data in ways that benefit the primary stakeholders, students and instructors, as they grapple with the difficult tasks of improving their writing skills--not simply examine the data post-mortem to see what could have gone better. This type of work breaks down the boundary that some see between administrative duties and research and blends those components of a WPA's position in inextricable ways. While Bloom and others see Strickland's book as dangerous, the current emphasis on practices that improve student retention and success makes WPAs' blend of administrative and research-based tasks all the more critical. Far from being mere administrators, WPAs who use big data and agile to conduct meaningful research and to engage in thoughtful, yet rapid, decision making can rightfully claim both the research and managerial aspects of their work.


(1.) Agile in this context is a term initially applied to software development, and later to project management more generally, that encourages short, iterative cycles of planning, development, and critique. It enables users to identify problems and pose solutions more quickly than traditional, linear development paradigms.

(2.) Most datasets considered big data are those defined by the 3-Vs--volume, velocity, and variety--all of which are constantly growing and correspondingly more difficult to analyze and interpret as they do so. For a writing program, we might think of the 3-Vs as follows:

Volume = the number of assignments submitted

Velocity = the pace at which these assignments are submitted by students and commented upon by instructors

Variety = the type of assignments submitted, and the range of commentary given by instructors.

(3.) While I'm well aware of the substantial body of scholarship surrounding labor practices in higher education and in first-year writing programs, engaging that debate is beyond the scope of this article. Instead, this text focuses on the situation that we face currently. It assumes that more aggressive and transparent management of a program is a positive development--one that furthers the development of graduate students as instructors and improves the instruction of our first-year students.

(4.) The "twelve principles behind the [original] manifesto," particularly as adapted by Matthias Marschall, speak to the embedded, iterative nature of such processes as our instructor training and various levels of assessment facilitated in part by our programware.

* Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

* Welcome changing requirements, even after deployment. Agile processes harness change for the customer's competitive advantage.

* Upgrade working systems frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

* Business people and all technical staff must work together daily throughout the lifetime of the platform.

* Build platforms around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

* The most efficient and effective method of conveying information to and within a technical team is face-to-face conversation.

* A working platform is the primary measure of success.

* Agile processes promote sustainable work environments. The sponsors, technical staff, and users should be able to maintain a constant pace indefinitely.

* Continuous attention to technical excellence and good design enhances agility.

* Simplicity--the art of maximizing the amount of work not done--is essential.

* The best architectures, requirements, and designs emerge from self-organizing teams.

* At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

(5.) They suggest that those involved in instructional design and assessment ask themselves the following questions. Even a single answer of yes indicates a potential need for incorporating agile elements into the instructional and assessment processes.

* Does your team find it difficult to identify the business requirements for the learning you are designing?

* Do you find that your business requirements change frequently?

* Do your stakeholders seem resistant to adopting your recommendations?

* Does your team have access to learners so that you can observe their learning process?

* Do your requirements include adding social or informal learning to your formal learning designs?

* Do your requirements include learning measurement dashboards?

* Are your delivery schedules becoming more and more compressed?

* Do you predict that your content shelf life will be short?

(6.) Especially when even such fundamental documents as the WPA Outcomes Statement 3.0 are directed at those who have
expert understanding of how students actually learn to write. For this
reason, we expect the primary audience for this document to be
well-prepared college writing teachers and college writing program
administrators. In some places, we have chosen to write in their
professional language. Among such readers, terms such as rhetorical and
genre convey a rich meaning that is not easily simplified. While we
have also aimed at writing a document that the general public can
understand, in limited cases we have aimed first at communicating
effectively with expert writing teachers and writing program
administrators. (60-61)

The issue here is that for new instructors who don't have a solid foundation in teaching college writing, basic terms such as audience and purpose may not resonate enough to enable these instructors to actually help students improve their writing.

(7.) The software shares features of the much-maligned applications used for automated essay evaluation; however, the goal here is to bring information to the attention of WPAs and instructors so they can examine behaviors--not to provide a final evaluation. What should be noted, though, is the software's ability to provide an alternative view of textual data.

(8.) Both instructors and administrators also have access to each instructor's portfolio of graded work--all assignments, all instructor commentary, and, if a collaboratively graded assignment, the comments and grades given by other instructors on that assignment.

(9.) These assignments are presented to all group instructors with only the course and section number identified, not the name of the student. The classroom instructor, of course, has access to the full identifying information for the student in her records.

(10.) The comments the instructor refers to with "which I mention above" consisted of four phrases, each of which encouraged the student to expand her discussion of a particular rhetorical choice without any discussion of how the student might do so.


Allen, Jo. "The Impact of Student Learning Outcomes Assessment on Technical and Professional Communication Programs." TCQ 13.1 (2004): 93-108. Print.

Ashe, Diana. "Fostering Cultures of Great Teaching." WPA: Writing Program Administration 34.1 (2010): 155-61. Print.

Batt, Thomas A. "The Rhetoric of the End Comment." Rhetoric Review 24.2 (2005): 207-23. Print.

Beck, Kent. "Agile Manifesto." 2001. Web. 18 May 2013.

Bienkowski, Marie, Mingyu Feng, and Barbara Means. "Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics: An Issue Brief." US Department of Education, Office of Educational Technology (2012): 1-57. Print.

Bloom, Lynn Z. "Review of The Managerial Unconscious in the History of Composition Studies." Rhetoric Review 31.3 (2012): 350-52. Print.

Bradley, W. James. "Horizontal Assessment." Assessment Update 21.3 (2009): 10-11. Print.

Estrem, Heidi and E. Shelley Reid. "Writing Pedagogy Education: Instructor Development in Composition Studies." Exploring Composition Studies: Sites, Issues, and Perspectives. Ed. Kelly Ritter and Paul Kei Matsuda. Boulder: UP of Colorado, 2012. 223-40. Print.

Groves, Amy, Catherine Rickelman, Connie Cassarino, and M. J. Hall. "Are You Ready for Agile Learning Design?" T+D (2012): 46-51. Print.

Johnson, Larry, Samantha Adams Becker, Victoria Estrada, and Alex Freeman. NMC Horizon Report: 2014 Higher Education Edition. Austin, Texas: The New Media Consortium, 2014. Print.

Kemp, Fred. "Computers, Innovation, and Resistance in First-Year Composition." Discord and Direction: The Postmodern Writing Program Administrator. Ed. Sharon J. McGee and Carolyn Handa. Logan: Utah State UP, 2005. 105-22. Print.

Langbehn, Karen, Megan McIntyre, and Joseph Moxley. "Re-Mediating Writing Program Assessment." Digital Writing Assessment & Evaluation. Ed. Heidi A McKee and Danielle N. DeVoss. Logan: Utah State UP, 2013. Computers and Composition Digital Press. Web. 8 Apr. 2013.

Marschall, Matthias. "The 12 principles Behind the Agile Manifesto Adapted to Web Operations." Agile Web Development and Operations 7 Aug. 2009. Web. 25 Apr. 2013.

McKee, Heidi A., and Danielle N. DeVoss, eds. Digital Writing Assessment & Evaluation. Logan: Utah State UP, 2013. Computers and Composition Digital Press. Web. 8 Apr. 2013.

Moxley, Joseph M. "Big Data, Learning Analytics, and Social Assessment Methods." Journal of Writing Assessment 6.1 (2013). Web. 15 Aug. 2013.

Reid, E. Shelley, Heidi Estrem, and Marcia Belcheir. "The Effects of Writing Pedagogy Education on Graduate Teaching Assistants' Approaches to Teaching Composition." WPA: Writing Program Administration 36.1 (2012): 32-73. Print.

Rose, Shirley K. "The WPA Within: WPA Identities and Implications for Graduate Education in Rhetoric and Composition." College English 75.2 (2012): 218-30. Print.

Schwieger, Dana, and Ken Surendran. "Information Technology Management: Course Re-design Using an Assessment Driven Approach." 2012 Proceedings of the Information Systems Educators Conference 29.1922 (2012): 1-14. Print.

Smith, Summer. "The Genre of the End Comment: Conventions in Teacher Responses to Student Writing." College Composition and Communication 48.2 (1997): 249-68. Print.

Sommers, Nancy. "Responding to Student Writing." College Composition and Communication 33.2 (1982): 148-56. Print.

Straub, Richard. "The Concept of Control in Teacher Response: Defining the Varieties of 'Directive' and 'Facilitative' Commentary." College Composition and Communication 47.2 (1996): 223-51. Print.

Strickland, Donna. The Managerial Unconscious in the History of Composition Studies. Carbondale: Southern Illinois UP, 2011. Print.

Taggart, Amy Rupiper, and Margaret Lowry. "Cohorts, Grading, and Ethos: Listening to TAs Enhances Teacher Preparation." WPA: Writing Program Administration 34.2 (2011): 89-114. Print.

Wasley, Paula. "A New Way to Grade." The Chronicle of Higher Education 10 Mar. 2006: A6-A8. Print.

Susan M. Lang is Professor and Director of the First-Year Writing Program in the Technical Communication and Rhetoric Division of the Department of English at Texas Tech University, a public Carnegie Tier 1 university. She teaches courses in data and text mining, writing program administration, social media integration, other aspects of technical communication, and rhetoric and composition. She has published in College English, College Composition and Communication, Computers and Composition, JTWC, and various edited collections.
COPYRIGHT 2016 Council of Writing Program Administrators
No portion of this article can be reproduced without the express written permission from the copyright holder.

Author: Lang, Susan M.
Publication: Writing Program Administration
Date: Mar 22, 2016
