College Writing Assessment

Fall 2010

 

This is the first time the English Discipline has attempted to assess the College Writing program, and the effort is very much a work in progress.  While the goals of assessment are clear, the precise assessment methodology has yet to be fully defined.  Despite this lack of clarity in the assessment process, the student papers examined do seem to indicate that students in College Writing classes are exposed to and learning the basic writing goals described by the English Discipline, which are excerpted here:

 

(II) College Writing Assessment.  As demonstrated in Table II, all discipline goals are met by the current curriculum.  Furthermore, as demonstrated by the common language and goals expressed on all College Writing syllabi, all College Writing courses advance a core set of writing goals:  Writing as a Process, Effective Academic Argumentation, Understanding Grammar and Punctuation Conventions. 

 

In 2009, the English Discipline created an assessment process in large part by adapting methods that have been used at other institutions.  In our general assessment strategy document, English set out the following procedure:

 

Assessment of the College Writing curriculum also includes the following:

a.       Annual assessment of College Writing papers:  Each member of the faculty teaching College Writing in an academic calendar year collects representative examples from one of their sections.  At the beginning of the semester, “weak,” “average,” and “strong” samples of student work are collected.  At the end of the semester, the same students’ work is collected again.  These writing samples are reviewed by a subcommittee of English faculty early in the summer of pertinent years to determine whether the work demonstrates that 1) the goals for these courses are being achieved and 2) the quality of student work is improving from the beginning of the semester to the end.

b.       Annual review of College Writing syllabi:  This review will occur at the first College Writing meeting of the academic calendar year.  Faculty members teaching College Writing will provide a representative example of their College Writing syllabus.  These syllabi will be reviewed to determine whether they contain the common language and requirements of the course.

c.       Annual College Writing meeting of one to two hours in which faculty discuss the current College Writing curriculum, the strengths and weaknesses of the program, what has changed over the past year, and what needs to change for greater student success.  This meeting should happen in the spring of each year, for reasons similar to those for the annual English Discipline meeting. 

 

To carry out part (a) of the assessment plan above, we obtained a number of sample assessment rubrics from other university writing programs and used Barbara Walvoord’s rubric to evaluate the student papers collected last year.  Walvoord’s rubric examines essays according to nine criteria and describes levels of proficiency ranging from “No/Limited Proficiency” to “High Proficiency.”  Each paper examined by this process receives nine separate ratings that can be combined to provide a comprehensive score.  By this measure, UMM students are indeed being exposed to and learning the basic skills required for success in College Writing.

 


 

College Writing Assessment Ratings

Criterion                      | No/Limited Proficiency | Some Proficiency    | Proficiency          | High Proficiency
-------------------------------|------------------------|---------------------|----------------------|-----------------
Thesis/Focus Originality       | d                      | a,A,c,D,e,E,f,F,G   | b,g,c                | B
Thesis/Focus Clarity           | d,e                    | c                   | a,A,b,c,D,E,f,F,g,G  | B
Organization                   | e                      | a,A,c,C,d,D,E,f     | b,B,F,g,G            |
Support/Reasoning              |                        | a,A,c,d,D,e,E,f,G   | b,B,C                | g
Uses of sources/documentation  | (NA: a,c,d,e,f,g)      | A,D,E,F,G           | b,B,f,g,G            |
Audience Awareness             |                        | a,A,c,C,d,D,e,E,F   | b,B,f,g,G            |
Style: sentences/syntax        |                        | a,A,c,C,d,D,e,E,F   | f                    | b,B,g,G
Writing Conventions            |                        | A,e                 | a,b,c,C,d,D,E,f,F    | B,g,G
Presentation                   |                        |                     | a,A,c,C,d,D,e,E,f,F  | b,B,g,G

 

 

 

While the results of this year’s assessment process are encouraging in terms of student learning, the English Discipline believes that there are significant problems with the process we used for assessing College Writing.  One important weakness of our current model is that, although the method is time-consuming, the quantitative data it produces is not particularly useful for improving teaching.  The subcommittee that examined the collected student papers came away with a broad sense of the way students respond to particular assignments, but it is not clear how any quantitative data could be used outside the context of particular assignments and students. 

 

Some of the reasons for this conclusion include:

 

  1. Assignments vary tremendously.  Some of the assignments provided a relatively straightforward path to a thesis.  Others did not.  For example, a typical early assignment asked students to respond to an argument in an essay.  The thesis in these papers was generally a more focused version of the original argument if the student agreed with the writer, or a simple negation of the original if the student chose to disagree.  In other cases, students were not provided as direct a path to a thesis, perhaps requiring more work to generate one.  The concern here is not with the pedagogy of these assignments, but with knowing how to compare the very different essays that arise from them.

 

  2. In many cases the second paper provided was the research paper, which made it difficult to gauge development over the course of the semester.  Research projects at UMM are often the most process-intensive of the semester.  Does comparing the prose or thesis in the research essay to the prose or thesis of an early project show student improvement or instructor attention?

 

  3. Some essays were the product of an instructor-managed process of revisions and some were not.  This seemed true of both the early and the later essays. 

 

  4. Nancy Pederson’s ESL student papers were included for examination.  We may need alternative methods to assess the improvement of these students.

 

  5. In all the samples of writing, there were no truly terrible essays, even among the early work.  The reasons for this are unclear, but it is possible that instructors may feel personally responsible for their students’ writing and find it difficult to include essays they feel may cast their teaching in a negative light. 

 

  6. As quantitative data, the results tend to conceal student improvement.  College Writing assignments tend to become more challenging over the course of the semester.  As a result, a student’s early and late papers may both receive the same rating for some skill, even though the later paper required considerably more sophistication to reach, for example, a rating of “Proficiency.”  Lower marks could be even more misleading:  a student whose essays were both judged at “Some Proficiency” might seem stuck at a low level of achievement when in fact substantial improvement has taken place.

 

  7. Our current assessment plan includes no clear process for using the information we gather.  Additionally, the focus seems to be on producing quantifiable results, turning the complex process of learning to write into something that can be measured numerically.

 

As a result of our experiences with this initial assessment method, the English Discipline has decided to alter its approach.  Instead of attempting to produce quantifiable data from student writing, we intend to hold two meetings of all faculty teaching writing courses each semester, one early in the semester and one toward the end.  In the first meeting, instructors will review the goals of the writing program and share syllabi, assignments, and teaching methods.  In the second meeting, making use of a version of Walvoord’s assessment rubric, the group will assess the effectiveness of these methods and assignments.  As in our current plan, early and late essays will be shared and discussed in an effort to identify best practices for individual teachers and students and to maintain some grading comparability across course sections.