The Joy of Assessment: Five Things to Consider

Anna Leahy

Chapman University / 714-628-7389 / leahy@chapman.edu

Attitude Matters: C’mon Get Happy

When we see someone else smile, our own mirror neurons fire so that we, too, experience the sensations we associate with smiling. In other words, if I see you smile, I feel a little as if I’m smiling, too. Or I know what you’re feeling: joy. Cognitive science suggests that these mirror neurons allow us to develop empathy and to socialize well. So, I begin this talk by saying that smiling—putting on a happy face—might help us work together to accomplish assessment.

Recent studies also show that blame is contagious. Shifting blame becomes especially common within organizations that feel threatened. If I see you blame someone, I become more likely to blame someone else, too. When leaders point fingers publicly, instability in a university or department can quickly spread. Individuals, then, become less willing to take risks and less able to be innovative, which is antithetical to our work as writers and teachers. Faculty should be wary of voicing negative attitudes, because those attitudes have staying power.

These two points may be the most important part of my paper because, as I was deciding what to include, news broke that conservative education scholar Diane Ravitch had changed her mind about the central role of standardized testing, the need for educational choices like charter schools, and her belief that the free market will improve schools. I have long advocated that we document our pedagogical approaches, but I have been deeply concerned about the effects of the rise of assessment on the arts and humanities. Ravitch now admits that No Child Left Behind’s “requirements for testing in math and reading have squeezed vital subjects like history and art out of classrooms.” It feels newly problematic that some people who have driven accountability in education are saying, oops!

NSSE Doesn’t Matter, But Maybe It Does

On January 26, Inside Higher Ed published a short article called “Going Public” by Douglas C. Bennett, Earlham College’s president. He advocates what he calls a “public learning audit” as part of accreditation. This audit would be available to the public as well as the accreditation visit team, would be based on a rubric provided by the accreditor, and would include scores from the National Survey of Student Engagement (NSSE) and other tools, as well as program-level assessment. How many of you know what NSSE is? Do you know whether your institution participates in NSSE? Until I asked at a department meeting last fall, English faculty didn’t know that our university uses NSSE. Only a couple of us knew what NSSE is.

Administrators like NSSE a lot, and a program that doesn’t yet have student learning outcomes can use NSSE to identify areas in which English excels. Let me summarize part of my article about NSSE at Inside Higher Ed.

“NSSE presents itself as an outside—seemingly objective—tool to glean inside information. […I]t provides feedback on a wide array of institutional issues, from course assignments to interpersonal relationships, in one well-organized document. Additionally, the report places an institution in a context, so that a college can compare itself both with its previous performance and with other colleges generally or those that share characteristics. And it doesn’t require extra work from faculty. […]

“Yet, NSSE does not directly measure student learning; the survey tracks students’ perceptions or satisfaction, not performance. [… The NSSE website implies] that an increase in a score from one year to the next is random unless the institution was intentionally striving to improve, in which case, kudos. Yet, NSSE encourages parents to “interpret the results of the survey as standards for comparing how effectively colleges are contributing to learning” in five benchmark areas, including how academically challenging the institution is.

“[…] So, let’s see what we might glean from NSSE [that might lead to outcomes].
“Here are items from the first page of the 2007 NSSE:

  • Asked questions in class or contributed to class discussions
  • Made a class presentation
  • Prepared two or more drafts of a paper or assignment before turning it in
  • Worked on a paper or project that required integrating ideas or information from various sources
  • Included diverse perspectives (different races, religions, genders, political beliefs, etc.) in class discussions or writing assignments

“[…] In another section, students were questioned about the number of books they had been assigned and the number they had read that weren’t assigned. They also reported how many [papers of various lengths they’d written]. We can quibble about these lengths, but, as an English professor, I agree with NSSE that putting their ideas into writing engages students and that longer papers allow for research that integrates texts, synthesizes ideas, and encourages application of concepts. And reading books is good, too.

“[…] If we are at a loss for learning outcomes or struggle to be clear and concise, we have existing expectations from NSSE that we could adapt as outcomes.”

Setting Our Priorities

As we practice program-level assessment, I see our greatest difficulty as the tension between our priorities as teachers and what is most easily measured. Our goals for our students are often not easily measured, and we, on the whole, are not well trained in the sociological research methods that underpin assessment practices. So, again drawing in part from my earlier Inside Higher Ed article, I’ll try to articulate this issue.

“Included in Thomas A. Angelo and K. Patricia Cross’s Classroom Assessment Techniques is a table of top-priority teaching goals by discipline. Priorities for English are Writing skills, Think for oneself, and Analytic skills, in that order. Arts, Humanities, and English have just one goal in common: Think for oneself. We can survey student perceptions of their thinking—an indirect measure—or maybe we know independent thinking when we see it, but how do we determine thinking for oneself in a data set? […]

“[…] Another table in Classroom Assessment Techniques lists perceived teaching roles. […] For English, all other roles pale in comparison to Higher-order thinking skills, which 47 percent of respondents rated most essential; the next most important teaching role is Student development at 19 percent. No other discipline is close to this wide a gap between its first- and second-ranked roles. Surely, that’s what we should assess.”

We strive to create an environment to foster creativity, and we see evidence of creativity in our students’ writing, but it is difficult to quantify that evidence, especially because we succeed as teachers when each student provides a different answer—a distinct iteration—to the question of what a story is or what a poem can achieve.

Student Learning Outcomes

I’ve brought Chapman University’s MFA student learning outcomes, developed last spring. For both the MFA and BFA, assessment data will be drawn from randomly selected thesis projects. In practice, the undergraduate thesis already includes an introductory essay; such an essay will likely now be required of both MFA and BFA theses so that we can determine how aware students are of their learning. For instance, one outcome says that work will exhibit control, which implies intention. Another learning outcome addresses style, and its criteria offer very different ways of defining style from which we might choose thesis by thesis; a student’s essay could help shape the appropriate, individual definition.

Still, these outcomes look difficult to assess, and our faculty probably will have diverse interpretations of the criteria. In addition, we loaded them with “and/or” to satisfy all tenure-line creative writing faculty. We outlined criteria so that we can pick and choose to fit an individual student’s work, which may be appropriate but which risks relying on I know good writing when I see it. Also, we have so loosened the connection with genre that the criteria do not reflect our genre-defined curriculum. That’s again where the introductory essay could help: even though we require a genre-based Techniques course so that students develop proficiency with a genre’s techniques, a student may abandon various techniques of that genre in the thesis, and the introductory essay can show us that individual’s decisions, awareness, or proficiency in that area.

Curriculum Map

We are now required to do a Curriculum Map. I’m not sure whether it is a university requirement or required by the Western Association of Schools and Colleges (WASC), and it’s a chart more than it is a map, as you can see from our MFA Curriculum Map. Even if you are not required to produce such a document, I encourage you to create a one-page, visual representation of how your program’s student learning outcomes intersect with the required courses.

To produce such a chart, list the student learning outcomes across the top and the required courses down the side. While you could include electives, that dissipates the focus on what’s essential in your program. Once you have the chart formed, insert the terms introduce, develop, and master in the appropriate boxes to indicate which courses introduce, develop, or lead to mastery of each outcome. Mastery is the terminology of higher-ups. Creative writers often relearn with each project, so our lack of mastery keeps us writing. Ironically, the goal of assessment-driven learning—mastery—may be something we don’t really expect in our students.

Still, the chart quickly shows whether a program has any required course that does not serve an outcome and, more importantly, whether any outcome is not embedded in the requirements. Best of all, in one quick look, you see a version of what your program accomplishes—and how.
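For a program that keeps its curriculum map in a spreadsheet or other digital form, these two checks are simple enough to automate. Here is a minimal sketch in Python; the course names, outcomes, and the “Research” gap are hypothetical placeholders, not Chapman’s actual curriculum:

```python
# Minimal sketch of a curriculum map: required courses down the side,
# student learning outcomes across the top. Each cell records whether a
# course introduces, develops, or leads to mastery of an outcome.
# All names below are hypothetical placeholders.
curriculum_map = {
    "Techniques": {"Craft": "introduce", "Style": "introduce"},
    "Workshop":   {"Craft": "develop",   "Style": "develop"},
    "Thesis":     {"Craft": "master",    "Style": "master", "Revision": "master"},
}

# The program's full list of stated outcomes.
outcomes = {"Craft", "Style", "Revision", "Research"}

# An outcome not embedded in any required course is a gap in the program.
covered = {o for row in curriculum_map.values() for o in row}
uncovered = outcomes - covered
print("Uncovered outcomes:", sorted(uncovered))   # prints: Uncovered outcomes: ['Research']

# A required course serving no outcome is also worth questioning.
idle_courses = [course for course, row in curriculum_map.items() if not row]
print("Courses serving no outcome:", idle_courses)  # prints: Courses serving no outcome: []
```

The same one-page chart a committee reads by eye, in other words, doubles as a small data set that can flag gaps automatically whenever outcomes or requirements change.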

This map can lead faculty to ask informed questions about what’s happening in each core course and how much each outcome is being developed. For instance, if we look at Chapman’s Curriculum Map, we see that the Techniques and Workshop courses both introduce and develop certain outcomes. This doubling-up likely occurs because we don’t have prerequisites, sequence doesn’t matter, and Workshops are repeated. We might ask whether these characteristics of the program are logistical—some of our students are part-time, we can’t offer every course every semester—or whether they are pedagogical and really do serve students’ learning.

The map can help us think about prerequisites, academic advising, how we collect assessment data, and all sorts of programmatic and pedagogical issues. It may also be the simplest, least time-consuming task faculty can do. And that makes me smile.
