Sunday, November 4, 2012

Practicing What I Preach

I'm very aware of my own hypocrisy: the gap between my educational philosophy and my classroom practice. I'm working on shrinking this gap, but let's just be honest: on this side of glory, I'm never going to make this happen perfectly.

This has taken on even more gravity for me lately, though, since I'm teaching future teachers. They are pretty critical of the teaching they are observing--some have come right out and said so to me (which is hard, but good in a way)--and they want to see the message match the methods. Basically, I'm striving to be more and more deliberate in practicing what I preach. (For an example from earlier in the semester, read this.)

For example, I just finished marking a test for my science methods class. But it wasn't a typical test. In fact, to call it a test is a little misleading. Over the past decade or so, I've become a big fan of differentiated instruction. At its core, differentiated instruction is about giving students choices: providing different options in the content they learn, in how they learn it, and/or in how they demonstrate what they have learned. (I got to hear Carol Ann Tomlinson--the "guru" of differentiated instruction--present at a National Middle School Association conference back in 2004 or so--it really changed my thinking about teaching!) To me, differentiation is a very "Christian" way to teach: it acknowledges that students are created as unique individuals, and it allows a teacher to tailor instruction to the needs, preferences, and gifts present in his or her class. When I was a middle school science teacher, I sought to do this as much as was practicable--to be honest, it is more work at first...but the more you do it, the easier it gets.

And now that I'm teaching future teachers, we talk about how differentiation is a great way to teach. But until this semester, I hadn't done much of it myself with my college students.

But I'm trying to be deliberate here, right? Practice what I preach? So, here goes...

I always put a test on the syllabus about halfway through the semester; it's a check-up to see that they understand the theories we've been learning, that they have a solid grasp of the vocabulary, that they are pulling the big ideas together.

It's just that...I realized that a pencil-and-paper test might not be the best way to do that. So I brought it up in class with my students. And they agreed, mostly...but they pointed out that studying for a test is "easier" in some ways than preparing for alternative forms of assessment--especially since I had made it pretty clear to them that I do want to know what they know, and understand what they understand.

So we talked about alternatives to a traditional pencil-and-paper test. I entertained any options they offered, at least for the sake of discussion. A few floated to the top: an interview, a group project of some sort, and writing an essay instead. After a little more conversation, we decided on three options. They could choose how they would show what they have learned so far from among these:
  1. A traditional pencil-and-paper test, made up of multiple choice and constructed response questions. (Some of them felt quite strongly that this was their preferred mode for showing what they have learned.)
  2. A "creative writing" project, in which I would give them a prompt that would give them a context in which to write about what they are learning in class.
  3. An interview, either individually or as part of a group.
Care to guess how many students picked each option? I have 26 students in this course this semester. Of those:
  • 5 chose the pencil-and-paper option.
  • 2 chose the creative writing option.
  • 3 chose an individual interview.
  • 16 chose to be part of a group for an interview.
All of them reported feeling like the choice they made provided the best opportunity for them to show what they had learned. That's a pretty powerful statement, if you think about it!

This was not a challenge-free process for me. The seven students who chose one of the written options were "easy"--we had a regularly scheduled class time that they could use for writing their test. For the other 19, I had to schedule times for them to visit with me. This took a lot more time, of course. Individual interviews lasted about 15 minutes on average; group interviews were more like 30 minutes, so altogether I spent several hours in these interviews with students. And of course there was extra time invested on my part in creating the different testing options.

I noticed a few other interesting things about these different choices. One observation: the students who took the pencil-and-paper version were actually more likely to get a poorer grade overall, though no one got anything lower than a "B+." I think that the nature of a multiple-choice test is such that we tend to focus on which questions students get wrong. (Side note: I always encourage my students to justify their answers for objective questions like multiple-choice or true-or-false--this helps me understand what they are thinking as they mark their answers. But this takes a lot more writing on their part, which they are probably less likely to do unless they are feeling tentative about the answer they have selected.)

The students who chose more open-ended options tended to do better overall, because they could tailor their responses to exactly what they had learned. Those who chose the creative writing version were given a prompt to provide some context for their writing, but then they could draw from basically all of the content of the course as they synthesized an answer. And the interview was probably the most "authentic" of all: I had a half-dozen prompts to get conversation started, but then we really just had a discussion about what they had learned. (It was interesting to note how deliberate most students were about using the vocabulary we had studied, explaining the different theories we had explored, and integrating ideas to describe what "good" science teaching looks like.)

I asked the students who chose the non-traditional options if they prepared for the test differently than they normally would. Their answers varied: some said they reviewed their notes and went through the class activities pretty much as they normally would; others (especially those who did the group interviews) tended to talk more with their group ahead of time and work more collaboratively. One duo--one of the last interviews I had--said that they thought preparing for the interview was pretty rigorous, but ultimately it made the interview a pretty "easy" test.

And altogether, they did really, really well: students discussed the nature of science and how best to teach it, they described different approaches for planning and presenting science activities, and they were able to articulate the different philosophies of education underlying contemporary views of science education. The thing that got me so excited was how they were able to integrate their new knowledge from this course with things they had learned in other courses to describe their ideas of what a distinctively Christian approach to teaching science looks like.

So was the test "easy" for them? I think this might be the wrong question. We focus so much on grades--even in higher education--that we sometimes forget that tests are NOT really about generating a grade. At least, I don't believe they should be. The test should be about finding out what students know, understand, and are able to do as a result of their learning. And if I want to know what they've learned, why not give them the chance to tell me in the mode of their choice?
