Thursday, April 18, 2013

The Bucket-O-Points Approach to Grading

This year marks my 15th as an educator. Some things about my teaching practice have changed dramatically over that time; I'm the sort who continues to tinker and experiment with my classroom practice, because I'm sure I can get better. My thinking about the meaning and purpose of assessment has definitely shifted over these years. I've been thinking a lot about assessment lately, and I've been having conversations with colleagues about assessment for several years now. It boils down to the fact that I'm concerned with how letter grades are used.

When I started teaching, I was a junior high math and high school computer applications teacher. Being a "math guy," I was all about intricate calculations and statistical analysis of my students' grades. I had programmed a spreadsheet to do my calculations, weighting different categories, and even color coding so I would know at a glance what grades students were getting. ("A" = Green, "B" = Yellow, "C" = Orange, "D" or "F" = Red)

Looking back now, I call my approach then the "Bucket-O-Points" approach to grading.

Here is the basic scheme behind the Bucket-O-Points method:

1. The teacher decides on the categories he or she is going to use. Perhaps "Homework" is a category, maybe "Quizzes" would be another, and "Tests" another.

2. The teacher decides how much each category--or "bucket"--is going to be worth in the overall grade. Maybe Homework is worth 40%, Quizzes 30%, and Tests 30%.

3. Each assignment adds more points to the bucket. Students take a test? Add the points to the Tests bucket. Students do a problem set for homework? Add the points to the Homework bucket.

4. How big do the buckets get? It depends on the number of assignments the teacher gives for each category. This is why the percentages are assigned. Perhaps the teacher will give twenty 5-point homework assignments, making the Homework bucket 100 points. Perhaps the teacher will give two 30-point quizzes, making the Quiz bucket 60 points. Perhaps the teacher gives just one test for 75 points. The percentages allow the teacher to weight each category to make the final grade "fit" his or her ideal.

5. At the end of the term, the teacher calculates how big each bucket is. (How many points were possible for that category this term?) Then the teacher calculates how "full" each bucket is. (How many points did the student receive for their work? Did they miss assignments? They missed chances to fill their bucket! Did they only get partial credit? They missed chances to fill their bucket!) This is the step we often call "averaging" grades: adding up how many points the student received and dividing by the number of points possible. If there were 50 points possible for Quizzes that term, and the student received 44 points, they would score 88% for quizzes.

6. The students' grades for the term then are determined by the formula the teacher devises: each bucket is measured (how full is it?) and then these numbers have the percentages applied to them to determine the students' final mark. Each percentage correlates to a letter grade on a pre-determined scale. (A = 96-100%, A- = 90-95%, etc.)

7. These letters are reported to students and their parents as a summary of the learning the students have done over the course of the term.
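The whole scheme can be sketched in a few lines of code. This is a minimal illustration of the steps above, using the hypothetical numbers from the list (Homework 40%, Quizzes 30%, Tests 30%; twenty 5-point homework assignments, two 30-point quizzes, one 75-point test); the letter scale below 90% is an assumption filled in for illustration, not a scale from the post.

```python
# A sketch of the weighted "Bucket-O-Points" calculation described above.
# Points earned and points possible per bucket, for one hypothetical student.
buckets = {
    "Homework": (88, 100),  # twenty 5-point assignments
    "Quizzes":  (44, 60),   # two 30-point quizzes
    "Tests":    (60, 75),   # one 75-point test
}
weights = {"Homework": 0.40, "Quizzes": 0.30, "Tests": 0.30}

# Step 5: how "full" is each bucket? (points received / points possible)
fullness = {name: earned / possible
            for name, (earned, possible) in buckets.items()}

# Step 6: apply the category weights to get the final percentage...
final_pct = sum(weights[name] * fullness[name] for name in buckets) * 100

# ...and map it onto a pre-determined letter scale.
# (A = 96-100%, A- = 90-95% as in the post; the cutoffs below 90 are invented.)
def letter(pct):
    scale = [(96, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"),
             (77, "C+"), (73, "C"), (70, "C-"), (60, "D"), (0, "F")]
    for cutoff, grade in scale:
        if pct >= cutoff:
            return grade

print(f"{final_pct:.1f}% -> {letter(final_pct)}")  # prints "81.2% -> B-"
```

Notice that the final number depends as much on the chosen weights as on the student's scores; change the weights and the same work produces a different letter.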

Not all teachers take this detailed, weighted approach to grading. (I am kind of a math geek...) But that said, the majority of teachers I've spoken with use a simplified version of the Bucket-O-Points method for grading their students' work. They might just have one bucket, but the basic approach is the same:
  1. Every assignment adds points to the bucket.
  2. At the end of the term, the teacher determines the size of the bucket (how many points were possible) and how full the bucket is (how many points the student received).
  3. The teacher "averages" the grade by comparing the number of points the student received to the number of points possible.
  4. This percentage is correlated to a letter according to the grading scale, and this is reported to students and parents.
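The one-bucket version is even simpler to sketch. The scores below are made up for illustration; the only machinery is a single running total.

```python
# The simplified one-bucket version: every assignment's points go into one
# running total, and the grade is simply earned / possible.
scores = [(5, 5), (4, 5), (27, 30), (44, 50), (68, 75)]  # (earned, possible)

earned = sum(e for e, _ in scores)
possible = sum(p for _, p in scores)
pct = earned / possible * 100

print(f"{earned}/{possible} points = {pct:.1f}%")  # prints "148/165 points = 89.7%"
```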
Now, this is such a common practice that you're probably just nodding along and saying, "Yep. That's how it works."

But think about this, for a moment: What does that letter really represent?

Supposedly, all of the information from the term is compressed into that one symbol. So a student with a "B+" has clearly learned more than a student with a "C+," right?

Not necessarily.

Purely mathematically, the C+ might be because the student refuses to turn in homework assignments (or doesn't show her work--am I right, math teachers?). It's not that she doesn't understand the concepts to be learned. Maybe her Tests and Quizzes buckets were nearly full to the brim--she understands the material at a high level! But because her Homework bucket is so empty, her whole grade is lower. Does the C+ really represent what she has learned then?

Or perhaps our student with the B+ actually learned much more than the student with the C+? Perhaps our student with the C+ is very adept at getting her homework done, on time and complete. (Perhaps she has a parent helping out on a nightly basis?) But when it comes to test and quiz time, she is not performing well at all. In this case, the brimming Homework bucket raises her grade higher than her understanding alone would warrant, because her Quiz and Test buckets aren't as full. Again, does the C+ really represent what she has learned in this case?
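A quick calculation makes the distortion concrete. The bucket "fullness" values below are invented to match the two scenarios just described (they are not from the post), run through the same hypothetical 40/30/30 weighting:

```python
# Two hypothetical students under the same 40/30/30 weighting:
# one aces tests and quizzes but skips homework; the other completes
# every assignment but struggles on assessments.
weights = {"Homework": 0.40, "Quizzes": 0.30, "Tests": 0.30}

# fraction of each bucket filled: (tests-strong student, homework-strong student)
fullness = {
    "Homework": (0.55, 1.00),  # one skips homework; the other turns in everything
    "Quizzes":  (0.95, 0.78),
    "Tests":    (0.95, 0.80),
}

for label, i in (("strong tests, weak homework", 0),
                 ("strong homework, weak tests", 1)):
    pct = sum(weights[b] * fullness[b][i] for b in weights) * 100
    print(f"{label}: {pct:.1f}%")
```

With these numbers, the student who demonstrably understands the material lands at 79.0% (a C+ on many scales), while the student who struggles on every assessment but fills the Homework bucket lands at 87.4% (a B+). The weighting, not the learning, decides who "did better."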

As I became more and more aware of these kinds of idiosyncrasies in my grading practices, I began moving away from the Bucket-O-Points method. I stopped weighting categories, going to just one bucket. And then I started to minimize the importance of homework. And then I began to assign less homework. And then I began to look at alternative ways to generate a grade entirely, such as standards-based grading, which gets away from "points."

I'm a work in progress--I don't have this all figured out. But I'm pretty convinced that the Bucket-O-Points method is flawed, and for all its machinations, it probably doesn't actually give a very accurate picture of learning.

And isn't that supposed to be the point of assigning grades?


  1. ...and the Bucket-O-Points method makes the students "point mongers," not learners

  2. I am a ninth grader at Prairie Point High School in Iowa, and my school has recently instituted a four-point grading scale along with standards-based grading. The scale is: 1 - no attempt, 2 - success with help, 3 - at standard, and 4 - above standard. The way most teachers follow this is to put something on the test that they have not taught yet and claim it as an above-standard target. The issue with this is that if I know everything I am supposed to know, I have a 75%, a C. In all my classes but two, I am lucky enough to have an A, but this year's average GPA is down by .12 from last year, and this is the only difference. (In relation to current tenth graders.) What confounds me is that when I ask a teacher, an administrator, a principal, or a school board member, they all deny me, because "I'm a kid." I am more informed than most of the adults here, and I'm not trying to be snobby, but most kids and even a teacher tell me that. What I want to know is: is there any definitive proof that this grading system is better than others? And how can I get people up in arms about this? I already had a petition signed by 300 students, but that did nothing. By fourth grade, I had MAP scores that allowed me to graduate. I shouldn't have a C in English for knowing just what I should.

    1. Thanks so much for taking the time to weigh in on this topic, and I love to hear your passion about this. The shift of thinking from "bucket-o-points" to authentic feedback is a hard one for everyone involved: teachers, administrators, parents, and (perhaps most of all) students. I think that some teachers feel they have had standards-based assessment forced on them, which makes them "take it out on their students" so to speak. I hope this is not what you are actually experiencing, but it might be!

      I really appreciate your question about "proof" that a standards-based system is better than the bucket-o-points method. There is quite a lot of research that has gone into standards-based assessment; I'd encourage you to look at work by Robert Marzano or Rick Wormeli as a starting place. They both have shaped my thinking about the value of standards-based assessment. The fact of the matter is, the bucket-o-points method has wide acceptance in schools today, but there really isn't any "definitive proof" that it is a good system, other than the fact that people have been using it for a long time. Some people might say that makes it good, but I think it often is "just the way we've always done it," which doesn't automatically mean it's good; it's just what is familiar.

      I did use standards-based grading (similar to what you are describing here, but not exactly the same) the last few years I taught middle school science. Here's a link to my website, where I explained my approach and my reasoning for using standards-based assessment: I hope you'll share this with teachers (and classmates, and administrators, and anyone else who is struggling with standards-based grading!) It's not a perfect depiction, for sure, but it gives what I hope is a strong rationale for assessing students' learning this way. The key, I think, is feedback on what is going well and not so well, and then not lumping all of the students together as "the class." Different students need different things!

      Thanks again for stopping by; I hope this helps some!

      Peace to you,