
    Due to changing needs and technologies, the SMT Executive Board has decided to retire SMT Discuss (effective Nov. 9, 2021). Posts will be preserved for archival purposes, but new posts and replies are no longer permitted.

    Alternatives To Standards-Based Grading?

    Dear colleagues, 

    Are there alternatives to standards-based grading that ensure that the student masters the skills in, say, a level 1 class, before the student moves on to level 2? Any insight you can provide will be appreciated.

    Thanks so much,

    Guy


    Comments

    6 Comments
    • Dear Guy,

      There are indeed other options, one of which I discuss in an article coming out soon in the next issue of Engaging Students. The basic idea is that I set high standards for student achievement in every component of every summative assessment (i.e., exams). I communicate these standards through three-category rubrics, which describe the kind of work that will result in "high pass," "pass," or "retake" grades. Should students not reach the passing standards on their first tries, they continue to work on the components they did not pass until they do. Students in our four-course theory sequence at UMass Lowell must reach the passing level on all assessments before moving forward to the next course in the sequence.

      These ideas have been discussed in educational circles under various names, such as "mastery learning," "proficiency-based learning," and "competency-based learning." All share the basic tenet that students must master specific skills and concepts before they move forward.

      I hope this helps, and my article in Engaging Students should give you more to think about when it comes out.

      Very best,

      Garrett

    • Maybe Garrett's article will address this, but the biggest issue I have had using mastery learning in aural skills has been the academic calendar and the university's grading requirements. If a student didn't master a skill by the time of the final exam, I might assign them an incomplete and have them demonstrate mastery after the break (over Christmas or over the long summer). It had to happen after the break because the students would be leaving campus immediately. Then the university began to crack down on incompletes that did not meet certain criteria, and mastery-based assessment was not among them. We eventually got an exception for aural skills after the situation was explained, but other universities will have similar restrictions on the grades that can be assigned and specific deadlines for when they should be assigned. Other than the hoops, I loved mastery-based assessment (for aural skills specifically).

    • I'm interested to see the new issue of Engaging Students when it is released. But in the meantime, I can share what I've been developing over the last few years. I use specifications grading/assessment in all my courses, based on Linda Nilson's book Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time (Sterling, VA: Stylus Publishing, 2014), and on Robert Talbert's experiments with it. It is a mastery-based system that gives students room to learn from their mistakes and take a greater degree of ownership over how and when they are assessed on certain topics.

      Here are the pros and cons of specs grading:

      Pros:

      - Ready accommodation of quantitative assessments
      - Far less grading (and laboring over the minutiae of point values)
      - Fewer conflicts with students
      - Less time spent evaluating qualitative tasks
      - Greater motivation for meaningful learning through quality work
      - Students more likely to learn from errors and corrective feedback
      - Less stress and likelihood of cheating
      - Improved student performance over time (through stronger models)
      - Reasonably high standards
      - Early warning of student weaknesses

      Cons:

      - Unfamiliarity (for both students and faculty)
      - Requires a comprehensive plan for the entire course
      - Need for clearly defined evaluative criteria and models of acceptable performances
      - Likely to require more feedback

      For every assignment, I specify how many errors they can make, and of what kinds, before the work becomes unsatisfactory. I use variations of this system for concept-heavy courses like theory, skill-heavy courses like aural training, and writing-heavy courses as well.

      For example, in a theory course, students' grades are based on three kinds of evidence:


      1. Engagement with course topics. Students earn "experience points" (XP) for checking their homework against my answer key before class (half points for doing it late), or various other activities that show they are thinking about and engaging with the material outside of class.

      2. Mastery of basic skills/concepts (the 18 learning targets). Each learning target gets a quiz, and students take the quizzes on designated days throughout the semester. They can take as many or as few as they want, depending on how prepared they feel to show mastery of a given topic. If they do not earn satisfactory marks, they may reattempt after more study, meeting with a tutor, or coming to office hours for help.

      3. The ability to creatively apply those basic concepts/skills to new situations via analysis or composition. These Application Challenges are marked on a four-tier scale: S+ (which sounds similar to Garrett's "high pass"), S ("Satisfactory" or "Meets Expectations"), U ("Unsatisfactory" or "Revision Needed"), and U– (i.e., not assessable).

      You can probably see how this progresses from shallow to deep engagement and familiarity with the semester's concepts. This translates into grades like so:

      To earn this grade, complete:

      - D: Achieve 2100 XP + earn S on 11 Learning Targets
      - C: Achieve D + 300 more XP (total of 2400) + earn S on 2 more Learning Targets (total of 13) + earn S on Application Challenge
      - B: Achieve C + 300 more XP (total of 2700) + earn S on 2 more Learning Targets (total of 15)
      - A: Achieve B + 300 more XP (total of 3000) + earn S on 2 more Learning Targets (total of 17) + earn S+ on Application Challenge

      Because there are no weighted averages to calculate, I actually give students a checklist to see how close they are to their desired semester grade. They simply check off boxes as they complete requirements. 
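      The thresholds above are simple enough to express as a short checklist function. This is only an illustrative sketch of the requirements table, not code from the course; the function name, the explicit "F" fallback, and treating an S+ as also satisfying an S requirement are my own assumptions.

```python
from typing import Optional

# Hypothetical sketch of the grade checklist described above.
# `challenge` is the best Application Challenge mark earned so far
# ("S+", "S", "U", "U-"), or None if not yet attempted.

def semester_grade(xp: int, targets_passed: int,
                   challenge: Optional[str]) -> str:
    """Return the highest grade whose cumulative requirements are met."""
    if xp >= 3000 and targets_passed >= 17 and challenge == "S+":
        return "A"
    if xp >= 2700 and targets_passed >= 15 and challenge in ("S", "S+"):
        return "B"
    if xp >= 2400 and targets_passed >= 13 and challenge in ("S", "S+"):
        return "C"
    if xp >= 2100 and targets_passed >= 11:
        return "D"
    return "F"  # assumed fallback: D requirements not yet met
```

      Because each grade's requirements contain the previous grade's, checking from the top down returns the best grade earned so far, which is exactly what the student's checklist shows.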

      I realize that despite the length of this post, I'm leaving out a lot of details. How this works in an aural skills context is a bit different, as it would be in a writing-intensive course. But I'm happy to go into more detail if there is interest.

      Hope this is of some use,

      Enoch

    • Regarding Brent's comments about university policies, I guess I've been lucky not to have had these sorts of issues raised. We do offer students incomplete grades when they have not yet demonstrated mastery of something, which we try to clean up before or right at the start of the next semester. But we also have students retake entire courses when their performance is not up to par. We try to catch students in this position before the "W" withdrawal deadline—the point at which students can withdraw from a course and not receive an "F" grade. Ideally, everyone who moves beyond this deadline has demonstrated the capacity to pass the class, preventing them from ending up with a grade that could hurt their GPAs. And should a student not meet the mastery standards and withdraw, they take the class again.

      One aspect of mastery learning that I like is that it recasts withdrawing or "failing" a course as merely evidence that a student needs more time to work on their mastery. Instead of having exam dates or course limits impose hard-and-fast deadlines on their learning, students can take the time they need. As I write in the article, I have often found that some students never really improve their performance throughout a theory sequence; a "C" student stays a "C" student. By setting a high bar for all, no one slips through the cracks and moves forward with serious knowledge or skill deficits.

      Regarding Enoch's "specifications grading," I am happy to see that others are experimenting with these ideas. I completely agree with his list of pros and cons. The one "pro" that I would underline is the simplification of grading: when assessment standards are clearly defined, an instructor simply indicates whether those standards have been met, gives feedback on what would bring a student's work up to those standards, and moves on. No fiddly systems involving point deductions are needed. The one "con" I would add is that arranging time for retaking assessment components can be difficult. This has pushed me towards more creative sorts of assessments (e.g., composition and improvisation) that can be revised out of class or quickly re-attempted one-on-one after class. But it is very possible that this sort of system cannot work in large lecture situations where the time needed to re-hear or re-grade student work without the aid of teaching assistants is not available.

    • I certainly agree with Garrett's additional "con." One way I've tried to alleviate that problem (since I don't have TAs) is to limit student retakes to a set number of 15-minute appointments per week (with two business days' notice). The student performs the retake orally, and I can assess in real time how well they have mastered the concept.

      I also make sure that the two-hour exam block we get at the end of the semester is purely for retakes—nothing new to attempt. This way students can be proactive and skip the final exam period entirely, or use the extra time as they need it. It's not a perfect system, but it is working better the more I use it (and the more students become accustomed to how to budget their time).

    • Another way of overcoming the time/logistics con is to create a lot of adaptive assessment activities in your institution's LMS (we use Moodle at Casper College). I've been creating ever-larger banks of questions so that students can just repeat a given assessment activity until they achieve mastery with different questions randomly picked each time. It DOES take a lot of upfront work, but saves an incredible amount of time and effort during the school year while allowing for as much individual student pacing as the semester calendar permits.
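      The bank-based retake idea can be illustrated outside any particular LMS. This is a generic sketch, not Moodle's actual mechanism, and all the names in it are invented: each attempt draws a fresh random subset of the bank, so a student retaking an assessment sees different questions.

```python
import random

# Generic sketch of bank-based retakes (not Moodle's implementation).
# Each attempt samples distinct questions from the bank, so repeated
# attempts at the same assessment draw different question sets.

def draw_attempt(bank, n_questions, rng=None):
    """Pick n_questions distinct questions from the bank for one attempt."""
    rng = rng or random.Random()
    return rng.sample(bank, n_questions)

bank = [f"Question {i}" for i in range(1, 51)]   # a 50-question bank
first_try = draw_attempt(bank, 10, random.Random(1))
retake = draw_attempt(bank, 10, random.Random(2))
```

      The upfront cost Nathan describes is building a bank large enough that repeated draws rarely overlap; the draw itself is trivial.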

      My grading system, by the way, is to score each objective on a scale from 1 to 4: 1 = complete lack of mastery, 2 = rudimentary mastery, 3 = satisfactory mastery, 4 = perfect mastery. Receiving a 4 in all objectives results in an A for the course, while a 3 in all objectives results in a C. I allow students with a handful of 2s to continue on in the sequence with a D, but the lacking objectives must be satisfied to pass the next course in the sequence. I have found that while it takes students a little while to wrap their heads around the system, it greatly improves student motivation and performance compared to the "traditional" grading system I used previously.
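      Nathan's mapping is concrete enough to sketch, though the post only pins down three of the cases. In the sketch below, the "handful of 2s" cutoff, the B band for mixed 3s and 4s, and the F fallback are entirely my assumptions, not part of his system.

```python
# Hypothetical sketch of the 1-4 objective scale described above.
# Only the A, all-3s C, and "handful of 2s" D cases are stated in the
# post; the HANDFUL cutoff, the B band, and the F fallback are guesses.

HANDFUL = 3  # assumed maximum number of 2s that still earns a D

def course_grade(scores):
    """Map per-objective mastery scores (1 = none ... 4 = perfect) to a grade."""
    assert all(1 <= s <= 4 for s in scores)
    if all(s == 4 for s in scores):
        return "A"
    if all(s >= 3 for s in scores):
        # All objectives at least satisfactory; all 3s is a C, so a mix
        # of 3s and 4s is treated as a B (my assumption).
        return "C" if all(s == 3 for s in scores) else "B"
    if all(s >= 2 for s in scores) and sum(s == 2 for s in scores) <= HANDFUL:
        return "D"  # may continue, but weak objectives must be cleared later
    return "F"  # assumed fallback for any 1s or too many 2s
```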

      Nathan Baker

      Music Theory Coordinator, Casper College, WY

      nbaker@caspercollege.edu