
    How's your university dealing with learning outcomes?

    Hello SMT!

    Some of you know that I have moved on from teaching music theory, but I am still heavily involved in higher education. I am currently at Southern New Hampshire University, working with their non-profit online programs. (You may have seen our commercials!) Anyway, one of the things I have picked up in my two years here so far is that the courses are designed to measure student learning outcomes. Every assignment has a rubric, and all students are measured against the same rubric. The rubrics usually include categories such as exemplary, proficient, needs improvement, and not evident, with a typical split of 100% / 90% / 50% / 0% for each section of the rubric.

    What's interesting in this approach is that it allows us to collect data on how successful assignments are in reinforcing the expected outcomes of the course. If students widely struggle with an assignment, then either the content isn't being delivered effectively or the assignment is not adequately addressing that student learning outcome. Being all online, we're able to collect this data and make some really cool observations about trends, student cohorts, assignment success, etc.
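    To make that concrete, here is a minimal sketch of the kind of aggregation involved (the record layout and the 70% flag threshold are illustrative assumptions on my part, not a description of any actual system):

```python
from collections import defaultdict

# Each record: (student, assignment, learning outcome, rubric score as a
# percentage -- 100, 90, 50, or 0 under an exemplary / proficient /
# needs improvement / not evident scheme).
records = [
    ("s1", "hw3", "voice_leading", 90),
    ("s2", "hw3", "voice_leading", 50),
    ("s3", "hw3", "voice_leading", 50),
    ("s1", "hw3", "style", 100),
    ("s2", "hw3", "style", 90),
    ("s3", "hw3", "style", 90),
]

# Collect scores per (assignment, outcome) pair.
scores = defaultdict(list)
for _student, assignment, outcome, pct in records:
    scores[(assignment, outcome)].append(pct)

# Flag pairs whose class average falls below an arbitrary 70% bar: either the
# content delivery or the assignment itself may need attention.
for (assignment, outcome), pcts in sorted(scores.items()):
    avg = sum(pcts) / len(pcts)
    flag = "  <-- review content or assignment" if avg < 70 else ""
    print(f"{assignment}/{outcome}: class average {avg:.0f}%{flag}")
```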

    It has made me think about how we evaluate and assess how students are doing in the music theory curriculum. I remember grading counterpoint exercises and assigning fractions of points for every note and error. Sometimes you would see students with "technically correct" counterpoint that nonetheless bore little resemblance to stylistic writing. Has anyone moved to a model where, rather than assigning points to errors, you grade more holistically on the level of proficiency in those skills? For example, a rubric for a melody harmonization assignment could look like this:

    | Learning Outcomes | Exemplary (100%) | Proficient (90%) | Needs Improvement (50%) | Not Evident (0%) |
    | --- | --- | --- | --- | --- |
    | Voice leading (30 points) | No errors | 1 or 2 minor errors, no major errors | Many minor errors or several major errors | Many major errors or assignment not completed |
    | Style (30 points) | Stylistic approach to melody harmonization | Overall, still very stylistic | A few awkward progressions | Lacking understanding of stylistic harmonization or assignment not completed |
    | Roman numeral analysis (20 points) | Correct RN analysis | 1 or 2 minor errors | 1 or 2 major errors or several minor errors | Many major errors or RN analysis not present |
    | Functional label analysis (20 points) | Correct function labels | 1 or 2 minor errors | 1 or 2 major errors or several minor errors | Many major errors or functional analysis not present |

    This is just an example I whipped up, but it would allow for easy collection of data on student learning outcomes. For example, if the melody harmonization exercise were meant to test a student's ability to apply secondary dominants and they did not use any, you might score them as not meeting the style criterion (or perhaps you rewrite the rubric to reflect the need for secondary dominants). Then you can quickly observe how well the class is or isn't learning the material.
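    Scoring with such a rubric is simple arithmetic: each category's points are scaled by the percentage for the level earned. A minimal sketch using the names and weights from the example table (the function itself is only illustrative, not any particular LMS's implementation):

```python
# Level percentages and category weights from the example rubric above.
LEVELS = {
    "exemplary": 1.00,
    "proficient": 0.90,
    "needs_improvement": 0.50,
    "not_evident": 0.00,
}
WEIGHTS = {
    "voice_leading": 30,
    "style": 30,
    "roman_numerals": 20,
    "function_labels": 20,
}

def rubric_score(ratings):
    """Total score: each category's points scaled by the level earned."""
    return sum(WEIGHTS[cat] * LEVELS[level] for cat, level in ratings.items())

# Proficient voice leading, needs-improvement style, exemplary analyses:
# 30*0.9 + 30*0.5 + 20*1.0 + 20*1.0 = 82 points.
print(rubric_score({
    "voice_leading": "proficient",
    "style": "needs_improvement",
    "roman_numerals": "exemplary",
    "function_labels": "exemplary",
}))  # 82.0
```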

    In an age where accreditation agencies, under public pressure over the value of a degree, are pushing for better demonstrations of student learning outcomes, I imagine that we (the music theory community at large) will have to figure out a way to collect this information in a meaningful way. I'm curious whether any of you have begun working on similar types of assessment projects and how you are approaching this issue.

    Thanks!

    Devin Chaloux

    Indiana University


    Comments

    • 5 Comments
    • Devin:

      I have been thinking very much along these lines for some time too. What we have begun doing at UMass Lowell is to have three-category rubrics for all our unit assessments. One category is "high pass" (90% or higher), one is "pass" (80–90%), and the third is "revise/retake". For each component of an assessment, we require all students to achieve a score of 80% or higher in order to move forward in the course. We define this 80% bar carefully, aiming for students to show a reasonable level of mastery of a particular task or skill. If students cannot clear this high bar on the first attempt, we offer them retakes or revisions so that they can continue to work at it and improve. This makes assessments less drastic and high-stress for students and allows slower learners to take more time before being penalized with a low grade. Grading becomes quite simple, since we define exactly what we want to see at the 80% (and 90%) level; anything that comes up short we don't bother giving a score, but instead offer detailed comments about what needs improvement. Rubrics like this make our expectations extremely clear for students; in fact, it is essential that we go over them so that students know exactly how they will be assessed. (The threshold logic itself is simple; see the sketch after the examples below.)

      Here are a few examples from a recent assessment in one of our courses:


      1. Chorale 1

        1. High pass: Bass line correct. Only minor errors in upper voices (such as a strange doubling or missed chord tone). No parallel 8ves/5ths and all sevenths resolve down. Voices are melodic and singable with no overly large leaps.

        2. Pass: Bass line mostly correct, one or two minor inversion errors. Upper voices good with a few poor doublings, missing chord tones, and no more than two outright errors. No more than one parallel 8ve/5th and one incorrectly resolving seventh. A few unnecessary leaps but still quite singable lines.

        3. Revise: Errors in bass line due to factors other than inversion mistakes. Upper voices contain many mistakes like incorrect chord tones. Two or more parallel 8ves/5ths and/or incorrectly resolving sevenths. Many awkward leaps in the voices that make them difficult to sing.


      2. Parallel period melody composition

        1. High pass: Melody is well suited to the harmony. Features several well executed NCTs and motivic development. Has the form of a parallel period.

        2. Pass: Melody is reasonably suited to the harmony. NCTs mostly make sense and some motivic development is present. Parallel period form evident but somewhat obscured by unnecessary changes at the beginnings of the two phrases.

        3. Revise: Melody does not have much basis in the harmony. NCTs do not follow pitch tendencies. No motivic development. Parallel period form not clear or nonexistent.


      3. Blues improvisation

        1. High pass: Clearly outlines blues harmony. Heavy use of motivic development, solid phrasing, shows confidence.

        2. Pass: Mostly outlines blues harmony. Some use of motivic development. Reasonable phrasing, shows confidence.

        3. Retake: No outlining of harmony. Utilizes blues scale only. No development, phrasing, or confidence.


      Hope that gives you some ideas!
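
      If you wanted to encode that cutoff logic, it reduces to a small threshold mapping. A minimal sketch (the 80/90 thresholds follow the scheme described above; the function names are just illustrative):

```python
def band(pct: float) -> str:
    """Map a component score to the three-category scheme above."""
    if pct >= 90:
        return "high pass"
    if pct >= 80:
        return "pass"
    return "revise/retake"  # no score recorded; student revises and resubmits

def may_advance(component_scores: list[float]) -> bool:
    """The 80% bar: every component must reach at least 'pass' to move on."""
    return all(band(p) != "revise/retake" for p in component_scores)

print(band(92.0), "|", band(85.0), "|", band(60.0))  # high pass | pass | revise/retake
print(may_advance([95.0, 82.0, 88.0]))  # True
print(may_advance([95.0, 70.0, 88.0]))  # False
```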


    • Hi Devin,

        Thanks for posting on this. You may be interested to check out the standards-based grading section of an article that Anna Gawboy, Bryn Hughes, Kris Shaffer, and I wrote for MTO a few years ago. I think you'll find that this approach is very much in line with what you're talking about here. My colleagues and I have been using this kind of rubric-based grading at Delaware for the past five years, and I'd be happy to chat further if you're interested in hearing what we've been up to.

      Cheers,

      Phil

    • Hey Garrett, that's really great information. I know that UML has a pretty robust online program, so do you know whether they're collecting this rubric information to analyze student learning data?

      Devin Chaloux

      Indiana University

    • Hey Phil, 

      Let's definitely connect some time during SMT. I'd be interested in hearing how you and Daniel are incorporating this at UD. I know you guys have a pretty robust program evaluation system at Delaware, so I would be especially curious about how you're using this data to measure student and program outcomes (if you are).

      Devin Chaloux

      Indiana University

    • Devin:

      The rubrics I described are ones we use in the music department. We haven't used them to collect analytics. You're right that we have a strong online and continuing education program at UMass Lowell, but I don't know if they use anything similar. Music doesn't offer any online courses, so I don't have any window into what they're up to.

      Best,

      Garrett