Some of you know that I have moved on from teaching music theory, but I am still heavily involved in higher education. I am currently at Southern New Hampshire University working with their non-profit online programs. (You may have seen our commercials!) Anyway, one of the things I have picked up in my two years here so far is that the courses are designed to measure student learning outcomes. Every assignment has a rubric, and all students are measured against the same rubric. The rubrics usually include categories such as exemplary, proficient, needs improvement, and not evident, with a typical split of 100% / 90% / 50% / 0% for each section of the rubric.
What's interesting in this approach is that it allows us to collect data on how successful assignments are at reinforcing the expected outcomes of the course. If students widely struggle with an assignment, then either the content isn't being delivered effectively or the assignment is not adequately addressing that student learning outcome. Being all online, we're able to collect this data and make some really cool observations about trends, student cohorts, assignment success, etc.
It has made me think about how we evaluate and assess how students are doing in the music theory curriculum. I remember grading counterpoint exercises and assigning fractions of points for every note and error. Sometimes you would see students with "technically correct" counterpoint but nothing that really resembled stylistic writing. Has anyone moved to a model where, rather than assigning points to errors, you grade more holistically on the level of proficiency in those skills? For example, a rubric for a melody harmonization assignment could look like this:
| Learning Outcome | Exemplary (100%) | Proficient (90%) | Needs Improvement (50%) | Not Evident (0%) |
| --- | --- | --- | --- | --- |
| Voice leading (30 points) | No errors | 1 or 2 minor errors, no major errors | Many minor errors or several major errors | Many major errors or assignment not completed |
| Style (30 points) | Stylistic approach to melody harmonization | Overall still very stylistic, with only minor lapses | A few awkward progressions | Lacking understanding of stylistic harmonization or assignment not completed |
| Roman numeral analysis (20 points) | Correct RN analysis | 1 or 2 minor errors | 1 or 2 major errors or several minor errors | Many major errors or RN analysis not present |
| Functional label analysis (20 points) | Correct function labels | 1 or 2 minor errors | 1 or 2 major errors or several minor errors | Many major errors or functional analysis not present |
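To make the weighting concrete, here is a minimal sketch of how a total score could be computed from a rubric like the one above. The category names, function name, and data layout are my own invention for illustration; the percentages follow the 100% / 90% / 50% / 0% split described earlier.

```python
# Hypothetical sketch of scoring one submission against the example rubric.
# Each proficiency level maps to a fraction of the category's points.
LEVELS = {"exemplary": 1.00, "proficient": 0.90,
          "needs_improvement": 0.50, "not_evident": 0.00}

# Category weights (points) taken from the example rubric.
WEIGHTS = {"voice_leading": 30, "style": 30,
           "roman_numerals": 20, "function_labels": 20}

def rubric_score(ratings):
    """ratings maps each rubric category to a level name; returns total points."""
    return sum(WEIGHTS[cat] * LEVELS[level] for cat, level in ratings.items())

score = rubric_score({"voice_leading": "proficient",
                      "style": "exemplary",
                      "roman_numerals": "needs_improvement",
                      "function_labels": "proficient"})
print(score)  # 30*0.9 + 30*1.0 + 20*0.5 + 20*0.9 = 85.0
```

The nice side effect is that the per-category level, not just the total, is what you store, which is exactly the data you need for outcome reporting.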
This is just an example I whipped up, but it would allow for easy collection of student learning outcome data. For example, if the melody harmonization exercise were meant to test a student's ability to apply secondary dominants and they did not use any, you might score them as not meeting the style outcome (or perhaps you rewrite the rubric to reflect the need for secondary dominants). Then you can quickly observe how well the class is or isn't learning the material.
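Once the per-category levels are stored, spotting class-wide trends is a small aggregation. Here is a hypothetical sketch (invented names and sample data) that reports, for each outcome, the fraction of students who scored proficient or better:

```python
from collections import Counter

# Hypothetical sample data: each record is one student's rubric levels.
submissions = [
    {"voice_leading": "proficient", "style": "needs_improvement"},
    {"voice_leading": "exemplary",  "style": "not_evident"},
    {"voice_leading": "proficient", "style": "needs_improvement"},
]

PASSING = {"exemplary", "proficient"}  # "proficient or better"

def attainment(records):
    """Fraction of students at proficient or above, per learning outcome."""
    totals, passed = Counter(), Counter()
    for record in records:
        for outcome, level in record.items():
            totals[outcome] += 1
            passed[outcome] += level in PASSING
    return {outcome: passed[outcome] / totals[outcome] for outcome in totals}

print(attainment(submissions))
# Everyone met voice leading; no one met style -> the style outcome (or the
# assignment's coverage of it) is what needs a second look.
```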
In an age where accreditation agencies, under public pressure over the value of the degree, are pushing for better demonstrations of student learning outcomes, I imagine that we (the music theory community at large) will have to figure out how to collect this information in a meaningful way. I'm curious whether any of you have begun working on similar assessment projects and how you are approaching this issue.