## Faculty Spotlight: Grading With Mastery-Based Testing at Mines

*This resource, created by Mines educators Becky Swanson and Aram Bingham, offers a comprehensive overview of their implementation of “mastery-based testing,” an alternative approach to grading that centers growth, reduces stress, and emphasizes attainment of concrete learning outcomes. This work was recently published in the International Linear Algebra Society bulletin (Fall 2023), with more forthcoming!*

**Why We Implemented Mastery-Based Testing**

#### Becky Swanson

###### Teaching Professor, Applied Mathematics & Statistics

##### Grades don’t reflect learning for many students.

In the years I have been teaching mathematics, one of my biggest frustrations was seeing students who would begin the course with an unsuccessful assessment, subsequently work hard and eventually learn the material, but end the course with a C or a B because of that first exam score. While an “A” often felt to me at least somewhat representative of learning, I found it challenging to convey what the other grades meant. Was a student who received a B learning at least 80% of the material well? Or none of it perfectly, but was getting by with partial credit? I attempted to reduce the stakes of exams over the years, but there were still students who eventually “got it” whose learning wasn’t accurately reflected in their final grade.

##### Partial credit doesn't motivate students to grow.

I didn’t always think this! In fact, in my earlier years as a faculty member, I felt I was doing students a favor by giving partial credit – they would at least get credit for some of their learning, even if they didn’t do everything correctly. Additionally, I was providing them feedback on what they needed to work on. The reality was that, without reassessment opportunities, most of my students were not reviewing their work or fixing their mistakes. The learning seemed to stop with that assessment, and there was little to no external motivation to review their exams.

##### Grades get in the way of learning.

Grades are the currency of higher education, and they cause many of our students stress and anxiety. While I accepted this for many years, I was also frustrated by the focus on grades over learning. Office hour conversations would begin, “What do I need to do on the next exam to make sure I get an A?” I would much rather have conversations that focus primarily on mathematics.

#### Aram Bingham

###### Postdoctoral Fellow, Applied Mathematics & Statistics

##### Grades communicate the wrong values to students.

In my teaching, I’ve always been animated by a desire to see *all* students succeed as they strive to learn. But students often don’t get this message from feedback in the form of deducted points. Too often, what they understand is a measure of their deficiencies, and a fraught way to compare themselves to their classmates. A summative score seems to indicate that the instructor’s role in assessing student work is to judge and find flaws rather than support growth. It has been heartbreaking to see students give up their learning goals based on disappointing grades, and I’ve been at odds with the handling of other human beings in this way since I began grading as a graduate student.

##### Grading is subjective.

As an instructor, initially at least, I thought I could do better within the traditional grading system. I was always a slow grader, agonizing over the fairness of half-points here or there, closely considering student work and what I wanted to communicate. I was trained to make clear rubrics for consistency and clarity, but the soft messaging I received from peers and seniors was to not invest too much time in grading. If you have enough students, experience (and the law of truly large numbers) has shown that even the best-laid rubrics will be defied in spectacular, inconceivable fashion. Combined with the realization that different people using the same rubric are often pretty far apart in judgment, it began to feel to me that traditional grading is more about what some folks call “objectivity theater” than actual fairness.

##### Grades are unactionable feedback.

I’ve long tried to provide comments and written feedback to students on their work as a way to let them know they and their efforts have been seen. But in traditionally graded settings, it was never clear to me whether my efforts had any effect since students rarely had opportunities to respond. Few students will work to fix past mistakes in understanding if course structure offers no incentives. My conversations with students after exams seemed to focus more on cynical litigation of points earned rather than understanding or course content. Instructors can’t be justified in lamenting the state of student preparation when we have trained students to play this game of minimizing effort for maximum points; and students should be suspicious of a system of (increasingly expensive) education that tells them that learning comes through making mistakes, while failing to provide them opportunities to do so.

**The need for alternative approaches to grading**

One of the fundamental challenges for educators in a system that requires assigning grades is to provide grades that truly reflect the learning and achievements of students. Grades, by their nature, are a lossy compression of fact, which makes them subject to much well-deserved criticism and apprehension. It is common for students to bemoan that an exam score does not reflect their actual understanding, much as instructors often fret that students who have passed a prerequisite course are still woefully underprepared. Alternative grading systems have emerged from the desire of instructors to align their grading practice with their students’ interest in learning, and from a recognition that inherited practices weren’t always designed with this in mind.

**Mastery-Based Testing**

If a traditional grading scheme is about stratifying students, and creating competition and artificial scarcity through the currency of points, an alternative grading system ought to promote healthy learning processes for all students. Many of these systems have been influenced by the *mastery learning* approach articulated by Bloom, which asserts the abundance of student achievement as an organizing principle for educational design. An essential feature of these systems is the creation and maintenance of **feedback loops** between students and instructors in order to support the learning process. Student attempts to meet specified objectives are not penalized; instead, growth and progress through engagement with these feedback loops are rewarded.

**It can be overwhelming knowing where to start as an instructor.** After surveying the practices of other colleagues in mathematics, we decided to use a system called **mastery-based testing** (MBT) for our first foray into alternative grading. This methodology, also commonly called “standards-based testing,” retains the essential features of the mastery learning philosophy while also being adaptable to existing mathematics course designs, which are often oriented around a few high-stakes exams.

In MBT, students are given a list of objectives/outcomes which correspond to problem-types that the course should prepare a student to be able to do. Over the course of the semester, the **students are given multiple opportunities to demonstrate “mastery”** over these learning objectives in a proctored, individual test setting, and **they receive targeted feedback** for each attempt which does not meet the criteria for mastery. Once a student has mastered a given outcome, they no longer need to attempt the outcome on future testing days.

At the end of the semester, what would be the “Exams” portion of a student’s grade is replaced by a simple “Mastery Testing” grade, calculated as the number of outcomes for which the student demonstrated mastery divided by the total number of course outcomes. Thus, **the grade for each individual outcome is effectively pass/no credit,** though other components of the course may retain traditionally-graded elements.
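As a quick illustration, the “Mastery Testing” computation can be sketched in a few lines of Python. The outcome names and the student record below are made up for the example; real data would come from the instructor’s gradebook.

```python
# Minimal sketch of the Mastery Testing grade: outcomes mastered / total outcomes.
# Outcome names and the mastery record here are illustrative, not real course data.

def mastery_grade(mastered: set[str], all_outcomes: list[str]) -> float:
    """Fraction of course outcomes for which mastery was demonstrated."""
    return len(mastered & set(all_outcomes)) / len(all_outcomes)

outcomes = [f"Outcome {i}" for i in range(1, 26)]   # a 25-outcome course, as in ours
record = {f"Outcome {i}" for i in range(1, 24)}     # this student mastered 23 of them

print(round(mastery_grade(record, outcomes), 2))    # 23/25 = 0.92
```

Because each outcome is pass/no credit, nothing more than a set of mastered outcome names per student is needed to compute this portion of the grade.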

**Benefits of Mastery-Based Testing:**

##### Easy to communicate to students (and others)

One of the challenges any instructor may face when introducing innovative pedagogy into their classroom is getting buy-in from students, not to mention potentially needing to explain oneself to colleagues and administrators. With MBT, an instructor can choose to leave in place all (or part) of the components of their course other than the exam regime. The only necessary change to classroom dynamics is perhaps a small increase in the time allocated to testing, in order to allow for additional attempts. MBT thus reduces the risk of cognitive overload and the attendant anxiety or resistance that students may feel when being asked to learn a new grading system.

##### Easy for instructors to implement

Scarcity is a genuine concern when it comes to instructor time. We frequently chat with colleagues who say something along the lines of, “This alternative grading stuff sounds great, but when am I going to be able to set any of it up?” A related concern is, “What do I do when my alternatively graded course goes off the rails and I don’t know how to get it back on track?” It takes a lot of time to think through all the possible failure scenarios and then arrange guardrails and insurance plans. MBT limits this upfront cost by restricting attention to (partial) redevelopment of the exam regime.

##### Aligns grades with learning

Because MBT removes partial credit and instead incentivizes eventual mastery, we can actually hold our students to higher standards and clearly communicate to them when they have met those standards. In addition, the grade more directly reflects their learning. An A on the Mastery Testing portion of the course means that the student correctly completed at least 23 of the 25 outcomes, while a B means that the student correctly completed at least 20 of the 25 outcomes, and so on. This approach also seems to be truer to human modes of learning in general. In most realms, we expect to make mistakes and incorporate lessons from failed attempts on our road to learning new skills and concepts. Grading systems should allow and incentivize these organic learning behaviors that will lead to greater overall understanding.
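To make the cutoff scheme concrete, here is a hedged Python sketch. Only the A (at least 23 of 25) and B (at least 20 of 25) cutoffs come from the description above; the C and D cutoffs are placeholder assumptions that simply continue the same three-outcome spacing.

```python
# Sketch of letter grades from mastered-outcome counts in a 25-outcome course.
# A and B cutoffs are from the course description; C and D are illustrative guesses.

def letter_grade(mastered_count: int) -> str:
    cutoffs = [(23, "A"), (20, "B"), (17, "C"), (14, "D")]  # C/D assumed, not actual
    for minimum, letter in cutoffs:
        if mastered_count >= minimum:
            return letter
    return "F"

print(letter_grade(23))  # A
print(letter_grade(20))  # B
```

An instructor adapting this would substitute their own cutoffs, and could also weight certain outcomes or shrink the denominator, as discussed under “highly tweakable” below.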

##### Reduces test anxiety

Students are under a lot of pressure to succeed. There are many sources of this pressure which are beyond an instructor’s control, but we know that stress is not conducive to either learning or performance, so our course designs should seek to mitigate stressors. If there need to be exams in our educational future, MBT at least eliminates the negative consequences of an initial failure, which allows student-teacher interactions to center on learning and course content. This holds obvious appeal for students, helping with buy-in, and also benefits instructors by sidelining what tend to be less fun conversations about grades.

##### Highly tweakable

This can be said of almost any grading system, but we would mention a few ways in which we have seen folks modify mastery-based testing. If an instructor is concerned about students forgetting important ideas after showing mastery once, they can designate certain outcomes as “core” which need to be “recertified” one or more times. Instructors may give intermediate “revisable” marks for submissions that come close to mastery and which allow students to revise and resubmit outside of a test setting (more on this below). Finally, instructors might toy with the weighting of certain outcomes, or divide by something smaller than the total number of outcomes when computing grades to give students some choice of topics on which to focus.

**How We Designed Our Course Using Mastery-Based Testing**

**Planning**

Trying new things is always scary the first time, and we recommend finding good company to conspire with as you approach your alternative grading implementation. *Consider joining our Alternative Grading & Equitable Assessment Learning Community!* Upon discovering our coinciding interests in alternative grading in Fall 2022, we decided that Becky’s Linear Algebra courses in Spring 2023 would be a good place to start actually trying these ideas out. Throughout the process we explored a number of useful resources, some of which we’ve listed here. Below, we go through the planning steps that came once we had settled on using mastery-based testing as our model.

#### Resources That Helped Us

- **“Grading for Growth”** — a blog on alternative grading by Robert Talbert & David Clark
- **PRIMUS Special Issue on Implementing Mastery Grading in the Undergraduate Mathematics Classroom**
- **“An Introduction to Mastery Grading”** (webinar) — Mathematical Association of America
- **“An Implementation of Standards-Based Grading in a Large Linear Algebra Class”** (webinar) — Rose Morris-Wright
- **“Mastery-Based Grading in Higher Education”** (webinar) — Silvia Heubach & Sharona Krinsky
- **“Ungrading As Resistance”** (webinar) — Spencer Bagley
- **“Alternative Grading Materials”** (Google folder curated by Rachel Weir)

**Developing Mastery Outcomes**

The development of **course-level learning outcomes** is considered to be good educational practice regardless of the grading system one chooses to use. These outcomes may already exist in some form for the course in which you hope to implement an alternative grading system, though they may require some reformulation. Generally, each outcome for a mastery-based testing course should correspond to a type of problem that a student might see on an exam. In math courses, an objective might concern the interpretation and application of a particular theorem or computational procedure, or the ability to select and use an appropriate method for solving a given problem. Other practitioners of MBT have recommended **15-20 outcomes** as a good number to shoot for when setting up the course. **We ended up with a list of 25** for Linear Algebra that was still workable, but going much further would probably complicate and expand the testing regime more than necessary, reducing the amount of time available for covering content and making everyone less happy. Here are some sample outcomes from the list distributed to all students at the beginning of the semester. Each outcome corresponds to a test problem, though some of these outcomes clearly suggest multi-part problems and are assessed as such. In this case, all parts of the problem need to be completely correct in order for the student to earn the mastery mark for that outcome.

**Sample Outcomes** *(each outcome was shown alongside a paired assessment question)*

**Outcome 5:** Determine whether a transformation is linear, one-to-one, and/or onto, as well as the domain, codomain, and any images of the transformation. When the transformation is linear, find its standard matrix.

**Outcome 8:** Apply the Invertible Matrix Theorem (including determinants) to determine properties of a matrix or system.

**Marks**

MBT hews pretty close to the grading philosophy present in, for instance, Linda Nilson’s *Specifications Grading*. This means that when it comes to talking about a student’s achievement of a specific course goal, there are essentially two states of the world: **either they have met the criteria, or there is still work to do**. Here are the marks we used:

**M: **Mastery

This is earned by completing the problem correctly and communicating your work fluently by using proper terminology and notation.

**MC**: Communication Error

This is earned by completing the problem correctly, but the work contains terminology or notation errors. Any problem with this designation may be corrected and resubmitted outside of assessment days to receive an M. *(Other MBT users have included a similar category called “revisable.”)*

**P**: Progressing

This is earned by submitting a problem that contains errors. Except for the final exam, you will have at least one more attempt to master a problem in this category.

### Why Not Pass/Fail?

In harsher tones, you might think the grading of each outcome is “pass/fail” (sometimes softened as “pass/no credit”), but there are a few caveats here. First, the terms “fail” and “no credit” convey a finality of judgment that misconstrues a mark meant to indicate that we are still expecting eventual mastery from our students, even if they aren’t there yet. Something like “progressing” or “not yet” suits this purpose better. On the other hand, the high standard to which we hold students means that when their attempt is successful, they have not merely passed a hurdle. They have demonstrated skill and sophistication worthy of something more exalted. There are some issues with use of the term “mastered” in this educational context, not least of which is that it conflates success in learning with, metaphorically, the power dynamic of domination. But it is ingrained in the education literature and may not be worth contesting in all settings. In any case, it’s what we went with to communicate that students had demonstrated the level of fluency and ease with a topic that we aspire for them.

**Timing**

Our learning outcomes fit into three larger units, a division held over from earlier editions of the course. In our implementation of MBT, we kept exams scheduled at the end of each unit, but added additional assessments throughout the semester. The number of attempts it took students to master each outcome varied, but generally, we expected most students to have mastered at least 60% of the outcomes in a given unit after the corresponding exam day. We provided nine assessment opportunities throughout the semester: a mix of “quizzes,” which were shorter assessments, and exams, which were longer ones. The table below shows the timing and time allotted to each of these assessments, as well as the number of attempts provided for each outcome.

**By restricting reassessment to a collection of fixed dates, the risk of grading overload for the instructor is reduced.** The annals of alternative grading literature are littered with stories of zealous instructors who started with a system in which arbitrary reattempts were allowed at the student’s convenience, and then got buried. Don’t end up like them.

The “retest day” in week 15 functions like a second quiz for Unit 3, but also as a catch-all opportunity for students to test on material from earlier in the semester ahead of the final. Offering this opportunity promotes the stress-reducing aspect of MBT by giving students greater clarity on their grade ahead of finals. Some students may find themselves in good enough standing after the retesting day that taking the final exam is unnecessary, so they can focus on other courses. Otherwise, many will only need to study a limited amount of course material in preparation for the final exam. Some instructors may take exception to the notion that students do not have to study all course content for the final assessment, viewing the process of a cumulative review as an integrative and integral part of course completion. In response to such objections, we point back to the *high tweakability* benefit mentioned above. In Linear Algebra, because of the way course material builds upon itself, we did not deem it essential to ask students to retest on early-semester ideas and procedures that were consistently being reused in later-semester applications. But other courses could handle the final exam differently, requiring recertification of certain topics.

**How We Enacted Mastery-Based Testing**

*Building Buy-In and Trust With Students*

The benefits of MBT make it an easy sell, as students will recognize that it is a system that centers their growth and learning and is more accommodating of diverse ways of learning than the system we have inherited. But they will only be able to fully take advantage of MBT if they have a clear picture of the mechanics and of why certain implementation choices were made. Here are a few strategies we used to get student buy-in, as well as to build and maintain trust with students:

- A day one discussion where we emphasize the benefits of and reasons for what we’re doing.
- Maintaining trust with our students by following through with our promises and continuing to converse about what we’re doing in the course.
- Reassuring students who are used to the currency of grades and therefore may be anxious about this new system.
- Acclimating students with exercises that encourage them to familiarize themselves with course details through other low-stakes assignments, like daily pre-class question sets, over the first several weeks.

**Day 1 Slides**

During the first day of class, I encouraged students to think critically about how they learn, what they wanted to learn, and what grades mean. Scroll through some of these slides below.

*Managing Testing Logistics*

The testing periods work just like any usual proctored test setting for students. Tests are distributed, students work only on the problems they need to work on, and then turn in their paper either when they’re through or time is up. Students are responsible for tracking which outcomes they need to work on.

We chose to use Gradescope because we found that it makes grading more efficient and accurate. This meant that we needed to scan all student assessments and upload them to Gradescope. In order to simplify the scanning process, longer assessments were divided into multiple single-page assessment templates.

*Grading*

One of the promises of systems that use pass/not-yet style grade categories is that the grading will be more pleasant and efficient. Grading in this system is quicker and easier: it’s either correct, or we’re giving feedback on the mistakes. Instead of agonizing over partial credit decisions, we are merely deciding whether someone did the problem correctly. Because students have the opportunity to reattempt for full credit, it is also reasonable to expect them to actually read the guidance and feedback an instructor leaves, as it has a direct bearing on their future attempts. While we wouldn’t claim that the amount of time spent grading is always less, we agree with those who jumped on the alt-grading wagon before us that it is time better spent.

*Working with an LMS*

One of the difficulties of doing alternative grading in a learning management system (LMS) like Canvas is that these LMSs tend to be points-obsessed. At the time of writing, the same holds for grading software platforms like Gradescope, on which we also depend heavily. However, the gradebook in Canvas is still useful for students to track their progress and performance on both mastery-testing and other assessed coursework.

Each outcome is presented in Canvas as a separate assignment worth 1 point. After the grades are exported from Gradescope into Canvas, one can obfuscate the numerical score and show students only the category label M/MC/P by creating a *grading scheme* that assigns each label to the corresponding point range. However, Canvas will still attempt to compute a final grade for each student from the underlying point values by default. We found it preferable to turn this option off entirely, so students would not be misled by artificial averages during the semester. We provided students with this downloadable Google sheet so that they could track their progress throughout the semester.
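For readers who would rather post-process an exported score file than rely on a Canvas grading scheme, the point-range-to-label mapping can be sketched as below. The specific point values (1.0 for M, 0.5 for MC, 0 for P) are an assumption for illustration, not the course’s actual configuration.

```python
# Sketch of mapping a numeric score on a 1-point outcome assignment to a mark label.
# The 1.0 / 0.5 / 0.0 encoding is a hypothetical convention, not Canvas's default.

def mark_label(score: float) -> str:
    if score >= 1.0:
        return "M"   # Mastery: complete and fluently communicated
    if score >= 0.5:
        return "MC"  # Communication error: revisable outside of assessment days
    return "P"       # Progressing: reattempt on a future testing day

for score in (1.0, 0.5, 0.0):
    print(score, mark_label(score))
```

This mirrors what a Canvas grading scheme does internally: each label is attached to a point range, and the numeric value never needs to be shown to the student.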

# How It Went: **Our Experience**

### Our experience and the student response to our first foray into alternative grading were **overwhelmingly positive**, albeit with a few challenges. In fact, returning to a traditional grading system would be hard for us! These were some of the **positive things we noticed** throughout the term:

*“I get to focus on the material itself rather than my grades. There has not been a point in the semester where I have thought about my grade which is very relieving.”*

**Students were excited**

After the first day of class, students stopped by to express both interest in and excitement for this new grading scheme. Conversations continued throughout the semester. One student asked whether all of the math classes would be switching to this system, because this was his first time having fun in a math class. He said he was able to focus on learning and wanted to take more math classes if this was the case.

*“The system really, really helped my testing anxiety and allowed me to just enjoy learning and not freak out if I wasn’t able to get something right away. It motivated me to pursue feedback for my work and try to do better.”*

**Conversations were better**

After the first major assessment in a course, many instructors lament having conversations with students about failing grades and what such exam grades will mean for the students’ final grades. Because students weren’t penalized for failing on the first attempt, and because of the clear alignment between assessments and outcomes, discussions in office hours focused on the course content. Students knew the outcomes with which they were struggling and came to office hours with clearer questions about what they needed to learn. Instead of talking about grades, we were talking about Linear Algebra!

**We knew where students were struggling**

During the semester we kept track not only of which outcomes each student had mastered, but also, how many students had mastered each outcome after each assessment. This meant it was very easy to determine which outcomes were most challenging to the students, as well as which students had an exam score of under 60 percent at any given point in the semester, as illustrated in this figure.
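The bookkeeping described above can be sketched in a few lines of Python. The gradebook below is made-up illustrative data (three students, five outcomes), not actual student records; the 60% threshold is the one mentioned in the text.

```python
# Sketch of two tallies: how many students have mastered each outcome,
# and which students are below a 60% mastery rate. Data is illustrative only.
from collections import Counter

gradebook = {
    "student_a": {"Outcome 1", "Outcome 2", "Outcome 3"},
    "student_b": {"Outcome 1"},
    "student_c": {"Outcome 1", "Outcome 2"},
}
total_outcomes = 5  # toy value; the actual course used 25

# How many students have mastered each outcome so far?
per_outcome = Counter(o for mastered in gradebook.values() for o in mastered)

# Which students are below a 60% mastery rate at this point in the semester?
struggling = [s for s, m in gradebook.items() if len(m) / total_outcomes < 0.6]

print(per_outcome.most_common())
print(sorted(struggling))
```

Keeping the data in this shape makes both questions (hard outcomes, struggling students) one-liners, which is what made the mid-semester monitoring so easy.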

**The writing was better**

Partial credit without revision doesn’t incentivize students to learn from their mistakes. The MC grades in this course forced students to fix terminology and notation errors early, leading them to make such mistakes less often on future problems.

**Grading occurs more often, but is much easier**

Many faculty who hear about alternative grading systems are rightfully concerned about their own workload increasing. Given that there are more assessment opportunities as well as revisions, this was a concern we also initially shared. There are two features of MBT that end up making this a comparable time commitment: 1) With this system, you are either marking something as an M, MC, or P. There are no decisions to make about the point value of mistakes. While feedback is provided, Gradescope makes it very manageable. 2) Once a student receives an M, they are no longer attempting the problem. Overall, the workload felt similar to a typical semester, but distributed differently across the semester.

**Students seemed less anxious**

Other than hearing directly from students throughout the semester that they felt much less stressed, we also noticed that our exam periods often ended with only a few students left in the room. To us, this indicated that students had sufficient time to work and were not experiencing the stress that often comes with high-stakes, time-constrained exams.

### Of course, no system is perfect. These were some of the **challenges we faced** as we planned and implemented MBT.

##### Trying new things is scary

One of the biggest challenges for Becky, as the one of us who had taught the course many times before and would be implementing the new system first, was probably best described as the “fear of the unknown.” Working out the details seemed daunting, and there were times during the semester when she wondered whether the students were succeeding. This was, luckily, balanced out by times when everything seemed exciting and great. This is why we found working together so beneficial, and why we highly recommend it.

##### There was more problem-writing

Because we offered more testing opportunities, that also meant that more assessments had to be written. This could certainly be a challenge in courses in which there are only a small number of problems one can pose about a particular topic. This was not a particular burden this semester, as Becky had taught the course a number of times and felt comfortable drafting questions. Having clear outcomes or learning targets makes this easier, too!

##### Some students were confused about the "MC" mark

The “MC” designation was intended for students who essentially did a problem correctly but made either a terminology or notation error. While this was discussed at the beginning of the course, after each assessment there were at least a few students who felt that the error they made should have been given an MC. This often arose with a multi-part problem in which they correctly completed most of the parts but still demonstrated a major mathematical error on another part. This complaint was probably a proxy for the frustration some students experienced with the lack of partial credit.

##### There was more scanning

Because we used Gradescope, each quiz and exam needed to be scanned and uploaded to the platform. In a typical semester, this might happen a few times, but with MBT it happened much more often. What makes scanning challenging is that stapled assessments must have their staples removed. To mitigate this, we used single-page assessments for quizzes, and sometimes divided exams into multiple single-page parts for reassessments.

##### Sometimes an "MC" was given when it shouldn’t have been

A handful of times, a student received an MC for work that looked like it was mostly complete but had some sort of terminology error, and the terminology error was masking a graver misunderstanding of the concept. This was discovered when a student would either try to correct it (incorrectly) or come to office hours to discuss the problem. Luckily, these seemed to be rare occurrences.

##### There were some LMS challenges

Because the outcomes weren’t associated with individual assessments in Canvas, the quizzes and exams weren’t listed as individual assignments in the LMS. Many students rely on reminders coming directly from Canvas to remember approaching deadlines. While the dates were listed in the course calendar, reminders were given in class, and students were sent announcements for the first few assessments, some students forgot about later quizzes and either missed them or came unprepared. This term, we created placeholder assignments for each quiz and exam so that students could receive reminders (or check their calendars) regarding upcoming assessments.

##### We Needed to Be More Proactive About Testing Accommodations

In arranging the details of our MBT implementation, we wanted to be mindful of how our testing system would impact students with testing accommodations. First, we would need to be sure that these students would be able to take all reattempts in the testing settings provided for by their individual accommodations. Because our campus testing center has finite capacity, the number of testing occasions made available to all students would have to be limited. The nine testing dates throughout the semester seemed a reasonable compromise between allowing sufficient chances and not falling into the testing/grading overload trap.

Second, because the quizzes came partway through a class period, students that wished to make use of their testing accommodations would have to get up and excuse themselves from the classroom at quiz time. This is not an issue unique to MBT, but we are concerned with the stigma a student may feel at needing to conspicuously exit a classroom for each quiz. We also don’t know that we have a perfect solution here – for many students, this has seemed to pose no issue, but one student did bring it up as a source of discomfort in course surveys. Communicating about these testing dynamics with students with accommodations early on in the course and working with them to find arrangements that don’t provoke alienation should be a priority.

# How It Went: **Students’ Experience**

###### We have a variety of data sources showing that **this was a positive experience for our students**:

**Pre- and Post-Course Surveys**

We worked with Megan Sanders from the Trefny Center to develop a pre- and post-course survey that contained both open-ended and Likert-scale questions asking students to reflect upon both this course and other courses. Open-ended responses were coded by the two of us individually and later reconciled.

**Mid-Semester Survey**

Students weren’t specifically asked to discuss MBT on their mid-semester course survey, but a number of them commented on it anyway. While data from this survey are not presented below, the survey gave us valuable insight into how students were feeling during the term.

**Grade and Outcome Data**

We stored grade data from throughout the semester and can compare it to results from previous semesters. We can also determine student success rates at different points in the semester, as well as information about individual outcomes.

### What were the **benefits** of the MBT system used in this course?

**THE SYSTEM FELT FAIR.**

Teaching evaluation scores were high across the board. The students overwhelmingly (88%) felt the system was fair (see above).

**The system both helped students learn and deepened their understanding.**

The chart above shows that an overwhelming majority of students agreed on their post-course survey with the statement “**This mastery-based grading system deepened my understanding of the course content**,” while only 6% of students disagreed. A similar majority agreed with the statement “**This mastery-based grading system helped me learn the material**,” again with only 6% disagreeing.

*“It definitely forced me to learn each concept completely instead of being able to do just part of a specific type of problem.”*

*“It allows me to feel like I can prioritize having a good understanding of material rather than trying to memorize things right before a test which often feels gimmicky and doesn’t really translate to being comfortable with the material in the future.”*

**Grades better reflected learning.**

When asked whether they agreed with the statement “My grades in other math classes accurately reflect what I’ve learned,” students gave comparable responses on the pre- and post-course surveys, with about 45% agreeing on each. For Linear Algebra, that number jumps to a striking 84%. In our open-ended question about grades on the post-course survey, **56% of students made comments about their grade measuring understanding or learning in this course**, compared to only 39% making such comments about courses in general. Because the grade was so strongly tied to demonstrated learning, students clearly observed the connection. As one student said, “You have to really know the material to get a good grade.”

*“The grade represents my success in this course, and my ACTUAL understanding, instead of the too common ‘hopefully put right answer on exam and move on.’”*

*“I felt like my grade was actually able to reflect what I had learned, even if it was not at the first exam. When I didn’t get a concept the first go around, it would still matter if I learned it by the next few weeks. This system felt less arbitrary.”*

**Students benefited from having clear expectations.**

Because exam questions were tied directly to learning outcomes, students knew exactly what was expected of them. As one student said, *“While other courses may state outcomes, this system makes them matter more and it is clear why they exist.”* This seemed to benefit students most in their study habits: on the post-course survey, 46% of students commented in an open-ended question about study habits that their studying was more targeted or focused.

**The system was truer to the learning process.**

Students commented regularly on how failure opportunities, or chances for improvement and growth, were part of the learning process. In fact, when asked about the benefits of the MBT system, **45% of students made comments about how the system provided them with low-stakes opportunities to make mistakes and then learn and grow from those mistakes**. The final quote highlighted here illustrates another way the system reflects the learning process: students mentioned how they could individually focus on what they learned and when. As another student said, *“I felt like I wasn’t punished if I didn’t understand something fast enough.”*

**The system encouraged good study habits.**

**Note**: This chart represents a thematic analysis of responses to an open-ended question.

In addition to benefiting from clear expectations, students had to interact with course content more frequently because of the regular assessments. From a retention standpoint, seeing more students studying more regularly was a positive for us. Some of the students agreed:

**Students were less stressed/anxious.**

In addition to being more comfortable with making mistakes on exams, students also felt less anxious taking exams in Linear Algebra than in other math courses. Part of our motivation for implementing this system was our belief that basing a grade on a few timed assessments wasn’t an accurate measure of student learning. Since there are multiple chances to achieve each outcome, individual quizzes and even exams aren’t as worrying as they would be under a traditional grading scheme. This reduction in stress has already been documented in other studies, and we were happy to see it both experienced and appreciated by our students.

## What were the **drawbacks** of the MBT system used in this course?

**Some students were concerned about forgetting material after mastering it, or about not learning it as well.**

Given that students didn’t have to reattempt an outcome once it was mastered, this was a reasonable concern: 13% of students on the pre-course survey and 15% on the post-course survey mentioned concerns about the lack of reinforcement in an open-ended question about drawbacks. One student commented, “It could make the goal for mastering to get it once and forget it rather than to need that outcome again and again later in the course.” It was interesting, however, to note that while 26% of students expressed concerns about forgetting content on the pre-course survey, only 15% did so on the post-course survey. While it is true that students only needed to demonstrate each outcome once in the course, most courses assess outcomes on exams only a small number of times anyway, often just once or twice. Additionally, Linear Algebra content builds on itself, so students had to demonstrate expertise with earlier material to succeed with later outcomes, as seen in the following quotes:

**Some students were unhappy about a lack of partial credit.**

### Students who mentioned a “lack of partial credit” as a drawback of the MBT system (pre- vs. post-course survey)
While only 10% of students mentioned this concern in an open-ended pre-course survey question about drawbacks of the grading system, that number jumped to 22% on the post-course survey. Given that students are accustomed to a system that often awards partial credit, this concern is both expected and valid. We want to ensure our students truly learn the outcomes, however, and “partially learning” an outcome doesn’t equate to learning it. Although students didn’t say they “liked” the absence of partial credit, they were still overwhelmingly positive about the grading system and were often okay with having no partial credit.

**STUDENT EFFORT ISN’T NECESSARILY REFLECTED IN THE GRADE.**

Student concern for a lack of partial credit is connected to the idea some students have about the relationship between grade and effort. While our goal in this system is to connect grades to learning, many students believe effort should play a large role in their grades.

*“In my opinion, a grade at the end of a course reflects the time and effort I spent learning material. The way classes usually structure it though is that your grade represents how well you can take a test or memorize something from this material. The grade for this class represents complete understanding. Unless you show absolute understanding, any progress or better understanding than at the start is difficult to recognize in a final grade. I worry my grade in this class won’t reflect just how much time I’ve put into learning this and how much I’ve grown.”*

**PROCRASTINATION IS STILL AN ISSUE FOR SOME STUDENTS.**

Because students had multiple assessment attempts for each outcome, some of them put off studying until the end of the semester. With any alternative grading system, instructors must be aware that their course competes for student attention. Students may not always responsibly manage the additional flexibility MBT offers when faced with inflexible deadlines from other courses.

*“I have put off studying as it feels less ‘final’ with the quizzes and exams, and pushed a lot of it to the end of the semester.”*

*“It felt easier to fall behind as it was more forgiving.”*

This can be managed by keeping track of which students haven’t mastered as many outcomes as their peers at various points of the semester and reaching out to them. Additionally, many students do figure out how to work effectively in this system after a period of adjustment:

*“It is also easier to slack off studying for this class overall, so I’ve learned a lot of self-control with my studying and doing it in a more timely manner than before.”*

**Stress didn’t disappear entirely.**

In our post-course survey, 16% of students mentioned as a drawback the way work can “pile up” at the end of the course. While procrastination was a challenge, it also simply took some students longer than their peers to master certain outcomes, which could cause distress, especially as the course came to an end.

*“If you’re behind having like 13/18 mastery things left to do is daunting.”*

*“If you don’t quickly get outcomes at the beginning of the course, it becomes really easy to feel overwhelmed and like most of your grade is in the air.”*

*“Now that we are closer to the end, the anxiety going into tests is much greater because the stakes are much higher for the last few outcomes.”*

We don’t have control over the length of the course; it must have an end date, and any deadline will bring anxiety for some students. While the stress students experienced in this course still seems lower than in traditional courses, it could be further mitigated by not testing content introduced in the last couple of weeks of the course.

**Our Takeaways / Frequently Asked Questions**

**TRY IT OUT**

It seems like higher education is in a moment right now, partly Covid-induced, where we know we want to move past last century’s dominant teaching practices into something new. The new thing probably won’t be one-size-fits-all, but MBT can be a stepping stone for those starting this journey. Don’t be afraid to jump in! Start with something that feels doable, and continue to iterate on and improve your system. You don’t need to entirely revamp a course to start experimenting with alternative grading; you could begin with a single assignment and build from there. For ideas on ways to get going with alternative assessment, we recommend this post on “small alternative grading.”

**COLLABORATE**

It’s more fun to work with someone on figuring out the details. As we mentioned above, there were many ups and downs before and during the first semester of implementation. Having two heads to brainstorm, troubleshoot, and celebrate the victories made it much more enjoyable and manageable. On a similar note, engage your colleagues! Conversations with folks in our department and across campus, many facilitated by Trefny Center events, have helped us define and refine our vision of what we want out of our grading system. This resource in particular sprang from close collaboration with Carter Moulton at the Trefny Center. We hope that these discussions can also shift the culture around grading toward something that yields a more positive learning experience for our students.

**COMMUNICATE**

In addition to building trust with our students, it is important to communicate plans to try novel pedagogies to colleagues both within and outside one’s department. Because MBT minimally disrupts traditional course modalities, we felt it was easy to make the case for trying it. Our experience has been the opposite of what one might fear: rather than needing to tiptoe around and convince skeptical department heads and higher-ups, conversations have revealed lots of curiosity and interest among faculty across campus. We have learned about numerous other instructors trying out alternative assessment ideas large and small, and have been met with concerns not about whether it would work but about whether it would *scale*.

**Still unsure?**

##### Browse some of the most frequently asked questions we’ve heard from folks around campus!

##### Isn’t this a lot of work to get started?

Any time we try something new in our classes there is an associated start-up cost. In this case, determining the outcomes, planning the schedule, and working with our university’s learning management system took careful consideration. However, now that we’ve done it, the workload in implementing MBT a second semester is comparable to that of previous semesters. Our hope is that by sharing our process, data, and resources, you don’t feel like you need to start from scratch!

##### Won’t this mean I have more grading to do?

We found that the grading load was comparable to that of a traditional course, but distributed across the semester. Grading goes much more quickly when you aren’t deciding on point values. And when the goal is giving feedback that students will actually use to improve their learning, we found that grading is more fun.

##### Will I be writing more quiz/exam problems?

The short answer is yes. However, Linear Algebra lends itself well to an MBT system: for most of the outcomes, we could quickly and easily find many variations from which to create suitable problems.

##### My students are used to a traditional grading scheme. Won’t a new system be confusing or stressful for students?

In our experience, no. On our pre-course survey, some students expressed anxiety, but by the end of the semester they reported feeling much less anxious than they did in their other classes. Among alternative grading systems, this is one of the simplest: the exam score is defined as the number of outcomes mastered divided by the total number of outcomes.
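To make the arithmetic concrete, here is a sketch of that score computation (the 18-outcome total matches the course described in this article; the particular tally of 16 mastered outcomes is a hypothetical example):

```latex
\text{exam score} \;=\; \frac{\text{outcomes mastered}}{\text{total outcomes}},
\qquad \text{e.g.}\quad \frac{16}{18} \approx 0.89 = 89\%.
```

Because mastered outcomes stay mastered, this score can only go up as the semester progresses, which is part of what makes the scheme easy to explain to students.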

##### My students are used to getting partial credit. Won’t they be unhappy that there is no partial credit?

Some students were: 22% responded to a question about drawbacks of the system by referencing the lack of partial credit. This is understandable, given that many of our students were accustomed to receiving partial credit for their work in a math class. However, we find that when students do not have the opportunity or incentive to learn from or fix their mistakes, they don’t. In this system, it was to their benefit to correct their understanding so that they would be successful on a subsequent attempt. As one student said:

*“[A drawback was] the all or nothingness of it. I live on partial credit and the lack of it in this class was hard to get used to. With that said, because I had to get everything right, I feel like I had a better overall understanding of the content.”*

##### If students don’t have to redo outcomes they’ve mastered, won’t they forget how to do them?

Linear Algebra is a course that does build upon itself, so we feel that in this setting, early ideas are reinforced throughout the course. The following student quote supports this:

*“I expected that a potential drawback of this system would be lack of review of content that was covered early in the semester, but I didn’t feel that this was a substantial problem, because of how interconnected everything was.”*

However, we can also imagine that in other settings, a “master-once-and-done” system may not be appropriate. There are a number of alterations one can make to MBT, and some of those include revisiting certain outcomes on a final.

##### Won't this cause grade inflation?

We don’t believe grades are a scarce commodity! In this system, a top grade clearly corresponds to a student being able to correctly complete almost all of the course outcomes. Grade inflation usually refers to grades rising because standards are lowered, which is not what is happening in this system. If we can support more students in truly learning, that is our idea of success.