Encouraging Effective Study Habits Through Low-Stakes Testing in First-Year Mathematics
For years, I shared a common frustration with colleagues: the sense that students only begin studying a week before the exam. At the time, this was more of a suspicion than a substantiated fact. However, the introduction of an online homework platform for mathematics education at TU Delft provided a new level of transparency. It allowed us to monitor student activity throughout the course, and the data confirmed our suspicions. A significant number of students indeed postponed serious engagement with the material until just before the exam.
This approach to studying is widely recognized as ineffective. Cramming may temporarily boost performance by loading material into short-term memory, but it does little to support long-term retention. It also places a heavy burden on working memory, which is inherently limited in its capacity to process and store new information. The result is often cognitive overload, reduced comprehension, and poor performance under pressure.
In contrast, research in cognitive psychology consistently highlights the benefits of spaced and interleaved practice. Spaced practice involves distributing learning over time, with intervals between sessions. This spacing allows for partial forgetting, which in turn strengthens memory through the effort of retrieval. Interleaved practice, on the other hand, involves mixing different topics or problem types within a single study session. This method promotes deeper understanding and improves the ability to transfer knowledge to new contexts.
These insights formed the foundation of my Senior University Teaching Qualification (SUTQ) project, which aimed to encourage students to adopt more effective, evidence-based study strategies.
The project was implemented in a first-year mathematics course at TU Delft, taught during the first academic quarter. As such, most participants were students encountering university-level mathematics for the first time. For several years, mathematics service education at TU Delft has made use of an online homework platform that enables students to practice topic by topic and receive tailored feedback on their responses. The platform also allows instructors to create tests composed of randomly selected, parameterised questions from a larger pool, providing a flexible and scalable assessment environment.
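To give a concrete, if simplified, picture of what such a parameterised test looks like, the sketch below draws one randomly parameterised question per topic from a small pool. Everything in it (the pool, the question templates, the parameter ranges) is hypothetical and purely illustrative; it is not the platform's actual implementation.

import random

# Hypothetical pool of parameterised question templates per topic.
# Purely illustrative; not the actual platform's data model.
QUESTION_POOL = {
    "limits": [
        "Compute the limit of (x^2 - {a}^2)/(x - {a}) as x approaches {a}.",
        "Evaluate the limit of sin({b}*x)/x as x approaches 0.",
    ],
    "derivatives": [
        "Differentiate f(x) = {a}*x^3 + {b}*x.",
        "Find the slope of f(x) = exp({a}*x) at x = 0.",
    ],
}

def generate_test(seed=None):
    """Build one test instance: one randomly chosen, randomly parameterised question per topic."""
    rng = random.Random(seed)
    test = []
    for topic, templates in QUESTION_POOL.items():
        template = rng.choice(templates)                            # random selection from the pool
        params = {"a": rng.randint(2, 9), "b": rng.randint(2, 9)}   # randomised parameters
        test.append((topic, template.format(**params)))
    return test

if __name__ == "__main__":
    for topic, question in generate_test(seed=1):
        print(f"[{topic}] {question}")

The point of the sketch is simply that randomised parameters make repeated attempts meaningful: each attempt can present a fresh instance of the same underlying question rather than an answer to be memorised.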
To guide students toward improved study habits, I designed four formative tests within this digital platform, spread out over the quarter. Each test was voluntary, available for a period of two weeks, and covered all topics introduced up to that point in the course. At the end of the quarter, students could earn a bonus on the final exam, which is graded on a scale from one to ten: half a point multiplied by their average score on the bonus tests, up to a maximum of 0.5 bonus points.
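To illustrate, assuming the average is expressed as a fraction of the maximum test score, a student who averaged 80% across the bonus tests would earn 0.8 × 0.5 = 0.4 bonus points on top of their exam grade.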
Importantly, these tests were not intended as summative assessments. Students were allowed unlimited attempts during the availability window, with their highest score counting toward the bonus. The tests could be taken at any time and from any location, without supervision. While students were informed that they could use any resources they wished, I encouraged them to simulate exam conditions as closely as possible, particularly by avoiding tools that would not be permitted during the final, pen-and-paper exam.
Despite the relatively modest incentive (a maximum of half a bonus point on the final exam), the bonus tests proved to be highly popular among students. Of the 303 students who sat the final exam, 204 participated in at least three of the four bonus tests. An additional 57 students completed one or two tests, while 42 students did not engage with the bonus tests at all. Analysis of the results revealed a strong correlation between the number of bonus tests completed and performance on the final exam. However, given the nature of the available data, it was not possible to determine whether this relationship reflects a causal effect. It may be that more motivated or better-prepared students were simply more likely to participate in the bonus tests.
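For readers who want to run a similar comparison on their own course data, the sketch below shows one possible analysis, assuming an export with one row per student and two hypothetical columns, tests_completed and exam_grade; the actual data and analysis in this project were handled separately and are not reproduced here.

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical export: one row per student, with the number of bonus tests
# completed (0-4) and the final exam grade (1-10). Column names are illustrative.
df = pd.read_csv("exam_and_bonus_tests.csv")

# Average exam grade per participation level, mirroring the comparison in the text.
summary = df.groupby("tests_completed")["exam_grade"].agg(["count", "mean"])
print(summary)

# Rank correlation between participation and exam performance. A strong correlation
# here still says nothing about causation: more motivated or better-prepared students
# may simply be more likely to take the bonus tests.
rho, p_value = spearmanr(df["tests_completed"], df["exam_grade"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")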
To gain insight into how students approached the bonus tests, I conducted an anonymous survey, which was completed by approximately one third of the cohort. Although students had been advised not to use calculators or other external tools, nearly 30% of respondents admitted to having used a calculator at least once during the tests. Interestingly, fewer than 15% reported using an AI tool such as ChatGPT to assist them.
The survey also explored students’ perceptions of the bonus test approach. Of those who responded, 45% agreed with the statement “The bonus tests were an extra motivation to keep up with the course.” Furthermore, 57% at least somewhat agreed with the statement “Without these tests I would have started studying for this course much later in the quarter.” These responses suggest that the intervention not only encouraged earlier engagement with the material but was also appreciated by a substantial portion of the students.
This project suggests that even low-stakes, repeatable online tests with small incentives can positively influence how students approach studying and may contribute to improved learning outcomes.