AI as a tool, not a crutch: Classroom experiences in data science
AI can be a useful support tool in data science education, but it also makes it easier to confuse polished output with real understanding. The challenge is not to ban AI, but to use it in ways that support learning rather than replace it, especially when core skills and reasoning are at stake.
Not long ago I had a student who had done perfectly fine on the assignments. Nothing spectacular, nothing suspicious, just solid work. Then came the oral exam, and he failed. Not a borderline case: he got basic things wrong, and some questions he could not answer at all. The moment we stepped away from the polished assignments and into actual understanding, the gap was striking. That moment stuck with me. Not because it proves some big theory, it doesn’t, but because it pointed at something that is getting harder to ignore. Student work can now look much better on the outside than the understanding on the inside. The code runs and the neatly structured report reads well. But how much of the material have the students actually internalized? That, for me, is the real tension with AI in education.
I’m not anti-AI. That would be silly, because it is clearly very useful. Students use it to fix code, summarize papers, clean up their writing, get unstuck, or sketch a plan. Teachers use it too, myself included. That’s not the interesting part anymore. The interesting part is where it starts to go wrong. Because in teaching, a good end product is not the same as learning. AI makes it very easy to forget that, or to hide it. Sometimes from the teacher, and sometimes from the student themselves.
A student can now produce something that looks like understanding without ever forming that understanding. That’s the part that worries me. Not the tool itself, but the illusion it can create. It can help produce work that looks convincing while the foundation underneath is thin. That’s why I keep coming back to the phrase: use AI as a tool, not a crutch. A tool is something you use to complete a task. A crutch is something a person depends on: it carries you through the difficult phase so you don’t have to struggle, and without the struggle you never build the strength.
If a student uses AI to tidy up wording, outline a plan, clean documentation, or generate a few practice questions, fine. But if the AI tool is doing all the hard work, if AI is used to complete the tasks tied to the primary learning outcomes, then we cannot be sure those outcomes are being learned at all. Students may complete the task successfully with AI support, yet fail to form a durable understanding of the process, the reasoning, or the choices involved. They got to an output, but the understanding did not stick; the task got finished, but the skill behind it did not get learned. Worse, because the result was there, students may have felt they understood the material more deeply than they actually did.
Preventing problematic AI use is probably not about banning the tools or pretending they are not there. That battle is already lost, and rightly so. The more sensible question is where AI use is acceptable, and where it starts to interfere with what students are actually supposed to learn. For me, that means being much more explicit about the difference between support tasks and core learning tasks. If AI helps a student plan a project, improve the wording of a report, generate practice questions, or sort out documentation, that is one thing. But if it starts taking over the reasoning, the interpretation, or the analytic choices themselves, then it is no longer just support. Then it is starting to replace the learning process rather than assist it. AI tools should not be used for tasks tied to primary learning outcomes: those are the skills students must master themselves, and outsourcing that work makes for a shallow learning process.
That also means assessment has to do more of the heavy lifting. If we want to know whether students actually understand the material, polished written work on its own is no longer enough. Oral exams help. Short on-the-spot assignments may help too. Not because they are “AI-proof” in any absolute sense, but because they make it harder for students to hide behind a smooth end product. The same goes for discussing responsible use more openly. Students need to know that they remain responsible for what they hand in, including whatever came out of a chatbot. If AI produces nonsense, bias, or plain errors, that does not magically become someone else’s problem. Part of teaching now is helping students see that these tools can be useful, but also wrong, shallow, or overconfident. That may be one of the main things they need to learn about AI in the first place.
Students also need to realize that they came here to acquire skills, not just to clear hurdles. Education is not an obstacle course where the goal is to get to the end with minimal effort and no discomfort. Some of the difficulty is the point. If AI is used to bypass every hard part, then students may complete the course, but miss much of what they came to learn.
Footnote: This contribution is part of the presentation Evidence-Based Practices in Teaching Mathematics at the University Level, given at the BeNeLux Mathematics Conference (BeNeLux MC), organised by 4TU+.AMI and Annoesjka Cabo.