Students get smarter with AI
In the discussion about the emergence of tools such as ChatGPT or Claude, the question of how educational institutions should deal with the use of Large Language Models (LLMs) is becoming increasingly urgent.
Cursor's recent article “How do we teach students to use ChatGPT?” likewise raises questions about Artificial Intelligence (AI). These technologies are now so widespread that they cannot be ignored, and students routinely use them for their assignments. Instead of focusing only on control mechanisms or banning their use, I argue for a more fundamental approach: students should be taught the basics of how LLMs work so they can interact with them critically.
In high school, these models can seem like a panacea because they quickly provide correct answers to simple questions. This makes regulation necessary: the technology works well for simple tasks, and students, understandably, are quick to choose the easiest route. In higher education, the situation is different. The output on a difficult question may look impressive at first glance, full of complex terms and fancy formulas, but in reality it is often completely wrong.
Basic knowledge of how this digital brain works helps students understand the technology better and deploy it more effectively. When they understand how the model works, they can better assess and supplement its output. Every student should aim to look critically at the world, and that certainly applies to this technology as well. Fortunately, the basic math behind these models is not as complicated as often thought. There are countless learning resources on the Internet which, combined with frequent use, give a good sense of how the underlying model works. I dare to claim to have this understanding, as I consult this digital tool more than average, and that's putting it mildly.
Take, for example, creating a data visualization. As a Data Science student, I know exactly what I want: a scatterplot showing the relationship between two variables, complete with a neat trend line and clear labels. This used to mean endlessly flipping through documentation looking for the right syntax. 'Was it plt.figure or plt.plot? Where does it say again how to add that trendline?' An LLM gives me the exact code I need within seconds. No time wasted on technical details; I can get straight to visualizing what I have in my head.
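As an illustration, here is roughly the kind of snippet an LLM returns for such a request: a sketch, not a definitive recipe, in which the example data, variable names, and output filename are all made up for the demonstration.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display window needed
import matplotlib.pyplot as plt

# Illustrative data: two roughly linearly related variables
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, 50)

# Fit a first-degree polynomial: this gives the trend line
slope, intercept = np.polyfit(x, y, deg=1)

fig, ax = plt.subplots()
ax.scatter(x, y, label="observations")
xs = np.sort(x)
ax.plot(xs, slope * xs + intercept, color="red",
        label=f"trend: y = {slope:.2f}x + {intercept:.2f}")
ax.set_xlabel("variable x")   # clear labels, as described above
ax.set_ylabel("variable y")
ax.legend()
fig.savefig("scatter_trend.png")
```

The point is not the specific code but the speed: the model handles the syntax (figure setup, `np.polyfit` for the trend line, axis labels) so the student can focus on what the plot should communicate.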
To use technology effectively, I advocate that all students receive a basic understanding of how these models work. How exactly this is to be shaped, I am happy to leave to the education experts, if there are any left after all the budget cuts. Only with that knowledge can we as students use AI to strengthen our own knowledge and skills, rather than falling into laziness. The goal is to find a balance where we can reap the benefits of these technologies without undermining the integrity of our learning. LLMs should complement our learning experience, not replace critical thinking and deep understanding.
Wob Knaap is a Data Science student at TU/e. The views expressed in this column are his own.