As a college professor, I get a lot of questions about homework — and lately they have almost all been about how artificial intelligence (AI) will change it. After all, if AIs can pass many medical, bar and economics exams, then they can certainly handle high-school or college homework.
Homework has long been a staple of the academic experience. How will it evolve as more students master the capabilities of (rapidly improving) AI systems? Or, to ask a slightly more pointed question: How am I supposed to know whether I am grading the student or the AI?
Big changes are in the offing, but they will arrive slowly. Classroom practices, for better or worse, are among the stickiest of human institutions. A lot of instruction has not changed much for thousands of years, even if modern chalk is better than its ancient precursors.
The main point is that grades will come to mean something different. Traditionally, at least in theory, grades have been a measure of how well a student understands the material. If they got an “A” in US history, presumably they could identify many of the founders. In the future, an “A” will mark a kind of conscientiousness: It will mean that, at the very least, they applied their AI consistently to the questions at hand. Whether that counts as cheating or is allowed will depend on the policies of the relevant educational institution, but anti-AI software is not reliable, and anti-AI rules cannot be enforced very readily.
“Applied their AI consistently” might sound unimpressive as a certification, but I have known many students over the years who do not meet even that standard. They might neglect to hand in homework or fail to monitor due dates. They might not know the relevant material — often they do not — and it is not at all clear to me that current AI technology will automatically enable them to get good grades.
In other words, an academic system replete with AI is still testing for something, even if it is much less glorious than what we might have hoped for. Over time, grades will come to indicate not so much knowledge of the material as a student’s ability to be organized and prepared.
To be clear, these are good habits to cultivate — and keep in mind that this new system will not be so different from the status quo. Students have collaborated on homework for as long as homework has existed, including with their parents, whether they were allowed to or not. The AI simply eases and speeds this collaborative process. There have never been entirely honest grades, not even in the “good old days.”
Still, there may continue to be a need, or at least a desire, to test for knowledge of the actual material. That will have to be done in person. Perhaps there will be exercises to be completed in the classroom, as my Bloomberg Opinion colleague Adrian Wooldridge has noted, or oral exams in the “Oxbridge” style.
These adjustments are most likely to take place in fields where results matter in a direct and measurable way, such as physical engineering. They are perhaps less likely in the humanities, where a student who is using AI to fake knowledge probably would not get very far anyway.
Another kind of adjustment will involve assigned projects, created by the student and the AI working in tandem. I taught a class to law students last spring, and one of the requirements was that they collaborate with AI on a research paper — transparently, of course, and with an explanation of how the AI was used. On average, students who learned how to work with the AI wrote better papers. This kind of collaborative skill will only become more crucial, and homework will evolve to reward that reality. In this context, using an AI to “cheat” is no longer an issue.
On the downside, collaborative projects with AI are usually less predictable and less cookie-cutter, and thus harder for instructors to grade. That seems a small price to pay for teaching students this all-important skill.
There is no doubt that AI will change the nature of not only homework, but also instruction. A professor who can recite a lot of information might no longer seem so impressive: Why pay for what an AI can deliver more cheaply? Instead, the focus will shift to what only humans (so far) can provide: inspiration, charisma, mentoring. That, too, will be a change for the better.
Tyler Cowen is a Bloomberg Opinion columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. He is coauthor of Talent: How to Identify Energizers, Creatives, and Winners Around the World. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.