The appearance late last year of artificial intelligence (AI) tools that can “write” essays, stories, or poems on any subject, generate artwork, and produce many other things once regarded as the sole achievement of the human creative and thinking process has caused ripples of concern in the educational world. Some schools abroad have banned the use of these AI tools in academic work.
Teachers worry that students may be tempted to turn to downloadable apps like ChatGPT to produce reports that synthesize the literature on any given subject, and pass them off as their own, in fulfillment of school assignments. Some believe that the growing popularity of these user-friendly tools could set off a “cheating epidemic” in schools.
The concerns of educators and teaching institutions are basically twofold: first, that these tools may encourage easy resort to dishonest shortcuts; and second, that the increasing automation of the learning process may produce students who cannot think properly, who are unable to reason on their own, and who uncritically accept everything that is fed to them.
These are valid apprehensions. But they are nowhere near as serious as the risks posed by autonomous or sufficiently capable intelligent systems, the ultimate product of AI. The most important of these risks are the “value alignment” problem and the “containment” problem.
AI machines are designed to achieve certain goals as optimally as possible. To perform their assigned work, they are programmed to access and process vast amounts of information within the reach of their computational resources. This can result in decisions that are not aligned with what we take for granted as humanity’s values (even as there is no easy consensus on what these are).
This leads to the problem of containment: how to ensure that automated decisions are reviewable and reversible before they result in catastrophic harm. And here we are not even talking about the spooky possibility of emergent consciousness, of machines developing their own way of reasoning through repeated machine learning under varying conditions.
Developments in AI are moving so fast that observers and analysts believe a thoughtful consideration of these risks must be built into every serious discussion of artificial intelligence. Because the potential benefits for many areas of human endeavor are limitless, no one who has closely observed this field is calling for a halt to basic research in AI.
Rather, as Stuart Russell argues on Edge.org: “The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values. For practical reasons, we will need to solve the value alignment problem even for relatively unintelligent AI systems that operate in the human environment.”
This brings us back to the challenges posed by AI in the educational environment. Out of curiosity, I recently downloaded ChatGPT. I wanted to see for myself how “intelligent” the text it generates is in the various categories it offers—story, outline, poem, apology letter, reply, song, paper, etc. My first impression is that it’s entertaining; I wouldn’t be surprised if some think of it as just another game. But I wouldn’t find it funny at all if any of my students submitted these generated texts as their own work.
Although it bore clear attempts at embellishment, the poem I prompted (about gray mornings and the stillness of the forest) came out flat and formulaic. The story I requested (on the pros and cons of AI) was too general, almost as if it were an attempt to expand the headings of a Wikipedia entry. But the outline (on the concept of globalization) was quite useful, at least as a starting point for a sensible discussion of the topic. Here, perhaps, is a way of repurposing a tool like this: use it to tease out your own thoughts on a given subject, a means of getting past the paralysis that a blank page can often induce.
But, at this point, I wouldn’t worry too much about the potential abuses of AI tools in the educational setting. The challenge they pose is no more difficult than the old problem of detecting plagiarized and ghost-written work. A conscientious and well-informed teacher would know, or would at least have a way of probing, real authorship. Moreover, if a required paper or report specifically calls for the application of concepts to real-world situations and problems actually tackled in class, students might think twice before submitting a piece of work that is totally alien to them.
A lot depends on the quality and style of teaching to which today’s students are exposed. Unimaginative or lazy teachers who are completely dependent on textbooks, and who quite often are just slightly ahead of their students in their readings, are more likely to be the recipients of plagiarized and AI-generated submissions. If they’re not equipped to teach their students how to think, chances are they wouldn’t be able to tell the fake from the real.
If the primary function of modern education is to prepare citizens to live in the future, we cannot afford to underestimate the significance of artificial intelligence. For the future we face is a world filled with smart machines that seek to automate all aspects of everyday life. Our hope is that, in such a world, the people we graduate from our schools are themselves so complex and so capable of critical thinking that they cannot easily be replaced by, or reduced to adjuncts of, superintelligent machines.