OpenAI’s new chatbot is raising fears of cheating on homework, but its potential as an educational tool outweighs its risks
Recently, I gave a talk to a group of K-12 teachers and public school administrators in New York. The topic was artificial intelligence, and how schools would need to adapt to prepare students for a future filled with all kinds of capable AI tools.
But it turned out that my audience cared about only one AI tool: ChatGPT, the buzzy chatbot developed by OpenAI that is capable of writing cogent essays, solving science and maths problems and producing working computer code.
ChatGPT is new — it was released in late November — but it has already sent many educators into a panic. Students are using it to write their assignments, passing off AI-generated essays and problem sets as their own. Teachers and school administrators have been scrambling to catch students using the tool to cheat, and they are fretting about the havoc ChatGPT could wreak on their lesson plans. (Some publications have declared, perhaps a bit prematurely, that ChatGPT has killed homework altogether.)
Cheating is the immediate, practical fear, along with the bot’s propensity to spit out wrong or misleading answers. But there are existential worries, too. One high school teacher told me that he used ChatGPT to evaluate a few of his students’ papers, and that the app had provided more detailed and useful feedback on them than he would have, in a tiny fraction of the time.
“Am I even necessary now?” he asked me, only half-joking.
Some schools have responded to ChatGPT by cracking down. New York City public schools, for example, recently blocked ChatGPT access on school computers and networks, citing “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content.” Schools in other cities have also restricted access.
It’s easy to understand why educators feel threatened. ChatGPT is a freakishly capable tool that landed in their midst with no warning, and it performs reasonably well across a wide variety of tasks and academic subjects. There are legitimate questions about the ethics of AI-generated writing, and concerns about whether the answers ChatGPT gives are accurate. (Often, they’re not.) And I’m sympathetic to teachers who feel that they have enough to worry about, without adding AI-generated homework to the mix.
But after talking with dozens of educators over the past few weeks, I’ve come around to the view that banning ChatGPT from the classroom is the wrong move.
Instead, I believe schools should thoughtfully embrace ChatGPT as a teaching aid — one that could unlock student creativity, offer personalised tutoring, and better prepare students to work alongside AI systems as adults. Here’s why.
The first reason not to ban ChatGPT in schools is that, to be blunt, it’s not going to work.
Sure, a school can block the ChatGPT website on school networks and school-owned devices. But students have phones, laptops and any number of other ways of accessing it outside of class. (Just for kicks, I asked ChatGPT how a student intent on using the app might evade a schoolwide ban. It came up with five answers, all entirely plausible, including using a VPN to disguise the student’s web traffic.)
Some teachers have high hopes for tools such as GPTZero, a programme built by a Princeton University student that claims to be able to detect AI-generated writing. But these tools aren’t reliably accurate, and it’s relatively easy to fool them by changing a few words, or using a different AI programme to paraphrase certain passages.
AI chatbots could be programmed to watermark their outputs in some way, so teachers would have an easier time spotting AI-generated text. But this, too, is a flimsy defence. Right now, ChatGPT is the only free, easy-to-use chatbot of its calibre. But there will be others, and students will soon be able to take their pick, probably including apps that leave no AI fingerprints.
Even if it were technically possible to block ChatGPT, do teachers want to spend their nights and weekends keeping up with the latest AI detection software? Several educators I spoke with said that while they found the idea of ChatGPT-assisted cheating annoying, policing it sounded even worse.
“I don’t want to be in an adversarial relationship with my students,” said Gina Parnaby, chair of the English department at the Marist School, an independent school for grades 7 through 12 outside Atlanta. “If our mindset approaching this is that we have to build a better mousetrap to catch kids cheating, I just think that’s the wrong approach, because the kids are going to figure something out.”
Instead of starting an endless game of whack-a-mole against an ever-expanding army of AI chatbots, here’s a suggestion: For the rest of the academic year, schools should treat ChatGPT the way they treat calculators — allowing it for some assignments, but not others, and assuming that unless students are being supervised in person with their devices stashed away, they’re probably using one.
Then, over the summer, teachers can modify their lesson plans — replacing take-home exams with in-class tests or group discussions, for example — to try to keep cheaters at bay.
The second reason not to ban ChatGPT from the classroom is that, with the right approach, it can be an effective teaching tool.
Cherie Shields, a high school English teacher in Oregon, told me that she had recently assigned students in one of her classes to use ChatGPT to create outlines for their essays comparing and contrasting two 19th-century short stories that touch on themes of gender and mental health: The Story of an Hour, by Kate Chopin, and The Yellow Wallpaper, by Charlotte Perkins Gilman. Once the outlines were generated, her students put their laptops away and wrote their essays longhand.
The process, she said, had not only deepened students’ understanding of the stories; it had also taught them about interacting with AI models, and how to coax a helpful response out of one.
“They have to understand, ‘I need this to produce an outline about X, Y and Z,’ and they have to think very carefully about it,” Shields said. “And if they don’t get the result that they want, they can always revise it.”
Creating outlines is just one of the many ways that ChatGPT could be used in class. It could write personalised lesson plans for each student (“explain Newton’s laws of motion to a visual-spatial learner”) and generate ideas for classroom activities (“write a script for a Friends episode that takes place at the Constitutional Convention”). It could serve as an after-hours tutor (“explain the Doppler effect, using language an eighth grader could understand”) or a debate sparring partner (“convince me that animal testing should be banned”). It could be used as a starting point for in-class exercises, or a tool for English language learners to improve their basic writing skills. (Teaching blog ‘Ditch That Textbook’ has a long list of possible classroom uses for ChatGPT.)
Even ChatGPT’s flaws — such as the fact that its answers to factual questions are often wrong — can become fodder for a critical-thinking exercise. Several teachers told me that they had instructed students to try to trip up ChatGPT, or evaluate its responses the way a teacher would evaluate a student’s.
ChatGPT can also help teachers save time preparing for class. Jon Gold, an eighth grade history teacher at Moses Brown School, a pre-K through 12th grade Quaker school in Providence, Rhode Island, said that he had experimented with using ChatGPT to generate quizzes. He fed the bot an article about Ukraine, for example, and asked it to generate 10 multiple-choice questions that could be used to test students’ understanding of the article. (Of those 10 questions, he said, six were usable.)
Ultimately, Gold said, ChatGPT wasn’t a threat to student learning as long as teachers paired it with substantive, in-class discussions.
“Any tool that lets students refine their thinking before they come to class, and practise their ideas, is only going to make our discussions richer,” he said.
Now, I’ll take off my tech columnist hat for a second, and confess that writing this piece has made me a little sad. I loved school, and it pains me, on some level, to think that instead of sharpening their skills by writing essays about The Sun Also Rises or straining to factor a trigonometric expression, today’s students might simply ask an AI chatbot to do it for them.
I also don’t believe that educators who are reflexively opposed to ChatGPT are being irrational. This type of AI really is (if you’ll excuse the buzzword) disruptive — to classroom routines, to long-standing pedagogical practices, and to the basic principle that the work students turn in should reflect cogitation happening inside their brains, rather than in the latent space of a machine learning model hosted on a distant supercomputer.
But the barricade has fallen. Tools like ChatGPT aren’t going anywhere; they’re only going to improve, and barring some major regulatory intervention, this particular form of machine intelligence is now a fixture of our society.
“Large language models aren’t going to get less capable in the next few years,” said Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania. “We need to figure out a way to adjust to these tools, and not just ban them.”
That’s the biggest reason not to ban it from the classroom, in fact — because today’s students will graduate into a world full of generative AI programmes. They’ll need to know their way around these tools — their strengths and weaknesses, their hallmarks and blind spots — in order to work alongside them. To be good citizens, they’ll need hands-on experience to understand how this type of AI works, what types of bias it contains, and how it can be misused and weaponised.
This adjustment won’t be easy. Sudden technological shifts rarely are. But who better to guide students into this strange new world than their teachers?
– This article originally appeared in The New York Times.