Does AI have a place in schools?
- NYC ban on ChatGPT in public schools repealed as students return to class.
- Artificial intelligence could be a useful tool for teachers.
- Videos spreading misinformation to children on YouTube were generated by AI.
Just under a year ago, artificial intelligence in education was the cause of massive angst in schools. Initially, the advent of generative AI was deemed a major threat – New York City public schools banned ChatGPT from in-school devices a week after its launch.
Now, as students are returning for a new school year, the early alarm felt by educators has given way to pragmatism and more understanding of the ways that generative AI will impact the classroom.
David Banks, the chancellor of New York City Public Schools, wrote that the district was now “determined to embrace” generative AI.
One reason that the AI hysteria has lessened is a better understanding of the technology's weaknesses. Its tendency to “hallucinate”, or fabricate information, has somewhat assuaged fears that it would enable rampant cheating and plagiarism; there are often obvious tells that a piece of writing was generated by artificial intelligence.
Students have also been handing in work with citations that don’t exist, though in fairness, lawyers have done this too.
Artificial intelligence in education isn’t a new thing – for the past decade it’s been used by systems like Carnegie Learning and Aleks to analyze student responses to questions and adapt learning materials to meet individual needs.
Generative AI is what caused the spike of alarm, but it can also be employed as part of a curriculum. No doubt, education will be disrupted by artificial intelligence, but can it be harnessed, rather than regarded as hostile?
Seven months ago, artificial intelligence in education was seen by many as the enemy.
Using AI in education
The most direct application for generative AI chatbots in education is in translation and language learning. At one point, stocks in Duolingo dropped due to investors’ fears that users would turn to the free ChatGPT rather than the app. Instead, Duolingo began integrating ChatGPT features into the app.
In March 2023, the company launched Duolingo Max, which uses GPT-4 technology to add two new features to Duolingo Super. Those features explain wrong answers to users and offer a role-playing feature that allows users to practice conversations in different scenarios.
As well as the use of chatbots to outsource homework and essay tasks, generative AI also offers benefits for students learning to write – or write more clearly. Grammarly is a well-known tool that uses AI to help users with their writing.
ChatGPT and others can rewrite, edit, or make changes to documents as needed. Alternatively, it might be used by creative writers to generate prompts, dialogue and story ideas. Teachers could also use it in this way when setting tasks.
Generative AI could be to the literary world what the calculator is to mathematics: providing aid in the tedious parts of the work and allowing for more to be achieved, despite specialists arguing that the tedious parts are significant to an understanding of the whole.
In early childhood education, AI can be used to create interactive games to teach children academic and social skills. It can also be used to power monitoring systems that track children’s behavior and provide insights into their development.
As with every sector, there’s a dark cloud over the ease that AI can bring to a job: if you can use it, what’s stopping your employer from cutting out the middleman and giving your job title to a chatbot?
Luckily for human beings with mortgages or rents to pay, although AI is useful in some educational settings, it’s not likely to replace teachers anytime soon. The pandemic proved the value of human-to-human, in-school learning. Still, AI is a helpful tool for teachers and, at the very least, saves time on admin, which can be redirected to more valuable tasks.
The tasks that AI programs like ChatGPT can help with include creating lesson plans, assisting with grading papers and homework, offering personalized help to students, and even creating videos from textbooks.
Plagiarism isn’t a new problem
Plagiarism did not just pop out of thin air the day OpenAI launched ChatGPT. It has always been a consideration that teachers have had to adapt to. In fact, even in the world of generative AI, it still depends on a very human impulse – the moment at which someone decides to take what isn’t their own and present it as though it is.
That hasn’t stopped a widespread pushback against the bots using material from the internet for training.
Turnitin, a plagiarism checker, has developed an AI detection tool that highlights which portions of a piece of writing may have been generated by AI. After reviewing more than 65 million submissions, it flagged 10.3 percent of them as potentially containing AI writing in more than 20 percent of the work.
The detector has a four percent false positive rate. Because of this, Turnitin recommends educators have conversations with students, rather than instantly failing them or making immediate accusations based on the checker’s results. “It’s just supposed to be information for the educator to decide what they want to do with it,” says Annie Chechitelli, Turnitin’s chief product officer. “It is not perfect.”
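At that scale, even a small error rate adds up. A back-of-envelope sketch (in Python, using the article’s figures plus an assumed share of fully human-written submissions, which the article does not give) shows why Turnitin urges caution:

```python
# Back-of-envelope: a 4% false positive rate at the scale of 65 million submissions.
# The human_share figure below is a hypothetical assumption for illustration only.
submissions = 65_000_000
flagged_share = 0.103        # share flagged as >20% AI writing (from the article)
false_positive_rate = 0.04   # detector's stated false positive rate

flagged = submissions * flagged_share
human_share = 0.95           # assumed share of fully human-written submissions
innocent_flagged = submissions * human_share * false_positive_rate

print(f"Submissions flagged: {flagged:,.0f}")
print(f"Human-written work potentially flagged in error: {innocent_flagged:,.0f}")
```

Under these illustrative assumptions, millions of entirely human-written papers could be flagged in error, which is why the company advises conversation over accusation.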
AI: exposing education
One argument is that AI is highlighting the flaws in the education system. Its ability to pass assessments doesn’t reflect the ‘intelligence’ of machine learning, but our failure to build education systems that nurture and value unique human intelligence.
A tool like ChatGPT has no understanding or knowledge – at least, not yet. It just collates text input based on statistics. For more on this, Google-search ‘stochastic parrots.’
The academic success of artificial intelligence in educational assessments demonstrates that they don’t test knowledge and understanding, but rather whether students have collected and memorized information.
Teachers are designing lesson plans specifically with this in mind. If a chatbot can easily churn out decent work in response to an assignment, the assignment might need adjusting: Educators may try giving their assignments to a chatbot to see what’s generated, says Emily Isaacs, executive director of the Office for Faculty Excellence at Montclair State University in New Jersey.
Lisa Parry, a K–12 school principal and AP English Language and Composition teacher in rural Arlington, South Dakota, told Wired that she plans to have her English students use ChatGPT as “a search engine on steroids” to help brainstorm essay topics. “ChatGPT has great power to do good, and it has power to undermine what we’re trying to do here academically,” she says. “But I don’t want to throw the baby out with the bathwater.”
To stay ahead of AI in education, the sector will have to radically rethink what education is for and how success is measured.
Artificial intelligence in education, education on AI
Another key change will be the need to educate children (and their parents, and their teachers!) on AI. In Japan, regulations stressing the importance of students understanding artificial intelligence are already in effect.
Key to effective use of AI is critical thinking – something that generative chatbots haven’t shown much skill at yet. Tasks designed to test critical thinking will make it harder for students to use chatbots to complete their work, and stronger critical thinking will (hopefully) slow the spread of misinformation that AI has accelerated.
The trouble is that children aren’t equipped to distinguish fact from fake. Investigative BBC journalists found more than 50 YouTube channels in more than 20 languages spreading disinformation disguised as STEM (science, technology, engineering and maths) content.
The videos included pseudo-science, false information and conspiracies. YouTube recommends these videos alongside legitimate educational content – and videos tagged as “educational content” are more likely to be recommended to children.
Revenue on YouTube videos incentivizes creators to chase as many views as possible, and to use AI to generate so-called educational content, which will be pushed by the algorithm and shown to children all over the world.
As part of their experiment, the journalists shared some of the recommended false science content to two groups of 10-12-year-olds – one in the UK and one in Thailand.
When the journalists then explained to the children that those videos were made using AI and contained false information, the children were shocked.
“I’m actually really confused. I thought it was real,” said one boy. Another said: “I would’ve probably believed it, if you hadn’t told us it was fake.”
“These videos do well because they are conspiratorial [suggesting they are exposing a secret most people don’t know about],” says Professor Vicki Nash, Director of the Oxford Internet Institute. “We are all fascinated by things that run counter to what we’re officially told, and children are more susceptible to this than adults.”
As AI tools continue to improve, misleading content will become easier to create, and the quality of a video will no longer indicate whether it was artificially generated. The waters get muddier still if we think about some of the potential uses of AI in educational settings.
If a teacher generates a video summary from a textbook, the robotic voice-over and simple animations of an AI-generated clip will become synonymous with educational resources. As artificial intelligence in education becomes the norm, we run the risk of making students less and less wary of AI-generated content, regardless of its accuracy.
Could AI be a disruptive but positive influence on the education system?
September 28: correction of Turnitin’s false positive rate; only 4%.