Students around the world are using generative AI tools to write reports and complete assignments. Teachers also use similar tools to grade tests. What the heck is going on here? Where is all this heading? Can education return to a pre-artificial intelligence world?
How many students are using generative AI in their schools?
Many high school and college students embraced popular generative AI writing tools like OpenAI’s ChatGPT as soon as they started gaining international attention in 2022. The incentive was clear: with just a few simple prompts, large language models (LLMs) at the time could comb through vast troves of articles, books, and archives and spit out reasonably coherent short-form essays and question responses in seconds. The language wasn’t perfect, and the models had a tendency to fabricate facts, but it was enough to get past educators who weren’t yet equipped to spot the telltale signs of AI-generated writing.
The trend spread like wildfire. According to a recent Pew Research survey, about 1 in 5 high school teens who have heard of ChatGPT say they have already used the tool for schoolwork. A separate report from ACT, which produces one of the two most popular standardized tests for college admissions, found that nearly half (46%) of high school students surveyed said they had used AI tools to complete assignments. A similar trend is occurring in higher education: more than a third (37%) of U.S. college students surveyed by the online education magazine Intelligent.com said they had used ChatGPT to generate ideas, write papers, or both.
Those AI writing tools are showing up in submitted work. Turnitin, a prominent plagiarism detection company used by educators, recently told Wired that it found evidence of AI-generated writing in 22 million college and high school papers submitted through its service last year. Of the roughly 200 million papers submitted in 2023, Turnitin claims, more than 11% contained AI-generated material. And even as generative AI use levels off in society at large, students are showing no signs of slowing down.
Shortly after students started using AI writing tools, teachers turned to other AI models to try to stop them. As of this writing, dozens of tech companies and startups claim to have developed software that can detect the telltale signs of AI-generated text, and teachers and professors across the country already rely on them to varying degrees. But critics say that even years after ChatGPT became popular, AI detection tools remain far from perfect.
A recent analysis of 18 different AI detection tools, published in the International Journal of Educational Integrity, highlights their lack of overall accuracy. None of the tools studied could reliably distinguish AI-generated content from human writing, and only five achieved accuracy above 70%. As AI writing models improve over time, detection may become even more difficult.
Accuracy is not the only issue that limits the effectiveness of AI detection tools. Over-reliance on these developing detection systems risks penalizing students for using useful AI software that would otherwise be allowed. That very scenario recently happened to a student named Marley Stevens at the University of North Georgia. She claims that an AI detection tool interpreted her use of the popular spelling and writing aid Grammarly as cheating. Ms. Stevens earned a zero on that essay, which she claims disqualified her from the scholarship she was aiming for.
“I spoke with my professor and the dean, [and they said] I was ‘unintentionally cheating,’” Stevens claimed in a TikTok post. The University of North Georgia did not immediately respond to Popular Science’s request for comment.
There is also evidence that current AI detection tools mistakenly flag real human writing as AI content. Beyond common false positives, Stanford University researchers warn that detection tools can unfairly penalize writing by non-native English speakers. More than half (61.2%) of the essays written by non-native speakers included in the study were classified as AI-generated, and 97% of non-native speakers’ essays were flagged as AI-generated by at least one of the seven detection tools tested. Widespread deployment of detection tools could put additional pressure on non-native speakers who are already burdened with overcoming language barriers.
How are schools responding to the rise of AI?
Educators are scrambling to find solutions to the influx of AI writing. Some major school districts, including those in New York and Los Angeles, have chosen to ban the use of ChatGPT and related tools outright. Professors at universities across the country are reluctantly beginning to use AI detection software, despite its known accuracy shortcomings. One of those educators, a composition professor at Michigan Technological University, acknowledged the detectors’ flaws in an interview with Inside Higher Ed but still called them “a tool that has the potential to help.”
Others, meanwhile, are taking the opposite approach and leaning into AI education tools with open arms. In Texas, according to The Texas Tribune, the state’s education agency moved just this week to replace thousands of human graders with an “automated grading system.” The agency claims the new system for scoring open-ended written responses on state public exams could save $15 million to $20 million a year; an estimated 2,000 temporary graders will lose their jobs in the process. Elementary schools elsewhere in the state are reportedly experimenting with AI learning modules that teach children the basic core curriculum, supplemented by human teachers.
AI in education: the new normal
AI writing detection tools may evolve to improve accuracy and reduce false positives, but on their own they are unlikely to return education to the pre-ChatGPT era. Some scholars argue that rather than fighting the new normal, educators should embrace AI tools in classrooms and lecture halls and teach students how to use them effectively. In a blog post, MIT Sloan researchers argue that professors and teachers can still restrict the use of certain tools, but should do so through clearly written rules that explain why. Students, they write, should feel free to approach teachers to ask if and when AI tools are appropriate.
Others, like former Elon University professor CW Howell, argue that explicitly and intentionally exposing students to AI-generated writing in the classroom may make them less likely to use AI themselves. Asking students to grade essays generated by AI, Howell writes in Wired, gives them first-hand experience of how frequently AI fabricates sources and hallucinates quotations out of thin air. Viewed through that lens, AI-generated essays can actually improve education.
“By showing students how flawed ChatGPT is, they were able to regain confidence in their own minds and abilities,” Howell wrote.
And if AI fundamentally changes the economic landscape, as some doomsday buffs believe, students can always spend their days learning how to design prompts, train AI models, and help build the AI-dominated future that is predicted to come.