After a year of research in the GenAI4ED project, one thing is clear: cheating remains a central theme in conversations around the use of generative Artificial Intelligence (GenAI) in secondary education. The dominant concern is not simply how GenAI supports learning, but whether it undermines academic integrity. While this concern is understandable, our findings suggest that the deeper issue emerging in schools is a growing sense of mistrust between teachers and students. As one teacher interviewee reflected on their concerns about mistrust,
“GenAI has made me more suspicious of students. I used to ask them to do the writing at home; now we do it in class because I know that a lot of times what they bring is created by an AI tool”.
Debates about technology use among students are gaining momentum. Concerns about dependency and cognitive impact have encouraged governments to introduce age-related restrictions on phone and social media use. This debate highlights an emerging tension around the use of tools like GenAI in education and society more broadly. On the one hand, initiatives similar to Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024, which limits access for under-16s, are being considered across Europe. On the other hand, the focus is shifting from restriction to integration. Countries such as Italy and Greece are experimenting with pilot programmes that embed AI into education, including nationwide deployments of ChatGPT Edu and training initiatives like Microsoft Elevate, designed to support responsible AI use among teachers and students.
Undoubtedly, digital tools, and GenAI in particular, have caused an upheaval in schools. During the EC-TEL TAICO Dialogue Lab, a conference workshop on AI in education held in Newcastle in 2025, discussions on the future of education and assessment highlighted that this disruption may also be an opportunity. Rather than focusing solely on preventing cheating, educators are beginning to reconsider how assessment, authorship, and digital literacy are defined in increasingly digitalised learning environments. Schools continue to play a central role in preparing students for the workforce, and emerging competencies such as digital literacy and critical AI literacy are becoming essential. For example, one student interviewee reflected on the blurred lines around what qualifies as authorship,
“If you ask ChatGPT to write you this essay and you submit it, it’s obviously not going to be fair. It’s called plagiarism”.
This raises questions about how students distinguish between GenAI as a learning support and GenAI as a shortcut that risks overreliance and dependency. These changes, however, reveal a broader dynamic than academic dishonesty alone. As teachers become more cautious and students more defensive, a cycle of mistrust begins to take shape. The question of fairness extends beyond student behaviour to include teacher practices as well. Some educators acknowledge using GenAI to generate classroom exercises with minimal revision, and students are increasingly aware of the signs of AI-generated content. This mutual awareness raises new ethical questions: if teachers use GenAI to save time, how do students interpret expectations around their own use? A student interviewee contemplated,
“I know it’s wrong […] but I’m thinking, it saves me time, and if teachers are going to use it for whatever they want, then I’ll do the same”.
A GenAI4ED review of 59 journal publications revealed that teachers and students frequently frame GenAI through an ethical lens, raising questions about plagiarism, authorship, and the validity of current assessment methods. Similarly, our review of 72 policies and guidelines on GenAI for secondary education revealed academic honesty as the most frequent recommendation directed at students, accounting for 15% of all guidance. A minority of papers and policies suggest acknowledging AI use explicitly or implementing a variety of restrictive measures. But by focusing on cheating, policies miss an opportunity to discuss how these systems are built and how they can be used responsibly.
Our interviews show that while cheating is a central concern, other matters are at stake. Students described difficulty navigating the boundary between support and substitution when using GenAI. Teachers reported adapting their classroom practices in response, for example by dedicating teaching hours to written assignments instead of assigning them as homework.
Ultimately, rebuilding trust requires moving beyond surveillance of students’ AI use and toward shared models of responsible practice. Teachers, students, and parents need clear guidance not only on what constitutes misuse, but also on what constitutes responsible use. To leverage GenAI purposefully and effectively, stakeholders need the right tools to assess the available GenAI apps and recognise their limitations and potential impact. As such, it is GenAI4ED’s mission to support users in critically selecting and evaluating GenAI tools that align with their educational goals, fostering both critical awareness and responsible innovation in the classroom.
Author: Alba Paz-López (Trilateral Research).

