A recent Google-commissioned study reveals a significant disconnect in the ongoing debate around AI in schools. Rather than seeking outright prohibition, young people are actively integrating artificial intelligence into their learning and creative processes, and they are signaling a clear demand for effective guidance and digital literacy from educators and policymakers. This finding challenges the prevailing narrative of banning access and instead supports a human-first approach to technology design.
The study, which surveyed over 7,000 European teenagers, found that a majority are already using AI weekly for schoolwork or creative tasks. They report that AI makes learning more engaging (50%), explains difficult topics (47%), and provides instant feedback (47%). These figures show that students are not asking for basic explanations of what AI is; they are seeking help to use these tools effectively. Yet over a quarter believe their schools have not approved any AI tools, and another 13% are unsure what their school's policies are.
The solution, therefore, is not blanket prohibition but clear, age-appropriate guardrails: explicitly defining acceptable use, outlining citation protocols, and teaching methods for verifying AI-generated content. Teens are implicitly asking for these structured guidelines, and the report explicitly calls for curriculum-level AI and media literacy, alongside harmonized standards that protect users while preserving access to information.
Shifting the Digital Dialogue
Beyond AI, the report emphasizes the critical role of video in youth learning, with nearly 84% watching educational content weekly. Personalized recommendations, often criticized, are also valued by teens for discovering genuinely interesting content, especially when combined with active search and peer sharing. These platforms, despite their challenges, serve as vital learning environments, and young people are advocating for smarter, safer versions, not their elimination.
Young people are not naive about digital risks; they express concern about misinformation and want help evaluating AI-generated content. They seek clarity and fairness, reflected in demands for user-friendly privacy settings, developmentally appropriate policies, and inclusive tools. From Save the Children’s perspective, these findings align with a rights-based approach: protection and participation must coexist, which makes one-size-fits-all bans counterproductive.
The evidence is clear: teens are already using AI and video for learning and creation. The imperative now is to make digital environments safer by default and to raise digital literacy across the board. Practical steps include requiring clearer, default-on safety and privacy controls across platforms, integrating AI and media literacy into school timetables, and empowering parents as the primary line of support through national education programs.