Artificial intelligence is transforming the way people learn, work, and make decisions. Yet across sectors, one lesson is becoming increasingly clear: successful AI adoption is not only about technological capability. It is about whether people trust the tools, understand how to use them, and feel that innovation supports rather than replaces human agency. This is where projects such as GenAI4ED and SkillAIbility meet in a particularly meaningful way.
At first glance, education and manufacturing may seem far apart. One focuses on classrooms, teachers, students, and school communities; the other on production environments, workforce transformation, and industrial innovation. However, both sectors face a similar challenge: integrating AI in ways that are practical, responsible, and human-centred. Both are also asking similar questions: What skills are needed? How do people build trust? How can organisations adopt AI without losing sight of inclusion, wellbeing, and human value?
GenAI4ED approaches these questions from the perspective of secondary education. The project is working to support the responsible and meaningful use of Generative AI in school settings by combining research, co-design, stakeholder engagement, policy guidance, and practical tools. In this context, AI is not treated simply as a set of applications to be introduced into classrooms, but as a broader educational challenge that affects teachers, students, parents, and decision-makers. Issues such as AI literacy, transparency, ethical use, critical thinking, and practical support are central to the project’s work.
SkillAIbility addresses a parallel challenge in the manufacturing sector. As reflected in its recent article on the social and economic shifts behind human–AI integration in manufacturing, the project highlights that AI adoption is not only a technical question but also an organisational and societal one. Human-centred AI in manufacturing depends on lifelong learning, workforce empowerment, inclusion, and the ability of organisations and workplaces to adapt responsibly to change. In this way, SkillAIbility contributes to an Industry 5.0 vision in which people remain central to technological transformation.
What makes the dialogue between these two projects so valuable is precisely this shared perspective. In both education and manufacturing, AI raises expectations of greater efficiency, better decision-making, and new forms of support. But in both sectors, the quality of adoption depends on the same underlying conditions: people need guidance, not just tools; institutions need governance, not just innovation; and stakeholders need to be included, not simply informed after decisions have already been made.
One important common lesson is the central role of skills. In schools, the conversation often focuses on whether teachers and students are prepared to use GenAI responsibly and critically. In workplaces, the conversation shifts toward upskilling and reskilling employees so they can work confidently alongside AI-enabled systems. In both cases, the deeper issue is not only technical knowledge, but the ability to understand what AI can and cannot do, to question outputs, to make informed choices, and to act responsibly in context.
A second shared lesson is trust. Trust does not emerge automatically from innovation. It is built through transparency, participation, and relevance. School communities need to know why a given AI tool or practice is useful, how data and outputs should be interpreted, and what safeguards are in place. Workers and organisations need similar clarity about the role of AI, the implications for work practices, and the way human oversight is preserved. Trust, therefore, is not a communication add-on; it is one of the core conditions for uptake.
A third point of convergence is inclusion. AI systems can widen inequalities if they are introduced without attention to access, literacy, or the diversity of users’ needs. This is true in classrooms and in workplaces. Human-centred AI means recognising that people engage with technology from different starting points and under different conditions. It also means designing implementation pathways that are sensitive to social, cultural, and organisational realities. This is why stakeholder engagement matters so much in both projects: people affected by AI adoption should have a voice in shaping how that adoption happens.
Finally, both projects show that responsible AI adoption requires translation between research and practice. Evidence, policy, and innovation all matter, but they need to be turned into forms that users can understand and apply. For GenAI4ED, that may mean practical guidance for educators, parents, and policymakers, alongside training resources and policy briefs. For SkillAIbility, it means setting new standards to help organisations and workers engage with AI through frameworks that support skills development, participation, and resilience. In both cases, communication is not only about visibility; it is about making complex change usable and meaningful.
This is why collaboration across projects and sectors is so important. Education and manufacturing may operate in different environments, but they can still learn from one another. Both sectors are navigating the human side of AI integration. Both are looking for ways to combine innovation with inclusion and trust. And both demonstrate that a sustainable AI transition is not just a matter of technology adoption, but of how societies prepare people to live, learn, and work with emerging systems.
As AI continues to reshape both schools and workplaces, cross-project exchanges like this one can help build stronger shared understanding. They can also remind us that the future of AI should not be defined only by what systems can do, but by what people need in order to use them well. Across classrooms and factories alike, the most resilient path forward is one that keeps human needs, capabilities, and values at the centre.
Authors: Theodora Giatagana (Found.ation), Jakob Nennmann (MADE s.c.a r.l.)

