In this final respect, he may not be wrong. Almost 30 years later, AI systems have been shown to be susceptible to bias in the data and instructions used to train them, as demonstrated by Google’s recent embarrassing Gemini launch2. The result can be output that is inaccurate or that reflects skewed perspectives, which underlines how crucial critical thinking skills are. AI should be used as one tool in the teaching and learning box: a starting point for further investigation using other sources. So although the internet and AI are clearly taking huge steps forward in changing everyday life, AI remains just one reference point and must not be allowed to reinforce disparities in educational outcomes and opportunities for students.
So where does NAIS Pudong take its stance? Unfortunately, the answer isn’t easy. We undoubtedly embrace technology in our school and agree that it is vital for our students to remain at the forefront of understanding this technology and how it can benefit us. We will embrace AI where we can see clear advantages for our students, but will also remain alert to the challenges ahead and monitor areas in which we foresee issues. Specifically:
We need to be sure that students do not develop an over-reliance on AI.
If they become overly dependent on AI for completing tasks and assignments, it could hinder the development of critical thinking skills, problem-solving abilities, and creativity. Relying too heavily on AI may lead to a passive learning experience, in which students simply follow instructions without actively engaging with the material.
Everyone in school must be aware of privacy and data security issues.
AI technologies often rely on collecting and analysing large amounts of data, raising concerns about privacy and data security. If sensitive student information is not properly protected, it could be vulnerable to misuse or unauthorised access, leading to potential harm to students' academic and personal well-being.
As alluded to earlier, we all need to be alert to algorithmic bias and discrimination.
AI systems are susceptible to biases present in the data used to train them, which could result in unfair treatment or discrimination against certain groups of students. If AI algorithms perpetuate existing inequalities or reinforce stereotypes, it could exacerbate disparities in educational outcomes and opportunities.
Critically, especially in the area of wellbeing, we must not allow AI to become a replacement for human interaction.
While AI can enhance certain aspects of education, such as personalised learning and adaptive feedback, it cannot fully replace the value of human interaction and mentorship. Over-reliance on AI may lead to a loss of meaningful teacher-student relationships and social-emotional learning opportunities, which are essential for holistic development.
And finally, mindful of the UN’s Sustainable Development Goals, we need to remain alert to the digital divide.
Access to AI technologies and digital resources is not uniform across all schools and communities. The digital divide exacerbates existing inequalities in education, with students from under-served or marginalised backgrounds facing barriers to accessing and benefiting from AI-powered tools and resources.
Addressing these potential issues requires careful consideration of ethical, social, and pedagogical implications, as well as robust safeguards to mitigate risks and ensure equitable access and outcomes for all students. Collaboration among educators, policymakers, technologists, and other stakeholders is essential to navigate the complexities of AI integration in education responsibly and ethically, and we can assure you that NAIS Pudong and Nord Anglia Education will remain at the forefront of development in this respect.