
Workshop 3: Trustworthy AI: Large Language Models for Children and Education

Rapporteur: Francesco Vecchi

Applications like ChatGPT have seen major technological advances in the last few months and are playing a significant role in the so-called AI revolution. Generating human language can be an invaluable tool for various applications, with the potential not only to revolutionise customer service, but also language translation and content generation, while smoothing interactions between users and machines.

ChatGPT-4 is a large language model, not a large knowledge model. This means that it does not store knowledge, but maps the statistical relationships between tokens: LLMs simply identify patterns and derive rules from them. Interestingly, LLMs can hallucinate and state false facts, because they only find correlations. What may be surprising is that grammar, humour, even literature are nothing but patterns, but we must keep in mind that AI-generated texts are always fictional, the result of a statistical computation. They can therefore be affected by bias, and be used for user surveillance and for nudging users. One task they can certainly be used for is translation, where they perform very well.
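To make this point concrete, here is a minimal, purely illustrative sketch of a bigram model, a toy stand-in for the statistical pattern-matching that real LLMs perform at vastly greater scale (the corpus and all names are invented for the example):

```python
import random
from collections import Counter, defaultdict

# Toy training corpus: all the model ever "learns" is token sequences.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Bigram statistics: how often each token follows each preceding token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation purely from observed co-occurrence counts."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        tokens, weights = zip(*candidates.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# Possible output: "the cat sat on the rug ." -- fluent, yet the corpus never
# said the cat was on the rug: a miniature "hallucination" from correlations.
```

Real LLMs replace these raw counts with billions of learned parameters, but the principle carries over: output is sampled from learned patterns of co-occurrence, not retrieved from a store of verified facts.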

The Italian Data Protection Authority stopped the use of ChatGPT in Italy because it holds that “the market is not a laboratory and people are not guinea pigs”: forms of technology that can have a measurable impact on society should not be deployed before they reach a reasonable level of maturity. Secondly, one cannot forget that the current AI market is effectively monopolistic, led by 5-6 corporations: too many fundamental rights and freedoms are at stake. The main problem is that the market will grow faster than regulation. EU regulation on AI is going in the right direction, but it will not be implemented before 2025. What do we do until then? Finally, it is clear that children need special protection, and their age and identity data should not be appropriated by platforms and services that are not designed for them. Thus, children should be treated as if they were legally unable to enter into any kind of personal-data or digital-service contract.

Speaking of LLMs in education, the risks for children are significant, since their bodies and brains are still developing: for instance, they are less able to distinguish reality from AI-generated content. Moreover, as for information-related risks, LLMs can perpetuate certain biases and disinformation, creating over-exposure to certain kinds of information. Finally, there are several risks related to human relations for children, who can mistake LLMs for teachers, with relational drawbacks such as depression, addiction, and anxiety (as with social media). Turning to education-specific risks related to tools and abilities, LLMs can help develop reading, writing, and analytical skills, but they raise the issue of the veracity and quality of information. It is not a question of shutting children off from AI, but we need to make sure that they are ready to use it safely. We must focus on designing LLMs with children’s rights at the centre to prevent these risks. Digital literacy for parents, children, and teachers is important to manage the challenge, but it is also crucial to advocate for children’s rights with the developers of LLMs and LLM-powered features. Of course, these systems are designed by someone with a specific purpose, and this should entail responsibility for the design, the outcome, and the oversight of the system.

LLMs are used not only by students but also by educators, for instance to quickly create lesson content, lesson plans, revision materials, or quiz sections. Tools based on LLMs are used in the classroom, too: systems such as ChatGPT are increasingly embedded in ed-tech tools, and they can also provide more personalised learning options, for instance through virtual tutors. Speaking of practical uses, the focus has primarily been on saving time for educators or creating more personalised learning support for students. There is, of course, the issue of plagiarism. Still, the EDUCATE Programme states that human intelligence and its peculiarities should be celebrated: it encourages the education system to further develop those sorts of skills in learners.

Finally, what matters is finding the right way to approach these tools gradually and consciously, in order to protect the most fragile users while improving education services. Regulation is certainly needed, but it cannot be too specific, otherwise it will be outdated within a few years and thus ineffective. The solution is to keep up with the times by agreeing on core principles.

Source: https://comment.eurodig.org/eurodig-2023-messages/workshops/workshop-3-trustworthy-ai-large-language-models-for-children-and-education/