Workshop 3: Trustworthy AI: Large Language Models for Children and Education
Rapporteur: Francesco Vecchi
Applications like ChatGPT have seen major technological advances in recent months, and they are playing a significant role in the so-called AI revolution. Generating human language can be an invaluable tool for many applications: it has the potential to revolutionise not only customer service but also language translation and content generation, while smoothing interactions between users and machines.
ChatGPT-4 is a large language model, not a large knowledge model. This means that it does not store knowledge; it maps the statistical relationships between tokens: LLMs simply identify patterns and derive rules from them. Notably, LLMs can hallucinate and state false facts, because they only find correlations. What may be surprising is that grammar, humour, and even literature are nothing but patterns, but we must keep in mind that AI-generated texts are always fictional, the result of a statistical computation. They can therefore be affected by bias, user surveillance, and the nudging of users. One task at which they certainly excel is translation, where they perform remarkably well.
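The pattern-matching idea can be illustrated with a deliberately tiny sketch: a bigram model in Python that "continues" text purely from co-occurrence statistics. Real LLMs use neural networks with billions of parameters rather than raw counts, but the principle of predicting the next token from learned statistical patterns, with no stored facts, is the same. The corpus and function names here are illustrative, not from any actual system.

```python
from collections import Counter, defaultdict

# Toy corpus: the model's entire "knowledge" is co-occurrence statistics.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each other token (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(token):
    """Return the statistically most likely next token, or None if unseen."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

# The model continues text from learned patterns, not from understanding:
print(most_likely_next("sat"))  # "on" — "on" always followed "sat" in the corpus
print(most_likely_next("the"))  # whichever token most often followed "the"
```

Scaled up enormously, this is why a language model can produce fluent, grammatical text that is nonetheless false: a statistically plausible continuation need not be a true one.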
The Italian Data Protection Authority halted the use of ChatGPT in Italy on the grounds that “the market is not a laboratory and people are not guinea pigs”. Technologies that can have a measurable impact on society should not be deployed before they reach a reasonable level of maturity. Secondly, one cannot forget that the current AI market is effectively monopolistic, led by five or six corporations: too many fundamental rights and freedoms are at stake. The main problem is that the market will grow faster than regulation. EU regulation on AI is going in the right direction, but it will not be implemented before 2025. What do we do until then? Finally, it is clear that children need special protection, and their age and identity data should not be appropriated by platforms and services that are not designed for them. Thus, children should be treated as legally unable to enter into any kind of personal-data or digital-service contract.
Turning to LLMs in education, the risks for children are significant since their bodies and brains are still developing: for instance, they are less able to distinguish reality from AI-generated content. Moreover, regarding information-related risks, LLMs can perpetuate biases and disinformation, creating over-exposure to certain kinds of information. Finally, there are several risks related to human relations: children can mistake LLMs for teachers, with relational drawbacks such as depression, addiction, and anxiety (as with social media). As for education-specific risks related to tools and abilities, LLMs can help develop reading, writing, and analytical skills, but they raise the issue of the veracity and quality of information. It is not a question of shutting children off from AI; rather, we need to make sure that they are ready to use it safely. We must focus on designing LLMs with children’s rights at the centre to prevent these risks. Digital literacy for parents, children, and teachers is important to manage the challenge, but it is also crucial to advocate for children’s rights to the developers of LLMs and LLM-powered features. These systems are, after all, designed by someone with a specific purpose, and this should entail responsibility for the design, the outcome, and the oversight of the system.
LLMs are used not only by students but also by educators, for instance to quickly create lesson content, lesson plans, revision materials, or quiz sections. Tools based on LLMs are used in the classroom, too: tools such as ChatGPT are increasingly embedded in ed-tech products, and they can also provide more personalised learning options, for example through virtual tutors. In terms of practical uses, the focus has primarily been on saving time for educators or creating more personalised learning support for students. There is, of course, the issue of plagiarism. Still, the EDUCATE Programme states that human intelligence and its peculiarities should be celebrated: it encourages the education system to further develop those kinds of skills in our learners.
Finally, what matters is finding the right way to approach these tools gradually and consciously, so as to protect the most vulnerable users while improving education services. Regulation is certainly needed, but it cannot be too specific; otherwise it will be outdated within a few years and thus ineffective. The solution is to keep up with the times by agreeing on the core principles.