Workshop 3: Trustworthy AI: Large Language Models for Children and Education
Rapporteur: Francesco Vecchi
Applications like ChatGPT have made serious technological advances in the last few months, and they are playing a significant role in the so-called AI revolution. Generating human language can be an invaluable tool for many applications, with the potential to revolutionise not only customer service but also language translation and content generation, while smoothing interactions between users and machines.
ChatGPT-4 is a large language model, not a large knowledge model. This means that it does not store knowledge; it maps the statistical relationships between tokens: LLMs simply identify patterns and derive rules from them. Notably, LLMs can hallucinate and state false facts, because they only find correlations. What may be surprising is that grammar, humour, and even literature are nothing but patterns, but we must keep in mind that AI-generated texts are always fictional, the result of a statistical equation. Therefore, they can be affected by bias, user surveillance, and user influence for nudging. One task for which they can surely be used is translation, where they perform very well.
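The point that an LLM maps token statistics rather than storing knowledge can be made concrete with a deliberately tiny sketch. The toy bigram model below (an illustration only, orders of magnitude simpler than any real LLM, with an invented corpus) generates text purely by sampling from observed token-to-token frequencies:

```python
from collections import Counter, defaultdict
import random

# Invented toy corpus -- any text would do for the illustration.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each token follows each other token (bigram statistics).
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_token(prev: str):
    """Sample the next token from the distribution observed after `prev`."""
    counts = follow_counts.get(prev)
    if not counts:  # dead end: this token was never seen with a successor
        return None
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate text purely from token statistics -- no knowledge is stored,
# so nothing anchors the output to facts about the world.
generated = ["the"]
for _ in range(8):
    nxt = next_token(generated[-1])
    if nxt is None:
        break
    generated.append(nxt)
print(" ".join(generated))
```

A real LLM replaces the bigram table with a neural network over billions of parameters, but the principle is the same: it predicts plausible next tokens, which is why its output can read fluently while being factually ungrounded.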
The Italian Data Protection Authority stopped the use of ChatGPT in Italy because, in its view, “the market is not a laboratory and people are not guinea pigs”. Forms of technology that can have a measurable impact on society should not be deployed before they reach a reasonable level of maturity. Secondly, one cannot forget that the current AI market is effectively monopolistic, led by five or six corporations: too many fundamental rights and freedoms are at stake. The main problem is that the market will grow faster than regulation. EU regulation on AI is going in the right direction, but it will not be implemented before 2025. What do we do until then? Finally, it is clear that children need special protection, and their age and identity data should not be appropriated by platforms and services that are not designed for them. Thus, children should be treated as if they were legally unable to enter into any kind of personal-data or digital-service contract.
Speaking of LLMs in education, the risks for children are significant since their bodies and brains are still developing: for instance, they are less able to distinguish reality from AI-generated content. Moreover, in terms of information-related risks, LLMs can perpetuate certain biases and disinformation, creating over-exposure to certain kinds of information. Finally, there are several risks related to human relations for children, who can mistake LLMs for teachers, with relational drawbacks such as depression, addiction, and anxiety (as with social media). Turning to education-specific risks related to tools and abilities, LLMs can help develop reading, writing, and analytical skills, but they raise the issue of the veracity and quality of information. It is not a question of shutting children off from AI, but we need to make sure that they are ready to use it safely. We must focus on designing LLMs with children's rights at the centre to prevent these risks. The digital literacy of parents, children, and teachers is important for managing this challenge, but it is also crucial to advocate for children's rights to the developers of LLMs and LLM-powered features. Of course, these are designed by someone with a specific purpose, and this should entail responsibility for the design, the outcome, and the oversight of the system. A sketch of what such responsibility could look like in code follows below.
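As a hedged illustration of what design responsibility and oversight might mean in practice, the sketch below wraps a hypothetical LLM call in age-appropriate instructions and an output check. Every name in it is invented for the example; the workshop named the principle (children's rights at the centre), not an API.

```python
# Illustrative sketch only: one possible shape of a child-facing LLM wrapper.
# `ask_llm` and `violates_child_policy` are hypothetical placeholders.

CHILD_SYSTEM_PROMPT = (
    "You are assisting a child. Use age-appropriate language, never ask for "
    "personal data, and say 'I am not sure' rather than guessing facts."
)

def ask_llm(system: str, user: str) -> str:
    """Placeholder for a call to whatever LLM service the product uses."""
    raise NotImplementedError("wire this to the provider of your choice")

def violates_child_policy(text: str) -> bool:
    """Placeholder for a moderation check on the model's output."""
    raise NotImplementedError("wire this to a content-moderation step")

def child_safe_reply(question: str) -> str:
    reply = ask_llm(CHILD_SYSTEM_PROMPT, question)
    # Responsibility for the outcome stays with the designer: check the
    # output before it ever reaches the child.
    if violates_child_policy(reply):
        return "That is a good question to explore with a teacher or parent."
    return reply
```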
LLMs are not only used by students but also by educators, for instance to quickly create lesson content, lesson plans, revision materials, or quiz questions. Tools based on LLMs are used in the classroom, too: tools such as ChatGPT are increasingly employed in ed-tech products, and they can also provide more personalised learning options, implemented through virtual tutors. In terms of practical uses, the focus has primarily been on saving time for educators, or on creating more personalised learning support for students (see the sketch after this paragraph). There is, of course, the issue of plagiarism. Still, the EDUCATE Programme states that human intelligence and its peculiarities should be celebrated: it encourages the education system to further develop those sorts of skills in learners.
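To make the time-saving use concrete, here is a minimal, runnable sketch of how an LLM-powered tool might phrase a quiz-drafting request. The function and template are assumptions for illustration, not any product's actual prompt.

```python
# A sketch of the kind of prompt an educator's tool might send to an LLM
# to draft revision quizzes. The wording is invented for this example.

def quiz_prompt(topic: str, age_group: str, n_questions: int = 5) -> str:
    """Build a quiz-drafting prompt for an LLM."""
    return (
        f"Create {n_questions} multiple-choice revision questions on "
        f"'{topic}' for {age_group} students. Give four options per "
        f"question, mark the correct answer, and use age-appropriate language."
    )

# The model's reply would be a draft only: the teacher, not the model,
# remains responsible for checking the veracity and quality of each question.
print(quiz_prompt("photosynthesis", "12- to 13-year-old"))
```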
Finally, what matters is finding the right way to approach these tools gradually and consciously, in order to protect the most fragile users while improving education services. Regulation is certainly needed, but it cannot be too specific; otherwise it will be outdated within a few years and thus ineffective. The solution is to keep up with the times by agreeing on the core principles.