- Type of content: Self-generated abuse material and pathological content are emerging as the most trending harms to vulnerable groups online. This is further compounded by the privacy paradox, where privacy protection protocols create unintended vulnerabilities.
- Privacy-Respecting, Inclusive, and Accessible Technical Approaches: Client-side scanning for detecting CSA online can be implemented in privacy-respecting ways, although concerns remain about anti-grooming techniques that analyse visual and textual data. AI deployment raises issues of proxies, bias, and accuracy, and requires inclusive and accurate data for effective task-oriented models that respect privacy, in particular by leveraging metadata. Authorities play a crucial role in verifying the effectiveness and privacy compliance of these measures. Looking ahead, a European ID based on private blockchain technology may be a future-proof solution for robust verification and privacy protection.
- Diversity and Multi-Stakeholder Philosophy: A diversified multi-stakeholder approach is required to ensure that solutions address harmful online content comprehensively. Significant weight should be given to civil society, including individuals from a broad range of disciplines (e.g. psychology, ethics, cybersecurity, cryptography, political science).
Source: https://comment.eurodig.org/eurodig-2024-messages/workshops/workshop-1b-protecting-vulnerable-groups-online-from-harmful-content-new-technical-approaches/
· Type of content: Child Sexual Abuse Material (CSAM) was mentioned; self-generated abuse material, which may be produced and shared voluntarily and then misused, or produced under coercion by organised online groups (sextortion), both with severe consequences such as suicidal actions; pathological content (live-broadcast content appealing to violence, alcohol consumption, sexual abuse, etc.).
This was an attempt not only to react, but also to reflect on other comments and to suggest a possible solution. In view of your response, I suggest formulating it as follows. By the way, sources with current data are listed in the workshop wiki; in my view, it is not necessary to repeat them here.
New proposal, paragraph 1, WS 1b:
Child sexual abuse material and other unlawful content massively violate the rights of those affected and of those who come into contact with it by accident. All stakeholders recognise that measures and regulations must be put in place to protect vulnerable groups, e.g. children. They also acknowledge that the rights and needs of all, including protection from violence and abuse as well as privacy and participation, must be equally guaranteed.
With respect, the proposed revision does not address the objections in the comment from 26th June.
The phrase “self-generated abuse material” is unclear and problematic. Does it refer to content generated by the subject of the abuse themselves? Or was the intent to refer to what is more usually called “user generated content” (UGC) – that is, content generated by the users of a platform/service, but not necessarily depicting or referring to themselves?
Further, “pathological” is being used here in an unconventional way, without explaining how its normal use (“relating to or caused by disease”) is relevant to this issue.
Replacing the word “widespread” with “trending” is not a material improvement, and does not substantiate the claim.
The main question is: is this unclear message an accurate reflection of unclear statements in the workshop, or did the workshop produce a clear message which is expressed unclearly here?
Proposal for new paragraph 1, WS 1b:
Self-generated abusive material and pathological content are emerging as the most widespread harms to vulnerable groups online. All stakeholders are aware that measures and regulations must be taken to protect vulnerable groups. They are also aware that the rights and needs for protection against violence and abuse as well as privacy and participation must be guaranteed.
This whole paragraph makes no sense. The first sentence needs a reference note to clarify which study supports the statement.
On the second sentence, there is a misguided concept. The so-called “privacy paradox”, a term used by EDPS Buttarelli, describes the contradiction between privacy-related attitudes and behavior: we know our privacy is being exploited, and we feel we are losing control of our data, yet statistics show we are not changing our online behavior to claim it back. Instead, we keep feeding the data-collection machines that undermine our privacy. The use of this term in the second sentence is inappropriate and out of context.
Finally, in line with previous EuroDIG positions, we must acknowledge that strong end-to-end encryption protects citizens, enterprises and governments. There should be no excuse to undermine it. Therefore, the second sentence also needs to be reformulated.