
3.1 HUMAN RIGHTS AND HUMAN AGENCY

Many of the most important documents that codify human rights were written before the age of digital interdependence. They include the Universal Declaration of Human Rights; the International Covenant on Economic, Social and Cultural Rights and the International Covenant on Civil and Political Rights; the Convention on the Elimination of All Forms of Discrimination against Women; and the Convention on the Rights of the Child.

The rights these treaties and conventions codify apply in full in the digital age – and often with fresh urgency.

Digital technologies are widely used to advocate for, defend and exercise human rights – but also to violate them. Social media, for example, has provided powerful new ways to exercise the rights to free expression and association, and to document rights violations. It is also used to violate rights by spreading lies that incite hatred and foment violence, often at terrible speed and under the cloak of anonymity.

The most outrageous cases make the headlines. The live streaming of mass shootings in New Zealand.98 Incitement of violence against an ethnic minority in Myanmar.99 The #gamergate scandal, in which women working in video games were threatened with rape.100 The suicides of a British teenager who had viewed self-harm content on social media101 and an Indian man bullied after posting videos of himself dressed as a woman.102

But these are manifestations of a problem that runs wide and deep: one survey of UK adult internet users found that 40 percent of 16-24 year-olds reported encountering some form of harmful online content, with examples ranging from racism to harassment and child abuse.103 Children are at particular risk: almost a third of under-18s report having recently been exposed to “violent or hateful contact or behaviour online”.104 Older people, meanwhile, are more vulnerable to online fraud and misinformation.

Governments have increasingly sought to cut off social media in febrile situations – such as after a terrorist attack – when the risks of rapidly spreading misinformation are especially high. But denying access to the internet can also be part of a sustained government policy that itself violates citizens’ rights, including by depriving people of access to information. Across the globe, governments directed 188 separate internet shutdowns in 2018, up from 108 in 2017.105

PROTECTING HUMAN RIGHTS IN THE DIGITAL AGE

Universal human rights apply equally online as offline – freedom of expression and assembly, for example, are no less important in cyberspace than in the town square. That said, in many cases it is far from obvious how human rights laws and treaties drafted in a pre-digital era should be applied in the digital age.

There is an urgent need to examine how time-honoured human rights frameworks and conventions – and the obligations that flow from those commitments – can guide actions and policies relating to digital cooperation and digital technology. The Panel’s Recommendation 3A urges the UN Secretary-General to begin a process that invites views from all stakeholders on how human rights can be meaningfully applied to ensure that no gaps in protection are caused by new and emerging digital technologies.

Such a process could draw inspiration from many recent national and global efforts to apply human rights in the digital age.106 Illustrative examples include:

  • UNESCO has used its Rights, Openness, Access and Multi-stakeholder governance (ROAM) framework to discuss AI’s implications for rights including freedom of expression, privacy, equality and participation in public life.110
  • The Council of Europe has developed recommendations and guidelines, and the European Court of Human Rights has produced case law, interpreting the European Convention on Human Rights in the digital realm.111

We must collectively ensure that advances in technology are not used to erode human rights or avoid accountability. Human rights defenders should not be targeted for their use of digital media.112 International mechanisms for human rights reporting by states should better incorporate the digital dimension.

In the digital age, the role of the private sector in human rights is becoming increasingly pronounced. Because digital technologies and services reach scale so quickly, decisions taken by private companies increasingly affect millions of people across national borders.

The roles of government and business are described in the 2011 UN Guiding Principles on Business and Human Rights. Though not binding, they were unanimously endorsed by the Human Rights Council and the UN General Assembly. They affirm that while states have the duty to protect rights and provide remedies, businesses also have a responsibility to respect human rights, evaluate risk and assess the human rights impact of their actions.113

There is now a critical need for clearer guidance about what is expected of private companies, with regard to human rights, as they develop and deploy digital technologies. The need is especially pressing for social media companies, which is why our Recommendation 3B calls for them to put in place procedures, staff and better ways of working with civil society and human rights defenders to prevent or quickly redress violations.

We heard from one interviewee that companies can struggle to understand local context quickly enough to respond effectively in fast-developing conflict situations and may welcome UN or other expert insight in helping them assess concerns being raised by local actors. One potential venue for information sharing is the UN Forum on Business and Human Rights, through which the Office of the High Commissioner for Human Rights in Geneva hosts regular discussions among the private sector and civil society.114

Civil society organisations would like to go beyond information sharing and use such forums to identify patterns of violations and hold the private sector to account.115 Governments, too, are becoming less willing to accept a hands-off regulatory approach: in the UK, for example, legislators are exploring how existing legal principles such as “duty of care” could be applied to social media firms.116

As any new technology is developed, we should ask how it might inadvertently create new ways of violating rights – especially of people who are already often marginalised or discriminated against. Women, for example, experience higher levels of online harassment than men.117 The development of personal care robots is raising questions about the rights of elderly people to dignity, privacy and agency.118

The rights of children need especially close attention. Children go online at ever younger ages, and under-18s make up one-third of all internet users.119 They are particularly vulnerable to online bullying and sexual exploitation. Digital technologies should promote the best interests of children and respect their agency to articulate their needs, in accordance with the Convention on the Rights of the Child.

Online services and apps used by children should be subject to strict design and data consent standards. Notable examples include the American Children’s Online Privacy Protection Rule of 2013 and the draft Age Appropriate Design Code announced by the UK Information Commissioner in 2019, which defines standards for apps, games and many other digital services even if they are not intended for children.120

HUMAN DIGNITY, AGENCY AND CHOICE

We are delegating more and more decisions to intelligent systems, from how to get to work to what to eat for dinner.121 This can improve our lives, by freeing up time for activities we find more important. But it is also forcing us to rethink our understandings of human dignity and agency, as algorithms are increasingly sophisticated at manipulating our choices – for example, to keep our attention glued to a screen.122

It is also becoming apparent that ‘intelligent’ systems can reinforce discrimination. Many algorithms have been shown to reflect the biases of their creators.123 This is just one reason why employment in the technology sector needs to be more diverse – as noted in Recommendation 1C, which calls for improving gender equality.124 Gaps in the data on which algorithms are trained can likewise automate existing patterns of discrimination, as machine learning systems are only as good as the data that is fed to them.
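
To make this concrete, the following minimal Python sketch – a purely illustrative toy, with invented groups and figures – shows how a “loan approval” model that simply learns historical approval rates will faithfully reproduce any past disparity present in its training data.

from collections import defaultdict

# Hypothetical historical decisions: (group, approved). Group B was
# approved far less often in the past, for reasons unrelated to merit.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """'Learn' nothing more than the historical approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Approve whenever the learned rate for the applicant's group clears the bar."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.3}
print(predict(model, "A"))  # True  -- applicants from group A are approved
print(predict(model, "B"))  # False -- group B inherits the historical bias

No individual merit is considered at any point, yet the system’s outputs look statistically “accurate” against its own data – which is precisely why such discrimination can be subtle to detect, as the next paragraph notes.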

Often the discrimination is too subtle to notice, but the real-life consequences can be profound when AI systems are used to make decisions such as who is eligible for home loans or public services such as health care.125 The harm caused can be complicated to redress.126 A growing number of initiatives, such as the Institute of Electrical and Electronics Engineers (IEEE)’s Global Initiative on Ethics of Autonomous and Intelligent Systems, are seeking to define how developers of artificial intelligence should address these and similar problems.127

Other initiatives are looking at questions of human responsibility and legal accountability – a complex and rapidly-changing area.128 Legal systems assume that decisions can be traced back to people. Autonomous intelligent systems raise the danger that humans could evade responsibility for decisions made or actions taken by technology they designed, trained, adapted or deployed.129 In any given case, legal liability might ultimately rest with the people who developed the technology, the people who chose the data on which to train the technology, and/or the people who chose to deploy the technology in a given situation.

These questions come into sharpest focus with lethal autonomous weapons systems – machines that can autonomously select targets and kill. UN Secretary-General António Guterres has called for a ban on machines with the power and discretion to take lives without human involvement, a position which this Panel supports.130

The Panel supports, as stated in Recommendation 3C, the emerging global consensus that autonomous intelligent systems be designed so that their decisions can be explained, and humans remain accountable. These systems demand the highest standards of ethics and engineering. They should be used with extreme caution to make decisions affecting people’s social or economic opportunities or rights, and individuals should have meaningful opportunity to appeal. Life and death decisions should not be delegated to machines.
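
As a purely illustrative sketch of these design properties – the scenario, names and thresholds are invented, not drawn from any real system – the Python fragment below attaches a human-readable explanation to every automated decision, records an accountable human owner, and routes appeals to human review rather than treating the machine’s verdict as final.

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str            # e.g. "granted" / "denied"
    explanation: str        # reasons a person can understand and contest
    accountable_owner: str  # the humans answerable for the system
    appealed: bool = False

def decide_benefit(income: int, dependants: int) -> Decision:
    # Transparent, rule-based logic, so the explanation matches the decision.
    threshold = 20_000 + 5_000 * dependants
    if income < threshold:
        return Decision("granted",
                        f"income {income} is below the threshold of {threshold}",
                        "benefits-team@example.org")
    return Decision("denied",
                    f"income {income} is at or above the threshold of {threshold}",
                    "benefits-team@example.org")

def appeal(decision: Decision) -> Decision:
    # Appeals always reach a human; the machine's verdict is never final.
    decision.appealed = True
    print(f"Routed to human review ({decision.accountable_owner}): "
          f"{decision.outcome} because {decision.explanation}")
    return decision

d = decide_benefit(income=24_000, dependants=0)
if d.outcome == "denied":
    appeal(d)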

THE RIGHT TO PRIVACY

The right to privacy131 has become particularly contentious as digital technologies have given governments and private companies vast new possibilities for surveillance, tracking and monitoring, some of which are invasive of privacy.132 As with so many areas of digital technology, there needs to be a society-wide conversation, based on informed consent, about the boundaries and norms for such uses of digital technology and AI. Surveillance, tracking or monitoring by governments or businesses should not violate international human rights law.

It is helpful to articulate what we mean by “privacy” and “security”. We define “privacy” as an individual’s right to decide who is allowed to see and use their personal information, and “security” as the protection of data, both on servers and in communication via digital networks.
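
The distinction can be illustrated with a short, hypothetical Python sketch (the names and data are invented, and it assumes the third-party cryptography package is installed): a consent check enforces privacy by deciding who may read a record, while encryption provides security by protecting the data itself in storage and in transit.

from cryptography.fernet import Fernet

record = {"owner": "alice", "data": "blood type: O+"}
consented_viewers = {"alice", "dr_bob"}  # privacy: the owner decides who may see

def read_record(requester: str) -> str:
    """Privacy control: enforce the owner's consent choices."""
    if requester not in consented_viewers:
        raise PermissionError(f"{requester} has no consent to view this record")
    return record["data"]

# Security control: encrypt the data at rest and in transit, so that even a
# party who bypasses the application cannot read it without the key.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(record["data"].encode())

print(read_record("dr_bob"))                     # permitted by consent
print(Fernet(key).decrypt(ciphertext).decode())  # readable only with the key
# read_record("insurer")  # would raise PermissionError: no consent given

A system can be secure without being private (well-encrypted data handed to anyone who asks) and private without being secure (consent respected, but data stored in the clear), which is why the two controls are needed together.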

Notions and expectations of privacy also differ across cultures and societies. How should an individual’s right to privacy be balanced against the interest of businesses in accessing data to improve services and government interest in accessing data for legitimate public purposes related to law enforcement and national security?133

Societies around the world debate these questions heatedly when hard cases come to light, such as Apple’s 2016 refusal of a request from the United States Federal Bureau of Investigation (FBI) to assist in unlocking an iPhone belonging to a suspect in a shooting case.134 Different governments are taking different approaches: some are forcing technology companies to provide technical means of access, sometimes referred to as “backdoors”, so the state can access personal data.135

Complications arise when data is located in another country: in 2013, Microsoft refused an FBI request to provide a suspect’s emails that were stored on a server in Ireland. The United States of America (USA) has since passed a law obliging American companies to comply with warrants to provide data of American citizens even if it is stored abroad.136 It enables other governments to separately negotiate agreements to access their citizens’ data stored by American companies in the USA.

There currently seems to be little alternative to handling cross-border law enforcement requests through a complex and slow-moving patchwork of bilateral agreements – the attitudes of people and governments around the world differ widely, and the decision-making role of global technology companies is evolving. Nonetheless, it is possible that regional and multilateral arrangements could develop over time.

For individuals, what companies can do with their personal data is not just a question of legality but of practical understanding – managing permissions for every single organisation we interact with would be incredibly time consuming and confusing. How to give people greater meaningful control over their personal data is an important question for digital cooperation.

Alongside the right to privacy is the important question of who realises the economic value that can be derived from personal data. Consumers typically have little awareness of how their personal information is sold or otherwise used to generate economic benefit.

There are emerging ideas to make data transactions more explicit and share the value extracted from personal data with the individuals who provide it. These could include business models which give users greater privacy by default: promising examples include the web browser Brave and the search engine DuckDuckGo.137 They could include new legal structures: the UK138 and India139 are among countries exploring the idea of a third-party ‘data fiduciary’ whom users can authorise to manage their personal data on their behalf.
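
As a purely hypothetical sketch of how such a fiduciary might operate, the fragment below lets a user state a standing policy once, which an authorised third party then applies – with refusal as the default – to every incoming request for that user’s data.

# Alice's standing policy, set once with her fiduciary (all names invented).
RULES = {"research": True, "advertising": False}

def fiduciary_decide(requester: str, purpose: str) -> bool:
    """Grant or refuse a data request on the user's behalf, per her policy."""
    allowed = RULES.get(purpose, False)  # unknown purposes are refused by default
    print(f"{requester} ({purpose}): {'granted' if allowed else 'refused'}")
    return allowed

fiduciary_decide("university-lab", "research")  # granted
fiduciary_decide("ad-network", "advertising")   # refused
fiduciary_decide("unknown-app", "profiling")    # refused (default deny)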

 
 