WS 8: Artificial intelligence, ethics and the future of work
Su Sonia Herring
- Artificial intelligence (AI) must be accountable, transparent, and modifiable; privacy and determinability by design are a must.
- Unintended and unexpected consequences in the development of AI and robotics are unavoidable.
- There must be an ethical code for algorithm developers.
- The education system needs to be revamped to equip future workers with the skills needed for the new forms of work that AI will bring.
- Interdisciplinary teams are needed to relieve the burden on engineers, and engineers need to be educated about ethics.
- AI technology needs a common, international framework; ethical clearance is not sufficient.
- ‘I’m just an engineer’ is not an excuse when developing AI.
- AI is, or will become, a race; expecting developers to adhere to an ethical code is not realistic.
- We need a kill switch for automated systems (see the sketch after this list).
- AI can be part of the solution to the problem of the future of work.
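The call for a kill switch is an architectural point as much as a policy one: an automated system should consult an externally controllable halt signal before every action, so that a human operator can stop it at any time. Below is a minimal, hypothetical Python sketch of that pattern; the `KillSwitch` class and `automated_loop` function are illustrative names, not a design endorsed by the workshop.

```python
import threading
import time

class KillSwitch:
    """Illustrative kill switch: a thread-safe flag an operator can trip
    to halt an automated system."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        """Operator intervention: signal the system to stop."""
        self._tripped.set()

    def is_tripped(self):
        return self._tripped.is_set()

def automated_loop(kill_switch, max_steps=100):
    """Hypothetical control loop: checks the kill switch before every
    action and halts immediately once it is tripped."""
    for step in range(max_steps):
        if kill_switch.is_tripped():
            print(f"Kill switch tripped at step {step}; halting.")
            return
        # ... perform one autonomous action here ...
        time.sleep(0.1)

if __name__ == "__main__":
    switch = KillSwitch()
    worker = threading.Thread(target=automated_loop, args=(switch,))
    worker.start()
    time.sleep(0.5)   # let the system run briefly
    switch.trip()     # operator pulls the kill switch
    worker.join()
```

The design choice here is that the system polls the switch rather than being forcibly terminated, which lets it stop at a safe point; a real deployment would also need an out-of-band path (e.g. process supervision) for the case where the loop itself is unresponsive.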