Rapporteur: Marco Lotti, Geneva Internet Platform
- Trustworthiness should be regarded as a prerequisite for innovation. When addressing it, we should look at two sides: one that regards the characteristics of the product (i.e. its ethically relevant characteristics) and one that relates to how trustworthiness is communicated to people. One solution could be to develop a standardised way of describing the ethically relevant characteristics of AI systems. As an example, an independent organisation formed by four Danish organisations launched a new company labelling scheme in 2019 that aims to make it easier for users to identify companies that treat customer data responsibly.
- Striking the right balance between trustworthiness and innovation represents an important regulatory challenge for AI applications. The European Commission’s White Paper addresses this aspect especially in high-risk scenarios where rapid responses are needed. At the same time, trustworthiness can itself be a driver of innovation.
- AI and data are interlinked: it is difficult to make sense of large data sets without AI, and AI applications are useless if fed poor-quality data or no data at all. Therefore, AI discussions need to be linked to data governance schemes addressing the sharing, protection, and standardisation of data. However, AI also has distinctive characteristics (such as ‘black box’ and self-learning elements) that make it necessary to update existing frameworks that regulate other technologies.