
Written by Heritiana Ranaivoson & Luciano Morganti


Draft Ethics Guidelines for Trustworthy AI

On 18 December 2018, the European Union’s High-Level Expert Group on Artificial Intelligence (AI HLEG) released its draft ethics guidelines. They were open for comments until 18 January 2019 (later extended to 1 February).

This post recaps MediaRoad’s response, arguing in particular that the media sector should also be considered, notably as a use case. It follows the structure of the guidelines.

 

Introduction: Rationale and Foresight of the Guidelines 

The Guidelines seem to address everyone, though they appear to be aimed at developers and deployers of AI solutions. This should be made more explicit. Two other audiences would deserve similar (but of course adapted) guidelines: (i) policy-makers and persons in charge of regulating AI; (ii) non-professional users of AI systems, i.e. the general public. Category (i) may be the primary target for the next document to be drafted by the AI HLEG. Category (ii) would deserve its own document, which would probably give a greater place to data literacy, e.g. in relation to these Ethics Guidelines.

The Guidelines state that a “mechanism will be put in place that enables all stakeholders to formally endorse and sign up to the Guidelines on a voluntary basis.” The guidelines are therefore not legally binding. We are curious to see how this will be done, and in particular what incentives AI developers and deployers will have to endorse these guidelines. We are afraid there is no way to ensure that, once endorsed, these guidelines are indeed respected and followed up.

Conversely, regulation can play an important role in ensuring the principles are enforced. For example, the GDPR is mentioned in these Guidelines as a law to be respected by developers and deployers of AI. Beyond that, we have high hopes for the GDPR as a tool to contribute to transparency (e.g. Article 22(2)). The GDPR obliges data controllers, where personal data are processed automatically (profiling), to inform the data subject about the rationale of the processing and its possible consequences. This obligation will allow further scrutiny (by users, researchers, etc.) of the content recommendation algorithms that dominate social media and search results.

 

Chapter I: Respecting Fundamental Rights, Principles and Values – Ethical Purpose

Section 3.4 on “Equality, non-discrimination and solidarity including the rights of persons belonging to minorities” could be improved by specifying that, in an AI context, services should be designed to be truly inclusive and accessible to all, independently of, notably, age and disability. Moreover, minorities should be considered and included not only in terms of access but also in terms of production. The latter (inclusion in production) is mentioned in Chapter II, section 2, as a non-technical method to achieve the requirements, but we think it should be a Principle (in this Chapter) or at least a Requirement (Chapter II, section 1).

The Principle of Non-maleficence should include the idea that an AI developer should adopt a “data minimalism” approach, i.e. only ask for the data they really need. The current text takes it for granted that data must be collected, stored, used, etc., and that what matters is how this is done. On the contrary, it is important to keep asking, throughout the development and deployment process: is it even necessary to collect, store, use, etc. these data? This is particularly important for vulnerable demographics. Otherwise, we run the risk of increasing distrust of AI.
In the same Principle, it is unclear why immigrants are put in the same category as children.
Finally, the same Principle mentions diversity and inclusion as principles, but it is equally important to have minorities involved among developers and deployers, not only as users. This is, however, addressed in Chapter II, section 2.

The Principle of Explicability should start with the necessity of informing users that AI is being used, even before explaining how it works. This is addressed only in Chapter III (p. 26).

 

Chapter II: Realising Trustworthy AI

It is good that diversity in data is mentioned as important for traceability (p. 20). However, diversity needs to go beyond the dataset and also apply to the team developing and/or deploying the system. While this idea appears as one of the non-technical methods, diversity as a whole (hence not only in data but also in teams, in the content provided, in the options offered, etc.) would deserve to be a Requirement in this Chapter or a Principle in Chapter I.

 

Chapter III: Assessing Trustworthy AI

We advise that media become one of the use cases. It is key to closely consider how AI is transforming the media value chain, from content production to the audience’s experience. Account should also be taken of abusive practices by online platforms involving content recommendation to users. Recent scandals, such as the one involving Facebook and Cambridge Analytica, have raised debates around the potential impact of algorithms on elections and on the shaping of social movements. If phenomena such as filter bubbles and echo chambers are real; if AI plays a role in the distribution and spread of fake news; and, more generally, if diversity really is a core value for the European Union; then AI in the media sector already has an impact on our democracies and is therefore a core issue.

 

General Comments

The Draft Ethics Guidelines for Trustworthy AI are an important document. Their major strength is that they go beyond a list of ethical principles and show clear concern for implementation. A particularly important Requirement is transparency, about how algorithms work and about the data they use, allowing users to understand the underlying biases.

Our main general comment is that media is an important field to consider when drafting guidelines on AI, whereas it is currently downplayed, e.g. on page 3 with the example of recommending a song, thus underestimating the importance of music recommendation.

New technologies, from smartphones and voice-controlled speakers to wearable devices, are vastly increasing the amount of digital data we produce. In this context, AI is transforming the way media professionals analyze and transform data, with an impact on society as a whole. For example, robot journalism (or news automation), although it started in the late 1980s, is becoming an important part of news production (notably for sports or stock-exchange coverage). It speeds up news production and generates vast amounts of content in a matter of seconds, to be distributed and consumed in print and online.

AI is also at the core of automated personalization processes. Faced with content overload, consumers are offered recommendation systems designed to help them select what they are going to watch or listen to. AI-based recommendation systems are used to create tailored services, which are then pushed to mobile or web applications.

AI has obvious benefits. It can play a key role in standardizing solutions for accessibility services (e.g. the semi-automatic generation of subtitles) as well as in applying new production methods. Robot journalism can free journalists from mundane tasks and give them more time for investigative journalism. It can easily adapt to human requests to improve reporting, and it can produce content in different languages, for example by collecting daily economic data and writing similar articles based on those data every day.
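As a simple illustration of the principle (all data, names and wording below are hypothetical), here is a minimal sketch of template-based news automation: structured data in, templated natural language out.

```python
# Minimal sketch of template-based news automation.
# All data below is hypothetical; real systems (e.g. for stock or
# sports reports) work on the same principle at a larger scale.

TEMPLATE = (
    "{index} closed at {close:.2f} on {date}, "
    "{direction} {change:.1%} from the previous session."
)

def write_report(record):
    """Turn one structured market record into a short news item."""
    change = (record["close"] - record["prev_close"]) / record["prev_close"]
    return TEMPLATE.format(
        index=record["index"],
        close=record["close"],
        date=record["date"],
        direction="up" if change >= 0 else "down",
        change=abs(change),
    )

daily_data = {"index": "BEL 20", "date": "18 December 2018",
              "close": 3231.42, "prev_close": 3210.10}
print(write_report(daily_data))
# -> BEL 20 closed at 3231.42 on 18 December 2018, up 0.7% from the previous session.
```

Run daily on fresh data, the same template yields a new article every day, which is precisely what makes news automation fast and cheap, and also what makes its editorial framing an ethical question.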

However, AI also raises challenges, which could prove dangerous for the media sector and, beyond it, for society as a whole. A first threat is the possibility of job losses for media workers, in particular journalists, replaced by news automation. The development of AI will also lead to the creation of new jobs; at this stage, however, it is difficult to predict the exact impact, whether positive or negative, on media workers.

AI is also a technology used in the development of so-called deepfakes (manipulated digital videos that overlay another person’s face onto a body or change what people actually said), making the line between the fake and the real increasingly blurred.

Finally, regarding the impact of personalization, there is a risk of filter bubbles developing, that is to say, situations where users do not obtain access to, and hence remain unaware of, certain types of content. Data-driven, fully automated personalization models do not sufficiently consider how to include diversity and serendipity in their algorithms so as to broaden the consumer’s experience.
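To make the algorithmic point concrete, here is a minimal sketch (all items, genres and scores are hypothetical) of how a recommender could re-rank candidates to trade relevance off against redundancy, in the spirit of maximal marginal relevance. The diversity weight is exactly the kind of design choice ethical guidelines could address.

```python
# Minimal sketch of diversity-aware re-ranking (maximal marginal relevance).
# All data below is hypothetical; a real recommender would supply
# relevance scores and item similarities from its own models.

def similarity(a, b):
    """Toy similarity: 1.0 if two items share a genre, else 0.0."""
    return 1.0 if a["genre"] == b["genre"] else 0.0

def rerank(candidates, k, lam=0.7):
    """Pick k items, balancing relevance (lam) against redundancy (1 - lam)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr(item):
            redundancy = max((similarity(item, s) for s in selected), default=0.0)
            return lam * item["relevance"] - (1 - lam) * redundancy
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected

candidates = [
    {"title": "Song A", "genre": "pop",  "relevance": 0.95},
    {"title": "Song B", "genre": "pop",  "relevance": 0.93},
    {"title": "Song C", "genre": "jazz", "relevance": 0.80},
    {"title": "Song D", "genre": "folk", "relevance": 0.75},
]

for item in rerank(candidates, k=3):
    print(item["title"], item["genre"])
# With lam = 0.7, the list mixes genres (pop, jazz, folk) instead of
# returning pop only, introducing the serendipity discussed above.
```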

One common feature of these threats caused by AI is that the solution often relies on AI technology itself, provided that it follows ethical guidelines such as the ones drafted here. Thus, AI can be used to develop fact-checking tools that prevent fake news from spreading. For example, Truly Media is a verification platform used to authenticate content published online. The development of such tools itself requires careful consideration of what fake news is and, more generally, of the ethical rules that should frame their use.

 

Authors:

Dr Heritiana Ranaivoson is Senior Researcher and Project Leader at imec-SMIT, Vrije Universiteit Brussel (Belgium). He holds an MSc in Economics and Management from the École Normale Supérieure de Cachan and a PhD in Industrial Economics from Université Paris 1 Panthéon-Sorbonne. He has led several projects for the European Commission, UNESCO, Google, etc. His main research interests are cultural diversity, media innovation, wearables and the economic impact of digital technology on cultural industries.

 

Luciano Morganti is Professor at the VUB in the Communication Department, where he teaches in the international master New Media and Society in Europe. He teaches courses related to New Media, the European Public Sphere, and Internet Governance. He is a visiting professor at the College of Europe’s European Political and Governance Studies department and its Development Office. Luciano graduated in Philosophy at La Sapienza, Rome (1994). He holds a master’s degree in European Advanced Studies (Human Resources Development) from the College of Europe, Bruges (1997), and a master’s degree in Interactive Multimedia Projects (Cybercommunication) from the ISC, Saint Louis, Brussels (2002). He obtained his PhD in Communication Studies from the Vrije Universiteit Brussel (2004). His main research interests are the European Public Sphere and citizen participation, Internet Governance, and the changes brought by new digital media to our societies. @MorgantiL; @BrusselsTalking

 

 
