By Kati Bremme, Innovation & Prospective Directorate

Algorithms are made to solve problems. Viewed with suspicion by some and hailed as a miracle solution by others, Artificial Intelligence (AI) is everywhere, impacting every industry. Some sectors, though, struggle more than others to embrace it fully, the media being one example. Compared to the financial or health sectors, the media are less flexible and dynamic in acquiring the tools needed to integrate AI. In its latest AI Predictions report, PwC pinpoints these differences: 20% of the executives interviewed plan to deploy AI across their enterprise, but only 7% in the media sector.

However, the fields of application for AI in the written press, cinema, radio, television and advertising are broad: automation of business processes and customer relationships, social network monitoring and listening, information verification, predictive analysis of success, video creation and post-production, voice and conversation assistants, automated drafting, personalization, recommendation, optimization of content dissemination, emotion tracking and accessibility.

The following panorama does not claim to be exhaustive but offers an illustration of the ways in which AI is used throughout the value chain of information and entertainment media.
These applications have the potential to inject momentum into an industry that is reinventing itself.

Why now?

Though it emerged in the 1950s, Artificial Intelligence has enjoyed a second spring in recent years thanks to a combination of three favorable factors: the exponential growth of computing power, the mass of available data, and open-source software such as TensorFlow, Keras, Torch and PyTorch (typically used with the Python language) that makes the technology widely accessible.

The algorithms, and the platforms needed to run them, are now accessible in the cloud (often provided by GAFA), allowing the media to embark on the algorithmic adventure. Machine learning has evolved into deep learning, characterized by an AI that no longer needs humans to supply its calculations but instead feeds on billions of data points to build cognitive functions on its own. Performing AI becomes learning AI. Google’s AutoML system has even designed a neural network without human intervention. AI is becoming contextual, multidisciplinary, and perhaps soon self-aware…

Uses of AI throughout the media as viewed by EBU

Audience-wise, AI has the advantage, among other things, of being easy for its users to grasp. Humans don’t need to adapt to AI or acquire new skills (as was the case in computing prehistory with MS-DOS, for example). We interact with AI using the most simple and natural tools: our voice, or even images. Some ethical questions remain, which we will address at the end of this text.

Four major categories for the use of AI in the media are emerging: Marketing and Advertising, Research and Documentation, Innovation in user experience and Services.

The time has therefore come to adopt this new technology for the benefit of the audience. In the words of Antonio Arcidiacono, the EBU’s new technical director, “AI is becoming mainstream“. The proof, in 12 examples of use:

Artificial intelligence as a tool for augmented information

The fear of being replaced by robots, shared by journalists as well, is not new. AI will indeed take over certain tasks and render certain trades obsolete. In that regard, the year 2020 will be pivotal: according to Gartner, AI will by then have eliminated 1.8 million existing jobs while creating 2.3 million new ones. The future of journalists, however, is not in jeopardy. Though “robot reporters” are a reality, and many newsrooms already use them to speed up the production process, the content they are assigned remains confined to very specific types.

The Associated Press has been publishing robot-written wires for standardized financial news announcements since 2015. That same year, Le Monde called upon a robot journalist from Syllabs to cover the departmental and regional elections. With Heliograf, developed in 2016 for the Olympic Games, The Washington Post uses AI to cover small-scale events, such as local student sporting events, that attract audiences too small to justify sending a human reporter. Finland’s broadcaster YLE uses its Voitto bot to generate 100 articles and 250 images per week. However, cultural differences are observable in how newsrooms adopt new technology: it varies between Northern and Southern countries, but also between public and private media services, the latter being more driven by a performance-based logic.

Though one speaks of robot reporters, what they actually accomplish is not so much the creation of content as the assembly of existing material into predetermined templates. The technology is advancing, however, and language generators are increasingly able to take context into account in order to select the most suitable format.

AI can also help journalists analyze data and detect trends from multiple sources of information, ranging from conventional open sources to newer ones such as the data published by Wikileaks. With its capacity to scan and analyze massive quantities of data, AI enables the constant monitoring of trends on social networks and the detection of weak signals. In that, it contributes to one of the public service’s missions: enabling the public to easily find the information it is looking for and be better informed. The Associated Press uses NewsWhip to detect trends on Twitter, Facebook, Pinterest and LinkedIn. Reuters uses News Tracer to detect trends and breaking news on Twitter and to support the production of content. The system, designed with Alibaba, collects, categorizes, annotates and sorts news items.

Beyond detecting trends on social networks, AI can also analyze massive volumes of data in a way humans cannot. A new type of investigative journalism is thereby emerging, built on collaboration between humans and machines. Monitoring can tap into sources ranging from conventional open data to the data each person generates via connected objects (smartphones, connected watches, electric scooters…). The Panama Papers resulted from the processing of 2.6 terabytes of data and algorithmic pattern tracking.

As part of the journalist’s work is automated, AI forces us to rethink and reaffirm journalistic values and return to an “authentic” form of journalism that takes individual users into consideration. Beware, however, of adding uselessly to an already enormous mass of information: AI-generated content must remain relevant, and that is only possible through smart collaboration between human and machine. The right balance needs to be struck between human judgment and automation, combining intuition, experience and creativity to increase efficiency in the collection, processing and validation of information.

Artificial intelligence as a tool to counter fake news

Though AI can generate fake news, it can also help detect it. From false information circulated by bots with a Slavic accent to deepfakes imitating Barack Obama, the progress made by AI towards harmful ends is impressive. So much so that OpenAI recently withheld the full release of its GPT-2 model, its creators being alarmed by the level of sophistication it had attained. AI has at times been touted as a miracle remedy, including by Mark Zuckerberg at his first hearing before the US Congress in the wake of the Cambridge Analytica scandal, during which he kept responding to every embarrassing question: “I don’t know, our AI team will fix it”. Needless to say, truth does not magically emerge from Big Data. However, the very technology used to create a fake is also the one used to detect it, and AI is thus an important ally in the fight against misinformation.

As we know, the problem with fake news is not so much that people no longer trust the media, but that they so readily trust any fake news. With its advanced analytical capabilities, AI can automate, at least in part, the verification of information and the validation of the authenticity of photos and videos, using image recognition, metadata analysis, and real-time comparison of information against databanks.
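One building block of automated image verification can be illustrated with a perceptual hash: two versions of the same image produce near-identical fingerprints even after light edits, while different images diverge. This is only a toy sketch (the tiny grayscale “images” and the average-hash approach are illustrative, not the method of any specific tool mentioned above):

```python
def average_hash(pixels):
    # pixels: 2D list of grayscale values (0-255), e.g. a downscaled image
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # each bit records whether a pixel is brighter than the image average
    return [1 if p > avg else 0 for p in flat]

def hamming_distance(h1, h2):
    # number of differing bits between two hashes
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [[10, 200], [30, 220]]
slightly_edited = [[12, 198], [28, 225]]   # e.g. recompressed copy
unrelated = [[200, 10], [220, 30]]

h = average_hash(original)
print(hamming_distance(h, average_hash(slightly_edited)))  # 0: likely same image
print(hamming_distance(h, average_hash(unrelated)))        # 4: different image
```

Real verification pipelines combine such fingerprints with metadata checks and reverse search against large databanks; the principle of comparing compact signatures rather than raw pixels is the same.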

Combined with blockchain, AI can also be used to authenticate information. Facebook, with mixed success, uses AI to detect “semantic patterns” said to be characteristic of fake news. Truepic and Serelay, used by the Wall Street Journal’s information verification team, rely on blockchain technology to authenticate images. Adobe uses an algorithm to detect manipulated images. DeepNews.ai is a tool designed mainly for aggregation platforms: relying on a convolutional neural network, it selects the most relevant news stories on the Internet, with the algorithm weighing the breadth of treatment of the topic, the expertise, the analytical quality and the resources deployed.

AFP’s Medialab team has led several projects – WeVerify being the latest to date – that help journalists detect fake news, notably by tracing the precise source of photos and videos when these may not correspond to the events they are supposed to depict.

Again, the algorithm is no miracle solution. Most initiatives and tools work hand in hand with human beings, whose ability to analyze and verify sources, even through a quick phone call, still exceeds that of robots. While an algorithm can easily be trained to optimize searches using a given content’s click-rate data, this is of no use for fake news detection; here, the datasets used to train the algorithm must be coded by human fact-checkers.

Artificial intelligence as a tool to improve conversations on the Internet

Hate speech, discrimination, violence and trolls plague the Internet. Using natural language processing (NLP), AI can automatically analyze content, sort it and apply moderation around the clock. Automatic content analysis has its limits, however. Even the most sophisticated AI used by the platforms cannot prevent the dissemination of violent images in real time, as the Christchurch shooting recently demonstrated once again. Platforms therefore do not rely fully on AI moderation but combine it with human moderators. AI is not going to solve the problem of online content moderation, or relieve the misery of Facebook’s human moderators, anytime soon. As a matter of fact, it may never do so unless the technology progresses to the point where it can perceive and understand nuances such as humor.

Automatic systems are nevertheless a must for analyzing the massive amounts of content on social networks, detecting nuisances, determining which content should potentially be deleted (deferring to humans in cases of doubt) and even preventing the transfer of questionable content online, for instance by blocking the upload of hate-filled images. Algorithms also pave the way for the return of comment sections on websites, which editors had often shut down for lack of moderating resources. The New York Times uses the Perspective tool to assess the toxicity of comments through keyword recognition, hoping to go from 10% of articles open to comments to 80%. The Guardian and The Economist have also adopted this tool.
AI can thus be used to give audiences greater opportunities to express themselves by automating a certain number of tasks, even though it doesn’t replace humans when it comes to dealing with nuances that go beyond robots’ capacity to understand.
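The “flag for a human rather than auto-delete” logic described above can be illustrated with a toy keyword-based scorer. Real moderation systems such as Perspective use machine-learned models, not fixed word lists; the words, weights and threshold below are entirely hypothetical:

```python
# Hypothetical toxicity lexicon: each word carries an illustrative weight.
TOXIC_WEIGHTS = {"idiot": 0.6, "stupid": 0.5, "hate": 0.4}

def toxicity_score(comment, threshold=0.5):
    words = comment.lower().split()
    # score the comment by its most toxic word (0.0 if none matches)
    score = max((TOXIC_WEIGHTS.get(w, 0.0) for w in words), default=0.0)
    # borderline or toxic comments go to a human moderator, not straight to deletion
    if score >= threshold:
        return score, "flag for human review"
    return score, "publish"

print(toxicity_score("great article, thanks"))  # (0.0, 'publish')
print(toxicity_score("you are an idiot"))       # (0.6, 'flag for human review')
```

The design choice mirrors the article’s point: automation triages the volume, humans handle the nuance.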

Artificial intelligence as a tool to harness the potential of voice

Natural language processing and voice recognition have driven the development of conversational assistants (chatbots, smart speakers) capable of dialoguing with humans. Voice already accounts for 20% of searches (Meeker) and is forecast to reach 50% by 2020 (ThinkWithGoogle). Voice assistants are a new media portal.

When we give voice commands to Google Home, Amazon’s Alexa or Apple’s Siri, AI is used to process our voice. The same neural network and Natural Language Processing technology can be used to define specific concepts and determine keywords that will trigger actions. Conversely, through Natural Language Generation, AI can transform text into voice. Billions of data points are needed to train algorithms to translate our accents, dialects, outlandish formulations and other linguistic quirks into mathematical representations a machine can understand. That is also why, according to Jeff Bezos, Alexa needs to listen in on all of our conversations.
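The idea of keywords triggering actions can be sketched as a toy intent detector operating on an already transcribed command. The intents and trigger words below are invented for illustration; production assistants rely on statistical natural-language-understanding models rather than hand-written lists:

```python
# Hypothetical intents mapped to trigger keywords.
INTENTS = {
    "play_news": {"news", "headlines", "bulletin"},
    "play_music": {"music", "song", "playlist"},
    "weather": {"weather", "forecast", "temperature"},
}

def detect_intent(transcribed_command):
    words = set(transcribed_command.lower().split())
    # pick the intent whose trigger words overlap most with the utterance
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))
    # if nothing matched at all, report no intent
    return best if INTENTS[best] & words else None

print(detect_intent("play the latest news headlines"))  # play_news
print(detect_intent("what is the weather forecast"))    # weather
```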

Google developed BERT (Bidirectional Encoder Representations from Transformers), whose release marks a significant step in the development of voice AI: with an accuracy rate of 93.2%, computers are now capable of learning the contingencies of language and applying what they learn to a multitude of tasks.

Many tools are being developed to harness the full potential of the voice, the most natural of all means of communication. Lyrebird is a Canadian start-up that creates ultra-realistic artificial voices and voice avatars. Alexa now has a professional newscaster voice for reading the news. Google’s AI can now recognize a voice it has never heard before, and its AI-powered voice takes on the intonations and style of a human newscaster after only a few hours of text-to-speech training. Snips.ai offers professional builders a fully embedded voice assistant service, whatever the device, that respects users’ privacy. VSO (voice search optimization) is becoming the new SEO, a major issue for the media, and Google now surfaces podcasts in its search results.

However, though Google’s virtual assistant Duplex is capable of imitating your voice and your flaws to make appointments, about 25% of its calls are actually handled by human beings working in a call center.

Artificial intelligence as a tool to foster interactivity and engagement

In the mid-1960s, MIT created ELIZA, a program that simulated a Rogerian psychotherapist by rephrasing most of the “patient’s” statements into questions directed back at him or her. Thanks to AI, the possibilities for interaction are far more developed today. Chatbots originally relied on question-and-answer libraries, but advances in artificial intelligence increasingly allow them to “analyze” and “understand” messages through Natural Language Processing (NLP) technologies and to learn through Machine Learning. Whether for news consumption or customer interaction (Gartner Marketing forecasts that 85% of interactions will be human-free by 2020), the automation of dialogue is becoming more sophisticated and personalized.
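ELIZA’s rephrasing trick is simple enough to sketch in a few lines: pattern-match the user’s statement, “reflect” its pronouns, and turn it into a question. This is a minimal illustration of the rule-based approach, not the original program:

```python
import re

# Reflect first-person words back at the user, ELIZA-style.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few illustrative rewrite rules: (pattern, question template).
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no rule matches

print(respond("I feel tired of my job"))  # Why do you feel tired of your job?
```

Modern chatbots replace the hand-written rules with learned NLP models, but the conversational loop is recognizably the same.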

Building basic bots has become more accessible, with turnkey solutions such as Facebook’s Messenger service, or platforms such as Omnibot, Politibot and Sently that distribute plug-and-play solutions – the latter having developed formats specifically for the media.

Conversational interaction is a way for the media to provide a closer user experience, be it through bots embedded in messaging services to directly reach users (1.6 billion users for WhatsApp, 1.3 billion for Facebook Messenger), or bots directly integrated within the websites and apps.

Chatbots automate the relationship, foster engagement and offer immediate personalization. Quartz developed its Bot Studio to propose personalized conversational narratives. The Guardian has had its own chatbot since 2016, CNN and The Wall Street Journal use Facebook Messenger to disseminate information, and NBC offers breaking news through Slack. The BBC has incorporated a bot into its articles to interact with the audience.

Interactive fiction is also being developed: The Inspection Chamber is a format the BBC created for conversational interaction, StoryFlow offers interactive audio stories for children, and The Wayne Investigation is an interactive audio fiction available through connected speakers using Amazon’s Alexa. Alexa is also adapting Choose Your Own Adventure into an audio version. With OLI, Radio France offers bedtime stories for the connected speaker in a child’s bedroom.

Beyond these examples, AI can lead to innovation with respect to storytelling in the advertising, marketing, film and audio sectors, whether as a simple assistant or a content creator.

Artificial intelligence in extended reality

Thanks to technological advances, chatbots are turning into virtual companions genuinely capable of discussing and debating. Artificial intelligence and virtual reality may appear to be two separate fields of research, but technological development shows that the two domains are increasingly interconnected. Initially confined to the gaming world, these new technologies are slowly making their way into audiovisual creation. AI will transform storytelling with virtual beings capable of advanced interactions with humans.

With its “Whispers in the Night” project, the Fable studio took the plunge and began creating virtual animated beings using artificial intelligence. To produce these AI-enhanced computer animations for immersive storytelling, it relies on the same technology that underpins Epic Games and Magic Leap. Emoshape uses its “Emotion Processing Unit” (EPU) chip to determine users’ emotions in real time and enable robots to respond with an emotional state in tune with the user’s. Technology even draws on physiology to make interactions as realistic as possible: the MIT Media Lab customized a VR headset with a device capable of detecting the user’s emotions. This physiological capture module comprises electrodes that collect galvanic skin response (GSR) data and photoplethysmography (PPG) sensors that collect heart-rate data.

Less apprehensive of humanoid robots than European countries, China has launched AI-powered newscasters through its Xinhua news agency: a male version named Qiu Hao (who speaks Chinese and English) was introduced on November 9, 2018, followed by a female version, Xin Xiaomeng, on February 19, 2019. Powered by artificial intelligence and machine learning, they can independently comment on live video and read texts displayed on a teleprompter.

AI as a tool for indexing, archiving and optimizing searches

Search engines used to run exclusively on text, but progress in AI now enables searches on images, videos and sounds. Combining image recognition, machine learning, speech-to-text, NLP, and face, object and location recognition, AI can automate the creation of content metadata to improve archiving and foster discoverability. Data structuring, as with the EBUCore format, is the essential step for automated processing. Conversions of data formats, transcoding, audio and subtitle extraction, and even transfers/copies/purges (FTP, HTTP) are all automatable content management tasks enabling almost live cataloging. Automated indexing also speeds up journalists’ work and eases fact-checking.
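As a minimal illustration of automated metadata generation, the toy tagger below suggests tags by ranking frequent non-stopwords in a transcript. Real indexing pipelines use NLP models and controlled vocabularies; the stopword list and example transcript here are invented:

```python
from collections import Counter

# Tiny illustrative stopword list; real systems use much richer linguistics.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "is", "for",
             "are", "was"}

def suggest_tags(transcript, n=3):
    # normalize: lowercase and strip trailing punctuation
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    # the most frequent remaining words become candidate metadata tags
    return [word for word, _ in counts.most_common(n)]

transcript = ("The election results are in. Turnout for the election was high, "
              "and the results surprised analysts following the election.")
print(suggest_tags(transcript))  # ['election', 'results', 'turnout']
```

Even this crude frequency heuristic shows why automated tagging scales: it can run on every transcript the moment speech-to-text completes, feeding near-live cataloging.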

The lifetime of a piece of content is very short, and without proper metadata it is impossible to retrieve a specific topic among everything that has been produced; hence the importance of optimizing metadata generation, which AI makes faster, cheaper and more accurate, provided it is trained with enough data.

It is almost impossible for the media to develop fully controlled proprietary solutions. Many turnkey tools exist, often supported through cloud systems, by Microsoft, Google, Amazon, IBM, OpenText, Oracle and so many others.

Newsbridge, very active in the media sector, offers real-time automated indexing of rushes via image recognition. In doing so, it helps optimize the production process and supports the later reuse of content. A live translation feature is also available for interviews.

Editor is an AI-based tool the NYT has used since 2015 to simplify the verification and formatting of information. While writing an article, the journalist uses tags to point out key elements; the machine then learns to identify these elements, understand the article’s topic and search in real time for related information. The BBC News Labs has launched a similar tagging technology called Juicer, and another tool, Summa, that uses language recognition to better index content. LEANKR enables precise video indexing with automated tagging, smart thumbnail creation, and a search engine embedded in the video, using Natural Language Processing, speech-to-text and OCR.

AI helps optimize the accuracy of search results. Computer vision also makes it possible to better process image content and speed up the production process. Machines can now easily identify individuals or situations in photos, generate captions or feed more complete databases.

Artificial intelligence as a tool for enhanced targeting and personalizing

Recommendation algorithms are nothing new: Tapestry, the pioneer in this field, celebrated its 25th anniversary in 2017. AI also supports the adaptation of content dissemination strategies in real time. It is used to analyze social networks and identify the most appropriate time to publish, to analyze audience data, and to automate the generation of titles, summaries and illustrations with keywords and hashtags, guaranteeing content visibility, personalized newsletters, custom playlists and more.

AI can customize content according to each user’s profile: preferences, activity patterns and contextual data (location, time, weather…). Focus groups are now replaced by the actual behavior of the existing user base.

Textbook cases of personalization are Amazon, Facebook and Netflix. The latter fully personalizes its home page: its Meson system, coupled with machine learning (collecting data to adapt constantly), even selects, from nine versions of a visual, the one the user is most likely to click on according to his or her activity pattern and context. The objective is to identify the broadest combination of shows matching audience segments, in order to satisfy users rather than aiming content at the greatest number. Algorithms then serve creativity and diversity, which are chosen over standardization.

AI can automate content curation, update thematic playlists regularly and profile users to personalize recommendations. According to a Reuters study, 59% of media outlets use artificial intelligence to recommend articles or plan to do so. Your Weekly Edition, the NYT’s personalized newsletter launched in June 2018, sends a personalized selection (via algorithmic and human curation) with one single purpose: to show users only content they haven’t already seen. Amazon Personalize allows developers with no machine learning experience to easily build personalization features. Freshr is a Messenger bot for young adults (20-35 years old) that summarizes the most important recent news according to the user’s tastes, in just 5 minutes each morning.
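The “only show what the user hasn’t seen” logic can be sketched as a tiny content-based recommender: build an interest profile from the articles already read, exclude them, and rank the rest by tag overlap. The catalogue, IDs and tags below are hypothetical:

```python
# Hypothetical article catalogue: article id -> set of topic tags.
ARTICLES = {
    "a1": {"politics", "europe"},
    "a2": {"sports", "football"},
    "a3": {"politics", "election"},
    "a4": {"culture", "cinema"},
}

def recommend(read_ids, n=2):
    # the user's interest profile is the union of tags of articles already read
    profile = set().union(*(ARTICLES[a] for a in read_ids))
    # never re-recommend content the user has already seen
    unseen = [a for a in ARTICLES if a not in read_ids]
    # rank unseen articles by how many tags they share with the profile
    ranked = sorted(unseen, key=lambda a: len(ARTICLES[a] & profile),
                    reverse=True)
    return ranked[:n]

print(recommend({"a1"}))  # ['a3', 'a2']: a3 shares the 'politics' tag
```

Production recommenders learn profiles from behavioral signals rather than explicit tags, but the filter-then-rank structure is the same.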

Recommendation algorithms are far from perfect. The economist Matthew Gentzkow even speaks of a “personalization paradox” to describe their deceptive side: how many times have we been offered content we had already purchased, or simply content posted by our friends on Facebook? Here too, progress in AI can help strike the right balance between personalization and intelligent content promotion. And traditional methods may at times be just as effective: RAD, Radio-Canada’s journalism lab, invites its audience to respond to online surveys in order to offer them content adapted to their expectations.

Artificial Intelligence contributing to greater accessibility

On the one hand, automatic transcription technologies make journalists’ lives easier by optimizing their work time; at the same time, they help make content accessible to people with disabilities, through automated subtitling (speech-to-text), audio synthesis of text (text-to-speech), and contextual image recognition for real-time audio description or translation.
AI Media TV offers captions and transcriptions for live events and replays, and recently launched its Scribblr.ai service. Trint, a transcription tool funded by the Google DNI, automatically transcribes audio and video streams; it is used by the AP and integrated into Adobe Premiere. Mediawen manages real-time translation of video content using IBM Watson and text-to-speech solutions, with synthetic voice or subtitling. AFP has developed Transcriber, a tool that lets its journalists automate the transcription of interviews.

Artificial Intelligence as a tool for video production and creation

With a growing requirement for the media to produce short formats adapted to social networks, many start-ups offering turnkey solutions have emerged. AI can be used to automatically generate text from graphic materials, or a video from text. It is also used in the various technical stages of recording and broadcasting, and is involved in image post-production and special effects. The number of video editing and media management solutions with AI building blocks has grown exponentially in recent years.

Thanks to image recognition, AI can analyze video rushes to produce coherent edits. Most major editing software vendors, such as Adobe, Avid and Elemental (an Amazon subsidiary), have already added automatic video processing features to save editors time. Adobe and Stanford, for example, have developed an AI program that automates part of the video editing work while leaving humans creative control over the final result; the tool can, for instance, suggest different editing options for a dialogue scene. Gingalab creates automated and personalized videos: its app can automatically generate best-of sequences according to a predefined editorial line (humor, tension, focus on a character…), provides simplified editing tools, publishes automatically to social networks and aggregates analytics.
In September 2018, the BBC aired a program made entirely from archives: “Made By Machine: When AI Met The Archive”. The one-hour format of clips assembled from the BBC’s vast archive library shows some inconsistencies (a weakness for which the AI writers of Sunspring, It’s No Game and Zone Out had already been criticized).
Even though Generative Adversarial Network (GAN) technology is improving robots’ ability to imitate creative works, AI is in no position to replace artists: it remains based solely on probabilistic and combinatorial systems with no symbolic intelligence or emotional capacity.

Artificial intelligence as a tool to monetize and predict success

From advanced audience analysis to detection of the right target, machine learning algorithms help marketing professionals separate conjecture from essential tasks. By cross-referencing behavioral data, audience analysis and trend detection, AI can predict the potential commercial success of content before its release. Advanced analytics uncover patterns, correlations and trends that improve decision-making. AI is used throughout the marketing process: customer acquisition (audience analysis and segmentation, scoring and targeting, visual identification of context), conversion (personalization and recommendation, content creation, site and media optimization, automated campaign management) and retention (conversational agents, automated customer programs, behavioral analytics, attribution calculation and predictions).

AI is now able to collect “emotional data” and analyze our behavior not only through our clicks but also through our emotions. This is the ultimate level of personalization: media offering content adapted to our current emotional context. Frank Tapiro, of Datakalab, describes this transformation as follows: “For thirty years, I created emotions. Now, I use neuroscience and data to measure emotion”. Amazon is even working on a bracelet able to detect our emotions.

Prevision.io is an online platform (SaaS) that automatically builds predictive models from datasets (internal or external, structured or unstructured) and displays the results on dashboards. This automated machine learning platform identifies predictive scenarios for audience losses, unsubscribing and advertising price management, and promotes transparency by explaining each result and proposing action recommendations and/or impact assessments. The Le Parisien-Les Echos group recently won Google DNI funding for an anti-churn program. Entitled High Fidelity, the project is intended to pool data from call centers, newsletters, print mailings and interactions on apps and websites, and to predict domino effects in churn so as to avoid a massive loss of readers. The NYT, for its part, sells premium advertising space based on readers’ feelings with “Project Feels”. Vionlabs is a Swedish company that indexes content through automated emotion recognition: it analyzes content and creates graphs representing its emotional moments, data that can then feed an emotion-based recommendation engine.

AI is used to acquire a very precise understanding of users and thereby target the best time, and the best way, to invite them to switch to a paid subscription. It serves here as a decision-support and anti-churn tool.
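A churn-risk score of the kind such anti-churn programs compute can be sketched as a logistic model over simple engagement features. The features and weights below are made up for illustration; real systems learn them from historical subscriber data:

```python
import math

# Hypothetical learned weights: inactivity raises risk, reading lowers it.
WEIGHTS = {"days_since_last_visit": 0.08, "articles_read_last_month": -0.05}
BIAS = -1.0

def churn_risk(features):
    # weighted sum of features passed through the logistic function
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))  # score between 0 and 1

loyal = {"days_since_last_visit": 2, "articles_read_last_month": 40}
at_risk = {"days_since_last_visit": 45, "articles_read_last_month": 1}

print(round(churn_risk(loyal), 2))    # low score: leave this reader alone
print(round(churn_risk(at_risk), 2))  # high score: candidate for a win-back offer
```

The score then drives the decision support described above: when and how to nudge a reader toward (or back to) a subscription.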

AI and the issue of ethics applied to the media sector

In the midst of a crisis of confidence, the use of AI and of opaque recommendation algorithms based on behavioral analysis is not an obvious choice for the media. Using AI requires the establishment of clear rules and transparent documentation for the audience. The Big Data that feeds AI rests on massive data collection, including personal data. Ownership of data and independence from third-party sources are key to developing an independent ecosystem, and could be critical to the long-term survival of businesses, especially in the media sector.

However, many of the datasets and algorithms available on GAFA clouds turn out to be biased, or even racist.

How, then, can public service values (information, education) be integrated into a recommendation algorithm? How can we ensure that public debate on current affairs brings people together? How can recommendation be performed while maintaining social cohesion? What degree of recommendation do we want? Where is the right balance between personalization and content discovery?

The British government has launched an observatory on the use of AI in the public service. The BBC applies its ethical rules through the “Responsible Machine Learning in the Public Interest” program, which the EBU has joined. The EBU’s Big Data working group is considering the ethical use of algorithms in public service media, so as to avoid bias and respond to the challenges of a tool not yet fully mastered: inequality in the face of artificial intelligence, neurohacking, technological sovereignty and, above all, the need for complementarity between the brain and artificial intelligence.

The interpretability and explainability of AI are the biggest challenges. The intelligibility of algorithms in general, and of artificial intelligence in particular, has become a dominant requirement, as noted in the French Villani report and underlined in Europe by the introduction of the GDPR. Being transparent starts with clearly indicating that a content item or a recommendation is wholly or partly produced by an algorithm.

On the other hand, AI makes it possible to reach niche audiences for which the media previously lacked the means to create content. Algorithms enable entirely customized playlists on highly targeted topics. And perhaps the media can also make room for emptiness: Jonnie Penn, guest author at the EBU’s “Impact of AI on Media” workshop in November 2018, calls for “data deserts“, areas protected from data, to make room for “healthy differences of opinions“.


The buzz around AI can raise expectations too high: AI is no miracle remedy and, as most of the cases described above show, needs to be combined with human input, especially for content creation. It is, however, already operational on the demand side, in broadcasting, access to content and monetization. It holds great potential for social good, helping audiences navigate the mass of content by optimizing search and personalizing recommendation, and helping to prevent manipulation.

More applications are expected beyond those already mentioned, such as in autonomous cars… But this new technology requires awareness-raising, among the actors of the media value chain but also among audiences, from the youngest to the oldest, so that they can grasp the challenges of AI.

AI is useful for certain tasks but does not replace humans. The greatest added value of the media is (or should be) the production of complex content involving judgment, interpretation, creativity and communication, areas where humans still surpass algorithms and will certainly continue to do so for years to come.

But AI can also help to ask the right questions. How can value be created for the user? AI has a very significant impact on society, and the role of the media is to ensure that it is used wisely, especially by public service media.

Use cases are still to be invented, while remaining mindful not to use AI where there is no real need or added value. The technical ability to integrate it does not make it always relevant, as Jonnie Penn notes: “Machine learning is like salt: you can add it but if you have too much it is unhealthy“.

To access the complete map with more examples (not exhaustive of course), you can download the English version.
