We’re putting new custom features into shows using an ethical version of AI

“Turn away now if you don’t want to know the score,” they say on the news before reporting the football results. But imagine if your TV knew which teams you follow, which results to hold back, or knew to skip the football altogether and tell you about something else. With media personalization, which we are working on with the BBC, that kind of thing is becoming possible.
Significant challenges remain before personalized live production can be delivered at scale, but other aspects of media personalization are closer at hand. Indeed, media personalization already exists to some extent. It’s there when BBC iPlayer or Netflix suggests content based on what you’ve watched before, or when Spotify curates playlists you might like.
But what we’re talking about is personalization within the program itself. This could include adjusting its length (you might be offered an abridged or extended version), adding subtitles or graphics, or enhancing the dialogue (making it more intelligible if, say, you’re in a noisy place or your hearing is starting to go). It could also include providing extra information related to the program (much like you can access now with the BBC’s red button).
The big difference is that these features would not be generic. Shows would be repackaged to your own tastes and tailored to your needs, depending on where you are, which devices you have connected and what you’re doing.
To deliver these new kinds of media personalization to audiences at scale, the features will be powered by artificial intelligence (AI). AI works through machine learning, in which a system (an algorithm) is trained to perform tasks using large datasets fed into it.
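To make that training step concrete, here is a deliberately tiny sketch using the scikit-learn library. All of the data and features are invented for illustration; this is a generic toy example, not anything resembling the BBC’s actual systems.

```python
# A toy example of machine learning: train a model on a small dataset,
# then use it to make a prediction. Purely illustrative.
from sklearn.linear_model import LogisticRegression

# Invented training data: each row is a viewing session
# (hour of day, minutes watched, device: 0 = TV, 1 = phone).
X_train = [
    [20, 45, 0],
    [8, 5, 1],
    [21, 60, 0],
    [9, 10, 1],
]
# Labels: did the viewer finish the programme? (1 = yes, 0 = no)
y_train = [1, 0, 1, 0]

# "Training" fits the algorithm's parameters to patterns in the dataset.
model = LogisticRegression().fit(X_train, y_train)

# The trained system can then make predictions about unseen sessions.
print(model.predict([[19, 50, 0]]))  # e.g. [1]: likely to finish
```

The quality of what the system learns depends entirely on the data it is trained on, which is where the difficulties discussed below come in.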
This is the subject of a partnership between the BBC and the Centre for Vision, Speech and Signal Processing at the University of Surrey. Known as Artificial Intelligence for Personalized Media Experiences, or AI4ME, the partnership aims to help the BBC better serve its audiences, especially new audiences.
Recognizing the difficulties of AI
The AI Principles of the Organisation for Economic Co-operation and Development (OECD) require AI to benefit people and the planet and to incorporate fairness, security, transparency and accountability.
Yet AI systems are increasingly accused of automating inequality as a result of biases in their training data, which can reinforce existing prejudices and disadvantage vulnerable groups. This can take the form of gender bias in recruitment, or racial disparities in facial recognition technologies, for example.
Another potential problem with AI systems is what we call generalization. The first recognized fatality involving a self-driving car is an example of this. Having been trained on road footage, which probably captured many cyclists and pedestrians separately, it failed to recognize a woman pushing her bicycle across a road.
So we need to keep retraining AI systems as we learn more about their real-world behavior and our desired outcomes. It is impossible to instruct a machine for every eventuality, and impossible to predict every unintended consequence.
We don’t yet know exactly what kinds of problems our AI might present in the area of personalized media. This is what we hope to find out through our project. But, for example, it could be something like dialogue enhancement working better with male voices than with female voices.
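One way engineering teams can look for that kind of problem is a simple per-group audit: run the system on clips from each group of speakers and compare the average scores. In the sketch below, enhance_dialogue and intelligibility_score are hypothetical placeholders (stubbed out so the example runs), standing in for a real enhancement model and a real quality metric.

```python
from statistics import mean

# Hypothetical stand-ins for a real dialogue-enhancement model and a real
# intelligibility metric; stubbed here so the sketch is self-contained.
def enhance_dialogue(audio):
    return audio

def intelligibility_score(enhanced_audio, transcript):
    return 0.8  # a real metric would compare the audio with the transcript

def audit_by_group(clips):
    """Average the quality score separately for each speaker group."""
    scores = {}
    for clip in clips:
        enhanced = enhance_dialogue(clip["audio"])
        score = intelligibility_score(enhanced, clip["transcript"])
        scores.setdefault(clip["group"], []).append(score)
    return {group: mean(vals) for group, vals in scores.items()}

clips = [
    {"audio": b"\x00", "transcript": "hello", "group": "female"},
    {"audio": b"\x00", "transcript": "hello", "group": "male"},
]
# A large gap between the groups' averages, e.g. {'female': 0.78,
# 'male': 0.91}, would flag the problem and prompt retraining on
# more balanced data.
print(audit_by_group(clips))
```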
Ethical concerns don’t always become a priority in a technology-driven company, unless government regulation or a media storm demands it. But isn’t it better to anticipate and solve these problems before reaching that point?
The citizens’ council
Designing our personalization system properly requires audience engagement from the start. This is key to bringing a broad perspective to technical teams, who can otherwise suffer from narrowly defined performance metrics, “groupthink” within their departments and a lack of diversity.
Surrey and the BBC are working together to test an approach that uses people (ordinary people, rather than experts) to oversee the development of AI in media personalization. We are testing “citizens’ councils” to create a dialogue, with the insights gained from the councils informing the development of the technologies. Our citizens’ council will have diverse representation and will be independent of the BBC.
First, we frame a workshop theme around a particular technology we are investigating or a design issue, such as using AI to cut out a presenter from a video in order to place them in another (sketched below). The workshops draw out opinions and enable exchanges with experts on the theme, such as one of our engineers. The council then consults, deliberates and produces its recommendations.
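As a rough illustration of that example technology (a generic sketch, not the project’s actual pipeline), an off-the-shelf semantic segmentation model can produce a per-pixel “person” mask for a single video frame, which could then be used to composite the presenter into other footage:

```python
# A generic sketch: extract a "person" mask from one video frame using a
# pretrained semantic segmentation model (torchvision's DeepLabV3).
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.png").convert("RGB")  # one frame of the video
with torch.no_grad():
    out = model(preprocess(frame).unsqueeze(0))["out"][0]

# Index 15 is "person" in the Pascal VOC label set this model was trained on.
person_mask = out.argmax(0) == 15

# The mask lifts the presenter out of this frame; running it per frame
# (with temporal smoothing) extends the idea to whole videos.
```

Broadcast-quality results would need purpose-built matting models rather than a generic segmenter like this, and it is exactly this sort of capability, and its acceptable uses, that the councils would scrutinize.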
The themes give the citizens’ council a way to examine specific technologies against each of the OECD AI Principles, and to discuss acceptable uses of personal data in media personalization, independently of commercial or political interests.
There are risks. We may fail to adequately reflect diversity, there may be misunderstandings around the proposed technologies, or a reluctance to hear others’ points of view. What if council members are unable to reach a consensus, or begin to develop a bias?
We can’t measure which disasters are averted by going through this process, but new ideas that influence the engineering design, or new problems identified early enough for solutions to be considered, will be signs of success.
And one round of councils isn’t the end of the story. We aim to apply this process throughout the five-year engineering research project. We will share what we learn and encourage other projects to take this approach, to see how it translates.
We believe this approach can bring broad ethical considerations to engineering developers during the early stages of designing complex AI systems. Our participants are not beholden to the interests of big tech or governments, but they convey the values and beliefs of society.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Future of TV: We’re putting new custom features in shows using an ethical version of AI (2022, March 8) retrieved March 8, 2022 from https://techxplore.com/news/2022-03-future-tv-personalized-features-ethical.html
This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.