BBC and University of Surrey researchers use ethical forms of AI to incorporate new personalized elements into media broadcasts

There's a familiar warning on the news before the football results: "Look away now if you don't want to know the score." Now imagine your TV knew which teams you follow and which results to hold back, or learned to skip the football altogether and tell you about something else instead. This is what media personalization makes possible, and it is what academics have been working on with the BBC. While applying it to live production still faces significant hurdles, other aspects of media personalization are much closer. To some extent, media personalization already exists: it's like BBC iPlayer or Netflix recommending content based on what you've already watched, or Spotify suggesting playlists you might enjoy.
What the researchers are talking about, however, is personalization inside the program itself. This could include changing the program's duration (you might get a shortened or extended version), adding subtitles or graphics, or enhancing the dialogue (to make it more intelligible if, for example, you're in a noisy environment or your hearing isn't what it used to be). It could also include providing additional information about the program (much as you can do now with the BBC Red Button).
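To make the idea of in-program options concrete, here is a minimal sketch in Python of how a per-viewer personalization profile might be represented and chosen from the viewer's context. The field names and the choose_profile heuristic are hypothetical illustrations, not part of the BBC's or AI4ME's actual systems.

```python
# A minimal sketch (not the AI4ME implementation) of how per-viewer
# personalization options for a single program might be represented.
# All field and function names here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class PersonalizationProfile:
    target_duration_minutes: int    # shortened or extended edit of the program
    subtitles: bool                 # overlay subtitles
    extra_graphics: bool            # add explanatory visuals
    dialogue_boost_db: float        # lift dialogue relative to background audio
    extra_info_panels: bool         # additional program information (Red Button-style)


def choose_profile(noisy_environment: bool, small_screen: bool, time_available: int) -> PersonalizationProfile:
    """Pick options from the viewer's context: location, device and available time."""
    return PersonalizationProfile(
        target_duration_minutes=min(time_available, 30),
        subtitles=noisy_environment,
        extra_graphics=not small_screen,
        dialogue_boost_db=6.0 if noisy_environment else 0.0,
        extra_info_panels=not small_screen,
    )


if __name__ == "__main__":
    # Example: a viewer on a phone, in a noisy cafe, with 20 minutes to spare.
    print(choose_profile(noisy_environment=True, small_screen=True, time_available=20))
```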
The significant difference is that these options would not be generic. You could see shows re-edited and repackaged according to your own preferences and tailored to your needs, depending on where you are, which devices you're connected to, and what you're doing. Artificial intelligence (AI) will be used to deliver new kinds of media personalization to audiences at scale. AI works through machine learning, in which a system (an algorithm) performs tasks by learning from the large sets of data used to train it.
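As a rough illustration of learning from data, the sketch below estimates a viewer's preference for each topic from past viewing events and uses it to decide whether to surface the football results. The data, names and threshold are invented for illustration; this does not describe the recommenders actually used by iPlayer, Netflix or Spotify.

```python
# A minimal sketch of "learning from data": estimate, from a viewer's history,
# how likely they are to finish items on each topic, then use that estimate to
# decide what to surface. Illustrative only; the data and threshold are made up.
from collections import defaultdict

# Training data: (topic, watched_to_the_end) pairs drawn from past viewing.
history = [
    ("football", False), ("football", False), ("football", True),
    ("drama", True), ("drama", True), ("news", True), ("news", False),
]


def train(events):
    """Estimate the completion rate per topic from the viewing history."""
    totals, finished = defaultdict(int), defaultdict(int)
    for topic, completed in events:
        totals[topic] += 1
        finished[topic] += int(completed)
    return {topic: finished[topic] / totals[topic] for topic in totals}


def should_show(topic, model, threshold=0.5):
    """Surface a segment only if the learned preference meets the threshold."""
    return model.get(topic, threshold) >= threshold


model = train(history)
print(model)                           # approx: {'football': 0.33, 'drama': 1.0, 'news': 0.5}
print(should_show("football", model))  # False: skip the football results for this viewer
```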
Recognizing the Challenges of AI
The Organization for Economic Co-operation and Development (OECD) AI Principles require AI to benefit people and the planet and to incorporate fairness, safety, transparency and accountability. Yet AI technologies are increasingly accused of automating inequality as a consequence of biases in their training, which can reinforce existing prejudices and disadvantage vulnerable groups. Examples include gender bias in recruitment tools and racial disparities in facial recognition technologies.
Another potential downside of AI technologies, and one the researchers are grappling with, is generalization. The first known fatality caused by a self-driving car is an example: having learned from road footage that most likely captured cyclists and pedestrians separately, the car failed to recognize a woman pushing her bike across a road.
As a result, AI systems must be continually retrained as researchers learn more about their real-world behavior and the outcomes we want from them. It is impossible to instruct a computer for every eventuality, or to foresee every possible unintended consequence.
The researchers don't yet know what kinds of problems their AI could cause in the world of personalized media; that is what they hope to find out through their project. Dialogue enhancement, for example, might work better with male voices than with female voices. Ethical issues don't always become a priority in a technology-driven organization until government regulation or a media storm demands it. Isn't it better to anticipate and fix these problems before things get that far?
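One way to catch that kind of disparity early is to compare an evaluation metric across speaker groups. The sketch below is a hypothetical audit with made-up intelligibility scores; it simply reports the mean per group and flags any gap above a chosen threshold. It is not an AI4ME tool, just an illustration of the idea.

```python
# A minimal, hypothetical bias check: compare a quality metric for dialogue
# enhancement across speaker groups and flag a systematic gap.
# Scores and the gap threshold are illustrative, not real results.
from statistics import mean

# Hypothetical intelligibility scores (0-1) from listening tests of enhanced clips.
scores = {
    "male_voices":   [0.92, 0.88, 0.90, 0.91],
    "female_voices": [0.81, 0.78, 0.84, 0.80],
}


def audit(results, max_gap=0.05):
    """Return the mean score per group, the largest gap, and whether it exceeds max_gap."""
    means = {group: mean(vals) for group, vals in results.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap, gap > max_gap


means, gap, flagged = audit(scores)
print(means)
print(f"gap = {gap:.3f}, needs review: {flagged}")
```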
The Citizen Council
To develop a successful personalization system, researchers need to involve the audience from the start. This is essential for bringing a broad perspective into technical teams that can suffer from narrowly defined performance goals, departmental “groupthink” and a lack of diversity.
The Center for Vision, Speech and Signal Processing at the University of Surrey and the BBC are collaborating to test a way of using advances in AI for media personalization, called Artificial Intelligence for Personalized Media Experiences, or AI4ME. The researchers are experimenting with “citizen councils” to create a dialogue in which feedback from the councils informs the development of the technology. Their citizen council is intended to be diverse in its makeup and independent of the BBC.
First, they frame a workshop topic around a technology they are researching or a design issue, such as using AI to cut a presenter out of a video and replace them with someone else. The workshops draw out insights and allow for conversation with experts from different fields, such as one of the project's engineers. The council then consults, deliberates and makes its recommendations.
These topics give the citizen council a framework for evaluating specific technologies against the OECD AI Principles and for debating the acceptable uses of personal data in media personalization, free from commercial or political motives. There are risks: the council might not properly reflect diversity, members might misunderstand the proposed technologies, or they might be unwilling to listen to others' points of view. What if council members cannot reach a consensus, or begin to develop biases of their own?
The researchers cannot measure how many harms are averted through this process. But new ideas that influence the technical design, or issues identified early enough for remedies to be considered, will be the markers of success. And one series of councils is not the end: the researchers intend to apply this process throughout the five-year engineering research project. They will share what they learn and encourage other teams to try the approach, to see how well it works.
The researchers believe this approach can bring broad ethical concerns within the reach of engineering developers during the earliest stages of designing complex AI systems. The council's members are not beholden to the interests of big tech or governments, yet they represent the values and ideals of society.
AI4ME: https://ai4me.surrey.ac.uk/