AI in cultural media: let’s talk about it
It’s no secret (and honestly, it’s getting kind of boring) that AI is everywhere. Whether you like it or not, it has woven itself into our workplaces, conversations, and daily lives.
But what is interesting is its pace.
Over the last few months, we’ve seen a shift from AI driving back-end digital behaviours to AI developing main character energy in our culture. As AI begins to shape our society, we must look at the rules that currently govern it, the boundaries it is pushing, and the implications this could have. We spoke to Sophie Booth from Brainlabs, who’s done just that.
The AI shift
As a digital agency, Brainlabs is no stranger to AI. AI content can be lacking, to say the least, with much of it not even fooling your 80-year-old nan. But as the technology advances, we’re seeing an emergence of content that’s genuinely convincing. Content that makes us question – is this AI or human-generated? Or, worse, content we don’t realise is AI-generated at all!
As the lines blur between reality and robots, there’s no denying AI’s influence. And that got us thinking – how is something with so much power regulated? What rules are in place to stop people abusing AI, and to protect the influencers who are creating their own content?
The assumed ‘Fred Again X Lily Allen’ collaboration that recently went viral put a spotlight on these rules, or the lack of them. It turns out the new hit song ‘Somebody Else’ is an e.motion track that sounds eerily like Allen, yet the singer has stated she has nothing to do with it. e.motion hasn’t confirmed whether it’s an AI version of Allen’s voice, but the timing, coming so soon after her divorce, seems less than coincidental. The song fits the vibe of a break-up comeback anthem. If the voice is AI-generated, we’re looking at non-approved IP usage, although no one seems to be questioning this.
AI in cultural media
And it’s not just AI in music that’s raising an ethical eyebrow. It’s increasingly creeping into social media, TV, and adverts. It seems to be coming at us from every angle, and taking over every screen.
A new reality dating show, ‘Heartwired’, premiered on 3rd March. The twist? Not all the contestants are human. The aim of the game is to ‘spot the hot from the bot’, so players could be falling for the love of their life or a smooth-talking string of code. Sure, everyone is a willing participant, but the whole premise feels a little weird.
Sophie says, ‘The fast-paced developments we’ve seen in the last few months are fascinating, as to me they represent AI jumping over a new boundary. AI is no longer being used solely to generate images for our social consumption; it’s the human character trait and personality generation capabilities that, in recent months, have become the central feature of mainstream culture. Insinuated or otherwise, it’s spilling out of its social ‘home’ onto our music streaming platforms and connected TVs at a notable pace.’
But while ‘Heartwired’ is arguably a bit of fun, the use of AI in Netflix’s No. 1 series ‘American Murder: Gabby Petito’ is much creepier. Petito’s voice reads her journal entries throughout the series, but it’s not really her. There is a brief disclaimer at the start, and Petito’s family permitted the synthetic recreation, but viewers aren’t happy. While the filmmakers argued it added authenticity, audiences felt uncomfortable and raised ethical concerns. Obviously, Petito herself could never consent.
On social media, virtual influencers are emerging daily, bringing moral questions with them. Shudu, a digital model created in 2017, has sparked cultural controversy. Shudu, who’s partnered with the likes of Vogue and Cosmopolitan, is a black woman; the problem is that her creators are white men. They’ve stated that Shudu’s image comes from real black models, but critics aren’t having it. At best, the backlash goes, her creators are profiting off a black woman’s image without hiring real black models. At worst, they’ve been accused of digital blackface.
AI is no longer just running in the background; it’s creeping onto centre stage with evolving human-like capabilities. There’s growing debate over whether these advancements spill over moral lines, and the regulations around AI are equally unclear.
So, what are the rules around AI IP usage?
Turns out, there aren’t many. In the UK, there is currently no general statutory regulation of AI at all.
The EU is more on it. It recently passed the world’s first comprehensive AI regulation, the EU AI Act, which means brands must disclose AI-generated or manipulated content that resembles real people, places, or events.
The US is set to follow but is dragging its feet. Tennessee, however, has introduced a law, the ELVIS Act, requiring businesses to get consent before replicating an artist’s voice for advertising. And at a federal level, lawmakers are considering proposals that would force AI developers to get permission before using personal data to train their models.
Despite the lack of regulation, brands are setting their own rules out of fear of IP issues, privacy risks, and potential brand damage. Even Google has set limits, banning AI-generated images of people and brand logos in its new gen AI product.
Where do we draw the line?
This is the question on many people’s minds. Is the rise of AI in cultural media groundbreaking? A bit of fun? Or is this the start of something more sinister?
As AI continues to accelerate, we must consider the potential for serious harm, especially the impact of putting it front and centre in cultural media.
Sophie says, ‘As an agency, we utilize AI as a core tool across our proprietary tech suite for increased efficiency. We recognize the role AI plays in today’s landscape and work with our clients who use it to promote safe and ethical usage.’
But we’re also keen to hear our esteemed peers’ thoughts and positions on this and get industry conversations started. So now’s your chance – let us know your stance on AI usage in cultural media today, or even just any dubious uses you’ve come across recently!
Chat with us today to have your say.