There was a grand total of nine sessions, as well as a suite of screenings, centred on artificial intelligence at Content London this year. This was in addition to wider discussions infiltrating many other sessions. AI was by some margin the hottest and most controversial topic. Where to begin? The stats and predictions came thick and fast. ChatGPT reached 100m users in just two months. By the end of the year, 25-30% of social media content will be AI-generated. Writer Simon Mirren said AI could shrink the cast and crew of his Criminal Minds from 400 to just 30.
In the lively (to say the very least) ‘The future of entertainment’ debate, Deep Fusion Films’ Benjamin Field’s summation of the current AI landscape was ‘[Tech companies] selling our own work back to us in order to make it cheaper and put us out of work… it doesn’t really make sense.’
Giving some much-needed practical answers on how generative AI is being applied to scripted TV creation right now was the ‘How AI innovation helped build India’s next hit procedural crime series’ case study. The session focused on upcoming Indian detective drama Apex: Infinity, which made extensive use of AI both in pre-production – providing set and costume design blueprints, and editing and refining scripts, for example – and in post-production – visual effects, creating marketing materials, and dubbing the series to make it accessible to audiences anywhere in the world. We think these kinds of practical applications are where we’ll see AI shine the most in TV for the moment, rather than anything too world-altering.
The message was a well-trodden one – AI will streamline the creative process and make it easier and cheaper for the industry to realise its lofty visions. But where does that leave the set designers, cinematographers, marketers, dubbing voice actors and countless others whose crafts are being ‘streamlined’ by machines? Even more food for thought was offered in the UFA-hosted session focused on German soap Unter Uns, which for its 30th anniversary used AI to bring back an iconic character (pictured) whose actress passed away more than 15 years ago.
Back in the ‘future of entertainment’ session, Mirren was hopeful about the ‘democratising effect’ of AI when it came to enabling storytellers without multi-million pound budgets to bring their stories to life. He likened the generative AI boom to the advent of music samplers in the 70s and 80s, which gave underprivileged creatives the ability to create full tracks without lavish recording equipment or a studio full of musicians, and in doing so gave birth to the endless creativity of hip-hop and dance music. This in turn raised questions about exactly who is ‘in charge of’ the AI boom, and how democratising it can really be when the panel on stage comprised no fewer than seven white men of a certain age…
But when it comes to content created predominantly by AI, is this something viewers even have the desire to consume? Or is it the essence of human creativity at the heart of a piece of art that makes us want to watch, or listen? Will Page, ex-Spotify, highlighted the fact that the viral AI-generated Drake song released in April did indeed race to 20m streams in the blink of an eye. But those streams came from around 20m different listeners – next to nobody played it twice. There is an incomprehensibly huge influx of AI-created media on the horizon – David Jenkinson dared to imagine a not-too-distant future where ‘a hundred Game of Thrones could be created in a day’, at the tap of a button. But will audience demand get anywhere close to matching a near-infinite supply?
Whether or not AI-created TV warrants more than a novelty watch, though, may become inconsequential once we truly can’t distinguish what is AI-generated from what is human-crafted. While extremely impressive, the live motion footage generated by text-to-video tool Sora (pictured) and shown by OpenAI’s Chad Nelson in his keynote wasn’t perfect. The clips still showed some signs of jankiness and an uncanny inability to emulate real-world physics. But it’s sensible to assume the gap will inevitably become indistinguishably minute. Consensus was reached that it’s vital for content distributors to clearly signpost when and where AI has been used in content creation – as Instagram has been doing with its labelling of posts, for example.
Somewhat reassuringly, on a core level, Nelson said writing and storytelling would become more important than ever as the tools to manifest these imaginary stories and worlds become ubiquitous. Humanity’s stories will still be written by humans, even if they’re refined and brought to life with the help of machines.
Back in the week’s opening session, Allied Global Marketing pointed to the trend that major economic booms stemming from new tech – the advent of moving pictures, or the internet, for example – usually occur once the technology reaches 25% penetration among the general public. AI currently sits at 24%. It does feel like we’re at a tipping point when it comes to AI’s effects on the world and on our industry.
Another one of the most impressive things about AI is the near-exponential rate at which it expands what it can do, and how well it can do it. AI was also a major focus at last year’s Content London, but the conversation was nowhere near as lively or polarised. By next year’s conference, will we be closer to coming to terms with how to harness and regulate AI? Or (more likely) will there be even more questions to answer? Right now, it’s vital that we don’t bury our heads in the sand. We need to get familiar and stay familiar with artificial intelligence as it advances – how best to use it effectively but ethically. It’s become a cliché by this point, but we’ll echo one last time this year the much-repeated mantra that, for now at least, ‘AI won’t steal your job, but someone using it may…’