Why OpenAI’s imitation of Scarlett Johansson drew actor’s ire
Hollywood icon Scarlett Johansson expressed “shock, anger, and disbelief” on Tuesday after learning that artificial intelligence company OpenAI had demonstrated a new version of its chatbot featuring a voice that sounded “eerily similar” to her own.
The voice, named “Sky,” was one of five options users could select when interacting with the human-like “GPT-4o” version of OpenAI’s AI technology, which was unveiled last week.
Initially, the demonstration was met with awe and admiration for the capabilities of the new model, called Omni. Not only could it identify and describe its surroundings, but it could also mimic human speech patterns, detect moods, and engage in flirtatious and humorous conversation.
However, the initial amazement quickly gave way to the realisation that OpenAI’s creation bore a striking resemblance to the virtual assistant featured in the 2013 Hollywood science fiction film Her. In the movie, the character portrayed by Joaquin Phoenix develops a romantic relationship with Samantha, an AI chatbot voiced by Johansson.
According to Johansson’s statement, OpenAI’s chief executive, Sam Altman, approached her in September, requesting that she lend her voice to the system. Altman believed that her participation would provide comfort to those who were apprehensive about the technology.
Despite declining Altman’s offer, Johansson later discovered that the “Sky” voice possessed an uncanny resemblance to her own, to the extent that even her closest friends and media outlets were unable to distinguish between the two.
“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said in her statement, as reported by the Associated Press.
Johansson further noted that Altman hinted at the intentional similarity between the voices when, on May 13, he posted a single-word tweet on X: “her”, a reference to the movie.
Prior to Johansson’s statement, OpenAI had moved to address the criticism and concerns around the voices. In a blog post, the company said AI voices “should not deliberately mimic a celebrity’s distinctive voice” and claimed that the voice of “Sky” belonged to a “different professional actress”.
Later, in a statement to the Associated Press following Johansson’s scathing response, Altman attempted to distance the company from the allegations. He said the voice actor behind “Sky” was cast “before any outreach” to Johansson, implying the similarity was purely coincidental.
While it may be difficult to ascertain the extent to which GPT-4o’s “Sky” voice was intentionally designed to resemble Johansson’s character “Samantha” without an official statement from Altman or OpenAI executives, the evidence suggesting such a connection is compelling. Altman’s personal outreach to Johansson, his company’s subsequent attempt to negotiate just days before the launch, and his cryptic “her” tweet on May 13 all point to a deliberate effort.
Altman has previously cited Johansson’s character as an inspiration for the direction he envisions AI interactions taking in the future.
Her was intended to serve as a cautionary tale, highlighting the potential challenges and complexities that may arise as humans increasingly interact with the technologies they create.
Since the release of ChatGPT in late 2022, which demonstrated the ability to match human performance in various written and cognitive tasks, concerns have been raised regarding the impact of such technology on employment, education, creative expression, and society as a whole.
In the realm of the arts, these concerns became clear when people began to create convincing but completely synthetic works in the styles of musicians such as Drake, The Weeknd and Grimes. In films, AI already has the ability to learn and recreate, to a degree indiscernible by humans, the appearance and mannerisms of actors and the literary styles of scriptwriters.
It turned into a full-blown conflict when the Writers Guild of America went on Hollywood’s second-largest work strike, seeking, among other things, contractual guardrails to prevent AI from recreating or building atop their work.
This isn’t to say progress in technology has not been or cannot be inspired by works of fiction.
Amazon’s Jeff Bezos has frequently referred to Alexa as being inspired by the Star Trek “computer”, which could understand and respond to voice commands. Many perceive the metaverse, championed by Mark Zuckerberg’s Meta (formerly Facebook), as bearing a striking resemblance to the virtual reality depicted in Neal Stephenson’s novel Snow Crash, where individuals escape their physical realities to inhabit a virtual one. In October, Elon Musk suggested that the design of the Tesla Cybertruck was influenced by the cyberpunk film Blade Runner.
However, the crucial distinction lies in the ethical considerations and transparency surrounding these developments. In her statement, Johansson revealed that her lawyers have requested Altman and OpenAI to provide a detailed explanation of the exact processes through which the company created the “Sky” voice that so closely resembled her own.
This incident has redirected attention to OpenAI, which recently disbanded its “superalignment” team dedicated to mitigating the long-term risks associated with artificial intelligence. The company dissolved the group weeks ago and announced the departure of co-founder Ilya Sutskever and superalignment team co-leader Jan Leike.
Moreover, Johansson’s revelation of Altman’s personal involvement has brought him back into the spotlight. In November, Altman, who has now risen to the same level of prominence as other Big Tech leaders like Satya Nadella, Sundar Pichai, and Tim Cook, found himself at the centre of a boardroom controversy when he was briefly dismissed and rehired four days later.
The latest saga, to quote Johansson, demands answers to some crucial questions from Altman and OpenAI, especially at a time when “we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities”.
Moving forward, the onus is on OpenAI and other AI companies to prioritise ethical considerations and maintain a high level of transparency in their practices, especially when they digest the works of others to train their AI models.