Sam Altman at the wheel or not, OpenAI’s tech knots the fortunes of many
However this unfolds, and whether or not Sam Altman returns as CEO of OpenAI, it is clear that the chaos from OpenAI's saga of mismanagement is not confined to its headquarters. The otherwise avoidable scenario began late last week, when OpenAI's board of directors fired Altman and removed chairman Greg Brockman in a surprise move. As it has since turned out, the decision lacked conviction or any solid foundation to stand on – the biggest, and only, allegation against Altman being that he was not "consistently candid in his communications with the board".
In a world as focused on artificial intelligence (AI) as it is today, OpenAI doesn't work in isolation. Its GPT large language models (LLMs) and Dall-E image generation models are the backbone of many an AI product available today, and the biggest of those partners is Microsoft. It is therefore no surprise that Microsoft CEO Satya Nadella is believed to be playing what the company hopes will be a pivotal role in getting the AI firm's house back in some semblance of order – perhaps with Altman back at the wheel.
At the time of writing, the same board is considering options including resigning as a whole, while Sam Altman is believed to be back at OpenAI's headquarters as conversations about his potential return continue. It may or may not work out, and he too is keeping his options open; there is next to no confidence in OpenAI's existing board. Days earlier, at its Ignite 2023 keynote, Microsoft laid out its already extensive AI vision for Windows PCs and a wide range of services including Microsoft 365, as the GPT-based Bing Chat was rebranded as Copilot and a new Copilot Studio was introduced for organisations to build their own, customised 'copilots'. Where does this underlying capability come from?
The answer is emphatic – OpenAI's GPT models, including the custom GPTs the company announced earlier in the month at its DevDay conference.
Snap, the augmented reality (AR) technology company, has detailed plans for its next generation of smart spectacles, which will, of course, use OpenAI's GPT models to power the intelligent capabilities in its new Lenses. With the ChatGPT Remote API, Snap is giving Lens developers the ability to integrate ChatGPT into their Lenses, the expectation being richer conversational experiences built around it.
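For a sense of what such an integration relays to under the hood, here is a minimal sketch in Python using OpenAI's official SDK to send a wearer's question to a GPT model and read back a short reply. The model name, system prompt and ask_gpt helper are illustrative assumptions, not Snap's code; the real Lens Studio integration is built in JavaScript against Snap's ChatGPT Remote API rather than this SDK.

```python
# Illustrative sketch only (not Snap's actual Lens code): relaying a wearer's
# question to an OpenAI GPT model using the official openai Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask_gpt(question: str) -> str:
    """Send a single user question to a GPT model and return the text reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer briefly, for display on AR glasses."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # The kind of query mentioned a little further down in this piece.
    print(ask_gpt("How far away is Neptune from Earth?"))
```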
This also wouldn't be the first time Snap and OpenAI have worked together. Snap's My AI chatbot already uses the GPT engine as the foundation for its text conversations, much like OpenAI's own ChatGPT and Microsoft's Bing chatbot. With AI-powered filters coming to smart spectacles, Snap has evidently identified augmented reality experiences as a distinct use case for AI, and wants to make it easier for developers to build such tools.
Earlier this month, Snap shipped the Lens Studio 5.0 Beta to its 330,000 developers, who have created as many as 3.5 million Lenses for the platform. Ask your smart spectacles how far away Neptune is from Earth, and the answer will flash in front of your eyes – approximately 2.7 billion miles.
Under Altman, OpenAI had been actively looking for partnerships as it wanted to create an open-source dataset for training large language models – one anyone could access to train an LLM they were building or refining. This would sit alongside much more detailed, often specific, private datasets used to train proprietary AI models and prototypes. Other partnerships include imaging library Shutterstock, for training the Dall-E text-to-image generator, and Salesforce, which is looking to the technology for an AI assistant in conversations with customers.
Pragmatism and the human element
Under Sam Altman's leadership, a consistent theme (at least in outward communication) had been aligning generative AI, and the inevitable pitstop that will be artificial general intelligence (AGI), with the needs of humanity. OpenAI said all the right things, particularly as expectations grew that global regulation of AI would be inevitable at some point. This year, that theme was amplified.
In February, Altman talked in detail about planning for AGI and beyond. The principles OpenAI said it was working with included maximising the good and minimising the bad, so that AGI acts as an amplifier for humanity; widening access to AGI alongside governance mechanisms; navigating the inevitable "massive" risks; and deploying less powerful versions of the technology first, for greater accuracy.
In June, in an interview with Bloomberg, Altman was asked why OpenAI should be trusted. "You shouldn't," was his candid response – people should only trust OpenAI if it proved it was doing the right thing by democratising control. "The board can fire me, and I think that's important. I think the board over time should get democratized to all of humanity. There's many ways that could be implemented," he said. Altman went on to explain why OpenAI had a "weird" hierarchy, saying he believed the technology the company was building should belong to "humanity as a whole".
It is that same structure which led to his firing, with weak arguments offered as justification. Did OpenAI develop AI technology or capabilities that Altman didn't tell the board about?
Altman isn't one to throw caution to the wind. Despite being an investor in Humane, the tech start-up behind the recently unveiled AI Pin wearable, he is still reserving judgement on what it can and cannot do. "Plenty of technology that looked like a sure bet ends up selling for 90 percent off at Best Buy," he said last month, at WSJ's Future of AI conversation.
For now, OpenAI will be steered by former Twitch boss Emmett Shear as interim CEO, taking over from Mira Murati, who held the role for just the weekend. The AI company has gone through three CEOs in the span of a few days. Things may change dramatically by the time you read this.