By Michael Roberts
This blog was originally published by Michael Roberts on Tuesday, 21 November 2023. Since it was written, Sam Altman has returned to OpenAI, but the content remains very relevant and provides an informative analysis of developments in the AI industry.
**************
The shock sacking of Sam Altman, a co-founder of OpenAI, by his own board reveals the contradictions emerging in the development of ChatGPT and the other ‘generative artificial intelligence’ models driving the AI revolution.
Will AI and these large language models (LLMs) bring wonderful new benefits to our lives, reducing hours of toil and raising our knowledge to new heights of human endeavour? Or will generative AI lead to the increased domination of humanity by machines and even greater inequality of wealth and income, as the owners and controllers of AI become the ‘winners who take all’ while the rest of humanity is ‘left behind’?
Altman wanted OpenAI to be a big money-making operation
It seems that the OpenAI board sacked their ‘guru’ leader Altman because he had ‘conflicts of interest’, ie Altman wanted to turn OpenAI into a huge money-making operation backed by big business (Microsoft is the current financial backer), while the rest of the board continued to see OpenAI as a non-profit operation aiming to spread the benefits of AI to all, with proper safeguards on privacy, supervision and control.
OpenAI was originally created as a non-profit venture intended to benefit humanity, not shareholders. But it seems that the carrot of huge profits was driving Altman to change that aim. Even before this, Altman had built a separate AI chip business that made him rich. And under his direction, OpenAI had developed a ‘for-profit’ business arm, enabling the company to attract outside investment and commercialise its services.
As the FT put it: “this hybrid structure created tensions between the two ‘tribes’ at OpenAI, as Altman called them. The safety tribe, led by chief scientist and board member Ilya Sutskever, argued that OpenAI must stick to its founding purpose and only roll out AI carefully. The commercial tribe seemed dazzled by the possibilities unleashed by ChatGPT’s success and wanted to accelerate (ie make money). The safety tribe appeared to have won out for now.”
Altman’s exalted status in the AI industry
Altman is not a scientist, but it seems he is a great ideas man, an entrepreneur in the tradition of Bill Gates at Microsoft. Under Altman, OpenAI has been transformed in eight years from a non-profit research outfit into a company reportedly generating $1bn of annual revenue. Customers range from Morgan Stanley to Estée Lauder, Carlyle and PwC.
That success has made Altman the de facto ambassador for the AI industry, despite his lack of a scientific background. Earlier this year, he embarked on a global tour, meeting world leaders, start-ups and regulators in multiple countries. Altman spoke at the Asia-Pacific Economic Cooperation (Apec) summit in San Francisco just a day before he was sacked.
Altman apparently has “a ferocious ambition and ability to corral support”. He has been described as “deeply, deeply competitive” and a “mastermind”, with one acquaintance saying there is no one better at knowing how to amass power. As a result, he has a ‘cult’ following among OpenAI’s 700-plus employees, most of whom signed a letter demanding his reinstatement and the resignation of the ‘safety tribe’ on the board.
Financial losses from developing ChatGPT
OpenAI has lost half a billion dollars developing ChatGPT, so before the split on the board it was about to launch a sale of shares valuing the company at $86bn. That would have continued the non-profit approach. Now, with Altman and others joining Microsoft as employees, it seems that OpenAI may be swallowed up by Microsoft for a pittance, ending the company’s ‘non-profit’ mission.
What all this shows is that those who think the AI revolution and information technology will be developed by capitalist companies for the benefit of all are deluded. Profit comes first and last – whatever the impact AI technology has on humanity’s safety, security and jobs over the next few decades.
Some fear that AI will become ‘God-like’ ie a superintelligence developing autonomously, without human supervision and eventually controlling humanity. So far, AI and LLMs do not exhibit such ‘superintelligence’ and, as I have argued in previous posts, cannot replace the imaginative power of human thinking. But they can hugely increase productivity, lower hours of toil and develop new and better ways of solving problems if put to social use.
AI should be under public ownership and democratic control
What is clear is that AI development should not be in the hands of ‘ambitious’ entrepreneurs like Altman or controlled by the mega tech giants like Microsoft. What is needed is an international, non-commercial research institute akin to Cern in nuclear physics. If anything requires public ownership and democratic control in the 21st century, it is AI.
From the blog of Michael Roberts. The original, with all charts and hyperlinks, can be found here.
The featured image shows Sam Altman in 2019. The image is from Wikimedia Commons. Attribution: TechCrunch, CC BY 2.0, via Wikimedia Commons.