Big Tech companies are aggressively pursuing investments and alliances with artificial intelligence start-ups through their cloud computing arms, raising regulatory questions over their role as both suppliers and competitors in the battle to develop “generative AI”.
Google’s recent $300mn bet on San Francisco-based Anthropic is the latest in a string of cloud-related partnerships struck between nascent AI groups and the world’s biggest technology companies.
Anthropic is part of a new wave of young companies developing generative AI: sophisticated computer programs that can parse and write text and create art in seconds, rivalling systems being built in-house by far larger companies such as Google and Amazon.
The technology behind products including OpenAI’s ChatGPT, a chatbot that can converse with users through text, requires enormous amounts of computing power — expensive infrastructure controlled by the same handful of tech giants.
“[This] is exactly the type of scenario that the Federal Trade Commission has said they’re going to focus on,” said William Kovacic, a former Republican chair of the US antitrust agency, and a professor of antitrust law at George Washington University.
“There is a heightened concern about how the large information services firms are limiting opportunities for new generations of competitors to come forward,” he said, adding that they would probably be paying a “great deal of attention” to these deals. The FTC declined to comment.
These partnerships give the owners of the cloud insight into the talent and technology inside start-ups, while allowing the smaller companies to sidestep the vast capital investments that would otherwise be necessary to build their own data infrastructure. AI start-ups that need to train models have little choice but to rush into the arms of large companies offering essential cloud computing at discounted rates, along with access to the large amounts of capital they require.
“Clouds love lock-in, they force people into massive multi-year commitments,” said Jonathan Frankle, co-founder of MosaicML, an AI company that is trying to commoditise the cloud for its corporate clients that need AI models.
After the Financial Times first reported the Google-Anthropic investment gave the search giant a 10 per cent stake in the company, the two companies announced a separate cloud partnership.
The arrangement echoes the $1bn cash-for-computing investment that Microsoft made in OpenAI three years ago. In January, Microsoft announced a further “multiyear, multibillion-dollar” investment in OpenAI estimated at $10bn.
The deal cemented Microsoft’s position as exclusive infrastructure provider to one of the world’s leading AI start-ups. Chief executive Satya Nadella claimed that Microsoft had built a supercomputer to handle the OpenAI work, and that it could now handle some AI calculations at half the cost of its rivals. Reducing cost is key for the compute-intensive development of large language models: estimates put the cost of running ChatGPT, assuming 10mn monthly users, at $1mn per day.
Meanwhile, Amazon’s most prominent alliance among the AI start-ups so far is Stability AI, which in November declared AWS its “preferred cloud partner” for building and training its media-generation models.
The partnership includes a commitment by Stability to use Amazon’s Trainium chips, custom-designed processors that rival Google’s Tensor Processing Unit. The deal gives Amazon, which is seen by some in the AI industry as lagging behind Microsoft and Google in terms of AI capabilities, a flagship partner to showcase its cloud platform. The deal is not exclusive, according to one person familiar with the terms, leaving Stability free to potentially work with alternative cloud providers such as Google Cloud. Google also said its cloud deal with Anthropic was non-exclusive.
However, building and deploying large language models with billions of parameters, such as GPT or Google’s PaLM, requires stable hardware, making it difficult to switch between platforms once training has begun, according to AI researchers.
Historically, this type of dependency has attracted the attention of antitrust regulators in other areas including telecommunications, according to Kovacic. “The fact that your supplier of a key service is also your competitor is an inherently awkward and tension-filled relationship.”
Generative AI start-ups fundamentally need a reliable provider that can supply computing infrastructure at the volume and frequency they require, which quickly pushes them into Big Tech cloud partnerships.
Google and Amazon have close relationships with other well-funded AI start-ups building their own language models, including California-based Cohere and Israeli company AI21 Labs, whose co-founder Yoav Shoham has sold two of his previous companies to Google.
Cloud management company YellowDog, which helps customers switch between cloud services, says it knows of several alliances between cloud providers and nascent AI companies that have yet to launch products, struck at a stage when the start-ups are willing to tie themselves to a supplier and give up equity.
“Some academics that want to move into their own start-up, their first conversation is with cloud providers before they even recruit developers because they know it’s impossibly expensive. It’s key,” said Tom Beese, chief executive of YellowDog. He declined to name any of the companies involved because of non-disclosure agreements signed with Big Tech cloud providers.
Such deals could quickly attract regulatory scrutiny. Legislation aimed at the so-called self-preferential behaviour of tech giants advanced in the US Congress last year, seeking to prevent large online platforms from using their influence in one field to boost their other products.
“These platforms use their dominance to unfairly disadvantage their rivals,” said US Democratic senator Amy Klobuchar in a statement last year. “All at the expense of competition and consumers.”
Additional reporting by Tim Bradshaw in London and Richard Waters in San Francisco