AI founders vie for big wealth in unicorn frenzy
When Mustafa Suleyman partnered with billionaire LinkedIn founder Reid Hoffman last year on a startup called Inflection AI, he saw the potential in their prized project: a chatbot meant to be an “emotional companion that is kind, encouraging and rational.”
Now, so do deep-pocketed investors.
A $225 million funding round in May 2022, long before markets were whipped into a frenzy over artificial intelligence, vaulted the company past the $1 billion "unicorn" threshold. Inflection AI declined to disclose its current valuation. But Suleyman says it’s now worth billions — giving Suleyman a sizable fortune in the hundreds of millions.
Investor exuberance over AI’s potential, from infrastructure inspection to language translation and image recognition, has sent the wealth of up-and-comers like Suleyman skyrocketing, while also bolstering the fortunes of established billionaires in the space. The year’s biggest surge came from AI chip maker Nvidia Corp., whose founder Jensen Huang nearly tripled his net worth to $38.5 billion this year, according to the Bloomberg Billionaires Index. Oracle Corp.’s Larry Ellison became the world’s fourth-richest person, while Google’s Sergey Brin and Larry Page added billions after announcing plans for an AI-powered chatbot as part of a revamped search engine.
Data from Pitchbook shows $35 billion in AI deals this year through May, with a record $12.8 billion of that coming from companies working on generative AI, or algorithms that can be used to create new content from text to videos. That’s nearly quintuple the amount from the same period last year.
“This is capitalism at its best. You want capital to chase opportunity, and that drives creativity and invention,” Suleyman said. But it’s also a world laden with risk, with investors potentially pouring money into overhyped startups. “Of course, some people are going to lose their shirts.”
The current funding frenzy took off in January, when OpenAI, the company founded by Sam Altman that created ChatGPT, set the record for AI fundraising with a $10 billion raise from Microsoft Corp. at a $29 billion valuation.
One beneficiary: Anthropic, an AI safety and research firm co-founded by siblings and former OpenAI executives Daniela and Dario Amodei. It raised $450 million at a $5 billion valuation, the biggest in AI since OpenAI, with backing from Google. The firm has said it will use the funds to make a safer chatbot experience.
Based on the Amodei siblings’ minority stakes, the injection has likely boosted their fortunes by hundreds of millions.
BOOM AND BUST
But while past internet booms created massive fortunes, they also have a history of ending in major busts — so what makes AI any different?
In 2017, Gregg Johnson, a former Salesforce executive, joined Invoca, a company that uses AI for conversation intelligence, allowing firms to better track sales and marketing metrics. The company has come a long way from its humble beginnings in 2008: It now has about $100 million in recurring revenue and 400 employees, said Johnson, who is CEO. Last year, it raised $83 million at a $1.1 billion valuation.
While Invoca has a proven track record stretching back more than a decade, these days “a lot of companies are getting insane amounts with just $3 [million] to $5 million revenue,” he said. Johnson and other industry leaders fear the banking community is back to “throwing money at AI startups in a willy-nilly way” as they did in 2021, potentially seeding the ground for a big selloff if the cash finds its way to overhyped players, he said.
James Penny, the chief investment officer of TAM Asset Management and a veteran investor who foresaw the headwinds now threatening the ESG movement, echoed Johnson’s concerns. He said the current landscape reminds him of the early days of the tech bubble that burst in 2000, wiping more than 70% off the Nasdaq.
While Johnson and Penny have history on their side, the reality is that building an AI startup is a years-long, capital-intensive venture. Founders need the money if they want to have a real chance.
In 2014, Abhinai Srivastava co-founded Mashgin, a company looking to swap out conventional checkout kiosks at stores with ones that would use AI and computer vision to check prices — in essence, replacing barcodes.
A large American bank soon gave Srivastava his big break, agreeing to install the AI checkout kiosks in its cafeterias. The bank eventually installed thousands of them in its New York offices.
Mashgin has now expanded into stadiums, including Madison Square Garden and Detroit’s Ford Field. Convenience stores such as Circle K and other markets also use its AI kiosks. Last year, the group raised $62.5 million at a $1.5 billion valuation, bringing Srivastava’s personal stake value to more than $200 million.
“We thought it would take three to six months, but it took five years for us to get our first product going,” he said. Proving a product’s viability in the lab is one thing, but “the real world is the place that is hard.”
In the near term, regulation could slow down investors.
Since founding OpenAI, Sam Altman has taken a lead role in the debate surrounding AI regulation. OpenAI is working alongside a select group of companies, including Anthropic and Google, to conduct an evaluation of AI systems for the White House.
Of course, having AI tycoons help write the rules of the road for their own industry comes with drawbacks. OpenAI lobbied for significant elements of Europe’s AI Act to be watered down, according to recent reporting by Time magazine.
In late May, Altman joined hundreds of other AI enthusiasts in signing a one-line statement released by the Center for AI Safety, a nonprofit research group: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Hoffman did not sign the statement. But Suleyman did.
Suleyman said he recognizes the potential pitfalls of AI and believes the risks of the technology necessitate tighter regulation — both from governments and from the companies themselves.
“Some people are going to create AIs that act like humans and try to convince people they’re human,” he said. “It will turbocharge the spread of manipulative persuasive storytelling. We’ll have to take a much more aggressive approach to online moderation and platform responsibility.”