Inside OpenAI’s Raid on Thinking Machines Lab
If anyone ever makes an HBO Max series about the AI industry, the events of this week will make quite the episode.
On Wednesday, OpenAI's CEO of Applications, Fiji Simo, announced that the company had rehired Barret Zoph and Luke Metz, co-founders of Mira Murati's AI startup, Thinking Machines Lab. Zoph and Metz left OpenAI at the end of 2024.
We reported last night on two competing accounts forming about what led to the departures, and have since learned new information.
A source with direct knowledge says Thinking Machines management believed Zoph engaged in an incident of serious misconduct while at the company last year. That incident broke Murati's trust, the source says, and disrupted the pair's working relationship. The source also claimed that Murati fired Zoph on Wednesday, before she knew he was leaving for OpenAI, because of what the company claimed were issues that arose after the alleged misconduct. Around the time the company learned Zoph was returning to OpenAI, Thinking Machines was internally concerned about whether he had shared confidential information with competitors. (Zoph did not respond to several requests for comment from WIRED.)
Meanwhile, in a memo to employees on Wednesday, Simo claimed the hires had been in the works for weeks and that Zoph told Murati on Monday, two days before he was fired, that he was considering leaving Thinking Machines. Simo also told employees that OpenAI does not share Thinking Machines' concerns about Zoph's ethics.
In addition to Zoph and Metz, another former OpenAI researcher who worked at Thinking Machines, Sam Schoenholz, is rejoining the ChatGPT creator, according to Simo's announcement. At least two other Thinking Machines employees are expected to join OpenAI in the coming weeks, according to a source familiar with the matter. Technology reporter Alex Heath was the first to report the additional hires.
A separate source familiar with the matter pushed back on the perception that the recent personnel changes were entirely related to Zoph. “This has been part of a long discussion at Thinking Machines. There were discussions and misunderstandings about what the company wanted to build – it was about the product, the technology and the future.”
Thinking Machines Lab and OpenAI declined to comment.
In the aftermath of these events, we've heard from several researchers at leading AI labs who say they're exhausted by the constant drama in their industry. This specific incident is reminiscent of OpenAI's brief ouster of Sam Altman in 2023, known within OpenAI as “the blip.” Murati played a key role in that event as the company's then-chief technology officer, according to reporting from The Wall Street Journal.
In the years since Altman's ouster, the drama in the AI industry has continued, with the departure of co-founders and senior leaders at several major AI labs, including xAI's Igor Babuschkin, Safe Superintelligence's Daniel Gross, and Meta's Yann LeCun, who founded Facebook's long-standing AI lab, FAIR.
Some may argue that the drama is justified for an emerging industry whose spending is contributing to America's GDP growth. And if you buy the idea that one of these researchers might deliver a few breakthroughs on the path to AGI, it's probably worth following where they go.
That said, many researchers entered the field before ChatGPT's breakout success and seem surprised that their industry is now the subject of near-constant scrutiny.
As long as researchers can keep raising billion-dollar seed rounds simply because they can, we predict the AI industry's power shake-ups will continue apace. HBO Max writers, lock in.
How AI Labs Train Agents to Do Your Job
People in Silicon Valley have mused about AI displacing jobs for decades. In recent months, efforts to actually get AI to do economically valuable work have become much more sophisticated.
AI agents are only as capable as the data labs use to train them. Last week, WIRED reported that OpenAI has asked third-party contractors from the firm Handshake to upload examples of their real work from previous jobs to evaluate OpenAI's agents. The companies ask contractors to scrub these documents of all confidential data and personally identifiable information. While it's possible that some company secrets might slip through, that's probably not what OpenAI is after (though the company could get into serious trouble if that happens, experts say).