A coalition of current and former employees from leading AI companies, including Microsoft-backed OpenAI and Alphabet’s Google DeepMind, issued an open letter on Tuesday expressing concerns about the risks associated with emerging AI technology.
The group, consisting of 11 current and former OpenAI employees and two from Google DeepMind, criticized the financial motives of AI companies, arguing that these motives obstruct effective oversight. “We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter stated.

The letter highlighted potential dangers of unregulated AI, such as the spread of misinformation, the loss of control of autonomous AI systems, and the deepening of existing inequalities, which it warned could ultimately lead to “human extinction.” The researchers pointed to instances where image generators from companies including OpenAI and Microsoft produced images containing voting-related disinformation, despite policies prohibiting such content.
According to the letter, AI companies have only “weak obligations” to share information with governments about the capabilities and limitations of their systems, and these firms cannot be relied upon to share it voluntarily.
The open letter is the latest in a series of warnings about the safety of generative AI technology, which can rapidly and inexpensively produce human-like text, imagery, and audio. The group urged AI firms to establish a process for current and former employees to raise risk-related concerns, and to stop enforcing confidentiality agreements that prohibit criticism.
