In an open letter, current and former employees of ChatGPT maker OpenAI urge tech companies to strengthen whistleblower protections, allowing researchers to raise concerns about AI dangers without fear of retaliation.
A group of current and former employees at OpenAI, the company behind ChatGPT, has written an open letter expressing concerns about the potential risks of artificial intelligence (AI) technologies. While acknowledging AI’s benefits, they also warned about the dangers of increased inequality, manipulation, and loss of control over AI systems.
Published on Tuesday, the open letter calls for tech companies to strengthen whistleblower protections, enabling researchers to warn about AI dangers without fearing retaliation, several media outlets reported.
The employees argue that AI companies have financial incentives to avoid oversight and do not adequately inform the public about the risks. They propose a set of principles for AI companies, including allowing employees to raise concerns anonymously and publicly without fear of retaliation. The letter was signed by 13 people, most of them former OpenAI employees, along with two from Google's DeepMind; four current OpenAI employees signed anonymously. It asks companies to stop using 'non-disparagement' agreements that could strip workers of vested equity if they criticize the company after leaving.
A former OpenAI employee, who worked there from 2018 to 2021 and helped develop the techniques behind ChatGPT's success, said he was not afraid to speak out internally. He now worries, however, that the rapid commercialization of the technology pressures companies to overlook risks.
In response, OpenAI stated that it has measures in place for employees to express concerns, including an anonymous integrity hotline. “We’re proud of our track record in providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” the company said. “We agree that rigorous debate is crucial given the significance of this technology and will continue to engage with governments, civil society, and other communities worldwide.”
Following social media backlash over its paperwork for departing employees, OpenAI released all former employees from non-disparagement agreements.
The letter is supported by pioneering AI scientists Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, who have warned about the existential risks of future AI systems.
OpenAI’s plans for ChatGPT
The letter comes as OpenAI begins developing the next generation of the AI technology behind ChatGPT and forms a new safety committee, following the departure of several leaders focused on the safe development of powerful AI systems, including co-founder Ilya Sutskever.
The broader AI research community has long debated the severity of AI's risks and how to balance them against commercialization. That debate contributed to the ousting and swift return of OpenAI CEO Sam Altman last year, amid ongoing distrust of his leadership.
Recently, a product showcase for OpenAI's new GPT-4o model drew criticism from Hollywood star Scarlett Johansson. She expressed surprise at hearing ChatGPT's voice, noting it sounded "eerily similar" to her own, even though she had previously declined Altman's request to use her voice for the system.
Here’s the full text of the letter by OpenAI employees:
We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity.
We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks [1, 2, 3], as have governments across the world [4, 5, 6] and other AI experts [7, 8, 9].
We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.
AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.
So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.
We therefore call upon advanced AI companies to commit to these principles:
1. That the company will not enter into or enforce any agreement that prohibits 'disparagement' or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;

2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise;

3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company's board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;

4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.