Former OpenAI employees push for protection of whistleblowers who flag artificial intelligence risks

Current and former OpenAI employees are calling for the ChatGPT maker and other AI companies to protect whistleblowers who warn about safety risks in AI technology.

In an open letter published on Tuesday, they ask tech companies to strengthen whistleblower protections so that researchers can raise concerns about the development of high-performing AI systems, both internally and publicly, without fear of retaliation.

Daniel Kokotajlo, a former OpenAI employee who left the company earlier this year, said in a statement that tech companies “ignore the risks and impacts of AI” as they race to build AI systems that outperform humans, known as artificial general intelligence.

He wrote: “I left OpenAI because I had lost faith that they would behave responsibly, especially as they pursued artificial general intelligence. They, and others, have bought into the ‘move fast and break things’ approach. That is the opposite of what is needed for a technology this powerful and this poorly understood.”

OpenAI responded to the letter by saying it has already put measures in place for employees to voice concerns, including an anonymous integrity hotline.

The company said: “We are proud of our record in providing the safest and most capable AI systems. We also believe in our scientific approach when it comes to managing risk. We believe that a rigorous debate on the importance of this technology is essential, and we will continue to engage governments, civil society and other communities throughout the world.”

The letter is signed by 13 people, most of them former OpenAI employees, along with two who currently work or have worked at Google’s DeepMind. Four of the signatories are current OpenAI employees who signed anonymously. The letter urges companies to stop forcing workers into “non-disparagement” agreements that can punish them for criticizing the company.

Following a social media backlash, OpenAI said it has released all former employees from the non-disparagement contracts that bound them.

Pioneering AI scientists Yoshua Bengio and Geoffrey Hinton, who together won computer science’s most prestigious award, have signed the open letter, as has Stuart Russell. All three have warned of the dangers that future AI systems may pose to human existence.

Just days after losing several leaders, among them co-founder Ilya Sutskever, who was part of a team focused on safely developing the most powerful AI systems, OpenAI announced that it would begin developing its next generation of AI technology and form a new safety panel. The AI community has been battling for years over how to balance the short-term and longer-term risks of AI against its commercialization.

These conflicts were a major factor in the ouster of OpenAI CEO Sam Altman and his swift return last year, and they continue to fuel mistrust of his leadership.

Scarlett Johansson was outraged when she heard that a ChatGPT voice sounded “eerily” similar to her own, even though she had previously declined Altman’s invitation to lend her voice to the chatbot. AP