
It’s been an unsettling period for the artificial intelligence industry, with OpenAI at the center of attention. Employees of leading AI companies such as OpenAI and Google’s DeepMind have signed an open letter demanding a ‘right to warn’ about AI development at their companies that could pose significant risks. There are several parts to this demand, and I’m here to walk you through them.

The open letter, signed by 7 former and 4 current employees of OpenAI along with 1 current and 1 former employee of Google DeepMind (one of whom formerly worked at Anthropic), was published on the 4th of June 2024. It received endorsements from influential figures in the AI field, including Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.

The root of the concerns:

AI is a transformative technology that is reshaping the world. Alongside its benefits, however, have come rising cyber crime, misinformation, manipulation, and the risk of losing control of autonomous AI systems. These escalating risks are among the concerns raised by the signatories (some of whom were part of OpenAI’s recently dismantled safety team), who have come to believe that their companies are not doing as much as they claim to ensure the safety of their technology.

According to Daniel Kokotajlo, “I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence.” He and his colleagues allege that, in a bid to maintain their leading position and financial gains, their employers are engaging in dangerous AI research and development with little regard for safety.

They claim that vital information about potential risks is being withheld from regulators, policymakers, and the general public. Previous attempts to raise these complaints through internal channels fell on deaf ears, so they now demand the right to speak openly about these issues.

The complicated demand:

You may wonder why these employees can’t simply speak up, or why they need such a protocol at all. Well, it’s not that simple.

These employees seek the right and freedom to share their concerns (without divulging any intellectual property or trade secrets) with company boards, regulators, independent organisations with relevant expertise and, if necessary, directly with the public, without fear of retaliation.

The fear of backlash or job loss is the reason why 6 of the 13 signatories chose to remain anonymous. They have requested the establishment of an anonymous reporting mechanism to enable them and other safety-conscious employees to raise complaints about practices that do not align with safety regulations.

Additionally, the open letter calls for the removal of non-disparagement agreements. This would allow employees to criticise their companies over risk-related concerns without fear of losing financial benefits, such as equity tied to the company.

OpenAI, however, argues that it has already created channels for concerns to be raised. It says it does not enforce its non-disparagement agreements and would never harm its employees’ financial benefits, and it insists that it is building safe products. However, Geoffrey Hinton has countered that companies have little incentive to stick to their publicly declared commitments if the public remains unaware of their actions.

The broader implications:

For cybersecurity professionals, integrity and transparency are key to ensuring the security of any technology. The AI era has brought unprecedented security challenges, and the allegations raised by these employees are deeply concerning. For advocates of safer AI tools and products, it would be a great disappointment if the very companies pioneering AI technologies were neither prioritizing safety nor able to be held accountable.

The success of this open letter may provide a clearer view of these companies’ practices and pave the way for stronger regulations and frameworks to govern the responsible development of AI.

About the author: