A former OpenAI employee strongly criticizes the company and compares it to the one that built the Titanic

William Saunders, a former safety employee at OpenAI, has compared the AI company to White Star Line, the company that built the Titanic. Saunders, who worked for three years on OpenAI’s superalignment team, explained in a recent interview that he resigned because he didn’t want to “end up working for the AI Titanic”.


During his time at OpenAI, Saunders questioned whether the company more closely resembled NASA’s Apollo program or the Titanic. His concerns centered on OpenAI’s plan to achieve Artificial General Intelligence (AGI) while simultaneously launching paid products. According to Saunders, the company prioritizes creating new “shiny products” over safety and risk assessment, making it less like the Apollo program.

Saunders noted that the Apollo program was characterized by carefully predicting and assessing risks, and by maintaining “sufficient redundancy” to adapt to serious problems, as happened with Apollo 13. By contrast, White Star Line built the Titanic with watertight compartments and promoted it as unsinkable, yet didn’t provide enough lifeboats, a shortcoming that turned the famous sinking into a tragedy.

The former OpenAI employee fears that the company relies too heavily on its current (and, in his view, insufficient) safety measures, and suggested that OpenAI should delay the release of new AI models to investigate potential dangers. Saunders, who led a group focused on understanding the behavior of AI language models, stressed the need to develop techniques to evaluate whether these systems “hide dangerous capabilities or motivations”.


Saunders also expressed his disappointment with OpenAI’s actions to date. He left the company in February, and in May OpenAI disbanded the superalignment team, shortly after releasing GPT-4o, its most advanced AI model. OpenAI’s response to safety concerns and its accelerated pace of development have been criticized by various employees and experts, who warn of the need for greater government oversight to prevent future catastrophes.

In early June, a group of current and former employees of DeepMind and OpenAI published an open letter arguing that current oversight regulations are insufficient to prevent a disaster for humanity. In addition, Ilya Sutskever, OpenAI co-founder and former chief scientist, resigned to start Safe Superintelligence, a startup that will focus on AI research with safety as its priority.


Author: Pedro Domínguez

Publicist and audiovisual producer in love with social networks. I spend more time thinking about which videogames I will play than playing them.