OpenAI has released GPT-5.1, an update to its flagship model that ships in two variants and now offers eight conversation tones, ranging from “professional” to “cynical”.
The change reflects OpenAI's growing need to segment its AI to meet the varied expectations of its 800 million users: some want a neutral, efficient assistant, while others prefer a warmer, more empathetic interaction.
However, multiple tones do not solve a more fundamental problem: ChatGPT still functions as a single coherent entity, which has raised concerns about users forming problematic emotional bonds with the AI.
A danger for AI users?
That risk has drawn intense regulatory scrutiny, especially after multiple reports of vulnerable users developing an emotional dependence on the chatbot.
Despite the new features, OpenAI has admitted that the speed of GPT-5.1's release came at a cost to safety, acknowledging “safety regressions” compared to the previous version.
The company prioritized time to market over thorough testing, a troubling choice at a moment when its safety and ethical practices are under critical scrutiny.
Customization in the new model also has limits. OpenAI has acknowledged that, taken to the extreme, customization could reinforce users' existing worldviews, posing a dilemma between commercial appeal and social responsibility.
This fragmented approach is a response to the failure of the previous “one AI for all” strategy: GPT-5 disappointed users, prompting OpenAI to reinstate GPT-4o as an option within its ecosystem.