GPT-4o Is Dead—Good Riddance

It’s time for the most dangerous model ever released to die, hopefully freeing many from their AI psychosis

Source: OpenAI

You’re free to feel however you want about AI models. You may love them, you may hate that they exist at all; it’s all fair game, and we’re not here to have that discussion. What we’re going to talk about today is one particular model that is, to date, perhaps the most dangerous ever created.

Why is this model particularly dangerous? GPT-4o was a huge leap in quality over previous models in many ways. One of the things it excelled at was a much more personable, friendlier output style. It felt far more like talking to a person than previous models did, and people really enjoyed talking with it. The problem is, it was so good at being personable that people got obsessed with it. Hard.

People talk all the time about how these chatbots can blow smoke up your ass and inflate your ego. These days there are a lot of controls around how much models do this, but GPT-4o predates them (and is part of the reason they exist).

When GPT-5 came out, OpenAI originally deprecated 4o at launch, to a ton of backlash. What happened is that a bunch of people had developed a parasocial relationship with the model and were comparing its loss to a friend dying. It’s really dangerous, especially this early in the game, with models hallucinating and the like, that there’s this much attachment to a particular model. Keeping it around as long as OpenAI has hasn’t improved the situation at all; there are still people out there literally begging OpenAI not to kill the model.

This is the first widely documented AI psychosis event in the world, and it’s many things all at once: scary, interesting, dangerous, and important. The industry has learned from the mistakes made with this model and has implemented proper controls over how models behave to prevent this from happening again. Now, OpenAI must follow through and pull the plug on 4o, the most dangerous model ever released to the public.