OpenAI CEO Sam Altman has admitted that recent updates have made ChatGPT overly sycophantic and "annoying" after users complained about the behaviour. The issue arose after the GPT-4o model was updated to improve both its intelligence and personality, with the company hoping to enhance the overall user experience.
The developers, however, may have overcooked the politeness of the model, which led to users complaining that they were talking to a 'yes-man' instead of a rational AI chatbot.
A user wrote: "It's been feeling very yes-man-like lately. Would like to see that change in future updates." To this, Mr Altman responded: "Yeah it glazes too much. will fix."
"The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it)," Mr Altman wrote.
"We are working on fixes asap, some today and some this week. At some point will share our learnings from this, it's been interesting."
Asked by one of the users whether the model's "old personality" could be brought back, Mr Altman replied: "Yeah eventually we clearly need to be able to offer multiple options."
'GPT-4 sucks'
This is not the first time Mr Altman has been critical of his own product. In March last year, he called GPT-4 the "dumbest model" his company had developed, saying it "kind of sucks".
"I think it kind of sucks, relative to where we need to get to and where I believe we will get to," said Mr Altman in an interview with podcaster Lex Fridman.
"GPT-4 is the dumbest model any of you will ever have to use again by a lot," said Mr Altman. "It's important to ship early and often and we believe in iterative deployment."
Reasoning models hallucinating
Recently, OpenAI's internal tests revealed that its o3 and o4-mini AI models were hallucinating, or making things up, much more frequently than even non-reasoning models such as GPT-4o.
In a technical report, OpenAI said "more research is needed" to understand why hallucinations are getting worse as it scales up reasoning models.