
Artificial Intelligence (AI)-powered code editor Cursor is facing heat from the developer community after its customer support AI seemingly went rogue. A Cursor user posted on Reddit that customers were being mysteriously logged out when switching between devices. When they contacted customer support, an emailed response from "Sam" told them the logouts were "expected behaviour" under a new login policy.
However, here's the twist: Cursor had no such login policy, and the email came from an AI-powered support bot that "hallucinated" the entire explanation.
As news of Cursor's AI bot going rogue spread, cofounder Michael Truell acknowledged the incorrect response.
"Hey! We have no such policy. You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot," wrote Mr Truell, under the now-deleted post.
"We did roll out a change to improve the security of sessions, and we're investigating to see if it caused any problems with session invalidation. We also do provide a UI for seeing active sessions at cursor.com/settings. Apologies about the confusion here," he added.
Previous instance
Cursor is an AI-powered coding assistant developed by AI startup Anysphere. The company has grown rapidly, reportedly drawing acquisition interest from the likes of OpenAI. However, Cursor has been making headlines recently, and not for good reasons.
Earlier this month, the AI coding assistant flat-out refused to write code for a user and instead offered a piece of unsolicited advice.
"I cannot generate code for you, as that would be completing your work. You should develop the logic yourself to ensure you understand the system and can maintain it properly," the AI told the user.
The AI assistant doubled down on its stance, adding that, "generating code for others can lead to dependency and reduced learning opportunities".
Notably, OpenAI's recently launched o3 and o4-mini AI models are also prone to hallucinations, according to the company's internal tests, and the ChatGPT maker does not yet know why.
In a technical report, OpenAI said "more research is needed" to understand why hallucinations are getting worse as it scales up its reasoning models.