San Francisco, United States: A Microsoft "chatbot" designed to converse like a teenage girl was grounded on Thursday after its artificial intelligence software was coaxed into firing off hateful, racist comments online.
Microsoft this week launched the experiment in which the bot nicknamed "Tay" was given the personality of a teenager and designed to learn from online exchanges with real people.
But the plan was sent awry by an ill-willed campaign to teach her bad things, according to the US software colossus.
"It is as much a social and cultural experiment, as it is technical," a Microsoft spokesperson said Thursday in response to an AFP inquiry.
"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."
Tay is a machine learning project -- one in which software can evolve as it is being used -- designed for human engagement. But it got a harsh lesson in what it can learn from people.
As a result, Tay was taken offline for adjustments to the software, according to Microsoft.
"C U soon humans need sleep now so many conversations today," Tay said in its final post on Twitter.
All the offensive Twitter posts by Tay were removed, but many echoed online in the form of captured screen shots.
Tay tweets ranged from support for Nazis and Donald Trump to sexual comments and insults aimed at women and blacks.
Tay's profile on Twitter describes it as AI (artificial intelligence) "that's got zero chill" and gets smarter as people talk to it.
People could chat with Tay on Twitter and other messaging platforms, and even send the software digital photos for comment.
The project was said to target young adults with chatter styled after a teenage girl.
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)