Anthropic's New AI Tool Analyses Your Screen And Acts On Your Behalf

The new capability, called "computer use," can interpret what a user is seeing on their computer and - with permission - take actions for them by browsing the web, clicking buttons and typing.

Anthropic also introduced an updated Claude 3.5 Sonnet model, with improved reasoning and coding

Artificial intelligence startup Anthropic is releasing a new tool that can understand what's happening on a user's computer screen and complete a range of online tasks for them - the latest example of tech companies expanding from chatbots that offer pithy responses to so-called AI agents that can act on a person's behalf.

The new capability, called "computer use," can interpret what a user is seeing on their computer and - with permission - take actions for them by browsing the web, clicking buttons and typing, Anthropic said Tuesday. The company is releasing a beta version to developers using its Claude technology, after testing the service with a limited set of enterprise customers in recent weeks.

A growing number of AI companies are investing in building agents that field tasks for users with minimal human supervision, an attempt to fulfill the promise of artificial intelligence to radically increase productivity in our personal and professional lives. On Monday, Microsoft Corp. launched a set of agent tools designed to send emails and manage records for workers. Salesforce Inc. touted its enterprise agent apps for customer service at its Dreamforce event last month.

Anthropic is taking a different approach than many other companies have with agent tools. Rather than integrate with various applications on the backend, its technology can process what's happening on a user's computer screen in real time. The company said this method creates a more intuitive experience.

"It's going to be the first model ever to be able to use a computer the way that people do," Jared Kaplan, co-founder and chief science officer at Anthropic, said in an interview with Bloomberg News.

In a pre-recorded demo, an Anthropic employee used the tool to figure out the logistics of taking a friend for a morning hike with views of the Golden Gate Bridge. Anthropic's AI agent was able to search on Google to find hikes, map a route, check the sunrise time and send a calendar invite with details including what kind of clothing to wear - all with no human input beyond an initial prompt.

Anthropic has positioned itself as a safety-conscious AI company, but the new tool might invite added scrutiny. Technology that can access a user's screen activity comes with heightened safety and security concerns. When Microsoft, for example, unveiled its AI-enabled "Recall" feature that created a record of everything users do on their computers, a backlash ensued over worries that the software could be vulnerable to hacking. Microsoft ended up relaunching the product with security upgrades.


The use of AI agents also raises the stakes for any errors. It's one thing for an AI system to hallucinate a response in a chat window; it's another for it to make a mistake while acting on a person's behalf, online or offline.

Kaplan said Anthropic has red-teamed, or pressure-tested, the feature for vulnerabilities and set certain guardrails around actions that the tool is allowed to perform. For example, the company said users will be "nudged away" from activities such as engaging on social media, creating accounts and interacting with government websites. Additionally, developers can put in place restrictions on when the tool can access a user's computer. They can also add human oversight at various steps in the process.


Though the tool can handle a range of tasks on a computer, it still struggles with some actions that humans can do easily, such as scrolling, dragging and zooming, the company said in a blog post.

"The model is not perfect. It still makes mistakes," said Kaplan. "It's not perfectly reliable by any means yet. We wanted to experiment with developers slowly and understand what feedback and risks emerge so that we're prepared and can improve safety training in any areas where we find that there are potentials for abuse."

Early partners, including Canva, Asana and Replit, have already been using the tool ahead of launch in areas such as graphic design, project management and coding, the company said. In the future, Anthropic may integrate some of the computer use capabilities into its consumer products, Kaplan said.


As part of Tuesday's release, Anthropic introduced a new, upgraded Claude 3.5 Sonnet model that has improvements in areas such as coding and reasoning. The company also launched a more capable version of its cheaper and faster model, Claude 3.5 Haiku.
