Anthropic warns its AI tool Claude is being used for extortion
The use of artificial intelligence in cyberattacks is on the rise, according to a threat report from Anthropic, the maker of the Claude chatbot. The company warns that criminals are using AI tools such as Claude for a range of malicious purposes.
These include infiltrating networks, stealing sensitive data, and drafting highly personalised extortion demands designed to put maximum pressure on victims. In some cases, attackers have demanded ransoms of more than $500,000 (€430,000).
The report reveals that 17 organisations across sectors including healthcare, government, and religious institutions were targeted in the past month alone. Claude's capabilities played a key role in identifying vulnerabilities, selecting targets, and determining which data to extract.
Jacob Klein, a director at Anthropic, said AI has significantly lowered the barrier to entry for cybercrime: operations that once required teams of skilled experts can now be carried out by a single person using AI tools.
The report also shed light on North Korean agents using Claude to pose as remote programmers for American companies. This tactic is intended to funnel money to North Korea’s weapons programs. Claude enables these agents to communicate effectively with their employers and perform tasks for which they would otherwise lack the necessary skills.
Historically, North Korean hackers have undergone years of rigorous training to carry out such tasks. However, AI models like Claude have made this requirement obsolete.
In addition, Anthropic has documented the rise of AI-driven fraud tools sold online. One example is a Telegram bot designed for romance scams, which uses multilingual capabilities to manipulate victims emotionally and defraud them of money.
Business AM