Criminals Are Vibe Hacking With AI To Carry Out Ransoms At Scale: Anthropic

  • August 27, 2025
  • Roubens Andy King

Despite “sophisticated” guardrails, AI infrastructure firm Anthropic says cybercriminals are still finding ways to misuse its AI chatbot Claude to carry out large-scale cyberattacks. 

In a “Threat Intelligence” report released Wednesday, members of Anthropic’s Threat Intelligence team, including Alex Moix, Ken Lebedev and Jacob Klein, shared several cases in which criminals misused the Claude chatbot, with some attacks demanding over $500,000 in ransom.

They found that the chatbot was used not only to give criminals technical advice, but also to directly execute hacks on their behalf through “vibe hacking,” allowing them to carry out attacks with only basic knowledge of coding and encryption.

In February, blockchain security firm Chainalysis forecast that crypto scams could have their biggest year in 2025, as generative AI has made attacks more scalable and affordable.

Anthropic found one hacker who had been “vibe hacking” with Claude to steal sensitive data from at least 17 organizations, including healthcare providers, emergency services, and government and religious institutions, with ransom demands ranging from $75,000 to $500,000 in Bitcoin.

A simulated ransom note demonstrates how cybercriminals leverage Claude to make threats. Source: Anthropic

The hacker trained Claude to assess stolen financial records, calculate appropriate ransom amounts and write custom ransom notes to maximize psychological pressure.

While Anthropic later banned the attacker, the incident reflects how AI is making it easier for even the most basic-level coders to carry out cybercrimes to an “unprecedented degree.”

“Actors who cannot independently implement basic encryption or understand syscall mechanics are now successfully creating ransomware with evasion capabilities [and] implementing anti-analysis techniques.”

North Korean IT workers also used Anthropic’s Claude

Anthropic also found that North Korean IT workers have been using Claude to forge convincing identities, pass technical coding tests, and even secure remote roles at US Fortune 500 tech companies. They also used Claude to prepare interview responses for those roles.

Claude was also used to conduct the technical work once hired, Anthropic said, noting that the employment schemes were designed to funnel profits to the North Korean regime despite international sanctions.

Breakdown of tasks North Korean IT workers used Claude for. Source: Anthropic

Earlier this month, a North Korean IT worker was counter-hacked, revealing that a team of six shared at least 31 fake identities, obtaining everything from government IDs and phone numbers to purchased LinkedIn and UpWork accounts, to mask their true identities and land crypto jobs.

Related: Telegram founder Pavel Durov says case going nowhere, slams French gov

One of the workers supposedly interviewed for a full-stack engineer position at Polygon Labs, while other evidence showed scripted interview responses in which they claimed to have experience at NFT marketplace OpenSea and blockchain oracle provider Chainlink.

Anthropic said its new report aims to publicly document incidents of misuse in order to assist the broader AI safety and security community and to strengthen the wider industry’s defenses against AI abuse.

It said that despite implementing “sophisticated safety and security measures” to prevent the misuse of Claude, malicious actors have continued to find ways around them.

Magazine: 3 people who unexpectedly became crypto millionaires… and one who didn’t