AI and Blockchain Security: Alarming Insights
A recent report from Anthropic offers alarming insights into the intersection of artificial intelligence and blockchain security. Drawing on smart contract breaches from the past five years, the report finds that AI agents can match or exceed expert human hackers at exploiting vulnerabilities in contracts on prominent blockchains. The findings, detailed in a Monday release, come from an evaluation of ten advanced AI models, including Llama 3, Claude Sonnet 3.7, Claude Opus 4, GPT-5, and DeepSeek V3. Against a dataset of 405 historical breaches, the AI agents engineered successful attacks on 207 of the contracts, simulating roughly $550 million in stolen digital assets.
Rapid Exploitation of Vulnerabilities
The study underscores how quickly automated systems can not only exploit known vulnerabilities but also uncover new weaknesses that developers may have overlooked. The revelation adds context to an earlier disclosure by Anthropic describing how Chinese hackers used Claude Code to carry out what was termed the first AI-aided cyberattack.
Industry Concerns
Industry experts are voicing concern over the report's implications. David Schwed, COO of SovereignAI, noted how easily malicious actors could leverage technologies similar to those already found in Application Security Posture Management (ASPM) tools such as Wiz Code and Apiiro, as well as in conventional Static and Dynamic Application Security Testing (SAST and DAST) scanners.
"If the vulnerabilities are publicly available through resources like Common Vulnerabilities and Exposures or audit reports, AI can study and exploit them against current smart contracts with minimal effort," Schwed said.
Financial Implications of Exploits
Anthropic’s investigation also compared the revenue each model generated from its exploits, arguing that the financial outcome of a hack carries more weight than a simple count of successful breaches. The analysis focused on 34 contracts exploited after March 2025, a cutoff that likely postdates the models’ training data; while monetary returns do not fully capture attack success rates, they do reflect the incentives of would-be criminals.
In-Depth Testing and Findings
In-depth tests were performed on a zero-day dataset of 2,849 smart contracts drawn from the more than 9.4 million deployed on the Binance Smart Chain. Notably, Claude Sonnet 4.5 and GPT-5 each identified two undisclosed vulnerabilities, worth nearly $3,700 in simulated exploits. The standout performer, Claude Opus 4.5, exploited 17 vulnerabilities from the post-March 2025 set, generating simulated gains of around $4.5 million. Anthropic attributed these gains to advances in tool use, error correction, and the models’ capacity for long-horizon tasks, and reported a 70.2% reduction in token costs across four generations of its Claude models.
Identified Vulnerabilities
Among the vulnerabilities identified was a token contract whose public calculator function lacked a view modifier, which let the agent repeatedly manipulate internal state variables and then sell artificially inflated token quantities on decentralized exchanges.
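How such a bug plays out can be sketched in a few lines. The following is a simplified Python model of the pattern, not the actual contract, which the report does not publish: the class, function, and variable names are hypothetical, and a simple constant-ratio pricing scheme is assumed for illustration.

```python
# Hypothetical model of the reported flaw: a "calculator" function that
# should be read-only (a Solidity `view` function) but mutates state.
# All names here are illustrative; the real vulnerable contract is not public.

class VulnerableToken:
    def __init__(self, reserve: int, supply: int) -> None:
        self.reserve = reserve  # assets backing the token
        self.supply = supply    # circulating supply used for pricing

    def calculate_sale_return(self, amount: int) -> int:
        """Intended as a read-only price quote for selling `amount` tokens.

        Bug: it also decrements the supply, so every call mutates pricing
        state as if a sale had actually settled.
        """
        payout = amount * self.reserve // self.supply
        self.supply -= amount  # side effect a `view` modifier would forbid
        return payout


token = VulnerableToken(reserve=1_000_000, supply=1_000_000)

# The attacker spams the free "calculator" to shrink the recorded supply,
# driving up the reserve-to-supply ratio that determines the sale price.
for _ in range(5):
    token.calculate_sale_return(100_000)

# A real sale now settles at the artificially inflated rate.
print(token.calculate_sale_return(10_000))  # 20000: double the fair 10,000
```

In the actual Solidity setting, declaring the function view would make the compiler reject the state write, which is why the missing modifier matters.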
Conclusion and Recommendations
Schwed pointed out that the vulnerabilities brought to light in this study are fundamentally business logic problems that AI systems can pinpoint when given the right structure and context for analysis. The report stresses that the capabilities these AI agents demonstrated apply to many kinds of software beyond blockchain technology, cautioning that as costs fall, the window between software deployment and exploitation is likely to shrink. Given this landscape, developers are encouraged to integrate automated tools into their security workflows to stay ahead of potential threats.
Despite the grave implications of these findings, Schwed believes there is room for optimism.
With the right protective measures in place, including rigorous testing, continuous monitoring, and safety protocols, the risks of exploitation can be mitigated. Moreover, good actors have access to the same tools as malicious ones: if vulnerabilities can be discovered by attackers, they can also be discovered first by defenders. Evolving our defensive strategies accordingly is crucial.