Security Risks of AI Crypto Payments: Experts Warn of Potential Threats

The use of AI assistants in cryptocurrency trading and payments introduces new security risks for users' wallets, experts told Cointelegraph.

On October 22, Coinbase, the largest American Bitcoin exchange, unveiled its Payments MCP tool. Working with large language models such as Claude, Gemini, and Codex, the solution gives AI assistants access to their owners' cryptocurrency wallets and lets them execute payments autonomously through the x402 protocol.

“This signifies a new phase in agentic commerce, where AI assistants will be able to operate within the global economy,” the platform team stated.

Aaron Ratcliff, head of attribution at the blockchain analytics company Merkle Science, emphasized that such capabilities demand a heightened level of trust in artificial intelligence in an industry that was not built around trusting third parties.

Properly configured processes can mitigate these threats, but responsibility ultimately rests with the wallet owner, the expert warned.

“Safe usage depends on whether users understand how to issue requests and how well the AI extracts data from the blockchain without hallucinating. It is also tied to the security of account information—a breach there inevitably means harm,” Ratcliff explained.

He added that using AI to manage cryptocurrency portfolios introduces vulnerabilities that malicious actors could exploit, for example by compromising the system through harmful prompts or injected instructions.

Another potential threat is a man-in-the-middle attack, in which a hacker intercepts the communication channel and manipulates the messages exchanged between the parties.

“AI may also engage with fraudulent tokens, fall into traps or deceptive maneuvers, or handle slippage poorly, simply burning users’ funds,” Ratcliff continued.

According to him, secure use of artificial intelligence requires solid evidence that assistants can identify attacks, flag fraudulent tokens, verify contracts, reject poisoned requests, and block other malicious activity.

Another risk the expert pointed out is the possibility of AI interacting with sanctioned addresses and platforms.

Sean Ren, co-founder of the Sahara AI platform, stated that the Coinbase AI tool employs protocols that, “when correctly configured, are the gold standard for security.” Even so, it remains essential to monitor what the AI assistant is doing.

“Users still need to remain vigilant, double-check what they approve, and never assume that the agent is doing everything right automatically. You will still need to verify and sign transactions,” he emphasized.

Brian Huang, co-founder and CEO of Glider, expressed confidence that routing basic functions such as sending or swapping assets through AI agents is only the starting point of this trend. Some of these actions are easier and faster for users to perform themselves, so assistants could be put to better use elsewhere.

“We all know that DeFi protocols are quite complex to engage with. Agents can help users navigate them and act as guides throughout the process,” he said by way of example.

Huang predicts that AI assistants will soon handle more complex tasks such as portfolio management, rebalancing, and personalized financial advice—a more effective use of the technology.

“The customization that agents can offer, along with the number of variables they can consider, far exceeds the capabilities of any human,” he stressed.

As a reminder, following a nine-day cryptocurrency trading competition among AI models, China's DeepSeek turned an initial $10,000 into $22,031, while GPT-5 and DeepMind's Gemini were sitting on interim losses of around 60%.