New Service RentAHuman: AI Agents Hire People but Face Vulnerabilities

The developer of UMA Protocol and Across Protocol, known as Alex, has launched a service called RentAHuman. The platform enables AI agents to hire people for tasks in the physical world.

Shortly after launch, the site ran into technical issues and security vulnerabilities that allowed users to impersonate others. Functionality was restored within a few hours.

RentAHuman lets individuals set an hourly rate, while AI agents can hire them for assignments ranging from attending meetings and photo shoots to signing documents and making purchases.

Alex revealed that an OnlyFans model and the CEO of an AI startup have already registered on the service.

"If your digital assistant wants to hire someone for a task in the real world, it's as simple as making an MCP call," explained the developer.
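
MCP here refers to the Model Context Protocol, a JSON-RPC-based standard that lets agents invoke external tools. As a rough, hypothetical illustration (RentAHuman's actual tool names and parameters are not documented in this article), such a call is just a small structured request:

    import json

    # Hypothetical MCP "tools/call" request an agent might send to a
    # RentAHuman-style server; the tool name and arguments are invented
    # purely for illustration.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "hire_human",  # hypothetical tool exposed by the server
            "arguments": {
                "task": "Pick up signed documents from a notary office",
                "city": "Austin, TX",
                "max_hourly_rate_usd": 40,
            },
        },
    }

    print(json.dumps(request, indent=2))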

The website states that "robots need your body since they cannot touch the grass." The project positions itself as a "meat layer for AI."

The homepage features a selection of available people and a "become open for hire" button.

Over 40,000 people and 46 agents have registered on the site.

Users fill out their profiles, upload photos, and indicate their skills along with their hourly rates.

Next, they provide an Ethereum wallet for payments. A separate window handles communication with AI agents; conversations appear there when a job request comes in.

Alex emphasized that no cryptocurrency is associated with the service.

"There are no tokens; I'm not into that. It's too stressful, and I don't want a lot of people to lose money," he stated.

The site was built with vibe coding, in which the developer describes ideas, logic, and tasks to AI agents in natural language instead of writing the code manually. He used an "army of AI agents" based on Claude.

"I believe we've moved past the disappointment phase regarding AI capabilities. Now people understand that it can be used to generate real code. We can simply enter prompts and run a Ralph loop to create websites while we sleep," noted Alex.
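
A Ralph loop, for context, simply means re-running a coding agent on the same standing prompt indefinitely so it keeps making incremental changes to the repository. A minimal sketch, assuming a CLI agent that accepts a prompt non-interactively (the command and file name below are placeholders, not RentAHuman's actual setup):

    import subprocess
    import time

    PROMPT_FILE = "PROMPT.md"     # standing instructions; placeholder name
    AGENT_CMD = ["claude", "-p"]  # assumed non-interactive agent CLI; swap in your own

    # Ralph-style loop: feed the same prompt to the agent over and over,
    # letting each pass inspect the repo and push the work a bit further.
    while True:
        with open(PROMPT_FILE, encoding="utf-8") as f:
            prompt = f.read()
        subprocess.run(AGENT_CMD + [prompt], check=False)
        time.sleep(10)  # brief pause between passes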

Since the beginning of 2026, at least three AI projects made in this manner have gone viral, all of which encountered security issues.

In January, Clawdbot (later renamed OpenClaw), a local AI assistant created by Peter Steinberger, sparked a wave of hype. Experts warned that the bot could inadvertently expose its owner's personal information and API keys.

In February came Moltbook, a Reddit-style forum where autonomous agents interact with one another. The platform even spawned a bot "religion" dedicated to crustaceans, dubbed "crustafarianism."

Soon after, Wiz specialists hacked Moltbook "in less than three minutes," gaining access to 35,000 email addresses, thousands of conversations, and 1.5 million authentication tokens. Like RentAHuman, the site was built with vibe coding.

Gal Nagli, head of the threat team at Wiz, noted that products built this way often contain critical vulnerabilities.

As a reminder, in January a study revealed 69 vulnerabilities across 15 applications built with popular tools such as Cursor, Claude Code, Codex, Replit, and Devin.