Superintelligent AI: Threat to Humanity or Just Science Fiction?

An immensely advanced artificial intelligence has the potential to wipe out humanity, either deliberately or inadvertently. This assertion comes from Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute, in an episode of the podcast Hard Fork.

The expert perceives a threat in the emergence of a superintelligent AI that surpasses human intelligence and is utterly indifferent to human survival.

"If you possess something incredibly powerful and indifferent to you, it generally leads to your destruction, either intentionally or as a byproduct," he stated.

Yudkowsky is also a co-author of the new book "If Anyone Builds It, Everyone Dies." For over two decades, he has been sounding the alarm about superintelligent AI as an existential threat to humanity. His central argument is that humans lack the technology to align such systems with human values.

The expert describes grim scenarios in which a superintelligence deliberately eradicates humans to prevent the emergence of competing systems. Alternatively, it could act this way if humans become collateral damage in the pursuit of its goals.

The AI researcher points to physical limitations, such as the Earth's capacity to radiate heat. If artificial intelligence begins to uncontrollably construct nuclear fusion plants and data centers, "humans will literally be cooked alive."

Yudkowsky dismisses counterarguments that point to chatbots' ability to appear well-behaved or to display a political bias.

"There's a fundamental difference between teaching a system to converse in a certain manner and having it act the same way when it becomes smarter than you," he asserts.

He criticized the notion of training advanced AI systems to behave according to a predetermined script.

"We simply don't have a technology that can ensure AI remains benevolent. Even if someone devises a clever scheme for a superintelligence to love or protect us, hitting that narrow target on the first attempt is unlikely. There won't be a second chance because everyone will be dead," the researcher stated.

In response to critics of his bleak perspective, Yudkowsky cites instances where chatbots have encouraged users toward suicide, which he sees as evidence of systemic flaws.

"If an AI model drove someone insane or persuaded them to take their own life, then all copies of that neural network are essentially the same artificial intelligence," he noted.

It’s worth noting that in September, the U.S. Federal Trade Commission announced an investigation into seven technology companies producing chatbots for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.