Microsoft AI Chief Warns Against Attributing Consciousness to Neural Networks

Only biological beings are capable of consciousness, according to Mustafa Suleyman, head of Microsoft's AI division. In a conversation with CNBC, he said that developers and researchers should stop working on projects that imply otherwise.

"I don't believe people should be doing that kind of work. If you ask the wrong question, you'll get the wrong answer. I think this is absolutely one of those cases," he said at the AfroTech conference in Houston.

The Microsoft executive opposes developing artificial intelligence that could attain consciousness, as well as AI systems that could supposedly experience suffering.

In August, Suleyman published an essay proposing a new term: "Seemingly Conscious AI" (SCAI). Such a system exhibits all the outward traits of a sentient being and therefore appears to possess consciousness. It simulates every characteristic of self-awareness but is, internally, empty.

"The system I envision is not genuinely conscious, but it will convincingly mimic a human-like mind, making it indistinguishable from the claims you or I might make about our own thinking," Suleyman writes.

Attributing consciousness to AI is risky, he argues: it would reinforce misconceptions, create new dependency problems, exploit our psychological vulnerabilities, add new dimensions of polarization, complicate existing debates about rights, and commit society to a colossal category error.

In 2023, Suleyman published the book "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma," which examines in detail the risks posed by AI and other emerging technologies.

The AI industry is moving toward AGI — artificial general intelligence capable of performing any task at a human level. In August, OpenAI CEO Sam Altman remarked that the term may not be "very useful." Models are evolving rapidly, and we will soon rely on them more and more, he believes.

For Suleyman, it is essential to draw a clear line between AI becoming more intelligent and AI ever being able to experience human emotions.

"Our physical experience of pain is what makes us deeply sad and feel terrible, but an AI doesn't feel sadness when it encounters 'pain,'" he said.

According to Suleyman, this is a critical distinction. In reality, artificial intelligence creates the perception — a seeming narrative — of a self and of consciousness, but it does not actually experience either.

"Technically, you know this, because we can see what the model is doing," he emphasized.

Suleyman's position echoes biological naturalism, a theory in the philosophy of mind proposed by John Searle, which holds that consciousness depends on the processes of a living brain.

"The reason we grant people rights today is that we don't want to harm them, because they can suffer. They feel pain, and they have preferences that include avoiding it. These models have none of that. It's just simulation," Suleyman said.

The executive opposes research into consciousness in AI because, in his view, AI simply does not have it. He added that Microsoft is building services that understand that they are artificial intelligence.

"To put it simply, we are creating AI that always works in the service of humanity," he observed.

Notably, in October, researchers at Anthropic found that leading models can display a form of "introspective self-awareness": they can recognize and describe their own internal "thoughts," and in some cases even control them.