Rude, but Accurate: How Directness Increases AI Response Efficiency

A recent study from the University of Pennsylvania revealed that large language models (LLMs) tend to generate more accurate responses when prompted in a blunt manner rather than politely.

The published paper states that straightforward prompts yielded correct answers 84.8% of the time, compared with softer phrasing, which achieved 80.8% accuracy.

Researchers reworded 50 fundamental questions across mathematics, science, and history in five different tones, ranging from "very polite" to "very rude." They then asked ChatGPT-4 to answer each prompt.
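The protocol described above can be sketched in a few lines of code. Everything here is illustrative: the tone templates, the `ask_model` stub, and the sample question are assumptions standing in for the authors' actual materials and a real ChatGPT-4 API call.

```python
# Hypothetical sketch of the evaluation loop: each base question is
# rephrased in five tones, every variant is sent to the model, and
# accuracy is aggregated per tone.

TONE_TEMPLATES = {
    "very polite": "Would you kindly be so good as to answer: {q}",
    "polite":      "Could you please answer: {q}",
    "neutral":     "{q}",
    "rude":        "Just answer this: {q}",
    "very rude":   "Answer this now, no excuses: {q}",
}

def ask_model(prompt: str) -> str:
    """Stub standing in for a ChatGPT-4 call; swap in a real API client."""
    # Trivial deterministic stand-in so the sketch runs end to end.
    return "4" if "2 + 2" in prompt else ""

def accuracy_by_tone(questions: list[tuple[str, str]]) -> dict[str, float]:
    """Fraction of correct answers per tone across all questions."""
    results = {}
    for tone, template in TONE_TEMPLATES.items():
        correct = sum(
            ask_model(template.format(q=q)).strip() == answer
            for q, answer in questions
        )
        results[tone] = correct / len(questions)
    return results

scores = accuracy_by_tone([("What is 2 + 2?", "4")])
print(scores)
```

With a real model behind `ask_model`, the per-tone accuracies in `scores` would be the quantities the study compares (e.g. 84.8% for blunt phrasing versus 80.8% for polite phrasing).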

The findings contradicted prior conclusions suggesting that models allegedly "reward" tactfulness.

"Contrary to expectations, impolite prompts consistently outperformed polite ones. This may indicate that newer language models respond differently to the tone of the prompt," wrote authors Om Dobariya and Akhil Kumar.

In a 2024 study titled "Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance," researchers concluded that rude requests often degrade answer quality, while excessive politeness offers no significant advantages.

These new insights indicate that contemporary models have moved away from behaving like "social mirrors" and now function more as strictly utilitarian machines that value directness over politeness.

The work aligns with recent findings from the Wharton School on how prompts should be formulated to obtain more precise responses: tone turns out to matter as much as word choice.

It is worth recalling that in May, a study from George Washington University demonstrated that being polite to AI models is a waste of computational resources.