Pentagon to Implement AI Bot Grok Despite Recent Controversies

The AI bot Grok is being integrated into the Pentagon's network alongside Google's generative AI engine, U.S. Secretary of Defense Pete Hegseth announced, as reported by the New York Post.

"We will soon have leading global AI models in every unclassified and classified network within our department," the official stated.

Hegseth mentioned that the chatbot is set to begin its operations in the Department of Defense at the end of January. It will "provide all necessary data" from military IT systems, including information sourced from the intelligence division.

In his speech, the politician emphasized the need to optimize and accelerate technological innovation within the armed forces. According to him, the Pentagon possesses "battle-tested operational data obtained over two decades of military and intelligence operations."

"Artificial intelligence is only as good as the data it receives. We will ensure that data is available," Hegseth added.

The Secretary of Defense expressed his desire to see "responsible AI systems" in the Pentagon. He pledged to "cut through the overgrown bureaucratic forest and clear the debris, preferably with a chainsaw."

"We must ensure the dominance of American military AI so that no adversary can use the same technology to threaten our national security or our citizens," the Pentagon chief asserted.

This announcement comes just days after Grok found itself at the center of yet another controversy, this time related to the generation of sexual content.

Malaysia and Indonesia have blocked access to the chatbot. Regulators in the EU, the UK, Brazil, and India are demanding investigations into Grok’s role in the spread of deepfakes.

The British organization Internet Watch Foundation noted that its analysts discovered "criminal images" of children aged 11 to 13, allegedly created using the chatbot.

Previously, Grok has faced criticism for disseminating falsehoods and questionable information.

In December, the chatbot provided inaccurate information about a mass shooting at Bondi Beach in Australia. Asked about a video showing a bystander, Ahmed al-Ahmed, grappling with the shooter, the AI stated:

"It appears to be an old viral clip of a man climbing a palm tree in a parking lot, possibly to trim it. As a result, a branch fell onto a parked car. Searches across various sources yielded no confirmed information about the location, date, or injuries. It may be a staged event, authenticity not confirmed."

In July, users noticed that the neural network relied on Elon Musk’s opinions when formulating answers, including topics such as the Israel-Palestine conflict, abortion, and immigration legislation.

Observations suggest that the chatbot has been intentionally tuned to consider Musk’s political views when addressing contentious issues.

Earlier, the billionaire stated that his startup would rewrite "all human knowledge" to train a new version of Grok, as today's models contain "too much junk in any foundational model tuned on uncorrected data."

Subsequently, Grokipedia, an AI-based online encyclopedia "oriented toward truth," was launched.

In November, users also pointed out bias in Grok 4.1, as the new model significantly overstated Musk's abilities.