Innovative Threats in Cyberspace: From Fake Law Firms to Cyberattacks on AI Systems

We have compiled the most significant cybersecurity news from the past week.

In August, the FBI issued a warning about scammers posing as cryptocurrency law firms. Under the guise of fictitious services for recovering lost assets, these criminals were stealing funds and personal information from their clients.

The main target audience for these scammers consists of victims of cryptocurrency hacks who are attempting to reclaim their stolen funds.

Law enforcement agencies reported that the scammers employed a wide range of manipulative tactics, including exploiting victims' desperation and creating a false sense of security by posing as representatives of, or collaborators with, government agencies. The cybercriminals' actions also tarnished the reputations of the individuals and organizations whose names were misused.

When selecting assistants for cryptocurrency recovery, the FBI recommended paying attention to:

Press Gazette noted that at least six publications, including Wired and Business Insider, have removed content from their websites in recent months. According to media reports, the removed articles, all published under the byline Margot Blanchard, had been generated by AI.

In May, Wired published an article titled "They Fell in Love While Playing Minecraft. Then the Game Became Their Wedding Venue." The article mentioned Jessica Hu, a 34-year-old minister from Chicago known as a "digital officiant" on Twitch and Discord. However, media outlets were unable to verify her existence, and several weeks later, Wired retracted the piece, citing a violation of editorial standards.

According to Press Gazette, in April, Business Insider published two essays by Blanchard. However, the publication removed them last week.

On August 21, Wired management acknowledged their oversight:

“If anyone should be able to recognize AI fraudsters, it’s Wired. And in fact, we usually do… Unfortunately, one slipped through the cracks.”

The publication explained that on April 7, one of their editors received a pitch from someone named Margot Blanchard about the "growing popularity of hyper-niche internet weddings." The email exhibited "all the signs of a great story for Wired." After the customary back-and-forth regarding the assignment and payment, the editor commissioned the article, which was published on May 7.

According to Wired, a few days later the editorial team realized the author could not provide sufficient information about herself, and that she insisted on payment via PayPal or check.

Upon further investigation, it was discovered that the story was fabricated.

“We made mistakes: the article didn’t undergo proper fact-checking and wasn’t edited by a senior editor… We responded quickly when we discovered the deception and took steps to prevent it from happening again. In this new era, every editorial team must be prepared for such occurrences,” noted Wired’s editorial team.

Press Gazette reported that the first sign of something awry came from Dispatch magazine's editor, Jacob Furedi. He revealed that he had received a pitch from Blanchard for a piece about "Gravemont, a defunct mining town in rural Colorado that has been repurposed into one of the most secretive death investigation training centers." He asked the supposed freelancer to provide supporting records from government registries, a request she ignored.

Meta is analyzing and storing images from users' devices. According to ZDNET, some Facebook users discovered two enabled options in the Meta app's settings that grant the company access to their photo galleries. The access is used to let AI offer "personalized creative ideas," such as travel collages.

Reports indicate that these AI feature options, called "photo gallery usage suggestions," were enabled for users who said they had never given consent.

If a user clicks «allow,» they consent to Meta’s terms for AI use and face analysis. Then, Facebook utilizes images from the gallery (including creation dates and the presence of people or objects) to offer collages, themed albums, summary posts, or AI-enhanced versions of photographs.

Researchers from Trail of Bits developed a new type of attack to steal user data. This method relies on embedding malicious commands into images processed by AI systems before entering a large language model.

The technique uses full-size images carrying "invisible" instructions that become visible only after the image passes through a resizing algorithm. When uploaded to an AI system, such images are automatically downscaled to improve performance and conserve resources.

Depending on the system, the image may be resampled using nearest-neighbor, bilinear, or bicubic interpolation, each of which distorts the source pixels differently.

In Trail of Bits’ example with bicubic downscaling, the dark areas of the malicious image turn red, while hidden text appears in black.

From the user’s perspective, nothing unusual happens, but in reality, the model executes the concealed instructions that could lead to data leaks or other risky actions.
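The core trick can be illustrated with a minimal, hypothetical sketch. The example below reduces the problem to a single row of pixels and a hand-rolled nearest-neighbor downscaler (real attacks, like the Trail of Bits work against bicubic resizing, are far more sophisticated): payload values are written only at the sample points the downscaler will read, so they are a tiny fraction of the full-resolution image yet completely dominate the downscaled result. All names here (`embed`, `nearest_neighbor_downscale`, the constants) are illustrative, not from the research.

```python
# Simplified 1-D sketch of the image-scaling attack principle.
# Assumption: the target pipeline downscales with nearest-neighbor sampling,
# reading one pixel from the center of each block of SCALE pixels.

SCALE = 8                            # downscale factor
COVER = 200                          # light "background" pixel value
PAYLOAD = [0, 1, 2, 3, 4, 5, 6, 7]  # hidden low-resolution message (dark pixels)

def embed(payload, scale, cover):
    """Build a high-res row where only the sampled positions carry the payload."""
    row = [cover] * (len(payload) * scale)
    for i, value in enumerate(payload):
        # place each payload pixel exactly where the downscaler will look
        row[i * scale + scale // 2] = value
    return row

def nearest_neighbor_downscale(row, scale):
    """Mimic a nearest-neighbor resize: keep one center sample per block."""
    return [row[i * scale + scale // 2] for i in range(len(row) // scale)]

high_res = embed(PAYLOAD, SCALE, COVER)
low_res = nearest_neighbor_downscale(high_res, SCALE)

print(low_res)                 # the hidden message reappears intact: [0, 1, 2, ..., 7]
print(high_res.count(COVER))   # 56 of 64 full-res pixels are untouched cover pixels
```

Viewed at full resolution, only 8 of 64 pixels differ from the cover, so the tampering is easy to miss; after downscaling, every surviving pixel is attacker-controlled. In a real 2-D image the same idea hides rendered instruction text rather than raw values.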

The researchers confirmed that their method is applicable to:

On August 25, Google announced that Android will soon stop allowing the installation of apps from unverified developers. The new security system is intended to block malicious applications downloaded from third-party sources.

“While the threat is more related to third-party sources, the requirement for developer verification will now apply to both apps from Google Play and those in third-party stores,” the team added.

Early access to verification will open in October, and by March 2026 the system will be available to all Android app developers. In September 2026, the mandatory identity verification requirement will come into effect in Brazil, Indonesia, Singapore, and Thailand, and by 2027 it will be enforced worldwide.