OpenAI Tightens Security Measures Amid Threat from Rival DeepSeek

OpenAI has overhauled its security protocols to protect its intellectual property from corporate espionage, amid concerns over theft by Chinese competitors, the Financial Times reported, citing insider sources.

In recent months, the company has implemented stricter measures for managing confidential information and enhanced employee screening processes. This initiative gained momentum following the release of a competing model by the Chinese AI startup DeepSeek.

OpenAI claims that the Chinese firm improperly replicated its technology using a technique known as "distillation," in which one neural network is trained on the responses of another LLM.
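To make the idea concrete, here is a minimal, hypothetical sketch of distillation: a small "student" model is fitted not to ground-truth labels but to the soft probability outputs of a "teacher" model (standing in for a large LLM). All names and the toy models here are illustrative assumptions, not OpenAI's or DeepSeek's actual systems.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "teacher": a fixed scorer standing in for a large model.
# It returns a soft probability, not a hard 0/1 label.
def teacher(x):
    return sigmoid(3.0 * x)

random.seed(0)
w = 0.0   # the student's single weight
lr = 0.5  # learning rate

# Distillation loop: query the teacher, then nudge the student
# toward the teacher's soft output via cross-entropy gradient descent.
for _ in range(2000):
    x = random.uniform(-2.0, 2.0)
    p = teacher(x)             # soft target from the teacher
    q = sigmoid(w * x)         # student's current prediction
    w -= lr * (q - p) * x      # d(cross-entropy)/dw for a sigmoid unit

print(round(w, 1))  # the student's weight converges toward the teacher's 3.0
```

Because the student here can represent the teacher exactly, the gradient vanishes at the optimum and the weight settles near 3.0; with a real LLM the student only approximates the teacher, but the principle, learning from another model's outputs rather than from original training data, is the same.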

The Financial Times noted that this incident prompted Sam Altman's startup to adopt "much stricter" measures. The company is "aggressively" expanding its workforce, particularly its cybersecurity teams.

These stringent policies were first introduced in OpenAI’s San Francisco offices last summer to limit employee access to critical information.

The company now keeps a significant portion of its proprietary technology in isolated environments: computers that are kept offline and disconnected from any other networks. Additionally, biometric checks are now in place at OpenAI's offices, granting employees access to certain areas only after a fingerprint scan.

OpenAI has also bolstered the physical security at its data centers. It has joined other firms in Silicon Valley that have tightened employee and candidate vetting in response to heightened espionage threats from China.

It is worth noting that in June 2024, retired U.S. Army General Paul Nakasone joined OpenAI's board of directors and its Safety and Security Committee.