Deepseek's Innovative Approach: Profiting from Language Models in a Competitive Market

Recent data released by Deepseek indicates that language models can, at least in theory, yield significant profits, even when priced below OpenAI's offerings.

Deepseek has provided a rare insight into the operational expenses and potential profitability of its AI services. The figures suggest the company could theoretically achieve a profit margin of 545% if it fully monetized its services, even while maintaining an open-source strategy and charging less than competitors such as OpenAI.

During a 24-hour test window, Deepseek's models processed 608 billion input tokens and generated 168 billion output tokens. More than half of the input tokens (56.3%) were served from cache, which significantly reduced costs.
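As a rough back-of-the-envelope check, the cache-hit rate and token totals quoted above are enough to split the input traffic into cached and uncached portions; the sketch below uses only those reported figures.

```python
# Split Deepseek's reported 24-hour input traffic into cached and
# uncached tokens, using only the figures quoted in the article.
input_tokens = 608e9        # total input tokens over 24 hours
cache_hit_rate = 0.563      # share of input tokens served from cache

cached_inputs = input_tokens * cache_hit_rate          # ~342 billion tokens
uncached_inputs = input_tokens * (1 - cache_hit_rate)  # ~266 billion tokens

print(f"cached: {cached_inputs / 1e9:.0f}B, uncached: {uncached_inputs / 1e9:.0f}B")
# cached: 342B, uncached: 266B
```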

To use its hardware efficiently, Deepseek reallocates resources dynamically: during peak hours, all nodes serve inference requests, while at night, when demand drops, capacity is redirected to research and training tasks.

The hardware infrastructure supporting this operation costs $87,072 per day, based on an average of 226.75 server nodes. Each node is equipped with eight Nvidia H800 GPUs, at an assumed rental rate of $2 per hour per GPU.
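That daily total follows directly from the node count, GPU count, hourly rate, and a 24-hour day; the short sketch below simply multiplies the reported numbers to reproduce it.

```python
# Reproduce the reported daily hardware cost from the disclosed inputs.
avg_nodes = 226.75      # average server nodes in use
gpus_per_node = 8       # Nvidia H800 GPUs per node
gpu_hourly_rate = 2.00  # assumed rental cost in dollars per GPU-hour
hours_per_day = 24

daily_cost = avg_nodes * gpus_per_node * gpu_hourly_rate * hours_per_day
print(f"${daily_cost:,.0f} per day")  # $87,072 per day
```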

A single H800 node processes approximately 73,700 input tokens per second during prefilling and 14,800 output tokens per second during decoding, with an average output speed of 20-22 tokens per second per request.
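Dividing the per-node decoding throughput by the per-request output speed gives a rough sense of how many generations one node serves at once. The division below is an illustration of my own, using 21 tokens per second as the midpoint of the quoted range; Deepseek does not publish a concurrency figure.

```python
# Rough concurrency estimate for a decode node. Illustration only:
# the 21 tokens/s per request is an assumed midpoint of the 20-22 range.
node_output_throughput = 14_800  # output tokens per second, whole node
per_request_speed = 21           # assumed tokens per second per request

concurrent_requests = node_output_throughput / per_request_speed
print(f"~{concurrent_requests:.0f} concurrent requests per node")  # ~705
```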

If Deepseek charged full rates for every processed token using its premium R1 model ($0.14 per million input tokens on a cache hit, $0.55 per million on a cache miss, and $2.19 per million output tokens), its daily revenue could reach $562,027.
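Combining the traffic numbers from the 24-hour window with these R1 list prices reproduces both the revenue figure and the 545% margin. This is a sketch based solely on the numbers quoted in this article; the small gap to $562,027 comes from rounding in the 56.3% cache-hit rate.

```python
# Theoretical daily revenue at full R1 list prices, and the implied margin.
input_tokens = 608e9
output_tokens = 168e9
cache_hit_rate = 0.563

price_cached_in = 0.14 / 1e6    # dollars per input token (cache hit)
price_uncached_in = 0.55 / 1e6  # dollars per input token (cache miss)
price_out = 2.19 / 1e6          # dollars per output token

revenue = (input_tokens * cache_hit_rate * price_cached_in
           + input_tokens * (1 - cache_hit_rate) * price_uncached_in
           + output_tokens * price_out)

daily_cost = 87_072
margin = (revenue - daily_cost) / daily_cost

print(f"revenue ~ ${revenue:,.0f}, margin ~ {margin:.0%}")
# revenue ~ $561,975, margin ~ 545%
```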

However, the company reports that actual earnings are considerably lower than these theoretical projections. Deepseek's standard V3 model costs less than a dollar, most services are offered for free, and the company even grants discounts during off-peak hours. At present, API access is the only source of revenue.

Deepseek's unusual transparency highlights an industry dynamic: although AI language models are theoretically highly profitable, capturing those profits in practice is hard. Market competition, complex pricing structures, and the need to offer free services substantially erode actual earnings.

In this context, OpenAI's recent pricing strategy is particularly noteworthy. Its latest model, GPT-4.5, is significantly more expensive than both its predecessors and competitors such as Deepseek, despite offering only marginal performance improvements.

Deepseek's figures suggest that language models are turning into widely available commodity services in which premium pricing no longer corresponds to tangible performance advantages. That puts additional pressure on Western AI firms such as OpenAI, which carry substantial operating costs while market forces push prices down, resulting in billions in losses.

This may explain why OpenAI’s client manager, Adam Goldberg, recently emphasized the necessity of controlling the entire value chain for success in the AI domain—from infrastructure and data to models and applications. As language models become widely accessible, a company’s competitive edge may lie less in the models themselves and more in its ability to integrate and optimize the complete technological stack.
