GPT-4.5: A Milestone in the Ongoing Journey of AI Scalability, Says OpenAI's Chief Research Officer

OpenAI has unveiled its largest language model to date, GPT-4.5. According to Mark Chen, OpenAI's Chief Research Officer, the release shows that the scaling of AI models has not yet reached its limits.

On Thursday, OpenAI announced its latest language model, GPT-4.5, describing it as the largest and most capable chat model it has created so far. The company plans to roll it out initially to Pro users, with availability for Plus, Enterprise, Team, and Edu users in the coming weeks.

For Chen, GPT-4.5 answers critics who doubt that research labs can keep advancing by building larger models. "GPT-4.5 is indeed proof that we can persist with the scaling paradigm," Chen said in an interview. "This represents a leap to the next order of magnitude."

When asked why the new model was not labeled GPT-5, Chen explained that OpenAI aims to adhere to recognizable naming patterns. Because scaling is predictable, as in the transition from GPT-3 to GPT-3.5, the company can anticipate what improvements larger computational resources and increased efficiency will yield. The new model matches the expectations set for a GPT-4.5.

Chen emphasized that OpenAI can now scale along two different dimensions. "GPT-4.5 is our latest experiment in scaling along the axis of unsupervised learning, but we also have reasoning abilities," he said.

He attributed the longer development time between GPT-4 and GPT-4.5 to the company's strong focus on developing reasoning paradigms. The two approaches work in tandem: "You need knowledge to build reasoning on top. A model cannot blindly learn reasoning from scratch," Chen noted. The paradigms reinforce one another and create feedback loops.

Chen remarked that GPT-4.5 is "intelligent" in a different way than reasoning models, possessing a significantly larger amount of world knowledge. In OpenAI's comparisons, testers preferred the new model over GPT-4 for everyday applications about 60% of the time; for professional and intellectual tasks, that figure rose to nearly 70%.

Regarding potential limits to scaling, Chen was clear: "We are observing similar results. GPT-4.5 is the next step in this unsupervised learning paradigm." He explained that OpenAI is very rigorous in its approach, using predictions based on all previously trained models to gauge expected performance.

Beyond traditional benchmarks, where GPT-4.5 shows improvements comparable to the jump from GPT-3.5 to GPT-4 according to OpenAI's data, Chen indicated that the model also possesses new capabilities. He mentioned its ability to create ASCII art, a task that previous models struggled with.

Chen also dismissed claims that developing GPT-4.5 was particularly challenging. "The development of all our foundational models is experimental. It often involves pauses at certain points, analyzing what's happening, and rerunning tests," he clarified. While this was not specific to GPT-4.5, OpenAI has employed similar methods with GPT-4 and the o-series models.

However, it is noteworthy that a significantly smaller model, Claude 3.7 Sonnet, outperforms GPT-4.5 in many areas, and GPT-4.5's knowledge cutoff of October 2023 already makes it appear somewhat dated. One reason for this gap may lie in the training data, as there have been substantial advancements in synthetic data since 2023.

[Source](https://the-decoder.com/gpt-4-5-is-proof-that-we-can-continue-the-scaling-paradigm-says-openais-chief-research-officer/)