OpenAI releases GPT-4.5 claiming 10X efficiency over GPT-4, but says it’s ‘not a frontier model’




It’s here: OpenAI has announced the release of GPT-4.5, a research preview of its latest and most powerful large language model (LLM) for chat applications. Unfortunately, it’s also far and away OpenAI’s most expensive model (more on that below).

It’s also not a “reasoning model,” the newer class of models from OpenAI, DeepSeek, Anthropic and many others that produce “chains of thought”: stream-of-consciousness-like blocks of text in which the model reflects on its own assumptions and conclusions to catch errors before serving a response to the user. GPT-4.5 is still more of a classical LLM.

Nonetheless, according to OpenAI co-founder and CEO Sam Altman’s post on the social network X, GPT-4.5 is: “the first model that feels like talking to a thoughtful person to me. i have had several moments where i’ve sat back in my chair and been astonished at getting actually good advice from an AI.”

However, he cautioned that the company is bumping up against the upper end of its supply of graphics processing units (GPUs) and has had to limit access as a result:

bad news: it is a giant, expensive model. we really wanted to launch it to plus and pro at the same time, but we’ve been growing a lot and are out of GPUs. we will add tens of thousands of GPUs next week and roll it out to the plus tier then. (hundreds of thousands coming soon, and i’m pretty sure y’all will use every one we can rack up.)

this isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.

Starting today, GPT-4.5 is available to subscribers of OpenAI’s most expensive subscription tier, ChatGPT Pro ($200 USD/month), and developers across all paid API tiers, with plans to expand access to the far less costly Plus and Team tiers ($20/$30 monthly) next week.

GPT‑4.5 is able to access search and OpenAI’s ChatGPT Canvas mode, and users can upload files and images to it, but it doesn’t have other multimodal features like Voice Mode, video, and screensharing — yet.

OpenAI is hosting a livestream event today at 12 pm PT/ 3 pm ET, where OpenAI researchers will discuss the model’s development and capabilities.

Advancing AI with unsupervised learning

GPT-4.5 represents a step forward in AI training, particularly in unsupervised learning, which enhances the model’s ability to recognize patterns, draw connections, and generate creative insights.

During the livestream, OpenAI researchers noted how it was trained on data generated by smaller AI models and that this improved its “world model.” They also stated it was pre-trained across multiple data centers concurrently, suggesting a decentralized approach similar to that of rival lab Nous Research.

This training regimen apparently helped GPT-4.5 learn to produce more natural and intuitive interactions, follow user intent more accurately, and demonstrate greater emotional intelligence.

The model builds on OpenAI’s previous work in AI scaling, reinforcing the idea that increasing data and compute power leads to better AI performance.

Compared to its predecessors, GPT-4.5 is expected to produce fewer hallucinations, making it more reliable across a broad range of topics.

What makes GPT-4.5 stand out?

According to OpenAI, GPT-4.5 is designed to create warm, intuitive, and naturally flowing conversations. It has a stronger grasp of nuance and context, enabling more human-like interactions and a greater ability to collaborate effectively with users.

The model’s expanded knowledge base and improved ability to interpret subtle cues allow it to excel in various applications, including:

Writing assistance: Refining content, improving clarity, and generating creative ideas.

Programming support: Debugging, suggesting code improvements, and automating workflows.

Problem-solving: Providing detailed explanations and assisting in practical decision-making.

GPT-4.5 also incorporates new alignment techniques that enhance its ability to understand human preferences and intent, further improving user experience.

How to access GPT-4.5

Starting today, ChatGPT Pro users can select GPT-4.5 in the model picker on web, mobile, and desktop. Next week, OpenAI will begin rolling it out to Plus and Team users.

For developers, GPT-4.5 is being made available through OpenAI’s API, including the Chat Completions API, Assistants API, and Batch API. It supports key features like function calling, structured outputs, streaming, system messages, and image inputs, making it a versatile tool for various AI-driven applications. However, it currently does not support multimodal capabilities such as voice mode, video, or screen sharing.
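As a rough illustration, here is what a Chat Completions request body for the new model might look like. This is a sketch, not official sample code: the model identifier `gpt-4.5-preview` is an assumption based on OpenAI’s usual preview naming, and no network call is made — the snippet only assembles the JSON payload that would be POSTed to the API.

```python
import json

def build_chat_request(user_text: str, stream: bool = True) -> dict:
    """Assemble a Chat Completions payload using features the article
    says GPT-4.5 supports: system messages and streaming.
    The model id below is assumed, not confirmed by OpenAI docs."""
    return {
        "model": "gpt-4.5-preview",  # assumed identifier
        "stream": stream,            # streaming is supported per the announcement
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": user_text},
        ],
    }

payload = build_chat_request("Summarize our Q3 support tickets.")
print(json.dumps(payload, indent=2))
```

Function calling, structured outputs, and image inputs would be added to the same payload via their usual request fields; consult OpenAI’s API reference for the exact parameters.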

Pricing and implications for enterprise decision-makers

Enterprises and team leaders stand to benefit significantly from the capabilities introduced with GPT-4.5. With its enhanced reliability and natural conversational abilities, GPT-4.5 can support a wide range of business functions:

Improved Customer Engagement: Businesses can integrate GPT-4.5 into support systems for faster, more natural interactions.

Enhanced Content Generation: Marketing and communications teams can produce high-quality, on-brand content efficiently.

Streamlined Operations: AI-powered automation can assist in debugging, workflow optimization, and strategic decision-making.

Scalability and Customization: The API allows for tailored implementations, enabling enterprises to build AI-driven solutions suited to their needs.

At the same time, the pricing for GPT-4.5 through OpenAI’s API for third-party developers looking to build applications on the model appears shockingly high, at $75/$180 per million input/output tokens compared to $2.50/$10 for GPT-4o.
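To put those list prices in concrete terms, here is a back-of-the-envelope cost comparison in Python, using the per-million-token rates quoted above (actual billing may differ, and rates can change):

```python
# API list prices in USD per 1M tokens, as quoted in this article:
# GPT-4.5 at $75 input / $180 output, GPT-4o at $2.50 input / $10 output.
PRICES = {
    "gpt-4.5": (75.00, 180.00),
    "gpt-4o": (2.50, 10.00),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one workload at list price."""
    price_in, price_out = PRICES[model]
    return price_in * input_tokens / 1e6 + price_out * output_tokens / 1e6

# Example workload: 1M input tokens and 250k output tokens.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 1_000_000, 250_000):.2f}")
# gpt-4.5: $120.00
# gpt-4o: $5.00
```

For that example workload, GPT-4.5 costs 24 times as much as GPT-4o, which frames the value question enterprises will have to answer.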

And with a wave of rival models released recently, from Anthropic’s Claude 3.7 Sonnet to Google’s Gemini 2.0 Pro to OpenAI’s own reasoning “o” series (o1, o3-mini high, o3), the question becomes whether GPT-4.5’s value is worth its relatively high cost, especially through the API.

Early reactions from fellow AI researchers and power users vary widely

The release of GPT-4.5 has sparked mixed reactions from AI researchers and tech enthusiasts on the social network X, particularly after a version of the model’s “System Card” (a technical document outlining its training and evaluations) leaked earlier (included below at the bottom of this article), revealing a variety of benchmark results ahead of the official announcement.

Teknium (@Teknium1), the pseudonymous co-founder of rival AI model provider Nous Research, expressed disappointment in the new model, pointing out minimal improvements in MMLU (massive multitask language understanding) scores and real-world coding benchmarks compared to other leading LLMs.

“It’s been 2+ years and 1000s of times more capital has been deployed since GPT-4… what happened?” he asked.

Others noted that GPT-4.5 underperformed relative to OpenAI’s o3-mini model in software engineering benchmarks, raising questions about whether this release represents significant progress.

However, some users defended the model’s potential beyond raw benchmarks.

Software developer Haider (@slow_developer) highlighted GPT-4.5’s 10x computational efficiency improvement over GPT-4 and its stronger general-purpose capabilities compared to OpenAI’s STEM-focused o-series models.

AI news poster Andrew Curran (@AndrewCurran_) took a more qualitative view, predicting that GPT-4.5 would set new standards in writing and creative thought, calling it OpenAI’s “Opus.”

These discussions underscore a broader debate in AI: Should progress be measured purely in benchmarks, or do qualitative improvements in reasoning, creativity, and human-like interactions hold greater value?

Still in research preview

OpenAI is positioning GPT-4.5 as a research preview to gain deeper insights into its strengths and limitations. The company remains committed to understanding how users interact with the model and identifying unexpected use cases.

“We’re sharing GPT-4.5 as a research preview to better understand its strengths and limitations,” OpenAI stated. “Scaling unsupervised learning continues to drive AI progress, improving accuracy, fluency, and reliability.”

As OpenAI continues to refine its models, GPT-4.5 serves as a foundation for future AI advancements, particularly in reasoning and tool-using agents. While GPT-4.5 is already demonstrating impressive capabilities, OpenAI is actively evaluating its long-term role within its ecosystem.

With its broader knowledge base, improved emotional intelligence, and more natural conversational abilities, GPT-4.5 is set to offer significant improvements for users across various domains. OpenAI is keen to see how developers, businesses, and enterprises integrate the model into their workflows and applications.

As AI continues to evolve, GPT-4.5 marks another milestone in OpenAI’s pursuit of more capable, reliable, and user-aligned language models, promising new opportunities for innovation in the enterprise landscape.


