OpenAI just dropped GPT-5.4 Thinking and Pro. From a 1M token context window to native computer control — here is why the AI landscape just shifted forever.

OpenAI isn't slowing down. Just days after the GPT-5.3 Instant release, the company has officially unveiled its most sophisticated reasoning engine to date: GPT-5.4 Thinking and Pro.
This isn't just an incremental update; it’s a full-scale replacement of previous iterations, designed to turn the LLM from a "chatbot" into a "reasoning agent."
The most staggering technical spec of GPT-5.4 is its 1-million-token context window. To put that in perspective: you can now feed the model an entire multi-year codebase or a 2,000-page archive in a single query.
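To get a feel for what "an entire multi-year codebase in a single query" means in practice, here is a minimal sketch that estimates whether a repository fits in a 1-million-token budget. The 4-characters-per-token heuristic is a rough rule of thumb, not OpenAI's tokenizer; for real numbers you would use an actual tokenizer library.

```python
import os

CHARS_PER_TOKEN = 4          # rough heuristic for English text and code
CONTEXT_WINDOW = 1_000_000   # the advertised GPT-5.4 window

def estimate_tokens(text: str) -> int:
    """Cheap token estimate; swap in a real tokenizer for accuracy."""
    return len(text) // CHARS_PER_TOKEN

def codebase_fits(root: str, extensions=(".py", ".md")) -> tuple[int, bool]:
    """Walk a repo, sum estimated tokens of matching files, check the budget."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total += estimate_tokens(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total, total <= CONTEXT_WINDOW
```

By this estimate, a million tokens is roughly 4 MB of source text, which is why a full multi-year codebase or a 2,000-page archive is plausible in one shot.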
It’s like having a researcher who has read your entire company’s history and remembers every single footnote without losing the "thread of logic."
GPT-5.4 is OpenAI’s first "base model" featuring Native Computer Control. This allows AI agents to interact with software in a closed loop—they can create, run, test, and self-correct code directly on a machine.
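The create-run-test-self-correct loop described above can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's implementation: `generate` stands in for a model call, and failed runs feed their error output back as the correction signal.

```python
import os
import subprocess
import sys
import tempfile

def run_candidate(code: str) -> tuple[bool, str]:
    """Execute candidate code in a subprocess; report success plus stderr."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=10)
        return proc.returncode == 0, proc.stderr
    finally:
        os.unlink(path)

def agent_loop(generate, max_attempts: int = 3):
    """Closed loop: generate -> run -> on failure, feed the error back."""
    feedback = ""
    for _ in range(max_attempts):
        code = generate(feedback)   # stand-in for a GPT-5.4 call
        ok, err = run_candidate(code)
        if ok:
            return code
        feedback = err              # self-correction signal for the next attempt
    return None
```

The key property is the closed loop: the machine's actual error output, not a human, drives the next generation attempt.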
For the enterprise sector, OpenAI introduced a proprietary Data Compression technology. This allows the model to maintain high-density context during massive business workflows in finance and predictive analytics, making it significantly more cost-effective for Pro users.
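OpenAI has not published how its compression works, but a common way to keep long business workflows inside a fixed context budget is a rolling summary: fold older turns into one condensed line and keep only the newest turns verbatim. The sketch below is that generic pattern, with `summarize` as a stand-in for whatever condensing step is used.

```python
def compress_history(messages: list[str], summarize, keep_recent: int = 4) -> list[str]:
    """Fold older turns into one summary line; keep the newest turns verbatim.

    `summarize` is a placeholder for any condensing step (a cheap model call,
    an extractive pass, etc.) -- not OpenAI's proprietary method.
    """
    if len(messages) <= keep_recent:
        return list(messages)
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return ["[summary] " + summarize(older)] + recent
```

Whatever the internal mechanism, the economics are the same: fewer tokens resident in context per request means lower cost per workflow, which is the claimed win for Pro users.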
The rollout has already begun for ChatGPT Plus, Team, and Pro subscribers. Developers can access the new models via the Codex API starting today.
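For developers, a request to the new models would look like a standard chat-style API call. The sketch below only builds the request body; the `gpt-5.4-thinking` model identifier is inferred from the announcement and the exact field names may differ in the published API reference.

```python
import json

def build_request(prompt: str, model: str = "gpt-5.4-thinking") -> str:
    """Assemble a chat-style request body as a JSON string.

    Model name and field names are assumptions based on the announcement
    and the common chat-completions format, not confirmed documentation.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)
```
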
The takeaway for legacy users: OpenAI is officially deprecating GPT-5.2, with full retirement planned over the next 90 days. The company is urging all enterprise partners to migrate to the 5.4 architecture immediately.