OpenAI's third DevDay in October 2025 signalled a decisive shift.

Rather than launching a single flagship model or proclaiming progress toward AGI, the company pitched ChatGPT as an operating system and unveiled a toolkit intended to turn developers and enterprises into co‑creators.

The aim is to build a self‑sustaining ecosystem of apps and agents around ChatGPT while lowering the cost of entry with mini models.

A Pivot From Models to Workflow

When Fast Company recapped DevDay, it noted that OpenAI executives talked less about abstract model advances and more about tools to "make the AI do real work that matters".

The company has a compelling reason: its consumer subscription revenue, largely from ChatGPT Plus, still far outstrips its enterprise API income.

COO Brad Lightcap admitted that early models weren't reliable enough for business tasks and that enterprises needed agents capable of executing processes.

DevDay's lineup—Apps SDK, AgentKit, mini models and new voice and video capabilities—targets this gap.

By courting developers and enterprise teams, OpenAI hopes to build a platform that will generate network effects and, eventually, diversified revenue.

Apps in ChatGPT

A centrepiece of DevDay was Apps in ChatGPT.

Unlike the 2023 GPT Store, which listed custom GPTs in a separate marketplace, the new Apps SDK lets developers embed interactive apps directly within chat conversations.

These apps can display rich interfaces, maintain state across turns and respond to natural‑language prompts. OpenAI says the new system will expose apps to 800 million weekly ChatGPT users—a huge distribution channel for partners like Booking, Canva, Coursera and Zillow.

The SDK is built on the open Model Context Protocol and will support login and payments.

A forthcoming app directory and revenue‑sharing scheme suggest OpenAI is learning from Apple's App Store: by controlling the platform, it can capture value from each transaction.
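Because the SDK builds on the open Model Context Protocol, the tools an app exposes are described with plain JSON Schema. The sketch below is illustrative only: the `search_listings` tool and its fields are invented for this example, and only the name/description/inputSchema shape follows the MCP convention for tool descriptors.

```python
# Illustrative MCP-style tool descriptor. The tool itself ("search_listings")
# is invented for this sketch; only the overall shape (name, description,
# inputSchema expressed as JSON Schema) follows the Model Context Protocol.
search_tool = {
    "name": "search_listings",
    "description": "Search property listings by city and price range.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "max_price": {"type": "number"},
        },
        "required": ["city"],
    },
}

def validate_tool(tool: dict) -> bool:
    """Minimal structural check: the fields an MCP client expects to find."""
    return (
        isinstance(tool.get("name"), str)
        and isinstance(tool.get("description"), str)
        and tool.get("inputSchema", {}).get("type") == "object"
    )

print(validate_tool(search_tool))  # True
```

The point of the open protocol is exactly this portability: a descriptor like the one above can, in principle, be served by any MCP server rather than being tied to one vendor's SDK.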

Apps are more than a gimmick.

Fast Company likened the move to turning ChatGPT into an "operating system".

Apps can appear contextually based on a user's request or be explicitly invoked, allowing ChatGPT to function as a dashboard for learning, shopping or planning trips.

However, centralized distribution raises familiar concerns about platform power, revenue sharing and privacy.

OpenAI's guidelines require clear privacy policies and minimal data use; whether developers adhere to these rules will shape user trust.

AgentKit: Streamlining Agentic Workflows

The AgentKit announcement was arguably the most strategic.

Sam Altman described it as "a complete set of building blocks" designed to help developers take agents "from prototype to production".

Prior to AgentKit, building autonomous agents required juggling multiple tools and manual prompt tuning. The new suite provides:

  • Agent Builder – a visual canvas where developers drag and drop nodes to build multi‑agent workflows, preview runs and track versions. Ramp, a fintech company, said this reduced iteration cycles by 70 %, allowing an agent to be built in two sprints instead of two quarters; LY Corporation reportedly assembled a multi‑agent workflow in under two hours.
  • Connector Registry – a hub for managing data sources like Dropbox, Google Drive and SharePoint, with support for third‑party Model Context Protocol servers.
  • Guardrails – a modular safety layer that can mask personal data and detect jailbreak attempts.
  • ChatKit – embeddable chat components that let companies integrate agent conversations into their own products. Canva used ChatKit to embed an agent in under an hour, saving two weeks of development time.
  • Evals for Agents – datasets, trace grading and automated prompt optimization to measure and improve agent performance.
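Conceptually, what a canvas like Agent Builder produces is a directed workflow of agent nodes with handoffs between them. The sketch below is purely illustrative and does not use AgentKit's actual API (which is not documented here); the `triage` and `billing_agent` steps are toy stand‑ins for real agents.

```python
# Conceptual sketch of a multi-agent workflow as an ordered list of named
# steps. Nothing here uses AgentKit's real API; it only illustrates the idea
# of composing agents into a pipeline where each step's output is handed to
# the next step as input.
from typing import Callable

Step = tuple[str, Callable[[str], str]]

def run_workflow(steps: list[Step], user_input: str) -> str:
    """Run each agent step in order, passing output forward as input."""
    payload = user_input
    for _name, agent in steps:
        payload = agent(payload)
    return payload

# Two toy "agents": a triage step that routes the request, then a handler.
steps: list[Step] = [
    ("triage", lambda text: f"[billing] {text}"),
    ("billing_agent", lambda text: f"Resolved: {text}"),
]

print(run_workflow(steps, "refund request"))
# → Resolved: [billing] refund request
```

The value of a visual builder plus evals is precisely that this plumbing, along with versioning and grading of each step's traces, stops being hand‑written glue code.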

TechCrunch noted that AgentKit signals a competitive move against other platforms and aims to remove friction so enterprises can build complex agents.

VentureBeat added that AgentKit could compete with no‑code automation tools like Zapier and that its beta features will gradually roll out.

By integrating building, deployment and evaluation tools, OpenAI is positioning itself as the full‑stack provider for agentic workflows.

For professionals tasked with automating customer support or sales, AgentKit lowers the barrier to experimentation, but it also deepens dependence on OpenAI's ecosystem.

Mini Models and the Price War

Beyond developer tools, OpenAI introduced a family of smaller, cheaper models.

These models respond to competitive pressure from Google, Anthropic and others, and they broaden access for cost‑sensitive users.

gpt‑realtime‑mini

OpenAI's Realtime API now includes gpt‑realtime‑mini, a speech‑to‑speech model designed for low‑latency voice interactions.

The TechCrunch DevDay recap highlighted that the mini model is about 70 % cheaper than the original realtime voice model, delivering similar voice quality.

The Decoder likewise highlighted the roughly 70 % price cut, noting that the mini model sits alongside GPT‑5 Pro and Sora 2 in the API lineup.

An in‑depth guide from Eesel explains why the model is significant: unlike previous voice pipelines that chained separate speech‑to‑text and text‑to‑speech models, gpt‑realtime‑mini handles audio input and output in a single model, reducing latency and enabling natural conversations.

It also introduces expressive voices like Marin and Cedar, supports multimodal input (audio plus text) and offers function calling to connect with external services.
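The latency argument is easy to see with back‑of‑the‑envelope arithmetic. The numbers below are made‑up illustrative figures, not measured benchmarks: a chained pipeline pays for each stage in series, while a single speech‑to‑speech model responds in one hop.

```python
# Illustrative latency comparison (invented numbers, not measurements).
# A chained voice pipeline runs its stages in series, so per-stage
# latencies add up; a single speech-to-speech model avoids the
# intermediate transcription and synthesis hops entirely.
pipeline_ms = {
    "speech_to_text": 300,
    "llm_response": 500,
    "text_to_speech": 250,
}
chained_latency = sum(pipeline_ms.values())  # 1050 ms end to end

single_model_latency = 600  # one hypothetical direct hop

print(chained_latency, single_model_latency)  # 1050 600
```

Even with generous assumptions for each stage, the serial pipeline's floor is the sum of its parts, which is why collapsing the stages into one model matters for conversational feel.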

gpt‑image‑1‑mini

On the image generation front, OpenAI released gpt‑image‑1‑mini.

According to the company's pricing page, text input costs $2 per million tokens and output is $8 per million tokens, while image tokens cost $2.50 per million for inputs and $8 per million for outputs.

The Decoder summarized that the mini model is roughly 80 % less expensive than the previous image model and targets developers scaling image‑based applications.

The lower prices open the door for creative experimentation—small businesses can afford to generate marketing images or storyboards without incurring the high fees associated with GPT‑Image‑1.

o3‑mini and GPT‑5 Pro

OpenAI's lineup also includes o3‑mini, a cost‑efficient reasoning model optimized for STEM tasks, offering lower latency and function calling.

The company claims that o3‑mini matches or outperforms earlier models on tasks like math competitions and coding benchmarks while cutting per‑token pricing by 95 % relative to GPT‑4 at its debut.

At the high end, GPT‑5 Pro offers extended reasoning with a 400,000‑token context window and state‑of‑the‑art performance on science questions.

The model is available via the Responses API with a "high" reasoning effort setting and costs $15 per million input tokens and $120 per million output tokens.
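At those list prices, the cost of a single long request is simple arithmetic. The token counts below are hypothetical examples, not figures from OpenAI; only the per‑million‑token rates come from the stated pricing.

```python
# Cost estimate for one GPT-5 Pro request at the stated list prices:
# $15 per million input tokens, $120 per million output tokens.
INPUT_PRICE = 15 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 120 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the stated rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Hypothetical research query: 300k tokens of context, 10k tokens of output.
cost = request_cost(300_000, 10_000)
print(f"${cost:.2f}")  # $5.70
```

A few dollars per deep‑research query is trivial for an enterprise workload but clearly not a price point for casual use, which is exactly the segmentation the mini models address.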

The coexistence of mini and premium models illustrates OpenAI's desire to segment the market—offering affordable options for entry‑level tasks while charging a premium for deep research and enterprise‑grade workloads.

Sora 2 and the Challenge of Generative Video

OpenAI's most visually striking demo was Sora 2, an upgraded video model that can generate photorealistic clips obeying physical laws.

The company likened the leap from the original Sora to Sora 2 to going from GPT‑1 to GPT‑3.5.

Sora 2 can simulate world physics—balls rebound off backboards rather than teleport—and handle complex shots like Olympic routines.

It also integrates background soundscapes and can insert a real person's likeness and voice into generated scenes.

A notable part of the announcement was the Sora social app.

Users can create and remix videos, insert themselves into scenes and participate in a feed designed to encourage creativity rather than doomscrolling.

Safety features include parental controls, usage limits for teens and the ability to disable cameo insertion.

However, the app's legal implications quickly became apparent.

TechCrunch reported that OpenAI faced backlash for an opt‑out approach to copyrighted characters and that the company now plans granular, opt‑in copyright controls.

Sam Altman acknowledged that monetization of video generation remains unclear and that OpenAI may need to share revenue with rights holders.

For a generation that grew up remixing content on TikTok, Sora offers unprecedented creative power, but its success will hinge on responsible copyright handling and sustainable business models.

Codex Goes Mainstream

While models like GPT‑5 capture headlines, OpenAI's programming assistant Codex quietly transitioned from research preview to general availability.

Daily usage has grown more than 10× since May, and engineers at OpenAI merged 70 % more pull requests each week when using Codex.

The general release introduces a Slack integration, enabling developers to ask questions or delegate tasks directly from team chats.

The Codex SDK allows companies to embed Codex into their own tools, providing structured outputs, context management and the ability to resume sessions.

New admin tools give enterprise customers control over environment settings and usage analytics.

Early users like Cisco reported that Codex cut code review time by up to 50 %, while Instacart used the SDK to automate code cleanup and accelerate engineering velocity.

These improvements show that OpenAI intends Codex to become a standard part of enterprise software development, further embedding its models into professional workflows.

Industry Implications: Platform Play and Competitive Pressures

Taken together, the DevDay announcements reveal a two‑pronged strategy:

  • Building a platform ecosystem. By turning ChatGPT into a distribution hub for apps and agents, OpenAI seeks to replicate the success of smartphone operating systems. Developer tools like AgentKit and ChatKit lower the friction of building agentic applications, and the app directory promises new revenue streams. FinancialContent's analysis concluded that these launches "fundamentally reshaped the landscape of AI application development," turning ChatGPT into "an emergent AI operating system". Fast Company observed that ChatGPT's evolution could help enterprises finally see returns on their AI investments. For developers and startups, the platform offers reach and ready‑made infrastructure; for users, it promises a single interface for many tasks. However, a successful platform will also cement OpenAI's gatekeeper role and could amplify concerns about data privacy and platform dependence.
  • Price segmentation and the race to the bottom. VentureBeat's preview noted that competition from Google's Gemini and Anthropic's Claude has narrowed technical performance differences and triggered a price war. OpenAI responded by releasing mini models (o3‑mini, gpt‑realtime‑mini, gpt‑image‑1‑mini) that dramatically reduce costs, while offering premium models like GPT‑5 Pro for high‑stakes applications. The pricing page shows that gpt‑realtime‑mini costs $0.60 per million text input tokens and $2.40 per million output tokens, compared with $4 and $16 for the standard realtime model. For audio, the mini model costs $10 per million input tokens, versus $32 for the full model. These reductions aim to attract high‑volume applications such as call centers and social media platforms, but they also squeeze margins in an industry already grappling with astronomical compute expenses.
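The scale of the price gap is easiest to see on the text‑token rates quoted above. The 50/50 input/output split below is an arbitrary assumption for illustration, and note that audio tokens, which dominate realtime usage, are priced higher and on a different schedule.

```python
# Comparison of the stated Realtime API text-token list prices
# (dollars per million tokens): mini vs. standard model.
MINI = {"input": 0.60, "output": 2.40}
STANDARD = {"input": 4.00, "output": 16.00}

def blended_cost(rates: dict, input_share: float = 0.5) -> float:
    """Blended cost of one million tokens, split between input and output.

    The 50/50 split is an assumption for illustration; real workloads vary.
    """
    return rates["input"] * input_share + rates["output"] * (1 - input_share)

mini_cost = blended_cost(MINI)          # 1.50 dollars per million tokens
standard_cost = blended_cost(STANDARD)  # 10.00 dollars per million tokens
savings = 1 - mini_cost / standard_cost
print(f"{savings:.0%}")  # 85%
```

On text tokens alone the blended discount works out steeper than the headline 70 % figure, which reflects the overall realtime pricing once the higher audio rates are included.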

Looking Ahead: Opportunities and Challenges

The ability to run apps inside ChatGPT could streamline daily tasks—from planning travel to managing finances—while AgentKit offers career opportunities for those building AI‑powered workflows.

The mini models democratize advanced capabilities such as voice and video, allowing startups and independent creators to experiment without prohibitive costs.

At the same time, the platform's centralization raises issues:

  • Economic sustainability. Training and serving large models remain expensive. Lightcap hinted that enterprise growth is necessary to reach break‑even. Introducing mini models and a social video app increases compute demand; it's unclear how the company will fund these services long‑term or share revenue with rights holders.
  • Regulation and copyright. Sora's cameo feature triggered concerns about unauthorized use of IP. OpenAI's shift to opt‑in controls underscores the need for clear licensing frameworks. Broader regulation of synthetic media and biometric privacy may follow.
  • Competition and interoperability. Google and Microsoft are developing their own agent frameworks and voice models. Open standards like the Model Context Protocol may facilitate cross‑platform agents, but lock‑in remains a risk. For developers, hedging across multiple ecosystems may become necessary.

A Platform Strategy in the Making

DevDay 2025 marked a maturation for OpenAI.

Rather than showcasing a singular breakthrough, the company launched a suite of tools and models that collectively reposition ChatGPT from a conversational product to a nascent AI operating system.

Apps in ChatGPT, AgentKit, mini models and the expansion into voice and video all serve a larger goal: building a platform where developers innovate, users stay engaged and OpenAI captures value across the stack.

This evolution offers powerful new capabilities but also demands critical awareness of platform dynamics, data usage and the ethics of synthetic media.

Whether OpenAI can balance these ambitions with sustainable business models and responsible governance will determine whether its platform dream becomes a durable reality.