How Does OpenAI Make Money? Subscriptions, APIs, and More

OpenAI makes money primarily through subscriptions to ChatGPT, usage-based fees from its developer API, and enterprise contracts. The company hit $10 billion in annualized revenue by mid-2025, nearly doubling from $5.5 billion in December 2024, and is on track to reach $12.7 billion by year’s end. That figure excludes licensing revenue from Microsoft, its largest investor and cloud partner. Here’s how each revenue stream works.

ChatGPT Subscriptions

The biggest and most visible revenue source is ChatGPT, which offers tiered monthly plans for individual users. A free tier gives access to basic models, but paid plans unlock more powerful reasoning, faster responses, expanded memory, and tools like image generation and deep research. ChatGPT Plus sits in the middle of the lineup, offering advanced reasoning with newer models, expanded uploads, and early access to features. ChatGPT Pro sits at the top, with five to twenty times more usage, access to the most capable models, and unlimited image creation.

There’s also a lower-cost Go tier for lighter users. Each step up the ladder gives you more messages, better models, and higher usage limits. With hundreds of millions of users on ChatGPT, even modest conversion rates to paid plans generate substantial recurring revenue.
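To see why conversion rate matters so much at this scale, here is a back-of-envelope sketch. Every number below (user base, conversion rate, average price) is an illustrative assumption, not an OpenAI figure:

```python
# Illustrative subscription math -- all inputs are assumptions,
# not OpenAI's actual numbers.
active_users = 500_000_000    # assumed user base ("hundreds of millions")
conversion_rate = 0.05        # assume 5% of users pay for a plan
avg_monthly_price = 20.0      # assume most paying users are on a $20 tier

paying_users = active_users * conversion_rate
monthly_revenue = paying_users * avg_monthly_price
annual_revenue = monthly_revenue * 12

print(f"Paying users:    {paying_users:,.0f}")     # 25,000,000
print(f"Annual revenue: ${annual_revenue:,.0f}")   # $6,000,000,000
```

Even a single percentage point of conversion moves annual revenue by over a billion dollars under these assumptions, which is why free-to-paid conversion is the lever to watch.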

Business and Enterprise Plans

OpenAI sells workspace-oriented plans designed for companies. The Business tier starts at two users and bundles everything in the Plus plan with admin controls, single sign-on, compliance support for privacy regulations like GDPR and CCPA, and integrations with over 60 workplace tools including Slack, Google Drive, SharePoint, GitHub, and Atlassian products. Enterprise plans add annual billing options and deeper customization.

These plans are priced per user per month, so a company with hundreds or thousands of seats can represent a large contract. The business tiers also promise that OpenAI won’t train on customer data, which is a key selling point for organizations handling sensitive information.

Developer API Fees

Beyond ChatGPT, OpenAI charges developers who build applications on top of its models through an API. Pricing is based on tokens, which are chunks of text roughly equivalent to a word or part of a word. Developers pay separately for input tokens (what they send to the model) and output tokens (what the model generates back), with output tokens costing significantly more.

Rates vary by model. For a sense of scale, GPT-4o costs around $2.50 per million input tokens and $10 per million output tokens. Smaller, cheaper models like GPT-4o mini cost a fraction of that: $0.15 per million input tokens and $0.60 per million output tokens. Reasoning-focused models like o3 fall somewhere in between. OpenAI also offers a Batch API that returns results within 24 hours at a 50% discount, useful for companies processing large volumes of requests where speed isn’t critical.
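The billing model above is easy to express in code. The sketch below uses the per-million-token rates just quoted and the 50% Batch API discount; it's a simplified illustration, not OpenAI's billing logic, and the token counts in the example call are made up:

```python
# Sketch of per-request API cost using the rates quoted above:
# model: (input $/1M tokens, output $/1M tokens)
PRICES = {
    "gpt-4o":      (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int,
                 batch: bool = False) -> float:
    """Dollar cost of one API call; Batch API requests bill at 50% off."""
    in_rate, out_rate = PRICES[model]
    cost = input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
    return cost / 2 if batch else cost

# A 2,000-token prompt with a 500-token reply on GPT-4o:
print(request_cost("gpt-4o", 2000, 500))               # ~ $0.01
# The same call routed through the Batch API:
print(request_cost("gpt-4o", 2000, 500, batch=True))   # ~ $0.005
```

Note how the pricing asymmetry shapes behavior: because output tokens cost four times as much here, developers often cap response length to control spend.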

Additional tools carry their own fees. Image generation through DALL-E 3 costs $4.40 to $8.80 per 100 images depending on resolution. File search, code interpretation, and real-time audio models each have separate pricing. A company building a customer service chatbot, a coding assistant, or a document analysis tool on OpenAI’s API could easily run up a meaningful monthly bill as usage scales.
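How quickly does that add up? The sketch below estimates a monthly bill for a hypothetical support chatbot. The token and image rates come from the figures above; the daily volumes and per-call token counts are invented for illustration:

```python
# Rough monthly-bill estimate for a hypothetical chatbot.
# Rates are from the article; all volumes are assumptions.
calls_per_day = 10_000
cost_per_call = 0.01      # GPT-4o, ~2,000 input / 500 output tokens
images_per_day = 200
cost_per_image = 0.044    # low end of the $4.40-per-100-images DALL-E 3 rate

monthly_bill = 30 * (calls_per_day * cost_per_call
                     + images_per_day * cost_per_image)
print(f"~${monthly_bill:,.0f}/month")
```

At these assumed volumes the bill lands in the low thousands of dollars per month, and it scales linearly with traffic, which is exactly the dynamic that makes the API a compounding revenue stream for OpenAI.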

Provisioned Capacity for Heavy Users

For organizations with predictable, high-volume workloads, OpenAI offers provisioned throughput units (PTUs) through its partnership with Microsoft Azure. Instead of paying per token, customers reserve a fixed amount of computing capacity and pay an hourly or monthly rate regardless of how much they actually use. Monthly reservations run around $260 per unit for models like GPT-4o, with annual commitments offering further discounts. This model appeals to large enterprises that need guaranteed performance and prefer predictable costs over variable billing.
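The choice between reserved capacity and per-token billing comes down to a break-even volume. The sketch below uses the roughly $260-per-unit monthly rate mentioned above; the blended per-token price is an assumption, and real PTU throughput varies by model and workload:

```python
# Break-even sketch: reserved PTU capacity vs. pay-per-token billing.
# The ~$260/month rate is from the article; the blended token price
# is an illustrative assumption.
ptu_monthly_cost = 260.0       # $ per PTU per month
blended_price_per_1m = 5.0     # assumed average $ per 1M tokens, pay-as-you-go

# Monthly token volume at which one PTU matches per-token billing:
break_even_tokens = ptu_monthly_cost / blended_price_per_1m * 1_000_000
print(f"{break_even_tokens:,.0f} tokens/month")   # 52,000,000
```

Under these assumptions, a workload steadily consuming more than about 52 million tokens a month per unit comes out ahead on reserved capacity, on top of the guaranteed-throughput benefit the article describes.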

The Microsoft Relationship

Microsoft has invested billions in OpenAI and resells OpenAI’s models through its Azure cloud platform. For years, Microsoft held exclusive rights to distribute OpenAI’s commercial API through Azure, and the two companies shared revenue from those sales. That arrangement has evolved. Bloomberg reported in 2025 that Microsoft would stop paying a revenue share on OpenAI products it resells through its cloud, in exchange for ending its exclusivity. This means OpenAI can now sell its API directly and through other cloud providers, while Microsoft continues offering the models on Azure without owing OpenAI a cut.

Microsoft’s licensing fees to OpenAI, which are excluded from the $10 billion annualized revenue figure, represent an additional income stream. The exact terms aren’t fully public, but the relationship remains one of the most significant partnerships in the tech industry.

Corporate Structure and Investor Returns

OpenAI’s financial model is unusual. The company started as a nonprofit and later created a for-profit subsidiary to raise the capital needed for AI research. Today, the OpenAI Foundation (the nonprofit parent) appoints all board members and can replace directors at any time. A Safety and Security Committee remains under the Foundation’s control.

All equity holders now own traditional stock that grows proportionally with the company’s success. But the Foundation has a built-in mechanism to claim additional equity if the company’s share price increases more than tenfold over 15 years. The more the price exceeds that threshold, the more equity flows to the Foundation. This structure is designed to ensure that if OpenAI becomes enormously profitable, a significant share of that value supports the nonprofit’s mission rather than flowing entirely to private investors.

Where the Money Goes

Despite rapid revenue growth, OpenAI is not yet profitable. The company lost roughly $5 billion in 2024, reflecting the enormous cost of training frontier AI models and operating the computing infrastructure behind ChatGPT and its API. Training a single large model can cost hundreds of millions of dollars in computing power alone, and serving millions of users in real time requires massive ongoing GPU capacity. OpenAI’s revenue strategy is essentially a race to grow subscription and API income fast enough to outpace those infrastructure costs as the technology matures and becomes more efficient to run.