LLM API Cost Calculator

Compare API costs across OpenAI, Anthropic, and Google. Enter your token usage to find the cheapest model.

Usage Assumptions

Set your expected input tokens, output tokens, and number of requests. Example: 1,000 tokens ≈ 750 words (approx. 1.5 pages of text).

Cost Comparison

Costs are compared live across 15+ models.

LLM Pricing 101: How It Works

Large Language Model (LLM) pricing can be confusing: everything is billed in "tokens," and rates vary widely between models. Our calculator simplifies this by letting you compare real-world usage scenarios across top providers like OpenAI, Anthropic, and Google.

Whether you are building a chatbot, an analysis tool, or just curious, knowing the cost structure helps you make informed architectural decisions.

Input Tokens

This is what you send to the AI. It includes your system prompt, user instructions, and any context (like documents or code snippets). Input tokens are generally cheaper than output tokens.

Output Tokens

This is what the AI writes back. Generating text is computationally more expensive for the provider, so output tokens usually cost 3x to 4x more than input tokens.
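
As a rough sketch of how the math works (the rates below are illustrative placeholders, not any provider's current prices), the cost of a single request is just each token count divided by one million, multiplied by the model's per-million-token rate:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Cost of one API call, given per-million-token rates in USD."""
    return (input_tokens / 1_000_000) * input_rate_per_m + \
           (output_tokens / 1_000_000) * output_rate_per_m

# Illustrative rates only -- always check the provider's pricing page.
cost = request_cost(input_tokens=2_000, output_tokens=500,
                    input_rate_per_m=2.50, output_rate_per_m=10.00)
print(f"${cost:.4f} per request")  # -> $0.0100 per request
```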

Reasoning Models

Newer "thinking" models like OpenAI o1 or Claude Opus use chain-of-thought processing: they generate hidden reasoning tokens before answering, and those tokens are typically billed as output tokens. They are significantly more expensive but offer higher accuracy on complex logic tasks.
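
For a hedged sense of how that shows up on the bill, here is a small sketch comparing the same request with and without a few thousand hidden reasoning tokens (the rates and token counts are assumptions for illustration):

```python
# Same request, with and without hidden reasoning tokens.
# Rates below are illustrative assumptions, not current prices.
INPUT_RATE_PER_M = 2.50    # USD per 1M input tokens
OUTPUT_RATE_PER_M = 10.00  # USD per 1M output tokens

def cost(input_tokens, output_tokens):
    return (input_tokens / 1e6) * INPUT_RATE_PER_M + (output_tokens / 1e6) * OUTPUT_RATE_PER_M

plain = cost(2_000, 500)             # 500 visible answer tokens
thinking = cost(2_000, 500 + 4_000)  # plus ~4,000 hidden reasoning tokens, billed as output
print(f"plain ${plain:.4f} vs thinking ${thinking:.4f} ({thinking / plain:.1f}x)")
```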

Flash/Haiku Models

Models like GPT-4o mini and Gemini Flash are racing to zero cost. They are incredibly fast and cheap, perfect for high-volume tasks like summarization or classification.
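
To get a feel for the scale, here is a rough estimate of a high-volume classification job on a small model (the rates and token counts below are assumptions for illustration; verify current pricing before relying on them):

```python
# Classifying 100,000 short documents with a small, cheap model.
# Rates and token counts below are illustrative assumptions.
INPUT_RATE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 0.60  # USD per 1M output tokens

docs = 100_000
input_tokens_per_doc = 400   # instructions + document text
output_tokens_per_doc = 5    # a single category label

total = docs * ((input_tokens_per_doc / 1e6) * INPUT_RATE_PER_M
                + (output_tokens_per_doc / 1e6) * OUTPUT_RATE_PER_M)
print(f"~${total:.2f} for the whole batch")  # -> ~$6.30 for the whole batch
```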

Optimize Your API Spend

By mixing and matching models (e.g., using a cheaper model for initial filtering and a smart model for final drafting), you can reduce your monthly bill by up to 80%. Use this calculator to model those scenarios.
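
As a sketch of that kind of routing (the rates, token counts, and the 20% escalation rate below are all assumptions for illustration), compare sending every request to a premium model against screening with a cheap model first:

```python
# Hypothetical two-stage pipeline: a cheap model screens, a premium model drafts.
# All rates, token counts, and the escalation rate are illustrative assumptions.
CHEAP = {"in": 0.15, "out": 0.60}     # USD per 1M tokens
PREMIUM = {"in": 2.50, "out": 10.00}  # USD per 1M tokens

def cost(rates, input_tokens, output_tokens):
    return (input_tokens / 1e6) * rates["in"] + (output_tokens / 1e6) * rates["out"]

requests = 10_000

# Baseline: every request goes straight to the premium model.
baseline = requests * cost(PREMIUM, 1_500, 700)

# Routed: the cheap model screens everything; 20% is escalated to the premium model.
routed = requests * cost(CHEAP, 1_500, 50) + int(requests * 0.2) * cost(PREMIUM, 1_500, 700)

print(f"baseline ${baseline:,.2f} vs routed ${routed:,.2f} "
      f"({1 - routed / baseline:.0%} saved)")  # -> roughly 78% saved
```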

