Token Counter
Paste any text to instantly count tokens and estimate API costs across GPT-4o,
Claude 3.5, Gemini, and more. Uses a cl100k_base approximation — no API calls,
no data leaves your browser.
The tool multiplies your input token count and an estimated output token count by each model's per-1M-token rates. Prices are taken from public provider pages and may vary:

| Model | Provider | Input ($ / 1M tokens) | Output ($ / 1M tokens) |
|---|---|---|---|
| GPT-4o | OpenAI | $5.00 | $15.00 |
| GPT-4o mini | OpenAI | $0.15 | $0.60 |
| Claude 3.5 Sonnet | Anthropic | $3.00 | $15.00 |
| Claude 3 Haiku | Anthropic | $0.25 | $1.25 |
| Gemini 1.5 Pro | Google | $3.50 | $10.50 |
| Gemini 2.0 Flash | Google | $0.075 | $0.30 |
What are tokens?
Language models don't read text character by character — they process tokens,
which are chunks of text roughly 4 characters long for English. A word like "hello"
is 1 token; "unbelievable" might be 3–4 tokens.
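That rule of thumb can be turned into a one-line estimator. This is only a sketch of the ~4-characters-per-token heuristic, not a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough English-text heuristic: about 4 characters per token."""
    return max(1, round(len(text) / 4))
```

For example, `estimate_tokens("unbelievable")` returns 3, in line with the 3–4 token range above.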
Why does it matter?
LLM APIs charge per token — for both input (your prompt + context) and output (the
response). Knowing your token count helps you optimize prompts, choose the right model,
and predict costs before running large batches.
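The cost arithmetic is simple: token count divided by one million, times the per-1M rate, summed over input and output. A minimal sketch using rates from the pricing table on this page (the dictionary keys here are illustrative, not any provider's API identifiers):

```python
# Per-1M-token prices in USD, from the pricing table on this page.
# These drift over time; check provider pages before relying on them.
PRICES = {
    "gpt-4o": (5.00, 15.00),
    "gpt-4o-mini": (0.15, 0.60),
    "claude-3.5-sonnet": (3.00, 15.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

A 1,000-token prompt with a 500-token reply on GPT-4o works out to 1000/1M × $5 + 500/1M × $15 = $0.0125.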
Approximation accuracy
This tool uses a regex-based approximation of cl100k_base encoding. Results are
typically within 5–10% of the actual count. For exact counts, use the official
tiktoken library or the provider's tokenizer API.
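To give a feel for how a regex approximation works: the real cl100k_base pattern relies on Unicode categories and possessive quantifiers, but a much-simplified stand-in (illustrative only, not the actual tiktoken pattern) captures the general idea of splitting text into letter runs, short digit groups, punctuation, and whitespace:

```python
import re

# Simplified sketch of a cl100k_base-style split: contractions,
# letter runs, 1-3 digit groups, punctuation runs, whitespace runs.
# The real tiktoken pattern is considerably more elaborate.
TOKEN_RE = re.compile(r"'[a-z]{1,2}|[A-Za-z]+|\d{1,3}|[^\sA-Za-z\d]+|\s+")

def approx_token_count(text: str) -> int:
    """Count regex matches as a rough stand-in for tokens."""
    return len(TOKEN_RE.findall(text))
```

Here `approx_token_count("Hello, world!")` returns 5 ("Hello", ",", " ", "world", "!"); the exact cl100k_base count for the same string is 4, which illustrates the few-percent error band described above.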
// huntermussel
Building something with LLMs and need help with cost optimization?
HunterMussel designs production AI pipelines with intelligent caching, context compression, and model routing, so you ship faster and cut your API spend.
Explore AI Automation Services →