
AI News Digest: Opus 4.7 costs ~40% more, llm-openrouter, Datasette in Sheets

Updated April 20, 2026

Category: AI Development


Sunday April 20, 2026. The headline story isn’t a new model. It’s that Claude Opus 4.7 quietly changed its tokenizer, and the math on your API bill just got worse. Simon Willison measured it. He also shipped two other useful things. Here’s the digest.


Claude Opus 4.7 quietly got ~40% more expensive

Anthropic’s Opus 4.7 is the first Claude model to ship a new tokenizer. Pricing stayed the same ($5/M input, $25/M output, unchanged from Opus 4.6), but the tokenizer now maps the same input to more tokens. Per Anthropic’s own documentation, the multiplier is roughly 1.0x to 1.35x depending on content type.

Willison updated his Claude Token Counter tool to make this visible: you paste text once and see token counts side by side across Opus 4.7, Opus 4.6, Sonnet 4.6, and Haiku 4.5. He also ran real inputs through it.

What Willison measured

  • Opus 4.7’s own system prompt: 1.46x more tokens than when counted with the 4.6 tokenizer.
  • A 3456x2234 PNG image: 3.01x more tokens than 4.6.
  • A small 682x318 image: basically flat, at 314 tokens on 4.7 vs 310 on 4.6.
  • A 15MB, 30-page text-heavy PDF: 60,934 tokens on 4.7 vs 56,482 on 4.6, a 1.08x multiplier.

Opus 4.7 also accepts images up to 2,576 px on the long edge (~3.75 megapixels), more than 3x the prior Claude image ceiling. So you can send bigger images, but if you do, each one is going to cost dramatically more.

Why this matters

Same list price but more tokens per request means a higher effective price. For long-context and image-heavy workflows, “Opus 4.7 at the same price as 4.6” is misleading by up to ~40%.
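The arithmetic is worth making concrete. A minimal sketch, using only the multipliers Willison measured and the unchanged $5/M input list price (the helper name `effective_price` is mine, not from the post):

```python
# Back-of-envelope effective-cost check for Opus 4.7 vs 4.6.
# List price is unchanged; only the token count per input changes.
INPUT_PRICE_PER_M = 5.00  # USD per million input tokens, both models

# Multipliers Willison measured (4.7 tokens / 4.6 tokens)
measured = {
    "Opus 4.7 system prompt (text)": 1.46,
    "3456x2234 PNG": 3.01,
    "small 682x318 image": 314 / 310,
    "30-page text-heavy PDF": 60_934 / 56_482,
}

def effective_price(multiplier, list_price=INPUT_PRICE_PER_M):
    """Price you effectively pay per million tokens as counted by the
    old (4.6) tokenizer, once the new tokenizer inflates the count."""
    return list_price * multiplier

for name, m in measured.items():
    print(f"{name}: {m:.2f}x -> effectively ${effective_price(m):.2f}/M")
```

For the text-heavy PDF the effective rate is about $5.39/M instead of $5.00/M; for the large PNG it is over $15/M. The multiplier, not the price sheet, is what your bill tracks.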

If you’re running production workloads on Claude, two actions today:

  1. Benchmark your actual prompts using Willison’s updated token counter. The multiplier varies by content type, so don’t assume the 1.35x upper bound applies to you.
  2. Revisit your cost model before upgrading. A drop-in replacement of opus-4-6 with opus-4-7 could increase costs noticeably on long-context jobs. This isn’t a reason to avoid 4.7. It’s a reason to measure first.

This fits the broader pattern of transparency work Willison has been doing on Claude behavior. The big labs ship changes quietly; the community has to surface the implications.

→ Simon Willison: Claude Token Counter, now with model comparisons


llm-openrouter 0.6 ships a refresh command

Willison also released llm-openrouter 0.6, the OpenRouter plugin for his llm CLI tool. The headline addition is a new llm openrouter refresh command that forces a refresh of the model list without waiting for the cache to expire.

Small feature, revealing motivation: Willison added it specifically so he could try Kimi K2.6 on OpenRouter the moment it appeared, which ties directly to yesterday’s reporting on the model. When new models drop, you don’t want to wait for your tooling’s cache TTL. You want to point at them immediately.

For anyone running cross-model benchmarks from the terminal, llm plus llm-openrouter is now a clean pattern for “test this prompt against every frontier model” without juggling API keys or SDKs.
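The workflow looks roughly like the session below. A hedged sketch: `llm install`, `llm keys set`, and `llm models` are standard llm CLI commands; `llm openrouter refresh` is the new command from this release; the model ID is a placeholder, not a real identifier.

```shell
# Install (or upgrade) the OpenRouter plugin for the llm CLI
llm install -U llm-openrouter

# Store your OpenRouter API key once; later calls pick it up
llm keys set openrouter

# New in 0.6: force-refresh the cached model list instead of
# waiting for the cache TTL to expire
llm openrouter refresh

# Confirm the freshly added model is visible, then prompt it
llm models | grep -i kimi
llm -m "openrouter/<provider>/<model-id>" "Summarize this repo in one line"
```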

→ Simon Willison: llm-openrouter 0.6


Datasette data in Google Sheets

Willison’s third publication of the day is a TIL (Today I Learned) writeup documenting three ways to pull Datasette data into Google Sheets:

  1. Sheets’ built-in IMPORTDATA() function, the simplest option, works for public endpoints.
  2. A named function wrapping IMPORTDATA() for cleaner cell formulas.
  3. A Google Apps Script, necessary when you need to send an API token in an HTTP header, which IMPORTDATA() doesn’t support.

The post includes a working example sheet demonstrating all three approaches.
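For the simplest approach, a cell formula along these lines does the whole job (the URL, database, and table names here are illustrative, not from the post; Datasette serves query results as CSV when you append `.csv` to the database path):

```
=IMPORTDATA("https://your-datasette.example.com/content.csv?sql=select+id,+title+from+articles+limit+100")
```

IMPORTDATA() only works for endpoints reachable without authentication, which is exactly why the Apps Script route exists for the token-in-header case.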

This is the kind of bridge-the-gap tooling that makes Datasette useful outside of dashboards. Stakeholders who live in Sheets can now query a Datasette instance from a cell. It’s mundane in the best way: it unblocks a real workflow most teams hack around with CSV exports.

→ Simon Willison: SQL functions in Google Sheets to fetch data from Datasette


OpenAI and Hyatt: AI among colleagues

OpenAI published a customer story on Hyatt rolling out ChatGPT Enterprise to its workforce, under the framing “AI among colleagues.”

Enterprise customer stories are marketing, but they’re useful marketing: they signal where ChatGPT Enterprise deployments are actually landing at scale, which in turn signals what procurement teams are approving. Hospitality is a notable data point, with hundreds of properties, a distributed workforce, and many non-technical roles. This is the demographic that enterprise chat tools have historically struggled to reach.

The more interesting question, as always with these announcements, is why Hyatt chose OpenAI directly over Microsoft Copilot (which resells the same underlying models through its M365 bundle). Procurement at this scale is usually determined by existing enterprise relationships, not raw capability differences, so the fact that OpenAI won this one suggests their direct enterprise motion is maturing.

→ OpenAI: OpenAI helps Hyatt advance AI among colleagues


Takeaways

  1. Read the tokenizer notes before upgrading. Opus 4.7 is a price increase disguised as a same-price release for any workload where the 1.35x multiplier bites. Measure your prompts first.
  2. Community tooling closes observability gaps faster than labs do. Willison’s token counter is how you’d actually find this out; Anthropic’s release notes alone wouldn’t have told you the real cost impact of your specific workload.
  3. Small CLI features track the model news cycle. llm openrouter refresh exists because Kimi K2.6 just dropped. The tooling layer is reacting to the model layer in real time.
  4. Enterprise AI is a relationship sale, not a capability sale. Hyatt picking OpenAI direct matters more for what it implies about enterprise motion than for what ChatGPT will do at Hyatt.

For yesterday’s digest on Kimi K2.6, Qwen 3.6 on M5 Max, and Claude system prompt diffs, see April 19’s roundup.
