Coverage

Seven AI engines tracked today. Any new generally available assistant is added within 30 days of launch.

No model provider can show you how its competitors talk about your brand. Aeoniti is the only neutral observatory that watches them all, and the list grows as the AI landscape grows.

Currently tracking

| Engine             | Vendor     | Model                    | Added      | Notes                           |
|--------------------|------------|--------------------------|------------|---------------------------------|
| ChatGPT            | OpenAI     | gpt-4o-mini              | 2026-05-06 | Probed via OpenRouter           |
| Claude             | Anthropic  | claude-3.5-haiku         | 2026-05-06 | Probed via OpenRouter           |
| Perplexity         | Perplexity | sonar                    | 2026-05-06 | Probed via OpenRouter           |
| Google AI Overview | Google     | gemini-via-aio           | 2026-05-09 | Via DataForSEO AI Overview parser |
| Llama 4            | Meta       | meta-llama/llama-3.3-70b | 2026-05-10 | Probed via OpenRouter           |
| Grok               | xAI        | x-ai/grok-2              | 2026-05-10 | Probed via OpenRouter           |
| Mistral Large      | Mistral AI | mistralai/mistral-large  | 2026-05-10 | Probed via OpenRouter           |
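The engines marked "Probed via OpenRouter" can all be reached through OpenRouter's single OpenAI-compatible chat-completions endpoint, which is what makes asking every model the same question practical. A minimal sketch of how such a probe could be assembled (the model IDs mirror the table above, and the brand prompt, `build_probe`/`send_probe` helpers, and `OPENROUTER_API_KEY` handling are illustrative assumptions, not Aeoniti's actual pipeline):

```python
import json
import urllib.request

# Engines probed through OpenRouter's OpenAI-compatible chat endpoint.
# Model IDs follow the coverage table; treat them as illustrative.
ENGINES = {
    "ChatGPT": "openai/gpt-4o-mini",
    "Claude": "anthropic/claude-3.5-haiku",
    "Perplexity": "perplexity/sonar",
}


def build_probe(model: str, brand: str) -> dict:
    """Build one chat-completion request body asking a model about a brand."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": f"What do you know about the brand {brand}?"}
        ],
    }


def send_probe(body: dict, api_key: str) -> dict:
    """POST the probe to OpenRouter (needs a real API key and network access)."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Same question, every engine -- the comparison no single vendor can run.
probes = {name: build_probe(model, "ExampleBrand") for name, model in ENGINES.items()}
```

Because every vendor behind the endpoint accepts the same request shape, adding a newly launched engine is mostly a one-line change to the model map.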

On the roadmap

| Engine              | Vendor     | Target window | Notes                                     |
|---------------------|------------|---------------|-------------------------------------------|
| DeepSeek V3         | DeepSeek   | Q3 2026       | Subject to API availability + customer demand |
| Qwen 3              | Alibaba    | Q3 2026       | Subject to API availability + customer demand |
| Cohere Command R+   | Cohere     | Q3 2026       | Subject to API availability + customer demand |
| Gemini Pro (direct) | Google     | Q3 2026       | Subject to API availability + customer demand |
| Naver HyperCLOVA X  | Naver (KR) | Q4 2026       | Subject to API availability + customer demand |
| Yandex YaGPT        | Yandex (RU)| Q4 2026       | Subject to API availability + customer demand |
| AI21 Jamba          | AI21 Labs  | Q1 2027       | Subject to API availability + customer demand |

The 30-day commitment

When a new generally available AI assistant launches with a public API, we commit to adding it to coverage within 30 days. This applies to any model that meets all three criteria:

  • Public API access available to third parties (no allowlist)
  • Generally available, not in closed beta
  • Used by ≥1 customer-facing product as a search/answer assistant
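The three criteria above are mechanical enough to encode as a simple check. A hypothetical sketch (the `Candidate` fields and `qualifies` function are our naming, not Aeoniti's internal tooling):

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """A candidate AI assistant evaluated against the 30-day commitment."""
    public_api: bool               # third-party API access, no allowlist
    generally_available: bool      # GA, not in closed beta
    customer_facing_products: int  # products using it as a search/answer assistant


def qualifies(c: Candidate) -> bool:
    """True only when all three coverage criteria are met."""
    return c.public_api and c.generally_available and c.customer_facing_products >= 1


qualifies(Candidate(public_api=True, generally_available=True, customer_facing_products=2))
qualifies(Candidate(public_api=True, generally_available=False, customer_facing_products=3))
```

The first call returns True (all three criteria met); the second returns False, since a closed beta fails the GA test regardless of adoption.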

Want an engine covered? Email [email protected] with a link to its model card.

Why coverage is the moat

Every new AI engine that ships strengthens Aeoniti's moat without us doing anything. OpenAI will never benchmark itself against Claude; Anthropic will never benchmark itself against GPT; Google will never benchmark itself against Perplexity. The third-party neutral position is the only one that gets to compare them — that's our position by structural necessity.

As the AI landscape fragments — and it will, dramatically — the value of a single observatory that watches every assistant grows. Today we cover 7. By Q4 2026 we'll cover 12+. By Q3 2027, we expect to cover every AI assistant that materially shapes brand perception, including regional models (Naver in Korea, Yandex in Russia, Baidu in China). Single-vendor dashboards can't catch up — they're structurally locked into one model family.