ora

Layer 1 - Discovery

AEO / GEO scoring, from a public URL.

Are AI answer engines finding, citing, and recommending you? Discovery is the layer where ora measures Answer Engine Optimization and Generative Engine Optimization - category share of voice, brand-name search accuracy, citation quality, and training-corpus footprint. Every check is computed from your public web surface alone, with no login and no tracker.

What Discovery measures

Discovery carries 20 of the 100 points in the overall ora score. Every check is computed from a public URL - no authentication, no self-reporting, no panel data.
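As a rough sketch of how a layer-weighted overall score like this combines: only the Discovery weight (20 of 100) is stated here, so the other four layer weights below are placeholders, not ora's published values.

```python
# Hedged sketch of a layer-weighted 0-100 overall score.
# Only the Discovery weight (20/100) is documented; the other
# layer weights are illustrative placeholders.

LAYER_WEIGHTS = {
    "discovery": 20,          # documented: 20 of 100
    "identity": 20,           # placeholder
    "auth_access": 20,        # placeholder
    "agent_integration": 20,  # placeholder
    "user_experience": 20,    # placeholder
}

def overall_score(layer_scores: dict[str, float]) -> float:
    """Combine per-layer scores (each 0-100) into a 0-100 overall score."""
    assert sum(LAYER_WEIGHTS.values()) == 100
    return sum(
        LAYER_WEIGHTS[layer] * layer_scores.get(layer, 0.0) / 100
        for layer in LAYER_WEIGHTS
    )
```

Under this sketch, a domain scoring 100 on Discovery and 0 everywhere else would land at 20 overall.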

| Check | Type | Points | Description |
|---|---|---|---|
| Category share of voice | AEO | 6 | Run a category prompt across multiple AI search engines in parallel; rank where the product appears against competitors and report a recommended% score. |
| Developer resource discoverability | AEO | 6 | Brand-name search returns the product's developer resources (API docs, OpenAPI spec, dev portal). The cleanest single AEO signal. |
| Brand name discoverability | AEO | 3 | Clean brand-name search (no developer keywords): does the product's domain surface in the top results? The baseline for any implicit AI discovery. |
| Wikipedia / Wikidata entity presence | GEO | 4 | Verified Wikipedia article plus a Wikidata entity with a P856 back-reference to the domain. Wikipedia is the single largest source of citations in AI answers. |
| Listed in MCP registries | | 1 | Smithery, mcp.so, Glama, PulseMCP. |
| NPM/PyPI SDK package | | 1 | Published SDK or CLI package matching the domain or product name. |
| Agent platform configs | | 1 | Published agent rules/configs (.claude/, .cursor/, .windsurf/) on GitHub. |
| Listed on skills.sh | | 1 | Official skills published on the skills.sh agent skills directory. |
| Semantic search indexability | GEO | 2 | Heading hierarchy, paragraph density, and content-to-markup ratio for the vector embeddings used by retrieval-augmented answers. |
| Training corpus footprint | GEO | 2 | Brand mentions across Reddit, YouTube, Wikipedia, and third-party reviews: whether the brand has enough open-web presence to land in LLM training data. |
| Competitive positioning clarity | AEO | 2 | Differentiators, comparison pages, and value-prop language agents would cite when recommending you over alternatives. |
| Citation quality vs. mention | AEO | 2 | When the brand is found, is it cited as a clickable source or merely name-dropped? Measures RAG extractability. |
| Fresh-encounter recall | AEO | 2 | Can a clean session (no connector, no API key) describe the product accurately? |
| Knowledge cutoff coverage | GEO | 2 | Ask frontier models about the brand with no retrieval, no tools, and no live web: can they describe the product from training memory alone? A direct test of training-corpus residency. |
| Third-party citation corpus | GEO | 2 | Sample external content on Reddit, GitHub, Stack Overflow, technical blogs, and the tech press; rate the depth and diversity of the corpus that future training cuts will absorb. |

How ora is different

The enterprise answer-engine analytics category - tools that track brand visibility inside ChatGPT, Claude, Perplexity, and Gemini - has emerged in the last eighteen months. ora covers the same AEO/GEO measurement surface and four additional layers, with a few deliberate differences.

Public URL only

No login. No tracker. No browser-extension panels. No CDN log integration. Every check is reproducible from the open web, which is exactly the surface answer engines see.

Every domain in our index, scored automatically

Enterprise answer-engine analytics tools price by tracked-prompt count and gate the interesting features (full engine coverage, prompt-volume datasets, agent analytics) behind custom plans. ora gives every domain a public AEO/GEO score and a public history.

AEO/GEO is one of five layers, not the whole product

Visibility tells you whether agents find you. ora also scores whether agents can understand you (Identity), authenticate (Auth & Access), integrate (Agent Integration), and complete a task with you (User Experience). Visibility alone is a vanity metric if the auth flow blocks the agent at minute two.

Open methodology

Every check is documented, every weight is in the source, and the scoring formula is in the repo. No black-box visibility score: you can see why your number is what it is and which changes will move it.

How it relates to Identity

Discovery answers "can the answer engine reach you?" Identity answers "once it does, can it understand you?" The two layers are coupled - a great llms.txt with a hostile robots policy still loses Discovery, and great Discovery with no JSON-LD still loses Identity. ora ranks them separately so you can fix the stage that's actually limiting you.
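The robots-policy side of that coupling can be spot-checked from the public surface alone. This sketch uses Python's stdlib robots.txt parser; the crawler names are common AI user agents chosen for illustration, not ora's check list.

```python
# Hedged sketch: does a robots.txt body let common AI crawlers fetch "/"?
# Bot names are illustrative assumptions, not ora's actual check list.
import urllib.robotparser

def crawlers_allowed(robots_txt: str,
                     bots=("GPTBot", "ClaudeBot")) -> dict[str, bool]:
    """Parse a robots.txt body and report per-crawler access to the root."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, "/") for bot in bots}

hostile = "User-agent: GPTBot\nDisallow: /\n"
crawlers_allowed(hostile)  # {'GPTBot': False, 'ClaudeBot': True}
```

A domain failing this kind of check loses Discovery no matter how good its llms.txt is, which is the coupling described above.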

See the full methodology for check-by-check scoring and per-layer weights, or run a scan from the homepage to see your AEO/GEO breakdown.

ora is the open AEO/GEO scoring layer for the agent web. No login. No tracker. Public scores for every domain in our index.
© 2026 era labs. All rights reserved.