ora

open methodology

How ora scores

ora tests your product across five stages, the same journey a real agent takes when it tries to find you, understand what you do, log in, take action, and hand the result back to a person. Your score is what actually happened.



1

Discovery

24 pts

Can agents find and recommend you? (AEO / GEO - answer engine and generative engine optimization)

When an agent needs a product or service to finish a task, it searches and picks from the options it can find and trust.

If agents can't find you when they search for what you do - or have never heard of you - they pick someone else.

ora measures how findable you are when agents look: by name, by category, across the answer engines they rely on, and in what AI models actually recall about you.

2

Identity

22 pts

Do agents understand what you do?

After landing, the agent builds a mental model: what your product is, who it is for, and when to use it.

Weak structure creates hallucinated positioning, wrong recommendations, and low citation confidence.

ora validates Foundation surfaces - llms.txt format, JSON-LD structure, sitemap/robots configuration, metadata consistency, and pricing/docs clarity - so agents can describe your product accurately once they reach it.
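As an illustration of the structured data this check looks for, a minimal JSON-LD block might look like the sketch below. The product name, description, and price are hypothetical placeholders, not a required schema:

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleApp",
  "applicationCategory": "DeveloperApplication",
  "description": "A one-line description an agent can quote verbatim.",
  "offers": {
    "@type": "Offer",
    "price": "29.00",
    "priceCurrency": "USD"
  }
}
```

Embedded in a page as a `script type="application/ld+json"` tag, this gives an agent a machine-readable answer to "what is this product and what does it cost" without scraping prose.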

3

Auth & Access

30 pts

Can agents authenticate and act?

Intent becomes execution only when the agent can authenticate, request scopes, and call usable endpoints.

Broken auth paths and unclear permissions create dead ends at the exact moment user value should happen.

ora tests OpenAPI availability, OAuth support, scoped permissions, developer portal readiness, and simulated agent auth flow completion.
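One of these checks can be sketched in a few lines of Python: given an OpenAPI document (inlined and hypothetical here; a real check would fetch it from the product's `/openapi.json`), verify that an OAuth2 security scheme with explicit scopes is declared. The scope names and URLs are illustrative:

```python
# Hypothetical OpenAPI fragment; a real scan would fetch this from the site.
spec = {
    "openapi": "3.1.0",
    "components": {
        "securitySchemes": {
            "oauth": {
                "type": "oauth2",
                "flows": {
                    "authorizationCode": {
                        "authorizationUrl": "https://example.com/oauth/authorize",
                        "tokenUrl": "https://example.com/oauth/token",
                        "scopes": {
                            "tasks:read": "Read tasks",
                            "tasks:write": "Create tasks",
                        },
                    }
                },
            }
        }
    },
}

def oauth_scopes(spec: dict) -> list[str]:
    """Collect scopes from every oauth2 scheme; an empty list means the check fails."""
    scopes: list[str] = []
    for scheme in spec.get("components", {}).get("securitySchemes", {}).values():
        if scheme.get("type") == "oauth2":
            for flow in scheme.get("flows", {}).values():
                scopes.extend(flow.get("scopes", {}))
    return scopes

print(sorted(oauth_scopes(spec)))  # ['tasks:read', 'tasks:write']
```

A spec that declares no oauth2 scheme, or a scheme with an empty scope map, returns an empty list - the "unclear permissions" dead end the stage above describes.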

4

Agent Integration

20 pts

Have you built the plumbing?

Now the agent tries repeated calls, tool invocation, streaming output, and error recovery under constraints.

Even with auth, brittle responses or missing platform primitives cause silent task failure.

ora inspects MCP server readiness, streaming support, JSON error responses, SDK coverage, and function-calling compatibility across agent platforms.

5

User Experience

4 pts

Can users interact with you through agents?

The final step is human handoff. When a payment, confirmation, or visual decision is needed, the agent must pass control into an experience a person can trust and act on instantly.

If handoff UX is weak, agents fail at critical moments - wrong selections, slow approvals, diluted brand confidence, and lower trust in the full flow.

ora connects to your MCP server and validates MCP Apps support - checking for ui:// resources and _meta.ui tool metadata that let agents render interactive dashboards, forms, and workflows directly in conversation. We also check ChatGPT plugins, OpenAI Apps SDK, and other agent platform integrations.
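A rough sketch of what such a tool declaration might look like, following the `ui://` resource and `_meta.ui` conventions named above - the exact key layout varies by SDK and the names here are illustrative, not a spec:

```json
{
  "name": "show_dashboard",
  "description": "Render an interactive usage dashboard",
  "_meta": {
    "ui": { "resourceUri": "ui://dashboard/main" }
  }
}
```

The idea is that the tool's metadata points at a UI resource the host can render in-conversation, so the handoff lands in an interactive surface rather than a wall of text.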

6

Grade & Badge

0-100

What does the final number mean?

The five stages collapse into a single score and letter grade. It tells you exactly where an agent's journey breaks and how far it gets before giving up.

A low grade means agents route elsewhere.

ora scores each stage by how many checks passed, then combines them into a single number from 0 to 100.

A+  95-100  Leading
A   86-94   Agent-Ready
B   70-85   Competitive
C   48-69   Needs Work
D   28-47   At Risk
F   0-27    Unusable
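The bands above can be collapsed into a small lookup. A sketch in Python - the band edges come straight from the table, the function name is mine:

```python
def grade(score: int) -> str:
    """Map a 0-100 ora score to its letter grade using the published bands."""
    bands = [(95, "A+"), (86, "A"), (70, "B"), (48, "C"), (28, "D"), (0, "F")]
    for floor, letter in bands:
        if score >= floor:
            return letter
    raise ValueError("score must be between 0 and 100")

print(grade(96), grade(75), grade(10))  # A+ B F
```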

what happens next

Agents use the score

After the scan, agents query ora in real time to decide who to work with. They compare candidates, check feedback from other agents, and route to the product that scored highest.

© 2026 era labs. All rights reserved.