AgentGrade

Public help

How to use AgentGrade and how to interpret its manual results.

AgentGrade is a structured manual self-assessment. It does not inspect, crawl, or verify the submitted product automatically.

What AgentGrade does

AgentGrade helps a team review how ready a SaaS product or API appears to be for AI-agent use.

It is a structured manual assessment. It does not certify security, legal compliance, privacy compliance, or operational safety. It also does not automatically evaluate the submitted URL.

The output reflects the manual scores, confidence levels, and evidence notes entered by the user.

How to assess a product

  1. Choose one clear scope: one API surface, one workflow, or one product area.
  2. Gather evidence from public API docs, auth docs, endpoint references, webhook docs, rate-limit docs, sandbox details, MCP references, and hands-on testing.
  3. Score each category manually and record both the score and confidence level.
  4. Keep short evidence notes: what you found, where you found it, what is missing, and what would improve the score.
  5. Re-run the assessment after major product changes.
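The steps above boil down to recording, per category, a score, a confidence level, and an evidence note within one clearly defined scope. A minimal sketch of that record shape, in Python: the field names and the 0-5 score scale are assumptions for illustration, not part of AgentGrade itself.

```python
from dataclasses import dataclass, field

@dataclass
class CategoryScore:
    category: str    # e.g. "API readiness"
    score: int       # assumed 0-5 manual score
    confidence: str  # "low" | "medium" | "high"
    evidence: str    # short note: what was found, where, and what is missing

@dataclass
class Assessment:
    scope: str  # one API surface, workflow, or product area
    scores: list[CategoryScore] = field(default_factory=list)

    def lowest_scoring(self) -> CategoryScore:
        """Surface the weakest category so the biggest blocker is reviewed first."""
        return min(self.scores, key=lambda s: s.score)

# Example: two categories scored for a hypothetical "Billing API v2" scope.
a = Assessment(scope="Billing API v2")
a.scores.append(CategoryScore("API readiness", 4, "high",
                              "Versioned REST docs with request/response schemas"))
a.scores.append(CategoryScore("Sandbox or demo support", 1, "medium",
                              "No test mode found in public docs"))
print(a.lowest_scoring().category)  # prints "Sandbox or demo support"
```

Keeping the evidence note on the record itself makes re-running the assessment after product changes a diff against prior notes rather than a fresh start.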

Category definitions

  • API readiness: stable, documented endpoints, clear request/response formats, and machine-actionable behavior.
  • Auth friction: clear setup, scoped automation-friendly access, and manageable credential flows.
  • Agent-safe actions: separation of read/write capabilities, approvals, previews, idempotency, and auditability where needed.
  • Docs clarity: clear quick starts, examples, error descriptions, and workflow guidance.
  • Webhook and event coverage: useful event model, retry behavior, and verification guidance.
  • Sandbox or demo support: safe test mode or trial path before production access.
  • Rate-limit transparency: documented limits, throttling behavior, and backoff guidance.
  • MCP readiness: a practical path into Model Context Protocol ecosystems with clear tools, inputs, outputs, and auth boundaries.

FAQ

What is AgentGrade?

AgentGrade is a public manual assessment tool for reviewing how ready a SaaS product or API is for AI-agent use.

Does AgentGrade review my product automatically?

No. This public version does not automatically crawl, test, or score the submitted product. It summarizes the manual assessment inputs entered by the user.

What does AgentGrade measure?

AgentGrade looks at practical readiness areas such as API readiness, auth friction, agent-safe actions, docs clarity, webhook and event coverage, sandbox or demo support, rate-limit transparency, and MCP readiness.

Is AgentGrade a security certification?

No. AgentGrade is not a security certification, compliance certification, legal opinion, or formal audit.

Does AgentGrade guarantee an agent will be safe in production?

No. It is an informational self-assessment only. Production use still needs security review, monitoring, access control, and human oversight where appropriate.

Who should use AgentGrade?

Product teams, platform teams, partnerships teams, solution architects, and buyers who want a practical way to document AI-agent readiness.

What should I do after scoring?

Use the result to identify the biggest blockers first. Common next steps include improving docs, reducing auth friction, adding sandbox support, exposing safer write patterns, and documenting rate limits or events more clearly.
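One way to identify the biggest blockers is to rank categories worst-first while discounting low-confidence scores. This is a hypothetical prioritization sketch: the confidence weights and ranking formula below are assumptions, not part of AgentGrade.

```python
# Assumed discount factors for how much to trust each confidence level.
CONF_WEIGHT = {"low": 0.5, "medium": 0.8, "high": 1.0}

def rank_blockers(scores: dict[str, tuple[int, str]]) -> list[str]:
    """Order categories worst-first: a confirmed low score ranks as a
    bigger blocker than an equally low score held with little confidence."""
    def severity(item):
        _, (score, confidence) = item
        return score / CONF_WEIGHT[confidence]  # lower value = bigger blocker
    return [name for name, _ in sorted(scores.items(), key=severity)]

ranked = rank_blockers({
    "Docs clarity": (4, "high"),
    "Sandbox or demo support": (1, "high"),
    "Rate-limit transparency": (2, "medium"),
})
print(ranked[0])  # the confirmed sandbox gap surfaces first
```

A ranking like this is only a starting point; the evidence notes behind each score should drive which fix is actually tackled first.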