AI Context Endpoints: A Simpler Alternative to MCP
Everyone's trying to figure out how to make their APIs "AI-native." The current leading solution is MCP (Model Context Protocol) — a standardised way for LLMs to discover and interact with external tools.
But MCP is complicated. What if there was a simpler way?
The Problem with LLM Integration Today
MCP requires:
- Implementing a protocol
- SDKs and tooling
- Infrastructure overhead
- Another dependency to maintain
And here's the thing: it's solving a problem that doesn't need to be this hard.
A Simpler Proposal: The /context Endpoint
What if every API just exposed a single endpoint that returns an AI-ready explanation of how to use it?
GET /api/context/
That's it. One endpoint. Returns a document (markdown, plain text, whatever) that explains:
- What this API does
- How authentication works
- What endpoints are available
- How to perform common operations
- What the response formats look like
- Any gotchas or important patterns
No protocol. No SDK. No special tooling. Just HTTP.
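To make "just HTTP" concrete, here is a minimal sketch of fetching such an endpoint with Python's requests library; the base URL is a placeholder assumption, not a real service:

```python
import requests

# Hypothetical base URL; substitute your API's actual host
BASE_URL = "https://api.example.com"

# One plain HTTP GET returns the AI-ready documentation as text
response = requests.get(f"{BASE_URL}/api/context/", timeout=10)
response.raise_for_status()
print(response.text)  # markdown or plain text, ready to hand to an LLM
```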
How It Works
When an LLM needs to interact with your API, the first instruction is simple:
"Start by calling GET /context to understand how this API works."
The LLM reads the context, understands the API, and proceeds to make real calls. Self-documenting, always up-to-date, works with any LLM that can make HTTP requests.
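A rough sketch of that flow is below. The `ask_llm` helper is a stand-in for whichever LLM client you actually use, and the host and example task are assumptions made for illustration:

```python
import requests

API_BASE = "https://api.example.com"  # hypothetical host


def ask_llm(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for whichever LLM client you use (hosted API, local model, ...)."""
    return "(model response goes here)"  # replace with a real completion call


# Step 1: fetch the context document over plain HTTP
context_doc = requests.get(f"{API_BASE}/api/context/", timeout=10).text

# Step 2: ground the model in the context, then hand it the actual task
answer = ask_llm(
    system_prompt=f"You can call this API. Here is how it works:\n\n{context_doc}",
    user_prompt="List my current submissions and summarise their status.",
)
print(answer)
```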
Example: NetOrca Consumer Context
Here's what a context endpoint might return for an infrastructure orchestration platform:
# NetOrca Consumer Context
You are interacting with NetOrca, a declarative infrastructure orchestration platform.
## Authentication
Your API key is scoped to specific services.
Use header: `Authorization: Api-Key <key>`
## Check Current State
`GET /orcabase/consumer/submissions/` - List your current submissions
## Available Services
`GET /orcabase/consumer/services/` - See services you can request
`GET /orcabase/consumer/services/{id}/schema/` - Get the schema for a service
## Making Requests
Submissions are declarative. Always send the full desired state.
`POST /orcabase/consumer/submissions/` with your complete YAML/JSON declaration.
Changes are detected automatically. Append new items or modify existing ones.
## After Submission
- Change instances are created for each detected change
- Changes go through approval workflow (if configured)
- Approved changes are deployed automatically
## Tips
- Always GET your current submission before modifying
- Send the complete file, not patches
- Check change instance status to track progress
That's everything an LLM needs to start working with the API. No special integration required.
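For illustration, this is the kind of call sequence a client (LLM-driven or not) could derive from that context alone. The endpoints and auth header come from the example above; the host, API key, and payload shape are placeholder assumptions:

```python
import requests

NETORCA_URL = "https://netorca.example.com"  # placeholder host
HEADERS = {"Authorization": "Api-Key <key>"}  # auth scheme from the context above

# 1. Read the current submission before modifying anything (per the tips)
current = requests.get(
    f"{NETORCA_URL}/orcabase/consumer/submissions/", headers=HEADERS, timeout=10
).json()

# 2. Build the complete desired state: a full declaration, not a patch
desired_state = {
    "services": {
        "example_service": [  # hypothetical service and item names
            {"name": "item-1", "size": "small"},
        ]
    }
}

# 3. Submit the full declaration; changes are detected against current state
resp = requests.post(
    f"{NETORCA_URL}/orcabase/consumer/submissions/",
    headers=HEADERS,
    json=desired_state,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # inspect the created change instances
```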
Why This Is Better Than MCP
| Aspect | MCP | /context Endpoint |
|---|---|---|
| Implementation effort | Days/weeks | Hours |
| Dependencies | Protocol, SDK, runtime | None |
| Works with any LLM | Requires MCP support | Yes, just HTTP |
| Maintenance | Protocol updates, versioning | Update your docs |
| Self-documenting | No | Yes |
| Debugging | Protocol-level complexity | Just read the response |
MCP tries to create a universal standard for tool interaction. But LLMs are already great at understanding documentation and making HTTP calls. We don't need a protocol layer — we need better documentation, served in a predictable location.
The Proposed Standard
Every API that wants to be LLM-friendly should:
- Expose `GET /context` or `GET /api/context`: a single endpoint returning AI-ready documentation (a minimal sketch follows this list)
- Return plain text or markdown: no special format, just readable documentation
- Include the essentials:
  - What the API does (one paragraph)
  - Authentication method
  - Key endpoints and what they do
  - Common workflows with examples
  - Important patterns or gotchas
- Keep it current: generate dynamically if possible, or update it alongside your API
- Scope it appropriately: if you have different API consumers (admin vs user), serve different context endpoints
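A static version of such an endpoint can be almost trivial. Here is a minimal sketch as a plain Django view; the framework choice and the wording of the document are assumptions, not a prescribed implementation:

```python
from django.http import HttpResponse

CONTEXT_DOC = """\
# Example API Context

This API manages widgets for your organisation.

## Authentication
Send `Authorization: Api-Key <key>` with every request.

## Key Endpoints
- `GET /widgets/` - list widgets
- `POST /widgets/` - create a widget (send the full desired object)

## Gotchas
- All timestamps are UTC.
"""


def context(request):
    """Serve AI-ready documentation at GET /context."""
    return HttpResponse(CONTEXT_DOC, content_type="text/markdown")

# urls.py (hypothetical): path("context", views.context)
```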
Dynamic Context Generation
The real power comes when you generate context dynamically:
from django.http import HttpResponse
from rest_framework.decorators import api_view


@api_view(['GET'])
def context(request):
    """Return AI-ready documentation scoped to the requesting user."""
    user = request.user
    accessible_services = get_user_services(user)

    context = f"""
# API Context for {user.organization}

## Your Available Services
{format_services(accessible_services)}

## Your Permissions
{format_permissions(user)}

## Recent Activity
{format_recent(user)}
"""
    # Plain markdown keeps the response readable for any LLM client
    return HttpResponse(context, content_type='text/markdown')
Now the context is personalised. The LLM knows exactly what this specific user can do, not just what the API supports in general.
Adoption Path
This doesn't require industry consensus or standards bodies. Any API can add a /context endpoint today:
- Write documentation for your API as if explaining it to a smart colleague
- Serve it at `GET /context`
- Tell users: "To use our API with an LLM, first call GET /context"
That's it. If enough APIs do this, it becomes a de facto standard. No committee required.
The Future of API Integration
MCP and similar protocols assume LLMs need structured, machine-readable tool definitions. But modern LLMs are remarkably good at understanding natural language documentation and translating that into correct API calls.
The /context pattern leans into this strength. Instead of building complex protocol layers, we give LLMs what they're good at processing: clear, well-written documentation.
It's not as "elegant" as a formal protocol. But it's simpler, more flexible, and works today with every LLM and every API.
Sometimes the best standard is the one that requires no standardisation at all.
Try It
If you maintain an API, try adding a /context endpoint this week. See how it changes the way LLMs interact with your service.
If you're using LLMs to interact with APIs, ask: "Does this API have a /context endpoint?" If not, suggest they add one.
Let's make APIs AI-native the simple way.