Agents are users now and this website is ready
For years, documentation teams optimized for one audience: humans with browsers. That model worked. It still works. But a second audience now sits right next to every developer: AI assistants that read, summarize, compare, and generate implementation plans in real time.
That workflow changes the bar.
The old approach forced assistants to parse complex HTML, infer structure, strip noise, and burn context on formatting artifacts before getting to the meaning. That context waste creates the exact experience developers hate: vague answers, missing citations, and false confidence.
Context bloat is not a cosmetic issue. Context bloat is product friction.
An AI-ready site needs a clean read path, a clean discovery path, and a clean retrieval path. Not one of the three. All three.
What we shipped on developers.contentstack.com
We built developers.contentstack.com with one explicit goal: help agents read this website and be happy.
That goal produced a concrete architecture:
Markdown views for core content. Long-form resources are available in markdown, including .md routes and markdown content negotiation.
/agents.md for autonomous behavior guidance. Agents get explicit policy, scope, and safe usage expectations.
/llms.txt for LLM-friendly discovery. Models can find structure quickly without playing guesswork games.
/sitemap.md for content map readability. Humans and models can both inspect the catalog in a compact format.
/api/mcp for structured tool use. Agents can call purpose-built tools instead of scraping pages.
The MCP surface is where this setup starts to feel powerful. Current tools include content_search, content_get, people_list, and people_get_profile. Those tools do what agents need most: discover relevant material, fetch full markdown content, and connect people to expertise. Fast.
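Under MCP's Streamable HTTP transport, a tool call is just a JSON-RPC 2.0 POST. Here is a minimal sketch of the request body an MCP client sends for a content_search call; the argument names come from the walkthrough later in this post, and the helper function is ours, not part of any SDK:

```python
import json

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 'tools/call' request, the message shape MCP
    clients use to invoke a server-side tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Arguments mirror the content_search example used later in this post.
payload = build_tool_call("content_search", {"query": "next.js", "limit": 5})
print(json.dumps(payload, indent=2))
```

Your MCP client builds and sends this envelope for you; the sketch is only to show how little ceremony sits between an agent and the tool.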
Why this feels different for agents
A normal docs portal answers "Can a human read this page?"
An AI-ready docs portal answers "Can a model reliably use this knowledge inside a workflow?"
That second question changes design choices in practical ways:
Deterministic retrieval. Tool schemas and typed fields beat heuristic scraping every time.
Lower token waste. Markdown and structured payloads reduce unnecessary context consumption.
Higher trust. Clear source URLs and stable formats improve citation quality and reduce hallucinated glue text.
Composability. One retrieval surface can power CLI tools, IDE assistants, support helpers, and content recommendation engines.
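"Typed fields beat heuristic scraping" is concrete enough to write down. Here is an illustrative shape for one content_search hit, using only the fields this post describes (title, type, url, markdown_url); treat it as a sketch of the contract, not the server's documented schema:

```python
from typing import TypedDict

class SearchResult(TypedDict):
    """Illustrative shape of one content_search hit, based on the fields
    named in this post. Not the server's authoritative schema."""
    title: str
    type: str          # e.g. "guide" or "blog"
    url: str           # canonical page path, usable for citations
    markdown_url: str  # markdown view of the same page

# A hypothetical hit an agent could consume without any HTML parsing.
hit: SearchResult = {
    "title": "Getting started with Next.js",
    "type": "guide",
    "url": "/guides/getting-started-with-nextjs",
    "markdown_url": "/guides/getting-started-with-nextjs.md",
}
```

A record like this is what makes retrieval deterministic: the agent reads four fields instead of guessing at a DOM.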
This architecture is not theory. This architecture is implementation-ready.
How to use it: a practical walkthrough
Here is how to connect and start building against the endpoint.
Connect your MCP client. The endpoint is at https://developers.contentstack.com/api/mcp (Streamable HTTP) with an SSE fallback at https://developers.contentstack.com/api/sse. No authentication required.
The setup looks slightly different depending on your client. Here are the most common ones.
Claude Desktop: add this to your claude_desktop_config.json:

```json
{
  "mcpServers": {
    "contentstack-developers": {
      "type": "http",
      "url": "https://developers.contentstack.com/api/mcp"
    }
  }
}
```

Cursor: add this to your .cursor/mcp.json at the project or global level:
```json
{
  "mcpServers": {
    "contentstack-developers": {
      "type": "http",
      "url": "https://developers.contentstack.com/api/mcp"
    }
  }
}
```

If your client only supports SSE, swap the URL for the fallback endpoint: https://developers.contentstack.com/api/sse. Everything else stays the same.
Confirm tool availability. Once connected, list tools and verify you see all four: content_search, content_get, people_list, and people_get_profile.
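The "list tools" step is itself a JSON-RPC call (`tools/list`), and the check is a simple set comparison. A small sketch, with a helper name of our own invention:

```python
# The JSON-RPC request an MCP client sends to enumerate server tools.
list_tools_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}

# The four tools this post says the server exposes today.
EXPECTED = {"content_search", "content_get", "people_list", "people_get_profile"}

def all_tools_present(response_tools: list[dict]) -> bool:
    """Return True if every expected tool name appears in a tools/list
    response. (Helper name is ours, for illustration.)"""
    return EXPECTED <= {t["name"] for t in response_tools}
```

If `all_tools_present` comes back False, fix the connection before writing any workflow logic.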
Run your first content_search call. Start broad. A query like { "query": "next.js", "types": ["guide", "blog"], "limit": 5 } returns results with title, type, url, markdown_url, and facet data you can use to refine follow-up queries. If you get no useful results, remove filters and broaden your terms first.
Fetch full content with content_get. Take a result URL and pass it as a path, not a full absolute URL. For example, { "url": "/guides/getting-started-with-nextjs" } returns the full markdown body plus metadata including description, subjects, and authors.
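Since passing an absolute URL instead of a path is the most common first-attempt mistake, it is worth normalizing defensively. A tiny helper (our own, not part of the MCP server) that accepts either form:

```python
from urllib.parse import urlparse

def to_content_path(url: str) -> str:
    """content_get expects a path like /guides/getting-started-with-nextjs,
    not a full absolute URL. Accept either and return the path."""
    parsed = urlparse(url)
    # Absolute URLs have a scheme; bare paths do not.
    return parsed.path if parsed.scheme else url
```

Run every search-result URL through a normalizer like this before calling content_get and that entire failure class disappears.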
Use facets to sharpen your next search. Every content_search response includes facet data broken down by content type, subject, technology, and author. That data is your navigation signal for follow-up calls, not decoration.
A few things cause most failed first attempts: passing full URLs instead of paths to content_get, starting with filters that are too strict, skipping content_search and trying to guess content paths, and ignoring facet data. Avoid those and you will move fast.
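The facet-driven refinement loop can be sketched in a few lines. Note the facet payload shape used here ({dimension: [{"value", "count"}]}) is an assumption for illustration, not the server's documented schema:

```python
def refine_query(query: str, facets: dict[str, list[dict]]) -> dict:
    """Use facet counts from a previous content_search response to narrow
    the next call: pick the most common subject facet as a filter.
    Facet shape is assumed, not documented."""
    refined = {"query": query}
    subjects = facets.get("subject", [])
    if subjects:
        top = max(subjects, key=lambda f: f["count"])
        refined["subjects"] = [top["value"]]
    return refined
```

The pattern generalizes: broad query first, then let the returned facets, not guesswork, decide the filters for the second call.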
Six things worth building
One endpoint is enough to power a lot of useful products. Here are six that are both practical and genuinely fun to build.
Build-my-stack agent
Input stack, goals, and experience level. Run content_search and content_get to generate a personalized seven-day learning path — one guide, one blog, one livestream, and one kickstart per day with rationale for the sequence.
Explain-it-tonight CLI
A terminal command that answers "How do I do X with Contentstack?" in plain language with two source links. As fast as searching chat history, with much higher signal.
Author radar
Use people_list and people_get_profile to map topics to people. Let users discover the right voice quickly, then jump straight into that author's guides and talks.
Onboarding quiz generator
Pull topics from search facets, generate questions from markdown sections, and turn onboarding content into trivia cards. Makes onboarding less passive.
What-to-read-next recommender
Use facets from content_search to build a recommendation rail driven by content type, subjects, technology, and authors. Think "because you read this guide, watch this livestream."
Workshop-in-a-box generator
Input: "Create a 90-minute workshop on personalization and composable search." Output: agenda, pre-read list, demo links, and follow-up resources, all assembled from live content.
One endpoint. Many products.
If you want to stay API-first
Not every team needs MCP on day one. API-first teams can still get substantial value with the same AI-ready mindset: use /sitemap.md for lightweight discovery, markdown routes for clean ingestion into internal tools, typeahead search APIs for low-latency relevance in support assistants, and canonical URLs in every answer artifact to keep trust high.
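For API-first ingestion, the two markdown paths this post mentions look like this in practice. The exact Accept media type the server honors for content negotiation is an assumption here; the .md route is the simpler, more explicit option:

```python
def markdown_request(path: str) -> dict:
    """Sketch the two ways described in this post to fetch markdown for a
    page: an explicit .md route, or content negotiation on the canonical
    URL. The 'text/markdown' Accept value is an assumption."""
    base = "https://developers.contentstack.com"
    return {
        "md_route": f"{base}{path}.md",
        "negotiated": {
            "url": f"{base}{path}",
            "headers": {"Accept": "text/markdown"},
        },
    }
```

Either way, the ingestion side of an internal tool gets clean markdown instead of a DOM to scrape.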
The strategic pattern stays the same regardless of approach: reduce ambiguity, reduce noise, and give models predictable data contracts.
Where to go next
Choose one use case from the list above and keep the first version small. Use content_search as the first call in your flow. Add content_get only for top matches to control context size. Include citations in every output using the returned canonical URLs. Expand to people-based workflows after your first content workflow works.
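The steps above reduce to a small pipeline: search first, fetch only the top matches, cite canonical URLs in everything you emit. A minimal sketch over already-retrieved results (fetching the markdown bodies via content_get is left out to keep it self-contained):

```python
def answer_with_citations(question: str, results: list[dict], top_n: int = 2) -> str:
    """Assemble an answer skeleton from search results: keep only the top
    matches to control context size, and cite each canonical URL.
    Results use the title/url fields described earlier in this post."""
    picked = results[:top_n]  # only top matches go on to content_get
    sources = "\n".join(f"- {r['title']}: {r['url']}" for r in picked)
    return f"Q: {question}\nSources:\n{sources}"
```

Every downstream product in the list above is some variation of this loop with a different presentation layer.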
AI-ready documentation is not a badge. AI-ready documentation is a product decision.
At developers.contentstack.com, we are still building. That is exactly what makes this moment exciting. The foundation is in place, the interfaces are practical, and the use cases already map directly to developer outcomes.
Developers should not have to choose between human-readable docs and agent-usable docs. Great developer platforms now need both, by default.
Make your docs readable by humans, usable by agents, and unforgettable for builders.
Frequently asked questions
Do I need an API key or account to use the MCP endpoint?
No. The endpoint at https://developers.contentstack.com/api/mcp is open. No authentication, no signup required.
What's the difference between the Streamable HTTP and SSE endpoints?
Streamable HTTP (/api/mcp) is the current standard and what most modern MCP clients support. SSE (/api/sse) is the fallback for clients that haven't updated to Streamable HTTP yet. If your client connects fine on the main endpoint, you don't need to think about SSE.
Which tool should I start with?
Start with content_search. It's the entry point for almost every useful workflow, and it returns titles, URLs, and facet data you can use to refine follow-up calls. Don't guess content paths and jump straight to content_get.
Can I use this outside of Claude or Cursor? I build with the OpenAI API.
Yes. The OpenAI Responses API supports hosted MCP tools directly. Pass the server URL in your request body and the tools are available in that call without any extra setup.
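Here is a sketch of what that request body looks like with the Responses API's hosted MCP tool. Field names reflect the hosted MCP tool type as of this writing; check the current OpenAI API reference before relying on them, and note the model name is a placeholder:

```python
# Request body for the OpenAI Responses API with a hosted MCP tool attached.
# Field names are per the hosted MCP tool spec as of this writing; verify
# against the current API reference. Model name is a placeholder.
request_body = {
    "model": "gpt-4.1",
    "tools": [
        {
            "type": "mcp",
            "server_label": "contentstack-developers",
            "server_url": "https://developers.contentstack.com/api/mcp",
            "require_approval": "never",  # endpoint is open, no auth needed
        }
    ],
    "input": "How do I get started with Next.js on Contentstack?",
}
```

Pass that body to the Responses endpoint and the model can call content_search and content_get on its own during the turn.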
The search results feel noisy. What should I do?
Two things. First, check the facet data in the response; it tells you exactly which subjects, content types, and technologies are available to filter on. Second, use content_get only on your top one or two results rather than fetching everything, which keeps context tight.
Is this just for Contentstack-specific questions or can I use it for general frontend/Next.js topics too?
The content is Contentstack-focused, but a lot of it covers adjacent territory: Next.js, composable architecture, headless patterns, and TypeScript SDKs. If your stack intersects with any of those, content_search will surface relevant material.