Readable by Everything
A site that only humans can read is half-finished. That’s the premise behind the last forty-eight hours of work — making every property I run legible to the things that aren’t people.
Open Graph (OG) images so the link preview isn't blank. JSON-LD so a crawler doesn't have to guess who I am; it gets structured data: name, role, employer history, same-as links, the whole graph. llms.txt so an agent can read the site in one pass without parsing HTML. These aren't optimizations. They're table stakes for a web where half the readers don't have screens.
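For concreteness, here's roughly what that layer looks like. Everything below is a sketch: the names, titles, and URLs are placeholders, not pulled from my actual pages. The Open Graph tags feed the link preview; the JSON-LD script hands a crawler the Person graph directly.

```html
<!-- Open Graph tags: what a link preview reads -->
<meta property="og:title" content="Readable by Everything" />
<meta property="og:description" content="Making every property legible to things that aren't people." />
<meta property="og:image" content="https://example.com/og/readable-by-everything.png" />

<!-- JSON-LD: name, role, employer history, same-as links (placeholder values) -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Founder",
  "worksFor": { "@type": "Organization", "name": "Example Co" },
  "alumniOf": { "@type": "Organization", "name": "Previous Co" },
  "sameAs": [
    "https://github.com/janedoe",
    "https://www.linkedin.com/in/janedoe"
  ]
}
</script>
```

And llms.txt is just markdown-flavored plain text at the site root, so an agent gets the whole map in one request:

```markdown
# Jane Doe
> Personal site: writing, projects, how to reach me.

## Pages
- [About](https://example.com/about): who I am, work history
- [Projects](https://example.com/projects): what I'm building now
```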
AgentNDX crossed 137 MCP (Model Context Protocol) servers this week. Each one gets the same treatment: structured metadata, verified status, machine-readable descriptions. The directory isn't just a list. It's an API surface for discovery. When an agent asks "what MCP servers handle payments," the answer comes back structured, not scraped.
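A sketch of what that exchange could look like. The endpoint, query parameter, and server entry here are hypothetical, not AgentNDX's actual API; the point is the shape of the answer: a typed record, not a scraped page.

```
GET /api/servers?capability=payments        (hypothetical endpoint)

{
  "query": "payments",
  "count": 1,
  "results": [
    {
      "name": "acme-payments-mcp",
      "verified": true,
      "capabilities": ["payments", "refunds", "invoices"],
      "description": "MCP server exposing payment operations as tools"
    }
  ]
}
```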
The pattern is the same at every scale. A personal site, a directory, a company page — the question is always: can something that isn’t a browser understand what this is? If the answer is no, you have a brochure. If the answer is yes, you have a node in a network.
I’m building nodes.