Design systems are fundamentally about codifying design decisions and contracts. When to use what, how things compose, what to avoid.
The documentation serves designers and developers who can parse prose, infer context, and remember patterns. They’ll read “avoid multiple primary buttons” and understand the underlying principle about hierarchy.
AI needs the same information, structured differently.
Component metadata isn’t new documentation. It’s the same documentation, translated into a machine-readable format.
Take this rule we tell every designer: “Don’t use multiple primary buttons in the same section. It creates visual hierarchy confusion.”
In our Figma file, that’s a red annotation on a “Don’t” example. In our Storybook, it’s a paragraph under “Best Practices.” In our code reviews, it’s a comment: “Let’s use secondary here.”
In component metadata, it looks like this:
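A sketch of what that rule could look like as metadata (the field names here — scenario, reason, instead — are illustrative, not a fixed standard):

```typescript
// Illustrative shape for one anti-pattern entry in Button metadata.
const antiPattern = {
  scenario: "Multiple primary buttons in the same section",
  reason: "Creates visual hierarchy confusion",
  instead: "Use one primary button; use secondary or ghost for other actions",
};
```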
Same rule. Different format. Now a machine can check it before generating code.
Making design decisions queryable
We’ll use a Button as an example. Buttons have variants, states, composition rules, accessibility requirements. They’re deceptively complex.
When to use it:
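A sketch of a usage section for the Button (the entries are illustrative examples, not our actual file):

```typescript
// Illustrative usage slice: concrete scenarios, not abstract advice.
const usage = {
  useCases: [
    "Submitting a form",
    "Triggering an action (save, delete, confirm)",
    "Opening a dialog or starting a flow",
  ],
  commonPatterns: [
    "Form footer: one primary submit, one ghost cancel",
    "Card action: single secondary button",
  ],
};
```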
What not to do:
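Anti-patterns work best as structured entries — scenario, reason, alternative — rather than prose warnings (a sketch; the second entry’s Link component is a hypothetical name):

```typescript
// Illustrative antiPatterns slice.
const antiPatterns = [
  {
    scenario: "Multiple primary buttons in the same section",
    reason: "Creates visual hierarchy confusion",
    instead: "One primary; secondary or ghost for the rest",
  },
  {
    scenario: "Button used for navigation",
    reason: "Navigation is a link's job, semantically and for accessibility",
    instead: "Use a Link component", // hypothetical component name
  },
];
```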
How variants work:
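Each variant carries an explicit purpose, so a machine can pick one instead of guessing (an illustrative sketch):

```typescript
// Illustrative variants slice: every option states when to use it.
const variants = {
  variant: {
    primary: { purpose: "Main call-to-action; at most one per section" },
    secondary: { purpose: "Supporting actions alongside a primary" },
    ghost: { purpose: "Low-emphasis actions (cancel, dismiss)" },
  },
  size: {
    sm: { purpose: "Dense layouts, tables, toolbars" },
    md: { purpose: "Default" },
    lg: { purpose: "Marketing pages, prominent CTAs" },
  },
};
```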
Selection logic for AI:
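The aiHints section spells out what a human would infer from context (a sketch; field names are illustrative):

```typescript
// Illustrative aiHints slice: explicit criteria an agent can match against.
const aiHints = {
  selectionCriteria: {
    primary: "Main action the user should take on the page/section",
    secondary: "Alternative actions next to a primary",
    ghost: "Dismissive or tertiary actions",
  },
  keywords: ["button", "cta", "submit", "action", "click"],
  context: "Interactive atom; pairs with forms, dialogs, and cards",
};
```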
Every piece of this already existed in our design system docs. I just made it queryable.
The translation layer
Design systems are contracts between designers and developers about how UI should work. We’ve always encoded these contracts in prose because that’s what worked for human communication.
When a design system team writes: “Use primary buttons for the main call-to-action. Avoid using multiple primary buttons in the same section as it creates visual hierarchy confusion,” they’re defining:
- A selection criterion (main CTA)
- A composition rule (one primary per section)
- The reasoning (hierarchy confusion)
Traditional docs make you parse that from paragraphs. Metadata makes it explicit:
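The same prose sentence, split into the three parts it defines (illustrative field names):

```typescript
// One prose rule, decomposed into its three machine-checkable parts.
const primaryButtonRule = {
  selectionCriteria: "Main call-to-action of the page or section",
  compositionRule: "At most one primary button per section",
  reasoning: "Multiple primaries create visual hierarchy confusion",
};
```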
Same information. Now a machine can check the rule before it generates three primary buttons.
What this actually looks like
Let me show you the full picture. Here’s a slice of our Button metadata (the complete file is ~260 lines):
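An abridged sketch of what such a file might contain (shape and values are illustrative, heavily trimmed from the nine sections described below):

```typescript
// Button.metadata.ts - abridged, illustrative sketch.
export const metadata = {
  component: {
    name: "Button",
    category: "atoms",
    description: "Interactive element that triggers an action",
    path: "src/components/atoms/Button/Button.tsx",
  },
  usage: {
    useCases: ["Form submission", "Triggering actions", "Opening dialogs"],
    antiPatterns: [
      {
        scenario: "Multiple primary buttons in the same section",
        reason: "Creates visual hierarchy confusion",
        instead: "One primary; secondary/ghost for other actions",
      },
    ],
  },
  variants: {
    variant: {
      primary: { purpose: "Main call-to-action" },
      secondary: { purpose: "Supporting actions" },
      ghost: { purpose: "Low-emphasis actions" },
    },
  },
  aiHints: {
    selectionCriteria: {
      primary: "Main action the user should take on the page/section",
    },
    keywords: ["button", "cta", "submit"],
  },
  behavior: {
    states: ["default", "hover", "active", "disabled", "loading"],
  },
};
```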
The schema has nine major sections. Not all are equally important for AI decision-making.
Critical for AI:
- `usage` - When to use this component, common patterns, anti-patterns
- `aiHints` - Explicit selection criteria and context
- `variants` - What variants exist and their specific purposes
- `composition` - What goes inside, what goes alongside
- `behavior` - States, interactions, responsive considerations
Important for completeness:
- `props` - Full TypeScript prop definitions
- `accessibility` - ARIA, keyboard support, WCAG compliance
- `examples` - Copy-paste code snippets
Metadata about the metadata:
- `component` - Name, category, description, path, timestamps
The first five sections answer the questions AI gets wrong most often:
- “Should I use a button here?” (usage.useCases)
- “Which variant?” (variants.purpose, aiHints.selectionCriteria)
- “What goes inside it?” (composition.slots)
- “What should I NOT do?” (usage.antiPatterns)
- “How does it behave?” (behavior.states, behavior.interactions)
The rest is reference material. Important, but not decision-making logic.
Getting started
Metadata can be JSON, Markdown, TypeScript, or whatever fits your tech stack.
TypeScript (.metadata.ts) - Best for TypeScript/JavaScript projects
- Real code snippets in `examples` and `commonPatterns`
- Type safety for the metadata itself
- Syntax highlighting in code editors
- Can import actual TypeScript types from component files
JSON (.metadata.json) - Language-agnostic
- Universal format, any tool can parse it
- Simple schema validation
- Examples must be strings, not real code
- Good for polyglot codebases
Markdown (.metadata.md) - Human-optimized
- Readable in GitHub, Notion, anywhere
- Easy to write and review
- Harder to query programmatically
- Better for documentation than automation
In my system, TypeScript works best because:
- Code examples are real, executable snippets
- The metadata export can be imported by tools
- The header section enables fast discovery before parsing details
TypeScript metadata example
The TypeScript format has two parts:
1. Header (for discovery):
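A sketch of the header block (values, including the timestamps, are hypothetical):

```typescript
// Illustrative header: cheap to scan, enough to decide relevance.
export const metadata = {
  component: {
    name: "Button",
    category: "atoms",
    type: "interactive",
    description: "Triggers an action when pressed",
    path: "src/components/atoms/Button/Button.tsx",
    created: "2024-01-10", // hypothetical timestamps
    modified: "2024-06-02",
  },
  // ...usage, variants, aiHints, and the other body sections follow
};
```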
AI agents parse the header first. It answers: “What is this? Where is it? What category?” Before diving into usage patterns and selection criteria, they know if this component is even relevant.
2. Body (for intent and usage):
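A sketch of the body sections, which carry the intent and decision-making logic (illustrative shape):

```typescript
// Illustrative body: read only after the header says this component fits.
const body = {
  usage: {
    useCases: ["Form submission", "Triggering actions"],
    antiPatterns: [
      {
        scenario: "Multiple primary buttons in the same section",
        reason: "Creates visual hierarchy confusion",
        instead: "One primary; secondary/ghost for the rest",
      },
    ],
  },
  aiHints: {
    selectionCriteria: {
      primary: "Main action the user should take on the page/section",
    },
    keywords: ["button", "cta", "submit"],
  },
};
```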
This separation matters for performance. Tools can scan headers across all components to find candidates, then read full metadata only for relevant components.
How to generate metadata at scale
Start by auditing your documentation, creating a metadata template, building scripts, and letting AI do the heavy lifting. The process breaks down like this:
Step 1: Audit your existing documentation
Where does your design system knowledge live?
- Storybook and docs
- Figma component descriptions and annotations
- Notion/Confluence pages
- Code review comments (patterns you correct repeatedly)
- Slack conversations (“when should I use X vs Y?”)
Collect URLs, export docs, gather examples. You’re not creating new knowledge. You’re inventorying what exists.
Step 2: Create a metadata template
Define your schema structure as a template file.
Recommended fields:
- `component` - Name, category, description, type, path, timestamps
Strongly recommended (the decision-making logic):
- `usage` - Use cases, common patterns, anti-patterns
- `behavior` - States, interactions, responsive behavior
- `props` - Full TypeScript definitions
- `accessibility` - ARIA roles, keyboard support, WCAG compliance
- `aiHints` - Selection criteria, keywords, context
- `examples` - Copy-paste code snippets
Optional (when relevant):
- `composition` - For containers with slots or nested components
- `variants` - For components with visual/behavioral variants
Document each field with inline comments explaining what goes there. This template becomes your contract.
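A template might look like this, with inline comments documenting each field (a sketch; trim or extend the fields to match your system):

```typescript
// component.metadata.template.ts - illustrative template with inline docs.
export const metadataTemplate = {
  component: {
    name: "",        // Component name as exported from code
    category: "",    // e.g. atoms | molecules | organisms
    description: "", // One-sentence purpose statement
    path: "",        // Path to the component source file
  },
  usage: {
    useCases: [],       // Concrete scenarios where this component fits
    commonPatterns: [], // Recurring combinations seen in the codebase
    antiPatterns: [],   // { scenario, reason, instead } entries
  },
  aiHints: {
    selectionCriteria: {}, // Per-variant: when an agent should pick it
    keywords: [],          // Terms that should surface this component
  },
};
```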
Step 3: Let AI extract and populate
Point an AI agent at your documentation with the template:
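One way to do this is to assemble a prompt from the template and the exported docs (a sketch; the placeholder contents and prompt wording are assumptions):

```typescript
// Sketch: building an extraction prompt for an AI agent.
// templateSource and storybookDocs stand in for real file contents.
const templateSource = `export const metadataTemplate = { /* fields */ };`;
const storybookDocs = `Use primary buttons for the main call-to-action.`;

const prompt = [
  "Here is our component metadata template:",
  templateSource,
  "Here is the existing documentation for Button:",
  storybookDocs,
  "Fill in the template using only information found in the documentation.",
  "Leave any field you cannot support with evidence empty for human review.",
].join("\n\n");
```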
The AI reads prose documentation and outputs structured metadata. You review and refine.
Step 4: Build scripts for batch operations
For technical fields (props, types), write scripts that parse component files:
- Extract TypeScript interfaces → `props` section
- Parse JSX comments → `description` field
- Detect imports → `composition.nestedComponents`
- Read git history → `created` and `modified` timestamps
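For the first item, a minimal pass can be done with a regex before reaching for heavier tooling (a sketch; production scripts should use the TypeScript compiler API or ts-morph rather than regex):

```typescript
// Minimal sketch: pull a props interface out of a component source string.
function extractPropsInterface(source: string): string | null {
  const match = source.match(/export interface \w+Props \{[\s\S]*?\n\}/);
  return match ? match[0] : null;
}

// Stand-in component source for demonstration.
const source = `
export interface ButtonProps {
  variant?: "primary" | "secondary" | "ghost";
  disabled?: boolean;
}
export const Button = () => null;
`;

console.log(extractPropsInterface(source)); // prints the ButtonProps interface
```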
Scripts handle mechanical extraction. Humans add design intent.
Step 5: Iterate and refine
The first component takes time. You’re learning what matters and surfacing patterns.
- Use AI extraction for components with rich Storybook/Figma docs
- Manual translation for complex components requiring nuance
- Scripts for batch operations on technical fields (props, types, paths)
How metadata powers AI workflows
Once component metadata exists, it enables different workflows:
Component selection agents read metadata to choose the right component for a use case. When asked to build a form, they check usage.useCases to find form-related components, check aiHints.selectionCriteria to pick between variants, and reference antiPatterns to avoid common mistakes.
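A selection agent’s first pass can be as simple as keyword overlap against aiHints (a naive sketch; a real agent would also weigh selectionCriteria and context):

```typescript
// Sketch: naive component selection by keyword overlap with aiHints.
// The metadata shape here is illustrative.
interface ComponentMeta {
  name: string;
  aiHints: { keywords: string[] };
}

function selectComponent(
  request: string,
  catalog: ComponentMeta[],
): string | null {
  // Score each component by how many of its keywords appear in the request.
  const words = new Set(request.toLowerCase().split(/\s+/));
  let best: { name: string; score: number } | null = null;
  for (const meta of catalog) {
    const score = meta.aiHints.keywords.filter((k) => words.has(k)).length;
    if (score > 0 && (best === null || score > best.score)) {
      best = { name: meta.name, score };
    }
  }
  return best ? best.name : null;
}

const catalog: ComponentMeta[] = [
  { name: "Button", aiHints: { keywords: ["button", "cta", "submit", "action"] } },
  { name: "Link", aiHints: { keywords: ["link", "navigate", "href"] } },
];

console.log(selectComponent("add a submit action to the form", catalog)); // "Button"
```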
Relationship graphs use metadata to understand component hierarchies. They can trace which atoms are used throughout your app, count component instances recursively, and map dependencies, all using the category and composition metadata you've defined.
Validation tools check generated code against metadata rules. Before suggesting code, they verify it doesn’t violate antiPatterns, respects composition.parentConstraints, and follows accessibility requirements.
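The single-primary rule from earlier becomes a concrete check (a sketch; a real validator would walk the AST rather than match strings, and would scope the check to a section):

```typescript
// Sketch: checking generated JSX against the single-primary rule
// before suggesting it.
function violatesSinglePrimaryRule(jsx: string): boolean {
  const primaries = jsx.match(/variant="primary"/g) ?? [];
  return primaries.length > 1;
}

const generated = `
<section>
  <Button variant="primary">Save</Button>
  <Button variant="primary">Delete</Button>
</section>
`;

console.log(violatesSinglePrimaryRule(generated)); // true: two primaries
```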
Code generation uses metadata as a contract. The examples section provides copy-paste templates. The props section defines the TypeScript interface. The variants section explains which options exist and when to use each one.
It’s not about creating new documentation. It’s about reformatting existing knowledge.
Every component has decisions baked in. Decisions that live in Figma, Storybook, Notion, team memory. Component metadata is how we make those decisions explicit and queryable.
The metadata file next to your component becomes the source of truth that powers all these workflows.
Does this make documentation better?
The structure forces precision:
Anti-patterns require specificity:
- What not to do (scenario)
- Why it’s wrong (reason)
- What to do instead (alternative)
You can’t write “don’t overuse primary buttons.” You have to write: “Multiple primary buttons in the same section create visual hierarchy confusion. Use one primary and secondary/ghost for other actions.”
Selection criteria eliminate vagueness:
Not “use primary for important stuff.” Instead: “Main action user should take on the page/section.”
Examples must be copy-pasteable:
Not screenshots. Actual code that works.
This precision helps junior developers understand patterns, designers onboard faster, and future-you remember why these decisions matter when you return to the codebase months later.
Component metadata makes design decisions explicit, queryable, and version-controlled alongside the components themselves.
Note: Treat this as a reference implementation, not a binary you just run. Every design system is structured differently. Your framework might be Svelte, your atomic design folder structure might be unique. Use this as the foundation, then adjust the scripts and folder paths to match your specific architecture.