Design system documentation as structured metadata

An approach for structured data that AI agents can query to understand when and how to use components correctly.

Cristian Morales

Product Designer

· 8 min read

Design systems are fundamentally about codifying design decisions and contracts. When to use what, how things compose, what to avoid.

The documentation serves designers and developers who can parse prose, infer context, and remember patterns. They’ll read “avoid multiple primary buttons” and understand the underlying principle about hierarchy.

AI needs the same information, structured differently.

Component metadata isn’t new documentation. It’s the same documentation, translated to a machine-readable domain.

Take this rule we tell every designer: “Don’t use multiple primary buttons in the same section. It creates visual hierarchy confusion.”

In our Figma file, that’s a red annotation on a “Don’t” example. In our Storybook, it’s a paragraph under “Best Practices.” In our code reviews, it’s a comment: “Let’s use secondary here.”

In component metadata, it looks like this:

antiPatterns: [
  {
    scenario: "Multiple primary buttons in same section",
    reason: "Creates visual hierarchy confusion",
    alternative: "Use one primary button and secondary/ghost for other actions"
  }
]

Same rule. Different format. Now a machine can check it before generating code.
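As a sketch of what “check it” could mean in practice, here’s a toy lint pass over generated markup. The function names and the regex-based counting are my own illustration, not part of any real tool:

```typescript
// Hypothetical check: flag the anti-pattern before emitting generated markup.
const antiPattern = {
  scenario: "Multiple primary buttons in same section",
  reason: "Creates visual hierarchy confusion",
  alternative: "Use one primary button and secondary/ghost for other actions",
};

function checkSection(markup: string): string[] {
  // Count primary-variant buttons in the generated snippet.
  const primaries = (markup.match(/variant="primary"/g) ?? []).length;
  return primaries > 1
    ? [`${antiPattern.scenario}: ${antiPattern.reason}. ${antiPattern.alternative}`]
    : [];
}
```

A real agent would do this against an AST rather than a string, but the point stands: the rule is now executable, not just readable.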


Making design decisions queryable

Let’s use a Button as an example. Buttons have variants, states, composition rules, and accessibility requirements. They’re deceptively complex.

When to use it:

usage: {
  useCases: [
    "primary-call-to-action",
    "form-submission",
    "navigation-link",
    "secondary-action"
  ]
}

What not to do:

antiPatterns: [
  {
    scenario: "Using button for plain navigation without action",
    reason: "Buttons indicate actions, not navigation",
    alternative: "Use Link component for navigation"
  }
]

How variants work:

variants: {
  variant: {
    options: ["primary", "secondary", "ghost", "danger"],
    default: "primary",
    purpose: {
      primary: "Main call-to-action, high visual prominence",
      secondary: "Alternative or cancel actions",
      ghost: "Minimal visual weight, subtle actions",
      danger: "Destructive actions requiring attention"
    }
  }
}

Selection logic for AI:

aiHints: {
  priority: "high",
  context: "Use for any user action or form submission. Choose variant based on action hierarchy.",
  selectionCriteria: {
    usePrimary: "Main action user should take on the page/section",
    useSecondary: "Alternative actions, cancel buttons",
    useGhost: "Tertiary actions, minimal visual weight",
    useDanger: "Delete, remove, destructive actions"
  }
}

Every piece of this already existed in our design system docs. I just made it queryable.
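To make “queryable” concrete, here’s an illustrative (and deliberately naive) sketch: matching a described action against the `selectionCriteria` text by keyword. A real agent would feed the metadata to an LLM as context rather than string-match, so treat `pickVariant` as a stand-in:

```typescript
// Illustrative only: map an action description to a variant by matching
// keywords from the selectionCriteria prose. Not a production selector.
const selectionCriteria: Record<string, string> = {
  primary: "Main action user should take on the page/section",
  secondary: "Alternative actions, cancel buttons",
  ghost: "Tertiary actions, minimal visual weight",
  danger: "Delete, remove, destructive actions",
};

function pickVariant(actionDescription: string): string {
  const text = actionDescription.toLowerCase();
  for (const [variant, criteria] of Object.entries(selectionCriteria)) {
    // Split criteria prose into keywords; skip short filler words.
    const keywords = criteria.toLowerCase().split(/[,\s/]+/);
    if (keywords.some((k) => k.length > 3 && text.includes(k))) return variant;
  }
  return "primary"; // the metadata's declared default
}
```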

The translation layer

Design systems are contracts between designers and developers about how UI should work. We’ve always encoded these contracts in prose because that’s what worked for human communication.

When a design system team writes: “Use primary buttons for the main call-to-action. Avoid using multiple primary buttons in the same section as it creates visual hierarchy confusion,” they’re defining:

  1. A selection criterion (main CTA)
  2. A composition rule (one primary per section)
  3. The reasoning (hierarchy confusion)

Traditional docs make you parse that from paragraphs. Metadata makes it explicit:

{
  selectionCriteria: {
    usePrimary: "Main action user should take on the page/section"
  },
  antiPatterns: [{
    scenario: "Multiple primary buttons in same section",
    reason: "Creates visual hierarchy confusion",
    alternative: "Use one primary button and secondary/ghost for others"
  }]
}

Same information. Now a machine can check the rule before it generates three primary buttons.

What this actually looks like

Let me show you the full picture. Here’s a slice of our Button metadata (the complete file is ~260 lines):

export const ButtonMetadata = {
  component: {
    name: "Button",
    category: "atoms",
    description: "Interactive button component for actions, links, and form submissions",
    type: "interactive",
    path: "src/components/atoms/Button/Button.astro"
  },

  usage: {
    useCases: [
      "primary-call-to-action",
      "form-submission",
      "navigation-link"
    ],
    commonPatterns: [
      {
        name: "primary-cta",
        description: "Main call-to-action button",
        composition: `<Button variant="primary">Get Started</Button>`
      },
      {
        name: "button-with-icon",
        description: "Button with icon for enhanced communication",
        composition: `<Button variant="primary">
  Submit
  <Icon name="ArrowRight" size={16} />
</Button>`
      }
    ],
    antiPatterns: [
      {
        scenario: "Multiple primary buttons in same section",
        reason: "Creates visual hierarchy confusion",
        alternative: "Use one primary and secondary/ghost for other actions"
      },
      {
        scenario: "Very long text labels",
        reason: "Buttons should be concise and action-oriented",
        alternative: "Use short, clear action verbs (max 2-3 words)"
      }
    ]
  },

  variants: {
    variant: {
      options: ["primary", "secondary", "ghost", "danger"],
      default: "primary",
      purpose: {
        primary: "Main call-to-action, high visual prominence",
        secondary: "Alternative or cancel actions",
        ghost: "Minimal visual weight, subtle actions",
        danger: "Destructive actions requiring attention"
      }
    }
  },

  accessibility: {
    role: "button or link (when href provided)",
    keyboardSupport: "Native browser support - Space/Enter",
    screenReader: "Announces as button or link with text content",
    wcag: "AA",
    notes: [
      "Always provide descriptive text content",
      "For icon-only buttons, add aria-label"
    ]
  },

  aiHints: {
    priority: "high",
    keywords: ["button", "cta", "submit", "action", "click"],
    selectionCriteria: {
      usePrimary: "Main action user should take on page/section",
      useSecondary: "Alternative actions, cancel buttons",
      useGhost: "Tertiary actions, minimal visual weight",
      useDanger: "Delete, remove, destructive actions"
    }
  }
};

The schema has nine major sections. Not all are equally important for AI decision-making.

Critical for AI:

  1. usage - When to use this component, common patterns, anti-patterns
  2. aiHints - Explicit selection criteria and context
  3. variants - What variants exist and their specific purposes
  4. composition - What goes inside, what goes alongside
  5. behavior - States, interactions, responsive considerations

Important for completeness:

  1. props - Full TypeScript prop definitions
  2. accessibility - ARIA, keyboard support, WCAG compliance
  3. examples - Copy-paste code snippets

Metadata about the metadata:

  1. component - Name, category, description, path, timestamps

The first five sections answer the questions AI gets wrong most often:

  • “Should I use a button here?” (usage.useCases)
  • “Which variant?” (variants.purpose, aiHints.selectionCriteria)
  • “What goes inside it?” (composition.slots)
  • “What should I NOT do?” (usage.antiPatterns)
  • “How does it behave?” (behavior.states, behavior.interactions)

The rest is reference material. Important, but not decision-making logic.

Getting started

Metadata can be JSON, Markdown, TypeScript, or whatever fits your tech stack.

TypeScript (.metadata.ts) - Best for TypeScript/JavaScript projects

  • Real code snippets in examples and commonPatterns
  • Type safety for the metadata itself
  • Syntax highlighting in code editors
  • Can import actual TypeScript types from component files

JSON (.metadata.json) - Language-agnostic

  • Universal format, any tool can parse it
  • Simple schema validation
  • Examples must be strings, not real code
  • Good for polyglot codebases

Markdown (.metadata.md) - Human-optimized

  • Readable in GitHub, Notion, anywhere
  • Easy to write and review
  • Harder to query programmatically
  • Better for documentation than automation

In my system, TypeScript works best because:

  1. Code examples are real, executable snippets
  2. The metadata export can be imported by tools
  3. The header section enables fast discovery before parsing details

TypeScript metadata example

The TypeScript format has two parts:

1. Header (for discovery):

export const ButtonMetadata = {
  component: {
    name: "Button",
    category: "atoms",
    description: "Interactive button component...",
    type: "interactive",
    path: "src/components/atoms/Button/Button.astro"
  },
  // ...
}

AI agents parse the header first. It answers: “What is this? Where is it? What category?” Before diving into usage patterns and selection criteria, they know if this component is even relevant.

2. Body (for intent and usage):

  usage: {
    useCases: [...],
    commonPatterns: [...],
    antiPatterns: [...]
  },
  // ... rest of schema

This separation matters for performance. Tools can scan headers across all components to find candidates, then read full metadata only for relevant components.
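A minimal sketch of that two-pass flow, assuming each `*.metadata.ts` module exports an object with the `component` header shown above (the `allMetadata` registry is a stand-in for however you load the files):

```typescript
// Header-first discovery: filter on cheap header fields,
// never touching usage/variants/aiHints until a component qualifies.
interface ComponentHeader {
  name: string;
  category: string;
  description: string;
  type: string;
  path: string;
}

interface ComponentMetadata {
  component: ComponentHeader;
  // usage, variants, aiHints, ... loaded only when needed
}

function findCandidates(
  allMetadata: ComponentMetadata[],
  query: { category?: string; type?: string }
): ComponentHeader[] {
  return allMetadata
    .map((m) => m.component)
    .filter(
      (h) =>
        (!query.category || h.category === query.category) &&
        (!query.type || h.type === query.type)
    );
}
```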

How to generate metadata at scale

Audit your documentation, create a metadata template, build scripts, and let AI do the heavy lifting. The process breaks down like this:

Step 1: Audit your existing documentation

Where does your design system knowledge live?

  • Storybook and docs
  • Figma component descriptions and annotations
  • Notion/Confluence pages
  • Code review comments (patterns you correct repeatedly)
  • Slack conversations (“when should I use X vs Y?”)

Collect URLs, export docs, gather examples. You’re not creating new knowledge. You’re inventorying what exists.

Step 2: Create a metadata template

Define your schema structure as a template file.

Recommended fields:

  • component - Name, category, description, type, path, timestamps

Strongly recommended (the decision-making logic):

  • usage - Use cases, common patterns, anti-patterns
  • behavior - States, interactions, responsive behavior
  • props - Full TypeScript definitions
  • accessibility - ARIA roles, keyboard support, WCAG compliance
  • aiHints - Selection criteria, keywords, context
  • examples - Copy-paste code snippets

Optional (when relevant):

  • composition - For containers with slots or nested components
  • variants - For components with visual/behavioral variants

The skeleton looks like this:

export const ComponentMetadata = {
  component: {
    name: "",
    category: "", // atoms | molecules | organisms
    description: "",
    type: "" // interactive | display | container | input
  },
  usage: {
    useCases: [],
    commonPatterns: [],
    antiPatterns: []
  },
  // ... rest of schema
};

Document each field with inline comments explaining what goes there. This template becomes your contract.
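One way to make the contract enforceable, if you go the TypeScript route, is to declare the schema as a type that every `*.metadata.ts` file must satisfy. The field names follow the template above; the exact type shapes are my assumption, trimmed to two sections for brevity:

```typescript
// A typed contract for metadata files: the compiler rejects any
// component file that drifts from the agreed schema.
interface AntiPattern {
  scenario: string;
  reason: string;
  alternative: string;
}

interface Metadata {
  component: {
    name: string;
    category: "atoms" | "molecules" | "organisms";
    description: string;
    type: "interactive" | "display" | "container" | "input";
  };
  usage: {
    useCases: string[];
    antiPatterns: AntiPattern[];
  };
}

const ButtonMetadata: Metadata = {
  component: {
    name: "Button",
    category: "atoms",
    description: "Interactive button component",
    type: "interactive",
  },
  usage: {
    useCases: ["form-submission"],
    antiPatterns: [],
  },
};
```

A typo like `category: "atom"` now fails at compile time instead of silently confusing an agent later.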

Step 3: Let AI extract and populate

Point an AI agent at your documentation with the template:

"Read [Storybook URL for Button component].
Extract this information and populate the metadata template: - Component name, description, category from the component page - Use cases from 'When to use' section - Anti-patterns from 'Avoid' or 'Don't' sections - Variants and their purposes from the variants table - Props from the API docs - Accessibility notes from a11y section Template: [paste your metadata template] Output complete metadata following this structure."

The AI reads prose documentation and outputs structured metadata. You review and refine.

Step 4: Build scripts for batch operations

For technical fields (props, types), write scripts that parse component files:

  • Extract TypeScript interfaces → props section
  • Parse JSX comments → description field
  • Detect imports → composition.nestedComponents
  • Read git history → created and modified timestamps

Scripts handle mechanical extraction. Humans add design intent.
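A rough sketch of the first bullet, extracting props from a component’s `Props` interface. A real script would use the TypeScript compiler API or ts-morph instead of a regex; this is just the minimal version of the idea, with a hypothetical `source` string standing in for a component file:

```typescript
// Pull prop names, types, and optionality out of `interface Props { ... }`.
// Regex-based for illustration; use a proper parser for nested types.
function extractProps(
  source: string
): { name: string; type: string; required: boolean }[] {
  const match = source.match(/interface Props \{([\s\S]*?)\}/);
  if (!match) return [];
  const props: { name: string; type: string; required: boolean }[] = [];
  for (const line of match[1].split("\n")) {
    const m = line.trim().match(/^(\w+)(\?)?:\s*(.+?);?$/);
    if (m) props.push({ name: m[1], required: m[2] !== "?", type: m[3] });
  }
  return props;
}

// Stand-in for a real component file's source text.
const source = `
interface Props {
  variant?: "primary" | "secondary";
  disabled?: boolean;
  href: string;
}
`;
```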

Step 5: Iterate and refine

The first component takes time. You’re learning what matters and surfacing patterns. After that, mix approaches:

  • Use AI extraction for components with rich Storybook/Figma docs
  • Manual translation for complex components requiring nuance
  • Scripts for batch operations on technical fields (props, types, paths)

How metadata powers AI workflows

Once component metadata exists, it enables different workflows:

Component selection agents read metadata to choose the right component for a use case. When asked to build a form, they check usage.useCases to find form-related components, check aiHints.selectionCriteria to pick between variants, and reference antiPatterns to avoid common mistakes.

Relationship graphs use metadata to understand component hierarchies. They can trace which atoms are used throughout your app, count component instances recursively, and map dependencies. All using the category and composition metadata you've defined.
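As an illustration of recursive instance counting, here’s a toy walk over a composition graph. The `nestedComponents` registry below is a made-up example; in practice it would be assembled from each component’s `composition.nestedComponents` field:

```typescript
// Toy composition graph: which components each component nests.
const nestedComponents: Record<string, string[]> = {
  Page: ["Header", "Form"],
  Header: ["Button"],
  Form: ["Input", "Button"],
  Button: [],
  Input: [],
};

// Count how many times `target` appears in the tree rooted at `root`.
function countInstances(root: string, target: string): number {
  const self = root === target ? 1 : 0;
  return self + (nestedComponents[root] ?? []).reduce(
    (sum, child) => sum + countInstances(child, target),
    0
  );
}
```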

Validation tools check generated code against metadata rules. Before suggesting code, they verify it doesn’t violate antiPatterns, respects composition.parentConstraints, and follows accessibility requirements.

Code generation uses metadata as a contract. The examples section provides copy-paste templates. The props section defines the TypeScript interface. The variants section explains which options exist and when to use each one.

It’s not about creating new documentation. It’s about reformatting existing knowledge.

Every component has decisions baked in. Decisions that live in Figma, Storybook, Notion, team memory. Component metadata is how we make those decisions explicit and queryable.

The metadata file next to your component becomes the source of truth that powers all these workflows.

Does this make documentation better?

The structure forces precision:

Anti-patterns require specificity:

  • What not to do (scenario)
  • Why it’s wrong (reason)
  • What to do instead (alternative)

You can’t write “don’t overuse primary buttons.” You have to write: “Multiple primary buttons in same section creates visual hierarchy confusion. Use one primary and secondary/ghost for other actions.”

Selection criteria eliminate vagueness:

Not “use primary for important stuff.” Instead: “Main action user should take on the page/section.”

Examples must be copy-pasteable:

Not screenshots. Actual code that works.

This precision helps junior developers understand patterns, designers onboard faster, and future-you remember why these decisions matter when you return to the codebase months later.

Component metadata makes design decisions explicit, queryable, and version-controlled alongside the components themselves.

Note: Treat this as a reference implementation, not a binary you just run. Every design system is structured differently. Your framework might be Svelte, your atomic design folder structure might be unique. Use this as the foundation, then adjust the scripts and folder paths to match your specific architecture.