Chat Assistant

The AI chat assistant (H key) lets you interact with your design through conversation. The AI panel has four tabs: Chat, Spec, Map, and GitHub.

Actions and example prompts:

  • Design - "Design a cafe with POS, kitchen, and order queue"
  • Ask - "What's the weakest part of this architecture?"
  • Modify - "Add a cache between the API and database"
  • Analyze - "Which components are single points of failure?"
  • Explain - "Why is the queue backing up?"
  • Subsystem - "Zoom into the API gateway"

Type your message in the chat input and press Send. The AI uses your canvas context and conversation history to generate a response.

The real power of the chat isn’t one-shot generation. It’s iteration. Start rough, simulate, find problems, and ask the AI to fix them. Each cycle teaches the AI more about your system and what you care about.

A typical loop looks like:

  1. Draft - “Design an e-commerce checkout with cart, payment, and inventory”
  2. Simulate - Press Play and watch where requests pile up or get lost
  3. Ask - “Why is the payment service dropping requests?”
  4. Fix - “Add a retry on the payment service with 3 attempts”
  5. Stress test - “What happens if traffic doubles?”
  6. Analyze - “What is the maximum throughput before the database becomes a bottleneck?”
  7. Revise - “Add a cache between the API and the database”
  8. Repeat

Each follow-up message builds on the previous context. The AI remembers your design decisions and can explain tradeoffs: “Adding a cache improves read latency from 200ms to 5ms but introduces stale data. Want me to add a TTL?”

The AI gets smarter about your design the more context you give it:

  • Describe constraints - “We have a max budget of $200/month” or “The team only knows Python”
  • State priorities - “Latency matters more than cost” or “We need 99.9% uptime”
  • Reference real systems - “This should work like Stripe’s webhook retry logic”
  • Challenge assumptions - “Why did you pick Kafka here instead of a simple queue?”

The AI can reason about design tradeoffs. Try prompts like:

  • “What are the tradeoffs between polling and WebSockets for this notification system?”
  • “Should we use a single database or split reads and writes?”
  • “Compare caching at the API layer vs at the database layer”
  • “What happens if we remove the queue? Is it worth the simplicity?”
  • “Is this over-engineered for 100 users/day?”

Design questions:

  • “Design a food delivery app backend”
  • “Add authentication to this system”
  • “How would Netflix handle this differently?”

Debugging questions:

  • “Why is the queue backing up during simulation?”
  • “Which component is the bottleneck?”
  • “What happens if this service goes down?”

Metric and scaling questions:

  • “What throughput do I need for 10,000 daily users?”
  • “How many instances of the worker should I run?”
  • “Set realistic latency numbers for each component”

Code and implementation:

  • “What libraries would I need to build this?”
  • “Write the API routes for the gateway component”

Use file attachments to give the AI deeper context about your project:

  • Requirements docs (.md, .txt) - Product specs, PRDs, user stories
  • Existing code (.py, .js, .ts) - Current implementation to reverse-engineer or improve
  • API schemas (.json) - OpenAPI specs, database schemas, config files
  • Meeting notes (.md, .txt) - Stakeholder feedback, design review notes
  • Architecture docs (.md) - Existing system documentation to extend or migrate

Every AI change goes through review. The chat shows a diff of proposed component and connection changes. Click Apply to accept or Reject to dismiss. This applies to first drafts, layout changes, and iterative modifications.

Select a component to scope the conversation to it. The AI will then set behaviors, metrics, and requirements for that component only.

Ask the AI to set component behaviors:

“Make the queue component filter out requests with priority < 3”

The AI will assign the appropriate behavior mode and configure its parameters. All 12 behavior modes are supported: passthrough, transform, filter, queue, split, delay, condition, retry, rate limit, circuit breaker, batch, and replicate.
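As an illustration of what a filter behavior does (the function and parameter names below are hypothetical, not the product's actual configuration schema):

```python
def make_priority_filter(min_priority):
    """Return a predicate that keeps only requests at or above min_priority.

    Hypothetical sketch of the filter behavior mode; the real parameter
    names used by the product are not documented here.
    """
    def keep(request):
        return request.get("priority", 0) >= min_priority
    return keep

# "Filter out requests with priority < 3" means keeping priority >= 3.
keep = make_priority_filter(3)
requests = [{"id": 1, "priority": 1}, {"id": 2, "priority": 3}, {"id": 3, "priority": 5}]
passed = [r for r in requests if keep(r)]
# passed contains only the requests with id 2 and id 3
```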

Attach files (.py, .txt, .md, .js, .ts, .json, etc.) to provide additional context. Useful for importing existing specs, requirements docs, or code snippets that inform your design.

  • Up to 5 files per message
  • Max 100KB per file
  • Max 100K characters total (message + attachments)
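The limits above can be checked before sending. A minimal sketch of that validation, assuming attachments arrive as (filename, content) pairs (this logic is illustrative, not the product's actual code):

```python
MAX_FILES = 5
MAX_FILE_BYTES = 100 * 1024   # 100KB per file
MAX_TOTAL_CHARS = 100_000     # message + attachments combined

def validate_message(message, attachments):
    """Return a list of limit violations for a chat message.

    `attachments` is a list of (filename, content) tuples.
    An empty list means the message is within all limits.
    """
    errors = []
    if len(attachments) > MAX_FILES:
        errors.append(f"too many files: {len(attachments)} > {MAX_FILES}")
    for name, content in attachments:
        if len(content.encode("utf-8")) > MAX_FILE_BYTES:
            errors.append(f"{name} exceeds 100KB")
    total_chars = len(message) + sum(len(c) for _, c in attachments)
    if total_chars > MAX_TOTAL_CHARS:
        errors.append(f"total characters {total_chars} > {MAX_TOTAL_CHARS}")
    return errors
```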

After each interaction, the AI suggests follow-up actions as clickable chips to push your design forward through the full loop: draft, simulate, stress test, analyze, revise.

The AI keeps the last 6 messages in full context. Older messages are compressed to 200 characters to save tokens while preserving conversation continuity.
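One way to picture that policy (a sketch of the described behavior, not the actual implementation):

```python
FULL_CONTEXT_MESSAGES = 6   # most recent messages kept verbatim
SUMMARY_CHARS = 200         # older messages truncated to this length

def build_context(history):
    """Compress a chat history for the model: keep the last 6 messages
    in full and truncate everything older to 200 characters."""
    older = [m[:SUMMARY_CHARS] for m in history[:-FULL_CONTEXT_MESSAGES]]
    recent = history[-FULL_CONTEXT_MESSAGES:]
    return older + recent
```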

  • 100K character input limit per message (including attachments)
  • 500 AI credits per month (Pro)
  • Chat is a Pro feature

The Spec tab generates a full Product Requirements Document from your current canvas.

  1. Open the AI panel with H
  2. Click the Spec tab
  3. Click Generate AI Spec

The AI analyzes your components, connections, behaviors, and simulation results to produce a structured markdown document covering:

  • Overview (executive summary)
  • Architecture (pattern and reasoning)
  • Components (type, role, behavior, interfaces, dependencies)
  • Data Flow (end-to-end trace, fan-out/fan-in, filtering, queuing)
  • Failure Modes and Resilience (bottlenecks, cascade paths, hardening recommendations)
  • Non-Functional Requirements (throughput, latency, capacity, scaling)
  • Implementation Notes (tech stack, integration points, monitoring)
  • Open Questions (3-5 gaps in the design)

You can copy the spec to clipboard or download it as a .md file.

Spec is a Pro feature.