
Building Blocks

Chinilla uses 7 universal building blocks. They work for any domain: software, restaurants, hospitals, factories, logistics. No infrastructure jargon required.

| Block | Icon | Description |
|---|---|---|
| Person | User | A user, worker, or actor |
| Step | Cog | An action or process |
| Storage | Database | Holds data, items, or state |
| Decision | GitBranch | Routes to different paths |
| Trigger | Zap | Starts a flow or process |
| Tool | Wrench | An external service or resource |
| Channel | ArrowLeftRight | A communication path |

These blocks map to anything. A “Step” can be a microservice, a kitchen station, or an assembly line stage. A “Person” can be a user, a nurse, or a delivery driver. The AI handles the domain-specific details when you describe your system.

Every component supports:

  • Name - Display label on the canvas
  • Description - Hover card detail text
  • Metrics - Throughput, capacity, processing time (see Metrics below)
  • Requirements - Framework, language, runtime, OS, hardware, dependencies
  • Cost - Monthly and setup cost estimates
  • Infrastructure - Protocol (HTTP, gRPC, WebSocket, AMQP, Kafka, TCP, UDP, GraphQL, MQTT) and scaling config (min instances, max instances, scale trigger expression)
  • Behavior - Programmable processing mode (see Behaviors)
  • Subsystem - Drill into a component as its own nested system
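
To make these fields concrete, here is a hedged sketch of how one component's settings might be captured as plain data. This is not Chinilla's storage format; the field names and values are illustrative only.

```python
# Hypothetical representation of a fully configured component.
# Field names are illustrative, not Chinilla's actual schema.
payment_api = {
    "name": "Payment API",
    "description": "Handles checkout and refund requests",
    "metrics": {"throughput_rps": 500, "capacity": 100, "processing_time_ms": 50},
    "requirements": {"language": "Python", "framework": "FastAPI", "runtime": "3.12"},
    "cost": {"monthly_usd": 120, "setup_usd": 0},
    "infrastructure": {
        "protocol": "HTTP",
        "scaling": {"min_instances": 2, "max_instances": 10, "trigger": "cpu > 80%"},
    },
    "behavior": "default",   # see Behaviors
    "subsystem": None,       # or a nested system definition
}
```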

Metrics

Metrics define the performance characteristics of a component. They affect simulation behavior and the numbers the AI uses when generating code or analyzing your design.

| Metric | What it means | Example values |
|---|---|---|
| Throughput (req/s) | How many requests the component handles per second at normal load | API: 500, Database: 1000, Human worker: 2 |
| Capacity | Maximum concurrent requests the component can hold or process | Queue: 100, Server: 50, Worker: 5 |
| Processing time (ms) | How long one request takes to process end to end | Cache hit: 5, API call: 200, ML inference: 2000 |

These values drive the simulation engine and stress testing. The AI also references them when generating code (e.g. setting thread pool sizes from capacity, or adding time.sleep() from processing time).
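
As a rough illustration of that mapping (not the exact code Chinilla emits), capacity might become a worker-pool size and processing time a sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative metrics for a single component (example values only)
CAPACITY = 50              # max concurrent requests -> worker pool size
PROCESSING_TIME_MS = 200   # per-request latency -> simulated work

def handle_request(request_id: int) -> dict:
    time.sleep(PROCESSING_TIME_MS / 1000)  # model the processing time
    return {"id": request_id, "status": "ok"}

# Pool sized from the component's capacity metric
with ThreadPoolExecutor(max_workers=CAPACITY) as pool:
    results = list(pool.map(handle_request, range(10)))
```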

How to think about each metric:

  • Throughput is your steady-state rate. Ask: “How many requests per second does this handle under normal conditions?”
  • Capacity is your ceiling. Ask: “How many things can be in-flight at once before we start dropping or queuing?”
  • Processing time is your latency. Ask: “How long does a single request take from start to finish?”
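
One way to check that the three numbers are consistent (a back-of-the-envelope check, not a Chinilla feature): by Little's law, the average number of in-flight requests is roughly throughput × processing time, and that figure should stay below capacity.

```python
def in_flight(throughput_rps: float, processing_time_ms: float) -> float:
    """Little's law: average concurrent requests = arrival rate x time in system."""
    return throughput_rps * (processing_time_ms / 1000)

# Example: 500 req/s at 50 ms each keeps ~25 requests in flight,
# comfortably under a capacity of 100.
assert in_flight(500, 50) <= 100
```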

Real-world examples:

| Component | Throughput | Capacity | Processing time |
|---|---|---|---|
| Coffee shop register | 2 req/s | 1 | 500 ms |
| REST API gateway | 500 req/s | 100 | 50 ms |
| PostgreSQL database | 1000 req/s | 200 | 10 ms |
| ML prediction service | 20 req/s | 5 | 2000 ms |
| Message queue (Kafka) | 10000 req/s | 500 | 2 ms |
| Human reviewer | 0.1 req/s | 1 | 10000 ms |

Tips:

  • Leave metrics at 0 if you don’t know yet. The AI can suggest values based on your component’s description.
  • Use round numbers to start (100 req/s, 50ms). Refine after running a simulation.
  • Compare metrics across components to spot bottlenecks: if your API handles 500 req/s but your database handles 100, the database is the constraint.
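
The bottleneck check in the last tip can be done by eye, or with a few lines of scripting over your components' throughput values (the names and numbers below are hypothetical):

```python
# Hypothetical throughput figures pulled from a design's metrics
throughputs_rps = {
    "API gateway": 500,
    "PostgreSQL": 1000,
    "ML prediction service": 20,
}

# The component with the lowest throughput caps the whole flow
bottleneck = min(throughputs_rps, key=throughputs_rps.get)
print(f"Bottleneck: {bottleneck} at {throughputs_rps[bottleneck]} req/s")
```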

Infrastructure

The Infrastructure section lets you define how a component communicates and scales.

Protocol sets the primary communication protocol for the component. This affects AI code generation (appropriate client libraries) and the deterministic Python export (protocol-aware imports).

| Protocol | Typical use |
|---|---|
| HTTP | REST APIs, web services |
| gRPC | Internal microservice RPC |
| WebSocket | Real-time bidirectional streams |
| AMQP | Message broker integration (RabbitMQ) |
| Kafka | Event streaming |
| TCP / UDP | Low-level network services |
| GraphQL | Query-based APIs |
| MQTT | IoT device messaging |
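
As a hedged illustration of what "protocol-aware imports" could mean in a Python export (the actual mapping Chinilla uses may differ), each protocol roughly implies a client library:

```python
# Assumed protocol -> Python client library mapping, for illustration only
PROTOCOL_IMPORTS = {
    "HTTP": "import requests",
    "gRPC": "import grpc",
    "WebSocket": "import websockets",
    "AMQP": "import pika",
    "Kafka": "from kafka import KafkaProducer",
    "TCP / UDP": "import socket",
    "GraphQL": "import requests  # POST queries to a /graphql endpoint",
    "MQTT": "import paho.mqtt.client as mqtt",
}

def import_line(protocol: str) -> str:
    return PROTOCOL_IMPORTS.get(protocol, "# no protocol-specific import")
```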

Scaling defines instance boundaries and autoscale triggers:

  • Min instances - Minimum replica count (baseline capacity)
  • Max instances - Maximum replica count (scale ceiling)
  • Scale trigger - Expression that triggers scaling (e.g. cpu > 80%, queue.depth > 100)

When a component has maxInstances > 1, the Python export generates a ThreadPoolExecutor pool sized to maxInstances and the AI code generator uses concurrency patterns like thread pools or multiprocessing.
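
A minimal sketch of that pattern, assuming a scaling config with maxInstances set to 4 (the structure is illustrative, not the verbatim generated code):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative scaling config for one component (field names assumed)
scaling = {"min_instances": 1, "max_instances": 4, "trigger": "queue.depth > 100"}

def run_component(handler, jobs, scaling):
    # maxInstances > 1 -> a worker pool sized to the scale ceiling;
    # a single instance would simply process the jobs sequentially.
    if scaling["max_instances"] > 1:
        with ThreadPoolExecutor(max_workers=scaling["max_instances"]) as pool:
            return list(pool.map(handler, jobs))
    return [handler(job) for job in jobs]

print(run_component(lambda j: f"job {j} done", range(8), scaling))
```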