Step 4 — Interpret Simulation Results
After each simulation run, review the per-node metrics carefully before acting on them.
Health status signals
| Status | Meaning | Action |
|---|---|---|
| ✅ Healthy (green) | Service is handling load within configured limits | Proceed to AI Recommendations |
| ⚠️ Warning (amber) | Service is approaching a quota or latency threshold | Investigate node configuration; consider scaling |
| ❌ Critical (red) | Service has breached a limit or become a bottleneck | Resolve before proceeding — under the same conditions, this would cause failures in production |
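The status logic above can be sketched as a small classifier. This is a hypothetical illustration, not the tool's actual code: the 80% warning threshold and the `classify_health` name are assumptions chosen to mirror the table.

```python
# Hypothetical sketch of the health-status logic in the table above.
# The 80% warning threshold is an assumption, not a documented value.

def classify_health(current_load: float, limit: float,
                    warn_ratio: float = 0.8) -> str:
    """Return 'healthy', 'warning', or 'critical' for one node."""
    if limit <= 0:
        raise ValueError("limit must be positive")
    ratio = current_load / limit
    if ratio >= 1.0:
        return "critical"   # limit breached: the node is a bottleneck
    if ratio >= warn_ratio:
        return "warning"    # approaching the configured quota
    return "healthy"        # within configured limits
```

In this sketch, only a `critical` result blocks progress; a `warning` prompts a configuration review, matching the actions in the table.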
Latency interpretation
Latency figures in node metrics represent simulated processing time at the configured load level.
- Lambda latency significantly higher than other nodes — indicates cold start or under-provisioned memory. Consider provisioned concurrency or a memory increase.
- Uniform latency spikes across all nodes — typically indicates an upstream bottleneck (e.g. Route 53 or API Gateway throttling) propagating downstream.
- Latency growth correlated with RPS increase in a Ramp test — indicates the architecture is not scaling horizontally as load increases; review your auto-scaling configuration.
Cost signals
Available on the Pro plan and above.
The Est. Cost metric provides a live monthly projection at the current RPS. At the end of each simulation run, this figure is saved to Execution History.
- Compare cost across simulation runs where the architecture has changed to measure the monetary impact of each change.
- If cost is increasing faster than RPS, the architecture has a cost-efficiency problem — typically over-provisioned services or missing caching layers. This is a direct input for the AI Recommendations engine.
- Use the cost estimate as a sanity check on your simulation parameters. If the projected monthly cost looks unrealistic for your use case, revisit your RPS setting.
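The "cost increasing faster than RPS" signal can be expressed as a simple ratio between two saved runs. A hedged sketch, assuming each Execution History entry exposes `rps` and `est_cost` fields (names are assumptions about the export shape):

```python
# Hypothetical sketch: compare two saved runs from Execution History
# to spot a cost-efficiency problem, i.e. cost outpacing load.
# The 'rps' and 'est_cost' keys are assumed field names.

def cost_efficiency_ratio(baseline: dict, current: dict) -> float:
    """Ratio of cost growth to RPS growth between two runs.
    A value > 1.0 means cost is growing faster than load —
    look for over-provisioned services or missing caching layers."""
    cost_growth = current["est_cost"] / baseline["est_cost"]
    rps_growth = current["rps"] / baseline["rps"]
    return cost_growth / rps_growth
```

For example, doubling RPS while cost triples gives a ratio of 1.5, which would flag the run for the AI Recommendations engine's cost-efficiency review.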