Simulation & Testing
A robust Manifest Studio workflow combines stub simulation, an interactive debugger, and configurable error/retry policies. Together these let you validate data flow, step through executions, and handle failures—all before deploying to production.
1. Stub Simulation Workflow
Use stubbed handlers to run your manifest end-to-end without real APIs or LLM calls.
Purpose
Validate connector wiring and data flow
Test branching logic in isolation
Iterate on prompt templates and parsing
Key Steps
Define Stub Handlers: for each Resource or Tool node, configure a stub response (a sketch of a stub registry follows these steps). For example:

{
  "lamports": 1500000000,
  "tokens": [
    { "mint": "So1111…1112", "balance": 42 }
  ]
}

Enable Simulation Mode: in your Manifest Metadata, add a simulation block:

{
  "metadata": {
    "manifestName": "MyWorkflow",
    "version": "1.0.0",
    "simulation": {
      "enabled": true,
      "stubs": {
        "res_account": "solscan-account-stub",
        "tool_parse": "parse-balances-stub"
      }
    }
  }
}

Run Stub Workflow: click Run Validation (or Run Simulation) to launch the Validation Terminal. The client routes calls to your stub handlers.
Inspect Outputs
In the Validation Terminal you’ll see each stubbed call and its canned response.
Use the Live JSON Preview to verify that connector mappings propagate the stub data correctly.
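How stub names resolve to canned payloads depends on your project setup; the manifest only references them by name. A minimal sketch of such a registry, reusing the stub names from the simulation block above (the registry shape and the parse-balances payload are illustrative assumptions):

// Minimal sketch of a stub registry, assuming stubs are plain objects keyed by
// the names referenced in the simulation block. The registry shape and the
// parse-balances payload are illustrative assumptions, not part of the schema.
const stubRegistry: Record<string, unknown> = {
  "solscan-account-stub": {
    lamports: 1_500_000_000,
    tokens: [{ mint: "So1111…1112", balance: 42 }],
  },
  "parse-balances-stub": {
    balances: [{ symbol: "SOL", amount: 1.5 }], // hypothetical parsed output
  },
};

// In simulation mode, a node's call is resolved against the registry instead
// of hitting the real API.
function resolveStub(name: string): unknown {
  return stubRegistry[name];
}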
2. Interactive Debugger & Timeline View
Trace every node execution, inspect payloads, and replay your workflow.
Step-by-Step Execution
Pause before or after any node to examine its inputs and outputs.
Toggle Debug Mode in the Manifest Studio toolbar.
Timeline View
Visualize the chronological order of node invocations, HTTP calls, and LLM sampling.
Replay execution with duration metrics and status indicators.
Data Snapshots
Click any connector edge to expand its full JSON payload (an example snapshot follows this list).
View logs, error messages, or truncated outputs inline.
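For example, with the stubs from section 1 in place, expanding the edge between res_account and tool_parse might show something like the snapshot below. The payload mirrors the stub response; the wrapper fields (edge, status, durationMs) are assumptions about how the snapshot is presented.

// Hypothetical connector-edge snapshot: the payload comes from the stub example
// in section 1, while the wrapper fields are assumed for illustration.
const snapshot = {
  edge: "res_account -> tool_parse",
  status: "ok",
  durationMs: 3, // stubbed calls return almost immediately
  payload: {
    lamports: 1_500_000_000,
    tokens: [{ mint: "So1111…1112", balance: 42 }],
  },
};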
Using the Debugger UI
Activate Debug Mode: toggle the debugger icon in the top toolbar.
Set Breakpoints: click the left margin of any node to pause before it runs.
Inspect Context: when paused, the Properties Panel shows:
Node configuration (URL patterns, prompt text, schema)
Incoming connector data
Last output value
Step Over / Into
Step Over: advance to the next node
Step Into: dive into sub-steps (e.g. inside a tool execution or SSE stream)
Review Timeline: after completion, switch to the Timeline tab to see a waterfall chart of events.
3. Error Handling & Retry Policies
Define how your workflow responds to transient failures, timeouts, and invalid inputs.
Error Types
Validation Errors: missing or malformed arguments in Resource or Tool nodes.
Execution Errors: HTTP 5xx responses, DNS failures, rate limits, or invalid JSON responses (see the classification sketch after this list).
LLM Errors: model timeouts, policy violations, or user rejections.
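How raw failures are bucketed into the categories that the retryOn setting (below) understands is manifest-specific; the sketch below is one illustrative mapping, using the category names from the retry policy example.

// Illustrative mapping from a raw failure to the retryOn categories used in the
// retry policy example below. The error shape and the mapping rules are
// assumptions, not a documented API.
type RetryCategory = "networkError" | "rateLimit" | "timeout" | "nonRetryable";

function classifyError(err: { httpStatus?: number; code?: string }): RetryCategory {
  if (err.code === "ETIMEDOUT") return "timeout";
  if (err.code === "ENOTFOUND" || err.code === "ECONNRESET") return "networkError";
  if (err.httpStatus === 429) return "rateLimit";
  if (err.httpStatus !== undefined && err.httpStatus >= 500) return "networkError";
  return "nonRetryable"; // validation errors, policy violations, user rejections
}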
Configuring Retry Policies
In your Manifest Metadata or per-node onError block:
{
"metadata": {
"retryPolicy": {
"maxAttempts": 3,
"backoff": {
"type": "exponential",
"initialDelayMs": 500,
"maxDelayMs": 5000
},
"retryOn": ["networkError", "rateLimit", "timeout"]
}
},
"nodes": [
{
"id": "tool_parse",
"type": "tool",
"onError": {
"action": "retry",
"maxRetries": 2,
"fallbackTo": "default_parse_stub"
}
}
]
}

maxAttempts: Total attempts, including the initial call.
backoff: Delay strategy, either fixed or exponential (a backoff schedule sketch follows this list).
retryOn: List of error categories that trigger retries.
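With the exponential settings above, delays grow between attempts and are capped at maxDelayMs. A small sketch of that schedule, assuming a doubling factor (the schema does not spell out the multiplier) and no jitter:

// Delay schedule implied by the exponential backoff settings above
// (initialDelayMs: 500, maxDelayMs: 5000); a doubling factor is assumed.
function backoffDelayMs(attempt: number, initialDelayMs = 500, maxDelayMs = 5000): number {
  return Math.min(initialDelayMs * 2 ** (attempt - 1), maxDelayMs);
}

// Attempt 1 fails -> wait 500 ms, attempt 2 -> 1000 ms, attempt 3 -> 2000 ms,
// then 4000 ms and 5000 ms (capped) if maxAttempts allowed more retries.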
Node-Level Handlers
Attach custom error handlers to emit fallback values or trigger alerts:
{
"id": "res_account",
"type": "resource",
"onError": {
"action": "fallback",
"value": { "lamports": 0, "tokens": [] }
}
}

Global Error Catchers
Use a special catch node or UI hook to consolidate unhandled failures:
{
"onFailure": {
"notify": "[email protected]",
"logLevel": "critical"
}
}

Enable metrics and monitoring dashboards to track error rates, latencies, and retry counts in real time.
4. Putting It All Together
By combining:
Stub Simulation to validate flows without dependencies
Interactive Debugging to trace logic and timing
Configurable Retry & Error Policies for resilience
you can develop, test, and maintain complex MCP workflows safely and efficiently—ready for production with one click.