Chain AI agents, code steps, MCP tools, and conditional logic into pipelines that run on schedule, on webhook, on file upload, or on demand. No code required to build. Full control when you want it.
Every workflow is a sequence of steps. Mix and match AI reasoning, custom code, external tool calls, branching logic, and data transformations in a single pipeline.
Execute a pre-configured AI agent with its own system prompt, tools, and model selection. Agents run in Standard (V2) or Super (SuperV2) runtime with full tool-calling support and streaming output.
Send a direct prompt to any supported LLM without creating a separate agent. Ideal for one-off classification, summarization, or extraction tasks where a full agent definition would be overkill.
Run AI-generated or hand-written JavaScript in a secure V8 sandbox. Access fetch, JSON, Math, and Date APIs with zero LLM cost. Perfect for data parsing, API calls, and calculations between AI steps.
Invoke tools from any deployed MCP server directly in your workflow. Query databases, call REST APIs, or interact with external systems using the Model Context Protocol standard.
Execute utility tools for common operations: send emails, create Jira tickets, read and write files, or call any HTTP endpoint. Built-in actions cover the most common integration patterns out of the box.
Add branching logic with dotted-path resolution on step outputs. Route workflow execution down different paths based on AI classification results, data thresholds, or any computed value.
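Dotted-path resolution can be sketched in a few lines of JavaScript. The path syntax and strict-equality comparison below are assumptions for illustration, not Orckai's exact implementation:

```javascript
// Resolve a path like "step_1.result.category" against the collected
// step outputs, walking one key at a time and tolerating missing segments.
function evaluateCondition(outputs, path, expected) {
  const value = path
    .split('.')
    .reduce((v, key) => (v == null ? undefined : v[key]), outputs);
  return value === expected;
}

// Route to a branch based on an AI classification result:
const outputs = { step_1: { result: { category: 'billing' } } };
const next = evaluateCondition(outputs, 'step_1.result.category', 'billing')
  ? 'step_3a'
  : 'step_3b';
```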
Reshape data between steps without writing code or consuming LLM tokens. Map fields, filter arrays, merge objects, and prepare payloads so every downstream step receives exactly the structure it expects.
Every workflow needs a trigger. Orckai supports four distinct trigger modes so your automations fire exactly when you need them to, whether that is once a minute or once a quarter.
Webhook requests authenticate with an X-Webhook-Secret header, validated securely to prevent replay and brute-force attacks.
cron: "*/5 * * * *"
POST /api/webhooks/{workflowId}
input: { file: "invoice_Q4.pdf" }
POST /api/workflows/{id}/execute
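As a sketch, a client can hit the webhook trigger with a single HTTP request; the base URL, workflow ID, and secret below are placeholders, and the manual /api/workflows/{id}/execute endpoint is called the same way:

```javascript
// Build the webhook trigger request (sending it is one fetch call).
// All identifiers here are illustrative placeholders.
function buildWebhookRequest(baseUrl, workflowId, secret, payload) {
  return new Request(`${baseUrl}/api/webhooks/${workflowId}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Webhook-Secret': secret, // validated server-side
    },
    body: JSON.stringify(payload),
  });
}

// await fetch(buildWebhookRequest('https://app.example.com', 'wf_123',
//   process.env.WEBHOOK_SECRET, { file: 'invoice_Q4.pdf' }));
```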
Every step in an Orckai workflow can reference the output of any previous step using the {{variable}} syntax. This means your AI agent's classification result can feed directly into a condition step, which routes to different code steps, which format the final payload for an action step.
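A minimal sketch of that interpolation, assuming paths made of word characters and dots (an illustration, not the engine's actual parser):

```javascript
// Replace each {{path.to.value}} with the value resolved from the
// step outputs; unresolved paths become the string "undefined" here.
function interpolate(template, outputs) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path) =>
    path.split('.').reduce((v, key) => (v == null ? undefined : v[key]), outputs)
  );
}

interpolate('Invoice {{step_3a.invoice.id}}',
  { step_3a: { invoice: { id: 'INV-7' } } }); // "Invoice INV-7"
```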
Transform steps let you reshape data without consuming LLM tokens. Map fields from one schema to another, extract nested values, filter arrays, or merge multiple step outputs into a single clean object that downstream steps can consume.
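Conceptually, a transform step's mapping is equivalent to a small pure function over the previous outputs; the field names and shapes below are made-up examples:

```javascript
// Pure data reshaping, no LLM call: extract, filter, map, and merge.
function transformStep(steps) {
  return {
    category: steps.step_1.category,                     // extract a nested value
    paidInvoices: steps.step_2.invoices
      .filter(inv => inv.paid)                           // filter an array
      .map(inv => ({ id: inv.id, total: inv.total })),   // map fields
    meta: { ...steps.step_1.meta, ...steps.step_2.meta } // merge objects
  };
}
```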
{{step_1.result.category}}
Step 1: Agent (classify-email)
  output: { category: "billing", priority: "high" }
Step 2: Condition
  if: {{step_1.category}} == "billing"
  then: goto Step 3a
  else: goto Step 3b
Step 3a: Code (fetch-invoice)
  input: {{step_1.priority}}
  output: { invoice: {...}, total: 4250 }
Step 4: Action (send-email)
  to: billing@company.com
  body: "Invoice {{step_3a.invoice.id}}
         Total: ${{step_3a.total}}"
Not every step needs an AI model. Code steps run JavaScript in a secure sandbox, giving you full programmatic control for data processing, API integrations, and business logic without spending a single token.
The sandbox isolates code execution from the host environment while providing safe APIs including fetch, JSON, Math, Date, and regular expressions. Each execution runs with configurable timeouts to prevent runaway processes.
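One common way to implement such a timeout guard is to race the step's promise against a timer; this is a generic sketch, not Orckai's internal mechanism:

```javascript
// Reject if the sandboxed step does not settle within `ms` milliseconds.
function withTimeout(run, ms) {
  return Promise.race([
    run(),
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`step timed out after ${ms}ms`)), ms)
    ),
  ]);
}
```

A production sandbox would also terminate the underlying work, since Promise.race only abandons the losing promise rather than cancelling it.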
Inside the sandbox, code steps get a fetch API for calling external services and an inputs object that exposes every previous step's output.
// Fetch exchange rates and compute totals
const rates = await fetch(
  'https://api.exchangerate.host/latest'
).then(r => r.json());

const invoices = inputs.step_2.invoices;
const converted = invoices.map(inv => ({
  ...inv,
  usd: inv.amount * rates.rates.USD,
  eur: inv.amount * rates.rates.EUR
}));

const total = converted.reduce(
  (sum, inv) => sum + inv.usd, 0
);

return {
  invoices: converted,
  totalUSD: total.toFixed(2),
  rateDate: rates.date
};
When a workflow runs, you see exactly what happened at every step. Inspect inputs, outputs, timing, errors, and token usage. Retry failed steps or re-run entire workflows from the dashboard.
Every workflow run produces a detailed execution log. Expand any step to see its input payload, output data, execution duration, token count, and any errors. Identify bottlenecks and debug failures without guesswork.
Failed at step 4 of 6? Retry just that step or re-run the entire workflow from the beginning. Orckai preserves execution history so you can compare runs, track improvements, and audit every decision the system made.
Workflows execute through a reliable async job queue with built-in retry logic. Monitor queue depth, job states, and throughput via the built-in monitoring UI. Metrics feed into dashboards for production observability.
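Retry-with-backoff at the queue layer can be sketched like this; the attempt count and delays are illustrative defaults, not Orckai's actual configuration:

```javascript
// Re-run a failed job with exponential backoff before giving up.
async function runWithRetries(job, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await job();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      // 500ms, 1000ms, 2000ms, ... between attempts
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```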