Steps & Workflow Design
Steps are the fundamental unit of work in JsWorkflows. Understanding them — and designing around them deliberately — is the difference between a workflow that is reliable, efficient, and easy to maintain and one that times out, retries constantly, or runs up unnecessary costs.
What a step is
A step is a single method on your Workflow class. Each time a step runs, it is a fresh, independent worker invocation with:
- Its own execution context (no shared in-memory state between steps)
- Up to 5 minutes of CPU time
- Full access to the platform API (`api.scheduleNextStep`, `api.runStore`, etc.)
- Its own `data`, `headers`, and `api` arguments
When `api.scheduleNextStep()` is called, the current step exits normally. After the specified delay, the next step is dispatched as a brand-new invocation — not a continuation of the same process.
Why steps exist
Most webhook handlers run for under a second. But real workflows often need to:
- Wait — send a follow-up email 3 days after an order, check a payment status 10 minutes later
- Spread load — process 500 products without hitting Shopify’s API rate limits in a single call
- Run in parallel — process every line item at the same time instead of sequentially
- Respond fast — acknowledge a Shopify webhook immediately, do the actual work asynchronously
All of these are solved by splitting the work across steps.
When start should (and shouldn’t) do work
The `start` step has a strict responsibility: acknowledge the trigger and queue what needs to happen next.
For Shopify webhook triggers, Shopify expects a response within a few seconds. If your start step makes multiple API calls, waits for rate limits, or processes large lists, you risk:
- Shopify marking the delivery as failed and retrying up to 19 times
- Running the same logic repeatedly on each retry
- Burning credits on duplicated processing
Keep start fast:
```js
async start(data, headers, api) {
  // ✓ Validate the payload
  if (!data.id) return;

  // ✓ Deduplicate (important for Shopify webhook retries)
  const { locked } = await api.dedupe(`my-workflow:${data.id}`);
  if (locked) return;

  // ✓ Schedule the real work
  await api.scheduleNextStep({
    delay: 10,
    action: 'processOrder',
    payload: { orderId: data.id },
  });
  // start() returns here — Shopify gets a fast 200
}
```

Linear chains
The simplest multi-step pattern is a linear chain: start → stepA → stepB → done.
```js
class Workflow {
  async start(data, headers, api) {
    await api.scheduleNextStep({
      delay: '1 day',
      action: 'sendFollowUp',
      payload: { orderId: data.id, email: data.email },
    });
  }

  async sendFollowUp({ orderId, email }, headers, api) {
    // Runs 24 hours later — send a review request email
    await api.scheduleNextStep({
      delay: '7 days',
      action: 'sendSecondFollowUp',
      payload: { orderId, email },
    });
  }

  async sendSecondFollowUp({ orderId, email }, headers, api) {
    // Runs 8 days after the order — final follow-up
  }
}
```

Each step only needs to know about the next one. Steps do not share memory, so pass everything the next step needs in `payload`.
Fan-out (parallel branches)
Call `api.scheduleNextStep()` multiple times in one step to launch parallel branches. Each call starts an independent invocation of the target method. The run is not marked complete until all branches finish.
```js
class Workflow {
  async start(data, headers, api) {
    // Fan out — one branch per line item, all run simultaneously
    for (const item of data.line_items) {
      await api.scheduleNextStep({
        delay: 10,
        action: 'processItem',
        payload: { itemId: item.id, sku: item.sku, quantity: item.quantity },
      });
    }
    // start() exits — all N branches are now queued
  }

  async processItem({ itemId, sku, quantity }, headers, api) {
    // Each branch runs independently and in parallel
    api.log(`Processing item ${sku} ×${quantity}`);
    // ... update inventory, notify supplier, etc.
  }
}
```

Fan-out is ideal for processing lists where each item is independent. Instead of a loop that makes 100 sequential API calls in one step (hitting rate limits, risking timeout), you launch 100 branches that each make one call.
Collecting results from fan-out branches
Use `api.runStore.push()` to accumulate results from parallel branches, then read them in a consolidation step:
```js
class Workflow {
  async start(data, headers, api) {
    for (const item of data.line_items) {
      await api.scheduleNextStep({
        delay: 10,
        action: 'checkStock',
        payload: { itemId: item.id, sku: item.sku },
      });
    }
  }

  async checkStock({ itemId, sku }, headers, api) {
    const res = await fetch(`https://your-inventory-api.com/stock/${sku}`);
    const { quantity } = await res.json();

    // Push results from all parallel branches into one list
    await api.runStore.push('stockResults', { sku, quantity, inStock: quantity > 0 });
  }
}
```

The run state (`api.runStore`) is scoped to the current run and shared across all branches.
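One simple (if race-prone) way to consolidate is to schedule a final step with a delay long enough for the branches to finish, then read the accumulated list back. This sketch assumes a `api.runStore.get(key)` read method exists alongside `push()` — check your platform's run-store API for the actual read call:

```js
class Workflow {
  async start(data, headers, api) {
    // ... fan-out branches as above, then schedule a consolidation step.
    // The 60-second delay is a guess at how long the branches need.
    await api.scheduleNextStep({
      delay: 60,
      action: 'summarizeStock',
      payload: {},
    });
  }

  async summarizeStock(payload, headers, api) {
    // Assumption: runStore.get(key) returns the list built up by push()
    const results = (await api.runStore.get('stockResults')) ?? [];
    const outOfStock = results.filter((r) => !r.inStock).map((r) => r.sku);
    api.log(`${outOfStock.length} SKUs out of stock: ${outOfStock.join(', ')}`);
  }
}
```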
Handling Shopify API rate limits with steps
Shopify’s REST API allows approximately 2 requests per second (leaky bucket). Shopify’s GraphQL API has a cost-based quota. If your workflow processes large lists, sequential API calls in a single step will hit these limits.
Pattern: spread work across stepped branches
Instead of calling the Shopify API 200 times in a loop inside one step, fan out into 200 branches each making one call. Spread the load by staggering delays:
```js
async start(data, headers, api) {
  const items = data.line_items;
  for (let i = 0; i < items.length; i++) {
    await api.scheduleNextStep({
      // Stagger by 1 second per item to stay under the rate limit
      delay: 10 + i,
      action: 'updateItem',
      payload: { itemId: items[i].id },
    });
  }
}
```

Pattern: page through large result sets across steps
```js
async start(data, headers, api) {
  await api.scheduleNextStep({
    delay: 10,
    action: 'fetchPage',
    payload: { cursor: null, pageNum: 1 },
  });
}

async fetchPage({ cursor, pageNum }, headers, api) {
  const res = await fetch(
    `https://${env.SHOPIFY_STORE}/admin/api/${env.SHOPIFY_API_VERSION}/graphql.json`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        query: `query GetProducts($cursor: String) {
          products(first: 50, after: $cursor) {
            nodes { id title }
            pageInfo { hasNextPage endCursor }
          }
        }`,
        variables: { cursor },
      }),
    }
  );
  const { data: gqlData } = await res.json();
  const { nodes, pageInfo } = gqlData.products;

  // ... process this page of products (nodes)

  if (pageInfo.hasNextPage) {
    await api.scheduleNextStep({
      delay: 10,
      action: 'fetchPage',
      payload: { cursor: pageInfo.endCursor, pageNum: pageNum + 1 },
    });
  }
}
```

Idempotency — designing steps to be safe to retry
Cloudflare may retry a failed step. External systems may deliver duplicate events. Design every step to be idempotent: running it twice with the same input produces the same result as running it once.
Practical strategies:
- Deduplicate at the entry point — use `api.dedupe()` in `start` with a key derived from the event ID:

  ```js
  const { locked } = await api.dedupe(`${data.id}`, 300);
  if (locked) return;
  ```

- Check before writing — before tagging an order or sending a message, check if it already happened
- Use idempotency keys — most external APIs (Stripe, etc.) support an idempotency key header; use the event ID as the value
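The check-before-writing strategy can be sketched as a small helper. Everything here is illustrative: `shopifyGet`/`shopifyPut` stand in for authenticated Shopify Admin API calls, and the `follow-up-sent` tag name is made up for the example:

```js
// Check-before-write: only add the tag if it is not already there,
// so a retried step is a no-op the second time around.
async function tagOrderOnce(orderId, tag, { shopifyGet, shopifyPut }) {
  const { order } = await shopifyGet(`/orders/${orderId}.json`);
  const tags = order.tags ? order.tags.split(', ') : [];

  // Already tagged — a previous attempt (or a duplicate event) got here first
  if (tags.includes(tag)) return false;

  await shopifyPut(`/orders/${orderId}.json`, {
    order: { id: orderId, tags: [...tags, tag].join(', ') },
  });
  return true;
}
```

Usage: `await tagOrderOnce(data.id, 'follow-up-sent', client)` returns `true` only on the first successful write, so repeating the step cannot double-apply the side effect.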
Payload design
The payload you pass to `scheduleNextStep` is persisted until the step runs. Keep it lean:
- Pass only what the next step needs — do not forward the entire Shopify order object if the next step only uses the order ID and email
- Avoid large blobs — payloads over 1.5 MB are automatically stored in object storage, which adds latency
- Do not depend on in-memory state — each step starts fresh; any computation from a previous step must be passed in `payload` or stored in `api.runStore`
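A minimal sketch of the lean-payload rule: extract just the fields the next step uses instead of forwarding the whole Shopify order object. The helper name and field choices are illustrative:

```js
// Pick only the fields the follow-up step actually needs.
// The full order (line_items, customer, addresses, ...) stays behind;
// the next step can re-fetch it by ID if it ever needs more.
function leanPayload(order) {
  return {
    orderId: order.id,
    email: order.email,
  };
}
```

Then schedule with `payload: leanPayload(data)` rather than `payload: data`, keeping the persisted payload a few dozen bytes instead of a multi-kilobyte order object.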
Choosing a delay
The minimum delay is 10 seconds. The maximum is 400 days.
| Use case | Recommended delay |
|---|---|
| Acknowledge fast, process immediately | 10–30 seconds |
| Post-purchase email (not spammy) | "1 hour" to "1 day" |
| Review request follow-up | "7 days" |
| Subscription renewal reminder | "30 days" |
| Stagger concurrent API calls | 10 + i seconds (1-second stagger per item, 10-second minimum) |
Chain depth
Each call to `scheduleNextStep` increments the chain depth. A chain of start → A → B → C has depth 3. A fan-out of 100 branches from start has depth 1 — all branches are at the same level, not counted separately.
A platform-wide circuit breaker prevents runaway infinite loops, but there is no per-plan chain depth limit.
When NOT to use a step
Not every workflow needs multiple steps. Use a single `start` step when:
- The work completes in well under a second (e.g. just writing to `api.runStore` or calling one fast API)
- You are using an HTTP trigger where the caller expects a synchronous response
- You are in a scheduled trigger and the total work is a short loop that completes quickly
The overhead of scheduling a step (minimum 10 seconds) is unnecessary for work that could safely run in `start`.
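A single-step workflow in that spirit might look like the sketch below. The `api.runStore.set()` write is an assumption (this page only documents `push()`), so verify the exact run-store write method against your platform's API reference:

```js
class Workflow {
  async start(data, headers, api) {
    // All the work fits comfortably in one fast step — no scheduleNextStep call.
    // Assumption: runStore.set(key, value) exists alongside push().
    await api.runStore.set('lastSeenOrder', data.id);
    api.log(`Recorded order ${data.id}`);
  }
}
```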