Running Workflows

This page covers how to run workflows locally — for both development (testing on the canvas) and production (processing real triggers).

The Two Modes

Command           Purpose                                        Handles
---------------   --------------------------------------------   ------------------------
tensorify watch   Dev loop — test from the canvas editor         Test signals only
tensorify run     Production — process real incoming requests    Live webhook / API calls

These are not interchangeable. watch does not process real webhook requests. run does not receive canvas test signals.

Development: tensorify watch

The watch loop connects your local machine to the canvas editor. Use it while building a workflow to iterate quickly.

tensorify watch <workflowId>

How the dev loop works

Canvas Editor
    │
    │  Click "Test" → "Run to Selected"
    ▼
Tensorify Servers (signal routing)
    │
    │  WebSocket (your CLI process)
    ▼
Your Machine (tensorify watch)
    │
    │  Executes workflow locally
    ▼
Canvas Editor (outputs appear in node panels)

  1. You configure nodes on the canvas and click Test
  2. Select which node to run up to, then click Run to Selected
  3. Tensorify routes the signal to your active watch process
  4. The workflow runs locally — your Python environment, your file system, your secrets
  5. Node outputs appear in the editor so you can inspect each step

Iterating quickly

You do not need to restart watch when you change node settings. Edit the node in the canvas, click Test again, and the updated configuration is used immediately.

Production: tensorify run

tensorify run registers your machine as an active runner and processes real incoming webhook and API trigger requests.

tensorify run <workflowId>

When a request arrives at your deployed workflow endpoint, Tensorify's hooks service routes it to your runner process via WebSocket. The workflow executes locally and the result is recorded.

Keeping it alive

In production, use a process manager:

# Using pm2
pm2 start "tensorify run wf_abc123" --name my-workflow
pm2 save
pm2 startup

# Using systemd (create /etc/systemd/system/tensorify-workflow.service)
# Then: systemctl enable tensorify-workflow && systemctl start tensorify-workflow
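For the systemd route, you need a unit file before the enable/start step. A minimal sketch, assuming the CLI is on the system PATH and the API key is supplied via an environment file; the workflow ID, file paths, and env-file location here are illustrative placeholders, not values from this page:

```ini
# /etc/systemd/system/tensorify-workflow.service (illustrative)
[Unit]
Description=Tensorify workflow runner
After=network-online.target
Wants=network-online.target

[Service]
# wf_abc123 is a placeholder workflow ID; use your own
ExecStart=/usr/bin/env tensorify run wf_abc123
# Illustrative env file containing e.g. TENSORIFY_API_KEY=...
EnvironmentFile=/etc/tensorify/workflow.env
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

`Restart=always` mirrors what pm2 gives you: the runner reconnects automatically if the process dies or the machine reboots (after `systemctl enable`).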

Or use Docker:

FROM node:20-slim
RUN apt-get update && apt-get install -y python3 python3-pip && rm -rf /var/lib/apt/lists/*
RUN npm install -g @tensorify.io/cli
ENV TENSORIFY_API_KEY=""
CMD ["tensorify", "run", "<your-workflow-id>"]

Checking runner status

Go to the Runners page in the sidebar. You will see all active runners and their last heartbeat time. A runner must show as Connected before it can receive requests.

Exporting a Workflow

tensorify export bundles the compiled Python so you can run it without the Tensorify CLI at all.

tensorify export <workflowId> --output ./my-workflow

The exported bundle contains:

  • main.py — the entry point
  • utils.py — shared classes and helpers
  • requirements.txt — all Python dependencies

Run it directly:

cd my-workflow
pip install -r requirements.txt
python main.py

This is useful when you want to deploy to infrastructure that cannot maintain a persistent CLI connection, or when you want to fork the code and modify it by hand.
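For example, an exported bundle can be driven by plain cron on a host with no persistent connection. A sketch, assuming the bundle was exported to /opt/my-workflow and its requirements are already installed (the path, schedule, and log file are illustrative):

```
# crontab entry (illustrative): run the exported workflow every 15 minutes
*/15 * * * * cd /opt/my-workflow && python main.py >> /var/log/my-workflow.log 2>&1
```

Since the bundle is self-contained Python, any scheduler or serverless runtime that can invoke `python main.py` works the same way.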

Cloning Generated Code

tensorify clone downloads the raw generated Python source for inspection — without the additional run scaffolding that export includes.

tensorify clone <workflowId> ./workflow-source

Use this to audit the generated code, to understand what Python Tensorify produces from your canvas, or as a starting point for a fully custom implementation.
