Running Workflows
This page covers how to run workflows locally — for both development (testing on the canvas) and production (processing real triggers).
| Command | Purpose | Handles |
|---|---|---|
| `tensorify watch` | Dev loop — test from the canvas editor | Test signals only |
| `tensorify run` | Production — process real incoming requests | Live webhook / API calls |
These are not interchangeable: `watch` does not process real webhook requests, and `run` does not receive canvas test signals.
The `watch` loop connects your local machine to the canvas editor. Use it while building a workflow to iterate quickly.

```shell
tensorify watch <workflowId>
```
```
Canvas Editor
  │
  │  Click "Test" → "Run to Selected"
  ▼
Tensorify Servers (signal routing)
  │
  │  WebSocket (your CLI process)
  ▼
Your Machine (tensorify watch)
  │
  │  Executes workflow locally
  ▼
Canvas Editor (outputs appear in node panels)
```
- You configure nodes on the canvas and click Test
- Select which node to run up to, then click Run to Selected
- Tensorify routes the signal to your active `watch` process
- The workflow runs locally — your Python environment, your file system, your secrets
- Node outputs appear in the editor so you can inspect each step
You do not need to restart `watch` when you change node settings. Edit the node in the canvas, click Test again, and the updated configuration is used immediately.
`tensorify run` registers your machine as an active runner and processes real incoming webhook and API trigger requests.

```shell
tensorify run <workflowId>
```
When a request arrives at your deployed workflow endpoint, Tensorify's hooks service routes it to your runner process via WebSocket. The workflow executes locally and the result is recorded.
In production, use a process manager:

```shell
# Using pm2
pm2 start "tensorify run wf_abc123" --name my-workflow
pm2 save
pm2 startup

# Using systemd (create /etc/systemd/system/tensorify-workflow.service)
# Then: systemctl enable tensorify-workflow && systemctl start tensorify-workflow
```
Or use Docker:

```dockerfile
FROM node:20-slim
RUN apt-get update && apt-get install -y python3 python3-pip && rm -rf /var/lib/apt/lists/*
RUN npm install -g @tensorify.io/cli
ENV TENSORIFY_API_KEY=""
CMD ["tensorify", "run", "<your-workflow-id>"]
```
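Building and starting the container could then look like this (the image and container names are illustrative; pass the real API key at runtime rather than baking it into the image):

```shell
docker build -t tensorify-runner .

# --restart unless-stopped keeps the runner alive across crashes and reboots
docker run -d --restart unless-stopped \
  -e TENSORIFY_API_KEY="your-api-key" \
  --name my-workflow-runner \
  tensorify-runner
```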
Go to the Runners page in the sidebar. You will see all active runners and their last heartbeat time. A runner must show as Connected before it can receive requests.
`tensorify export` bundles the compiled Python so you can run it without the Tensorify CLI at all.

```shell
tensorify export <workflowId> --output ./my-workflow
```
The exported bundle contains:
- `main.py` — the entry point
- `utils.py` — shared classes and helpers
- `requirements.txt` — all Python dependencies
Run it directly:

```shell
cd my-workflow
pip install -r requirements.txt
python main.py
```
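Because the exported bundle has no CLI dependency, it also works for scheduled execution. A plain cron entry is enough; the paths and log location below are illustrative:

```shell
# crontab -e: run the exported workflow at the top of every hour,
# appending stdout and stderr to a log file
0 * * * * cd /opt/my-workflow && /usr/bin/python3 main.py >> /var/log/my-workflow.log 2>&1
```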
This is useful for scenarios where you want to deploy to infrastructure that cannot maintain a persistent CLI connection, or where you want to fork the code and modify it manually.
`tensorify clone` downloads the raw generated Python source for inspection — without the additional run scaffolding that export includes.

```shell
tensorify clone <workflowId> ./workflow-source
```
Use this to audit the generated code, understand what Python Tensorify produces from your canvas, or use it as a starting point for a fully custom implementation.
- Deploying Workflows — configure execution modes and trigger endpoints
- CLI Reference — all flags and options
