A conductor for the swarm
One Director routes intent to the right specialist, splits work in parallel, gathers results, and decides what to do next.
AgentiqFlow is a local-first operating system for AI agents. Run a Director, 25+ specialists, MCP servers, channels, cron, and your own vault — all on hardware you own. No SaaS in the loop. No tokens leaking to a third-party host.
Built for people who don't want to rent their intelligence — and who'd rather know exactly which process is running which model on which CPU.
Models run on your hardware. Data stays in your filesystem. Nothing is forwarded unless you say so. No telemetry. No fine-print analytics.
Every agent is a swappable module. Skills are scripts. Channels are adapters. MCP servers plug into the same bus. You ship the swarm you need.
Cron, queues, and durable workflows make agents wait, retry, and hand work back and forth. Your AI is a daemon, not a chat box.
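To make "retry" concrete, here is a minimal sketch of a durable-step primitive in plain TypeScript. The `runStep` name and its options are invented for illustration, not AgentiqFlow's actual API:

```typescript
// A durable step: run work, retry on failure with exponential backoff,
// surface the error only once attempts are exhausted. Illustrative only.
async function runStep<T>(
  name: string,
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 1_000 } = {},
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= attempts) throw err; // exhausted: hand the failure back
      const delay = baseDelayMs * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
      console.warn(`${name}: attempt ${attempt} failed, retrying in ${delay}ms`);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```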
The Director is the swarm's front door: it classifies each request, hands it to the right specialist, fans subtasks out in parallel, and merges the results before deciding the next step.
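Purely illustrative, with invented names: the fan-out/gather loop a Director performs looks roughly like this in TypeScript:

```typescript
// Route each named subtask to its specialist, run them in parallel,
// then gather the results for the Director's next decision.
type Specialist = (task: string) => Promise<string>;

const specialists: Record<string, Specialist> = {
  research: async (task) => `notes on: ${task}`,
  code: async (task) => `patch for: ${task}`,
};

async function direct(subtasks: Record<string, string>): Promise<string> {
  const results = await Promise.all(
    Object.entries(subtasks).map(async ([name, task]) => {
      const run = specialists[name];
      if (!run) throw new Error(`no specialist registered for "${name}"`);
      return `${name}: ${await run(task)}`;
    }),
  );
  return results.join("\n"); // merged output the Director acts on next
}
```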
Browse, install, fork. Skills are signed scripts your agents can call — from web crawl to invoice gen to market scrape.
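For a feel of the shape, here is a hypothetical Node skill. The JSON-over-stdio contract is an assumption for illustration, not the documented skill interface:

```typescript
// A hypothetical skill: read a JSON task from stdin, do the work,
// write a JSON result to stdout. Check the real skill spec before relying
// on this contract.
import process from "node:process";

async function main(): Promise<void> {
  const chunks: Buffer[] = [];
  for await (const chunk of process.stdin) chunks.push(chunk as Buffer);
  const task = JSON.parse(Buffer.concat(chunks).toString("utf8"));

  // Real work goes here: crawl a page, render an invoice, scrape a market.
  const result = { ok: true, received: task };

  process.stdout.write(JSON.stringify(result) + "\n");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```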
Native iMessage, Discord, email, Slack. Agents reply, react, and send push notifications when long jobs finish.
Hook in any Model Context Protocol server. GitHub, Linear, Gmail, Drive, Stripe, Postgres. Connect once, reuse forever.
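Registration might look something like the common `mcpServers` convention below. The two server packages and the `GITHUB_PERSONAL_ACCESS_TOKEN` variable are real parts of the MCP ecosystem; the exact schema and the `vault:` reference syntax are assumptions:

```typescript
// Hypothetical registration shape: each MCP server is a process launched
// with a command, speaking the protocol over stdio.
const mcpServers = {
  github: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-github"],
    env: { GITHUB_PERSONAL_ACCESS_TOKEN: "vault:github_token" }, // vault ref is invented
  },
  postgres: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"],
  },
};
```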
Visual graph editor for repeatable jobs. Branches, retries, queues, timers, embed steps — composable like Lego, durable like a job runner.
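A sketch of what such a graph could compile down to — the schema is invented for illustration, not the real workflow format:

```typescript
// Invented schema: a cron-triggered workflow with a retrying step,
// a dependency edge, and a branch.
const weeklyDigest = {
  trigger: { cron: "0 9 * * MON" }, // Mondays at 09:00
  steps: [
    { id: "fetch", agent: "research", retry: { attempts: 3 } },
    { id: "summarize", agent: "writer", after: ["fetch"] },
    {
      id: "deliver",
      after: ["summarize"],
      branch: {
        if: "summary.length > 0",
        then: { channel: "email" },
        else: { channel: "slack", note: "nothing to report" },
      },
    },
  ],
};
```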
Notes, wiki-links, sessions, transcripts. Auto-embedded, semantic search, agent-shared. Yours forever — plain markdown on disk.
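Under the hood, "auto-embedded, semantic search" typically reduces to cosine similarity over stored vectors, as in this self-contained sketch (the embedding call itself would go through whichever provider you configured):

```typescript
// Rank notes by cosine similarity between a query embedding and each
// note's stored embedding; return the top k. Generic, not AgentiqFlow's API.
type Note = { path: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function search(query: number[], notes: Note[], k = 5): Note[] {
  return [...notes]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```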
API keys, OAuth tokens, browser sessions sealed in a per-machine vault. Agents request scoped access; nothing leaks to logs.
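The shape of scoped access, with invented names (`Vault`, `request`): one credential, one scope, a lease that expires on its own:

```typescript
// Hypothetical sketch: the agent asks the vault for exactly one credential
// with an explicit scope and TTL, and never sees the rest of the vault.
interface Vault {
  request(opts: { key: string; scope: string; ttlSeconds: number }): Promise<string>;
}

async function listInvoices(vault: Vault): Promise<void> {
  const token = await vault.request({
    key: "stripe_api_key",
    scope: "read:invoices", // no broader access is granted
    ttlSeconds: 300, // lease expires on its own; raw value never logged
  });
  // ...call the payments API with `token` here.
  void token;
}
```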
Live stream of dispatches, tool calls, costs, latencies. Drill into a single execution trace, replay it, fork it.
Webull trading, Apple Photos, Calendar, Drive, Notion, Browserbase, Vercel deploy — first-class connectors that respect your scope.
Every layer is on your machine. Outbound traffic happens only when you call a model provider — and you choose which calls leave the box.
Architecture deep-dive →
Dashboard, Chat, Live, Memory graph, Workflows. Loads from disk, runs in the browser at localhost:8858.
Long-lived host process. Routing, MCP host, channels, queues, cron, license. Streams SSE end-to-end.
Director, swarm scheduler, persona memory, skills runner, providers, embed pipeline.
~/.agentiqflow stores secrets (sealed), sessions, tasks, workflows, and your Obsidian-style memory vault.
Anthropic, OpenAI, Gemini, NVIDIA NIM, OpenRouter, Ollama Cloud, local Ollama. Swap per agent.
Configure once. Route per-agent. Mix frontier with open-weights, fast with cheap, hosted with local. Failover and budget caps are built in.
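An invented config shape showing what per-agent routing with failover and caps could look like; the model names are examples only, not recommendations:

```typescript
// Hypothetical per-agent routing: mix a hosted frontier model, a cheaper
// hosted model, and a local Ollama model, with a failover and budget caps.
const routing = {
  director: { provider: "anthropic", model: "claude-sonnet-4", fallback: "openrouter" },
  research: { provider: "openai", model: "gpt-4.1" },
  summarizer: { provider: "ollama", model: "llama3.1:8b" }, // local and free
  budgets: { dailyUsd: 10, perAgentUsd: { research: 5 } },
};
```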
Model spend is on you and your provider — we never mark up tokens. AgentiqFlow charges only for the runtime, channels, sync, and support tier you pick.
For solo operators.
Download
For power users who run agents 24/7.
Start 14-day trial
For teams + agencies running flows for clients.
Talk to sales
Signed with ed25519. SHA-256 published. Auto-update on stable channel. Source on GitHub.
Does my data go through your servers?
No. The runtime is local. The only network calls are: model providers you configure (using your keys), MCP servers you install, and our update / license server. The update server only sees your license key + device ID + app version.
Do I need a GPU?
No. Most agents work fine through hosted providers. You only need a GPU if you want to run local Ollama models or local video / image diffusion.
How is model usage billed?
We never mark up tokens. You bring your own API keys (or run local). The app also offers optional per-agent budget caps, daily ceilings, and prompt-cache analytics.
What's the difference between skills, MCP servers, and power-ups?
Skills are local scripts your agents can call (Python or Node). MCP servers are tool servers that follow the Model Context Protocol — agents discover them and use their tools. Power-ups are first-party native integrations (Webull, iMessage, Drive) that wrap platform APIs and store credentials in your vault.
Does it support teams?
The Studio plan supports team workspaces with shared workflows, audit logs, role-based access, and SSO/SCIM. Memory and vaults stay device-local; workspace artifacts sync end-to-end encrypted.
Is it open source?
The core runtime is MIT-licensed and on GitHub. Premium components (channels, cloud sync, marketplace) are source-available under a paid license.
Does it work offline?
Yes. With local Ollama models, you can run a fully offline swarm. The license server tolerates 30 days offline without a re-check.
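For illustration, a fully offline setup could pin every agent to local Ollama. The config shape is invented; the default port (11434) and the model tags are real Ollama conventions:

```typescript
// Hypothetical offline configuration: no hosted providers, every agent
// and the embedding pipeline served by local Ollama.
const offline = {
  providers: { ollama: { host: "http://localhost:11434" } },
  agents: {
    director: { provider: "ollama", model: "llama3.1:70b" },
    embedder: { provider: "ollama", model: "nomic-embed-text" },
  },
};
```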
How does the trial work?
14 days, no card required. Cancel any time — the Personal tier always remains free.
Install in 90 seconds. Bring your keys. Wire up your channels. Watch your swarm do work while you do other things.