TheConsultant
Checking authentication...
🚀 Provisioning Agent
Server
🖥️
Finding server
☁️
Selecting provider
Creating VM
🌐
Assigning IP
🔑
SSH connection
🐳
Installing Docker
📦
Pushing image
Server ready
Agent
🤖
Starting container
📱
QR ready
Waiting for scan...
💬
WhatsApp connected
AI Setup
🧠
Free AI providers (API)
Groq, Gemini, OpenRouter, HuggingFace, Together
🌐
Browser-based AI providers
ChatGPT, Claude.ai, Poe, Perplexity
ETA
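The wizard above walks through the provisioning steps in order. A minimal sketch of how such a pipeline might be sequenced, with a per-step status callback so the UI can show progress and an ETA — all step bodies and names here are illustrative assumptions, not the real orchestrator API:

```ts
// Assumed sketch of the provisioning flow shown in the wizard above.
type Step = { label: string; run: () => Promise<void> };

async function provisionAgent(report: (label: string, done: boolean) => void) {
  const steps: Step[] = [
    { label: "Selecting provider", run: async () => {/* pick a cloud provider with capacity */} },
    { label: "Creating VM",        run: async () => {/* cloud provider API call */} },
    { label: "Assigning IP",       run: async () => {/* wait for a public IPv4 */} },
    { label: "SSH connection",     run: async () => {/* retry SSH until the VM is reachable */} },
    { label: "Installing Docker",  run: async () => {/* run the Docker install script */} },
    { label: "Pushing image",      run: async () => {/* push or pull the agent image */} },
    { label: "Starting container", run: async () => {/* docker run -p <port>:3000 agent */} },
    { label: "QR ready",           run: async () => {/* poll the container's /qr endpoint */} },
  ];
  for (const step of steps) {
    report(step.label, false);
    await step.run();
    report(step.label, true);
  }
}
```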
Fleet Overview
Live platform health — auto-refreshes every 30s
Total Users
Active Containers
WA Connected
Local (Admin VM)
Worker VMs
Overloaded VMs
Consultant AI
Check
VM Nodes
Worker VM health, capacity and container management
Containers
Every agent container — user, VM, ports, and live resource consumption. Stats refresh every 30s.
Total Containers
Running
WA Connected
Stats Reachable
Avg CPU
Avg Memory
User | Container | VM / Location | Port (purpose) | Status | CPU | Memory | Network I/O | Actions
Agent container exposes host tcp/<port> → container tcp/3000 (Puppeteer HTTP endpoints /health, /qr, /logs, /page-debug, plus a WebSocket back to the orchestrator). "—" under CPU/Memory means the worker VM's Docker socket was unreachable or the container is stopped.
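A minimal probe against that mapped port, assuming the host IP and port come from the Containers table; only the endpoint paths are taken from the dashboard text, the response handling is a sketch:

```ts
// Health probe for one agent container (host/port from the Containers table).
async function probeAgent(host: string, port: number): Promise<boolean> {
  try {
    const res = await fetch(`http://${host}:${port}/health`, {
      signal: AbortSignal.timeout(5000), // don't hang on an unreachable worker VM
    });
    return res.ok;
  } catch {
    // Unreachable: the dashboard shows "—" in this case (stopped container
    // or Docker socket not reachable on the worker VM).
    return false;
  }
}
```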
Users
Subscription and container management
User | Plan | Status | User ID | Container Name | Server IP | Container | Joined | Actions
Auto-Healer
Loading...
Automated recovery events and escalations
Total Events
Success Rate
Escalated
Critical
Revenue & Growth
Subscription metrics and MRR breakdown
MRR
VM Expense
Profit
Active Subs
Trial Users
Converting
Churn Rate
Plan Distribution
Subscription Status
VM Expense by Provider
Migration Log
Container migration history
User | From | To | Status | By | Time | Error
Logs & Activity
Container logs, live events, QR diagnostics, and pending commits
Click Refresh to load...
VM Providers
API key status, fallback order, and provider management
Loading...
Fly.io Container Backend
Run agent containers as Fly Machines — no worker VMs required
backend: —
Backend
Live Machines
DB Containers
Running
WA Connected
⚙️ FLY.IO CONFIGURATION
Controls which backend new /containers/start requests use. Existing containers keep their original backend.
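A sketch of that rule, under the assumption that the orchestrator stores the backend on each container record when it is created; the backend identifiers are placeholders:

```ts
// Assumed backend identifiers; only the behavior (new containers follow the
// configured backend, existing ones keep theirs) comes from the text above.
type Backend = "docker-vm" | "fly-machines";

interface ContainerRecord { userId: string; backend: Backend; }

function chooseBackend(configured: Backend, existing?: ContainerRecord): Backend {
  // Flipping the toggle never migrates an existing container implicitly.
  return existing ? existing.backend : configured;
}
```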
🖥️ FLY MACHINES
Machine ID | Name | State | Region | User | Private IP | CPU / Mem | Actions
Loading…
AI Request Logs
Every LLM request — see what was sent, which AI replied, and what it said
Click Refresh to load...
Consultant AI
Chat directly with the local Ollama instance on the admin server
Status: unknown
Send a message to start chatting with Consultant AI
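Under the hood this is a call to Ollama's chat API on the admin server. A sketch using Ollama's documented /api/chat endpoint — the URL and default model name are assumptions:

```ts
// One-shot chat with the local Ollama instance (non-streaming).
async function askConsultant(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",                              // assumed default model
      messages: [{ role: "user", content: prompt }],
      stream: false,                                // single JSON response
    }),
  });
  const data = await res.json();
  return data.message.content;
}
```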
Fallback Default Model
Used automatically when all cloud LLM providers fail. Saved to deploy/.env as OLLAMA_MODEL — takes effect immediately, no restart needed.
current: loading…
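A sketch of the fallback order described above: try each cloud provider in turn, and only when all of them fail route the request to local Ollama using OLLAMA_MODEL. The provider list mirrors the AI Setup step; callProvider is a stand-in for the real per-provider clients:

```ts
// Placeholder for the real cloud LLM clients (Groq, Gemini, OpenRouter, ...).
async function callProvider(name: string, prompt: string): Promise<string> {
  throw new Error(`provider ${name} unavailable`);
}

// Last-resort generation via the local Ollama instance.
async function callOllama(model: string, prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  return (await res.json()).response;
}

async function completeWithFallback(prompt: string): Promise<string> {
  for (const name of ["groq", "gemini", "openrouter", "huggingface", "together"]) {
    try {
      return await callProvider(name, prompt);
    } catch { /* fall through to the next provider */ }
  }
  const model = process.env.OLLAMA_MODEL ?? "llama3"; // value saved to deploy/.env
  return callOllama(model, prompt);
}
```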
Server Specs
Loading...
Loaded in Memory
Models currently consuming RAM/VRAM — unload to free resources
Loading...
Installed Models
Manage Ollama models on the admin server
Loading models...
Install Model
Pick from popular models or enter a custom name from ollama.com/library. Cards are cross-checked against the admin server — already installed models show an INSTALLED badge.
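A sketch of that cross-check, assuming the admin server's Ollama API is reachable locally; /api/tags is Ollama's endpoint for listing installed models, and the popular-models list passed in is illustrative:

```ts
// Mark which popular model cards should show the INSTALLED badge.
async function markInstalled(popular: string[]): Promise<Map<string, boolean>> {
  const res = await fetch("http://localhost:11434/api/tags");
  const { models } = (await res.json()) as { models: { name: string }[] };
  const installed = new Set(models.map((m) => m.name.split(":")[0])); // drop the tag suffix
  return new Map(popular.map((name) => [name, installed.has(name)])); // true → INSTALLED badge
}
```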
System
Maintenance toggles for automated pipelines
Chrome Web Store publishing
When enabled, every push to main that touches src/extension/** bumps the patch version, uploads the zip to Chrome Web Store, and publishes to trusted testers. When paused, pushes skip the whole pipeline — manual publishes via the workflow's workflow_dispatch button still run.
loading…
Loading...
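The real pipeline is a CI workflow; the gate logic it applies, as described above, can be sketched like this (names are assumptions):

```ts
// Decide whether a run should publish to the Chrome Web Store.
function shouldPublish(opts: {
  publishingEnabled: boolean;                 // the dashboard toggle
  trigger: "push" | "workflow_dispatch";
  changedFiles: string[];
}): boolean {
  if (opts.trigger === "workflow_dispatch") return true;  // manual publishes always run
  if (!opts.publishingEnabled) return false;              // paused: skip the whole pipeline
  return opts.changedFiles.some((f) => f.startsWith("src/extension/"));
}
```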
Worker-VM Ollama fallback (auto-provisioning default)
Controls whether auto-provisioned Worker VMs install Ollama during cloud-init as a last-resort LLM fallback. Affects billing-triggered provisioning (new subscribers) and monitor-triggered auto-scaling. When off, cloud-init on those new VMs skips the Ollama install entirely (no binary, no systemd unit, no model pull, no ufw rule) — tenants on those workers fall through to the admin-VM Ollama instead. Manual provisioning from the VMs tab has its own per-VM checkbox and is unaffected by this toggle.
loading…
Loading...
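A sketch of how this toggle might shape the cloud-init user-data for an auto-provisioned Worker VM. The exact commands are assumptions; the point is that every Ollama-related step (binary install, systemd unit, model pull, ufw rule) is emitted only when the fallback is enabled:

```ts
// Build cloud-init user-data for a new Worker VM.
function buildWorkerCloudInit(installOllama: boolean): string {
  const base = [
    "curl -fsSL https://get.docker.com | sh",           // Docker is always installed
  ];
  const ollama = installOllama
    ? [
        "curl -fsSL https://ollama.com/install.sh | sh", // binary + systemd unit
        "ollama pull llama3",                            // assumed default model
        "ufw allow 11434/tcp",                           // open the Ollama port
      ]
    : []; // skipped entirely: tenants fall through to the admin-VM Ollama
  const cmds = [...base, ...ollama].map((c) => `  - ${c}`);
  return ["#cloud-config", "runcmd:", ...cmds].join("\n");
}
```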