TheConsultant
Checking authentication...
🚀 Provisioning Agent
Server
🖥️
Finding server
☁️
Selecting provider
Creating VM
🌐
Assigning IP
🔑
SSH connection
🐳
Installing Docker
📦
Pushing image
Server ready
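The provisioning steps above form an ordered pipeline; a minimal sketch of how such a runner might work (the step names and the `run_pipeline` helper are illustrative assumptions, not the platform's actual code):

```python
# Hypothetical sketch of the provisioning flow shown above: each step
# runs in order, and the runner stops at the first failure so the UI
# can report exactly where provisioning stalled.

PROVISION_STEPS = [
    "selecting_provider",
    "creating_vm",
    "assigning_ip",
    "ssh_connection",
    "installing_docker",
    "pushing_image",
]

def run_pipeline(steps, execute):
    """Run steps in order; return (completed_steps, failed_step_or_None)."""
    completed = []
    for step in steps:
        if not execute(step):      # execute() stands in for the real work
            return completed, step
        completed.append(step)
    return completed, None         # None means "Server ready"

# Example: every step succeeds, so nothing is reported as failed.
done, failed = run_pipeline(PROVISION_STEPS, lambda s: True)
```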
Agent
🤖
Starting container
📱
QR ready
Waiting for scan...
💬
WhatsApp connected
AI Setup
🧠
Free AI providers (API)
Groq, Gemini, OpenRouter, HuggingFace, Together
🌐
Browser-based AI providers
ChatGPT, Claude.ai, Poe, Perplexity
ETA
Fleet Overview
Live platform health — auto-refreshes every 30s
Total Users
Active Containers
WA Connected
Local (Admin VM)
Worker VMs
Overloaded VMs
Consultant AI
Check
VM Nodes
Worker VM health, capacity and container management
Users
Subscription and container management
User | Plan | Status | User ID | Container Name | Server IP | Container | Joined | Actions
Auto-Healer
Loading...
Automated recovery events and escalations
Total Events
Success Rate
Escalated
Critical
Live Activity
Real-time platform events via WebSocket
Waiting for events...
Revenue & Growth
Subscription metrics and MRR breakdown
MRR
VM Expense
Profit
Active Subs
Trial Users
Converting
Churn Rate
Plan Distribution
Subscription Status
VM Expense by Provider
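The Revenue & Growth metrics above can be derived from a few standard formulas; a sketch using common definitions (these are conventional SaaS formulas, not necessarily the exact ones the dashboard computes):

```python
# Hypothetical metric helpers for the Revenue & Growth panel.
# Formulas follow common SaaS conventions; the dashboard's own
# definitions may differ.

def profit(mrr: float, vm_expense: float) -> float:
    """Profit = monthly recurring revenue minus total VM expense."""
    return mrr - vm_expense

def churn_rate(churned: int, active_at_start: int) -> float:
    """Monthly churn = cancelled subs / subs active at period start."""
    if active_at_start == 0:
        return 0.0
    return churned / active_at_start

print(profit(500.0, 120.0))          # 380.0
print(churn_rate(3, 60))             # 0.05
```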
Migration Log
Container migration history
User | From | To | Status | By | Time | Error
Logs & Activity
Container logs and agent-start timeline
Click Refresh to load...
QR Diagnostics
QR code flow diagnostics — agent WS status, QR-related logs
Click Refresh to load...
VM Providers
API key status, fallback order, and provider management
Loading...
Fly.io Container Backend
Run agent containers as Fly Machines — no worker VMs required
backend: —
Backend
Live Machines
DB Containers
Running
WA Connected
⚙️ FLY.IO CONFIGURATION
Controls which backend new /containers/start requests use. Existing containers keep their original backend.
🖥️ FLY MACHINES
Machine ID | Name | State | Region | User | Private IP | CPU / Mem | Actions
Loading...
AI Request Logs
Every LLM request — see what was sent, which AI replied, and what it said
Click Refresh to load...
Pending Commits
Unmerged branches from GitHub & Claude Code sessions
Loading...
Consultant AI
Chat directly with the local Ollama instance on the admin server
Status: unknown
Send a message to start chatting with Consultant AI
Server Specs
Loading...
Loaded in Memory
Models currently consuming RAM/VRAM — unload to free resources
Loading...
Installed Models
Manage Ollama models on the admin server
Loading models...
Install Model
Pick from popular models or enter a custom name from ollama.com/library