Fleet Overview
Live platform health — auto-refreshes every 30s
Total Users
—
Active Containers
—
WA Connected
—
Local (Admin VM)
—
Worker VMs
—
Overloaded VMs
—
Consultant AI
Check
VM Nodes
Worker VM health, capacity and container management
Containers
Every agent container — user, VM, ports, and live resource consumption. Stats refresh every 30s.
Total Containers
—
Running
—
WA Connected
—
Stats Reachable
—
Avg CPU
—
Avg Memory
—
| User | Container | VM / Location | Port (purpose) | Status | CPU | Memory | Network I/O | Actions |
|---|---|---|---|---|---|---|---|---|
Each agent container exposes host tcp/<port> → container tcp/3000 (Puppeteer HTTP: /health, /qr, /logs, /page-debug, plus a WebSocket back to the orchestrator). "—" under CPU/Mem means the worker VM's Docker socket was unreachable or the container is stopped.
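The summary cards above (Running, Stats Reachable, Avg CPU, Avg Memory) can be derived from the per-container rows. A minimal sketch, assuming a hypothetical row shape where `None` models the "—" case (Docker socket unreachable or container stopped); such rows are excluded from the averages:

```python
def summarize(containers):
    """containers: list of dicts like
    {"status": "running", "cpu_pct": 12.5, "mem_pct": 40.0};
    cpu_pct/mem_pct are None when stats are unreachable."""
    reachable = [c for c in containers
                 if c.get("cpu_pct") is not None and c.get("mem_pct") is not None]
    return {
        "total": len(containers),
        "running": sum(1 for c in containers if c.get("status") == "running"),
        "stats_reachable": len(reachable),
        # averages over reachable rows only; None renders as the dash
        "avg_cpu": round(sum(c["cpu_pct"] for c in reachable) / len(reachable), 1)
                   if reachable else None,
        "avg_mem": round(sum(c["mem_pct"] for c in reachable) / len(reachable), 1)
                   if reachable else None,
    }
```

The field names are illustrative; the dashboard's real stats payload may differ.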
Users
Subscription and container management
| User | Plan | Status | User ID | Container Name | Server IP | Container | Joined | Actions |
|---|---|---|---|---|---|---|---|---|
Auto-Healer
Loading...
Automated recovery events and escalations
Total Events
—
Success Rate
—
Escalated
—
Critical
—
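The four Auto-Healer cards reduce to simple counts over the event log. A sketch under an assumed event schema (the real schema may differ):

```python
def healer_summary(events):
    """events: list of dicts like
    {"outcome": "recovered" | "escalated", "severity": "warning" | "critical"}."""
    total = len(events)
    recovered = sum(1 for e in events if e["outcome"] == "recovered")
    return {
        "total_events": total,
        # None renders as the dash when there are no events yet
        "success_rate": round(100 * recovered / total, 1) if total else None,
        "escalated": sum(1 for e in events if e["outcome"] == "escalated"),
        "critical": sum(1 for e in events if e["severity"] == "critical"),
    }
```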
Revenue & Growth
Subscription metrics and MRR breakdown
MRR
—
—
VM Expense
—
—
Profit
—
—
Active Subs
—
—
Trial Users
—
Converting
Churn Rate
—
—
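The revenue cards relate arithmetically: Profit is MRR minus VM expense, and churn rate is churned subscribers over subscribers at period start. A hypothetical sketch of that computation (units and period definition are assumptions):

```python
def revenue_summary(mrr, vm_expense, churned, subs_at_start):
    """mrr and vm_expense in the same currency unit; churn measured
    over one billing period."""
    return {
        "mrr": mrr,
        "vm_expense": vm_expense,
        "profit": mrr - vm_expense,
        "churn_rate": round(100 * churned / subs_at_start, 1)
                      if subs_at_start else None,  # dash when no base
    }
```

For example, $1,000 MRR against $250 of VM spend with 3 of 60 subscribers churning gives $750 profit and 5.0% churn.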
Plan Distribution
Subscription Status
VM Expense by Provider
Migration Log
Container migration history
| User | From | To | Status | By | Time | Error |
|---|---|---|---|---|---|---|
Logs & Activity
Container logs, live events, QR diagnostics, and pending commits
Click Refresh to load...
VM Providers
API key status, fallback order, and provider management
Loading...
Fly.io Container Backend
Run agent containers as Fly Machines — no worker VMs required
Backend
—
Live Machines
—
DB Containers
—
Running
—
WA Connected
—
⚙️ FLY.IO CONFIGURATION
Controls which backend new /containers/start requests use. Existing containers keep their original backend.
🖥️ FLY MACHINES
| Machine ID | Name | State | Region | User | Private IP | CPU / Mem | Actions |
|---|---|---|---|---|---|---|---|
| Loading… | | | | | | | |
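Most of the table's columns map directly onto fields of a Machines-API machine object. A sketch of that mapping, with field paths as I understand the Fly.io Machines API (verify against the current API docs; the user column is app-specific and omitted here):

```python
def machine_row(m):
    """Map one Fly Machines API object to the table's columns.
    Missing fields render as the dashboard's dash placeholder."""
    guest = m.get("config", {}).get("guest", {})
    cpu_mem = (f'{guest["cpus"]} vCPU / {guest["memory_mb"]} MB'
               if guest else "—")
    return {
        "machine_id": m.get("id", "—"),
        "name": m.get("name", "—"),
        "state": m.get("state", "—"),
        "region": m.get("region", "—"),
        "private_ip": m.get("private_ip", "—"),
        "cpu_mem": cpu_mem,
    }
```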
AI Request Logs
Every LLM request — see what was sent, which AI replied, and what it said
Click Refresh to load...
Consultant AI
Chat directly with the local Ollama instance on the admin server
Status: unknown
Send a message to start chatting with Consultant AI
Fallback Default Model
Used automatically when all cloud LLM providers fail. Saved to deploy/.env as OLLAMA_MODEL — takes effect immediately, no restart needed.
Current: loading…
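Persisting the fallback model means rewriting one `KEY=value` line in deploy/.env while leaving the rest of the file alone. A minimal sketch of that line rewrite as a pure function (file I/O omitted; the function name is illustrative):

```python
def set_env_var(env_text, key, value):
    """Return env_text with KEY=value replaced, or appended if absent.
    All other lines are preserved verbatim."""
    prefix = f"{key}="
    out, replaced = [], False
    for line in env_text.splitlines():
        if line.startswith(prefix):
            out.append(f"{key}={value}")
            replaced = True
        else:
            out.append(line)
    if not replaced:
        out.append(f"{key}={value}")
    return "\n".join(out) + "\n"
```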
Server Specs
Loading...
Loaded in Memory
Models currently consuming RAM/VRAM — unload to free resources
Loading...
Installed Models
Manage Ollama models on the admin server
Loading models...
Install Model
Pick from popular models or enter a custom name from ollama.com/library. Cards are cross-checked against the admin server; already-installed models show an INSTALLED badge.
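The cross-check amounts to comparing card names against the names the admin server reports, with one wrinkle: Ollama appends `:latest` when a model name carries no explicit tag, so `llama3` on a card should match an installed `llama3:latest`. A sketch of that matching (function name is illustrative):

```python
def mark_installed(cards, installed):
    """cards: model names shown on the picker.
    installed: model names reported by the admin server."""
    def norm(name):
        # treat an untagged name as its ":latest" tag
        return name if ":" in name else f"{name}:latest"
    have = {norm(n) for n in installed}
    return [{"name": c, "installed": norm(c) in have} for c in cards]
```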
System
Maintenance toggles for automated pipelines
Chrome Web Store publishing
When enabled, every push to main that touches src/extension/** bumps the patch version, uploads the zip to the Chrome Web Store, and publishes to trusted testers.
When paused, pushes skip the whole pipeline — manual publishes via the workflow's workflow_dispatch button still run.
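The push-triggered/paused-but-manual behavior maps naturally onto GitHub Actions workflow syntax. A hypothetical skeleton (job names, the pause variable, and the publish step are illustrative, not the repo's actual config):

```yaml
# Hypothetical .github/workflows/publish-extension.yml
on:
  push:
    branches: [main]
    paths: ['src/extension/**']
  workflow_dispatch: {}   # manual publishes keep working while paused

jobs:
  publish:
    # skip automatic runs when the dashboard toggle pauses the pipeline
    if: github.event_name == 'workflow_dispatch' || vars.CWS_PUBLISHING_PAUSED != 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm version patch --no-git-tag-version   # bump patch version
      # ...zip src/extension and upload/publish via the Chrome Web Store API...
```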
loading…
Loading...