Deploy the MeshOptixIQ Network Discovery engine into your production environment.

The host running the discovery agent must have:

**Minimum**
- CPU: 2 vCPUs
- RAM: 4 GB
- Disk: 20 GB SSD
- OS: Linux (Ubuntu 22.04 LTS)
MeshOptixIQ supports both graph and relational backends. Select the backend with the `GRAPH_BACKEND` environment variable.

**Neo4j** (best for deep graph traversal and visualization):

```bash
GRAPH_BACKEND=neo4j
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=secret
```

**PostgreSQL** (best for operational stability and recursive CTE optimizations):

```bash
GRAPH_BACKEND=postgres
POSTGRES_DSN=postgresql://user:pass@localhost:5432/db

# Connection pool tuning (optional — defaults shown)
POSTGRES_POOL_MIN=2
POSTGRES_POOL_MAX=10
```
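The backend switch can be illustrated with a small sketch. The helper below is illustrative only (it is not the MeshOptixIQ loader); it mirrors the environment-variable contract shown above:

```python
import os

def make_backend() -> dict:
    """Pick a graph backend config from GRAPH_BACKEND.

    Illustrative sketch: returns a plain dict rather than a real driver,
    but uses the same variable names and defaults documented above.
    """
    backend = os.environ.get("GRAPH_BACKEND", "neo4j").lower()
    if backend == "neo4j":
        return {
            "kind": "neo4j",
            "uri": os.environ.get("NEO4J_URI", "bolt://localhost:7687"),
            "user": os.environ.get("NEO4J_USER", "neo4j"),
            "password": os.environ["NEO4J_PASSWORD"],  # required, no default
        }
    if backend == "postgres":
        return {
            "kind": "postgres",
            "dsn": os.environ["POSTGRES_DSN"],  # required, no default
            "pool_min": int(os.environ.get("POSTGRES_POOL_MIN", "2")),
            "pool_max": int(os.environ.get("POSTGRES_POOL_MAX", "10")),
        }
    raise ValueError(f"unsupported GRAPH_BACKEND: {backend!r}")
```

Note that only the two credentialed values have no default: a missing `NEO4J_PASSWORD` or `POSTGRES_DSN` fails fast at startup rather than at first query.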
The fastest way to evaluate MeshOptixIQ before committing to a production deployment. No database, no license key, no configuration required — a pre-seeded 20-device campus + datacenter network starts in seconds.
```bash
docker run -p 8000:8000 \
  -e MESHOPTIXIQ_DEMO_MODE=true \
  -e API_KEY=demo \
  -e GRAPH_BACKEND=inmemory \
  meshoptixiq/meshoptixiq:latest

# Open http://localhost:8000 (API key: demo)
```
Or use the pre-built compose variant:
```bash
docker compose -f docker-compose.demo.yml up
```
Demo data (300 endpoints, 58 firewall rules, all query categories) resets on restart. When you are ready for a real deployment, pick one of the options below.
Run the discovery engine as a stateless container. It creates the graph and exits, or runs on a schedule.
```bash
# 1. Pull the image
docker pull meshoptixiq/meshoptixiq:latest

# 2. Run a one-off discovery (--rm removes the container when it exits)
docker run --rm \
  -e NEO4J_URI=bolt://neo4j.corp.local:7687 \
  -e NEO4J_PASSWORD=my-secret-pw \
  -v /opt/meshoptix/config.yaml:/app/config.yaml \
  meshoptixiq/meshoptixiq:latest
```
For persistent deployments on a dedicated VM.
```ini
[Unit]
Description=MeshOptixIQ Discovery Agent
After=network.target

[Service]
Type=simple
User=meshoptix
ExecStart=/opt/meshoptix/venv/bin/python -m network_discovery.start_agent
Restart=always

[Install]
WantedBy=multi-user.target
```
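To activate the unit, install it and enable it with systemd. The file name `meshoptixiq.service` below is an assumption; use whatever name you save the unit under:

```shell
sudo cp meshoptixiq.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now meshoptixiq
sudo systemctl status meshoptixiq
```

`enable --now` both starts the agent immediately and registers it to start at boot; `Restart=always` in the unit then handles crash recovery.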
The official Helm chart is the recommended path for Pro and Enterprise Kubernetes deployments. It renders all required resources: API Deployment (with optional HPA), collector worker Deployment, dispatcher CronJob, ConfigMap, ServiceAccount, Secret, Service, and optional Ingress.
```bash
helm install meshoptixiq helm/meshoptixiq/ \
  --set api.key=changeme \
  --set neo4j.uri=bolt://neo4j:7687 \
  --set neo4j.password=secret \
  --set redis.url=redis://redis:6379 \
  --set collector.enabled=true
```
Optional add-ons:
```bash
# Enable HPA (Horizontal Pod Autoscaler) and Ingress
helm upgrade meshoptixiq helm/meshoptixiq/ \
  --set api.autoscaling.enabled=true \
  --set ingress.enabled=true \
  --set ingress.host=meshoptixiq.corp.local
```
The chart includes SSE-safe nginx ingress annotations (`proxy_read_timeout 3600`) required for the `/events` endpoint. See `helm/meshoptixiq/values.yaml` for the full options reference.
Redis is optional. Without it, MeshOptixIQ runs in single-instance mode with in-process rate limiting and snapshot storage. Setting `REDIS_URL` activates cluster mode, which enables:

- Cluster-wide RBAC reloads (`POST /admin/rbac/reload` propagates to every pod)
- Distributed collection workers (`meshq collect --worker`)

```bash
# Activate cluster mode — set REDIS_URL on every API container
REDIS_URL=redis://redis:6379
```
For a 3-pod API cluster with nginx load balancer, use the included compose file:
```bash
docker compose -f docker-compose.cluster.yml up -d
```
Check cluster status: `GET /health/redis` (no auth required) — returns `cluster_mode` and Redis reachability.
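For example, against a locally exposed API (the exact response body is not shown here; the endpoint and fields are as documented above):

```shell
curl -s http://localhost:8000/health/redis
# Inspect the response for cluster_mode and Redis reachability
```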
| Variable | Default | Description | License |
|---|---|---|---|
| `SFLOW_ENABLED` | `false` | Enable sFlow v5 listener on UDP port 6343 | Enterprise |
| `NETFLOW_ENABLED` | `false` | Enable NetFlow v5/v9 listener on UDP port 2055; IPFIX on UDP port 9995 | Enterprise |
| `SYNTHETIC_ENABLED` | `false` | Enable synthetic monitoring probe scheduler; configure probe targets via `POST /synthetic/probes` | Enterprise |
| `K8S_KUBECONFIG` | (unset) | Path to kubeconfig for external cluster observability. Leave unset for in-cluster mode (ServiceAccount auto-detected) | Enterprise |
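With `SYNTHETIC_ENABLED=true`, probe targets are registered via `POST /synthetic/probes`. The JSON field names and the `X-API-Key` header below are illustrative assumptions, not the documented schema; check the synthetic monitoring reference for the actual payload:

```shell
# Hypothetical probe registration — field names are assumptions
curl -s -X POST http://localhost:8000/synthetic/probes \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"target": "10.0.0.1", "type": "icmp", "interval_seconds": 60}'
```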
**In-cluster:** Deploy MeshOptixIQ as a Pod with a ServiceAccount that has `get`/`list`/`watch` on Pods, Services, Nodes, and Endpoints. In-cluster credentials are auto-detected from the ServiceAccount mount.

**External:** Set `K8S_KUBECONFIG` to a kubeconfig file path. The file must be mounted into the container (use a Kubernetes Secret or ConfigMap). Multiple clusters can be configured by pointing to a merged kubeconfig.
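For the external mode, the kubeconfig is typically mounted from a Secret. The fragment below is a sketch of the relevant Pod spec pieces; the Secret name and mount path are placeholders:

```yaml
# Pod spec fragment (placeholder names): mount a kubeconfig Secret
# and point K8S_KUBECONFIG at the mounted file
containers:
  - name: api
    image: meshoptixiq/meshoptixiq:latest
    env:
      - name: K8S_KUBECONFIG
        value: /etc/meshoptix/kubeconfig
    volumeMounts:
      - name: kubeconfig
        mountPath: /etc/meshoptix
        readOnly: true
volumes:
  - name: kubeconfig
    secret:
      secretName: external-kubeconfig
```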
Complete production stack with Neo4j, API server, and scheduled discovery:
```yaml
version: '3.8'

services:
  neo4j:
    image: neo4j:5.15
    environment:
      NEO4J_AUTH: neo4j/production-password-here
      # Neo4j 5 uses server.memory.* settings (dbms.memory.* was Neo4j 4.x)
      NEO4J_server_memory_heap_initial__size: 2G
      NEO4J_server_memory_heap_max__size: 4G
      NEO4J_server_memory_pagecache_size: 2G
    volumes:
      - neo4j-data:/data
      - neo4j-logs:/logs
    ports:
      - "7687:7687"
    restart: unless-stopped

  meshoptixiq-api:
    image: meshoptixiq/meshoptixiq:latest
    environment:
      NEO4J_URI: bolt://neo4j:7687
      NEO4J_PASSWORD: production-password-here
      MESHOPTIXIQ_LICENSE_KEY: ${MESHOPTIXIQ_LICENSE_KEY}
      API_KEY: ${API_KEY}
      CORS_ORIGINS: "https://dashboard.example.com"
    ports:
      - "8000:8000"
    depends_on:
      - neo4j
    restart: unless-stopped
    command: ["uvicorn", "network_discovery.api.main:app", "--host", "0.0.0.0"]

  meshoptixiq-discovery:
    image: meshoptixiq/meshoptixiq:latest
    environment:
      NEO4J_URI: bolt://neo4j:7687
      NEO4J_PASSWORD: production-password-here
      MESHOPTIXIQ_LICENSE_KEY: ${MESHOPTIXIQ_LICENSE_KEY}
    volumes:
      - ./inventory.yaml:/app/inventory.yaml:ro
    depends_on:
      - neo4j
    restart: unless-stopped
    command: ["python", "-m", "network_discovery.scheduler"]

volumes:
  neo4j-data:
  neo4j-logs:
```
For large-scale enterprise deployments using Kubernetes:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: meshoptixiq-api
  namespace: network-discovery
spec:
  replicas: 3
  selector:
    matchLabels:
      app: meshoptixiq-api
  template:
    metadata:
      labels:
        app: meshoptixiq-api
    spec:
      containers:
        - name: api
          image: meshoptixiq/meshoptixiq:latest
          ports:
            - containerPort: 8000
          env:
            - name: NEO4J_URI
              value: "bolt://neo4j-service:7687"
            - name: NEO4J_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: neo4j-credentials
                  key: password
            - name: MESHOPTIXIQ_LICENSE_KEY
              valueFrom:
                secretKeyRef:
                  name: meshoptixiq-license
                  key: license-key
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: meshoptixiq-api
                  key: api-key
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8000
            initialDelaySeconds: 10
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: meshoptixiq-api-service
  namespace: network-discovery
spec:
  selector:
    app: meshoptixiq-api
  ports:
    - port: 80
      targetPort: 8000
  type: LoadBalancer
```
Run discovery on a schedule using cron:
```bash
# Edit crontab
crontab -e

# Add daily discovery at 2 AM
0 2 * * * docker run --rm \
  -e NEO4J_URI="bolt://localhost:7687" \
  -e NEO4J_PASSWORD="secret" \
  -e MESHOPTIXIQ_LICENSE_KEY="${MESHOPTIXIQ_LICENSE_KEY}" \
  -v /opt/meshoptix/inventory.yaml:/app/inventory.yaml \
  meshoptixiq/meshoptixiq:latest \
  meshq ingest --source /app/inventory.yaml \
  >> /var/log/meshoptixiq-discovery.log 2>&1
```
- Set `MESHQ_TOKEN_SALT` to a random secret (the default is publicly known)
- Set `MESHQ_PROTECT_HEALTH=true` to require auth on `/health/license` and `/metrics`
- Configure `proxy_read_timeout 3600` for the SSE `/events` endpoint
- Restrict `CORS_ORIGINS` to your specific dashboard origin only
- Use a strong `API_KEY`; rotate quarterly
- Issue a distinct `API_KEY` per integration for granular audit trails
- Grant device accounts privilege level 15 (Cisco) but restricted to `show` commands via TACACS+/RADIUS
- Do NOT store credentials in plaintext. Use one of these approaches:
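A suitably random salt can be generated with openssl, for example:

```shell
# 32 random bytes, hex-encoded (64 characters)
export MESHQ_TOKEN_SALT=$(openssl rand -hex 32)
echo "${#MESHQ_TOKEN_SALT}"
```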
The enterprise image (meshoptixiq/meshoptixiq:enterprise-latest) adds a startup secrets resolver, OIDC authentication, SIEM audit logging, and APM observability. Credentials are fetched at boot and never touch disk.
```bash
docker run -d \
  -e SECRETS_PROVIDER=vault \
  -e VAULT_ADDR=https://vault.corp.local:8200 \
  -e VAULT_AUTH_METHOD=approle \
  -e VAULT_ROLE_ID=${VAULT_ROLE_ID} \
  -e VAULT_SECRET_ID=${VAULT_SECRET_ID} \
  -e VAULT_SECRET_PATH=secret/data/meshoptixiq \
  -e AUTH_MODE=both \
  -e OIDC_DISCOVERY_URL=https://company.okta.com/.well-known/openid-configuration \
  -e OIDC_CLIENT_ID=meshoptixiq-api-client \
  -e AUDIT_LOG_ENABLED=true \
  -e SPLUNK_HEC_URL=https://splunk.corp.local:8088/services/collector/event \
  -e SPLUNK_HEC_TOKEN=${SPLUNK_HEC_TOKEN} \
  -p 8000:8000 \
  meshoptixiq/meshoptixiq:enterprise-latest
```
See the User Guide — Chapter 13 for full enterprise feature documentation.
- Set the `API_KEY` environment variable
- Restrict `CORS_ORIGINS` to trusted domains only

See the Monitoring & Operations Guide for comprehensive monitoring setup.
```bash
# Manual backup (Neo4j 5 syntax; writes <database>.dump into --to-path)
neo4j-admin database dump neo4j --to-path=/backups
```

```bash
#!/bin/bash
# Automated daily backup script
BACKUP_DIR="/backups/neo4j"
DATE=$(date +%Y%m%d)
docker exec neo4j neo4j-admin database dump neo4j --to-path=/tmp
docker cp neo4j:/tmp/neo4j.dump "${BACKUP_DIR}/neo4j-${DATE}.dump"

# Upload to S3
aws s3 cp "${BACKUP_DIR}/neo4j-${DATE}.dump" s3://backups/meshoptixiq/
```
```bash
# Manual backup
pg_dump -h localhost -U postgres network_discovery > backup.sql

# Automated with 30-day retention
pg_dump network_discovery | gzip > /backups/meshoptix-$(date +%Y%m%d).sql.gz
find /backups -name "meshoptix-*.sql.gz" -mtime +30 -delete
```
At scale, the default single-process collection model becomes a bottleneck. MeshOptixIQ supports a three-tier async architecture that decouples SSH collection (I/O-bound), parsing (CPU-bound), and graph ingest, each independently scalable.
Syslog events and scheduled sweeps feed two entry points — delta-triggered targeted polls and full Scrapli sweeps — both of which push raw output to Redis for asynchronous parsing and graph ingest.
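The decoupling can be sketched with asyncio queues standing in for Redis. All function names here are illustrative, not MeshOptixIQ internals; the point is the three independently scalable stages:

```python
import asyncio

async def collector(device: str, raw_q: asyncio.Queue) -> None:
    # Stage 1 (I/O-bound): SSH to the device, push raw CLI output.
    await asyncio.sleep(0)  # stand-in for the SSH round trip
    await raw_q.put((device, f"show version output from {device}"))

async def parse_worker(raw_q: asyncio.Queue, graph_q: asyncio.Queue) -> None:
    # Stage 2 (CPU-bound): turn raw output into structured facts.
    while True:
        device, raw = await raw_q.get()
        await graph_q.put({"device": device, "facts": raw.split()})
        raw_q.task_done()

async def ingest(graph_q: asyncio.Queue, graph: dict) -> None:
    # Stage 3: apply parsed facts to the graph backend (a dict here).
    while True:
        record = await graph_q.get()
        graph[record["device"]] = record["facts"]
        graph_q.task_done()

async def run_pipeline(devices: list[str]) -> dict:
    raw_q: asyncio.Queue = asyncio.Queue()
    graph_q: asyncio.Queue = asyncio.Queue()
    graph: dict = {}
    workers = [
        asyncio.create_task(parse_worker(raw_q, graph_q)),
        asyncio.create_task(ingest(graph_q, graph)),
    ]
    # Collectors fan out concurrently; queues absorb the rate mismatch
    # between stages, which is exactly what Redis does in production.
    await asyncio.gather(*(collector(d, raw_q) for d in devices))
    await raw_q.join()
    await graph_q.join()
    for w in workers:
        w.cancel()
    return graph
```

Because each stage communicates only through a queue, collectors, parse workers, and ingest can each be scaled (or restarted) independently, mirroring the table below.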
| Component | Scale Axis | Key Config |
|---|---|---|
| Scrapli collector | `concurrency` parameter | Default 200; raise `ulimit -n` for >500 devices |
| Parse workers | Pod replicas | `MESHQ_PARSE_CONCURRENCY` per pod; add pods freely — stateless |
| Redis queue | Redis cluster | `REDIS_URL=redis+cluster://` for HA; Sentinel also supported |
| Graph backend | Neo4j cluster | Cluster bolt URI in `GRAPH_BACKEND=neo4j` config |
| Variable | Default | Description |
|---|---|---|
| `MESHQ_PARSE_QUEUE` | — | Set to `1` or `true` to activate the Redis parse queue |
| `MESHQ_PARSE_CONCURRENCY` | `4` | Concurrent parse tasks per worker pod |
| `REDIS_URL` | `redis://localhost:6379` | Redis connection string (plain, TLS, or cluster) |
| `MESHQ_GRPC_PORT` | `8010` | Port for the gRPC QueryService (see Ch. 15) |
| `MESHQ_DOCS_PATH` | `/app/docs` | Override path to embedded documentation files |
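How flag-style variables such as `MESHQ_PARSE_QUEUE` are typically interpreted can be sketched as follows (these helpers are illustrative, not the actual MeshOptixIQ config loader):

```python
import os

_TRUTHY = {"1", "true", "yes", "on"}

def env_flag(name: str, default: bool = False) -> bool:
    """Interpret an env var like MESHQ_PARSE_QUEUE as a boolean flag."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in _TRUTHY

def env_int(name: str, default: int) -> int:
    """Interpret an env var like MESHQ_PARSE_CONCURRENCY as an integer."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default
```

Treating only an explicit truthy value as "on" means an unset variable always falls back to the documented default, so adding a new flag never changes existing deployments.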