Product Deployment

Deploy the MeshOptixIQ Network Discovery engine into your production environment.

System Requirements

Minimum

  • CPU: 2 vCPUs
  • RAM: 4 GB
  • Disk: 20 GB SSD
  • OS: Linux (Ubuntu 22.04 LTS)

Recommended (Enterprise)

  • CPU: 4+ vCPUs
  • RAM: 8-16 GB
  • Disk: 100 GB NVMe
  • Network: 1 Gbps+ to the management VLAN

Network Prerequisites

The host running the discovery agent must have:

Database Configuration

MeshOptixIQ supports both graph and relational backends. Configure the backend using the GRAPH_BACKEND environment variable.

Neo4j (Default)

Best for deep graph traversal and visualization.

GRAPH_BACKEND=neo4j
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=secret

PostgreSQL

Best for operational stability and recursive CTE optimizations.

GRAPH_BACKEND=postgres
POSTGRES_DSN=postgresql://user:pass@localhost:5432/db

# Connection pool tuning (optional — defaults shown)
POSTGRES_POOL_MIN=2
POSTGRES_POOL_MAX=10
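Recursive CTEs are what make a relational backend viable for graph-style reachability queries. A minimal, self-contained sketch of the idea, using sqlite3 in place of PostgreSQL and an invented two-column edge table (the real MeshOptixIQ schema may differ):

```python
import sqlite3

# Toy topology: directed "connects to" edges between device names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src TEXT, dst TEXT)")
conn.executemany(
    "INSERT INTO edges VALUES (?, ?)",
    [("core-sw1", "dist-sw1"), ("dist-sw1", "access-sw1"),
     ("access-sw1", "host-a"), ("core-sw1", "dist-sw2")],
)

# WITH RECURSIVE walks the graph in SQL: seed with the start node,
# then repeatedly join the current frontier back onto the edge table.
rows = conn.execute("""
    WITH RECURSIVE reachable(node, depth) AS (
        SELECT 'core-sw1', 0
        UNION
        SELECT e.dst, r.depth + 1
        FROM edges e JOIN reachable r ON e.src = r.node
    )
    SELECT node, depth FROM reachable ORDER BY depth, node
""").fetchall()
print(rows)
# → [('core-sw1', 0), ('dist-sw1', 1), ('dist-sw2', 1), ('access-sw1', 2), ('host-a', 3)]
```

The same query shape runs unchanged on PostgreSQL, where the planner can optimize the recursive step over an indexed edge table.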

Deployment Options

Option 0: Demo Mode (Instant Evaluation)

The fastest way to evaluate MeshOptixIQ before committing to a production deployment. No database, no license key, no configuration required — a pre-seeded 20-device campus + datacenter network starts in seconds.

docker run -p 8000:8000 \
  -e MESHOPTIXIQ_DEMO_MODE=true \
  -e API_KEY=demo \
  -e GRAPH_BACKEND=inmemory \
  meshoptixiq/meshoptixiq:latest
# Open http://localhost:8000  (API key: demo)

Or use the pre-built compose variant:

docker compose -f docker-compose.demo.yml up

Demo data (300 endpoints, 58 firewall rules, all query categories) resets on restart. When you are ready for a real deployment, pick one of the options below.

Option A: Docker Container (Preferred)

Run the discovery engine as a stateless container. It creates the graph and exits, or runs on a schedule.

# 1. Pull the image
docker pull meshoptixiq/meshoptixiq:latest

# 2. Run a one-off discovery
docker run -d \
  -e NEO4J_URI=bolt://neo4j.corp.local:7687 \
  -e NEO4J_PASSWORD=my-secret-pw \
  -v /opt/meshoptix/config.yaml:/app/config.yaml \
  meshoptixiq/meshoptixiq:latest

Option B: Systemd Service

For persistent deployments on a dedicated VM.

[Unit]
Description=MeshOptixIQ Discovery Agent
After=network.target

[Service]
Type=simple
User=meshoptix
ExecStart=/opt/meshoptix/venv/bin/python -m network_discovery.start_agent
Restart=always

[Install]
WantedBy=multi-user.target

Helm Chart — Kubernetes (Pro / Enterprise Recommended)

The official Helm chart is the recommended path for Pro and Enterprise Kubernetes deployments. It renders all required resources: API Deployment (with optional HPA), collector worker Deployment, dispatcher CronJob, ConfigMap, ServiceAccount, Secret, Service, and optional Ingress.

helm install meshoptixiq helm/meshoptixiq/ \
  --set api.key=changeme \
  --set neo4j.uri=bolt://neo4j:7687 \
  --set neo4j.password=secret \
  --set redis.url=redis://redis:6379 \
  --set collector.enabled=true

Optional add-ons:

# Enable HPA (Horizontal Pod Autoscaler) and Ingress
helm upgrade meshoptixiq helm/meshoptixiq/ \
  --set api.autoscaling.enabled=true \
  --set ingress.enabled=true \
  --set ingress.host=meshoptixiq.corp.local

The chart includes SSE-safe nginx ingress annotations (proxy_read_timeout 3600) required for the /events endpoint. See helm/meshoptixiq/values.yaml for the full options reference.
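If you manage the Ingress outside the chart, the SSE-safe settings correspond to the standard ingress-nginx timeout annotations. A fragment along these lines (metadata names illustrative) keeps long-lived /events connections from being cut off:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```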

Redis Cluster Mode

Redis is optional. Without it, MeshOptixIQ runs in single-instance mode with in-process rate limiting and snapshot storage. Setting REDIS_URL activates cluster mode:

# Activate cluster mode — set REDIS_URL on every API container
REDIS_URL=redis://redis:6379

For a 3-pod API cluster with nginx load balancer, use the included compose file:

docker compose -f docker-compose.cluster.yml up -d

Check cluster status: GET /health/redis (no auth required) — returns cluster_mode and Redis reachability.

Optional Feature Environment Variables

Variable           Default  Description                                                                                                          License
SFLOW_ENABLED      false    Enable sFlow v5 listener on UDP port 6343                                                                            Enterprise
NETFLOW_ENABLED    false    Enable NetFlow v5/v9 listener on UDP port 2055; IPFIX on UDP port 9995                                               Enterprise
SYNTHETIC_ENABLED  false    Enable synthetic monitoring probe scheduler; configure probe targets via POST /synthetic/probes                      Enterprise
K8S_KUBECONFIG     (unset)  Path to kubeconfig for external cluster observability. Leave unset for in-cluster mode (ServiceAccount auto-detected) Enterprise

Kubernetes Observability Modes

In-cluster: Deploy MeshOptixIQ as a Pod with a ServiceAccount that has get/list/watch on Pods, Services, Nodes, and Endpoints. The kubeconfig is auto-detected from the ServiceAccount mount.

External: Set K8S_KUBECONFIG to a kubeconfig file path. The file must be mounted into the container (use a Kubernetes Secret or ConfigMap). Multiple clusters can be configured by pointing to a merged kubeconfig.
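One way to produce that merged kubeconfig is kubectl's built-in flatten (paths here are illustrative):

```shell
# Merge two per-cluster kubeconfigs into one file for K8S_KUBECONFIG
KUBECONFIG=/etc/meshoptix/cluster-a.yaml:/etc/meshoptix/cluster-b.yaml \
  kubectl config view --flatten > /etc/meshoptix/kubeconfig-merged.yaml
```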

Enterprise Deployment Patterns

Docker Compose Stack (Production)

Complete production stack with Neo4j, API server, and scheduled discovery:

version: '3.8'

services:
  neo4j:
    image: neo4j:5.15
    environment:
      NEO4J_AUTH: neo4j/production-password-here
      NEO4J_server_memory_heap_initial__size: 2G
      NEO4J_server_memory_heap_max__size: 4G
      NEO4J_server_memory_pagecache_size: 2G
    volumes:
      - neo4j-data:/data
      - neo4j-logs:/logs
    ports:
      - "7687:7687"
    restart: unless-stopped

  meshoptixiq-api:
    image: meshoptixiq/meshoptixiq:latest
    environment:
      NEO4J_URI: bolt://neo4j:7687
      NEO4J_PASSWORD: production-password-here
      MESHOPTIXIQ_LICENSE_KEY: ${MESHOPTIXIQ_LICENSE_KEY}
      API_KEY: ${API_KEY}
      CORS_ORIGINS: "https://dashboard.example.com"
    ports:
      - "8000:8000"
    depends_on:
      - neo4j
    restart: unless-stopped
    command: ["uvicorn", "network_discovery.api.main:app", "--host", "0.0.0.0"]

  meshoptixiq-discovery:
    image: meshoptixiq/meshoptixiq:latest
    environment:
      NEO4J_URI: bolt://neo4j:7687
      NEO4J_PASSWORD: production-password-here
      MESHOPTIXIQ_LICENSE_KEY: ${MESHOPTIXIQ_LICENSE_KEY}
    volumes:
      - ./inventory.yaml:/app/inventory.yaml:ro
    depends_on:
      - neo4j
    restart: unless-stopped
    command: ["python", "-m", "network_discovery.scheduler"]

volumes:
  neo4j-data:
  neo4j-logs:

Kubernetes Deployment

For large-scale enterprise deployments using Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: meshoptixiq-api
  namespace: network-discovery
spec:
  replicas: 3
  selector:
    matchLabels:
      app: meshoptixiq-api
  template:
    metadata:
      labels:
        app: meshoptixiq-api
    spec:
      containers:
      - name: api
        image: meshoptixiq/meshoptixiq:latest
        ports:
        - containerPort: 8000
        env:
        - name: NEO4J_URI
          value: "bolt://neo4j-service:7687"
        - name: NEO4J_PASSWORD
          valueFrom:
            secretKeyRef:
              name: neo4j-credentials
              key: password
        - name: MESHOPTIXIQ_LICENSE_KEY
          valueFrom:
            secretKeyRef:
              name: meshoptixiq-license
              key: license-key
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: meshoptixiq-api
              key: api-key
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "2Gi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 8000
          initialDelaySeconds: 10
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: meshoptixiq-api-service
  namespace: network-discovery
spec:
  selector:
    app: meshoptixiq-api
  ports:
  - port: 80
    targetPort: 8000
  type: LoadBalancer

High Availability Configuration

For mission-critical deployments:

Scheduled Discovery (Cron)

Run discovery on a schedule using cron:

# Edit crontab
crontab -e

# Add daily discovery at 2 AM
0 2 * * * docker run --rm \
  -e NEO4J_URI="bolt://localhost:7687" \
  -e NEO4J_PASSWORD="secret" \
  -e MESHOPTIXIQ_LICENSE_KEY="${MESHOPTIXIQ_LICENSE_KEY}" \
  -v /opt/meshoptix/inventory.yaml:/app/inventory.yaml \
  meshoptixiq/meshoptixiq:latest \
  meshq ingest --source /app/inventory.yaml \
  >> /var/log/meshoptixiq-discovery.log 2>&1

Security Considerations

Security Hardening Checklist

  • ☐ Set MESHQ_TOKEN_SALT to a random secret (default is publicly known)
  • ☐ Set MESHQ_PROTECT_HEALTH=true to require auth on /health/license and /metrics
  • ☐ Never expose Neo4j (7687) or Redis (6379) ports publicly
  • ☐ Run behind TLS-terminating reverse proxy — nginx needs proxy_read_timeout 3600 for SSE /events endpoint
  • ☐ Set CORS_ORIGINS to your specific dashboard origin only
  • ☐ Set a strong API_KEY; rotate quarterly
  • ☐ Use PATs (personal access tokens) instead of root API_KEY for granular audit trails
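The first checklist item can be satisfied with any CSPRNG. For example (the variable name comes from the checklist; the 32-byte length is an assumption):

```python
import secrets

# 32 random bytes, hex-encoded: a suitable value for MESHQ_TOKEN_SALT.
salt = secrets.token_hex(32)
print(f"MESHQ_TOKEN_SALT={salt}")
assert len(salt) == 64  # 32 bytes -> 64 hex characters
```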

Network Access Controls

  • Read-Only SSH Access: Service account should have privilege level 15 (Cisco) but restricted to show commands via TACACS+/RADIUS
  • Firewall Rules: Allow only SSH (22) from MeshOptixIQ host to network devices
  • Management VLAN: Deploy MeshOptixIQ in dedicated management VLAN with restricted access

Secrets Management

Do NOT store credentials in plaintext. Use one of these approaches:

  • Environment Variables: Reference from orchestrator secrets
  • AWS Secrets Manager: Use IAM role-based retrieval
  • HashiCorp Vault: Dynamic credential generation
  • Kubernetes Secrets: Encrypted at rest with KMS
  • Enterprise Container: Built-in secrets resolver pulls from Vault, AWS, Azure, or GCP at startup — no credentials in the Compose file

Enterprise Container

The enterprise image (meshoptixiq/meshoptixiq:enterprise-latest) adds a startup secrets resolver, OIDC authentication, SIEM audit logging, and APM observability. Credentials are fetched at boot and never touch disk.

docker run -d \
  -e SECRETS_PROVIDER=vault \
  -e VAULT_ADDR=https://vault.corp.local:8200 \
  -e VAULT_AUTH_METHOD=approle \
  -e VAULT_ROLE_ID=${VAULT_ROLE_ID} \
  -e VAULT_SECRET_ID=${VAULT_SECRET_ID} \
  -e VAULT_SECRET_PATH=secret/data/meshoptixiq \
  -e AUTH_MODE=both \
  -e OIDC_DISCOVERY_URL=https://company.okta.com/.well-known/openid-configuration \
  -e OIDC_CLIENT_ID=meshoptixiq-api-client \
  -e AUDIT_LOG_ENABLED=true \
  -e SPLUNK_HEC_URL=https://splunk.corp.local:8088/services/collector/event \
  -e SPLUNK_HEC_TOKEN=${SPLUNK_HEC_TOKEN} \
  -p 8000:8000 \
  meshoptixiq/meshoptixiq:enterprise-latest

See the User Guide — Chapter 13 for full enterprise feature documentation.

API Security

  • API Key Authentication: Set API_KEY environment variable
  • CORS Configuration: Restrict CORS_ORIGINS to trusted domains only
  • TLS Termination: Use reverse proxy (Nginx, Traefik) for HTTPS
  • Rate Limiting: Implement at load balancer or API gateway level

Monitoring & Alerting

See the Monitoring & Operations Guide for comprehensive monitoring setup.

Backup & Disaster Recovery

Neo4j Backup

# Manual backup (Neo4j 5.x syntax; the database must be stopped first,
# or use the Enterprise online backup tooling)
neo4j-admin database dump neo4j --to-path=/backups

# Automated daily backup script
#!/bin/bash
BACKUP_DIR="/backups/neo4j"
DATE=$(date +%Y%m%d)
docker exec neo4j neo4j-admin database dump neo4j --to-path=/tmp
docker cp neo4j:/tmp/neo4j.dump "${BACKUP_DIR}/neo4j-${DATE}.dump"
# Upload to S3
aws s3 cp "${BACKUP_DIR}/neo4j-${DATE}.dump" s3://backups/meshoptixiq/

PostgreSQL Backup

# Manual backup
pg_dump -h localhost -U postgres network_discovery > backup.sql

# Automated with retention
pg_dump network_discovery | gzip > /backups/meshoptix-$(date +%Y%m%d).sql.gz
find /backups -name "meshoptix-*.sql.gz" -mtime +30 -delete

Async Collection Worker Architecture

At scale, the default single-process collection model becomes a bottleneck. MeshOptixIQ supports a three-tier async architecture that decouples SSH collection (I/O-bound), parsing (CPU-bound), and graph ingest, each independently scalable.

Syslog events and scheduled sweeps feed two entry points — delta-triggered targeted polls and full Scrapli sweeps — both of which push raw output to Redis for asynchronous parsing and graph ingest.

flowchart LR
    SYSLOG[Syslog Events] --> DELTA[Delta Collector\nmatch_syslog_trigger]
    SCHED[Schedule / Manual Sweep] --> FULL[Full Scrapli Sweep\ncollect_all_async]
    DELTA -->|targeted SSH| RPUSH[RPUSH\nmeshq:collect:parse_queue]
    FULL -->|200 concurrent SSH| RPUSH
    RPUSH --> W1[ParseWorker ×1]
    RPUSH --> W2[ParseWorker ×2]
    RPUSH --> WN[ParseWorker ×N]
    W1 --> GP[GraphProvider\nNeo4j / PostgreSQL]
    W2 --> GP
    WN --> GP
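The decoupling between the collection and parse tiers can be sketched in-process, with a queue standing in for Redis (`RPUSH`/`BLPOP` become `put`/`get`); the names here are illustrative, not the real worker API:

```python
import queue
import threading

parse_queue: queue.Queue = queue.Queue()  # stands in for meshq:collect:parse_queue
graph: list = []                          # stands in for the graph backend
graph_lock = threading.Lock()

def collector(device: str) -> None:
    # Tier 1 (I/O-bound): "collect" raw output and push it for parsing.
    parse_queue.put((device, f"raw show output from {device}"))

def parse_worker() -> None:
    # Tier 2 (CPU-bound): pop raw output, parse it, ingest into the graph.
    while True:
        item = parse_queue.get()
        if item is None:                  # sentinel: shut down this worker
            break
        device, raw = item
        with graph_lock:
            graph.append({"device": device, "parsed": raw.upper()})

workers = [threading.Thread(target=parse_worker) for _ in range(2)]
for w in workers:
    w.start()
for dev in ["core-sw1", "dist-sw1", "access-sw1"]:
    collector(dev)
for _ in workers:                         # one sentinel per worker
    parse_queue.put(None)
for w in workers:
    w.join()
print(sorted(d["device"] for d in graph))
# → ['access-sw1', 'core-sw1', 'dist-sw1']
```

Because the workers share nothing but the queue, each tier scales independently — exactly the property the Redis-backed version exploits across pods.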

Scaling Table

Component          Scale Axis             Key Config
Scrapli collector  concurrency parameter  Default 200; raise ulimit -n for >500 devices
Parse workers      Pod replicas           MESHQ_PARSE_CONCURRENCY per pod; add pods freely — stateless
Redis queue        Redis cluster          REDIS_URL=redis+cluster:// for HA; Sentinel also supported
Graph backend      Neo4j cluster          Cluster bolt URI in GRAPH_BACKEND=neo4j config
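The collector's concurrency knob is a classic bounded-fan-out pattern. A sketch of the same idea with asyncio (simulated SSH sessions, not Scrapli's actual API; the bookkeeping lists exist only to demonstrate that the cap holds):

```python
import asyncio

async def collect(device: str, sem: asyncio.Semaphore,
                  active: list, peaks: list) -> str:
    async with sem:                  # at most `concurrency` sessions in flight
        active[0] += 1
        peaks.append(active[0])
        await asyncio.sleep(0)       # stands in for the SSH round-trip
        active[0] -= 1
        return f"{device}: ok"

async def sweep(devices: list[str], concurrency: int = 200) -> list[str]:
    sem = asyncio.Semaphore(concurrency)
    active, peaks = [0], []
    results = await asyncio.gather(
        *(collect(d, sem, active, peaks) for d in devices))
    assert max(peaks) <= concurrency  # the semaphore enforces the cap
    return results

results = asyncio.run(sweep([f"sw-{i}" for i in range(10)], concurrency=3))
print(len(results))  # → 10
```

With real SSH each session also consumes a file descriptor, which is why the table recommends raising ulimit -n beyond 500 devices.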

Environment Variables — Async Collection Tier

Variable                 Default                 Description
MESHQ_PARSE_QUEUE        (unset)                 Set to 1 or true to activate Redis parse queue
MESHQ_PARSE_CONCURRENCY  4                       Concurrent parse tasks per worker pod
REDIS_URL                redis://localhost:6379  Redis connection string (plain, TLS, or cluster)
MESHQ_GRPC_PORT          8010                    Port for the gRPC QueryService (see Ch. 15)
MESHQ_DOCS_PATH          /app/docs               Override path to embedded documentation files