Prerequisites¶
Before installing Home Security Intelligence, ensure your system meets the following requirements.
Hardware Requirements¶
GPU (Required)¶
| Requirement | Minimum | Recommended |
|---|---|---|
| VRAM | 8GB | 12GB+ |
| CUDA Compute | 7.0+ | 8.0+ |
| Combined AI Memory | ~6GB | ~6GB |
The system runs two AI models simultaneously:
- YOLO26 (object detection): ~4GB VRAM (ai/start_detector.sh:6-7)
- Nemotron (risk analysis): ~14.7GB VRAM in production (Nemotron-3-Nano-30B) or ~3GB in development (Mini 4B)
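To confirm there is enough headroom before both models load, you can query VRAM directly. The thresholds in the comment are derived from the figures above, not hard limits:
# Report per-GPU total and free memory; expect to need roughly 7GB free in
# development and closer to 19GB in production
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv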
Supported GPUs:
- NVIDIA RTX 30-series (3060 and above)
- NVIDIA RTX 40-series (any)
- NVIDIA RTX A-series (A2000 and above)
- NVIDIA Tesla/Quadro with 8GB+ VRAM
CPU & Memory¶
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores |
| RAM | 8GB | 16GB+ |
| Storage | 50GB | 100GB+ SSD |
Note: Storage requirements increase with camera count and retention period. Plan for ~1GB/day per active camera.
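As a back-of-the-envelope check against the ~1GB/day figure, multiply camera count by retention days (the values below are placeholders):
# Example: 4 cameras retained for 30 days ≈ 120GB of footage storage
CAMERAS=4
RETENTION_DAYS=30
echo "Estimated storage: $((CAMERAS * RETENTION_DAYS))GB"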
Network¶
- Cameras must be able to upload footage to the server via FTP
- Local network access (no internet required after setup)
- Default ports: 80 (web), 8000 (API), 8095 (detection), 8091 (LLM)
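Before installing, it is worth confirming that nothing else is already bound to the default ports. A quick check using `ss` (standard on modern Linux; the port list mirrors the defaults above):
# List any listeners already using the default ports
sudo ss -tlnp | grep -E ':(80|8000|8091|8095)\s' || echo "All default ports free"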
Software Requirements¶
Operating System¶
| OS | Version | Status |
|---|---|---|
| Ubuntu | 22.04 LTS | Fully Supported |
| Debian | 12+ | Supported |
| macOS | 13+ (Ventura) | Supported (via Podman) |
| Windows | WSL2 | Experimental |
NVIDIA Drivers & CUDA¶
# Verify NVIDIA driver
nvidia-smi
# Required output should show:
# - Driver Version: 535+
# - CUDA Version: 12.0+
Installation guides:
- Ubuntu: NVIDIA CUDA Installation Guide
- macOS: CUDA not available; use MPS backend
Python¶
| Requirement | Version |
|---|---|
| Python | 3.10+ (pyproject.toml:5) |
Installation:
# Ubuntu/Debian
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-venv python3.10-dev
# macOS (via Homebrew)
brew install python@3.10
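After installing, a quick sanity check confirms the interpreter works and can create virtual environments (the environment name `.venv` below is just a convention, not required by the project):
# Verify the interpreter and create an isolated environment
python3.10 --version
python3.10 -m venv .venv
source .venv/bin/activate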
Node.js¶
| Requirement | Version |
|---|---|
| Node.js | 20.19+ or 22.12+ (frontend/package.json) |
| npm | 10+ |
Note: Vite 7 requires Node.js 20.19+ or 22.12+ for native ESM support. Node.js 18 is NOT supported.
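If you prefer to check this programmatically rather than by eye, a one-liner like the following works (an illustrative sketch that treats 21.x as unsupported, matching the stated 20.19+/22.12+ requirement):
# Fail fast if the installed Node.js cannot run Vite 7
node -e '
const [maj, min] = process.versions.node.split(".").map(Number);
const ok = (maj === 20 && min >= 19) || (maj === 22 && min >= 12) || maj > 22;
if (!ok) { console.error("Node " + process.version + " is too old for Vite 7"); process.exit(1); }
console.log("Node " + process.version + " OK");
'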
Installation:
# Ubuntu/Debian (via NodeSource)
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install nodejs
# macOS (via Homebrew)
brew install node@20
Container Runtime¶
This project supports both Docker and Podman. Choose whichever is available on your system.
| Runtime | Version | License |
|---|---|---|
| Docker Engine | 20.10+ | Free (Linux) |
| Docker Desktop | 4.0+ | Paid for businesses >250 employees or >$10M revenue |
| Podman | 4.0+ | Free (Apache 2.0) |
| Docker Compose | 2.0+ | Included with Docker |
| podman-compose | 1.0+ | Separate install |
# Verify Docker
docker --version
docker compose version
# OR verify Podman
podman --version
podman-compose --version
Installation:
Docker Installation
Podman Installation
macOS Note: If using Podman on macOS, set `AI_HOST=host.containers.internal` before starting containers. Docker Desktop uses `host.docker.internal` by default.
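A minimal startup session on macOS with Podman might look like this (assuming the project's compose file reads `AI_HOST` from the environment, per the note above):
# macOS + Podman: containers reach host services via host.containers.internal
export AI_HOST=host.containers.internal
podman-compose up -d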
llama.cpp¶
Required for running the Nemotron LLM server.
Installation:
# Build from source (recommended for GPU support)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_CUDA=1 # For NVIDIA GPU support
# Add to PATH
export PATH="$PATH:$(pwd)"
Alternative: Pre-built binaries available at llama.cpp releases.
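Once built, a quick smoke test confirms the binary resolves on PATH and reports its build info. Note that recent llama.cpp releases have replaced the Makefile build with CMake (`cmake -B build -DGGML_CUDA=ON && cmake --build build`), so check the repository README if `make` fails:
# Confirm llama-server is on PATH and prints version/build metadata
which llama-server
llama-server --version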
Verification Checklist¶
Run these commands to verify all prerequisites:
# GPU
nvidia-smi | head -5
# Python
python3 --version
# Node.js
node --version && npm --version
# Container runtime (choose one)
docker --version && docker compose version # Docker
# OR
podman --version && podman-compose --version # Podman
# llama.cpp
which llama-server
Expected output (Docker example):
NVIDIA-SMI 535.xxx Driver Version: 535.xxx CUDA Version: 12.x
Python 3.10.x or higher
v20.19.x (or v22.12.x+)
10.x.x
Docker version 24.x.x
Docker Compose version v2.x.x
/usr/local/bin/llama-server
Expected output (Podman example): same as above, except for the container runtime lines:
podman version 4.x.x
podman-compose version 1.x.x
Next Steps¶
Once all prerequisites are met, proceed to:
Installation - Set up the environment and download models.