How to deploy with Helm* Chart#
This section shows how to deploy the Live Video Search sample application using Helm chart.
Prerequisites#
Before you begin, ensure that you have the following:
Kubernetes* cluster set up and running.
The cluster must support dynamic provisioning of Persistent Volumes (PV). Refer to the Kubernetes Dynamic Provisioning Guide for more details.
kubectl installed on your system. See the Installation Guide. Ensure access to the Kubernetes cluster.
Helm installed on your system. See the Installation Guide.
Storage Requirement: Ensure enough storage is available in the cluster for PVC-backed services.
See also: System Requirements.
Helm Chart Installation#
To set up the end-to-end application, acquire the chart and install it with the required values and scenario overrides.
1. Acquire the Helm chart#
There are two options to get the chart into your workspace:
Option 1: Get the charts from Docker Hub#
Step 1: Pull the specific chart#
Use the following command to pull the Helm chart from Docker Hub:
helm pull oci://registry-1.docker.io/intel/live-video-search --version <version-no>
Refer to release notes for details on the latest version to use.
Step 2: Extract the .tgz file#
After pulling the chart, extract the .tgz file:
tar -xvf live-video-search-<version-no>.tgz
This creates a directory named live-video-search containing chart files. Navigate to the extracted directory:
cd live-video-search
Option 2: Install from Source#
Clone the repository and navigate to the chart directory:
git clone https://github.com/open-edge-platform/edge-ai-suites.git edge-ai-suites
cd edge-ai-suites/metro-ai-suite/live-video-analysis/live-video-search/chart
2. Configure Required Values#
The application requires a few user-provided values. Use user_values_override.yaml as the single user-edit file:
nano user_values_override.yaml
Update these required values:
| Key | Description | Example Value |
|---|---|---|
|  | MinIO user |  |
|  | MinIO password |  |
|  | PostgreSQL user |  |
|  | PostgreSQL password |  |
|  | MQTT user |  |
|  | MQTT password |  |
| global.env.embeddingModelName | Embedding model used by search stack |  |
Common optional values:
| Key | Description | Example Value |
|---|---|---|
|  | Optional image registry override |  |
| global.tag | Shared image tag |  |
| global.vssStackTag | Override tag for VSS stack services |  |
| global.smartNvrStackTag | Override tag for Smart NVR services |  |
|  | HTTP proxy |  |
|  | HTTPS proxy |  |
|  | Use PVC-backed storage paths for MME/DataPrep |  |
| global.keepPvc | Retain PVCs on uninstall | true |
| global.gpu.multimodalEmbeddingEnabled | Enable MME on GPU | true |
| global.gpu.vdmsDataprepEnabled | Enable DataPrep on GPU | true |
| global.gpu.key | GPU resource key from device plugin |  |
| global.gpu.device | Device mode for GPU deployment | GPU |
| frigate.usbCameraDevice | USB device path (used with USB profile) | /dev/video0 |
Note: Scenario selection is profile-driven. Use the override profiles for mode switching (default_override.yaml, rtsp_test_override.yaml, usb_camera_override.yaml) instead of setting mode switches in user_values_override.yaml.
Tag Resolution Note:
global.tag is the fallback image tag. If global.vssStackTag is non-empty, VSS-side services use it instead of global.tag. If global.smartNvrStackTag is non-empty, Smart NVR-side services use it instead of global.tag. Leaving the stack-specific tags empty makes those services inherit global.tag.
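The tag-resolution rule above can be sketched as a small shell function (illustrative only; the actual resolution happens inside the chart's Helm templates, and the version numbers below are placeholders):

```shell
# Illustrative sketch of the tag-resolution rule; the chart's Helm
# templates implement this logic, not shell.
resolve_tag() {
  global_tag="$1"  # global.tag (the fallback)
  stack_tag="$2"   # global.vssStackTag or global.smartNvrStackTag (may be empty)
  if [ -n "$stack_tag" ]; then
    echo "$stack_tag"   # a non-empty stack-specific tag wins
  else
    echo "$global_tag"  # otherwise the service inherits global.tag
  fi
}

resolve_tag "1.2.0" ""       # prints 1.2.0 (inherits global.tag)
resolve_tag "1.2.0" "1.3.0"  # prints 1.3.0 (stack override wins)
```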
Device Note:
global.env.embeddingDevice defaults to CPU in chart values and is internally resolved for non-GPU mode.
GPU Note: If enabling GPU for search embeddings, set both global.gpu.multimodalEmbeddingEnabled=true and global.gpu.vdmsDataprepEnabled=true, and also set global.gpu.key and global.gpu.device.
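Before deploying in GPU mode, a loose grep-based pre-flight check along these lines (a hypothetical helper, not part of the chart) can confirm that all four GPU settings are present in the override file:

```shell
# Hypothetical pre-flight check: succeeds only when all four GPU settings
# appear in the given values file. A loose grep-based sketch, not a YAML parser.
gpu_values_ok() {
  f="$1"
  grep -q 'multimodalEmbeddingEnabled: *true' "$f" &&
    grep -q 'vdmsDataprepEnabled: *true' "$f" &&
    grep -q 'key: *[^ ]' "$f" &&
    grep -q 'device: *[^ ]' "$f"
}

gpu_values_ok user_values_override.yaml 2>/dev/null \
  && echo "GPU values look complete" \
  || echo "GPU values missing or incomplete"
```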
3. Build Helm Dependencies#
Run from the chart directory:
helm dependency build
4. Set and Create a Namespace#
Set a namespace variable:
my_namespace=lvs
Create namespace:
kubectl create namespace $my_namespace
NOTE: All subsequent steps assume my_namespace is set in your shell.
5. Deploy the Helm Chart#
Deploy one of the following use cases.
Note: Before switching use cases, uninstall the existing release if it is already running:
helm uninstall lvs -n $my_namespace
Use Case 1: Default Live Video Search#
helm install lvs . -f user_values_override.yaml -f default_override.yaml -n $my_namespace
Use Case 2: RTSP Test Mode#
helm install lvs . -f user_values_override.yaml -f rtsp_test_override.yaml -n $my_namespace
Use Case 3: USB Camera Mode#
helm install lvs . -f user_values_override.yaml -f usb_camera_override.yaml -n $my_namespace
Use Case 4: GPU-enabled MME + DataPrep#
First, update user_values_override.yaml:
global.gpu.multimodalEmbeddingEnabled: true
global.gpu.vdmsDataprepEnabled: true
global.gpu.key: <gpu-resource-key>
global.gpu.device: GPU
Then deploy with your selected scenario profile (example: default):
helm install lvs . -f user_values_override.yaml -f default_override.yaml -n $my_namespace
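The four use cases above differ only in which override profile is passed to helm install. A small hypothetical wrapper (the profile file names come from the chart; the wrapper and scenario names are illustrative) can map a scenario to its profile:

```shell
# Hypothetical helper mapping a scenario name to its override profile.
profile_for() {
  case "$1" in
    default)    echo "default_override.yaml" ;;
    rtsp-test)  echo "rtsp_test_override.yaml" ;;
    usb-camera) echo "usb_camera_override.yaml" ;;
    *)          echo "unknown scenario: $1" >&2; return 1 ;;
  esac
}

# Usage (assumes my_namespace is set as in step 4):
# helm install lvs . -f user_values_override.yaml -f "$(profile_for default)" -n $my_namespace
```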
6. Verify the Deployment#
kubectl get pods -n $my_namespace
kubectl get svc -n $my_namespace
Before proceeding, ensure:
Pods are in Running state.
Containers are in ready state.
Note: init-resources runs as a Kubernetes Job. Its pod can show 0/1 Completed (for example, lvs-live-video-search-init-resources-xxxxx 0/1 Completed), which is expected. Use kubectl get jobs -n $my_namespace and confirm lvs-live-video-search-init-resources shows COMPLETIONS 1/1 and STATUS Complete.
If needed, inspect specific workloads:
kubectl describe pod <pod-name> -n $my_namespace
kubectl logs <pod-name> -n $my_namespace
7. Access the Application#
The Nginx service runs as a reverse proxy in one of the pods and is exposed via NodePort by default. Get the host IP and NodePort using:
lvs_hostip=$(kubectl get pods -l app=nginx -n $my_namespace -o jsonpath='{.items[0].status.hostIP}')
lvs_port=$(kubectl get service nginx -n $my_namespace -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://${lvs_hostip}:${lvs_port}"
Copy the printed URL and open it in your browser to access the Live Video Search Application.
If you prefer local access without NodePort:
kubectl port-forward svc/nginx 12345:80 -n $my_namespace
Open http://localhost:12345.
8. Update Helm Dependencies#
If subchart dependencies change:
helm dependency build
9. Uninstall the Helm Chart#
helm uninstall lvs -n $my_namespace
PVC retention on uninstall is controlled by global.keepPvc.
When global.keepPvc: true, PVC-backed data is retained across uninstall/reinstall and pod restarts. This includes persisted application state (for example, stored query-related data in backing services) and converted OpenVINO model assets stored on persistent volumes.
If you want a clean reset, delete all PVCs for the lvs release:
kubectl delete pvc -n "$my_namespace" -l app.kubernetes.io/instance=lvs
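The keepPvc semantics can be summarized with a short sketch (illustrative only; the chart enforces this through Kubernetes resources, not shell):

```shell
# Illustrative sketch: decides whether a clean reset should delete PVCs,
# mirroring the global.keepPvc semantics described above.
should_delete_pvcs() {
  keep_pvc="$1"              # value of global.keepPvc ("true" or "false")
  [ "$keep_pvc" != "true" ]  # delete only when PVC retention is off
}

if should_delete_pvcs "false"; then
  echo "PVCs would be deleted (clean reset)"
else
  echo "PVCs retained across uninstall"
fi
```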
Troubleshooting#
Pods stay Pending or not Ready: Check storage provisioning, node capacity, and device plugin availability (for GPU mode).
Node allocation/scheduling issues caused by PVC affinity conflicts (often from old PVCs): Delete old release PVCs and redeploy:
kubectl delete pvc -n "$my_namespace" -l app.kubernetes.io/instance=lvs
Search not returning expected results: Verify global.env.embeddingModelName and confirm clips are ingested.
USB mode does not detect camera: Confirm the device path and override frigate.usbCameraDevice in user_values_override.yaml when not using /dev/video0.
GPU deployment fails validation: Ensure both MME and DataPrep GPU flags are aligned, and both global.gpu.key and global.gpu.device are set.