Deploy with Helm
This section shows how to deploy the Live Video Alert Agent using the Helm chart.
Prerequisites
Before you begin, ensure that you have the following:
- Kubernetes* cluster set up and running. The cluster must support dynamic provisioning of Persistent Volumes (PVs). Refer to the Kubernetes Dynamic Provisioning Guide for more details.
- `kubectl` installed on your system, with access to the Kubernetes cluster. See the Installation Guide.
- Helm installed on your system. See the Installation Guide.
- Storage requirement: the chart creates a 10 Gi PVC for the VLM model on first run.
Helm Chart Installation
1. Acquire the Helm Chart
Option 1: Get the chart from Docker Hub
helm pull oci://registry-1.docker.io/intel/live-video-alert-agent-chart --version <version-no>
tar -xvf live-video-alert-agent-chart-<version-no>.tgz
cd live-video-alert-agent-chart
Refer to the Release Notes for the latest version.
Option 2: Install from Source
git clone https://github.com/open-edge-platform/edge-ai-suites.git edge-ai-suites
cd edge-ai-suites/metro-ai-suite/live-video-analysis/live-video-alert-agent/chart
2. Configure Required Values
Edit user_values_override.yaml with values for your environment:
| Key | Description | Example |
|---|---|---|
|  | (Required) IP of a cluster node reachable by your browser |  |
| `global.keepPvc` | Retain the model PVC on uninstall (avoids ~10 min re-download) | `true` |
|  | Required only for gated models on first download |  |
|  | HTTP proxy for outbound connections |  |
|  | HTTPS proxy for outbound connections |  |
|  | Addresses that bypass the proxy |  |
|  | Enable Intel GPU for VLM inference |  |
|  | GPU resource key from the device plugin |  |
|  | Target device for inference |  |
|  | Linux group IDs for GPU device access |  |
|  | RTSP stream URL to load at startup (optional) |  |
|  | Schedule the app pod on a specific node |  |
Note: `user_values_override.yaml` may contain credentials. Do not commit it to version control.
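As a starting point, a minimal `user_values_override.yaml` could look like the sketch below. Only `global.keepPvc` is a key name confirmed elsewhere in this guide; every other key shown is an illustrative placeholder, so check the chart's own `values.yaml` for the exact names your version expects.

```yaml
# Illustrative sketch only. Apart from global.keepPvc, the key names
# below are placeholders; consult the chart's values.yaml for real keys.
global:
  keepPvc: true                    # retain the model PVC across uninstalls
  # hostIp: 192.168.1.50           # placeholder: node IP reachable from your browser
  # httpProxy: http://proxy:911    # placeholder: HTTP proxy for outbound connections
```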
3. Build Helm Dependencies
helm dependency build
4. Set and Create a Namespace
my_release=lva
my_namespace=lva
kubectl create namespace $my_namespace || true
Note: All subsequent steps assume `my_release` and `my_namespace` are set in your shell session. The `|| true` makes the namespace creation safe to re-run.
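The `|| true` pattern is plain shell, not anything Kubernetes-specific: it absorbs the non-zero exit code of a command that fails because the resource already exists, so the step becomes idempotent. A minimal local sketch, using `mkdir` instead of `kubectl` so it runs without a cluster:

```shell
set -e                      # abort on any unguarded failure
demo_dir=$(mktemp -d)/ns
mkdir "$demo_dir" || true   # first run: creates the directory
mkdir "$demo_dir" || true   # re-run: mkdir fails, but || true absorbs it
echo "idempotent"
```

Without `|| true`, the second `mkdir` would terminate the script under `set -e`, just as a repeated `kubectl create namespace` would fail on an existing namespace.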
5. Deploy the Helm Chart
helm install $my_release . -f user_values_override.yaml -n $my_namespace
6. Verify the Deployment
kubectl get pods -n $my_namespace
kubectl get svc -n $my_namespace
Before proceeding, ensure all pods show Running status and 1/1 in the READY column.
Note: The OVMS pod may take up to 10 minutes on first start while the VLM model is downloaded. Set `global.keepPvc: true` to retain the model across reinstalls.
7. Access the Application
node_ip=$(kubectl get pods -l app.kubernetes.io/component=app -n $my_namespace -o jsonpath='{.items[0].status.hostIP}')
app_port=$(kubectl get svc -l app.kubernetes.io/component=app -n $my_namespace -o jsonpath='{.items[0].spec.ports[0].nodePort}')
echo "http://${node_ip}:${app_port}"
Open the printed URL in your browser to access the Live Video Alert Agent dashboard.
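To see how the final URL is assembled from the two lookups above, here is the same composition with hardcoded example values; the IP and port are placeholders, and on a real cluster they come from the `kubectl` queries:

```shell
node_ip="192.168.1.50"   # placeholder: hostIP of the node running the app pod
app_port="30080"         # placeholder: NodePort assigned to the app service
dashboard_url="http://${node_ip}:${app_port}"
echo "$dashboard_url"    # prints http://192.168.1.50:30080
```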
8. Uninstall the Helm Chart
helm uninstall $my_release -n $my_namespace
PVC retention on uninstall is controlled by global.keepPvc. To delete the PVC manually:
kubectl delete pvc ${my_release}-ovms-models -n $my_namespace
Upgrading
After modifying subchart sources or pulling a new chart version, rebuild dependencies before redeploying:
helm dependency build
helm upgrade $my_release . -f user_values_override.yaml -n $my_namespace
Troubleshooting
Pods stuck in `Pending`: Check storage availability and node capacity.
kubectl describe pod <pod-name> -n $my_namespace
kubectl get events -n $my_namespace --sort-by='.metadata.creationTimestamp'
OVMS pod slow to start: Expected on first deploy; the VLM model (~2 GB) is downloading. Monitor with:
kubectl logs -n $my_namespace deployment/${my_release}-ovms --follow
ImagePullBackOff: Check the image name and tag overrides in `user_values_override.yaml`. Ensure the registry is reachable.
GPU not working: Verify the device plugin resource key with `kubectl describe node <gpu-node> | grep gpu.intel.com`. Check logs:
kubectl logs <pod-name> -n $my_namespace