Get Started#
The Video Search and Summary (VSS) sample application helps developers create summaries of long-form videos, search for the right video, and combine both the search and summary pipelines. This guide will help you set up, run, and modify the sample application on local and Edge AI systems.
This guide shows how to:
Set up the sample application: Use the setup script to quickly deploy the application in your environment.
Run different application stacks: Run the stacks available in the application to perform video search, video summary, or both.
Modify application parameters: Customize settings such as inference models and deployment configurations to adapt the application to your requirements.
Prerequisites#
Verify that your system meets the minimum requirements.
Install Docker tool: Installation Guide.
Install Docker Compose tool: Installation Guide.
Install Python* programming language v3.11
Project Structure#
The repository is organized as follows:
sample-applications/video-search-and-summarization/
├── config                     # Configuration files
│   ├── nginx.conf             # Nginx configuration
│   └── rmq.conf               # RabbitMQ configuration
├── docker                     # Docker compose files
│   ├── compose.base.yaml      # Base services configuration
│   ├── compose.summary.yaml   # Compose override file for video summarization services
│   ├── compose.search.yaml    # Compose override file for video search services
│   └── compose.gpu_ovms.yaml  # GPU configuration for OVMS
├── docs                       # Documentation
│   └── user-guide             # User guides and tutorials
├── pipeline-manager           # Backend service which orchestrates the Video Summarization and Search
├── search-ms                  # Video Search Microservice
├── ui                         # Video Summary and Search UI code
├── build.sh                   # Script for building application images
├── setup.sh                   # Setup script for environment and deployment
└── README.md                  # Project documentation
Setting Required Environment Variables#
Before running the application, you need to set several environment variables:
Registry configuration: The application uses a registry URL and tag to pull the required images.
export REGISTRY_URL=intel
export TAG=1.2.0
Required credentials for some services: The following variables MUST be set in your current shell before running the setup script:
# MinIO credentials (object storage)
export MINIO_ROOT_USER=<your-minio-username>
export MINIO_ROOT_PASSWORD=<your-minio-password>

# PostgreSQL credentials (database)
export POSTGRES_USER=<your-postgres-username>
export POSTGRES_PASSWORD=<your-postgres-password>

# RabbitMQ credentials (message broker)
export RABBITMQ_USER=<your-rabbitmq-username>
export RABBITMQ_PASSWORD=<your-rabbitmq-password>
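Before running the setup script, you can quickly verify that every required credential is exported in the current shell. This is a minimal sketch (not part of the repository); the variable list simply mirrors the exports above:

```bash
# Warn about any required variable that is still unset in the current shell
for var in MINIO_ROOT_USER MINIO_ROOT_PASSWORD \
           POSTGRES_USER POSTGRES_PASSWORD \
           RABBITMQ_USER RABBITMQ_PASSWORD; do
  if [ -z "${!var}" ]; then
    echo "WARNING: $var is not set"
  fi
done
```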
Setting environment variables for customizing model selection:
These environment variables MUST be set in your current shell. They let you customize which models are used for deployment.
# For VLM-based chunk captioning and video summary on CPU
export VLM_MODEL_NAME="Qwen/Qwen2.5-VL-3B-Instruct"  # or any other supported VLM model on CPU

# For VLM-based chunk captioning and video summary on GPU
export VLM_MODEL_NAME="microsoft/Phi-3.5-vision-instruct"  # or any other supported VLM model on GPU

# (Optional) For OVMS-based video summary (when used with ENABLE_OVMS_LLM_SUMMARY=true or ENABLE_OVMS_LLM_SUMMARY_GPU=true)
export OVMS_LLM_MODEL_NAME="Intel/neural-chat-7b-v3-3"  # or any other supported LLM model

# Model used by the Audio Analyzer service. Only Whisper model variants are supported.
# Commonly supported models: tiny.en, small.en, medium.en, base.en, large-v1, large-v2, large-v3.
# You can provide a single model or a comma-separated list of models.
export ENABLED_WHISPER_MODELS="tiny.en,small.en,medium.en"

# Object detection model used by the Video Ingestion service. Only YOLO models are supported.
export OD_MODEL_NAME="yolov8l-worldv2"

# Multimodal embedding model. Only openai/clip-vit-base models are supported.
export VCLIP_MODEL=openai/clip-vit-base-patch32
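Because these variables must live in the shell that runs the setup script, it can be convenient to keep your chosen values in a small env file and source it each time you open a new shell. A sketch, assuming a hypothetical file name vss.env that is not part of the repository:

```bash
# vss.env -- hypothetical helper file holding your model selection
export VLM_MODEL_NAME="Qwen/Qwen2.5-VL-3B-Instruct"
export ENABLED_WHISPER_MODELS="tiny.en,small.en"
export OD_MODEL_NAME="yolov8l-worldv2"
export VCLIP_MODEL=openai/clip-vit-base-patch32
```

Load it into the current shell with `source vss.env` before running the setup script.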
Advanced VLM Configuration Options:
The following environment variables provide additional control over VLM inference behavior and logging:
# (Optional) OpenVINO configuration for VLM inference optimization.
# Pass OpenVINO configuration parameters as a JSON string to fine-tune inference performance.

# Default latency-optimized configuration (equivalent to not setting OV_CONFIG)
# export OV_CONFIG='{"PERFORMANCE_HINT": "LATENCY"}'

# Throughput-optimized configuration
export OV_CONFIG='{"PERFORMANCE_HINT": "THROUGHPUT"}'
IMPORTANT: The `OV_CONFIG` variable is used to pass OpenVINO configuration parameters to the VLM service. It allows you to optimize inference performance based on your hardware and workload. For a complete list of OpenVINO configuration options, refer to the OpenVINO Documentation. Note: If `OV_CONFIG` is not set, the default configuration `{"PERFORMANCE_HINT": "LATENCY"}` will be used.
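Since OV_CONFIG is a JSON string of OpenVINO properties, you can combine several options in one value. The example below is illustrative only; whether a given property helps depends on your hardware and workload, and the values shown are assumptions rather than recommendations:

```bash
# Throughput hint combined with an explicit stream count and a model cache directory
export OV_CONFIG='{"PERFORMANCE_HINT": "THROUGHPUT", "NUM_STREAMS": "2", "CACHE_DIR": "/tmp/ov_cache"}'
```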
Working with Gated Models
To run a GATED MODEL (for example, Llama models), you need to pass your Hugging Face token. First, request access to the specific model on its Hugging Face model page.
Go to https://huggingface.co/settings/tokens to get your token.
export GATED_MODEL=true
export HUGGINGFACE_TOKEN=<your_huggingface_token>
Once exported, run the setup script as described in Running the Application below. When you are no longer using gated models, switch off the `GATED_MODEL` flag by running `export GATED_MODEL=false`; this avoids an unnecessary authentication step during setup.
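Putting the gated-model steps together, a typical session might look like the following (the token value is a placeholder, and the summary stack is just an example):

```bash
# Enable gated-model handling and provide your Hugging Face token
export GATED_MODEL=true
export HUGGINGFACE_TOKEN=<your_huggingface_token>

# Run the setup script for your chosen stack, e.g. summary
source setup.sh --summary

# Later, when you no longer use gated models, switch the flag off
export GATED_MODEL=false
```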
Application Stacks Overview#
The VSS application offers multiple stacks and deployment options:

| Stack | Description | Flag (used with setup script) |
|---|---|---|
| Video Summary | Video frame captioning and summary | `--summary` |
| Video Search | Video indexing and semantic search | `--search` |
| Video Search + Summary (under construction) | Both summary and search capabilities | `--all` |
Deployment Options for Video Summary#
| Option | Chunk-Wise Summary | Final Summary | Environment Variables | Recommended Models |
|---|---|---|---|---|
| VLM-CPU | vlm-openvino-serving on CPU | vlm-openvino-serving on CPU | Default | VLM: `Qwen/Qwen2.5-VL-3B-Instruct` |
| VLM-GPU | vlm-openvino-serving on GPU | vlm-openvino-serving on GPU | `ENABLE_VLM_GPU=true` | VLM: `microsoft/Phi-3.5-vision-instruct` |
| VLM-OVMS-CPU | vlm-openvino-serving on CPU | OVMS Microservice on CPU | `ENABLE_OVMS_LLM_SUMMARY=true` | VLM: `Qwen/Qwen2.5-VL-3B-Instruct`, LLM: `Intel/neural-chat-7b-v3-3` |
| VLM-CPU-OVMS-GPU | vlm-openvino-serving on CPU | OVMS Microservice on GPU | `ENABLE_OVMS_LLM_SUMMARY_GPU=true` | VLM: `Qwen/Qwen2.5-VL-3B-Instruct`, LLM: `Intel/neural-chat-7b-v3-3` |
Running the Application#
Note for EMT (Edge Microvisor Toolkit) Users
If you are running the VSS application on an OS image built with Edge Microvisor Toolkit (EMT), an Azure Linux-based build pipeline for Intel® platforms, you must install the following package:
sudo dnf install mesa-libGL

# If you are using TDNF, you can use the following commands instead:
sudo tdnf search mesa-libGL
sudo tdnf install mesa-libGL

Installing mesa-libGL provides the OpenGL library, which is needed by the Audio Analyzer service.
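To confirm that the OpenGL library is available after installation, you can check the dynamic linker cache. This is just a quick sanity check, not a step required by the setup script:

```bash
# Verify that libGL is registered with the dynamic linker
ldconfig -p | grep libGL
```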
Follow these steps to run the application:
Clone the repository and navigate to the project directory:
git clone https://github.com/open-edge-platform/edge-ai-libraries.git -b release-1.2.0
cd edge-ai-libraries/sample-applications/video-search-and-summarization
Set the required environment variables as described above.
Run the setup script with the appropriate flag, depending on your use case.
NOTE: Before switching to a different mode, always stop the current application stack by running:
source setup.sh --down
To run Video Summary only:
source setup.sh --summary
To run Video Search only:
source setup.sh --search
To run Video Summary with the OVMS Microservice for the final summary:
ENABLE_OVMS_LLM_SUMMARY=true source setup.sh --summary
(Optional) Verify the resolved environment variables and setup configurations.
# To just set environment variables without starting containers
source setup.sh --setenv

# To see resolved configurations for summary services without starting containers
source setup.sh --summary config

# To see resolved configurations for search services without starting containers
source setup.sh --search config

# To see resolved configurations for both search and summary services combined without starting containers
source setup.sh --all config

# To see resolved configurations for summary services with OVMS setup on CPU without starting containers
ENABLE_OVMS_LLM_SUMMARY=true source setup.sh --summary config
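After starting a stack, you can confirm that its containers came up. This is a quick check; the exact container names depend on the stack you started:

```bash
# List running containers with their status and published ports
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
```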
Using GPU Acceleration#
To use GPU acceleration for VLM inference:
NOTE: Before switching to a different mode, always stop the current application stack by running:
source setup.sh --down
ENABLE_VLM_GPU=true source setup.sh --summary
To use GPU acceleration for OVMS-based summary:
ENABLE_OVMS_LLM_SUMMARY_GPU=true source setup.sh --summary
To use GPU acceleration for vclip-embedding-ms in the search use case:
ENABLE_EMBEDDING_GPU=true source setup.sh --search
To verify configuration and resolved environment variables without running the application:
# For VLM inference on GPU
ENABLE_VLM_GPU=true source setup.sh --summary config
# For OVMS inference on GPU
ENABLE_OVMS_LLM_SUMMARY_GPU=true source setup.sh --summary config
# For vclip-embedding-ms on GPU
ENABLE_EMBEDDING_GPU=true source setup.sh --search config
NOTE: Avoid setting `ENABLE_VLM_GPU`, `ENABLE_OVMS_LLM_SUMMARY_GPU`, or `ENABLE_EMBEDDING_GPU` explicitly in your shell using `export`, because you would then also need to switch these flags off to return to the CPU configuration.
Accessing the Application#
After successfully starting the application, open a browser and go to http://<host-ip>:12345 to access the application dashboard.
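If the dashboard does not load, a quick way to check that the UI port is reachable from the host itself (assuming the default port 12345 shown above):

```bash
# Expect an HTTP status code such as 200 if the UI service is up
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:12345
```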
Running in Kubernetes#
Refer to Deploy with Helm for the details. Ensure the prerequisites mentioned on this page are addressed before proceeding to deploy with Helm.
Advanced Setup Options#
For alternative ways to set up the sample application, see:
Supporting Resources#
Troubleshooting#
Containers started but the application is not working#
You can try resetting the volume storage by deleting the previously created volumes with the following commands:
source setup.sh --down
docker volume rm audio_analyzer_data data-prep
NOTE: This step does not apply when you are setting up the application for the first time.
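If the volume names differ on your system (for example, because of a Docker Compose project prefix), list them first and adjust the `docker volume rm` command accordingly:

```bash
# Show Docker volumes whose names match the ones used by the application
docker volume ls | grep -E 'audio_analyzer_data|data-prep'
```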
VLM Microservice Model Loading Issues#
Problem: VLM microservice fails to load or save models with permission errors, or you see errors related to model access in the logs.
Cause: This issue occurs when the `ov-models` Docker volume was created with incorrect ownership (root user) by previous versions of the application. The VLM microservice runs as a non-root user and requires proper permissions to read/write models.
Symptoms:
VLM microservice container fails to start or crashes during model loading
Permission denied errors in VLM service logs
Model conversion or caching failures
Error messages mentioning `/home/appuser/.cache/huggingface` or `/app/ov-model` access issues
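To confirm that volume ownership is the culprit, you can inspect the volume's mount point on the host and check who owns it. A sketch; the volume may be named ov-models or docker_ov-models depending on which version of the application created it:

```bash
# Locate the volume's data directory on the host
docker volume inspect docker_ov-models --format '{{ .Mountpoint }}'

# Root-owned contents here would explain the permission errors described above
sudo ls -ld "$(docker volume inspect docker_ov-models --format '{{ .Mountpoint }}')"
```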
Solution:
Stop the running application:
source setup.sh --down
Remove the existing `ov-models` (old volume name) and `docker_ov-models` (updated volume name) Docker volumes:
docker volume rm ov-models docker_ov-models
Restart the application (the volume will be recreated with correct permissions):
# For Video Summary
source setup.sh --summary

# Or for Video Search
source setup.sh --search
Note: Removing the `ov-models`/`docker_ov-models` volume will delete any previously cached or converted models. The VLM service will automatically re-download and convert models on the next startup, which may take additional time depending on your internet connection and the model size.
Prevention: This issue has been fixed in the current version of the VLM microservice Dockerfile. New installations will automatically create the volume with correct permissions.
VLM Final Summary Hallucination Issues#
Problem: The final summary generated by the VLM microservice contains hallucinated or inaccurate information that doesn't reflect the actual video content.
Cause: This issue can occur when using smaller VLM models that may not have sufficient capacity to accurately process and summarize complex video content, leading to generation of plausible but incorrect information.
Symptoms:
Final summary contains information not present in the video
Summary describes events, objects, or activities that don't actually occur in the video
Inconsistent or contradictory information in the generated summary
Summary quality is poor despite chunk-wise summaries being accurate
Solution:
Try using a larger, more capable VLM model by updating the `VLM_MODEL_NAME` environment variable:
Stop the running application:
source setup.sh --down
Set a larger VLM model (e.g., upgrade from 3B to 7B parameters):
export VLM_MODEL_NAME="Qwen/Qwen2.5-VL-7B-Instruct"
Restart the application:
source setup.sh --summary
Alternative Models to Try:
For CPU: `Qwen/Qwen2.5-VL-7B-Instruct` (larger version)
For GPU: Consider other supported VLM models with higher parameter counts
Note: Larger models will require more system resources (RAM/VRAM) and may have longer inference times, but typically provide more accurate and coherent summaries.