# Get Started
The Video Search and Summarization (VSS) sample application helps developers create summaries of long-form video, search for the right video, and combine both search and summarization pipelines. This guide will help you set up, run, and modify the sample application on local and Edge AI systems.
This guide shows how to:
- Set up the sample application: Use the setup script to quickly deploy the application in your environment.
- Run different application modes: Execute the available modes to perform video search and summarization.
- Modify application parameters: Customize settings such as inference models and deployment configurations to adapt the application to your requirements.
## Prerequisites
- Verify that your system meets the minimum requirements.
- Install Docker: Installation Guide.
- Install Docker Compose: Installation Guide.
- Install Python 3.11.
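To quickly confirm the prerequisites are in place, a minimal check (assuming Docker Compose v2 and a `python3` binary on your PATH):

```bash
docker --version          # Docker Engine
docker compose version    # Docker Compose v2 plugin
python3 --version         # expect Python 3.11.x
```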
## Project Structure
The repository is organized as follows:
```
sample-applications/video-search-and-summarization/
├── config                    # Configuration files
│   ├── nginx.conf            # NGINX configuration
│   └── rmq.conf              # RabbitMQ configuration
├── docker                    # Docker Compose files
│   ├── compose.base.yaml     # Base services configuration
│   ├── compose.summary.yaml  # Compose override file for video summarization services
│   ├── compose.search.yaml   # Compose override file for video search services
│   └── compose.gpu_ovms.yaml # GPU configuration for OpenVINO™ model server
├── docs                      # Documentation
│   └── user-guide            # User guides and tutorials
├── pipeline-manager          # Backend service that orchestrates video summarization and search
├── search-ms                 # Video search microservice
├── ui                        # Video search and summarization UI code
├── build.sh                  # Script for building application images
├── setup.sh                  # Setup script for environment and deployment
└── README.md                 # Project documentation
```
## Set Required Environment Variables
Before running the application, you need to set several environment variables:
Configure the registry: The application uses the registry URL and tag to pull the required images.
```bash
export REGISTRY_URL=intel
export TAG=1.2.2
```
Set required credentials for some services: The following variables MUST be set in your current shell before running the setup script:
```bash
# MinIO credentials (object storage)
export MINIO_ROOT_USER=<your-minio-username>
export MINIO_ROOT_PASSWORD=<your-minio-password>

# PostgreSQL credentials (database)
export POSTGRES_USER=<your-postgres-username>
export POSTGRES_PASSWORD=<your-postgres-password>

# RabbitMQ credentials (message broker)
export RABBITMQ_USER=<your-rabbitmq-username>
export RABBITMQ_PASSWORD=<your-rabbitmq-password>
```
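Before running the setup script, an optional pre-flight sketch to confirm every credential above is exported in the current shell (uses bash indirect expansion; the variable names match the list above):

```bash
# Warn about any required credential that is not set in this shell.
for v in MINIO_ROOT_USER MINIO_ROOT_PASSWORD \
         POSTGRES_USER POSTGRES_PASSWORD \
         RABBITMQ_USER RABBITMQ_PASSWORD; do
  [ -n "${!v}" ] || echo "Missing: $v"
done
```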
Set environment variables for customizing model selection:
Set these variables in your current shell; they let you customize the models used for deployment.
```bash
# For VLM-based chunk captioning and video summarization on CPU
export VLM_MODEL_NAME="Qwen/Qwen2.5-VL-3B-Instruct"  # or any other supported VLM model on CPU

# For VLM-based chunk captioning and video summarization on GPU
export VLM_MODEL_NAME="microsoft/Phi-3.5-vision-instruct"  # or any other supported VLM model on GPU

# (Optional) For OVMS-based video summarization
# (used with ENABLE_OVMS_LLM_SUMMARY=true or ENABLE_OVMS_LLM_SUMMARY_GPU=true)
export OVMS_LLM_MODEL_NAME="Intel/neural-chat-7b-v3-3"  # or any other supported LLM model

# Model(s) used by the Audio Analyzer service. Only Whisper model variants are supported.
# Common supported models: tiny.en, small.en, medium.en, base.en, large-v1, large-v2, large-v3.
# Provide a single model or a comma-separated list of models.
export ENABLED_WHISPER_MODELS="tiny.en,small.en,medium.en"

# Object detection model used by the Video Ingestion service. Only YOLO models are supported.
export OD_MODEL_NAME="yolov8l-worldv2"

# EMBEDDING MODELS
# Set this when using the --search option to run the application in video search mode.
# It enables a multimodal embedding model that generates correlated text and image embeddings.
# Only the openai/clip-vit-base model is supported as of now.
export VCLIP_MODEL=openai/clip-vit-base-patch32

# Set this when using the --all option to run the application in combined
# summarization and search mode. Only Qwen/Qwen3-Embedding-0.6B is supported as of now.
export QWEN_MODEL=Qwen/Qwen3-Embedding-0.6B
```
Configure Directory Watcher (Video Search Mode Only):
For automated video ingestion in search mode, you can use the directory watcher service:
```bash
# Path to the directory to watch on the host system.
# Default: "edge-ai-libraries/sample-applications/video-search-and-summarization/data"
export VS_WATCHER_DIR="/path/to/your/video/directory"
```
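As an illustration, a hypothetical flow for watching a dedicated folder (the directory path and video file name below are placeholders; ingestion behavior is described in the Directory Watcher Service Guide):

```bash
# Create a dedicated folder and point the watcher at it
export VS_WATCHER_DIR="$HOME/vss-videos"
mkdir -p "$VS_WATCHER_DIR"

# Drop a video into the watched folder for automatic ingestion (placeholder file)
cp ~/Videos/warehouse_clip.mp4 "$VS_WATCHER_DIR/"
```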
Directory Watcher: For complete setup instructions, configuration options, and usage details, see the Directory Watcher Service Guide. This service only works with the `--search` mode.

Set advanced VLM Configuration Options:
The following environment variables provide additional control over VLM inference behavior and logging:
```bash
# (Optional) OpenVINO configuration for VLM inference optimization.
# Pass OpenVINO configuration parameters as a JSON string to fine-tune inference performance.

# Default latency-optimized configuration (equivalent to not setting OV_CONFIG):
# export OV_CONFIG='{"PERFORMANCE_HINT": "LATENCY"}'

# Throughput-optimized configuration:
export OV_CONFIG='{"PERFORMANCE_HINT": "THROUGHPUT"}'
```
IMPORTANT: The `OV_CONFIG` variable is used to pass OpenVINO configuration parameters to the VLM service. It allows you to optimize inference performance based on your hardware and workload. For a complete list of OpenVINO configuration options, refer to the OpenVINO Documentation. Note: If `OV_CONFIG` is not set, the default configuration `{"PERFORMANCE_HINT": "LATENCY"}` is used.
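As a sketch, `OV_CONFIG` can carry several OpenVINO properties in a single JSON object. `NUM_STREAMS` and `CACHE_DIR` below are standard OpenVINO properties shown as assumptions; confirm in the OpenVINO documentation that they apply to your hardware and are honored by the VLM service:

```bash
# Illustrative only: combine several OpenVINO properties in one JSON string.
export OV_CONFIG='{"PERFORMANCE_HINT": "THROUGHPUT", "NUM_STREAMS": "2", "CACHE_DIR": "/tmp/ov_cache"}'
```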
### Work with Gated Models
To run a GATED MODEL, such as the Llama models, you must pass your Hugging Face token. Request access to the specific model on its model page on the Hugging Face website.
Go to https://huggingface.co/settings/tokens to get your token.
```bash
export GATED_MODEL=true
export HUGGINGFACE_TOKEN=<your_huggingface_token>
```
Once these are exported, run the setup script as described here. When you no longer use gated models, switch off the flag by running `export GATED_MODEL=false`; this avoids an unnecessary authentication step during setup.
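For example, a hypothetical end-to-end flow for a gated model (the token and model name are placeholders; confirm the model is supported by the application before using it):

```bash
export GATED_MODEL=true
export HUGGINGFACE_TOKEN=hf_xxxxxxxxxxxxxxxx   # your token from huggingface.co/settings/tokens
export VLM_MODEL_NAME="meta-llama/Llama-3.2-11B-Vision-Instruct"  # example gated VLM, assumption
source setup.sh --summary

# Later, when you switch back to non-gated models:
export GATED_MODEL=false
```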
## Application Mode Overview
The application offers multiple modes and deployment options:
| Mode | Description | Flag (used with setup script) |
|---|---|---|
| Video Summarization | Video frame captioning and summarization | `--summary` |
| Video Search | Video indexing and semantic search | `--search` |
| Video Search + Summarization | Both search and summarization capabilities | `--all` |
Automated Video Ingestion: The Video Search mode includes an optional Directory Watcher service for automated video processing. See the Directory Watcher Service Guide for details on setting up automatic video monitoring and ingestion.
### Deployment Options for Video Summarization
| Deployment Option | Chunk-Wise Summary (1) Configuration | Final Summary (2) Configuration | Environment Variables to Set | Recommended Models | Recommended Usage Model |
|---|---|---|---|---|---|
| VLM-CPU | vlm-openvino-serving on CPU | vlm-openvino-serving on CPU | Default | VLM: `Qwen/Qwen2.5-VL-3B-Instruct` | For usage with CPUs only; when inference speed is not a priority. |
| VLM-GPU | vlm-openvino-serving on GPU | vlm-openvino-serving on GPU | `ENABLE_VLM_GPU=true` | VLM: `microsoft/Phi-3.5-vision-instruct` | For usage with CPUs and GPUs; when inference speed is a priority. |
| VLM-CPU-OVMS-CPU | vlm-openvino-serving on CPU | OVMS microservice on CPU | `ENABLE_OVMS_LLM_SUMMARY=true` | VLM: `Qwen/Qwen2.5-VL-3B-Instruct`, LLM: `Intel/neural-chat-7b-v3-3` | For usage with CPUs and microservices; when inference speed is not a priority. |
| VLM-CPU-OVMS-GPU | vlm-openvino-serving on CPU | OVMS microservice on GPU | `ENABLE_OVMS_LLM_SUMMARY_GPU=true` | VLM: `Qwen/Qwen2.5-VL-3B-Instruct`, LLM: `Intel/neural-chat-7b-v3-3` | For usage with CPUs, GPUs, and microservices; when inference speed is a priority. |
Notes: (1) Chunk-Wise Summary breaks the video into chunks and summarizes each chunk. (2) Final Summary summarizes the whole video.
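For example, a minimal sketch of selecting the VLM-CPU-OVMS-GPU option, using the model recommendations above:

```bash
# Models for chunk captioning (VLM) and the final summary (OVMS LLM)
export VLM_MODEL_NAME="Qwen/Qwen2.5-VL-3B-Instruct"
export OVMS_LLM_MODEL_NAME="Intel/neural-chat-7b-v3-3"

# Chunk captions on CPU, final summary served by OVMS on GPU
ENABLE_OVMS_LLM_SUMMARY_GPU=true source setup.sh --summary
```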
## Run the Application
Note for Edge Microvisor Toolkit Users

If you are running the VSS application on an OS image built with the Edge Microvisor Toolkit (an Azure Linux-based build pipeline for Intel® platforms), you must install the following package:
```bash
sudo dnf install mesa-libGL

# If you are using TDNF, install it with:
sudo tdnf search mesa-libGL
sudo tdnf install mesa-libGL
```

Installing `mesa-libGL` provides the OpenGL library needed by the Audio Analyzer service.
Follow these steps to run the application:
Clone the repository and navigate to the project directory:
```bash
git clone https://github.com/open-edge-platform/edge-ai-libraries.git
cd edge-ai-libraries/sample-applications/video-search-and-summarization
```
Set the required environment variables as described here.
Run the setup script with the appropriate flag, depending on your use case.
Note: Before switching to a different mode, always stop the current application mode by running:
```bash
source setup.sh --down
```
Clean-up Tip: If you encounter issues or want to completely reset the application data, use `source setup.sh --clean-data` to stop all containers and remove all Docker volumes, including user data. This provides a fresh start for troubleshooting.
To run Video Summarization only:
```bash
source setup.sh --summary
```
To run Video Search only:
```bash
source setup.sh --search
```
Directory Watcher: For automated video ingestion and processing in search mode, see the Directory Watcher Service Guide to learn how to set up automatic monitoring and processing of video files from a specified directory.
To run unified Video Search and Summarization:
```bash
source setup.sh --all
```
To run Video Summarization with the OpenVINO model server microservice for the final summary:
```bash
ENABLE_OVMS_LLM_SUMMARY=true source setup.sh --summary
```
(Optional) Verify the resolved environment variables and setup configurations:
```bash
# Set environment variables only, without starting containers
source setup.sh --setenv

# Show resolved configurations for summarization services without starting containers
source setup.sh --summary config

# Show resolved configurations for search services without starting containers
source setup.sh --search config

# Show resolved configurations for combined search and summarization services without starting containers
source setup.sh --all config

# Show resolved configurations for summarization with the OpenVINO model server on CPU without starting containers
ENABLE_OVMS_LLM_SUMMARY=true source setup.sh --summary config
```
## Use GPU Acceleration
To use GPU acceleration for VLM inference:
Note: Before switching to a different mode, always stop the current application mode by running:
```bash
source setup.sh --down
```

```bash
ENABLE_VLM_GPU=true source setup.sh --summary
```
To use GPU acceleration for OpenVINO model server-based summarization:
```bash
ENABLE_OVMS_LLM_SUMMARY_GPU=true source setup.sh --summary
```
To use GPU acceleration for the vclip-embedding-ms service in the search use case:
```bash
ENABLE_EMBEDDING_GPU=true source setup.sh --search
```
To verify the configuration and resolved environment variables without running the application:
```bash
# For VLM inference on GPU
ENABLE_VLM_GPU=true source setup.sh --summary config

# For OVMS inference on GPU
ENABLE_OVMS_LLM_SUMMARY_GPU=true source setup.sh --summary config

# For vclip-embedding-ms on GPU
ENABLE_EMBEDDING_GPU=true source setup.sh --search config
```
Note: Avoid setting the `ENABLE_VLM_GPU`, `ENABLE_OVMS_LLM_SUMMARY_GPU`, or `ENABLE_EMBEDDING_GPU` flags explicitly on the shell using `export`, because you would then also need to switch these flags off to return to the CPU configuration.
## Access the Application
After successfully starting the application, open a browser and go to http://<host-ip>:12345 to access the application dashboard.
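To quickly confirm the dashboard endpoint is reachable, for example:

```bash
# Replace <host-ip> with your host address (or use localhost on the same machine).
# Expect an HTTP success or redirect status if the dashboard is up.
curl -I http://<host-ip>:12345
```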
## CLI Usage
Refer to CLI Usage for details on using the application from a text user interface (terminal-based UI).
## Running in Kubernetes Cluster
Refer to Deploy with Helm for details. Ensure the prerequisites mentioned on this page are addressed before deploying with the Helm chart.
## Advanced Setup Options
For alternative ways to set up the sample application, see:
## Supporting Resources
## Troubleshooting
### Containers have started but the application is not working
You can try resetting the volume storage by deleting the previously created volumes:
Note: This step does not apply when you are setting up the application for the first time.
```bash
source setup.sh --clean-data
```
### VLM Microservice Model Loading Issues
Problem: VLM microservice fails to load or save models with permission errors, or you see errors related to model access in the logs.
Cause: This issue occurs when the `ov-models` Docker volume was created with incorrect ownership (root user) in previous versions of the application. The VLM microservice runs as a non-root user and requires proper permissions to read/write models.
Symptoms:
- VLM microservice container fails to start or crashes during model loading
- Permission denied errors in VLM service logs
- Model conversion or caching failures
- Error messages mentioning `/home/appuser/.cache/huggingface` or `/app/ov-model` access issues
Solution:
Stop the running application:
```bash
source setup.sh --down
```
Remove the existing `ov-models` (old volume name) and `docker_ov-models` (updated volume name) Docker volumes:

```bash
docker volume rm ov-models docker_ov-models
```
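To confirm the volumes are gone before restarting, for example:

```bash
# Prints nothing if both volumes were removed successfully
docker volume ls --format '{{.Name}}' | grep ov-models
```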
Restart the application (the volume will be recreated with correct permissions):
```bash
# For Video Summarization
source setup.sh --summary

# Or for Video Search
source setup.sh --search
```
Note: Removing the `ov-models` or `docker_ov-models` volume will delete any previously cached or converted models. The VLM service will automatically re-download and convert models on the next startup, which may take additional time depending on your internet connection and the model size.
Prevention: This issue has been fixed in the current version of the VLM microservice Dockerfile. New installations will automatically create the volume with correct permissions.
### VLM Final Summary Hallucination Issues
Problem: The final summary generated by the VLM microservice contains hallucinated or inaccurate information that doesn't reflect the actual video content.
Cause: This issue can occur when using smaller VLM models that may not have sufficient capacity to accurately process and summarize complex video content, leading to generation of plausible but incorrect information.
Symptoms:
- The final summary contains information not present in the video
- The summary describes events, objects, or activities that don't actually occur in the video
- Inconsistent or contradictory information in the generated summary
- Summary quality is poor despite chunk-wise summaries being accurate
Solution:
Try using a larger, more capable VLM model by updating the `VLM_MODEL_NAME` environment variable:
Stop the running application:
```bash
source setup.sh --down
```
Set a larger VLM model (e.g., upgrade from 3B to 7B parameters):
```bash
export VLM_MODEL_NAME="Qwen/Qwen2.5-VL-7B-Instruct"
```
Restart the application:
```bash
source setup.sh --summary
```
Alternative Models to Try:
- For CPU: `Qwen/Qwen2.5-VL-7B-Instruct` (larger version)
- For GPU: Consider other supported VLM models with higher parameter counts
Note: Larger models will require more system resources (RAM or VRAM) and may have longer inference times, but typically provide more accurate and coherent summaries.