Get Started Guide#

  • Time to Complete: 30 mins

  • Programming Language: Python

Get Started#

Prerequisites#

Step 1: Build#

Clone the source code repository if you don’t already have it:

git clone https://github.com/open-edge-platform/edge-ai-suites.git

Start from the metro-ai-suite directory:

cd edge-ai-suites/metro-ai-suite

Run the commands to build images for the microservices:

git clone https://github.com/open-edge-platform/edge-ai-libraries.git
cd edge-ai-libraries/microservices

docker build -t dataprep-visualdata-milvus:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f visual-data-preparation-for-retrieval/milvus/src/Dockerfile .

docker build -t retriever-milvus:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f vector-retriever/milvus/src/Dockerfile .

cd vlm-openvino-serving
docker build -t vlm-openvino-serving:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f docker/Dockerfile .

cd ../../..

Run the command to build the image for the application:

docker build -t visual-search-qa-app:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f visual-search-question-and-answering/src/Dockerfile .
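
Optionally, verify that all four images were built. A quick check, filtering on the image names used in the commands above:

docker images | grep -E "dataprep-visualdata-milvus|retriever-milvus|vlm-openvino-serving|visual-search-qa-app"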

Step 2: Prepare host directories for models and data#

mkdir -p $HOME/.cache/huggingface
mkdir -p $HOME/models
mkdir -p $HOME/data

If you would like to test the application with a demo dataset, please continue and follow the instructions in the Try with a demo dataset section later in this guide.

Otherwise, if you would like to use your own data (images and video), make sure to put them all in the created data directory ($HOME/data in the example commands above) BEFORE deploying the services.

Note: supported media types: jpg, png, mp4
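
For example, copying your own media into the data directory could look like this (the file names below are placeholders):

cp /path/to/my_photo.jpg /path/to/my_clip.mp4 $HOME/data/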

Step 3: Deploy#

Option 2: Deploy the application with the Milvus Server deployed separately#

If you have customized requirements for the Milvus Server, you can start the Milvus Server separately and run only the visual search and QA services:

cd visual-search-question-and-answering/
cd deployment/docker-compose/

source env.sh # refer to Option 1 for model selection

docker compose -f compose.yaml up -d
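
Optionally, verify that the containers came up, for example:

docker compose -f compose.yaml ps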

Option 3: Deploy the application in Kubernetes#

Please refer to Deploy with helm for details.

Try with a demo dataset#

*Applicable to deployment with Option 1 or 2 (docker compose deployment).

Prepare demo dataset DAVIS#

Create a prepare_demo_dataset.sh script as follows:

#!/bin/bash

# Find containers created from the dataprep-visualdata-milvus image
CONTAINER_IDS=$(docker ps -a --filter "ancestor=dataprep-visualdata-milvus" -q)

# Check if any containers were found
if [ -z "$CONTAINER_IDS" ]; then
  echo "No containers found"
  exit 0
fi

CONTAINER_IDS=($CONTAINER_IDS)
NUM_CONTAINERS=${#CONTAINER_IDS[@]}

# Download and prepare the DAVIS demo dataset inside the first matching container
docker exec -it ${CONTAINER_IDS[0]} bash -c "python example/example_utils.py -d DAVIS"
exit 0

Run the script, then check your host data directory ($HOME/data) to confirm that the DAVIS dataset is there.

bash prepare_demo_dataset.sh

To save time, only a subset of the dataset is processed. The subset is stored in $HOME/data/DAVIS/subset; use this path in the next step.

This script only works when the dataprep-visualdata-milvus service is available.
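
Optionally, list the subset directly on the host to confirm it was created, for example:

ls $HOME/data/DAVIS/subset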

Use it on Web UI#

Go to http://{host_ip}:17580 in a browser. Enter the exact path to the demo dataset subset (usually /home/user/data/DAVIS/subset; it may vary according to your local username) into the “file directory on host” field. Click UpdateDB. Wait for a while, then click showInfo. You should see that the number of processed files is 25.

Try searching with the prompt “tractor” and check whether the results are relevant.

Expected valid inputs are “car-race”, “deer”, “guitar-violin”, “gym”, “helicopter”, “carousel”, “monkeys-trees”, “golf”, “rollercoaster”, “horsejump-stick”, “planes-crossing”, “tractor”

Try ticking a search result, then ask a question about the selected media in the left-side chat box.

Note: for each chat request, you may select either a single image, multiple images, or a single video. Multiple videos or a mix of images and videos are not supported yet.

Performance#

You can check the end-to-end response time for each round of question-and-answering in the chat history.

Summary#

In this get started guide, you learned how to:

  • Build the microservice images

  • Deploy the application with the microservices

  • Try the application with a demo dataset

Learn More#

  • Check the System requirements

  • Explore more functionalities in Tutorials.

  • Understand the components, services, architecture, and data flow in the Overview.

Troubleshooting#

Error Logs#

  • Check the container logs if a microservice misbehaves (see the example after this list for locating a container by its image name):

docker logs <container_id>

  • Click the showInfo button on the web UI to get essential information about the microservices
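
If you do not know the container ID, a sketch for finding a container by its image name and following its logs (vlm-openvino-serving is used here as an example; adjust the image name as needed):

docker ps --filter "ancestor=vlm-openvino-serving:latest" -q
docker logs -f <container_id>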

VLM Microservice Model Loading Issues#

Problem: VLM microservice fails to load or save models with permission errors, or you see errors related to model access in the logs.

Cause: This issue occurs when the ov-models Docker volume was created with incorrect ownership (root user) in previous versions of the application. The VLM microservice runs as a non-root user and requires proper permissions to read/write models.

Symptoms:

  • VLM microservice container fails to start or crashes during model loading

  • Permission denied errors in VLM service logs

  • Model conversion or caching failures

  • Error messages mentioning /home/appuser/.cache/huggingface or /app/ov-model access issues
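
One way to confirm this cause is to check the ownership of the volume’s backing directory on the host (a sketch; requires root):

# Root ownership of this directory indicates the stale volume described above
sudo ls -ld "$(docker volume inspect ov-models --format '{{ .Mountpoint }}')"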

Solution:

  1. Stop the running application:

    docker compose -f compose_milvus.yaml down
    
  2. Remove the existing ov-models volume:

    docker volume rm ov-models
    
  3. Restart the application (the volume will be recreated with correct permissions):

    source env.sh
    docker compose -f compose_milvus.yaml up -d
    

Note: Removing the ov-models volume will delete any previously cached/converted models. The VLM service will automatically re-download and convert models on the next startup, which may take additional time depending on your internet connection and the model size.
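
After the restart, you can confirm that the volume was recreated and the services are running, for example:

docker volume ls | grep ov-models
docker compose -f compose_milvus.yaml ps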

Known Issues#

  • Sometimes downloading the demo dataset can be slow. Try manually downloading it from the source website, and put the zip file under your host $HOME/data folder.