Get Started Guide#

  • Time to Complete: 30 mins

  • Programming Language: Python

Get Started#

Prerequisites#

Step 1: Get the Docker images#

Option 1: Build from source#

Clone the source code repository if you don't already have it:

git clone https://github.com/open-edge-platform/edge-ai-suites.git

Start from the metro-ai-suite directory:

cd edge-ai-suites/metro-ai-suite

Run the following commands to build the microservice images:

git clone https://github.com/open-edge-platform/edge-ai-libraries.git
cd edge-ai-libraries/microservices

docker build -t dataprep-visualdata-milvus:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f visual-data-preparation-for-retrieval/milvus/src/Dockerfile .

docker build -t retriever-milvus:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f vector-retriever/milvus/src/Dockerfile .

cd vlm-openvino-serving
docker build -t vlm-openvino-serving:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f docker/Dockerfile .

cd ../multimodal-embedding-serving
docker build -t multimodal-embedding-serving:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f docker/Dockerfile .

cd ../../..

Run the following command to build the application image:

docker build -t visual-search-qa-app:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f visual-search-question-and-answering/src/Dockerfile .
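After the builds finish, it can be worth confirming all five images exist locally before deploying. A minimal sketch (image names and tags as built above):

```shell
# Optional sanity check: confirm each freshly built image is present locally.
# Prints OK or MISS per image; MISS means the corresponding build failed or
# was skipped.
for img in dataprep-visualdata-milvus retriever-milvus vlm-openvino-serving \
           multimodal-embedding-serving visual-search-qa-app; do
  if docker image inspect "${img}:latest" >/dev/null 2>&1; then
    echo "OK   ${img}:latest"
  else
    echo "MISS ${img}:latest"
  fi
done
```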

Option 2: Use remote prebuilt images#

Set a remote registry by exporting environment variables:

export REGISTRY="intel/"
export TAG="latest"
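With REGISTRY and TAG exported, the deployment files can resolve image references against the remote registry instead of local builds. A quick illustration of the expected expansion (the exact variable usage may differ per compose file):

```shell
# Assumes REGISTRY and TAG are exported as shown above.
export REGISTRY="intel/"
export TAG="latest"

# A compose reference of the form ${REGISTRY}<service>:${TAG}
# would then expand like this:
IMAGE="${REGISTRY}retriever-milvus:${TAG}"
echo "$IMAGE"   # prints intel/retriever-milvus:latest
```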

Step 2: Prepare host directories for models and data#

mkdir -p $HOME/data

If you would like to test the application with a demo dataset, continue and follow the instructions in the Try with a demo dataset section later in this guide.

Otherwise, to use your own data (images and videos), put it all in the data directory created above ($HOME/data in the example command) and make sure that path matches the HOST_DATA_PATH variable in deployment/docker-compose/env.sh BEFORE deploying the services.

Note: Supported media types are jpg, png, and mp4.
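The directory setup above can be sketched in one place, keeping the host path and the compose variable in sync (HOST_DATA_PATH is the variable named in deployment/docker-compose/env.sh; adjust the path if yours differs):

```shell
# Create the host data directory and export the matching variable.
# env.sh in deployment/docker-compose expects HOST_DATA_PATH to point here;
# set it before deploying if you use a different location.
export HOST_DATA_PATH="$HOME/data"
mkdir -p "$HOST_DATA_PATH"
```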

Step 3: Deploy#

Option 2: Deploy in Kubernetes#

Refer to Deploy with helm for details.

Try with a demo dataset#

*Applicable to Docker Compose deployments (Option 1).

Prepare demo dataset DAVIS#

Create a prepare_demo_dataset.sh script as follows:

#!/bin/bash
# Find the ID(s) of the running dataprep-visualdata-milvus container(s)
CONTAINER_IDS=$(docker ps --filter "status=running" -q | xargs -r docker inspect --format '{{.Config.Image}} {{.Id}}' | grep "dataprep-visualdata-milvus" | awk '{print $2}')

# Exit if no matching container was found
if [ -z "$CONTAINER_IDS" ]; then
  echo "No dataprep-visualdata-milvus container found" >&2
  exit 1
fi

CONTAINER_IDS=($CONTAINER_IDS)

# Download and prepare the DAVIS demo dataset inside the first matching container
docker exec -it "${CONTAINER_IDS[0]}" bash -c "python example/example_utils.py -d DAVIS"

Run the script, then check your host data directory ($HOME/data) to confirm the DAVIS dataset is there:

bash prepare_demo_dataset.sh

To save time, only a subset of the dataset is processed. It is stored in $HOME/data/DAVIS/subset; use this path in the next step.

This script works only while the dataprep-visualdata-milvus service is running.
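Before moving on to the UI, a quick check that the subset actually landed on the host can save a debugging round trip. A sketch, assuming the path from Step 2 (adjust if your data directory differs):

```shell
# Verify the processed demo subset exists on the host before using it in the UI.
SUBSET="$HOME/data/DAVIS/subset"
if [ -d "$SUBSET" ]; then
  echo "Found $(find "$SUBSET" -type f | wc -l) files in $SUBSET"
else
  echo "Subset not found at $SUBSET - re-run prepare_demo_dataset.sh" >&2
fi
```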

Use it on Web UI#

Go to http://{host_ip}:17580 with a browser. Put the exact path to the demo dataset subset (usually /home/user/data/DAVIS/subset; it may vary with your local username) into the file directory on host field. Click UpdataDB and wait for the upload to complete.

Try searching with the query text tractor and check whether the results are relevant.

Expected valid inputs are “car-race”, “deer”, “guitar-violin”, “gym”, “helicopter”, “carousel”, “monkeys-trees”, “golf”, “rollercoaster”, “horsejump-stick”, “planes-crossing”, “tractor”

Try ticking a search result, then ask a question about the selected media in the chat box on the left.

Note: for each chat request, you may select a single image, multiple images, or a single video. Multiple videos, or a mix of images and videos, are not supported yet.

Performance#

You can check the end-to-end response time for each round of question-and-answering in the chat history.

Summary#

In this get started guide, you learned how to:

  • Build the microservice images

  • Deploy the application with the microservices

  • Try the application with a demo dataset

Learn More#