How to Build from Source#
This section shows how to build the Document Summarization Sample Application from source.
Note:
These build instructions apply only to Ubuntu systems. Building from source is not supported for the sample application on Edge Microvisor Toolkit (EMT); use the prebuilt images on EMT instead.
Prerequisites#
Before you begin, ensure that you have the following prerequisites:
Docker installed on your system: Installation Guide.
Steps to Build from Source#
Clone the Repository:
Clone the Document Summarization Sample Application repository:
git clone https://github.com/open-edge-platform/edge-ai-libraries.git edge-ai-libraries
Note: Adjust the repository URL if you are working from a fork.
Navigate to the Directory:
Go to the directory where the Dockerfile is located:
cd edge-ai-libraries/sample-applications/document-summarization
Set Up Environment Variables:
Set up the following environment variables:
# OVMS Configuration
export VOLUME_OVMS=<model-export-path-for-OVMS>  # For example: export VOLUME_OVMS="$PWD"
export LLM_MODEL="microsoft/Phi-3.5-mini-instruct"

# Docker Image Registry Configuration
export REGISTRY="intel/"
export TAG=1.0.1
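As a quick sanity check before building, you can confirm that the required variables are set. This is a minimal sketch; the variable names come from the step above, and the example values are assumptions for illustration only:

```shell
# Assumed example values -- replace with your own configuration.
export VOLUME_OVMS="$PWD"
export LLM_MODEL="microsoft/Phi-3.5-mini-instruct"
export REGISTRY="intel/"
export TAG=1.0.1

# Fail fast if any required variable is empty or unset.
for var in VOLUME_OVMS LLM_MODEL REGISTRY TAG; do
  if [ -z "$(eval echo "\$$var")" ]; then
    echo "Missing required variable: $var" >&2
    exit 1
  fi
done
echo "All required variables are set"
```

Running this before `docker compose build` catches a forgotten export early instead of failing partway through the build.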
To run a gated model (for example, the Llama models), you must authenticate with your Hugging Face token. Request access to the specific model on its Hugging Face model page, then generate a token at https://huggingface.co/settings/tokens.
# Log in using huggingface-cli
pip install huggingface-hub
huggingface-cli login  # paste your Hugging Face token when prompted
Note: OpenTelemetry and OpenLit configurations are optional. Set them only if an OTLP endpoint is available.
export OTLP_ENDPOINT=<OTLP-endpoint>
export no_proxy=${no_proxy},$OTLP_ENDPOINT
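For illustration, the proxy exclusion above works like this; the endpoint value here is a placeholder, not a real collector address:

```shell
# Placeholder endpoint -- substitute your actual OTLP collector host:port.
export OTLP_ENDPOINT="otel-collector.local:4318"

# Append the endpoint to no_proxy so telemetry traffic bypasses any proxy.
export no_proxy="${no_proxy},${OTLP_ENDPOINT}"
echo "$no_proxy"
```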
Run the following script to set up the rest of the environment:
source ./setup.sh
Build the Docker Image:
Build the Docker image for the Document Summarization Sample Application:
docker compose build
Run the Docker Container:
Run the Docker container using the built image:
docker compose up
This will start:
The OVMS service for model serving (gRPC: port 9300, REST: port 8300)
The FastAPI backend service (port 8090)
The Gradio UI service (port 9998)
The nginx service (port 8101)
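The ports above can be collected into one place for quick reference. A small sketch, where `HOST_IP` is an assumption (not set by `setup.sh`) that defaults to localhost:

```shell
# HOST_IP is assumed; default to localhost if unset.
HOST_IP="${HOST_IP:-localhost}"

# Service endpoints started by docker compose up (ports from the list above).
ENDPOINTS="OVMS gRPC:   ${HOST_IP}:9300
OVMS REST:   http://${HOST_IP}:8300
FastAPI:     http://${HOST_IP}:8090
Gradio UI:   http://${HOST_IP}:9998
Dashboard:   http://${HOST_IP}:8101"
printf '%s\n' "$ENDPOINTS"
```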
Access the Application:
Open a browser and go to
http://<host-ip>:8101
to access the application dashboard.
Verification#
Ensure that the application is running by checking the Docker container status:
docker ps
Access the application dashboard and verify that it is functioning as expected.
Troubleshooting#
If you encounter any issues during the build or run process, check the Docker logs for errors:
docker logs <container-id>
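When the log output is long, it can help to save it to a file and scan for common failure markers. A sketch; the file name and the sample log lines below are illustrative only, not real application output:

```shell
# In practice you would capture the container's output, e.g.:
#   docker logs <container-id> > app.log 2>&1
# For illustration, create a small sample log file:
printf 'INFO  startup complete\nERROR model not found\n' > app.log

# Scan for common failure markers.
if grep -E 'ERROR|Exception|Traceback' app.log; then
  echo "Potential errors found; inspect app.log"
else
  echo "No obvious errors in app.log"
fi
```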