Get Started#
The ChatQ&A Sample Application is a modular Retrieval Augmented Generation (RAG) pipeline designed to help developers create intelligent chatbots that can answer questions based on enterprise data. This guide will help you set up, run, and modify the ChatQ&A Sample Application on Intel Edge AI systems.
By following this guide, you will learn how to:
Set up the sample application: Use Docker Compose to quickly deploy the application in your environment.
Run the application: Execute the application to see real-time question answering based on your data.
Modify application parameters: Customize settings like inference models and deployment configurations to adapt the application to your specific requirements.
Prerequisites#
Verify that your system meets the minimum requirements.
Install Docker: Installation Guide.
Install Docker Compose: Installation Guide.
Install Python 3.11.
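To confirm the tooling is in place, you can check the installed versions from a terminal (standard version commands; compare against the requirements listed above):
docker --version
docker compose version
python3 --version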
Supported Models#
All embedding, reranker, and LLM models supported by the chosen model server can be used with this sample application. Models can be downloaded from popular model hubs such as Hugging Face; refer to the respective model hub documentation for details on how to access and download them.
The sample application has been validated with a small set of models to confirm functionality. The lists below are illustrative; you are not limited to these models.
Embedding Models validated for each model server#
| Model Server | Models Validated |
|---|---|
LLM Models validated for each model server#
| Model Server | Models Validated |
|---|---|
Note: Limited validation was done on the DeepSeek model.
Reranker Models validated#
| Model Server | Models Validated |
|---|---|
Getting access to models#
To run a gated model (for example, the Llama models), you must pass your Hugging Face token and request access to the specific model on its respective model page on Hugging Face.
Visit https://huggingface.co/settings/tokens to get your token.
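Once access has been granted, you can verify that your token can read the model's metadata (a minimal sketch; the model id below is only an example of a gated model):
pip install huggingface-hub
python3 -c "from huggingface_hub import model_info; print(model_info('meta-llama/Meta-Llama-3-8B-Instruct', token='<your-huggingface-token>').id)"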
Running the application using Docker Compose#
Clone the Repository: Clone the repository:
git clone https://github.com/open-edge-platform/edge-ai-libraries.git edge-ai-libraries
Note: Adjust the repository link appropriately if you are working from a forked repository.
Navigate to the Directory: Go to the directory where the Docker Compose file is located:
cd edge-ai-libraries/sample-applications/chat-question-and-answer
Set Up Environment Variables: Set up the environment variables based on the inference method you plan to use:
Common configuration
export HUGGINGFACEHUB_API_TOKEN=<your-huggingface-token>
export LLM_MODEL=Intel/neural-chat-7b-v3-3
export EMBEDDING_MODEL_NAME=BAAI/bge-small-en-v1.5
export RERANKER_MODEL=BAAI/bge-reranker-base
export OTLP_ENDPOINT_TRACE=<otlp-endpoint-trace> # Optional. Set only if an OTLP endpoint is available; otherwise ignore
export OTLP_ENDPOINT=<otlp-endpoint> # Optional. Set only if an OTLP endpoint is available; otherwise ignore
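To confirm the variables are set in your current shell, you can list them (a simple check using the variable names exported above):
env | grep -E 'HUGGINGFACEHUB_API_TOKEN|LLM_MODEL|EMBEDDING_MODEL_NAME|RERANKER_MODEL'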
Login using your Hugging Face token
# Login using huggingface-cli
pip install huggingface-hub
huggingface-cli login # pass hugging face token
Environment variables for OVMS as inference
# Install required Python packages for model preparation
export PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
pip3 install optimum-intel@git+https://github.com/huggingface/optimum-intel.git openvino-tokenizers[transformers]==2024.4.* openvino==2024.4.* nncf==2.14.0 sentence_transformers==3.1.1 openai "transformers<4.45"
Run the script below to set up the rest of the environment, depending on the chosen model server and embedding backend (a concrete example follows the options).
export REGISTRY="intel/"
export TAG=1.1.2
source setup.sh llm=<model-server> embed=<embedding>
# Below are the options
# model-server: VLLM, OVMS, TGI
# embedding: OVMS, TEI
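For example, to use OVMS for both the LLM and the embedding model (one combination of the options listed above):
source setup.sh llm=OVMS embed=OVMS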
Start the Application: Start the application using Docker Compose:
docker compose up
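Optionally, add the -d flag to run the containers in the background (a standard Docker Compose option):
docker compose up -d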
Verify the Application: Check that the application is running:
docker ps
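To list only the container names and their status, the output can be formatted with standard docker ps options (the exact container names depend on the chosen model server):
docker ps --format 'table {{.Names}}\t{{.Status}}'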
Access the Application: Open a browser and go to http://<host-ip>:5173 to access the application dashboard. The dashboard allows the user to:
Create and manage context by adding documents (pdf, docx, etc.) and web links. Note: There are restrictions on the maximum document size allowed.
Start a Q&A session with the created context.
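To quickly confirm from a terminal that the dashboard is reachable, you can probe the port (a simple connectivity check; 5173 is the dashboard port noted above):
curl -s -o /dev/null -w "%{http_code}\n" http://<host-ip>:5173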
Running in Kubernetes#
Refer to Deploy with Helm for the details. Ensure the prerequisites mentioned on this page are addressed before proceeding to deploy with Helm.
Running Tests#
Ensure you have the necessary environment variables set up as mentioned in the setup section.
Run the tests using pytest:
cd sample-applications/chat-question-and-answer/tests/unit_tests/
poetry run pytest
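To run a subset of the tests, standard pytest selection options can be used (the expression below is a hypothetical placeholder):
poetry run pytest -k "<test-name-pattern>" -v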
Advanced Setup Options#
For alternative ways to set up the sample application, see: