- ChatQ&A Overview
  - Purpose
  - Detailed Architecture Overview
    - Technical Architecture Diagram
    - Application Flow
    - Key Components and Their Roles
    - Extensibility
  - Next Steps
- System Requirements
  - Supported Platforms
  - Minimum Requirements
  - Software Requirements
  - Compatibility Notes
  - Validation
- Get Started
  - Prerequisites
  - Supported Models
    - Embedding Models validated for each model server
    - LLM Models validated for each model server
    - Reranker Models validated
  - Getting access to models
  - Running the application using Docker Compose
  - Running in Kubernetes
  - Running Tests
  - Advanced Setup Options
  - Related Links
  - Supporting Resources
- How to Build from Source
  - Prerequisites
  - Steps to Build from Source
  - Verification
  - Troubleshooting
- How to deploy with Helm
  - Prerequisites
  - Steps to deploy with Helm
    - Option 1: Install from Docker Hub
      - Step 1: Pull the Specific Chart
      - Step 2: Extract the .tgz File
      - Step 3: Configure the values.yaml File
    - Option 2: Install from Source
      - Step 1: Clone the Repository
      - Step 2: Change to the Chart Directory
      - Step 3: Configure the values*.yaml File
      - Step 4: Build Helm Dependencies
    - Common Steps after configuration
      - Step 5: Deploy the Helm Chart
      - Step 6: Verify the Deployment
      - Step 7: Retrieving the Service Endpoint (NodePort and NodeIP)
      - Step 8: Update Helm Dependencies
      - Step 9: Uninstall Helm chart
  - Verification
  - Troubleshooting
  - Related links
- Deploy with Edge Orchestrator
  - Procedure to Deploy with Edge Orchestrator
    - Prerequisites
    - Making available Deployment Package
    - Deploy the Application onto the Edge Nodes
    - Access the ChatQ&A AI-Suite
- How to Test Performance
  - Prerequisites
  - Steps to Test Performance
  - Key Performance Metrics
    - Latency
    - Throughput
  - Verification
  - Troubleshooting
- Benchmarks
  - Test Environment
  - Benchmark Results
- API Reference
- Release Notes
  - Current Release
  - Previous releases