OpenVINO Model Server#

OpenVINO Model Server enables you to use OpenVINO inference capabilities remotely, over standard network protocols such as gRPC and REST. It can be set up to host models and run inference on requests coming from multiple client systems. Built on the OpenVINO Runtime and part of the OpenVINO ecosystem, it offers the same scalability and performance, delivered remotely rather than deployed on edge devices.
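
For illustration, here is a minimal sketch of what a client request might look like, assuming an OVMS instance is already serving a model and exposing its TensorFlow Serving-compatible REST API. The model name `my_model`, the port, and the input shape are placeholders, not values from this document:

```python
# Minimal sketch of a remote inference request against OVMS, assuming a
# model named "my_model" is served with REST enabled on localhost:8000.
# Model name, port, and input shape below are illustrative placeholders.
import requests

# Batch of one input; the required shape and dtype depend on the served model.
payload = {"instances": [[0.0] * 10]}

response = requests.post(
    "http://localhost:8000/v1/models/my_model:predict",
    json=payload,
    timeout=10,
)
response.raise_for_status()

# The server returns predictions as JSON, mirroring the request structure.
print(response.json()["predictions"])
```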

To learn more about OpenVINO Model Server, see the OpenVINO documentation section on [how to use model serving](https://docs.openvino.ai/2025/model-server/ovms_what_is_openvino_model_server.html).