|openvino| Benchmarking Tool
==============================

The benchmark application allows users to estimate deep learning inference
performance on supported |intel| devices. It uses asynchronous mode to
estimate inference engine performance and latency. Refer to the tutorial
that illustrates how to run the benchmark application on an |core| processor
with |xe| or |intel| UHD Graphics.

* `Benchmark C++ Tool `__
* `Benchmark Python Tool `__

.. Note:: Performance results are based on testing as of dates shown in
   configurations and may not reflect all publicly available updates. No
   product or component can be absolutely secure. Performance varies by use,
   configuration and other factors. Learn more at the
   `Intel® Performance Index `__.

.. Note:: To use the benchmarking tools, you first have to install |openvino|,
   following the instructions in the |openvino| `Get Started Sample `__.
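
The benchmark tools themselves are command-line applications, but the asynchronous
measurement idea they rely on can be illustrated with the OpenVINO Runtime Python
API. The sketch below is a minimal example, not the Benchmark Python Tool itself:
the model path (``model.xml``), device name (``CPU``), and iteration count are
placeholder assumptions, and it assumes a single-input model with a static input
shape.

.. code-block:: python

   import time

   import numpy as np
   import openvino as ov

   # Load and compile the model for the target device ("CPU" here is a placeholder;
   # other device names such as "GPU" may be available depending on the system).
   core = ov.Core()
   model = core.read_model("model.xml")      # placeholder model path
   compiled = core.compile_model(model, "CPU")

   # Prepare a random input matching the model's first (and assumed only) input.
   input_data = np.random.rand(*compiled.input(0).shape).astype(np.float32)

   # Run inference asynchronously through an AsyncInferQueue, which keeps several
   # infer requests in flight at once -- the same idea the benchmark application
   # uses to estimate throughput.
   num_iterations = 100                       # placeholder iteration count
   queue = ov.AsyncInferQueue(compiled)

   start = time.perf_counter()
   for _ in range(num_iterations):
       queue.start_async({0: input_data})
   queue.wait_all()
   elapsed = time.perf_counter() - start

   print(f"Throughput: {num_iterations / elapsed:.2f} FPS")
   print(f"Average latency: {elapsed / num_iterations * 1000:.2f} ms")

For comparable measurements across devices and models, prefer the C++ or Python
benchmark tools linked above, which also report detailed per-request latency
statistics.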