Developer Tools & Tutorials
############################

Intel provides a range of developer tools, covering topics from heterogeneous computing to deep learning optimization, to help you build and optimize your Embodied Intelligence models and workflows. See the following sections for more information:

- :doc:`Intel® OpenVINO™ <developer_tools_tutorials/openvino>` is an open-source toolkit for optimizing and deploying deep learning models.
- :doc:`Intel® Extension for PyTorch* <developer_tools_tutorials/ipex>` is a library that extends PyTorch* with the latest performance optimizations for Intel hardware (a minimal usage sketch is shown at the end of this page).
- :doc:`Intel® LLM Library for PyTorch* <developer_tools_tutorials/ipex-llm>` is an LLM optimization library that accelerates local LLM inference and fine-tuning on Intel hardware.
- :doc:`Intel® oneAPI <developer_tools_tutorials/oneapi>` is a unified programming model that enables developers to write code that runs on a variety of hardware accelerators.
- :doc:`Intel® Extension for OpenXLA* <developer_tools_tutorials/iopenxla>` is an extension for OpenXLA* that enables JAX models to run seamlessly on Intel GPUs.

You can also find guidance on distributing heterogeneous computing across different workloads in the following section:

- :doc:`Heterogeneous Computing <developer_tools_tutorials/heterogeneous_computing>`

Some of the models used in Embodied Intelligence solutions are enabled on Intel platforms; see the tutorials here:

- :doc:`Model Tutorials <developer_tools_tutorials/model_tutorials>`

.. toctree::
   :maxdepth: 1
   :hidden:

   developer_tools_tutorials/heterogeneous_computing
   developer_tools_tutorials/openvino
   developer_tools_tutorials/ipex
   developer_tools_tutorials/ipex-llm
   developer_tools_tutorials/oneapi
   developer_tools_tutorials/iopenxla
   developer_tools_tutorials/model_tutorials
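
As a quick illustration of how these tools fit into a PyTorch* workflow, the sketch below applies Intel® Extension for PyTorch* optimizations to an inference model. It is a minimal example, not part of the linked tutorials; the ResNet-50 model, input shape, and ``bfloat16`` setting are placeholder assumptions you would replace with your own workload.

.. code-block:: python

   import torch
   import torchvision.models as models
   import intel_extension_for_pytorch as ipex

   # Placeholder model and input shape; substitute your own workload.
   model = models.resnet50(weights=None)
   model.eval()

   # Apply Intel-specific operator and memory-layout optimizations.
   model = ipex.optimize(model, dtype=torch.bfloat16)

   # Run inference; autocast keeps the bfloat16 path active on CPU.
   with torch.no_grad(), torch.cpu.amp.autocast():
       output = model(torch.randn(1, 3, 224, 224))

Refer to the Intel® Extension for PyTorch* section above for complete, up-to-date usage, including training and GPU deployment.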