# What's New in Open Edge Platform 2025.2 (Dec. 11, 2025)

As the third official release of the platform, 2025.2 focuses on expanding and solidifying the range of applications it offers. It enables full validation and support for Intel® Core™ Ultra Series 2 processors, utilizing NPU, iGPU, and CPU to accelerate vision AI model training and fine-tuning. It broadens camera support for multi-camera media streams at higher frame rates, and adds new sample applications that use transformer-based AI for speech recognition and real-time scene intelligence.

Both the Robotics AI and Education AI suites are now fully available. The former gives you a range of design ideas and solutions for creating various AI-driven machines, while the latter targets the Smart Classroom use case with its first sample application, improving both the teaching and learning experience.

Intel is also announcing the first official release of the OS Image Composer Framework. It provides a powerful, general-purpose toolchain for composing operating system images from pre-built artifacts of any Linux distribution that supports Debian or RPM packages.

## Metro AI Suite

The Metro AI Suite accelerates development of solutions for Edge AI video, safety and security, smart city, and transportation use cases.

**Effortless Development Experience**

[SDK Manager](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-suites/metro-sdk-manager/index.html) eases the process of AI setup. What used to take hours of dependency management now happens with simple point-and-click selection, so you can start building applications in minutes.

**Sample Applications that Scale**

*Smart Network Video Recorder (NVR)* combines the power of continuous streaming with intelligent spatial awareness through Intel® SceneScape integration. You get search and summarization that understands not just what happened, but where it happened — good for security operations that need context-aware monitoring across multiple locations.
*Visual Search and Q&A* combines a multi-modal search engine with a visual Q&A assistant, letting you search images and videos using either text descriptions or other images as queries. Once you find relevant content, you can ask questions about what you see, using the search results as context for more accurate and relevant answers. The system supports visual Q&A through text prompts, uploaded images, or any images and videos from your search results, creating a seamless workflow from discovery to understanding.

**Learn by Building Real Solutions**

Hands-on tutorials guide you through high-impact use cases:

- *AI Cybersecurity in Smart Intersection* ([tutorial](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-suites/smart-intersection/application-security-enablement.html)), demonstrating the built-in security features of Panther Lake: Secure Boot, Full Disk Encryption, Total Memory Encryption, and Trusted Compute. The guide also assesses their performance impact.
- *Crowd Analytics Tutorial*, on reusing the Metro Vision AI App Recipe components to extend application development to monitoring queue lengths and analyzing hotspots, two common Crowd Analytics use cases.
- *Agentic AI Transit Route System* tutorial, on gathering multi-modal transit data to generate near real-time route guidance.

Open Edge Platform enables you to not only build solutions from scratch but also [migrate existing systems to Intel® hardware](https://docs.openedgeplatform.intel.com/2025.2/OEP-articles/nvidia-migration.html). Comprehensive documentation and instructions are now available covering model conversion, as well as migration from the TAO, DeepStream, and Triton tools to the OpenVINO™ toolkit.
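The text-or-image query flow in Visual Search and Q&A boils down to nearest-neighbor search in a shared embedding space. The sketch below illustrates that idea with random vectors standing in for real CLIP-style embeddings; the `search` helper and its shapes are illustrative assumptions, not the application's API.

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize embeddings so a dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def search(query_emb: np.ndarray, gallery_embs: np.ndarray, top_k: int = 3):
    """Return indices and scores of the top_k gallery items closest to the query."""
    sims = normalize(gallery_embs) @ normalize(query_emb)
    order = np.argsort(sims)[::-1][:top_k]
    return order, sims[order]

# In a real deployment, text and image queries are embedded by the same
# multi-modal model, so both land in the same vector space as the gallery.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 512))              # stand-in for image embeddings
query = gallery[42] + 0.01 * rng.normal(size=512)  # near-duplicate of item 42
idx, scores = search(query, gallery)
print(idx[0])  # item 42 ranks first
```

The same top-k list can then be handed to the Q&A assistant as visual context, which is how search results feed follow-up questions.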
:::::{dropdown} See the release notes of specific Metro AI components:
::::{grid} 2 2 2 2
:::{grid-item}
* [Smart Intersection](https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/metro-vision-ai-app-recipe/smart-intersection/docs/user-guide/release-notes.md)
* [Smart Parking](https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/metro-vision-ai-app-recipe/smart-parking/docs/user-guide/release-notes.md)
* [Loitering Detection](https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/metro-vision-ai-app-recipe/loitering-detection/docs/user-guide/release-notes.md)
* [Image-based Video Search](https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/image-based-video-search/docs/user-guide/release-notes.md)
* [Visual Search and Q&A](https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/visual-search-question-and-answering/docs/user-guide/release-notes.md)
* [Sensor Fusion for Traffic Management](https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/sensor-fusion-for-traffic-management/docs/user-guide/release-notes.md)
* [Smart NVR](https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/smart-nvr/docs/user-guide/release-notes.md)
* [Video Processing Platform SDK](https://github.com/open-edge-platform/edge-ai-suites/blob/main/metro-ai-suite/video-processing-for-nvr/docs/user-guide/release-notes.md)
:::
:::{grid-item}
* [OpenVINO™](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html)
* [OpenVINO™ Model Server](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html)
* [DLStreamer](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/RELEASE_NOTES.md)
* [DLStreamer Pipeline Server](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/dlstreamer-pipeline-server/docs/user-guide/release_notes/Overview.md)
* [Geti™](https://docs.geti.intel.com/docs/user-guide/release-notes/)
* [Visual Pipeline And Platform Evaluation Tool](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/release-notes.md)
:::
::::
:::::

## Manufacturing AI Suite

A range of general improvements has been made to the suite, including new components and reorganization of existing ones. The Time Series category has received two new sample applications: Weld Anomaly Detection, featuring dataset ingestion, CatBoost machine learning model integration, and a dedicated Grafana dashboard; and Multimodal (Vision + Time Series) Weld Defect Detection, which uses dlstreamer-pipeline-server for vision inference and time-series-analytics for time-series inference, with fusion analytics for final anomaly decisions. Security and reliability have been strengthened through Trivy vulnerability fixes and by securing the external interface with nginx.
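The "fusion analytics for final anomaly decisions" step can be pictured as late fusion of the two per-modality scores. The snippet below is a minimal sketch of that concept; the weight, threshold, and function name are illustrative assumptions, not the suite's actual logic.

```python
def fuse_anomaly(vision_score: float, ts_score: float,
                 w_vision: float = 0.6, threshold: float = 0.5) -> bool:
    """Late fusion: weighted average of per-modality anomaly scores.

    Scores are assumed to be normalized to [0, 1]; the weight and
    threshold here are illustrative, not the suite's defaults.
    """
    fused = w_vision * vision_score + (1.0 - w_vision) * ts_score
    return fused >= threshold

# A weak visual cue plus a strong sensor cue can still flag a weld defect:
print(fuse_anomaly(0.4, 0.8))  # fused = 0.6*0.4 + 0.4*0.8 = 0.56 -> True
```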
:::::{dropdown} See the release notes of specific Manufacturing AI components:
::::{grid} 2 2 2 2
:::{grid-item}
* [Human-Machine Interface (HMI) Augmented Worker](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/release-notes.md)
* [Industrial Edge Insights - Time Series](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-time-series/docs/user-guide/release_notes.md)
* [Industrial Edge Insights - Multimodal](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/release_notes/dec-2025.md)
* [Pallet Defect Detection](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/release_notes/Overview.md)
* [PCB Anomaly Detection](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/release_notes/Overview.md)
* [Weld Porosity](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/release_notes/Overview.md)
* [Worker Safety Gear Detection](https://github.com/open-edge-platform/edge-ai-suites/blob/release-2025.2.0/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/release_notes/Overview.md)
:::
:::{grid-item}
* [OpenVINO™](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html)
* [OpenVINO™ Model Server](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html)
* [DLStreamer](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/RELEASE_NOTES.md)
* [DLStreamer Pipeline Server](../edge-ai-libraries/dlstreamer-pipeline-server/release_notes/Overview.html)
:::
::::
:::::

## Education AI Suite

The Education AI Suite is a collection of education-focused AI applications, libraries, and benchmarking tools to help developers build intelligent classroom solutions faster. It provides audio and video pipelines accelerated with the OpenVINO™ toolkit, enabling high-performance deployment on Intel® CPUs, integrated GPUs, and NPUs. The suite organizes workflows tailored for the education sector, with initial support for the Smart Classroom application — an extensible framework for processing, analyzing, and summarizing classroom sessions using advanced multimodal AI.

The main features are:

* Audio Intelligence
  * Audio transcription with Automatic Speech Recognition (ASR) models (e.g., Whisper, Paraformer)
  * Summarization using powerful Large Language Models (LLMs) (e.g., Qwen, LLaMA)
  * Plug-and-play architecture for integrating new ASR and LLM models
  * API-first design ready for frontend integration
* Video Intelligence
  * Front Camera Pipeline: student pose detection of sitting, standing, and hand raise; Re-Identification (ReID) to track students consistently across camera views.
  * Rear Camera Pipeline: student pose detection of sitting, standing, and hand raise.
  * Board Camera Pipeline: board content classification.

See the [Release Notes for Smart Classroom](https://github.com/open-edge-platform/edge-ai-suites/blob/main/education-ai-suite/smart-classroom/docs/user-guide/release-notes.md)

## Robotics AI Suite

The Robotics AI Suite has been updated to provide additional AI pipelines centered around [Robot Motion Control Task](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/edge-control-libraries/fieldbus/robotmctask/README.md) and [Model Predictive Control (MPC) Demo](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-suites/robotics-ai-suite/embodied/sample_pipelines/mpc_demo.html).
* Robot Motion Control Task is a comprehensive C++ library designed for robot motion control task development. It provides APIs that enable robot developers to build sophisticated robot applications with integrated AI inference engines and EtherCAT protocol support.
* Model predictive control (MPC) is an advanced control technique that uses a dynamic model — typically a linear model identified from data — to optimize control actions while respecting system constraints. Its key strength is "thinking ahead": MPC optimizes the current control move while considering future behavior and anticipated events. These capabilities make it valuable for robotics, helping to address issues like mismatched perception–action timing, jerky trajectories, and collision risks.
* [Support for Gigabit Multimedia Serial Link (GMSL)](https://eci.intel.com/docs/3.3/development/tutorials/enable-gmsl.html) on Intel® Core™ Ultra Series 2 processors, formerly codenamed Arrow Lake. GMSL cameras use GMSL and GMSL2 technology to carry high-speed video, bidirectional control data, and power over a single coaxial cable. Validated on [SEAVO® Embedded Computer HB03](https://www.seavo.com/en/products/products-info_itemid_693.html), [Advantech AFE-R360 series robot solutions](https://www.advantech.com/en-eu/products/8d5aadd0-1ef5-4704-a9a1-504718fb3b41/afe-r360/mod_1e4a1980-9a31-46e6-87b6-affbd7a2cb44), and [Advantech ASR-A502 series robot solutions](https://www.advantech.com/en-eu/products/8d5aadd0-1ef5-4704-a9a1-504718fb3b41/asr-a502/mod_ccca0f36-a50b-40c7-87b7-10fb96448605) with the [Advantech GMSL Input Module Card](https://www.advantech.com/en-eu/products/8d5aadd0-1ef5-4704-a9a1-504718fb3b41/mioe-gmsl/mod_fc1fc070-30f8-40c1-881f-56c967e26924).
* Adds documentation on how to optimize the Robotics Vision-Language-Action (VLA) model [Pi0.5 with OpenVINO™ toolkit](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-suites/robotics-ai-suite/embodied/openvino_optimization.html), compress model weights to INT8 (8-bit integer format), and benchmark using the OpenVINO™ `benchmark_app` tool.

## Edge AI Libraries

Edge AI Libraries provide optimized components for building and deploying real-time AI solutions on edge devices. This comprehensive collection includes flagship components like Deep Learning Streamer for video analytics pipelines, Intel SceneScape for spatial intelligence, and ViPPET for performance evaluation, alongside VLM (Vision-Language Model)-based applications for advanced AI processing. New additions include the Multilevel Video Understanding microservice for intelligent video summarization and Video Chunking Utils for efficient video segmentation, delivering the building blocks you need for sophisticated edge AI deployments.

### Deep Learning Streamer

These enhancements let you support more cameras, process higher-resolution video, and achieve more accurate analytics - all while reducing hardware requirements and operational costs.

**DL Streamer Pipeline Optimizer** automatically finds the optimal configuration settings for your cameras, testing thousands of parameter combinations to maximize performance and prevent costly hardware upgrades.

**Smart Motion Detection** intelligently monitors video streams and only triggers AI analysis when movement occurs, reducing processing load while maintaining full security coverage.

**Advanced Object Tracking with DeepSORT** creates unique "fingerprints" for tracked objects using AI-powered re-identification, maintaining accurate identity tracking even when objects are temporarily obscured or move between camera views.
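The gating idea behind Smart Motion Detection — run the expensive detector only when consecutive frames actually differ — can be sketched in a few lines. This is a conceptual illustration with plain frame differencing, not DL Streamer's actual element or thresholds.

```python
import numpy as np

def motion_score(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute per-pixel difference between consecutive grayscale frames."""
    return float(np.mean(np.abs(curr.astype(np.int16) - prev.astype(np.int16))))

def gate_frames(frames, threshold: float = 5.0):
    """Yield only frames whose motion score exceeds the threshold,
    so the (expensive) AI detector runs on a fraction of the stream."""
    prev = frames[0]
    for curr in frames[1:]:
        if motion_score(prev, curr) > threshold:
            yield curr
        prev = curr

static = np.zeros((4, 4), dtype=np.uint8)   # stand-in for an empty scene
moving = np.full((4, 4), 50, dtype=np.uint8)  # stand-in for a scene with motion
stream = [static, static, moving, moving, static]
print(sum(1 for _ in gate_frames(stream)))  # only the 2 transitions trigger analysis
```

On a static camera view this kind of gate skips most frames, which is where the processing-load savings come from.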
**GPU-Accelerated Video Watermarking** eliminates CPU bottlenecks by moving watermark processing to GPU hardware, enabling high-density camera deployments without performance penalties.

See the [Release Notes for DL Streamer](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/RELEASE_NOTES.md)

### Visual Pipeline and Performance Evaluation Tool (ViPPET)

ViPPET eases hardware platform evaluation by running video analytics pipelines that approximate your workload, helping you understand system-level performance. The new GUI provides graphical interpretation of `gst-launch` streams and graphical modification of parameters, with import and export capabilities for pipelines. Enhanced extensibility makes it easier to import, configure, and execute benchmarking pipelines. ViPPET now includes POSE model support and DL Streamer optimizer integration for comprehensive pipeline evaluation.

See the [Release Notes for ViPPET](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/release-notes.md)

### Intel® SceneScape

Intel SceneScape opens entirely new application possibilities with dynamic camera configuration, 3D mapping, and intelligent object clustering. Track 100 objects at 15 fps on an Intel® Core™ Ultra processor, analyze crowd patterns, and connect to vehicle systems through V2X (Vehicle-to-Everything) integration - bringing spatial intelligence to traffic management and smart city applications, with greater observability through OpenTelemetry.

See the [Release Notes for Intel SceneScape](https://github.com/open-edge-platform/scenescape/releases)

### OpenVINO™

OpenVINO™ GenAI introduces an encrypted blob format for secure model deployment, and NPU deployment is simplified with batch support and compatibility with older driver versions.
In addition, this release brings gold support for Intel® Arc™ Pro B-Series (B50 and B60) and Intel® Core™ Ultra Processor Series 3.

See the [Release Notes for OpenVINO™](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html)

### Geti™ Software

Geti™ software enables users to build computer vision models in a fraction of the time and with minimal data. The latest Geti software release, v2.13, supports **computer vision model fine-tuning on Intel® Core™ Ultra (Series 2) processors with integrated GPU**. Additional feature highlights include DEIM-DETR as the latest state-of-the-art model for object detection, configurable data augmentation across all tasks, and support for multiple workspaces to enable workspace-level role-based access control for users.

See the [Release Notes for Geti™ software](https://github.com/open-edge-platform/geti/releases)

### Video Search and Summarization

Video Search and Summarization combines search and summarization into one powerful application that helps you quickly find and understand video content using AI. Upload videos with tags for easy organization, then search across your entire video library to find specific moments or generate comprehensive summaries. The system actively monitors for your desired content through Search Alerts, while TopK results help you quickly filter through findings in the redesigned interface.

AI models (CLIP, CN-CLIP, MobileCLIP, SigLIP2, and BLIP2) power the search capabilities, with flexible support for both PyTorch and OpenVINO runtimes to match your deployment needs. The system processes videos frame by frame for more precise search results, uses batched processing for faster performance, and remembers your content across system restarts. Results are automatically grouped by tags for better organization, and CLI tools provide command-line access for automated workflows.
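The TopK filtering and tag grouping described above amount to keeping the best few hits per tag. The sketch below shows the concept with plain tuples; the field names and `top_k_by_tag` helper are hypothetical, not the Video Search and Summarization schema or API.

```python
from collections import defaultdict

def top_k_by_tag(results, k: int = 2):
    """Group search hits by tag, keeping only the k highest-scoring per tag.

    `results` is a list of (tag, clip_id, score) tuples; the field names
    are illustrative only.
    """
    grouped = defaultdict(list)
    for tag, clip_id, score in results:
        grouped[tag].append((clip_id, score))
    return {
        tag: sorted(hits, key=lambda h: h[1], reverse=True)[:k]
        for tag, hits in grouped.items()
    }

hits = [("lobby", "c1", 0.91), ("lobby", "c2", 0.67), ("lobby", "c3", 0.88),
        ("dock", "c4", 0.72)]
print(top_k_by_tag(hits)["lobby"])  # [('c1', 0.91), ('c3', 0.88)]
```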
Whether you are searching security footage for specific incidents or analyzing hours of content for key insights, this application makes complex video analysis accessible and efficient.

See the [Release Notes for Video Search and Summarization](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/sample-applications/video-search-and-summarization/docs/user-guide/release-notes.md)

:::::{dropdown} See the release notes of specific Tools, Libraries, and Microservices:
::::{grid} 2 2 2 2
:::{grid-item}
* [DL Streamer](https://github.com/open-edge-platform/edge-ai-libraries/blob/2025.2/libraries/dl-streamer/RELEASE_NOTES.md)
* [ViPPET](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/release-notes.md)
* [SceneScape](https://github.com/open-edge-platform/scenescape/releases)
* [Geti™](https://github.com/open-edge-platform/geti/releases)
* [Anomalib](https://github.com/open-edge-platform/anomalib/releases)
* [Datumaro](https://github.com/open-edge-platform/datumaro/releases)
* [Geti™ SDK](https://github.com/open-edge-platform/geti-sdk/releases)
* [Model API](https://github.com/open-edge-platform/model_api/releases)
* [Training Extensions](https://github.com/open-edge-platform/training_extensions/releases)
* [Chat Question And Answer Core](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/sample-applications/chat-question-and-answer-core/docs/user-guide/release-notes.md)
* [Chat Question And Answer](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/sample-applications/chat-question-and-answer/docs/user-guide/release-notes.md)
* [Document Summarization](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/sample-applications/document-summarization/docs/user-guide/release-notes.md)
* [Video Search and Summarization](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/sample-applications/video-search-and-summarization/docs/user-guide/release-notes.md)
:::
:::{grid-item}
* [Audio Analyzer](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/audio-analyzer/docs/user-guide/release-notes.md)
* [DL Streamer Pipeline Server](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/dlstreamer-pipeline-server/RELEASE_NOTES.md)
* [Model Download](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/model-download/docs/user-guide/release-notes.md)
* [Model Registry](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/microservices/model-registry/docs/user-guide/release-notes.md)
* [Multilevel Video Understanding](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/microservices/multilevel-video-understanding/docs/user-guide/release-notes.md)
* [Multimodal Embedding Serving](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/microservices/multimodal-embedding-serving/docs/user-guide/release-notes.md)
* [Time Series Analytics](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/time-series-analytics/docs/user-guide/release_notes/Overview.md)
* [Vector Retriever](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/vector-retriever/milvus/docs/user-guide/release-notes.md)
* [Visual Data Preparation For Retrieval](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/visual-data-preparation-for-retrieval/milvus/docs/user-guide/release-notes.md)
* [VLM OpenVINO™ Serving](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/microservices/vlm-openvino-serving/docs/user-guide/release-notes.md)
:::
::::
:::::

## Edge Microvisor Toolkit

Edge Microvisor Toolkit is an evolving platform designed to optimize Edge
and AI compute workloads. Serving as a reference container-host operating system, Edge Microvisor Toolkit ensures seamless support for the latest Intel® platforms while offering flexibility through multiple editions tailored to developer and production needs:

* **Security-hardened editions** for environments requiring enhanced protection and compliance through Edge Manageability Framework, and the standalone edition (EMT-S) optimized for lightweight deployments and simplified integration, also supporting a real-time variant.
* **A developer-focused variant** providing tools and capabilities for rapid prototyping and application development.
* **A small minimal build** of Edge Microvisor Toolkit primarily intended for provisioning scenarios and used in Edge Manageability Framework to onboard and provision edge nodes at scale.

By continuously improving performance, security, and platform compatibility, Edge Microvisor Toolkit empowers developers and solution architects to build, deploy, and scale edge workloads efficiently across diverse Intel-based infrastructures. It has been validated across a range of Intel hardware platforms: Intel® Core™ Ultra processors (Series 1), Intel® Core™ Ultra processors (Series 2), Intel® Core™ Processor N-series, Intel Atom® x7000RE Series processors, and Intel® Core™ processors (Series 2).

See the [Release Notes for Edge Microvisor Toolkit](https://github.com/open-edge-platform/edge-microvisor-toolkit/releases)

## Edge Manageability Framework
Edge Manageability Framework enables you to securely onboard and provision remote edge devices to a central management plane, and orchestrate Kubernetes clusters and applications across the distributed edge, at scale. With this release, it brings many updates to its underlying technologies, targeting operating system support, security, and stability, including the following:

* Support for provisioning of the Ubuntu 24.04 OS.
* Out-of-band management support for Intel Core platforms featuring Intel vPro® Essentials Platform, in addition to existing support for Intel vPro® Enterprise Platform.
* Support for adding custom OS Profiles to allow user-provided OS images.
* The first release that allows layers of the stack to be selectively omitted to reduce resource requirements.

For a detailed listing of all the changes and KPIs, see the [Release Notes for Edge Manageability Framework 2025.2](https://docs.openedgeplatform.intel.com/edge-manage-docs/2025.2/release_notes/2025.2/version_2025.2.html)

## OS Image Composer Framework

The first official release of the OS Image Composer Framework provides a powerful, general-purpose toolchain for composing OS images from pre-built artifacts of any Linux distribution that supports Debian or RPM packages. With OS Image Composer, you can generate Edge-optimized OS images using our pre-curated YAML templates or customize them to create your own variants tailored to your edge services and applications. The initial release requires the host system to run Ubuntu 22.04 or 24.04, but future versions will support image composition from any Linux host OS.

Following a declarative methodology, OS Image Composer allows you to define image templates that focus on what matters most to you. You have full control over package selection, partition and filesystem configuration, secure boot settings, as well as image finalization and configuration options.
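Declarative template customization of this kind is typically a recursive merge: a curated base template plus a small user template that overrides only selected keys. The sketch below illustrates that pattern generically; the field names are purely hypothetical and are not the OS Image Composer template schema.

```python
def merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`: nested dicts are merged,
    everything else in `override` replaces the base value."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# Field names below are illustrative, not the OS Image Composer schema.
base = {"packages": ["openssh-server"], "disk": {"size_gb": 16, "fs": "ext4"}}
user = {"disk": {"size_gb": 32}}  # override only what you need
print(merge(base, user))
# {'packages': ['openssh-server'], 'disk': {'size_gb': 32, 'fs': 'ext4'}}
```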
An automatic image merging function simplifies customization by letting you override only the settings you need.

See the [Release Notes for OS Image Composer Framework](https://github.com/open-edge-platform/os-image-composer/releases/tag/2025.2_Release) and the [announcement in Discussions](https://github.com/open-edge-platform/os-image-composer/discussions/332)

:::{toctree}
:hidden:

Release notes 2025.1 <./release-notes/oep-release-notes-2025.1.md>
Release Policy <./release-notes/oep-release-policy.md>
:::