What’s New in Open Edge Platform 2026.0 (April 01, 2026)#

Open Edge Platform opens 2026 by introducing support for the latest Intel hardware: platform components are now functionally operational and performance-optimized for both Intel® Core™ Ultra 3 (Panther Lake) and Intel® Core™ Series 2 processors with P-cores (Bartlett Lake). Alongside this milestone, the release brings multiple additions to the Open Edge Platform portfolio:

  • A new AI suite for Health and Life Sciences has been released. It showcases how efficiently Intel-powered systems can handle a wide range of patient monitoring and care tasks.

  • The Metro AI Suite launches several new sample applications, nearly doubling its portfolio. They cover new use cases in multimodal solutions with Vision Language Models, live video analysis, agentic AI, and more.

  • New tools are now available: Geti™ Instant Learn, Anomalib Studio, and Physical AI Studio. These low-code AI studios facilitate work in vision, robotics, and anomaly detection.

  • A new microservice, Model Download, has been introduced for model acquisition and preparation at runtime. As a leaner, more focused solution, it is intended to replace Model Registry, which is now deprecated and will be archived with the next release.

To streamline the offering, the Retail AI Suite has consolidated the Automated Self-Checkout pipelines with the Loss Prevention Application. The original application is now deprecated and will be discontinued with the following release.

The Robotics AI Suite extends Autonomous Mobile Robot support on both the software side, covering the latest Robot Operating System 2 release, Jazzy, and the hardware side, utilizing NPU and GPU accelerators.

For information on specific components, refer to the following sections:

Metro AI Suite#

This release expands the Metro AI Suite with a new generation of live video applications, including Live Video Captioning, Live Video Search, and the Live Video Alert Agent, enabling natural‑language interaction with live and historical video, real‑time AI alerts, and automated monitoring across multiple camera feeds. These applications demonstrate how Vision Language Models (VLMs), multimodal embeddings, and vector databases can be combined to extract actionable insights from video streams.
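The pattern these applications build on (embed video-derived content, index the embeddings, then answer natural-language queries with a nearest-neighbor search) can be sketched in a few lines. In this illustrative example, a plain bag-of-words vector stands in for a real VLM or multimodal embedding model, and a small in-memory numpy matrix stands in for a vector database; none of the names below come from the suite itself.

```python
import numpy as np

class FrameIndex:
    """Minimal in-memory vector index over frame captions. It stands in for
    a vector database fed by a VLM / multimodal embedding model; plain
    bag-of-words vectors substitute for learned embeddings in this sketch."""

    def __init__(self, captions):
        self.captions = captions
        # Vocabulary built from the indexed captions only.
        self.vocab = sorted({w for c in captions for w in c.lower().split()})
        self.matrix = np.stack([self._embed(c) for c in captions])

    def _embed(self, text):
        words = text.lower().split()
        v = np.array([words.count(w) for w in self.vocab], dtype=float)
        n = np.linalg.norm(v)
        return v / n if n else v

    def search(self, query, k=1):
        # Cosine similarity: the index rows and the query are unit vectors.
        sims = self.matrix @ self._embed(query)
        return [self.captions[i] for i in np.argsort(sims)[::-1][:k]]

index = FrameIndex([
    "red car runs red light",
    "pedestrian crossing at night",
    "bus stops at station",
])
print(index.search("car near traffic light"))
```

A production pipeline would swap `_embed` for model inference and `FrameIndex` for a vector database client, but the query path stays the same shape.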

Metro AI Suite 2026.0 also introduces agent‑based intelligence for smart cities through the Smart Traffic Intersection Agent and Smart Route Planning Agent. These hybrid AI applications showcase cloud‑edge collaboration, where edge agents analyze local conditions while a cloud‑based orchestrator aggregates insights and recommends optimal routes or actions in real time.

In addition, the Metro AI Suite now includes an Agentic Predictive Maintenance Pipeline for Critical Infrastructure, demonstrating automated inspection, continuous monitoring, and early detection of failures across assets such as utilities, tunnels, bridges, and other urban infrastructure. The pipeline highlights how distributed AI agents and multimodal sensing can support proactive infrastructure management.

See the release notes of specific Metro AI components:

Manufacturing AI Suite#

The Manufacturing AI Suite has been refreshed to streamline operations and extend compatibility to the latest silicon platforms. Additionally, multiple sample applications can now be deployed simultaneously on the same host using Docker and Helm.

As the Model Registry microservice has been deprecated, the Manufacturing AI Suite has migrated to Model Download, removing registry dependencies.

All the updated sample apps now also use DL Streamer Pipeline Server 2026.0.0, with Ubuntu 24 as the default image.

Safety Gear Detection, PCB Anomaly Detection, and Pallet Defect Detection models have been retrained using Geti v2.13.1 for improved accuracy and freshness, while Weld Porosity model precision has been optimized from FP16 to INT8, improving runtime efficiency.
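On the Weld Porosity change: moving a model from FP16 to INT8 is normally handled by post-training quantization tooling. As a rough illustration of what that conversion does (not the actual tooling or model used here), this is symmetric per-tensor INT8 weight quantization in plain numpy:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization: one scale factor maps the
    float weights onto the [-127, 127] integer range."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# INT8 weights take 4x less memory than FP32 (2x less than FP16); the
# price is a bounded rounding error of at most scale / 2 per weight.
max_err = float(np.abs(w - w_hat).max())
print(f"scale={scale:.5f}  max abs error={max_err:.5f}")
```

Real quantization pipelines also calibrate activations on sample data and pick per-channel scales, which is why dedicated tooling is used rather than a few lines like these.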

See the release notes of specific Manufacturing AI components:

Robotics AI Suite#

Autonomous Mobile Robot has been updated to fully support the latest generation of Robot Operating System 2, Jazzy, on the latest Intel silicon. AI workloads can now utilize hardware accelerators such as the GPU and NPU. The pick-and-place simulation has migrated to Gazebo Harmonic, and a unified tf2-based coordinate system is introduced for all robots and cubes. Robustness has been improved across the arm controllers, the AMR, and the MoveIt2 integration.

Humanoid Imitation Learning introduces a new pipeline built on π₀.₅, a Vision-Language-Action (VLA) model designed to serve as a “general-purpose AI brain” for diverse robotic hardware. The pipeline integrates advanced reasoning with precise physical control.

See the release notes of specific Robotics AI components:

Retail AI Suite#

Automated Self-Checkout pipelines are now consolidated into the Loss Prevention repository for ease of use and development. The standalone repository is in maintenance mode with deprecation planned for June 2026.

Loss Prevention now also includes Vision Language Model (VLM) workloads alongside traditional computer vision pipelines, with built-in RTSP streaming and support for heterogeneous CPU/GPU/NPU configurations.

The Order Accuracy platform gains two applications designed for QSR order validation using VLMs. Dine-In features enhanced processing of static images of dine-in food items. Take-Away identifies items from continuous RTSP streams or uploaded videos using GStreamer pipelines.

See the release notes of specific Retail AI components:

Education AI Suite#

Education AI Suite 2026.0, no longer a preview version, focuses on enabling real-time, data‑driven classroom insights powered by Intel® Core™ Ultra (Panther Lake). The Smart Classroom application now offers next‑generation audio and visual analytics, giving teachers and schools a better understanding of classroom dynamics through AI‑driven summaries and engagement metrics.

This release also introduces platform benchmarking hooks, allowing ISVs, OEMs, and education partners to validate Intel XPU (CPU + iGPU + NPU) performance across Intel Core Ultra Series 1, 2, and 3 platforms.

The Smart Classroom application now offers a series of after-class summary enhancements:

  • Speaker Diarization (via the Audio Pipeline) identifies teacher and student speakers using NPU-accelerated diarization, generates an interactive audio timeline for replay and analysis, and enables time-coded navigation within class video recordings.

  • Class Engagement Metrics – Audio measures teacher and student speech duration, tracks questions asked and answered, and tracks student-teacher interaction frequency.

  • Class Engagement Metrics – Video tracks student hand raises, posture changes (stand up/sit down), and teacher movement.

See the Release Notes for Smart Classroom.

Health and Life Sciences AI Suite#

In its initial release, the Health and Life Sciences AI Suite includes a single sample application: Multi-Modal Patient Monitoring. It showcases how a single Intel-powered edge system can simultaneously run computer vision, physiological signal processing, AI inference, and real-time medical device telemetry, all within one integrated dashboard.

It demonstrates that heterogeneous workloads, from 3D human pose estimation and heart rate extraction to AI-based ECG analysis and medical device simulation, can coexist efficiently on one platform without compromising performance or stability.

As this is the suite’s initial release, it is considered a preview version.

See the Release Notes for Multi-Modal Patient Monitoring.

Edge AI Libraries#

Open Edge Platform 2026.0 libraries, tools, and microservices now support the latest Intel hardware, bringing the Panther Lake platform to the edge. Alongside typical maintenance updates, this release also brings several changes to the portfolio and its feature set.

Model Download, a new microservice for acquiring and preparing models at runtime, has been added. As a leaner, more focused solution, it replaces the now-deprecated Model Registry, which will remain available until the next release and then be archived.

Geti™ Instant Learn, Anomalib Studio, and Physical AI Studio are new additions to the Platform’s tool portfolio.

For a more detailed view of the 2026.0 changes, see the highlights below and detailed release notes for individual software components.

Deep Learning Streamer#

Deep Learning Streamer (DL Streamer) delivers video analytics built on open GStreamer standards, enabling interoperability, flexibility, and long-term freedom from vendor lock-in. It continues to evolve as the foundation for video analytics pipelines, introducing smarter batching control, zero‑touch ONVIF camera discovery, configurable visual overlays, privacy‑oriented ROI blurring, and new elements for multi‑sensor data processing. These improvements simplify the creation of efficient, scalable, and privacy‑aware pipelines optimized for Intel hardware.
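Privacy-oriented ROI blurring, for instance, amounts to masking a detected region before a frame leaves the pipeline. DL Streamer performs this with pipeline elements; the standalone sketch below only illustrates the core pixelation step on a numpy image, and every name in it is hypothetical.

```python
import numpy as np

def blur_roi(image: np.ndarray, x: int, y: int, w: int, h: int,
             block: int = 8) -> np.ndarray:
    """Pixelate a region of interest (e.g., a detected face) by replacing
    each block x block tile with its mean value."""
    out = image.copy()
    roi = out[y:y + h, x:x + w]          # view into the copy
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = roi[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(tile.dtype)
    return out

# Synthetic 32x32 grayscale frame; pixelate the central 16x16 region.
frame = (np.arange(32 * 32) % 256).astype(np.uint8).reshape(32, 32)
masked = blur_roi(frame, x=8, y=8, w=16, h=16)
```

In a real pipeline the ROI coordinates would come from an upstream detection element rather than being hard-coded.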

This release also demonstrates a single application running DL Streamer and NVIDIA DeepStream, enabling parallel execution of video analytics pipelines across Intel CPU/GPU/NPU and NVIDIA GPU within the same system.

DL Streamer also expands its Python samples, reflecting the growing use of Python as an integration and orchestration layer. Finally, the DL Streamer Pipeline Optimizer automatically finds the optimal pipeline configuration to maximize FPS by intelligently testing pipeline parameters.
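The optimizer’s strategy (measure FPS for candidate parameter combinations and keep the best) can be illustrated with a simple grid search. Both the parameter names and the benchmark function below are stubs for illustration, not the tool’s actual interface:

```python
import itertools

def benchmark_fps(params: dict) -> float:
    """Stub standing in for running a pipeline and measuring throughput.
    The toy model rewards larger batches, a lower inference interval,
    and offloading to the GPU."""
    fps = params["batch_size"] * 10 - params["inference_interval"] * 2
    if params["device"] == "CPU":
        fps -= 5
    return fps

# Hypothetical parameter space; the real optimizer explores DL Streamer
# pipeline settings in a comparable way.
search_space = {
    "batch_size": [1, 4, 8],
    "inference_interval": [1, 3],
    "device": ["CPU", "GPU"],
}

best_params, best_fps = None, float("-inf")
for combo in itertools.product(*search_space.values()):
    params = dict(zip(search_space.keys(), combo))
    fps = benchmark_fps(params)
    if fps > best_fps:
        best_params, best_fps = params, fps

print(best_params, best_fps)
```

Exhaustive search works for small spaces like this one; larger spaces call for smarter strategies, which is what makes a dedicated optimizer useful.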

For a detailed listing of all the changes, see the Release notes for Deep Learning Streamer 2026.0.

SceneScape#

SceneScape advances spatial intelligence and scene understanding with a new high‑performance tracking architecture capable of handling up to 1,000 simultaneous objects per scene, along with robust object re‑identification based on semantic attributes and visual embeddings.

Visual Pipeline and Platform Evaluation Tool#

ViPPET’s 2026.0 release introduces a range of new features focused on usability and flexibility:

  • A demo mode tailored for events, support for multiple pipeline variants (CPU, GPU, NPU), and ready-to-use optimized pipeline templates.

  • A simplified pipeline view, new predefined pipelines for retail, manufacturing, and metro scenarios, and automatically detected USB or network cameras as inputs.

  • Live WebRTC-based preview of pipeline output and the ability to run pipelines for a specified duration with automatic looping.

  • Improved performance monitoring with redesigned metrics charts showing up to eight system indicators.

  • A refreshed user interface with updated navigation and an enhanced pipeline editor layout.

For a detailed listing of all the changes, see the Release notes for ViPPET 2026.0.

Geti™ Instant Learn#

Geti Instant Learn is an open‑source application designed for developing, benchmarking, and deploying zero‑shot and few‑shot visual prompting algorithms. It is optimized for Intel edge hardware and enables users to build AI solutions quickly using open vocabulary models. The application includes a unified Python library for research and a full‑stack deployment application.

For a detailed listing of all the changes, see the Release notes for Geti™ Instant Learn.

Anomalib Studio#

Anomalib Studio is an open‑source application for developing, benchmarking, and deploying unsupervised anomaly detection algorithms on Intel edge hardware. It unifies the Anomalib Python library with a full‑stack application that enables users to train and deploy anomaly detection models on images or video streams. The platform provides access to the largest public collection of state‑of‑the‑art deep learning anomaly detection algorithms and datasets, with built‑in acceleration via OpenVINO™ and flexible backends for both research and optimized deployment.
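A common idea behind unsupervised detectors of this kind is to score a sample by its distance to a memory bank of features extracted from normal data. The sketch below illustrates that idea in plain numpy; it is not Anomalib’s API, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Memory bank: feature vectors extracted from known-normal samples.
normal_features = rng.normal(0.0, 1.0, size=(200, 8))

def anomaly_score(x: np.ndarray, bank: np.ndarray, k: int = 5) -> float:
    """Score a sample by its mean distance to the k nearest features in the
    bank of normal samples: far from everything normal means anomalous."""
    dists = np.linalg.norm(bank - x, axis=1)
    return float(np.sort(dists)[:k].mean())

normal_sample = rng.normal(0.0, 1.0, size=8)
anomalous_sample = np.full(8, 6.0)   # far outside the normal cluster

print("normal:   ", anomaly_score(normal_sample, normal_features))
print("anomalous:", anomaly_score(anomalous_sample, normal_features))
```

In practice the features come from a pretrained backbone and a threshold on the score separates normal from anomalous, but the scoring step has this shape.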

For a detailed listing of all the changes, see the Release notes for Anomalib.

Physical AI Studio#

Physical AI Studio is an end‑to‑end framework for developing, training, benchmarking, and deploying Vision‑Language‑Action (VLA) models for robotic imitation learning. It simplifies the entire pipeline — from demonstration capture and robot calibration, through policy training, to deployment on Intel hardware using OpenVINO™, ONNX, or Torch runtimes. It provides a unified ecosystem supporting GUI, CLI, Python APIs, distributed training, and evaluation on standardized robotics benchmarks.

For a detailed listing of all the changes, see the Release notes for Physical AI Studio.

See the release notes of individual components:

Edge Microvisor Toolkit#

For Open Edge Platform 2026.0, Edge Microvisor Toolkit 25.06.02 brings NPU driver support and an improved build toolchain. It also delivers improvements, optimizations, and updates to the repository architecture, compatibility, and documentation.

Importantly, this release adds a second version of the toolkit. While 25.06.02 is based on kernel 6.12, the 26.06-preview version is designed to support the latest Intel hardware, using kernel 6.17. Tested only on Panther Lake, it offers an early view of the next release’s functionality.

For a detailed change log, see Release Notes for Edge Microvisor Toolkit.

Edge Manageability Framework#

EMF release versioning is now fully aligned with the Platform, bringing you version 2026.0, with Panther Lake and Bartlett Lake support.

As a major improvement, the minimalistic deployment profile has been introduced to enable Intel® vPro®–based power‑management workflows for Edge Nodes, leveraging the modular architecture of the Edge Manageability Framework and adding simplified multitenancy for Edge Infrastructure Manager. Edge Nodes pre‑provisioned with Ubuntu 24.04 can now be seamlessly onboarded using the Open Edge Platform CLI.

Moreover, Intel vPro Admin Control Mode (ACM) is now supported, and both component and application support have been updated. For a detailed listing of all the changes and KPIs, see the Release Notes for Edge Manageability Framework 2026.0.

OS Image Composer Framework#

OS Image Composer 2026.0 brings several new features:

  • Support for multiple package repositories, in both RPM and Debian formats.

  • A GUI for package dependency visualization.

  • Pre-deployment configuration, embedding settings and binaries directly into the image.

  • A simplified process for building AI-enabled edge solutions with Intel’s optimized software stack, by incorporating Open Edge Platform components.

  • Increased security through automatic Software Bill of Materials (SBOM) generation and embedding.

  • Image composition compatible with the Edge Manageability Framework.

See the Release Notes for OS Image Composer Framework.