Deep Learning Streamer Pipeline Framework Release 2025.1.2#
Deep Learning Streamer Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend across Intel® architecture: CPU, discrete GPU, integrated GPU, and NPU. The complete solution leverages:
- Open source GStreamer* framework for pipeline management
- GStreamer* plugins for input and output, such as media files and real-time streaming from a camera or the network
- Video decode and encode plugins, either CPU-optimized plugins or GPU-accelerated plugins based on VAAPI
- Deep Learning models converted from training frameworks such as TensorFlow*, Caffe*, etc.
The Pipeline Framework repository includes the following elements:

| Element | Description |
|---|---|
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4-v11, MobileNet SSD, Faster-RCNN, etc. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with an RGB or BGR input. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvagenai | Performs inference with Vision Language Models using OpenVINO™ GenAI. Accepts video and a text prompt as input and outputs a text description; it can be used to generate text summarization from video. |
| gvaattachroi | Adds user-defined regions of interest to perform inference on, instead of the full frame. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to the JSON format. |
| gvametapublish | Publishes the JSON metadata to MQTT or Kafka message brokers or to files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvarealsense | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
| gvawatermark | Overlays the metadata on the video frame to visualize the inference results. |
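These elements compose into ordinary GStreamer pipelines. Below is a minimal sketch of a typical analytics pipeline; the input file, the model paths, and the MQTT broker address are placeholders, not assets shipped with the release.

```bash
# Minimal sketch: decode a file, detect and classify objects, overlay results,
# then convert metadata to JSON and publish it to an MQTT broker.
# input.mp4, the *.xml model paths, and the broker address are placeholders.
gst-launch-1.0 \
  filesrc location=input.mp4 ! decodebin ! videoconvert ! \
  gvadetect model=detection-model.xml device=CPU ! \
  gvaclassify model=classification-model.xml device=CPU ! \
  gvawatermark ! \
  gvametaconvert format=json ! \
  gvametapublish method=mqtt address=localhost:1883 topic=dlstreamer ! \
  fakesink sync=false
```

Any sink (display, file, or fakesink) can terminate the pipeline; fakesink is used here so the example runs headless.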
For details on supported platforms, refer to System Requirements. To install Pipeline Framework from prebuilt binaries or Docker*, or to build the binaries from the open source, refer to the Deep Learning Streamer Pipeline Framework installation guide.
New in this Release#
| Title | High-level description |
|---|---|
| Custom model post-processing | End users can now create a custom post-processing library (.so); a sample is added as reference. |
| Latency mode support | The default scheduling policy for DL Streamer is throughput. With this change, users can add scheduling-policy=latency for scenarios that prioritize latency requirements over throughput. |
| Visual Embeddings enabled | New models enabled to convert input video into feature embeddings, validated with the Clip-ViT-Base-B16/Clip-ViT-Base-B32 models; a sample is added as reference. |
| VLM models support | New gstgenai element added to convert video into text with VLM models, validated with miniCPM2.6; available in the advanced installation option when building from sources; a sample is added as reference. |
| INT8 automatic quantization support for Yolo models | Performance improvement: automatic INT8 quantization for Yolo models. |
| MS Windows 11 support | Native support for Windows 11. |
| New Linux distribution (Azure Linux derivative) | New distribution added; DL Streamer can now be installed on Edge Microvisor Toolkit. |
| License plate recognition use case support | Added support for models that recognize license plates; a sample is added as reference. |
| Deep Scenario model support | Commercial 3D model support. |
| Anomaly model support | Added support for anomaly models; a sample is added as reference. |
| RealSense element support | New gvarealsense element implementation providing basic integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
| OpenVINO 2025.2 version support | Support for the recent OpenVINO version added. |
| GStreamer 1.26.4 version support | Support for the recent GStreamer version added. |
| NPU 1.19 version driver support | Support for the recent NPU driver version added. |
| Docker image size reduction | Reduction for all images, e.g., the Ubuntu 24 Release image size reduced from 2.6 GB to 1.6 GB. |
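As a hedged illustration of the new latency mode, the sketch below adds scheduling-policy=latency to an inference element. The model path and camera device are placeholders, and the property placement follows the release-note wording rather than a verified element reference.

```bash
# Sketch: prefer low per-frame latency over aggregate throughput.
# yolo11s.xml and /dev/video0 are placeholders.
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! decodebin ! videoconvert ! \
  gvadetect model=yolo11s.xml device=GPU scheduling-policy=latency ! \
  gvafpscounter ! fakesink sync=false
```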
Known Issues#
| Issue | Issue Description |
|---|---|
| VAAPI memory with | If you are using |
| Artifacts on | Running inference results visualization on GPU via |
| Preview Architecture 2.0 Samples | Preview Architecture 2.0 samples have known issues with inference results. |
| Sporadic hang on | Using a Tiger Lake CPU to run this sample may lead to a sporadic hang at 99.9% of video processing. Rerun the sample as a workaround, or use GPU instead. |
| Simplified installation process for option 2 via script | In certain configurations, users may encounter visible errors |
| Error when using legacy YoloV5 models: "Dynamic resize: Model width dimension shall be static" | To avoid the issue, modify `${MODEL_NAME}` with a `python3 - <<EOF` script (see the sketch below). |
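A hypothetical sketch of a workaround consistent with the error message is shown below: it reshapes a legacy YOLOv5 IR to a static input using the OpenVINO Python API. The 640x640 input size, the file names, and the use of openvino.save_model are assumptions, not the exact snippet from the release notes.

```bash
# Hypothetical workaround sketch (not the exact release-notes snippet):
# reshape a legacy YOLOv5 IR to a static input so the
# "Model width dimension shall be static" error does not occur.
MODEL_NAME=yolov5s.xml   # placeholder path to the model IR
python3 - "${MODEL_NAME}" <<'EOF'
import sys
import openvino as ov

model_path = sys.argv[1]
model = ov.Core().read_model(model_path)
# Fix the dynamic batch/spatial dimensions to static values (assumed 1x3x640x640).
model.reshape([1, 3, 640, 640])
ov.save_model(model, model_path.replace(".xml", "_static.xml"))
EOF
```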