How to Enable Re-identification Using Visual Similarity Search#

This guide provides step-by-step instructions to enable or disable re-identification (ReID) using visual similarity search in an Intel® SceneScape deployment. By completing this guide, you will:

  • Enable re-identification using a visual database and feature-matching model.

  • Understand how to track and evaluate unique object identities across frames.

  • Learn how to tune performance for specific use cases.

This task is important for enabling persistent object tracking across different camera scenes or time intervals.


Prerequisites#

Before you begin, ensure the following:

  • Docker is installed and configured.

  • You have access to modify the docker-compose.yml file in your deployment.

  • You are familiar with scene and camera configuration in Intel® SceneScape.


Steps to Enable Re-identification (ReID) for the Out-of-Box Experience#

  1. Enable VDMS storage by uncommenting the following section in docker-compose-dl-streamer-example.yml:

vdms:
  image: intellabs/vdms:latest
  init: true
  networks:
    scenescape:
      aliases:
        - vdms.scenescape.intel.com
  environment:
    - OVERRIDE_ca_file=/run/secrets/certs/scenescape-ca.pem
    - OVERRIDE_cert_file=/run/secrets/certs/scenescape-vdms-s.crt
    - OVERRIDE_key_file=/run/secrets/certs/scenescape-vdms-s.key
  secrets:
    - source: root-cert
      target: certs/scenescape-ca.pem
    - source: vdms-server-cert
      target: certs/scenescape-vdms-s.crt
    - source: vdms-server-key
      target: certs/scenescape-vdms-s.key
  restart: always

For information on VDMS, visit the official documentation: https://intellabs.github.io/vdms/.

SceneScape uses VDMS to store object vector embeddings so that objects can be re-identified by their visual features.
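To make the matching step concrete, here is a minimal sketch of how a query embedding can be compared against stored embeddings using cosine similarity. This is illustrative only, not SceneScape's or VDMS's actual implementation; the function names, the in-memory dictionary standing in for the database, and the threshold value are all assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_match(query, database, threshold=0.85):
    """Return (uuid, score) for the best stored embedding at or above
    threshold, or (None, None) if nothing is similar enough."""
    best_id, best_score = None, -1.0
    for obj_id, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = obj_id, score
    return (best_id, best_score) if best_score >= threshold else (None, None)
```

In a real deployment the database side of this comparison is handled by VDMS, and the threshold corresponds to DEFAULT_SIMILARITY_THRESHOLD discussed under Configuration Options.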

  2. Uncomment the vdms dependency in the scene controller service definition:

depends_on:
  web:
    condition: service_healthy
  broker:
    condition: service_started
  ntpserv:
    condition: service_started
  vdms:
    condition: service_started
  3. Enable visual feature extraction in the video pipeline. Edit the retail-config setting in Docker Compose as follows:

retail-config:
  file: ./dlstreamer-pipeline-server/retail-config-reid.json

This reidentification-specific configuration uses a vision pipeline that includes anonymous visual feature extraction (also called “visual embeddings”) using a person reidentification model:

"pipeline": "multifilesrc loop=TRUE location=/home/pipeline-server/videos/apriltag-cam2.ts name=source ! decodebin ! videoconvert ! video/x-raw,format=BGR ! gvapython class=PostDecodeTimestampCapture function=processFrame module=/home/pipeline-server/user_scripts/gvapython/sscape/sscape_adapter.py name=timesync ! gvadetect model=/home/pipeline-server/models/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml model-proc=/home/pipeline-server/models/object_detection/person/person-detection-retail-0013.json name=detection ! gvainference model=/home/pipeline-server/models/intel/person-reidentification-retail-0277/FP32/person-reidentification-retail-0277.xml inference-region=roi-list ! gvametaconvert add-tensor-data=true name=metaconvert ! gvapython class=PostInferenceDataPublish function=processFrame module=/home/pipeline-server/user_scripts/gvapython/sscape/sscape_adapter.py name=datapublisher ! gvametapublish name=destination ! appsink sync=true",
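The one-line pipeline string above is easier to read stage by stage. The breakdown below lists the same elements in order with comments; joining the entries with `!` reproduces the original string:

```python
# Stages of the ReID pipeline above, one list entry per element (illustrative breakdown)
stages = [
    # looped file source feeding the demo video
    "multifilesrc loop=TRUE location=/home/pipeline-server/videos/apriltag-cam2.ts name=source",
    "decodebin",
    "videoconvert",
    # caps filter forcing BGR raw video
    "video/x-raw,format=BGR",
    # capture a post-decode timestamp for scene time synchronization
    "gvapython class=PostDecodeTimestampCapture function=processFrame module=/home/pipeline-server/user_scripts/gvapython/sscape/sscape_adapter.py name=timesync",
    # person detection
    "gvadetect model=/home/pipeline-server/models/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml model-proc=/home/pipeline-server/models/object_detection/person/person-detection-retail-0013.json name=detection",
    # visual embedding extraction on each detected region (the ReID step)
    "gvainference model=/home/pipeline-server/models/intel/person-reidentification-retail-0277/FP32/person-reidentification-retail-0277.xml inference-region=roi-list",
    # attach tensor (embedding) data to the frame metadata
    "gvametaconvert add-tensor-data=true name=metaconvert",
    # hand results to SceneScape's adapter for publishing
    "gvapython class=PostInferenceDataPublish function=processFrame module=/home/pipeline-server/user_scripts/gvapython/sscape/sscape_adapter.py name=datapublisher",
    "gvametapublish name=destination",
    "appsink sync=true",
]
pipeline = " ! ".join(stages)
```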
  4. Start the system. Launch the updated stack:

    docker compose up
    

Expected Result: Intel® SceneScape starts with ReID enabled and begins assigning UUIDs based on visual similarity.


Steps to Disable Re-identification#

  1. Comment out the database container. Disable vdms by commenting it out in docker-compose.yml:

    # vdms:
    #   image: intellabs/vdms:latest
    #   ...
    
  2. Remove the dependency from the scene controller. Comment out or delete the vdms dependency:

    depends_on:
      - broker
      - web
      - ntpserv
      # - vdms
    
  3. Remove ReID from the camera pipeline. Edit the retail-config setting in Docker Compose and revert to the configuration without the ReID model:

retail-config:
  file: ./dlstreamer-pipeline-server/retail-config.json
  4. Restart the system:

    docker compose up --build
    

Expected Result: Intel® SceneScape runs without ReID and no visual feature matching is performed.


Evaluating Re-identification Performance#

  • Track Unique IDs:
    Intel® SceneScape publishes unique_detection_count via MQTT under the scene category topic. Each object includes an id field (UUID) for tracking.

  • UI Support:
    UUID display in the 3D UI is planned for future releases.

Note: The default ReID model is tuned for the ‘person’ category and may not generalize well to other object types.
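As a concrete (hypothetical) example of consuming these messages, the snippet below collects the distinct id values from a scene payload. The exact message schema is not documented here, so the top-level "objects" key is an assumption to verify against your deployment's actual MQTT traffic:

```python
import json

def unique_ids(scene_payload: str) -> set:
    """Collect the distinct object UUIDs from a scene message.

    Assumes (illustratively) that the payload is JSON with an 'objects'
    list whose entries each carry the 'id' field described above.
    """
    msg = json.loads(scene_payload)
    return {obj["id"] for obj in msg.get("objects", [])}
```

With ReID enabled, two sightings of the same person should eventually share one UUID, so the size of this set approximates the unique object count.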


How Re-identification Works#

When an object is first detected, it is assigned a UUID and no similarity score. If ReID is enabled, the system collects visual features over time. Once enough features are gathered, they are compared to those in the database:

  • Match Found: The object is reassigned a matching UUID and given a similarity score.

  • No Match: The object retains its original UUID.
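The lifecycle above can be sketched as a small state machine. Everything here is illustrative: the class names, the toy database (which matches by exact feature value instead of a real similarity search), and the default parameter values stand in for SceneScape's internal logic and VDMS:

```python
import uuid

class ToyFeatureDB:
    """Stand-in for VDMS in this sketch: maps known feature values to UUIDs."""
    def __init__(self, known):
        self.known = known  # {feature_value: uuid_string}

    def query(self, features, threshold):
        # Toy matching: exact membership instead of a real similarity search.
        for f in features:
            if f in self.known:
                return self.known[f], 1.0
        return None, None

class TrackedObject:
    """Sketch of the ReID flow described above (illustrative names/values)."""
    def __init__(self, min_feature_count=5):
        self.id = str(uuid.uuid4())  # UUID assigned on first detection
        self.similarity = None       # no similarity score initially
        self.features = []
        self.min_feature_count = min_feature_count

    def observe(self, feature, db, threshold=0.85):
        self.features.append(feature)
        if len(self.features) < self.min_feature_count:
            return  # keep collecting features before querying the database
        match_id, score = db.query(self.features, threshold)
        if match_id is not None:
            # Match found: reassign the matching UUID and record the score.
            self.id, self.similarity = match_id, score
        # No match: the object keeps its original UUID.
```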

Known Issue: Current VDMS implementation does not support feature expiration, leading to degraded performance over time. This will be addressed in a future release.


Configuration Options#

| Parameter | Purpose | Expected Value/Range |
|---|---|---|
| DEFAULT_SIMILARITY_THRESHOLD | Controls match sensitivity. Higher values increase matches (and false positives). | Float (e.g., 0.7–0.95) |
| DEFAULT_MINIMUM_BBOX_AREA | Minimum bounding box size to consider a valid feature. | Pixel area (e.g., 400–1600) |
| DEFAULT_MINIMUM_FEATURE_COUNT | Minimum number of features needed before querying the database. | Integer (e.g., 5–20) |
| DEFAULT_MAX_FEATURE_SLICE_SIZE | Proportion of features stored, to improve database performance. | Float (e.g., 0.1–1.0) |
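As an illustration of how the minimum-area and minimum-count parameters gate database queries, the predicate below sketches the behavior described in the table (it is not SceneScape source code, and the default values are the example ranges above):

```python
def should_query_db(bbox_area, feature_count,
                    min_bbox_area=400, min_feature_count=5):
    """A detection only triggers a ReID database query once its bounding
    box is large enough and enough features have been collected
    (illustrative sketch of the parameter semantics above)."""
    return bbox_area >= min_bbox_area and feature_count >= min_feature_count
```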

To apply changes:

docker compose down
make -C docker
docker compose up --build

Troubleshooting#

  1. Issue: ReID not working

    • Cause: Database container is not running or not linked.

    • Resolution:

      docker ps | grep vdms
      docker compose logs vdms
      
  2. Issue: Objects not re-identifying across scenes

    • Cause: Insufficient visual features collected or poor lighting.

    • Resolution:

      • Lower DEFAULT_MINIMUM_FEATURE_COUNT.

      • Increase DEFAULT_MINIMUM_BBOX_AREA only if objects are large and visible.

© Copyright 2025, Intel Corporation.
