Image-Based Video Search


The Image-based Video Search sample application performs near real-time analysis and image-based search to detect and retrieve objects of interest in large video datasets.

Overview

This sample application lets users search live or recorded camera feeds by providing an image and view matching objects with location, timestamp, and confidence score details.

This sample provides a working example of how to combine edge AI microservices for video ingestion, object detection, feature extraction, and vector-based search.

You can use this foundation to build solutions for diverse use cases, including city infrastructure monitoring and security applications, helping operators quickly locate objects of interest across large video datasets.

How it Works

The application workflow consists of three stages: inputs, processing, and outputs.

Figure 1: Detailed Architecture of the Image-Based Video Search Application.

Inputs

  • Video files or live camera streams (simulated or real time)

  • User-provided images or images captured from video for search

The application includes a demonstration video for testing. The video loops continuously and appears in the UI as soon as the application starts.

Processing

  • Nginx reverse proxy server: All user interactions pass through the Nginx server. It protects the IBVS application by handling SSL/TLS encryption, filtering and validating requests, providing centralized access control, and blocking direct external access to the application.

  • Video analysis with Deep Learning Streamer Pipeline Server and MediaMTX: Select Analyze Stream to start the DL Streamer Pipeline Server pipeline. The Pipeline Server processes video through MediaMTX, which simulates remote video cameras and publishes live streams. The Pipeline Server extracts frames from RTSP streams and detects objects in each frame, publishing predictions through MQTT.
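The predictions published over MQTT arrive as per-frame JSON metadata. As a minimal sketch of what a downstream consumer does with each message, the following parses a payload shaped like DL Streamer's metadata output; the sample field values and the exact envelope are illustrative assumptions, and the real topic name and schema depend on the deployment:

```python
import json

# Illustrative payload modeled on DL Streamer frame metadata: an "objects"
# array whose entries carry a "detection" with label, confidence, and a
# normalized bounding box. The values below are made up for demonstration.
SAMPLE_MESSAGE = json.dumps({
    "timestamp": 1717000000,
    "objects": [
        {"detection": {"label": "car", "confidence": 0.92,
                       "bounding_box": {"x_min": 0.1, "y_min": 0.2,
                                        "x_max": 0.4, "y_max": 0.6}}},
        {"detection": {"label": "person", "confidence": 0.35,
                       "bounding_box": {"x_min": 0.5, "y_min": 0.1,
                                        "x_max": 0.6, "y_max": 0.5}}},
    ],
})

def parse_predictions(payload: str):
    """Extract (label, confidence, bounding_box) tuples from one frame's metadata."""
    frame = json.loads(payload)
    return [
        (obj["detection"]["label"],
         obj["detection"]["confidence"],
         obj["detection"]["bounding_box"])
        for obj in frame.get("objects", [])
    ]

detections = parse_predictions(SAMPLE_MESSAGE)
print(detections[0][0])  # → car
```

In the running application, a callback registered with the MQTT client would receive each message and hand the parsed detections to the next stage.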

  • Feature extraction with Feature Matching: DL Streamer Pipeline Server sends metadata and images through MQTT to the Feature Matching microservice. Feature Matching generates feature vectors. If a prediction's confidence exceeds the threshold, the system stores its vector embedding in MilvusDB and saves the frame in the Docker file system.
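The gating logic above can be sketched as follows. This is a toy stand-in, not the service's actual code: a plain list plays the role of the MilvusDB collection, the embeddings are dummy vectors rather than real feature-extractor output, and the threshold value is an assumed example (the deployed threshold is configurable):

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed example value; configurable in practice

def store_embeddings(detections, vector_store):
    """Append (embedding, metadata) pairs for detections above the threshold.

    `vector_store` is a plain list standing in for the MilvusDB collection.
    """
    for det in detections:
        if det["confidence"] >= CONFIDENCE_THRESHOLD:
            vector_store.append((det["embedding"],
                                 {"label": det["label"],
                                  "confidence": det["confidence"],
                                  "timestamp": det["timestamp"]}))

store = []
store_embeddings(
    [{"label": "car", "confidence": 0.92, "timestamp": 1717000000,
      "embedding": [0.1, 0.9, 0.3]},
     {"label": "person", "confidence": 0.35, "timestamp": 1717000001,
      "embedding": [0.7, 0.2, 0.1]}],
    store,
)
print(len(store))  # → 1 (only the high-confidence detection is kept)
```

Thresholding before insertion keeps low-confidence noise out of the vector database, which both saves storage and improves the quality of later similarity searches.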

  • Storage and retrieval in MilvusDB: MilvusDB stores feature vectors. You can review them in MilvusUI.

  • Video search with ImageIngestor: To search, first analyze the stream by selecting Analyze Stream. Then upload an image or capture an object from the video using Upload Image or Capture Frame. You can adjust the frame to capture a specific object. The system ingests images via ImageIngestor, processes them with DL Streamer Pipeline Server, and matches them against stored feature vectors in MilvusDB.
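Conceptually, the matching step ranks stored feature vectors by similarity to the query image's vector. The sketch below demonstrates the idea with cosine similarity over an in-memory list; the real application delegates this to MilvusDB's indexed search, and the vectors and metadata here are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query, store, top_k=2):
    """Return the top_k (metadata, score) pairs most similar to the query vector."""
    ranked = sorted(store,
                    key=lambda item: cosine_similarity(query, item[0]),
                    reverse=True)
    return [(meta, round(cosine_similarity(query, vec), 3))
            for vec, meta in ranked[:top_k]]

# Toy "collection": (embedding, metadata) pairs standing in for MilvusDB rows.
store = [
    ([1.0, 0.0, 0.0], {"label": "car", "timestamp": 1717000000}),
    ([0.0, 1.0, 0.0], {"label": "person", "timestamp": 1717000001}),
    ([0.9, 0.1, 0.0], {"label": "car", "timestamp": 1717000002}),
]
results = search([1.0, 0.05, 0.0], store)
print(results[0][0]["label"])  # → car
```

Each match carries its metadata (label, timestamp) alongside a similarity score, which is what the UI surfaces as location, timestamp, and confidence details.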

Outputs

  • Matched search results, including metadata, timestamps, confidence scores, and frames

Screenshot of the Image-Based Video Search sample application interface displaying search input and matched results.

Learn More

  • Get Started

  • System Requirements

  • Release Notes

  • DL Streamer Pipeline Server
