# Elements
Links in the first column of each table (the GStreamer element name) lead to
descriptions of the element properties, in the format generated by the
gst-inspect-1.0 utility.
## Inference plugins
| Element | Description |
|---------|-------------|
| [gvadetect](./gvadetect.md) | Performs object detection and *optionally* object classification/segmentation/pose estimation. Inputs: ROIs (regions of interest) or full frame. Output: object bounding boxes with prediction metadata. A `queue` element must be placed directly after `gvadetect` in the pipeline.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU[CPU,NPU] ! queue ! … OUT` |
| [gvaclassify](./gvaclassify.md) | Performs object classification/segmentation/pose estimation. Inputs: ROIs or full frame. Output: prediction metadata. A `queue` element must be placed directly after `gvaclassify` in the pipeline.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU ! queue ! gvaclassify model=$mClassify device=CPU ! queue ! … OUT` |
| [gvainference](./gvainference.md) | Executes any inference model and outputs raw results; does not interpret data or generate metadata. A `queue` element must be placed directly after `gvainference` in the pipeline.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU ! queue ! gvainference model=$mHeadPoseEst device=CPU ! queue ! … OUT` |
| [gvatrack](./gvatrack.md) | Tracks objects across video frames using zero-term or short-term tracking algorithms. Zero-term tracking assigns unique object IDs and requires object detection to run on every frame. Short-term tracking follows objects between frames, reducing the need to run object detection on each frame.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU ! gvatrack tracking-type=short-term-imageless ! … OUT` |
| [gvaaudiodetect](./gvaaudiodetect.md) | Legacy plugin. Performs audio event detection using the `AclNet` model.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! audioresample ! audioconvert ! audio/x-raw … ! audiomixer … ! gvaaudiodetect model=$mAudioDetect ! … OUT` |
| [gvaaudiotranscribe](./gvaaudiotranscribe.md) | ASR plugin. Performs audio transcription using the `Whisper` model.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! audioresample ! audioconvert ! audio/x-raw … ! audiomixer … ! gvaaudiotranscribe model=$mASR device=CPU ! … OUT` |
| [gvagenai](./gvagenai.md) | Performs inference using GenAI models. Can be used to generate text descriptions from images or video.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! videoconvert ! gvagenai model=$mGenAI device=GPU ! … OUT` |
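The table examples elide the source and sink stages. As a minimal end-to-end sketch (file names and model paths below are placeholders, assuming an OpenVINO IR detection model as typically used with these elements), a complete detect-and-display pipeline could look like:

```shell
# Hypothetical complete pipeline: decode a video file, run detection on GPU,
# draw the resulting bounding boxes, and render to screen.
gst-launch-1.0 filesrc location=input.mp4 ! decodebin3 ! \
  gvadetect model=/path/to/detection.xml device=GPU ! queue ! \
  gvawatermark ! videoconvert ! autovideosink sync=false
```

Note the `queue` directly after `gvadetect`, as required by the table above.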
## 3D plugins
| Element | Description |
|---------|-------------|
| [g3dradarprocess](./g3dradarprocess.md) | Processes millimeter-wave (mmWave) radar signal data. Performs data reordering, pre-processing, DC (direct current) removal, and interfaces with the radar library to generate point clouds, clusters, and tracking data. Attaches custom metadata containing detected reflection points, clustered objects, and tracked targets to each buffer.<br>Example:<br>`gst-launch-1.0 multifilesrc location=radar/%06d.bin ! application/octet-stream ! g3dradarprocess radar-config=config.json frame-rate=10 ! fakesink` |
| [g3dlidarparse](./g3dlidarparse.md) | Parses 3D LiDAR binary frames and attaches custom metadata with point cloud data. Reads raw LiDAR frames (BIN/PCD), applies stride/frame-rate thinning, and outputs buffers enriched with LidarMeta (points, frame_id, timestamps, stream_id) for downstream fusion, analytics, or visualization.<br>Example:<br>`gst-launch-1.0 multifilesrc location="lidar/%06d.bin" caps=application/octet-stream ! g3dlidarparse stride=5 frame-rate=5 ! fakesink` |
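Both 3D elements consume raw binary frames from `multifilesrc`. When bringing up a new recording, a verbose run is a quick way to confirm that frames are being parsed at the expected rate before adding downstream fusion or analytics (paths below are placeholders; `-v` prints negotiated caps and element state changes):

```shell
# Hypothetical bring-up sketch: parse a LiDAR recording at a reduced rate
# and discard the buffers, while -v logs what the pipeline negotiates.
gst-launch-1.0 -v multifilesrc location="lidar/%06d.bin" \
  caps=application/octet-stream ! \
  g3dlidarparse stride=5 frame-rate=5 ! fakesink
```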
## Auxiliary plugins
| Element | Description |
|---------|-------------|
| [gvaattachroi](./gvaattachroi.md) | Adds user-defined regions of interest to run inference on instead of the full frame. Example uses: monitoring road traffic in a city camera feed; splitting a large image into smaller pieces and running inference on each piece (healthcare cell analytics).<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvaattachroi roi=xtl,ytl,xbr,ybr ! gvadetect inference-region=1 ! … OUT` |
| [gvafpscounter](./gvafpscounter.md) | Measures frames per second across multiple video streams in a single GStreamer process.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvadetect … ! gvafpscounter ! … OUT` |
| [gvafpsthrottle](./gvafpsthrottle.md) | Throttles the framerate of video streams by enforcing a maximum frames-per-second (FPS) rate. Useful for rate limiting in pipelines or for testing at specific processing framerates.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvafpsthrottle target-fps=10 ! … OUT` |
| [gvametaaggregate](./gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! tee name=t t. ! queue ! gvametaaggregate name=a ! gvaclassify … ! gvaclassify … ! gvametaconvert … ! gvametapublish … ! fakesink t. ! queue ! gvadetect … ! a.` |
| [gvametaconvert](./gvametaconvert.md) | Converts the metadata structure to JSON or raw text formats. Can write output to a file. |
| [gvametapublish](./gvametapublish.md) | Publishes JSON metadata to MQTT or Kafka message brokers or to files.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvadetect model=$mDetect device=GPU … ! gvametaconvert format=json … ! gvametapublish … ! … OUT` |
| [gvapython](./gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Used to augment DL Streamer with user-defined algorithms (e.g. metadata conversion, inference post-processing).<br>Example:<br>`gst-launch-1.0 … ! gvaclassify ! gvapython module={gvapython.callback_module.classAge_pp} ! … OUT` |
| [gvarealsense](./gvarealsense.md) | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
| [gvawatermark](./gvawatermark.md) | Overlays metadata on the video frame to visualize inference results.<br>Example:<br>`gst-launch-1.0 … ! decodebin3 ! gvadetect … ! gvawatermark ! …` |
| [gvamotiondetect](./gvamotiondetect.md) | Performs lightweight motion detection on NV12 frames and emits motion ROIs as analytics metadata. Uses VA-API acceleration when VAMemory caps are negotiated; otherwise falls back to a system-memory path.<br>Example:<br>`gst-launch-1.0 … ! vaapih264dec ! gvamotiondetect confirm-frames=2 motion-threshold=0.08 ! gvawatermark ! …` |
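As a sketch of the metadata path through the auxiliary elements, detection results can be converted to JSON and published to a file (file and model paths below are placeholders; as noted in the table, `gvametapublish` can also target MQTT or Kafka brokers):

```shell
# Hypothetical metadata pipeline: detect, convert results to JSON,
# and append them to a local file instead of a message broker.
gst-launch-1.0 filesrc location=input.mp4 ! decodebin3 ! \
  gvadetect model=/path/to/detection.xml device=CPU ! queue ! \
  gvametaconvert format=json ! \
  gvametapublish method=file file-path=results.json ! fakesink
```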
:::{toctree}
:maxdepth: 1
:hidden:
gvadetect
gvaclassify
gvainference
gvatrack
gvaaudiodetect
gvaaudiotranscribe
gvagenai
g3dradarprocess
g3dlidarparse
gvaattachroi
gvafpscounter
gvafpsthrottle
gvametaaggregate
gvametaconvert
gvametapublish
gvapython
gvarealsense
gvawatermark
gvamotiondetect
gstelements
:::