Troubleshooting#
Using REST API in Image Ingestor mode has high first inference latency#
This is expected behavior and is observed only for the first inference; subsequent inferences will be considerably faster.
For inference on GPU, the first inference might be even slower. Latencies of up to 15 seconds have been observed for image inference requests on GPU.
When in sync mode, we suggest providing a timeout value large enough to accommodate the first inference latency, so that the request does not time out.
Read here to learn more about the API.
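For example, in sync mode the client-side timeout can be raised when posting an image request. The endpoint path, port and payload below are placeholders; replace them with the values from your deployment.
# Allow up to 60 seconds so the first (slow) inference does not time the request out
curl --max-time 60 -X POST http://<HOST_IP>:<REST_PORT>/<image-request-endpoint> -H "Content-Type: application/json" -d @request.json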
Axis RTSP camera freezes or pipeline stops#
Restart the DL Streamer Pipeline Server container with the pipeline that has this RTSP source.
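For example, if the pipeline is running in a Docker container, the restart could look like this (the container name is a placeholder; use the one from your deployment):
docker restart <CONTAINER_NAME>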
Deploying with Intel GPU K8S Extension#
If you’re deploying a GPU based pipeline (for example, with VA-API elements like vapostproc, vah264dec, etc., and/or with device=GPU in gvadetect in dlstreamer_pipeline_server_config.json) with the Intel GPU k8s Extension, make sure to set the details below in the file helm/values.yaml appropriately in order to utilize the underlying GPU.
gpu:
enabled: true
type: "gpu.intel.com/i915"
count: 1
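Before deploying, you can optionally verify that the Intel GPU k8s Extension has advertised the GPU resource on the node; the resource name should match the type set above (the node name is a placeholder):
kubectl describe node <NODE_NAME> | grep gpu.intel.com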
Deploying without Intel GPU K8S Extension#
If you’re deploying a GPU based pipeline (for example, with VA-API elements like vapostproc, vah264dec, etc., and/or with device=GPU in gvadetect in dlstreamer_pipeline_server_config.json) without the Intel GPU k8s Extension, make sure to set the details below in the file helm/values.yaml appropriately in order to utilize the underlying GPU.
privileged_access_required: true
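With privileged access enabled, the deployed pod should be able to see the GPU render devices. As an optional sanity check after deployment (pod name and namespace are placeholders):
kubectl exec -n <NAMESPACE> <POD_NAME> -- ls /dev/dri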
Using RTSP/WebRTC streaming, S3_write or MQTT fails with GPU elements in pipeline#
If you are using GPU elements in the pipeline, RTSP/WebRTC streaming, S3_write and MQTT will not work because these components expect a CPU buffer.
Add vapostproc ! video/x-raw before the appsink element, or before the jpegenc element in case you are using S3_write, in the GPU pipeline.
# Sample pipeline
"pipeline": "{auto_source} name=source ! parsebin ! vah264dec ! vapostproc ! video/x-raw(memory:VAMemory) ! gvadetect name=detection model-instance-id=inst0 ! queue ! gvafpscounter ! gvametaconvert add-empty-results=true name=metaconvert ! gvametapublish name=destination ! vapostproc ! video/x-raw ! appsink name=appsink"
RTSP streaming fails if you are using udfloader#
If you are using a udfloader pipeline, RTSP streaming will not work because the RTSP pipeline does not support RGB, BGR or Mono formats.
If you are using a udfloader pipeline, or an RGB, BGR or GRAY8 format in the pipeline, add videoconvert ! video/x-raw, format=(string)NV12 before the appsink element in the pipeline.
# Sample pipeline
"pipeline": "{auto_source} name=source ! decodebin ! videoconvert ! video/x-raw,format=RGB ! udfloader name=udfloader ! gvametaconvert add-empty-results=true name=metaconvert ! gvametapublish name=destination ! videoconvert ! video/x-raw, format=(string)NV12 ! appsink name=appsink"
Resolving Time Sync Issues in Prometheus#
If you see the following warning in Prometheus, it indicates a time sync issue.
Warning: Error fetching server time: Detected xxx.xxx seconds time difference between your browser and the server.
Follow the steps below to synchronize system time using NTP.
Install systemd-timesyncd if not already installed:
sudo apt install systemd-timesyncd
Check service status:
systemctl status systemd-timesyncd
Configure an NTP server (if behind a corporate proxy):
sudo nano /etc/systemd/timesyncd.conf
Add:
[Time]
NTP=corp.intel.com
Replace corp.intel.com with an NTP server that is supported on your network.
Restart the service:
sudo systemctl restart systemd-timesyncd
Verify the status:
systemctl status systemd-timesyncd
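Optionally, confirm that the system clock now reports as synchronized:
timedatectl status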
This should resolve the time discrepancy in Prometheus.
WebRTC Stream on web browser#
The firewall may prevent you from viewing the video stream in a web browser. Please disable the firewall using this command.
sudo ufw disable
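Alternatively, instead of disabling the firewall entirely, you can allow only the port used for viewing the stream (the port is a placeholder; use the one configured for your deployment):
sudo ufw allow <STREAMING_PORT>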
Error Logs#
View the container logs using this command.
docker logs -f <CONTAINER_NAME>
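To capture the logs to a file for later review, redirect both stdout and stderr (the file name is an example):
docker logs <CONTAINER_NAME> > dlstreamer_pipeline_server.log 2>&1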