Get Started#
Time to Complete: 30 minutes
Programming Language: Python 3
Prerequisites#
Set up the application#
The following instructions assume that Docker Engine is correctly set up on the host system. If not, follow the Docker Engine installation guide.
Clone the edge-ai-suites repository and change into the industrial-edge-insights-vision directory, which contains the utility scripts required in the instructions that follow.
Go to a target directory of your choice and clone the suite.
If you want to clone a specific release branch, replace main with the desired tag.
To learn more about partial cloning, check the Repository Cloning guide.
git clone --filter=blob:none --sparse --branch main https://github.com/open-edge-platform/edge-ai-suites.git
cd edge-ai-suites
git sparse-checkout set manufacturing-ai-suite
cd manufacturing-ai-suite/industrial-edge-insights-vision
Set the app-specific environment variable file:
cp .env_worker-safety-gear-detection .env
Edit the following environment variables in the .env file:
HOST_IP=<HOST_IP>                            # IP address of the server where DL Streamer Pipeline Server is running
MINIO_ACCESS_KEY=                            # MinIO service & client access key, e.g. intel1234
MINIO_SECRET_KEY=                            # MinIO service & client secret key, e.g. intel1234
MTX_WEBRTCICESERVERS2_0_USERNAME=<username>  # WebRTC credentials, e.g. intel1234
MTX_WEBRTCICESERVERS2_0_PASSWORD=<password>
SAMPLE_APP=worker-safety-gear-detection      # application directory
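Before running the setup script, it can help to confirm that the required variables are actually set. The following is a minimal sketch, assuming the variable names from the .env file above; the parser here is intentionally simplified (no quoting or export handling) and is not part of the sample application.

```python
# Minimal .env sanity check (simplified parser: no quoting/export handling).
REQUIRED = [
    "HOST_IP",
    "MINIO_ACCESS_KEY",
    "MINIO_SECRET_KEY",
    "MTX_WEBRTCICESERVERS2_0_USERNAME",
    "MTX_WEBRTCICESERVERS2_0_PASSWORD",
    "SAMPLE_APP",
]

def parse_env(text):
    """Parse KEY=VALUE lines, stripping inline '#' comments."""
    env = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def missing_vars(text):
    """Return the required keys that are absent or empty."""
    env = parse_env(text)
    return [k for k in REQUIRED if not env.get(k)]

# Illustrative content, not a real .env file.
sample = "HOST_IP=192.168.1.10\nSAMPLE_APP=worker-safety-gear-detection\n"
print(missing_vars(sample))
```

Running this against your real .env (e.g. `missing_vars(open(".env").read())`) should print an empty list before you proceed.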
Install the prerequisites. Run with sudo if needed.
./setup.sh
This script sets up the application prerequisites, downloads artifacts, and sets executable permissions for the scripts. Downloaded resource directories are automatically made available to the application via volume mounts in the Docker Compose file.
Deploy the Application#
Start the Docker application:
docker compose up -d
If you are running multiple instances of the app, start the services using ./run.sh up instead.
Fetch the list of pipelines loaded and available to launch:
./sample_list.sh
This lists the pipelines loaded in DL Streamer Pipeline Server.
Example Output:
# Example output for Worker Safety gear detection
Environment variables loaded from [WORKDIR]/manufacturing-ai-suite/industrial-edge-insights-vision/.env
Running sample app: worker-safety-gear-detection
Checking status of dlstreamer-pipeline-server...
Server reachable. HTTP Status Code: 200
Loaded pipelines:
[
    ...
    {
        "description": "DL Streamer Pipeline Server pipeline",
        "name": "user_defined_pipelines",
        "parameters": {
            "properties": {
                "detection-properties": {
                    "element": {
                        "format": "element-properties",
                        "name": "detection"
                    }
                }
            },
            "type": "object"
        },
        "type": "GStreamer",
        "version": "worker_safety_gear_detection"
    }
    ...
]
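The pipeline list returned by the server is plain JSON, so it is easy to post-process. Here is a hedged sketch: the response shape mirrors the example output above (trimmed to the fields shown), and nothing outside that example is guaranteed.

```python
import json

# Trimmed response shaped like the example output above.
response = json.loads("""
[
  {
    "description": "DL Streamer Pipeline Server pipeline",
    "name": "user_defined_pipelines",
    "type": "GStreamer",
    "version": "worker_safety_gear_detection"
  }
]
""")

# Pipelines are addressed as <name>/<version> in later REST calls.
pipeline_ids = [f"{p['name']}/{p['version']}" for p in response]
print(pipeline_ids)
```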
Start the sample application with a pipeline.
./sample_start.sh -p worker_safety_gear_detection
This command looks up the payload for the pipeline specified in the -p argument in the payload.json file and launches a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different available options.
IMPORTANT: Before you run the sample_start.sh script, make sure that jq is installed on your system. See the troubleshooting guide for more details.
Output:
# Example output for Worker Safety gear detection
Environment variables loaded from [WORKDIR]/manufacturing-ai-suite/industrial-edge-insights-vision/.env
Running sample app: worker-safety-gear-detection
Checking status of dlstreamer-pipeline-server...
Server reachable. HTTP Status Code: 200
Loading payload from [WORKDIR]/manufacturing-ai-suite/industrial-edge-insights-vision/apps/worker-safety-gear-detection/payload.json
Payload loaded successfully.
Starting pipeline: worker_safety_gear_detection
Launching pipeline: worker_safety_gear_detection
Extracting payload for pipeline: worker_safety_gear_detection
Found 1 payload(s) for pipeline: worker_safety_gear_detection
Payload for pipeline 'worker_safety_gear_detection' {"source":{"uri":"file:///home/pipeline-server/resources/videos/Safety_Full_Hat_and_Vest.avi","type":"uri"},"destination":{"frame":{"type":"webrtc","peer-id":"worker_safety"}},"parameters":{"detection-properties":{"model":"/home/pipeline-server/resources/models/worker-safety-gear-detection/deployment/Detection/model/model.xml","device":"CPU"}}}
Posting payload to REST server at https://<HOST_IP>/api/pipelines/user_defined_pipelines/worker_safety_gear_detection
Payload for pipeline 'worker_safety_gear_detection' posted successfully. Response: "784b87b45d1511f08ab0da88aa49c01e"
NOTE: This will start the pipeline. The inference stream can be viewed on WebRTC, in a browser, at the following URL:
https://<HOST_IP>/mediamtx/worker_safety/
If you are running multiple instances of the app, make sure to include the NGINX_HTTPS_PORT number in the URL for the app instance, i.e. replace <HOST_IP> with <HOST_IP>:<NGINX_HTTPS_PORT>.
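The payload that sample_start.sh posts is ordinary JSON, so you can also build and post it yourself. The following is a minimal sketch assuming the payload shape and REST endpoint shown in the example log above; the file paths and peer-id are taken from that example. Posting is kept in a separate function (not called here) so the block runs without a live server.

```python
import json
import urllib.request

def build_payload(video_uri, model_xml, peer_id, device="CPU"):
    """Assemble a pipeline request matching the example payload above."""
    return {
        "source": {"uri": video_uri, "type": "uri"},
        "destination": {"frame": {"type": "webrtc", "peer-id": peer_id}},
        "parameters": {
            "detection-properties": {"model": model_xml, "device": device}
        },
    }

def post_payload(host_ip, payload):
    """POST the payload to the endpoint seen in the example log.

    Not invoked in this sketch; requires a running server and valid TLS setup.
    """
    url = (f"https://{host_ip}/api/pipelines/"
           "user_defined_pipelines/worker_safety_gear_detection")
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # the server responds with an instance id

payload = build_payload(
    "file:///home/pipeline-server/resources/videos/Safety_Full_Hat_and_Vest.avi",
    "/home/pipeline-server/resources/models/worker-safety-gear-detection/"
    "deployment/Detection/model/model.xml",
    "worker_safety",
)
print(json.dumps(payload))
```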
Get the status of running pipeline instance(s).
./sample_status.sh
This command lists the statuses of pipeline instances launched during the lifetime of the sample application.
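Each entry in the returned list carries fields such as id, state, and avg_fps (see the example output below). A small sketch for picking out the instances that are still running; the sample data here is illustrative, shaped like that output:

```python
# Sample statuses shaped like the output of ./sample_status.sh (illustrative).
statuses = [
    {"id": "784b87b45d1511f08ab0da88aa49c01e", "state": "RUNNING", "avg_fps": 30.0},
    {"id": "1234567890abcdef1234567890abcdef", "state": "COMPLETED", "avg_fps": 29.5},
]

def running_ids(statuses):
    """Return the ids of instances whose state is RUNNING."""
    return [s["id"] for s in statuses if s["state"] == "RUNNING"]

print(running_ids(statuses))
```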
Output:
# Example output for Worker Safety gear detection
Environment variables loaded from [WORKDIR]/manufacturing-ai-suite/industrial-edge-insights-vision/.env
Running sample app: worker-safety-gear-detection
[
    {
        "avg_fps": 30.036955894826452,
        "elapsed_time": 3.096184492111206,
        "id": "784b87b45d1511f08ab0da88aa49c01e",
        "message": "",
        "start_time": 1752100724.3075056,
        "state": "RUNNING"
    }
]
Stop pipeline instances.
./sample_stop.sh
This command stops all instances that are currently in the RUNNING state and returns their last status.
Output:
# Example output for Worker Safety gear detection
No pipelines specified. Stopping all pipeline instances
Environment variables loaded from [WORKDIR]/manufacturing-ai-suite/industrial-edge-insights-vision/.env
Running sample app: worker-safety-gear-detection
Checking status of dlstreamer-pipeline-server...
Server reachable. HTTP Status Code: 200
Instance list fetched successfully. HTTP Status Code: 200
Found 1 running pipeline instances.
Stopping pipeline instance with ID: 784b87b45d1511f08ab0da88aa49c01e
Pipeline instance with ID '784b87b45d1511f08ab0da88aa49c01e' stopped successfully. Response:
{
    "avg_fps": 29.985911953641363,
    "elapsed_time": 37.45091152191162,
    "id": "784b87b45d1511f08ab0da88aa49c01e",
    "message": "",
    "start_time": 1752100724.3075056,
    "state": "RUNNING"
}
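The stop response returns the last known status of each instance, which is handy for a quick throughput summary. A sketch over data shaped like the example above (the summary helper is illustrative, not part of the sample scripts):

```python
# Final status shaped like the stop response in the example output.
final_status = {
    "avg_fps": 29.985911953641363,
    "elapsed_time": 37.45091152191162,
    "id": "784b87b45d1511f08ab0da88aa49c01e",
}

def summarize(status):
    """Approximate frames processed from average FPS and elapsed time."""
    frames = round(status["avg_fps"] * status["elapsed_time"])
    return f"{status['id'][:8]}: ~{frames} frames in {status['elapsed_time']:.1f}s"

print(summarize(final_status))
```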
To stop a specific instance, identify it with the --id argument. For example:
./sample_stop.sh --id 784b87b45d1511f08ab0da88aa49c01e
Stop the Docker application.
docker compose down -v
If you are running multiple instances of the app, stop the services using ./run.sh down instead.
This will bring down the services in the application and remove any volumes.