# Migrate from NVIDIA TAO (Train, Adapt, Optimize)

The TAO Toolkit (short for Train, Adapt, Optimize) is a low-code AI model development framework for building, fine-tuning, and deploying deep learning models without writing complex training code from scratch. It is part of NVIDIA's DeepStream and TensorRT ecosystem and is designed mainly for computer vision and speech AI tasks.

Common model formats used by TAO at various stages of model development are:

| Stage | Format | Description |
|---|---|---|
| Training | `.tlt` | TAO internal model checkpoint |
| Export | `.etlt` | Encrypted TensorRT-ready model |
| Deployment | `.engine` | TensorRT optimized inference engine |
| Optional | `.onnx` | Open format for interoperability (optional export) |
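For scripting around a TAO workspace, the mapping above can be expressed as a small lookup. This is a minimal sketch; the stages and descriptions simply restate the table, and `describe_artifact` is a hypothetical helper, not part of the TAO tooling:

```python
from pathlib import Path

# TAO model formats by file extension, as listed in the table above.
TAO_FORMATS = {
    ".tlt": ("Training", "TAO internal model checkpoint"),
    ".etlt": ("Export", "Encrypted TensorRT-ready model"),
    ".engine": ("Deployment", "TensorRT optimized inference engine"),
    ".onnx": ("Optional", "Open format for interoperability"),
}

def describe_artifact(path: str) -> str:
    """Return the stage and description for a TAO artifact path."""
    suffix = Path(path).suffix.lower()
    stage, description = TAO_FORMATS.get(suffix, ("Unknown", "Unrecognized format"))
    return f"{stage}: {description}"
```

For example, `describe_artifact("model.etlt")` returns `"Export: Encrypted TensorRT-ready model"`.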

To migrate from TAO, export the model to ONNX, a format supported by Open Edge Platform components; `.tlt` checkpoints cannot be used directly. To do so, include the `--onnx_file` flag in the `tao export` command:

```shell
tao export <task> --onnx_file output_model.onnx
```

Some models cannot be exported to ONNX. This applies to all encrypted models (`.etlt` is a proprietary format intended for TensorRT deployment only), as well as to models that use NVIDIA's proprietary backbones.
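As a pre-migration check, the constraints above can be captured in a short script. This is a sketch based purely on file extensions; `can_use_with_open_edge_platform` is a hypothetical helper, not part of any TAO or Open Edge Platform API:

```python
from pathlib import Path

def can_use_with_open_edge_platform(model_path: str) -> tuple[bool, str]:
    """Check whether a TAO artifact is directly usable after migration.

    Only ONNX models are supported directly; .tlt checkpoints must first
    be exported with `tao export`, and .etlt/.engine artifacts are tied
    to TensorRT deployment.
    """
    suffix = Path(model_path).suffix.lower()
    if suffix == ".onnx":
        return True, "Ready to use: ONNX is supported directly."
    if suffix == ".tlt":
        return False, "Export first with `tao export <task> --onnx_file <out>.onnx`."
    if suffix in (".etlt", ".engine"):
        return False, "Not usable: this format targets TensorRT deployment only."
    return False, f"Unrecognized TAO artifact: {model_path}"

if __name__ == "__main__":
    for artifact in ("model.tlt", "model.etlt", "output_model.onnx"):
        usable, reason = can_use_with_open_edge_platform(artifact)
        print(f"{artifact}: {reason}")
```

Running the script prints the recommended action for each artifact in a workspace before migration begins.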