Jetson Inference
DNN Vision Library

DNN abstract base class that provides TensorRT functionality underneath.

Classes

class  tensorNet
 Abstract class for loading a tensor network with TensorRT.
 

Macros

#define DEFAULT_MAX_BATCH_SIZE   1
 Default maximum batch size.
 
#define LOG_TRT   "[TRT] "
 Prefix used for tagging printed log output from TensorRT.
 

Enumerations

enum  precisionType {
  TYPE_DISABLED = 0, TYPE_FASTEST, TYPE_FP32, TYPE_FP16,
  TYPE_INT8, NUM_PRECISIONS
}
 Enumeration for indicating the desired precision that the network should run in, if available in hardware.
 
enum  deviceType {
  DEVICE_GPU = 0, DEVICE_DLA, DEVICE_DLA_0 = DEVICE_DLA, DEVICE_DLA_1,
  NUM_DEVICES
}
 Enumeration for indicating the desired device that the network should run on, if available in hardware.
 
enum  modelType {
  MODEL_CUSTOM = 0, MODEL_CAFFE, MODEL_ONNX, MODEL_UFF,
  MODEL_ENGINE
}
 Enumeration indicating the format of the model that's imported into TensorRT (either Caffe, ONNX, or UFF).
 
enum  profilerQuery {
  PROFILER_PREPROCESS = 0, PROFILER_NETWORK, PROFILER_POSTPROCESS, PROFILER_VISUALIZE,
  PROFILER_TOTAL
}
 Profiling queries.
 
enum  profilerDevice { PROFILER_CPU = 0, PROFILER_CUDA }
 Profiler device.
 

Functions

const char * precisionTypeToStr (precisionType type)
 Stringize function that returns precisionType in text.
 
precisionType precisionTypeFromStr (const char *str)
 Parse the precision type from a string.
 
const char * deviceTypeToStr (deviceType type)
 Stringize function that returns deviceType in text.
 
deviceType deviceTypeFromStr (const char *str)
 Parse the device type from a string.
 
const char * modelTypeToStr (modelType type)
 Stringize function that returns modelType in text.
 
modelType modelTypeFromStr (const char *str)
 Parse the model format from a string.
 
modelType modelTypeFromPath (const char *path)
 Parse the model format from a file path.
 
const char * profilerQueryToStr (profilerQuery query)
 Stringize function that returns profilerQuery in text.
 

Detailed Description

DNN abstract base class that provides TensorRT functionality underneath.

These functions aren't typically accessed by end users unless they are implementing their own DNN class like imageNet or detectNet.

Macro Definition Documentation

◆ DEFAULT_MAX_BATCH_SIZE

#define DEFAULT_MAX_BATCH_SIZE   1

Default maximum batch size.
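A caller overriding this default typically still wants to keep the requested batch size within a valid range. The helper below is purely illustrative (not part of the library API); the real tensorNet takes a maxBatchSize argument when a network is loaded.

```cpp
#include <algorithm>

#define DEFAULT_MAX_BATCH_SIZE 1

// Hypothetical helper: clamp a requested batch size into [1, maxBatch].
// Only illustrates the bound implied by the macro; not a library function.
inline int clampBatchSize(int requested, int maxBatch = DEFAULT_MAX_BATCH_SIZE)
{
	return std::max(1, std::min(requested, maxBatch));
}
```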

◆ LOG_TRT

#define LOG_TRT   "[TRT] "

Prefix used for tagging printed log output from TensorRT.
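As a minimal sketch of how such a prefix macro is used, the helper below tags a message the way the library's log lines are tagged; plain string concatenation stands in for the actual logging calls, and `tagTRT` itself is an illustrative name, not a library function.

```cpp
#include <string>

#define LOG_TRT "[TRT] "

// Illustrative only: prepend the TensorRT log prefix to a message.
inline std::string tagTRT(const std::string& msg)
{
	return std::string(LOG_TRT) + msg;
}
```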

Enumeration Type Documentation

◆ deviceType

enum deviceType

Enumeration for indicating the desired device that the network should run on, if available in hardware.

Enumerator
DEVICE_GPU 

GPU (if multiple GPUs are present, a specific GPU can be selected with cudaSetDevice())

DEVICE_DLA 

Deep Learning Accelerator (DLA) Core 0 (only on Jetson Xavier)

DEVICE_DLA_0 

Deep Learning Accelerator (DLA) Core 0 (only on Jetson Xavier)

DEVICE_DLA_1 

Deep Learning Accelerator (DLA) Core 1 (only on Jetson Xavier)

NUM_DEVICES 

Number of device types defined.
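A deviceTypeFromStr()-style parser can be sketched as below. The accepted spellings ("gpu", "dla", "dla_0", "dla_1") and the GPU fallback are assumptions for illustration; consult the library source for the strings it actually recognizes.

```cpp
#include <cstring>

enum deviceType { DEVICE_GPU = 0, DEVICE_DLA, DEVICE_DLA_0 = DEVICE_DLA,
                  DEVICE_DLA_1, NUM_DEVICES };

// Sketch of parsing a device name; note DEVICE_DLA aliases DEVICE_DLA_0,
// so a bare "dla" maps to DLA core 0.
inline deviceType parseDeviceType(const char* str)
{
	if( !str )
		return DEVICE_GPU;

	if( strcmp(str, "dla") == 0 || strcmp(str, "dla_0") == 0 )
		return DEVICE_DLA_0;

	if( strcmp(str, "dla_1") == 0 )
		return DEVICE_DLA_1;

	return DEVICE_GPU;  // default device
}
```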

◆ modelType

enum modelType

Enumeration indicating the format of the model that's imported into TensorRT (either Caffe, ONNX, or UFF).

Enumerator
MODEL_CUSTOM 

Created directly with TensorRT API.

MODEL_CAFFE 

Caffe model (.caffemodel)

MODEL_ONNX 

ONNX.

MODEL_UFF 

UFF.

MODEL_ENGINE 

TensorRT engine/plan.
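modelTypeFromPath() infers the format from a file path; a sketch of extension-based detection is below. The extension mapping shown (.caffemodel, .onnx, .uff, .engine/.plan) is an assumption for illustration — see the library source for the authoritative list.

```cpp
#include <string>

enum modelType { MODEL_CUSTOM = 0, MODEL_CAFFE, MODEL_ONNX, MODEL_UFF,
                 MODEL_ENGINE };

// Sketch: map a file extension to a model format, defaulting to
// MODEL_CUSTOM when the extension is missing or unrecognized.
inline modelType modelTypeFromExtension(const std::string& path)
{
	const size_t dot = path.rfind('.');

	if( dot == std::string::npos )
		return MODEL_CUSTOM;

	const std::string ext = path.substr(dot + 1);

	if( ext == "caffemodel" ) return MODEL_CAFFE;
	if( ext == "onnx" )       return MODEL_ONNX;
	if( ext == "uff" )        return MODEL_UFF;
	if( ext == "engine" || ext == "plan" ) return MODEL_ENGINE;

	return MODEL_CUSTOM;
}
```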

◆ precisionType

enum precisionType

Enumeration for indicating the desired precision that the network should run in, if available in hardware.

Enumerator
TYPE_DISABLED 

Unknown, unspecified, or disabled type.

TYPE_FASTEST 

The fastest detected precision should be used (i.e. try INT8, then FP16, then FP32).

TYPE_FP32 

32-bit floating-point precision (FP32)

TYPE_FP16 

16-bit floating-point half precision (FP16)

TYPE_INT8 

8-bit integer precision (INT8)

NUM_PRECISIONS 

Number of precision types defined.
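The TYPE_FASTEST fallback order described above (INT8, then FP16, then FP32) can be sketched as follows. Here the set of hardware-supported precisions is supplied by the caller; the real library queries TensorRT for it, and `resolveFastest` is an illustrative name, not a library function.

```cpp
#include <set>

enum precisionType { TYPE_DISABLED = 0, TYPE_FASTEST, TYPE_FP32, TYPE_FP16,
                     TYPE_INT8, NUM_PRECISIONS };

// Sketch: pick the fastest available precision, preferring INT8,
// then FP16, and falling back to FP32 (always supported).
inline precisionType resolveFastest(const std::set<precisionType>& supported)
{
	if( supported.count(TYPE_INT8) ) return TYPE_INT8;
	if( supported.count(TYPE_FP16) ) return TYPE_FP16;
	return TYPE_FP32;
}
```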

◆ profilerDevice

enum profilerDevice

Profiler device.

Enumerator
PROFILER_CPU 

CPU walltime.

PROFILER_CUDA 

CUDA kernel time.
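To illustrate the PROFILER_CPU side of this distinction: host walltime can be measured with std::chrono, as sketched below. PROFILER_CUDA timing would instead use cudaEvent_t timestamps around the kernel launches (not shown, since it requires the CUDA runtime); `elapsedMs` is an illustrative helper, not part of the library.

```cpp
#include <chrono>

// Sketch: measure CPU walltime of a callable in milliseconds.
inline double elapsedMs(void (*work)())
{
	const auto t0 = std::chrono::steady_clock::now();
	work();
	const auto t1 = std::chrono::steady_clock::now();
	return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```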

◆ profilerQuery

enum profilerQuery

Profiling queries.

See also
tensorNet::GetProfilerTime()
Enumerator
PROFILER_PREPROCESS 
PROFILER_NETWORK 
PROFILER_POSTPROCESS 
PROFILER_VISUALIZE 
PROFILER_TOTAL 
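In real use, the per-stage timings behind these queries are read through tensorNet::GetProfilerTime(). The sketch below only illustrates the relationship among the enumerators — that PROFILER_TOTAL covers the stages preceding it is an assumption for illustration, and the array-based interface here is hypothetical, not the library's.

```cpp
enum profilerQuery { PROFILER_PREPROCESS = 0, PROFILER_NETWORK,
                     PROFILER_POSTPROCESS, PROFILER_VISUALIZE,
                     PROFILER_TOTAL };

// Sketch: sum per-stage timings (ms) into a total, iterating over the
// stage enumerators that precede PROFILER_TOTAL.
inline float totalTimeMs(const float stageMs[PROFILER_TOTAL])
{
	float total = 0.0f;

	for( int q = PROFILER_PREPROCESS; q < PROFILER_TOTAL; q++ )
		total += stageMs[q];

	return total;
}
```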

Function Documentation

◆ deviceTypeFromStr()

deviceType deviceTypeFromStr ( const char *  str)

Parse the device type from a string.

◆ deviceTypeToStr()

const char* deviceTypeToStr ( deviceType  type)

Stringize function that returns deviceType in text.

◆ modelTypeFromPath()

modelType modelTypeFromPath ( const char *  path)

Parse the model format from a file path.

◆ modelTypeFromStr()

modelType modelTypeFromStr ( const char *  str)

Parse the model format from a string.

◆ modelTypeToStr()

const char* modelTypeToStr ( modelType  type)

Stringize function that returns modelType in text.

◆ precisionTypeFromStr()

precisionType precisionTypeFromStr ( const char *  str)

Parse the precision type from a string.

◆ precisionTypeToStr()

const char* precisionTypeToStr ( precisionType  type)

Stringize function that returns precisionType in text.

◆ profilerQueryToStr()

const char* profilerQueryToStr ( profilerQuery  query)

Stringize function that returns profilerQuery in text.