| |
- __builtin__.object
-
- tensorNet
-
- depthNet
- detectNet
- imageNet
- poseNet
- segNet
class depthNet(tensorNet) |
|
Mono depth estimation DNN - performs depth mapping on monocular images
Examples (jetson-inference/python/examples)
depthnet.py
__init__(...)
Loads a mono depth estimation model.
Parameters:
network (string) -- name of a built-in network to use,
see below for available options.
argv (strings) -- command line arguments passed to depthNet,
see below for available options.
depthNet arguments:
--network NETWORK pre-trained model to load, one of the following:
* monodepth-mobilenet
* monodepth-resnet18
* monodepth-resnet50
--model MODEL path to custom model to load (onnx)
--input_blob INPUT name of the input layer (default is 'input_0')
--output_blob OUTPUT name of the output layer (default is 'output_0')
--batch_size BATCH maximum batch size (default is 1)
--profile enable layer profiling in TensorRT |
|
- Method resolution order:
- depthNet
- tensorNet
- __builtin__.object
Methods defined here:
- GetDepthField(...)
- Return a cudaImage object of the raw depth field.
This is a single-channel float32 image that contains the depth estimates.
Parameters: (none)
Returns:
(cudaImage) -- single-channel float32 depth field
- GetDepthFieldHeight(...)
- Return the height of the depth field, in pixels.
Parameters: (none)
Returns:
(int) -- height of the depth field, in pixels
- GetDepthFieldWidth(...)
- Return the width of the depth field, in pixels.
Parameters: (none)
Returns:
(int) -- width of the depth field, in pixels
- GetNetworkName(...)
- Return the name of the built-in network used by the model.
Parameters: (none)
Returns:
(string) -- name of the network (e.g. 'MonoDepth-Mobilenet', 'MonoDepth-ResNet18')
or 'custom' if using a custom-loaded model
- Process(...)
- Compute the depth field from a monocular RGB/RGBA image.
The results can also be visualized if an output image is provided.
Parameters:
input (capsule) -- CUDA memory capsule (input image)
output (capsule) -- CUDA memory capsule (optional output image)
colormap (string) -- colormap name (optional)
filter_mode (string) -- filtering used in upscaling, 'point' or 'linear' (default is 'linear')
Returns: (none)
- Visualize(...)
- Visualize the raw depth field into a colorized RGB/RGBA depth map.
Parameters:
output (capsule) -- output CUDA memory capsule
colormap (string) -- colormap name (optional)
filter_mode (string) -- filtering used in upscaling, 'point' or 'linear' (default is 'linear')
Returns: (none)
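A minimal usage sketch for depthNet, following the bundled depthnet.py example. This assumes `jetson.utils` provides `loadImage`, `cudaAllocMapped`, and `cudaToNumpy` (as in recent versions of jetson-utils); the input path and the 'viridis' colormap name are illustrative, not prescribed by the API.

```python
import jetson.inference
import jetson.utils

# load the mono depth network (downloads the model on first use)
net = jetson.inference.depthNet("monodepth-mobilenet")

# load the input and allocate an output image for the colorized depth map
img = jetson.utils.loadImage("input.jpg")          # illustrative path
depth_vis = jetson.utils.cudaAllocMapped(width=img.width,
                                         height=img.height,
                                         format="rgb8")

# compute the depth field and colorize it into depth_vis
net.Process(img, depth_vis, "viridis", "linear")

# access the raw float32 depth estimates as a numpy array
field = jetson.utils.cudaToNumpy(net.GetDepthField())
print("depth field is {:d}x{:d}".format(net.GetDepthFieldWidth(),
                                        net.GetDepthFieldHeight()))
```

Note that `GetDepthField()` returns the raw, unscaled field (typically smaller than the input image), so index it by the field dimensions rather than the image dimensions.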
- __init__(...)
- x.__init__(...) initializes x; see help(type(x)) for signature
Static methods defined here:
- Usage(...)
- Return the command line parameters accepted by __init__()
Parameters: (none)
Returns:
(string) -- usage string documenting command-line options
Methods inherited from tensorNet:
- EnableDebug(...)
- Enable TensorRT debug messages and device synchronization
- EnableLayerProfiler(...)
- Enable the profiling of network layer execution times
- GetModelPath(...)
- Return the path to the network model file on disk
- GetModelType(...)
- Return the type of model format (caffe, ONNX, UFF, or custom)
- GetNetworkFPS(...)
- Return the runtime of the network (in frames per second)
- GetNetworkTime(...)
- Return the runtime of the network (in milliseconds)
- GetPrototxtPath(...)
- Return the path to the network prototxt file on disk
- PrintProfilerTimes(...)
- Print out performance timing info
Data and other attributes inherited from tensorNet:
- __new__ = <built-in method __new__ of type object>
- T.__new__(S, ...) -> a new object with type S, a subtype of T
|
class detectNet(tensorNet) |
|
Object Detection DNN - locates objects in an image
Examples (jetson-inference/python/examples)
detectnet-console.py
detectnet-camera.py
__init__(...)
Loads an object detection model.
Parameters:
network (string) -- name of a built-in network to use
see below for available options.
argv (strings) -- command line arguments passed to detectNet,
see below for available options.
threshold (float) -- minimum detection threshold.
default value is 0.5
detectNet arguments:
--network=NETWORK pre-trained model to load, one of the following:
* ssd-mobilenet-v1
* ssd-mobilenet-v2 (default)
* ssd-inception-v2
* pednet
* multiped
* facenet
* coco-airplane
* coco-bottle
* coco-chair
* coco-dog
--model=MODEL path to custom model to load (caffemodel, uff, or onnx)
--prototxt=PROTOTXT path to custom prototxt to load (for .caffemodel only)
--labels=LABELS path to text file containing the labels for each class
--input-blob=INPUT name of the input layer (default is 'data')
--output-cvg=COVERAGE name of the coverage output layer (default is 'coverage')
--output-bbox=BOXES name of the bounding output layer (default is 'bboxes')
--mean-pixel=PIXEL mean pixel value to subtract from input (default is 0.0)
--batch-size=BATCH maximum batch size (default is 1)
--threshold=THRESHOLD minimum threshold for detection (default is 0.5)
--alpha=ALPHA overlay alpha blending value, range 0-255 (default: 120)
--overlay=OVERLAY detection overlay flags (e.g. --overlay=box,labels,conf)
valid combinations are: 'box', 'labels', 'conf', 'none'
--profile enable layer profiling in TensorRT |
|
- Method resolution order:
- detectNet
- tensorNet
- __builtin__.object
Methods defined here:
- Detect(...)
- Detect objects in an RGBA image and return a list of detections.
Parameters:
image (capsule) -- CUDA memory capsule
width (int) -- width of the image (in pixels)
height (int) -- height of the image (in pixels)
overlay (str) -- combination of box,labels,none flags (default is 'box')
Returns:
[Detections] -- list containing the detected objects (see detectNet.Detection)
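A minimal detection sketch matching this version of the API, where `Detect()` takes explicit width/height and `jetson.utils.loadImageRGBA` returns an `(image, width, height)` tuple (as in the bundled detectnet-console.py); the image path is illustrative.

```python
import jetson.inference
import jetson.utils

# load the detection network with a 50% confidence threshold
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# loadImageRGBA returns (capsule, width, height) in this version of jetson.utils
img, width, height = jetson.utils.loadImageRGBA("image.jpg")  # illustrative path

# detect objects and draw the overlay into img
detections = net.Detect(img, width, height, overlay="box,labels,conf")

# each Detection exposes ClassID, Confidence, and bounding-box descriptors
for d in detections:
    print("{:s} ({:.2f}) at left={:.0f} top={:.0f} size={:.0f}x{:.0f}".format(
        net.GetClassDesc(d.ClassID), d.Confidence,
        d.Left, d.Top, d.Width, d.Height))
```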
- GetClassDesc(...)
- Return the class description for the given object class.
Parameters:
(int) -- index of the class, between [0, GetNumClasses()]
Returns:
(string) -- the text description of the object class
- GetClassSynset(...)
- Return the synset data category string for the given class.
The synset generally maps to the class training data folder.
Parameters:
(int) -- index of the class, between [0, GetNumClasses()]
Returns:
(string) -- the synset of the class, typically 9 characters long
- GetNumClasses(...)
- Return the number of object classes that this network model is able to detect.
Parameters: (none)
Returns:
(int) -- number of object classes that the model supports
- GetThreshold(...)
- Return the minimum detection threshold.
Parameters: (none)
Returns:
(float) -- the threshold for detection
- Overlay(...)
- Overlay a list of detections in an RGBA image.
Parameters:
image (capsule) -- CUDA memory capsule
[Detections] -- list containing the detected objects (see detectNet.Detection)
width (int) -- width of the image (in pixels)
height (int) -- height of the image (in pixels)
overlay (str) -- combination of box,labels,none flags (default is 'box')
Returns:
None
- SetOverlayAlpha(...)
- Set the alpha blending value used during overlay visualization for all classes
Parameters:
alpha (float) -- desired alpha value, between 0.0 and 255.0
Returns: (none)
- SetThreshold(...)
Set the minimum detection threshold.
Parameters:
(float) -- detection threshold
Returns: (none)
- __init__(...)
- x.__init__(...) initializes x; see help(type(x)) for signature
Static methods defined here:
- Usage(...)
- Return the command line parameters accepted by __init__()
Parameters: (none)
Returns:
(string) -- usage string documenting command-line options
Data and other attributes defined here:
- Detection = <type 'jetson.inference.detectNet.Detection'>
- Object Detection Result
----------------------------------------------------------------------
Data descriptors defined here:
Area
Area of bounding box
Bottom
Bottom bounding box coordinate
Center
Center (x,y) coordinate of bounding box
ClassID
Class index of the detected object
Confidence
Confidence value of the detected object
Height
Height of bounding box
Instance
Instance index of the detected object
Left
Left bounding box coordinate
Right
Right bounding box coordinate
Top
Top bounding box coordinate
Width
Width of bounding box
Methods inherited from tensorNet:
- EnableDebug(...)
- Enable TensorRT debug messages and device synchronization
- EnableLayerProfiler(...)
- Enable the profiling of network layer execution times
- GetModelPath(...)
- Return the path to the network model file on disk
- GetModelType(...)
- Return the type of model format (caffe, ONNX, UFF, or custom)
- GetNetworkFPS(...)
- Return the runtime of the network (in frames per second)
- GetNetworkTime(...)
- Return the runtime of the network (in milliseconds)
- GetPrototxtPath(...)
- Return the path to the network prototxt file on disk
- PrintProfilerTimes(...)
- Print out performance timing info
Data and other attributes inherited from tensorNet:
- __new__ = <built-in method __new__ of type object>
- T.__new__(S, ...) -> a new object with type S, a subtype of T
|
class imageNet(tensorNet) |
|
Image Recognition DNN - classifies an image
Examples (jetson-inference/python/examples)
my-recognition.py
imagenet-console.py
imagenet-camera.py
__init__(...)
Loads an image recognition model.
Parameters:
network (string) -- name of a built-in network to use,
see below for available options.
argv (strings) -- command line arguments passed to imageNet,
see below for available options.
imageNet arguments:
--network=NETWORK pre-trained model to load, one of the following:
* alexnet
* googlenet (default)
* googlenet-12
* resnet-18
* resnet-50
* resnet-101
* resnet-152
* vgg-16
* vgg-19
* inception-v4
--model=MODEL path to custom model to load (caffemodel, uff, or onnx)
--prototxt=PROTOTXT path to custom prototxt to load (for .caffemodel only)
--labels=LABELS path to text file containing the labels for each class
--input-blob=INPUT name of the input layer (default is 'data')
--output-blob=OUTPUT name of the output layer (default is 'prob')
--batch-size=BATCH maximum batch size (default is 1)
--profile enable layer profiling in TensorRT |
|
- Method resolution order:
- imageNet
- tensorNet
- __builtin__.object
Methods defined here:
- Classify(...)
- Classify an RGBA image and return the object's class and confidence.
Parameters:
image (capsule) -- CUDA memory capsule
width (int) -- width of the image (in pixels)
height (int) -- height of the image (in pixels)
Returns:
(int, float) -- tuple containing the object's class index and confidence
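A minimal classification sketch for this version of the API, where `Classify()` takes explicit width/height (as in the bundled imagenet-console.py); the image path is illustrative.

```python
import jetson.inference
import jetson.utils

# load the recognition network
net = jetson.inference.imageNet("googlenet")

# loadImageRGBA returns (capsule, width, height) in this version of jetson.utils
img, width, height = jetson.utils.loadImageRGBA("image.jpg")  # illustrative path

# classify the image; returns the top class index and its confidence
class_id, confidence = net.Classify(img, width, height)
print("classified as '{:s}' ({:.2f}%)".format(net.GetClassDesc(class_id),
                                              confidence * 100.0))
```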
- GetClassDesc(...)
- Return the class description for the given object class.
Parameters:
(int) -- index of the class, between [0, GetNumClasses()]
Returns:
(string) -- the text description of the object class
- GetClassSynset(...)
- Return the synset data category string for the given class.
The synset generally maps to the class training data folder.
Parameters:
(int) -- index of the class, between [0, GetNumClasses()]
Returns:
(string) -- the synset of the class, typically 9 characters long
- GetNetworkName(...)
- Return the name of the built-in network used by the model.
Parameters: (none)
Returns:
(string) -- name of the network (e.g. 'googlenet', 'alexnet')
or 'custom' if using a custom-loaded model
- GetNumClasses(...)
- Return the number of object classes that this network model is able to classify.
Parameters: (none)
Returns:
(int) -- number of object classes that the model supports
- __init__(...)
- x.__init__(...) initializes x; see help(type(x)) for signature
Static methods defined here:
- Usage(...)
- Return the command line parameters accepted by __init__()
Parameters: (none)
Returns:
(string) -- usage string documenting command-line options
Methods inherited from tensorNet:
- EnableDebug(...)
- Enable TensorRT debug messages and device synchronization
- EnableLayerProfiler(...)
- Enable the profiling of network layer execution times
- GetModelPath(...)
- Return the path to the network model file on disk
- GetModelType(...)
- Return the type of model format (caffe, ONNX, UFF, or custom)
- GetNetworkFPS(...)
- Return the runtime of the network (in frames per second)
- GetNetworkTime(...)
- Return the runtime of the network (in milliseconds)
- GetPrototxtPath(...)
- Return the path to the network prototxt file on disk
- PrintProfilerTimes(...)
- Print out performance timing info
Data and other attributes inherited from tensorNet:
- __new__ = <built-in method __new__ of type object>
- T.__new__(S, ...) -> a new object with type S, a subtype of T
|
class poseNet(tensorNet) |
|
Pose Estimation DNN - detects the poses of objects in an image
Examples (jetson-inference/python/examples)
posenet.py
__init__(...)
Loads a pose estimation model.
Parameters:
network (string) -- name of a built-in network to use
see below for available options.
argv (strings) -- command line arguments passed to poseNet,
see below for available options.
threshold (float) -- minimum detection threshold.
default value is 0.15
poseNet arguments:
--network=NETWORK pre-trained model to load, one of the following:
* resnet18-body (default)
* resnet18-hand
* densenet121-body
--model=MODEL path to custom model to load (caffemodel, uff, or onnx)
--prototxt=PROTOTXT path to custom prototxt to load (for .caffemodel only)
--labels=LABELS path to text file containing the labels for each class
--input-blob=INPUT name of the input layer (default is 'input')
--output-cvg=COVERAGE name of the coverage output layer (default is 'cmap')
--output-bbox=BOXES name of the bounding output layer (default is 'paf')
--mean-pixel=PIXEL mean pixel value to subtract from input (default is 0.0)
--batch-size=BATCH maximum batch size (default is 1)
--threshold=THRESHOLD minimum threshold for detection (default is 0.5)
--overlay=OVERLAY detection overlay flags (e.g. --overlay=links,keypoints)
valid combinations are: 'box', 'links', 'keypoints', 'none'
--keypoint-scale=X radius scale for keypoints, relative to image (default: 0.0052)
--link-scale=X line width scale for links, relative to image (default: 0.0013)
--profile enable layer profiling in TensorRT |
|
- Method resolution order:
- poseNet
- tensorNet
- __builtin__.object
Methods defined here:
- FindKeypointID(...)
- Return the keypoint ID for the given keypoint name.
Parameters:
(str) -- name of the keypoint
Returns:
(int) -- the ID of the keypoint
- GetKeypointName(...)
- Return the keypoint name for the given keypoint ID.
Parameters:
(int) -- index of the keypoint, between [0, GetNumKeypoints()]
Returns:
(string) -- the text description of the keypoint
- GetKeypointScale(...)
- Get the scale used to calculate the radius of keypoints based on image dimensions.
Parameters: (none)
Returns:
(float) -- the scale used to calculate the radius of keypoints based on image dimensions
- GetLinkScale(...)
- Get the scale used to calculate the width of link lines based on image dimensions.
Parameters: (none)
Returns:
(float) -- the scale used to calculate the width of link lines based on image dimensions
- GetNumKeypoints(...)
- Return the number of keypoints in the model's pose topology.
Parameters: (none)
Returns:
(int) -- number of keypoints in the model's pose topology
- GetThreshold(...)
- Return the minimum detection threshold.
Parameters: (none)
Returns:
(float) -- the threshold for detection
- Overlay(...)
- Overlay a list of object poses onto an image.
Parameters:
input (capsule) -- input image (CUDA memory capsule)
[ObjectPoses] -- list containing the detected object poses (see poseNet.ObjectPose)
width (int) -- width of the image (in pixels)
height (int) -- height of the image (in pixels)
overlay (str) -- combination of box,labels,none flags (default is 'box')
output (capsule) -- output image (CUDA memory capsule)
Returns:
None
- Process(...)
Perform pose estimation on the given image, returning the object poses, and overlay the results.
Parameters:
image (capsule) -- CUDA memory capsule
width (int) -- width of the image (in pixels)
height (int) -- height of the image (in pixels)
overlay (str) -- combination of box,labels,none flags (default is 'box')
Returns:
[ObjectPoses] -- list containing the detected object poses (see poseNet.ObjectPose)
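A minimal pose estimation sketch, following the bundled posenet.py example. This assumes the cudaImage-based form of `Process()` that takes just the image and overlay flags (some versions, as listed above, may also take explicit width/height); the image path is illustrative, and the `Keypoint` fields `ID`, `x`, `y` are as exposed by `poseNet.ObjectPose.Keypoint`.

```python
import jetson.inference
import jetson.utils

# load the body pose network with the default threshold
net = jetson.inference.poseNet("resnet18-body", threshold=0.15)

img = jetson.utils.loadImage("people.jpg")   # illustrative path

# estimate poses and draw the skeleton overlay into img
poses = net.Process(img, overlay="links,keypoints")

for pose in poses:
    print("object {:d} -- {:d} keypoints".format(pose.ID, len(pose.Keypoints)))
    for k in pose.Keypoints:
        # map each keypoint ID back to its topology name (e.g. 'nose')
        print("  {:s} at ({:.1f}, {:.1f})".format(net.GetKeypointName(k.ID),
                                                  k.x, k.y))
```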
- SetKeypointAlpha(...)
- Set the alpha blending value used during overlay visualization for one or all keypoint types
Parameters:
alpha (float) -- desired alpha value, between 0.0 and 255.0
keypoint (int) -- optional index of the keypoint to set the alpha (otherwise will apply to all keypoints)
Returns: (none)
- SetKeypointScale(...)
- Set the scale used to calculate the radius of keypoint circles.
This scale will be multiplied by the largest image dimension.
Parameters:
scale (float) -- desired scaling factor
Returns: (none)
- SetLinkScale(...)
- Set the scale used to calculate the width of link lines.
This scale will be multiplied by the largest image dimension.
Parameters:
scale (float) -- desired scaling factor
Returns: (none)
- SetThreshold(...)
Set the minimum detection threshold.
Parameters:
(float) -- detection threshold
Returns: (none)
- __init__(...)
- x.__init__(...) initializes x; see help(type(x)) for signature
Static methods defined here:
- Usage(...)
- Return the command line parameters accepted by __init__()
Parameters: (none)
Returns:
(string) -- usage string documenting command-line options
Data and other attributes defined here:
- ObjectPose = <type 'jetson.inference.poseNet.ObjectPose'>
- Object Pose Estimation Result
----------------------------------------------------------------------
Data descriptors defined here:
Keypoints
List of poseNet.ObjectPose.Keypoint objects
Links
List of (a,b) tuples, where a & b are indexes into the Keypoints list
ID
Object ID from the image frame, starting at 0
Left
Left bounding box coordinate
Right
Right bounding box coordinate
Top
Top bounding box coordinate
Bottom
Bottom bounding box coordinate
Methods inherited from tensorNet:
- EnableDebug(...)
- Enable TensorRT debug messages and device synchronization
- EnableLayerProfiler(...)
- Enable the profiling of network layer execution times
- GetModelPath(...)
- Return the path to the network model file on disk
- GetModelType(...)
- Return the type of model format (caffe, ONNX, UFF, or custom)
- GetNetworkFPS(...)
- Return the runtime of the network (in frames per second)
- GetNetworkTime(...)
- Return the runtime of the network (in milliseconds)
- GetPrototxtPath(...)
- Return the path to the network prototxt file on disk
- PrintProfilerTimes(...)
- Print out performance timing info
Data and other attributes inherited from tensorNet:
- __new__ = <built-in method __new__ of type object>
- T.__new__(S, ...) -> a new object with type S, a subtype of T
|
class segNet(tensorNet) |
|
Semantic Segmentation DNN - segments an image with per-pixel classification
Examples (jetson-inference/python/examples)
segnet-console.py
segnet-camera.py
__init__(...)
Loads a semantic segmentation model.
Parameters:
network (string) -- name of a built-in network to use,
see below for available options.
argv (strings) -- command line arguments passed to segNet,
see below for available options.
segNet arguments:
--network=NETWORK pre-trained model to load, one of the following:
* fcn-resnet18-cityscapes-512x256
* fcn-resnet18-cityscapes-1024x512
* fcn-resnet18-cityscapes-2048x1024
* fcn-resnet18-deepscene-576x320
* fcn-resnet18-deepscene-864x480
* fcn-resnet18-mhp-512x320
* fcn-resnet18-mhp-640x360
* fcn-resnet18-voc-320x320 (default)
* fcn-resnet18-voc-512x320
* fcn-resnet18-sun-512x400
* fcn-resnet18-sun-640x512
--model=MODEL path to custom model to load (caffemodel, uff, or onnx)
--prototxt=PROTOTXT path to custom prototxt to load (for .caffemodel only)
--labels=LABELS path to text file containing the labels for each class
--colors=COLORS path to text file containing the colors for each class
--input-blob=INPUT name of the input layer (default: 'data')
--output-blob=OUTPUT name of the output layer (default: 'score_fr_21classes')
--batch-size=BATCH maximum batch size (default is 1)
--alpha=ALPHA overlay alpha blending value, range 0-255 (default: 150)
--visualize=VISUAL visualization flags (e.g. --visualize=overlay,mask)
valid combinations are: 'overlay', 'mask'
--profile enable layer profiling in TensorRT |
|
- Method resolution order:
- segNet
- tensorNet
- __builtin__.object
Methods defined here:
- GetClassColor(...)
- Return the class color for the given object class.
Parameters:
(int) -- index of the class, between [0, GetNumClasses()]
Returns:
(r,g,b,a) tuple -- tuple containing the RGBA color of the object class
- GetClassDesc(...)
- Return the class description for the given object class.
Parameters:
(int) -- index of the class, between [0, GetNumClasses()]
Returns:
(string) -- the text description of the object class
- GetGridHeight(...)
- Return the number of rows in the segmentation mask classification grid.
These are the raw dimensions; they are typically smaller than the image size.
In segNet.Mask() the classification grid gets upscaled to match the image size,
but this function returns the original unscaled size of the grid.
Parameters: (none)
Returns:
(int) -- height of the segmentation mask's classification grid
- GetGridSize(...)
- Return a (width, height) tuple with the dimensions of the segmentation mask classification grid.
These are the raw dimensions; they are typically smaller than the image size.
In segNet.Mask() the classification grid gets upscaled to match the image size,
but this function returns the original unscaled size of the grid.
Parameters: (none)
Returns:
(int, int) -- tuple containing the width and height of the segmentation mask's classification grid
- GetGridWidth(...)
- Return the number of columns in the segmentation mask classification grid.
These are the raw dimensions; they are typically smaller than the image size.
In segNet.Mask() the classification grid gets upscaled to match the image size,
but this function returns the original unscaled size of the grid.
Parameters: (none)
Returns:
(int) -- width of the segmentation mask's classification grid
- GetNetworkName(...)
- Return the name of the built-in network used by the model.
Parameters: (none)
Returns:
(string) -- name of the network (e.g. 'FCN_ResNet18', 'FCN_Alexnet')
or 'custom' if using a custom-loaded model
- GetNumClasses(...)
- Return the number of object classes that this network model is able to classify.
Parameters: (none)
Returns:
(int) -- number of object classes that the model supports
- Mask(...)
- Produce a colorized RGBA segmentation mask of the output.
Parameters:
image (capsule) -- output CUDA memory capsule
width (int) -- width of the image (in pixels)
height (int) -- height of the image (in pixels)
filter_mode (string) -- optional string indicating the filter mode, 'point' or 'linear' (default: 'linear')
Returns: (none)
- Overlay(...)
- Produce the segmentation overlay alpha blended on top of the original image.
Parameters:
image (capsule) -- output CUDA memory capsule
width (int) -- width of the image (in pixels)
height (int) -- height of the image (in pixels)
filter_mode (string) -- optional string indicating the filter mode, 'point' or 'linear' (default: 'linear')
Returns: (none)
- Process(...)
- Perform the initial inferencing processing of the segmentation.
The results can then be visualized using the Overlay() and Mask() functions.
Parameters:
image (capsule) -- CUDA memory capsule
width (int) -- width of the image (in pixels)
height (int) -- height of the image (in pixels)
ignore_class (string) -- optional label name of class to ignore in the classification (default: 'void')
Returns: (none)
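A minimal segmentation sketch, following the bundled segnet-console.py for this version of the API: run `Process()` first, then visualize with `Overlay()` into a separately allocated output image. The image paths are illustrative, and the output allocation assumes the float4 RGBA layout used by the old `cudaAllocMapped(size_in_bytes)` form of jetson.utils.

```python
import ctypes
import jetson.inference
import jetson.utils

# load the segmentation network
net = jetson.inference.segNet("fcn-resnet18-voc-320x320")

# loadImageRGBA returns (capsule, width, height) in this version of jetson.utils
img, width, height = jetson.utils.loadImageRGBA("street.jpg")  # illustrative path

# allocate an output image (float4 RGBA, hence 4 floats per pixel)
img_overlay = jetson.utils.cudaAllocMapped(width * height * 4 *
                                           ctypes.sizeof(ctypes.c_float))

# run inference first (ignore_class defaults to 'void')...
net.Process(img, width, height)

# ...then visualize the results into the output image and save it
net.Overlay(img_overlay, width, height, "linear")
jetson.utils.saveImageRGBA("overlay.jpg", img_overlay, width, height)
```

`Mask()` can be called on another output image in the same way to produce the colorized class mask instead of the blended overlay.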
- SetOverlayAlpha(...)
- Set the alpha blending value used during overlay visualization for all classes
Parameters:
alpha (float) -- desired alpha value, between 0.0 and 255.0
explicit_exempt (optional, bool) -- if True, the global alpha doesn't apply to classes that have an alpha value explicitly set in the colors file (default: True)
Returns: (none)
- __init__(...)
- x.__init__(...) initializes x; see help(type(x)) for signature
Static methods defined here:
- Usage(...)
- Return the command line parameters accepted by __init__()
Parameters: (none)
Returns:
(string) -- usage string documenting command-line options
Methods inherited from tensorNet:
- EnableDebug(...)
- Enable TensorRT debug messages and device synchronization
- EnableLayerProfiler(...)
- Enable the profiling of network layer execution times
- GetModelPath(...)
- Return the path to the network model file on disk
- GetModelType(...)
- Return the type of model format (caffe, ONNX, UFF, or custom)
- GetNetworkFPS(...)
- Return the runtime of the network (in frames per second)
- GetNetworkTime(...)
- Return the runtime of the network (in milliseconds)
- GetPrototxtPath(...)
- Return the path to the network prototxt file on disk
- PrintProfilerTimes(...)
- Print out performance timing info
Data and other attributes inherited from tensorNet:
- __new__ = <built-in method __new__ of type object>
- T.__new__(S, ...) -> a new object with type S, a subtype of T
|
class tensorNet(__builtin__.object) |
|
TensorRT DNN Base Object |
|
Methods defined here:
- EnableDebug(...)
- Enable TensorRT debug messages and device synchronization
- EnableLayerProfiler(...)
- Enable the profiling of network layer execution times
- GetModelPath(...)
- Return the path to the network model file on disk
- GetModelType(...)
- Return the type of model format (caffe, ONNX, UFF, or custom)
- GetNetworkFPS(...)
- Return the runtime of the network (in frames per second)
- GetNetworkTime(...)
- Return the runtime of the network (in milliseconds)
- GetPrototxtPath(...)
- Return the path to the network prototxt file on disk
- PrintProfilerTimes(...)
- Print out performance timing info
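Since every network class derives from tensorNet, the timing and model queries above work on any of them. A short sketch using imageNet as the concrete class (the image path is illustrative; timing is only meaningful after at least one inference has run):

```python
import jetson.inference
import jetson.utils

net = jetson.inference.imageNet("googlenet")

img, width, height = jetson.utils.loadImageRGBA("image.jpg")  # illustrative path
net.Classify(img, width, height)   # run one inference so timings are populated

print("model:  {:s} ({:s})".format(net.GetModelPath(), net.GetModelType()))
print("timing: {:.1f} ms ({:.1f} FPS)".format(net.GetNetworkTime(),
                                              net.GetNetworkFPS()))

# per-layer breakdown (more detail if EnableLayerProfiler() was called first)
net.PrintProfilerTimes()
```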
Data and other attributes defined here:
- __new__ = <built-in method __new__ of type object>
- T.__new__(S, ...) -> a new object with type S, a subtype of T
| |