Jetson Inference
DNN Vision Library

Allocation of CUDA mapped zero-copy memory.

Functions

bool cudaAllocMapped (void **cpuPtr, void **gpuPtr, size_t size)
    Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

bool cudaAllocMapped (void **ptr, size_t size)
    Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

bool cudaAllocMapped (void **ptr, size_t width, size_t height, imageFormat format)
    Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

bool cudaAllocMapped (void **ptr, const int2 &dims, imageFormat format)
    Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

template<typename T >
bool cudaAllocMapped (T **ptr, size_t width, size_t height)
    Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

template<typename T >
bool cudaAllocMapped (T **ptr, const int2 &dims)
    Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

template<typename T >
bool cudaAllocMapped (T **ptr, size_t size)
    Allocate ZeroCopy mapped memory, shared between CUDA and CPU.
 

Detailed Description

Allocation of CUDA mapped zero-copy memory.

Function Documentation

◆ cudaAllocMapped() [1/7]

template<typename T >
bool cudaAllocMapped ( T** ptr, const int2& dims )    [inline]

Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

This is a templated version for allocating images from vector types such as uchar3, uchar4, float3, float4, etc. The overall size of the allocation will be calculated as dims.x * dims.y * sizeof(T).

Parameters
    [out]  ptr     Returned pointer to the shared CPU/GPU memory.
    [in]   dims    int2 vector where width=dims.x and height=dims.y.

Returns
    true if the allocation succeeded, false otherwise.
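
Example (a minimal sketch; the jetson-utils/cudaMappedMemory.h include path and freeing with cudaFreeHost() are assumptions for illustration):

    #include <jetson-utils/cudaMappedMemory.h>

    int main()
    {
        uchar3* img = NULL;                    // one uchar3 per RGB8 pixel
        const int2 dims = { 1280, 720 };       // .x = width, .y = height

        if( !cudaAllocMapped(&img, dims) )     // allocates dims.x * dims.y * sizeof(uchar3) bytes
            return 1;

        img[0].x = 255;      // CPU write; the same memory is visible to CUDA kernels

        cudaFreeHost(img);   // release the mapped allocation
        return 0;
    }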

◆ cudaAllocMapped() [2/7]

template<typename T >
bool cudaAllocMapped ( T** ptr, size_t size )    [inline]

Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

This is a templated version for allocating images from vector types such as uchar3, uchar4, float3, float4, etc. The overall size of the allocation is specified by the size parameter.

Parameters
    [out]  ptr     Returned pointer to the shared CPU/GPU memory.
    [in]   size    Size of the allocation, in bytes.

Returns
    true if the allocation succeeded, false otherwise.
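
Example (a minimal sketch; allocScratch is a hypothetical helper and the include path is an assumption):

    #include <jetson-utils/cudaMappedMemory.h>

    // allocate a shared CPU/GPU scratch buffer of numElements floats
    bool allocScratch( float** buffer, size_t numElements )
    {
        // note: the size argument of this overload is in bytes, so scale by sizeof(float)
        return cudaAllocMapped(buffer, numElements * sizeof(float));
    }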

◆ cudaAllocMapped() [3/7]

template<typename T >
bool cudaAllocMapped ( T** ptr, size_t width, size_t height )    [inline]

Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

This is a templated version for allocating images from vector types such as uchar3, uchar4, float3, float4, etc. The overall size of the allocation will be calculated as width * height * sizeof(T).

Parameters
    [out]  ptr      Returned pointer to the shared CPU/GPU memory.
    [in]   width    Width (in pixels) to allocate.
    [in]   height   Height (in pixels) to allocate.

Returns
    true if the allocation succeeded, false otherwise.
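
Example (a minimal sketch under the same assumed include path; cleanup via cudaFreeHost() is also an assumption):

    #include <jetson-utils/cudaMappedMemory.h>

    int main()
    {
        float4* img = NULL;   // e.g. a floating-point RGBA image, one float4 per pixel

        if( !cudaAllocMapped(&img, 1920, 1080) )   // size = 1920 * 1080 * sizeof(float4)
            return 1;

        cudaFreeHost(img);
        return 0;
    }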

◆ cudaAllocMapped() [4/7]

bool cudaAllocMapped ( void** cpuPtr, void** gpuPtr, size_t size )    [inline]

Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

Note
Although two pointers are returned, one for the CPU and one for the GPU, they both resolve to the same physical memory.
Parameters
    [out]  cpuPtr   Returned CPU pointer to the shared memory.
    [out]  gpuPtr   Returned GPU pointer to the shared memory.
    [in]   size     Size (in bytes) of the shared memory to allocate.

Returns
    true if the allocation succeeded, false otherwise.
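
Example (a minimal sketch; the include path and cleanup via cudaFreeHost() are assumptions for illustration):

    #include <jetson-utils/cudaMappedMemory.h>
    #include <cstring>

    int main()
    {
        void* cpuPtr = NULL;
        void* gpuPtr = NULL;

        if( !cudaAllocMapped(&cpuPtr, &gpuPtr, 4096) )
            return 1;

        // a host write through cpuPtr is visible through gpuPtr in CUDA kernels,
        // since both pointers alias the same physical memory
        memset(cpuPtr, 0, 4096);

        cudaFreeHost(cpuPtr);
        return 0;
    }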

◆ cudaAllocMapped() [5/7]

bool cudaAllocMapped ( void** ptr, const int2& dims, imageFormat format )    [inline]

Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

This overload is for allocating images from an imageFormat type and the image dimensions. The overall size of the allocation will be calculated with the imageFormatSize() function.

Parameters
    [out]  ptr      Returned pointer to the shared CPU/GPU memory.
    [in]   dims     int2 vector where width=dims.x and height=dims.y.
    [in]   format   Format of the image.

Returns
    true if the allocation succeeded, false otherwise.
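
Example (a minimal sketch; IMAGE_RGB8 is chosen only for illustration, and the include path is assumed):

    #include <jetson-utils/cudaMappedMemory.h>

    int main()
    {
        void* img = NULL;
        const int2 dims = { 640, 480 };   // .x = width, .y = height

        // the byte size is derived internally from the format and dimensions
        if( !cudaAllocMapped(&img, dims, IMAGE_RGB8) )
            return 1;

        cudaFreeHost(img);
        return 0;
    }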

◆ cudaAllocMapped() [6/7]

bool cudaAllocMapped ( void** ptr, size_t size )    [inline]

Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

Note
This overload of cudaAllocMapped() returns a single pointer and assumes that the CPU and GPU addresses will match (as is the case with any recent version of CUDA).
Parameters
    [out]  ptr     Returned pointer to the shared CPU/GPU memory.
    [in]   size    Size (in bytes) of the shared memory to allocate.

Returns
    true if the allocation succeeded, false otherwise.
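
Example (a minimal sketch; the include path and cleanup via cudaFreeHost() are assumptions for illustration):

    #include <jetson-utils/cudaMappedMemory.h>

    int main()
    {
        void* data = NULL;

        if( !cudaAllocMapped(&data, 1 << 20) )   // 1 MB shared between CPU and GPU
            return 1;

        // the single returned pointer is valid both on the CPU and inside CUDA kernels
        cudaFreeHost(data);
        return 0;
    }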

◆ cudaAllocMapped() [7/7]

bool cudaAllocMapped ( void** ptr, size_t width, size_t height, imageFormat format )    [inline]

Allocate ZeroCopy mapped memory, shared between CUDA and CPU.

This overload is for allocating images from an imageFormat type and the image dimensions. The overall size of the allocation will be calculated with the imageFormatSize() function.

Parameters
    [out]  ptr      Returned pointer to the shared CPU/GPU memory.
    [in]   width    Width (in pixels) to allocate.
    [in]   height   Height (in pixels) to allocate.
    [in]   format   Format of the image.

Returns
    true if the allocation succeeded, false otherwise.
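
Example (a minimal sketch; IMAGE_RGBA32F is chosen only for illustration, and the include path is assumed):

    #include <jetson-utils/cudaMappedMemory.h>

    int main()
    {
        void* img = NULL;

        // the byte size is computed internally from the format, width, and height
        if( !cudaAllocMapped(&img, 1280, 720, IMAGE_RGBA32F) )
            return 1;

        cudaFreeHost(img);
        return 0;
    }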