cuda-0.7.5.0: FFI binding to the CUDA interface for programming NVIDIA GPUs

Copyright: [2009..2015] Trevor L. McDonell
License: BSD
Safe Haskell: None
Language: Haskell98

Foreign.CUDA.Driver.IPC.Marshal


Description

IPC memory management for the low-level driver interface.

Restricted to devices which support unified addressing on Linux operating systems.

Since CUDA-4.0.

Synopsis

data IPCDevicePtr a
export :: DevicePtr a -> IO (IPCDevicePtr a)
open :: IPCDevicePtr a -> [IPCFlag] -> IO (DevicePtr a)
close :: DevicePtr a -> IO ()

IPC memory management

data IPCDevicePtr a

A CUDA memory handle used for inter-process communication.

export :: DevicePtr a -> IO (IPCDevicePtr a)

Create an inter-process memory handle for an existing device memory allocation. The handle can then be sent to another process and made available to that process via open.

Requires CUDA-4.1.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__MEM.html#group__CUDA__MEM_1g6f1b5be767b275f016523b2ac49ebec1
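
A minimal sketch of the exporting side, assuming a CUDA context is already current in this process. mallocArray and pokeListArray come from Foreign.CUDA.Driver.Marshal; sendHandleToPeer is a hypothetical stand-in for whatever out-of-band transport (pipe, socket, shared file) carries the handle to the other process.

    import Foreign.CUDA.Driver.Marshal     (mallocArray, pokeListArray)
    import Foreign.CUDA.Driver.IPC.Marshal (IPCDevicePtr, export)

    -- Allocate a device buffer, fill it, and export an IPC handle for it.
    shareBuffer :: (IPCDevicePtr Float -> IO ()) -> IO ()
    shareBuffer sendHandleToPeer = do
      dptr <- mallocArray 1024
      pokeListArray (replicate 1024 (0 :: Float)) dptr
      hnd  <- export dptr                 -- create the IPC handle
      sendHandleToPeer hnd                -- deliver it out-of-band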

open :: IPCDevicePtr a -> [IPCFlag] -> IO (DevicePtr a)

Open an inter-process memory handle exported from another process, returning a device pointer usable in the current process.

Maps memory exported by another process with export into the current device address space. For contexts on different devices, open can attempt to enable peer access between them, as if the user had called add; this behaviour is controlled by the LazyEnablePeerAccess flag.

Each handle from a given device and context may only be opened by one context per device per other process. Memory returned by open must be freed via close.

Requires CUDA-4.1.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__MEM.html#group__CUDA__MEM_1ga8bd126fcff919a0c996b7640f197b79
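
A minimal sketch of the importing side, assuming a context is already current and the handle has arrived from the exporting process (recvHandleFromPeer is a hypothetical transport). peekListArray is from Foreign.CUDA.Driver.Marshal; the import location of IPCFlag and its LazyEnablePeerAccess constructor is assumed to be this module.

    import Foreign.CUDA.Driver.Marshal     (peekListArray)
    import Foreign.CUDA.Driver.IPC.Marshal (IPCDevicePtr, IPCFlag(..), open, close)

    -- Map the exporter's buffer into this process, read it back, then unmap.
    useShared :: IO (IPCDevicePtr Float) -> IO [Float]
    useShared recvHandleFromPeer = do
      hnd  <- recvHandleFromPeer
      dptr <- open hnd [LazyEnablePeerAccess]   -- map into this address space
      xs   <- peekListArray 1024 dptr           -- copy the data back to the host
      close dptr                                -- unmap when finished
      return xs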

close :: DevicePtr a -> IO ()

Close and unmap memory returned by open. The original allocation in the exporting process as well as imported mappings in other processes are unaffected.

Any resources used to enable peer access will be freed if this is the last mapping using them.

Requires CUDA-4.1.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__MEM.html#group__CUDA__MEM_1gd6f5d5bcf6376c6853b64635b0157b9e
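
Since every pointer returned by open must eventually be passed to close, a small bracketing helper can make that pairing exception-safe. This is only a sketch: withImported is not part of the library, and DevicePtr is assumed to be importable from Foreign.CUDA.Ptr.

    import Control.Exception               (finally)
    import Foreign.CUDA.Ptr                (DevicePtr)
    import Foreign.CUDA.Driver.IPC.Marshal (IPCDevicePtr, IPCFlag, open, close)

    -- Run an action with the imported pointer, closing it even if the action throws.
    withImported :: IPCDevicePtr a -> [IPCFlag] -> (DevicePtr a -> IO b) -> IO b
    withImported hnd flags action = do
      dptr <- open hnd flags
      action dptr `finally` close dptr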