cuda-0.9.0.1: FFI binding to the CUDA interface for programming NVIDIA GPUs

Copyright: [2009..2017] Trevor L. McDonell
License: BSD
Safe Haskell: None
Language: Haskell98

Foreign.CUDA.Driver.Context.Config


Description

Context configuration for the low-level driver interface


Context configuration

Resource limits

setLimit :: Limit -> Int -> IO ()

Set a resource limit of the current context; for example, the size of the per-thread call stack (supported on devices of compute capability 2.0 and higher).

Requires CUDA-3.1.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__CTX.html#group__CUDA__CTX_1g0651954dfb9788173e60a9af7201e65a
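As a usage sketch (not part of the original documentation): the following assumes that a context is already current on the calling thread and that the Limit type exported by this module has a StackSize constructor mirroring CU_LIMIT_STACK_SIZE.

import Foreign.CUDA.Driver.Context.Config ( Limit(..), setLimit )

-- Hypothetical example: raise the per-thread call stack of the current
-- context to 8 KiB.  'StackSize' is assumed to be a constructor of 'Limit';
-- a CUDA context must already be current on this thread.
enlargeStack :: IO ()
enlargeStack = setLimit StackSize 8192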

Cache

data Cache

Device cache configuration preference

getCache :: IO Cache

On devices where the L1 cache and shared memory use the same hardware resources, this function returns the preferred cache configuration for the current context.

Requires CUDA-3.2.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__CTX.html#group__CUDA__CTX_1g40b6b141698f76744dea6e39b9a25360

setCache :: Cache -> IO ()

On devices where the L1 cache and shared memory use the same hardware resources, this sets the preferred cache configuration for the current context. This is only a preference.

Any function configuration set via setCacheConfigFun will be preferred over this context-wide setting.

Requires CUDA-3.2.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__CTX.html#group__CUDA__CTX_1g54699acf7e2ef27279d013ca2095f4a3
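A minimal sketch combining setCache and getCache, assuming a current context and that Cache has a PreferL1 constructor (mirroring the C enumeration value CU_FUNC_CACHE_PREFER_L1).

import Foreign.CUDA.Driver.Context.Config ( Cache(..), getCache, setCache )

-- Hypothetical example: ask for a larger L1 cache at the expense of shared
-- memory, then read the context-wide preference back.  This is only a
-- preference; it has no effect on devices where L1 and shared memory are
-- separate resources.
preferL1 :: IO Cache
preferL1 = do
  setCache PreferL1       -- 'PreferL1' assumed to be a 'Cache' constructor
  getCache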

Shared memory

getSharedMem :: IO SharedMem

Return the current size of the shared memory banks in the current context. On devices with configurable shared memory banks, setSharedMem can be used to change the configuration, so that subsequent kernel launches will by default use the new bank size. On devices without configurable shared memory, this function returns the fixed bank size of the hardware.

Requires CUDA-4.2.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__CTX.html#group__CUDA__CTX_1g17153a1b8b8c756f7ab8505686a4ad74

setSharedMem :: SharedMem -> IO ()

On devices with configurable shared memory banks, this function will set the context's shared memory bank size that will be used by default for subsequent kernel launches.

Changing the shared memory configuration between launches may insert a device synchronisation.

Shared memory bank size does not affect shared memory usage or kernel occupancy, but may have major effects on performance. Larger bank sizes allow for greater potential bandwidth to shared memory, but change the kinds of accesses which result in bank conflicts.

Requires CUDA-4.2.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__CTX.html#group__CUDA__CTX_1g2574235fa643f8f251bf7bc28fac3692
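A minimal sketch, assuming a current context and that SharedMem has an EightByteBankSize constructor (mirroring CU_SHARED_MEM_CONFIG_EIGHT_BYTE_BANK_SIZE).

import Foreign.CUDA.Driver.Context.Config ( SharedMem(..), getSharedMem, setSharedMem )

-- Hypothetical example: request eight-byte shared memory banks, which can
-- reduce bank conflicts for double-precision data, then query the active
-- configuration.  On devices without configurable banks, getSharedMem
-- simply reports the fixed hardware bank size.
useWideBanks :: IO SharedMem
useWideBanks = do
  setSharedMem EightByteBankSize
  getSharedMem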

Streams

type StreamPriority = Int

Priority of an execution stream. Work submitted to a higher priority stream may preempt execution of work already executing in a lower priority stream. Lower numbers represent higher priorities.

getStreamPriorityRange :: IO (StreamPriority, StreamPriority)

Returns the numerical values that correspond to the greatest and least priority execution streams in the current context, respectively. Stream priorities follow the convention that lower numerical values correspond to higher priorities. The range of meaningful stream priorities is given by the inclusive range [greatestPriority, leastPriority].

Requires CUDA-5.5.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__CTX.html#group__CUDA__CTX_1g137920ab61a71be6ce67605b9f294091
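A minimal sketch, assuming a current context: clamp a requested priority into the range reported by getStreamPriorityRange before passing it to a stream-creation call that accepts a priority.

import Foreign.CUDA.Driver.Context.Config ( StreamPriority, getStreamPriorityRange )

-- Hypothetical example: force a requested priority into the meaningful
-- range [greatestPriority, leastPriority].  Note that greatestPriority is
-- the numerically smaller value, since lower numbers mean higher priority.
clampPriority :: StreamPriority -> IO StreamPriority
clampPriority p = do
  (greatest, least) <- getStreamPriorityRange
  return (max greatest (min least p))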