Parallelism combinators with explicit thread-pool creation and passing.
The most basic example of usage is:
main = withPool 2 $ \pool -> parallel_ pool [putStrLn "Echo", putStrLn " in parallel"]
Make sure that you compile with -threaded and supply +RTS -N2 -RTS to the generated Haskell executable, or you won't get any parallelism.
The Control.Concurrent.ParallelIO.Global module is implemented on top of this one by maintaining a shared global thread pool with one thread per capability.
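For comparison, a minimal sketch of the Global variant (assuming that module's parallel_ and stopGlobalPool exports): no pool is passed around explicitly, but the shared pool must still be stopped before the program exits.

```haskell
import Control.Concurrent.ParallelIO.Global (parallel_, stopGlobalPool)

main :: IO ()
main = do
    -- The shared global pool is used implicitly; no Pool argument needed.
    parallel_ [putStrLn "Echo", putStrLn " in parallel"]
    -- Stop the global pool once all users of it are done.
    stopGlobalPool
```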
- type WorkItem = IO Bool
- type WorkQueue = ConcurrentSet WorkItem
- data Pool
- withPool :: Int -> (Pool -> IO a) -> IO a
- startPool :: Int -> IO Pool
- stopPool :: Pool -> IO ()
- enqueueOnPool :: Pool -> WorkItem -> IO ()
- extraWorkerWhileBlocked :: Pool -> IO a -> IO a
- spawnPoolWorkerFor :: Pool -> IO ()
- killPoolWorkerFor :: Pool -> IO ()
- parallel_ :: Pool -> [IO a] -> IO ()
- parallel :: Pool -> [IO a] -> IO [a]
- parallelInterleaved :: Pool -> [IO a] -> IO [a]
Documentation
data Pool
The type of thread pools used by ParallelIO. The best way to construct one of these is using withPool.
startPool :: Int -> IO Pool
A slightly unsafe way to construct a pool: build a pool from the maximum number of threads you wish it to execute (including the main thread in the count).
If you use this variant, ensure that you insert a call to stopPool somewhere in your program after all users of that pool have finished.
A better alternative is to see if you can use the withPool variant instead.
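If you do need the explicit start/stop pair, a sketch of the exception-safe pattern (withPool behaves essentially like this bracket):

```haskell
import Control.Exception (bracket)
import Control.Concurrent.ParallelIO.Local (startPool, stopPool, parallel_)

main :: IO ()
main = bracket (startPool 2) stopPool $ \pool ->
    -- stopPool runs even if the body throws, so workers never leak.
    parallel_ pool [putStrLn "one", putStrLn "two"]
```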
stopPool :: Pool -> IO ()
Clean up a thread pool. If you never call this, nothing holds on to the work queue: the queue gets GC'd, the worker threads find themselves blocked indefinitely, and you get exceptions. Stopping the pool shuts the worker threads down cleanly, so neither problem arises.
Only call this after all users of the pool have completed, or your program may block indefinitely.
extraWorkerWhileBlocked :: Pool -> IO a -> IO a
Use this to wrap any potentially blocking IO action run from your worker threads: it temporarily spawns another worker thread to make up for the loss of the blocked one.
This is particularly important if the unblocking is dependent on worker threads actually doing work. If you have this situation, and you don't use this method to wrap blocking actions, then you may get a deadlock if all your worker threads get blocked on work that they assume will be done by other worker threads.
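A hypothetical instance of that situation: with a pool of size 2, one action blocks on an MVar that only another pool action can fill. Wrapping the blocking takeMVar in extraWorkerWhileBlocked keeps the number of unblocked workers constant, so the filling action can always run:

```haskell
import Control.Concurrent.MVar (newEmptyMVar, takeMVar, putMVar)
import Control.Concurrent.ParallelIO.Local (withPool, extraWorkerWhileBlocked, parallel_)

main :: IO ()
main = withPool 2 $ \pool -> do
    done <- newEmptyMVar
    parallel_ pool
        [ -- This action blocks until the other one fills the MVar;
          -- the wrapper spawns a replacement worker for the duration.
          extraWorkerWhileBlocked pool (takeMVar done) >>= putStrLn
        , putMVar done "unblocked"
        ]
```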
spawnPoolWorkerFor :: Pool -> IO ()
Internal method for adding extra unblocked threads to a pool if one is going to be temporarily blocked.
killPoolWorkerFor :: Pool -> IO ()
Internal method for removing threads from a pool after we become unblocked.
parallel_ :: Pool -> [IO a] -> IO ()
Run the list of computations in parallel.
Has the following properties:
- Never creates more or fewer unblocked threads than are specified to live in the pool. NB: this count includes the thread executing parallel_. This should minimize contention and hence pre-emption, while also preventing starvation.
- On return, all actions have been performed.
- The function returns in a timely manner as soon as all actions have been performed.
- The above properties hold even if parallel_ is used by an action which is itself being executed by parallel_.
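A sketch of the last property: each outer action calls parallel_ on the same pool, which the guarantees above say neither oversubscribes the pool nor deadlocks.

```haskell
import Control.Concurrent.ParallelIO.Local (withPool, parallel_)

main :: IO ()
main = withPool 2 $ \pool ->
    -- Nested parallel_ on the same pool is safe: the inner calls reuse
    -- the calling worker rather than spawning extra unblocked threads.
    parallel_ pool
        [ parallel_ pool [putStrLn "a1", putStrLn "a2"]
        , parallel_ pool [putStrLn "b1", putStrLn "b2"]
        ]
```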
parallel :: Pool -> [IO a] -> IO [a]
Run the list of computations in parallel, returning the results in the same order as the corresponding actions.
Has the following properties:
- Never creates more or fewer unblocked threads than are specified to live in the pool. NB: this count includes the thread executing parallel. This should minimize contention and hence pre-emption, while also preventing starvation.
- On return, all actions have been performed.
- The function returns in a timely manner as soon as all actions have been performed.
- The above properties hold even if parallel is used by an action which is itself being executed by parallel.
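A small sketch of the ordering guarantee: results line up with the input actions, not with completion order.

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.ParallelIO.Local (withPool, parallel)

main :: IO ()
main = do
    rs <- withPool 4 $ \pool ->
        -- Earlier actions sleep longer, so they finish last, yet their
        -- results still come back first in the list.
        parallel pool [threadDelay ((6 - n) * 1000) >> return n | n <- [1 .. 5 :: Int]]
    print rs  -- [1,2,3,4,5]
```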
parallelInterleaved :: Pool -> [IO a] -> IO [a]
Run the list of computations in parallel, returning the results in the approximate order of completion.
Has the following properties:
- Never creates more or fewer unblocked threads than are specified to live in the pool. NB: this count includes the thread executing parallelInterleaved. This should minimize contention and hence pre-emption, while also preventing starvation.
- On return, all actions have been performed.
- The results of running the actions appear in the list in an undefined order, but one that is likely to be very similar to the order of completion.
- The above properties hold even if parallelInterleaved is used by an action which is itself being executed by parallelInterleaved.
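Because the order is unspecified, consumers should not rely on it; a sketch that normalises the results by sorting before use:

```haskell
import Data.List (sort)
import Control.Concurrent.ParallelIO.Local (withPool, parallelInterleaved)

main :: IO ()
main = do
    rs <- withPool 4 $ \pool ->
        parallelInterleaved pool [return n | n <- [1 .. 5 :: Int]]
    -- The multiset of results is fixed even though their order is not.
    print (sort rs)  -- [1,2,3,4,5]
```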