-- Hoogle documentation, generated by Haddock -- See Hoogle, http://www.haskell.org/hoogle/ -- | Combinators for executing IO actions in parallel on a thread pool. -- -- This package provides combinators for sequencing IO actions onto a -- thread pool. The thread pool is guaranteed to contain no more -- unblocked threads than a user-specified upper limit, thus minimizing -- contention. -- -- Furthermore, the parallel combinators can be used reentrantly - your -- parallel actions can spawn more parallel actions - without violating -- this property of the thread pool. -- -- The package is inspired by the thread -- http://thread.gmane.org/gmane.comp.lang.haskell.cafe/56499/focus=56521. -- Thanks to Neil Mitchell and Bulat Ziganshin for some of the code this -- package is based on. @package parallel-io @version 0.3.4 -- | Parallelism combinators with explicit thread-pool creation and -- passing. -- -- The most basic example of usage is: -- --
--   main = withPool 2 $ \pool -> parallel_ pool [putStrLn "Echo", putStrLn " in parallel"]
--   
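-- The pool behaviour described here can also be sketched with nothing but
-- base's Control.Concurrent: a fixed number of forked workers drain a
-- shared job queue, so no more than the stated number of unblocked
-- threads run at once. This is only an illustrative model under stated
-- assumptions, not this package's implementation; parallelSketch_ and its
-- queue layout are invented names for the sketch.

```haskell
import Control.Concurrent
import Control.Monad (forM_, replicateM_, when)

-- Illustrative sketch only: run the given IO actions using at most n
-- worker threads, returning once all of them have completed. Unlike the
-- real parallel_, this does not count the calling thread towards the
-- pool size, is not reentrant, and swallows exceptions.
parallelSketch_ :: Int -> [IO ()] -> IO ()
parallelSketch_ n jobs = do
    jobQueue  <- newChan
    doneCount <- newMVar 0
    allDone   <- newEmptyMVar
    let total = length jobs
    when (total == 0) (putMVar allDone ())
    forM_ jobs (writeChan jobQueue . Just)
    replicateM_ n (writeChan jobQueue Nothing)   -- one stop token per worker
    replicateM_ n $ forkIO $
        let loop = do
              mjob <- readChan jobQueue
              case mjob of
                Nothing  -> return ()            -- stop token: worker exits
                Just job -> do
                  job
                  done <- modifyMVar doneCount (\k -> return (k + 1, k + 1))
                  when (done == total) (putMVar allDone ())
                  loop
        in loop
    takeMVar allDone                             -- block until every job ran

main :: IO ()
main = do
    acc <- newMVar (0 :: Int)
    parallelSketch_ 2 [ modifyMVar_ acc (return . (+ i)) | i <- [1 .. 10] ]
    readMVar acc >>= print                       -- prints 55
```

-- Compare with withPool 2 $ \pool -> parallel_ pool jobs above: the
-- library version additionally keeps the pool reusable and reentrant.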
-- -- Make sure that you compile with -threaded and supply +RTS -- -N2 -RTS to the generated Haskell executable, or you won't get -- any parallelism. -- -- If you plan to allow your worker items to block, then you should read -- the documentation for extraWorkerWhileBlocked. -- -- The Control.Concurrent.ParallelIO.Global module is implemented -- on top of this one by maintaining a shared global thread pool with one -- thread per capability. module Control.Concurrent.ParallelIO.Local -- | Run the list of computations in parallel. -- -- Has the following properties: -- --
--
--   1. Never creates more or fewer unblocked threads than are specified
--      to live in the pool. NB: this count includes the thread executing
--      parallel_. This should minimize contention and hence pre-emption,
--      while also preventing starvation.
--
--   2. On return, all actions have been performed.
--
--   3. The function returns in a timely manner, as soon as all actions
--      have been performed.
--
--   4. The above properties hold even if parallel_ is used by an action
--      which is itself being executed by one of the parallel
--      combinators.
--
--   5. If any of the IO actions throws an exception, this does not
--      prevent any of the other actions from being performed.
--
--   6. If any of the IO actions throws an exception, the exception
--      thrown by the first failing action in the input list will be
--      thrown by parallel_. Importantly, at the time the exception is
--      thrown there is no guarantee that the other parallel actions have
--      completed.
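-- The first-failing-exception behaviour above can be sketched with base
-- primitives alone: each action reports its outcome on a channel, and
-- the caller rethrows the first exception it sees rather than waiting
-- for the remaining actions. firstExceptionWins is an invented name for
-- this sketch; unlike parallel_, it does not bound the number of
-- threads.

```haskell
import Control.Concurrent
import Control.Exception
import Control.Monad (forM_)

-- Illustrative sketch: fork every action, collect outcomes on a channel,
-- and rethrow the first exception as soon as it arrives, while other
-- actions may still be running.
firstExceptionWins :: [IO ()] -> IO ()
firstExceptionWins acts = do
    results <- newChan
    forM_ acts $ \act ->
        forkIO (try act >>= writeChan results)
    let collect 0 = return ()
        collect n = do
            r <- readChan results
            case r of
              Left e   -> throwIO (e :: SomeException)
              Right () -> collect (n - 1)
    collect (length acts)

main :: IO ()
main = do
    r <- try $ firstExceptionWins
           [ threadDelay 100000            -- still running when we rethrow
           , throwIO (userError "boom")    -- the first (and only) failure
           ]
    case r of
      Left e  -> putStrLn ("caught: " ++ show (e :: SomeException))
      Right _ -> putStrLn "no exception"
```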
--
-- The motivation for this choice is that waiting for all the threads to
-- either return or throw before throwing the first exception will
-- almost always cause GHC to show the "Blocked indefinitely in MVar
-- operation" exception rather than the exception you care about.
--
-- The reason for this behaviour can be seen by considering this machine
-- state:
--
--
--   1. The main thread has used the parallel combinators to spawn two
--      threads, thread 1 and thread 2. It is blocked on both of them,
--      waiting for them to return either a result or an exception via
--      an MVar.
--
--   2. Thread 1 and thread 2 share another (empty) MVar, the "wait
--      handle". Thread 2 is waiting on the handle, while thread 1 will
--      eventually put into the handle.
--
-- Consider what happens when thread 1 is buggy and throws an exception
-- before putting into the handle. Now thread 2 is blocked indefinitely,
-- and so the main thread is also blocked indefinitely waiting for the
-- result of thread 2. GHC has no choice but to throw the uninformative
-- exception. However, what we really wanted to see was the original
-- exception thrown in thread 1!
--
-- By having the main thread abandon its wait for the results of the
-- spawned threads as soon as the first exception comes in, we give this
-- exception a chance to actually be displayed.
parallel_ :: Pool -> [IO a] -> IO ()

-- | As parallel_, but exceptions thrown by subcomputations are returned
-- in a data structure rather than being rethrown.
--
-- As a result, property 6 of parallel_ is not preserved, and therefore
-- if your IO actions can depend on each other and may throw exceptions,
-- your program may die with "blocked indefinitely" exceptions rather
-- than informative messages.
parallelE_ :: Pool -> [IO a] -> IO [Maybe SomeException]

-- | Run the list of computations in parallel, returning the results in
-- the same order as the corresponding actions.
--
-- Has the following properties:
--
--
--   1. Never creates more or fewer unblocked threads than are specified
--      to live in the pool. NB: this count includes the thread executing
--      parallel. This should minimize contention and hence pre-emption,
--      while also preventing starvation.
--
--   2. On return, all actions have been performed.
--
--   3. The function returns in a timely manner, as soon as all actions
--      have been performed.
--
--   4. The above properties hold even if parallel is used by an action
--      which is itself being executed by one of the parallel
--      combinators.
--
--   5. If any of the IO actions throws an exception, this does not
--      prevent any of the other actions from being performed.
--
--   6. If any of the IO actions throws an exception, the exception
--      thrown by the first failing action in the input list will be
--      thrown by parallel. Importantly, at the time the exception is
--      thrown there is no guarantee that the other parallel actions have
--      completed.
--
-- The motivation for this choice is that waiting for all the threads to
-- either return or throw before throwing the first exception will
-- almost always cause GHC to show the "Blocked indefinitely in MVar
-- operation" exception rather than the exception you care about.
--
-- The reason for this behaviour can be seen by considering this machine
-- state:
--
--
--   1. The main thread has used the parallel combinators to spawn two
--      threads, thread 1 and thread 2. It is blocked on both of them,
--      waiting for them to return either a result or an exception via
--      an MVar.
--
--   2. Thread 1 and thread 2 share another (empty) MVar, the "wait
--      handle". Thread 2 is waiting on the handle, while thread 1 will
--      eventually put into the handle.
--
-- Consider what happens when thread 1 is buggy and throws an exception
-- before putting into the handle. Now thread 2 is blocked indefinitely,
-- and so the main thread is also blocked indefinitely waiting for the
-- result of thread 2. GHC has no choice but to throw the uninformative
-- exception. However, what we really wanted to see was the original
-- exception thrown in thread 1!
--
-- By having the main thread abandon its wait for the results of the
-- spawned threads as soon as the first exception comes in, we give this
-- exception a chance to actually be displayed.
parallel :: Pool -> [IO a] -> IO [a]

-- | As parallel, but exceptions thrown by subcomputations are returned
-- in a data structure rather than being rethrown.
--
-- As a result, property 6 of parallel is not preserved, and therefore
-- if your IO actions can depend on each other and may throw exceptions,
-- your program may die with "blocked indefinitely" exceptions rather
-- than informative messages.
parallelE :: Pool -> [IO a] -> IO [Either SomeException a]

-- | Run the list of computations in parallel, returning the results in
-- the approximate order of completion.
--
-- Has the following properties:
--
--
--   1. Never creates more or fewer unblocked threads than are specified
--      to live in the pool. NB: this count includes the thread executing
--      parallelInterleaved. This should minimize contention and hence
--      pre-emption, while also preventing starvation.
--
--   2. On return, all actions have been performed.
--
--   3. The results of running the actions appear in the list in an
--      undefined order, which is likely to be very similar to the order
--      of completion.
--
--   4. The above properties hold even if parallelInterleaved is used by
--      an action which is itself being executed by one of the parallel
--      combinators.
--
--   5. If any of the IO actions throws an exception, this does not
--      prevent any of the other actions from being performed.
--
--   6. If any of the IO actions throws an exception, the exception
--      thrown by the first failing action in the input list will be
--      thrown by parallelInterleaved. Importantly, at the time the
--      exception is thrown there is no guarantee that the other parallel
--      actions have completed.
--
-- The motivation for this choice is that waiting for all the threads to
-- either return or throw before throwing the first exception will
-- almost always cause GHC to show the "Blocked indefinitely in MVar
-- operation" exception rather than the exception you care about.
--
-- The reason for this behaviour can be seen by considering this machine
-- state:
--
--
--   1. The main thread has used the parallel combinators to spawn two
--      threads, thread 1 and thread 2. It is blocked on both of them,
--      waiting for them to return either a result or an exception via
--      an MVar.
--
--   2. Thread 1 and thread 2 share another (empty) MVar, the "wait
--      handle". Thread 2 is waiting on the handle, while thread 1 will
--      eventually put into the handle.
--
-- Consider what happens when thread 1 is buggy and throws an exception
-- before putting into the handle. Now thread 2 is blocked indefinitely,
-- and so the main thread is also blocked indefinitely waiting for the
-- result of thread 2. GHC has no choice but to throw the uninformative
-- exception. However, what we really wanted to see was the original
-- exception thrown in thread 1!
--
-- By having the main thread abandon its wait for the results of the
-- spawned threads as soon as the first exception comes in, we give this
-- exception a chance to actually be displayed.
parallelInterleaved :: Pool -> [IO a] -> IO [a]

-- | As parallelInterleaved, but exceptions thrown by subcomputations
-- are returned in a data structure rather than being rethrown.
--
-- As a result, property 6 of parallelInterleaved is not preserved, and
-- therefore if your IO actions can depend on each other and may throw
-- exceptions, your program may die with "blocked indefinitely"
-- exceptions rather than informative messages.
parallelInterleavedE :: Pool -> [IO a] -> IO [Either SomeException a]

-- | Run the list of computations in parallel, returning the result of
-- the first thread that completes with (Just x), if any.
--
-- Has the following properties:
--
--
--   1. Never creates more or fewer unblocked threads than are specified
--      to live in the pool. NB: this count includes the thread executing
--      parallelFirst. This should minimize contention and hence
--      pre-emption, while also preventing starvation.
--
--   2. On return, all actions have either been performed or cancelled
--      (with ThreadKilled exceptions).
--
--   3. The above properties hold even if parallelFirst is used by an
--      action which is itself being executed by one of the parallel
--      combinators.
--
--   4. If any of the IO actions throws an exception, the exception
--      thrown by the first throwing action in the input list will be
--      thrown by parallelFirst. Importantly, at the time the exception
--      is thrown there is no guarantee that the other parallel actions
--      have been completed or cancelled.
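-- The result semantics of parallelFirst described above (ignoring the
-- thread bound and the cancellation of the losing actions, which this
-- sketch omits) can be pictured with base primitives: each action races
-- to fill a shared MVar with its Just result, and Nothing results are
-- counted so the caller can give up once every action has declined.
-- firstJustSketch is an invented name, not part of this package's API.

```haskell
import Control.Concurrent
import Control.Monad (forM_, void, when)

-- Illustrative sketch of parallelFirst's result, ignoring the thread
-- bound and cancellation. Caveat: an action that throws is simply lost
-- here, whereas the real combinator rethrows the first exception.
firstJustSketch :: [IO (Maybe a)] -> IO (Maybe a)
firstJustSketch acts = do
    out       <- newEmptyMVar             -- receives the final answer once
    remaining <- newMVar (length acts)    -- actions that may still say Just
    when (null acts) (void (tryPutMVar out Nothing))
    forM_ acts $ \act -> forkIO $ do
        mx <- act
        case mx of
          Just x  -> void (tryPutMVar out (Just x))
          Nothing -> do
            left <- modifyMVar remaining (\k -> return (k - 1, k - 1))
            when (left == 0) (void (tryPutMVar out Nothing))
    takeMVar out

main :: IO ()
main = do
    r <- firstJustSketch
           [ return Nothing
           , threadDelay 100000 >> return (Just "slow")
           , return (Just "fast")
           ]
    putStrLn (maybe "no result" id r)
```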
--
-- The motivation for this choice is that waiting for all the threads to
-- either return or throw before throwing the first exception will
-- almost always cause GHC to show the "Blocked indefinitely in MVar
-- operation" exception rather than the exception you care about.
--
-- The reason for this behaviour can be seen by considering this machine
-- state:
--
--
--   1. The main thread has used the parallel combinators to spawn two
--      threads, thread 1 and thread 2. It is blocked on both of them,
--      waiting for them to return either a result or an exception via
--      an MVar.
--
--   2. Thread 1 and thread 2 share another (empty) MVar, the "wait
--      handle". Thread 2 is waiting on the handle, while thread 1 will
--      eventually put into the handle.
--
-- Consider what happens when thread 1 is buggy and throws an exception
-- before putting into the handle. Now thread 2 is blocked indefinitely,
-- and so the main thread is also blocked indefinitely waiting for the
-- result of thread 2. GHC has no choice but to throw the uninformative
-- exception. However, what we really wanted to see was the original
-- exception thrown in thread 1!
--
-- By having the main thread abandon its wait for the results of the
-- spawned threads as soon as the first exception comes in, we give this
-- exception a chance to actually be displayed.
parallelFirst :: Pool -> [IO (Maybe a)] -> IO (Maybe a)

-- | As parallelFirst, but exceptions thrown by subcomputations are
-- returned in a data structure rather than being rethrown.
--
-- As a result, property 4 of parallelFirst is not preserved, and
-- therefore if your IO actions can depend on each other and may throw
-- exceptions, your program may die with "blocked indefinitely"
-- exceptions rather than informative messages.
parallelFirstE :: Pool -> [IO (Maybe a)] -> IO (Maybe (Either SomeException a))

-- | A thread pool, containing a maximum number of threads. The best way
-- to construct one of these is using withPool.
data Pool

-- | A safe wrapper around startPool and stopPool. Executes an IO action
-- using a newly-created pool with the specified number of threads and
-- cleans it up at the end.
withPool :: Int -> (Pool -> IO a) -> IO a

-- | A slightly unsafe way to construct a pool. Make a pool from the
-- maximum number of threads you wish it to execute (including the main
-- thread in the count).
--
-- If you use this variant then ensure that you insert a call to
-- stopPool somewhere in your program after all users of that pool have
-- finished.
--
-- A better alternative is to see if you can use the withPool variant.
startPool :: Int -> IO Pool

-- | Clean up a thread pool.
-- If you don't call this from the main thread, then no one holds the
-- queue, the queue gets GC'd, the threads find themselves blocked
-- indefinitely, and you get exceptions.
--
-- Calling this cleanly shuts down the threads, so the queue isn't
-- important and you don't get exceptions.
--
-- Only call this after all users of the pool have completed, or your
-- program may block indefinitely.
stopPool :: Pool -> IO ()

-- | You should use this method to wrap any IO action used from your
-- worker threads that may block. It temporarily spawns another worker
-- thread to make up for the loss of the old blocked worker.
--
-- This is particularly important if the unblocking is dependent on
-- worker threads actually doing work. In that situation, if you don't
-- use this method to wrap blocking actions, you may get a deadlock if
-- all your worker threads get blocked on work that they assume will be
-- done by other worker threads.
--
-- An example of something going wrong if you don't wrap blocking
-- actions is the following:
--
--   newEmptyMVar >>= \mvar -> parallel_ pool [readMVar mvar, putMVar mvar ()]
--   
--
-- If the pool has only a single thread, we will sometimes get a
-- schedule where the readMVar action is run before the putMVar. Unless
-- we wrap the read with extraWorkerWhileBlocked, our program will then
-- deadlock, because the worker will become blocked and no other thread
-- will be available to execute the putMVar.
--
-- The correct code is:
--
--   newEmptyMVar >>= \mvar -> parallel_ pool [extraWorkerWhileBlocked pool (readMVar mvar), putMVar mvar ()]
--   
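-- The role of extraWorkerWhileBlocked can be modelled with a quantity
-- semaphore from base: think of each unit as one unblocked worker the
-- pool is allowed to run. Wrapping a blocking action releases the
-- caller's unit for the duration of the block and reacquires it
-- afterwards, which is morally what the library does by spawning a
-- temporary worker. withReleasedSlot and the one-slot pool below are
-- illustrative assumptions, not the package's implementation.

```haskell
import Control.Concurrent
import Control.Concurrent.QSem
import Control.Exception (bracket_)

-- Give up our pool slot while running a blocking action, and take a
-- slot again once it completes (an analogue of extraWorkerWhileBlocked).
withReleasedSlot :: QSem -> IO a -> IO a
withReleasedSlot slots = bracket_ (signalQSem slots) (waitQSem slots)

main :: IO ()
main = do
    slots <- newQSem 1                 -- a "pool" with a single slot
    mvar  <- newEmptyMVar
    done  <- newEmptyMVar
    -- Worker 1 takes the slot and then blocks reading mvar. Without
    -- withReleasedSlot, worker 2 could never acquire the slot to run
    -- the putMVar, and the program would deadlock as described above.
    _ <- forkIO $ bracket_ (waitQSem slots) (signalQSem slots) $ do
           v <- withReleasedSlot slots (readMVar mvar)
           putMVar done v
    _ <- forkIO $ bracket_ (waitQSem slots) (signalQSem slots) $
           putMVar mvar "unblocked"
    takeMVar done >>= putStrLn         -- prints "unblocked"
```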
extraWorkerWhileBlocked :: Pool -> IO a -> IO a

-- | Internal method for adding extra unblocked threads to a pool if one
-- of the current worker threads is going to be temporarily blocked.
-- Unrestricted use of this is unsafe, so we recommend that you use the
-- extraWorkerWhileBlocked function instead if possible.
spawnPoolWorkerFor :: Pool -> IO ()

-- | Internal method for removing threads from a pool after one of the
-- threads on the pool becomes newly unblocked. Unrestricted use of this
-- is unsafe, so we recommend that you use the extraWorkerWhileBlocked
-- function instead if possible.
killPoolWorkerFor :: Pool -> IO ()

-- | Parallelism combinators with an implicit global thread-pool.
--
-- The most basic example of usage is:
--
--   main = parallel_ [putStrLn "Echo", putStrLn " in parallel"] >> stopGlobalPool
--   
-- -- Make sure that you compile with -threaded and supply +RTS -- -N2 -RTS to the generated Haskell executable, or you won't get -- any parallelism. -- -- If you plan to allow your worker items to block, then you should read -- the documentation for extraWorkerWhileBlocked. -- -- The Control.Concurrent.ParallelIO.Local module provides a more -- general interface which allows explicit passing of pools and control -- of their size. This module is implemented on top of that one by -- maintaining a shared global thread pool with one thread per -- capability. module Control.Concurrent.ParallelIO.Global -- | Execute the given actions in parallel on the global thread pool. -- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. -- -- See also parallel_. parallel_ :: [IO a] -> IO () -- | Execute the given actions in parallel on the global thread pool, -- reporting exceptions explicitly. -- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. -- -- See also parallelE_. parallelE_ :: [IO a] -> IO [Maybe SomeException] -- | Execute the given actions in parallel on the global thread pool, -- returning the results in the same order as the corresponding actions. -- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. -- -- See also parallel. parallel :: [IO a] -> IO [a] -- | Execute the given actions in parallel on the global thread pool, -- returning the results in the same order as the corresponding actions -- and reporting exceptions explicitly. -- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. -- -- See also parallelE. parallelE :: [IO a] -> IO [Either SomeException a] -- | Execute the given actions in parallel on the global thread pool, -- returning the results in the approximate order of completion. 
-- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. -- -- See also parallelInterleaved. parallelInterleaved :: [IO a] -> IO [a] -- | Execute the given actions in parallel on the global thread pool, -- returning the results in the approximate order of completion and -- reporting exceptions explicitly. -- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. -- -- See also parallelInterleavedE. parallelInterleavedE :: [IO a] -> IO [Either SomeException a] -- | Run the list of computations in parallel, returning the result of the -- first thread that completes with (Just x), if any. -- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. -- -- See also parallelFirst. parallelFirst :: [IO (Maybe a)] -> IO (Maybe a) -- | Run the list of computations in parallel, returning the result of the -- first thread that completes with (Just x), if any, and reporting any -- exception explicitly. -- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. -- -- See also parallelFirstE. parallelFirstE :: [IO (Maybe a)] -> IO (Maybe (Either SomeException a)) -- | The global thread pool. Contains as many threads as there are -- capabilities. -- -- Users of the global pool must call stopGlobalPool from the main -- thread at the end of their program. globalPool :: Pool -- | In order to reliably make use of the global parallelism combinators, -- you must invoke this function after all calls to those combinators -- have finished. A good choice might be at the end of main. -- -- See also stopPool. stopGlobalPool :: IO () -- | Wrap any IO action used from your worker threads that may block with -- this method: it temporarily spawns another worker thread to make up -- for the loss of the old blocked worker. -- -- See also extraWorkerWhileBlocked. 
extraWorkerWhileBlocked :: IO a -> IO a

-- | Internal method for adding extra unblocked threads to a pool if one
-- of the current worker threads is going to be temporarily blocked.
-- Unrestricted use of this is unsafe, so we recommend that you use the
-- extraWorkerWhileBlocked function instead if possible.
--
-- See also spawnPoolWorkerFor.
spawnPoolWorker :: IO ()

-- | Internal method for removing threads from a pool after one of the
-- threads on the pool becomes newly unblocked. Unrestricted use of this
-- is unsafe, so we recommend that you use the extraWorkerWhileBlocked
-- function instead if possible.
--
-- See also killPoolWorkerFor.
killPoolWorker :: IO ()

-- | Combinators for executing IO actions in parallel on a thread pool.
--
-- This module just reexports Control.Concurrent.ParallelIO.Global: it
-- contains versions of the combinators that make use of a single global
-- thread pool with as many threads as there are capabilities.
--
-- For finer-grained control, you can use
-- Control.Concurrent.ParallelIO.Local instead, which gives you control
-- over the creation of the pool.
module Control.Concurrent.ParallelIO