ConcurrentSet

A set that elements can be added to and removed from concurrently. The main difference between this and a queue is that a 'ConcurrentSet' does not make any guarantees about the order in which things will come out -- in fact, it will go out of its way to make sure that they are unordered! The reason that I use this primitive rather than a 'Chan' is that:

1) At Standard Chartered we saw intermittent deadlocks when using 'Chan', but Neil tells me that he stopped seeing them when they moved to a 'ConcurrentSet'-like thing. We never found the reason for the deadlocks, though...

2) It's better to dequeue parallel tasks in pseudo-random order for many common applications, because (e.g. in Shake) lots of tasks that require the same machine resources (i.e. CPU or RAM) tend to be next to each other in the list. Thus, reducing access locality means that we tend to choose tasks that require different resources.

Pool

A thread pool, containing a maximum number of threads. The best way to construct one of these is using 'withPool'. A 'WorkQueue' is used to communicate 'WorkItem's to the workers.

WorkItem

Type of work items that are put onto the queue internally. The 'Bool' returned from the 'IO' action specifies whether the invoking thread should terminate itself immediately.

startPool

A slightly unsafe way to construct a pool. Make a pool from the maximum number of threads you wish it to execute (including the main thread in the count).

If you use this variant then ensure that you insert a call to 'stopPool' somewhere in your program after all users of that pool have finished. A better alternative is to see if you can use the 'withPool' variant.

stopPool

Clean up a thread pool. If you don't call this from the main thread then no one holds the queue, the queue gets GC'd, the threads find themselves blocked indefinitely, and you get exceptions.

This cleanly shuts down the threads, so the queue isn't important and you don't get exceptions. Only call this after all users of the pool have completed, or your program may block indefinitely.

withPool

A safe wrapper around 'startPool' and 'stopPool'.
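As a concrete illustration of the pool lifecycle described above, here is a minimal sketch (assuming the parallel-io package is installed) that uses 'withPool' so that 'stopPool' runs automatically, even on exceptions:

```haskell
import Control.Concurrent.MVar (newMVar, modifyMVar_, readMVar)
import Control.Concurrent.ParallelIO.Local (withPool, parallel_)

main :: IO ()
main =
  -- 'withPool' brackets pool creation and cleanup; the count of 4
  -- includes the main thread.
  withPool 4 $ \pool -> do
    counter <- newMVar (0 :: Int)
    -- Ten independent actions, dequeued in unspecified order; each
    -- atomically bumps the shared counter.
    parallel_ pool (replicate 10 (modifyMVar_ counter (return . (+ 1))))
    readMVar counter >>= print  -- 10
```

Using 'startPool'/'stopPool' directly is only worthwhile when the pool's lifetime cannot be expressed as a bracket around a single action.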
Executes an 'IO' action using a newly-created pool with the specified number of threads and cleans it up at the end.

enqueueOnPool

Internal method for scheduling work on a pool.

extraWorkerWhileBlocked

You should wrap any IO action used from your worker threads that may block with this method. It temporarily spawns another worker thread to make up for the loss of the old blocked worker.

This is particularly important if the unblocking is dependent on worker threads actually doing work. If you have this situation and you don't use this method to wrap blocking actions, then you may get a deadlock if all your worker threads get blocked on work that they assume will be done by other worker threads.

An example where something goes wrong if you don't use this to wrap blocking actions is the following:

> newEmptyMVar >>= \mvar -> parallel_ pool [readMVar mvar, putMVar mvar ()]

If we only have one thread, we will sometimes get a schedule where the 'readMVar' action is run before the 'putMVar'. Unless we wrap the read with 'extraWorkerWhileBlocked', if the pool has a single thread our program will deadlock, because the worker will become blocked and no other thread will be available to execute the 'putMVar'. The correct code is:

> newEmptyMVar >>= \mvar -> parallel_ pool [extraWorkerWhileBlocked pool (readMVar mvar), putMVar mvar ()]

spawnPoolWorkerFor

Internal method for adding extra unblocked threads to a pool if one of the current worker threads is going to be temporarily blocked. Unrestricted use of this is unsafe, so we recommend that you use the 'extraWorkerWhileBlocked' function instead if possible.

killPoolWorkerFor

Internal method for removing threads from a pool after one of the threads on the pool becomes newly unblocked. Unrestricted use of this is unsafe, so we recommend that you use the 'extraWorkerWhileBlocked' function instead if possible.

parallel_

Run the list of computations in parallel.

Has the following properties:

* Never creates more or fewer unblocked threads than are specified to live in the pool. NB: this count includes the thread executing 'parallel_'.
* This should minimize contention and hence pre-emption, while also preventing starvation.

* On return all actions have been performed.

* The function returns in a timely manner as soon as all actions have been performed.

* The above properties are true even if 'parallel_' is used by an action which is itself being executed by one of the parallel combinators.

If any of the IO actions throws an exception, undefined behaviour will result. If you want safety, wrap your actions in 'Control.Exception.try'.

parallel

Run the list of computations in parallel, returning the results in the same order as the corresponding actions.

Has the following properties:

* Never creates more or fewer unblocked threads than are specified to live in the pool. NB: this count includes the thread executing 'parallel'.

* This should minimize contention and hence pre-emption, while also preventing starvation.

* On return all actions have been performed.

* The function returns in a timely manner as soon as all actions have been performed.

* The above properties are true even if 'parallel' is used by an action which is itself being executed by one of the parallel combinators.

If any of the IO actions throws an exception, undefined behaviour will result. If you want safety, wrap your actions in 'Control.Exception.try'.

parallelInterleaved

Run the list of computations in parallel, returning the results in the approximate order of completion.

Has the following properties:

* Never creates more or fewer unblocked threads than are specified to live in the pool. NB: this count includes the thread executing 'parallelInterleaved'.

* This should minimize contention and hence pre-emption, while also preventing starvation.

* On return all actions have been performed.

* The results of running the actions appear in the list in undefined order, but one which is likely to be very similar to the order of completion.

* The above properties are true even if 'parallelInterleaved' is used by an action which is itself being executed by one of the parallel combinators.
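The ordering guarantee of 'parallel' can be seen in a small sketch (assuming the parallel-io package is installed): even when an earlier action in the list finishes last, its result keeps its position in the output list.

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.ParallelIO.Local (withPool, parallel)

main :: IO ()
main = withPool 2 $ \pool -> do
  -- The first action is the slowest, but 'parallel' still returns
  -- results in the same order as the action list
  -- ('parallelInterleaved' would instead return them in approximate
  -- completion order).
  rs <- parallel pool
          [ threadDelay 50000 >> return (1 :: Int)
          , return 2
          , return 3
          ]
  print rs  -- [1,2,3]
```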
If any of the IO actions throws an exception, undefined behaviour will result. If you want safety, wrap your actions in 'Control.Exception.try'.

globalPool

The global thread pool. Contains as many threads as there are capabilities.

Users of the global pool must call 'stopGlobalPool' from the main thread at the end of their program.

stopGlobalPool

In order to reliably make use of the global parallelism combinators, you must invoke this function after all calls to those combinators have finished. A good choice might be at the end of main.

See also 'stopPool'.

extraWorkerWhileBlocked

Wrap any IO action used from your worker threads that may block with this method: it temporarily spawns another worker thread to make up for the loss of the old blocked worker.

See also 'Control.Concurrent.ParallelIO.Local.extraWorkerWhileBlocked'.

spawnPoolWorker

Internal method for adding extra unblocked threads to a pool if one of the current worker threads is going to be temporarily blocked. Unrestricted use of this is unsafe, so we recommend that you use the 'extraWorkerWhileBlocked' function instead if possible.

See also 'spawnPoolWorkerFor'.

killPoolWorker

Internal method for removing threads from a pool after one of the threads on the pool becomes newly unblocked. Unrestricted use of this is unsafe, so we recommend that you use the 'extraWorkerWhileBlocked' function instead if possible.

See also 'killPoolWorkerFor'.

parallel_

Execute the given actions in parallel on the global thread pool.

Users of the global pool must call 'stopGlobalPool' from the main thread at the end of their program.

See also 'Control.Concurrent.ParallelIO.Local.parallel_'.

parallel

Execute the given actions in parallel on the global thread pool, returning the results in the same order as the corresponding actions.

Users of the global pool must call 'stopGlobalPool' from the main thread at the end of their program.

See also 'Control.Concurrent.ParallelIO.Local.parallel'.

parallelInterleaved

Execute the given actions in parallel on the global thread pool, returning the results in the approximate order of completion.

Users of the global pool must call 'stopGlobalPool' from the main thread at the end of their program.

See also 'Control.Concurrent.ParallelIO.Local.parallelInterleaved'.
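A minimal sketch of the global-pool variants (again assuming the parallel-io package is installed); note the mandatory 'stopGlobalPool' at the end of main:

```haskell
import Control.Concurrent.ParallelIO.Global (parallel, stopGlobalPool)

main :: IO ()
main = do
  -- The global pool is sized to the number of capabilities and needs
  -- no explicit construction.
  rs <- parallel [return (10 :: Int), return 20]
  print rs  -- [10,20]
  -- Must run on the main thread, after all combinator calls are done.
  stopGlobalPool
```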
(Package: parallel-io-0.3.0.2. Modules: Control.Concurrent.ParallelIO, Control.Concurrent.ParallelIO.Local, Control.Concurrent.ParallelIO.Global, Control.Concurrent.ParallelIO.ConcurrentCollection.)