Copyright | (c) Alexey Kuleshevich 2018-2019 |
---|---|
License | BSD3 |
Maintainer | Alexey Kuleshevich <lehins@yandex.ru> |
Stability | experimental |
Portability | non-portable |
Safe Haskell | None |
Language | Haskell2010 |
Synopsis
- data Comp where
- data Scheduler m a = Scheduler {
  - numWorkers :: !Int
  - scheduleWork :: m a -> m ()
  }
- withScheduler :: MonadUnliftIO m => Comp -> (Scheduler m a -> m b) -> m [a]
- withScheduler_ :: MonadUnliftIO m => Comp -> (Scheduler m a -> m b) -> m ()
- traverseConcurrently :: (MonadUnliftIO m, Traversable t) => Comp -> (a -> m b) -> t a -> m (t b)
- traverseConcurrently_ :: (MonadUnliftIO m, Foldable t) => Comp -> (a -> m b) -> t a -> m ()
- traverse_ :: (Applicative f, Foldable t) => (a -> f ()) -> t a -> f ()
Scheduler and strategies
Computation strategy to use when scheduling work.
Seq | Sequential computation |
ParOn ![Int] | Schedule workers to run on specific capabilities. Specifying an empty list (ParOn [], same as Par) will result in using all available capabilities. |
ParN !Word16 | Specify the number of workers that will be handling all the jobs. Difference from ParOn is that workers can migrate between capabilities. |
Main type for scheduling work. See withScheduler or withScheduler_ for the only ways to get access to this data type.
Since: 1.0.0
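A minimal sketch of how a Scheduler is used, assuming the names above are imported from the Control.Scheduler module of the scheduler package:

```haskell
import Control.Scheduler (Comp (..), numWorkers, scheduleWork, withScheduler)

main :: IO ()
main = do
  -- Submit three trivial jobs; withScheduler blocks until all of them
  -- are done and returns the results in submission order.
  results <- withScheduler (ParN 3) $ \ s -> do
    putStrLn ("workers: " ++ show (numWorkers s))
    mapM_ (scheduleWork s . pure) [10, 20, 30 :: Int]
  print results
```

The Scheduler value is only valid inside the callback passed to withScheduler; jobs are submitted with scheduleWork and their results are collected for you.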
Initialize Scheduler
withScheduler Source #

:: MonadUnliftIO m | |
=> Comp | Computation strategy |
-> (Scheduler m a -> m b) | Action that will be scheduling all the work. |
-> m [a] |
Initialize a scheduler and submit jobs that will be computed sequentially or in parallel, which is determined by the Comp strategy.
Here are some cool properties of withScheduler:
- This function will block until all of the submitted jobs have finished or at least one of them resulted in an exception, which will be re-thrown at the callsite.
- It is totally fine for nested jobs to submit more jobs for the same or another scheduler.
- It is ok to initialize multiple schedulers at the same time, although that will likely result in suboptimal performance, unless workers are pinned to different capabilities.
- Warning: It is very dangerous to schedule jobs that do blocking IO, since it can lead to a deadlock very quickly, if you are not careful. Consider this example. The first execution works fine, since there are two scheduled workers, and one can unblock the other, but the second scenario immediately results in a deadlock.
>>>
withScheduler (ParOn [1,2]) $ \s -> newEmptyMVar >>= (\ mv -> scheduleWork s (readMVar mv) >> scheduleWork s (putMVar mv ()))
[(),()]
>>>
import System.Timeout
>>>
timeout 1000000 $ withScheduler (ParOn [1]) $ \s -> newEmptyMVar >>= (\ mv -> scheduleWork s (readMVar mv) >> scheduleWork s (putMVar mv ()))
Nothing
Important: In order to get work done truly in parallel, the program needs to be compiled with the -threaded GHC flag and executed with +RTS -N -RTS to use all available cores.
Since: 1.0.0
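As a sketch of the note above, assuming the example program lives in a file named Main.hs, compilation and execution might look like:

```shell
# Compile with the threaded runtime and allow RTS options at run time
ghc -threaded -rtsopts Main.hs
# Run using all available cores (-N without a number)
./Main +RTS -N -RTS
```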
withScheduler_ Source #

:: MonadUnliftIO m | |
=> Comp | Computation strategy |
-> (Scheduler m a -> m b) | Action that will be scheduling all the work. |
-> m () |
Same as withScheduler, but discards results of submitted jobs.
Since: 1.0.0
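A short sketch of withScheduler_ for effect-only jobs (module name Control.Scheduler assumed, as in the scheduler package):

```haskell
import Control.Scheduler (Comp (..), scheduleWork, withScheduler_)

main :: IO ()
main =
  -- Jobs are run for their effects only; any results are thrown away.
  withScheduler_ (ParOn [1, 2]) $ \ s ->
    mapM_ (\ i -> scheduleWork s (print i)) [1 .. 4 :: Int]
```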
Useful functions
traverseConcurrently :: (MonadUnliftIO m, Traversable t) => Comp -> (a -> m b) -> t a -> m (t b) Source #
Map an action over each element of the Traversable t according to the supplied computation strategy.
Since: 1.0.0
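A minimal sketch of traverseConcurrently (import from Control.Scheduler assumed):

```haskell
import Control.Scheduler (Comp (..), traverseConcurrently)

main :: IO ()
main = do
  -- The traversal preserves the shape of the input structure;
  -- here a list of five numbers yields a list of five squares.
  squares <- traverseConcurrently (ParN 4) (\ x -> pure (x * x)) [1 .. 5 :: Int]
  print squares
```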
traverseConcurrently_ :: (MonadUnliftIO m, Foldable t) => Comp -> (a -> m b) -> t a -> m () Source #
Just like traverseConcurrently, but restricted to Foldable and discards the results of computation.
Since: 1.0.0
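A sketch of traverseConcurrently_ (import from Control.Scheduler assumed):

```haskell
import Control.Scheduler (Comp (..), traverseConcurrently_)

main :: IO ()
main =
  -- Foldable input, no results collected: one effect per element.
  traverseConcurrently_ Par print [1 .. 3 :: Int]
```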
traverse_ :: (Applicative f, Foldable t) => (a -> f ()) -> t a -> f () Source #
This is generally a faster way to traverse while ignoring the result rather than using mapM_.
Since: 1.0.0
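A sketch of traverse_, here assumed to be the re-export from Control.Scheduler (it has the same shape as the one in Data.Foldable):

```haskell
import Control.Scheduler (traverse_)

main :: IO ()
main =
  -- Sequential, result-discarding traversal; a drop-in replacement
  -- for mapM_ with the arguments in the same order.
  traverse_ print [1 .. 3 :: Int]
```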
Exceptions
If any one of the workers dies with an exception, even if that exception is asynchronous, it will be re-thrown in the scheduling thread.
>>>
let didAWorkerDie = handleJust asyncExceptionFromException (return . (== ThreadKilled)) . fmap or
>>>
:t didAWorkerDie
didAWorkerDie :: Foldable t => IO (t Bool) -> IO Bool
>>>
didAWorkerDie $ withScheduler Par $ \ s -> scheduleWork s $ pure False
False
>>>
didAWorkerDie $ withScheduler Par $ \ s -> scheduleWork s $ myThreadId >>= killThread >> pure False
True
>>>
withScheduler Par $ \ s -> scheduleWork s $ myThreadId >>= killThread >> pure False
*** Exception: thread killed