Copyright | (c) 2016 Michael Walker |
---|---|
License | MIT |
Maintainer | Michael Walker <mike@barrucadu.co.uk> |
Stability | experimental |
Portability | GeneralizedNewtypeDeriving |
Safe Haskell | None |
Language | Haskell2010 |
Systematic testing for concurrent computations.
- data Way g
- = Systematically Bounds
- | Randomly g Int
- runSCT :: (MonadRef r n, RandomGen g) => Way g -> MemType -> Conc n r a -> n [(Either Failure a, Trace)]
- runSCT' :: (MonadRef r n, RandomGen g, NFData a) => Way g -> MemType -> Conc n r a -> n [(Either Failure a, Trace)]
- resultsSet :: (MonadRef r n, RandomGen g, Ord a) => Way g -> MemType -> Conc n r a -> n (Set (Either Failure a))
- resultsSet' :: (MonadRef r n, RandomGen g, Ord a, NFData a) => Way g -> MemType -> Conc n r a -> n (Set (Either Failure a))
- data Bounds = Bounds {}
- noBounds :: Bounds
- sctBound :: MonadRef r n => MemType -> Bounds -> Conc n r a -> n [(Either Failure a, Trace)]
- newtype PreemptionBound = PreemptionBound Int
- sctPreBound :: MonadRef r n => MemType -> PreemptionBound -> Conc n r a -> n [(Either Failure a, Trace)]
- newtype FairBound = FairBound Int
- sctFairBound :: MonadRef r n => MemType -> FairBound -> Conc n r a -> n [(Either Failure a, Trace)]
- newtype LengthBound = LengthBound Int
- sctLengthBound :: MonadRef r n => MemType -> LengthBound -> Conc n r a -> n [(Either Failure a, Trace)]
- sctRandom :: (MonadRef r n, RandomGen g) => MemType -> g -> Int -> Conc n r a -> n [(Either Failure a, Trace)]
Running Concurrent Programs
How to explore the possible executions of a concurrent program.
Systematically Bounds | Systematically explore all executions within the bounds. |
Randomly g Int | Explore a fixed number of random executions, with the given PRNG. |
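As a sketch of how a Way might be chosen (using this module's noBounds and System.Random's mkStdGen; any RandomGen instance works for the type parameter):

```haskell
import System.Random (StdGen, mkStdGen)

-- Exhaustive exploration, pruned only by partial-order reduction;
-- only safe if the computation always terminates.
systematic :: Way StdGen
systematic = Systematically noBounds

-- 100 schedules chosen at random, seeded for reproducibility.
random100 :: Way StdGen
random100 = Randomly (mkStdGen 42) 100
```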
runSCT' :: (MonadRef r n, RandomGen g, NFData a) => Way g -> MemType -> Conc n r a -> n [(Either Failure a, Trace)] Source #
A strict variant of runSCT.
Demanding the result of this will force it to normal form, which may be more efficient in some situations.
:: (MonadRef r n, RandomGen g, Ord a) | |
=> Way g | How to run the concurrent program. |
-> MemType | The memory model to use for non-synchronised operations. |
-> Conc n r a | The computation to run many times. |
-> n (Set (Either Failure a)) |
Return the set of results of a concurrent program.
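For example, a racy computation run under systematic exploration (a hypothetical program; noBounds is from this module, and defaultMemType is assumed to come from Test.DejaFu):

```haskell
import Control.Concurrent.Classy

-- A race: whether the forked increment is seen depends on scheduling.
racy :: MonadConc m => m Int
racy = do
  var <- newMVar (0 :: Int)
  _ <- fork (modifyMVar_ var (pure . (+1)))
  readMVar var

-- resultsSet (Systematically noBounds) defaultMemType racy
-- returns the set of distinct outcomes, which here could contain
-- both Right 0 and Right 1.
```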
resultsSet' :: (MonadRef r n, RandomGen g, Ord a, NFData a) => Way g -> MemType -> Conc n r a -> n (Set (Either Failure a)) Source #
A strict variant of resultsSet.
Demanding the result of this will force it to normal form, which may be more efficient in some situations.
Bounded Partial-order Reduction
We can characterise the state of a concurrent computation by considering the ordering of dependent events. This is a partial order: independent events can be performed in any order without affecting the result, and so are not ordered.
Partial-order reduction is a technique for computing these partial orders, and only testing one total order for each partial order. This cuts down the amount of work to be done significantly. Bounded partial-order reduction is a further optimisation, which only considers schedules within some bound.
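The idea can be illustrated without any concurrency machinery: if two events are independent, every interleaving of them produces the same final state, so only one total order per equivalence class needs to be executed. A self-contained sketch (plain Haskell, not part of this API):

```haskell
import Data.List (nub)

-- Two "threads", each a list of state updates. Enumerate every
-- possible total order of their events.
interleavings :: [a] -> [a] -> [[a]]
interleavings [] ys = [ys]
interleavings xs [] = [xs]
interleavings (x:xs) (y:ys) =
  map (x:) (interleavings xs (y:ys)) ++ map (y:) (interleavings (x:xs) ys)

-- Apply a schedule of updates to an initial state.
run :: [(Int, Int) -> (Int, Int)] -> (Int, Int)
run = foldl (flip ($)) (0, 0)

-- The two updates below touch different components, so they commute:
-- nub (map run (interleavings [\(a,b) -> (a+1,b)] [\(a,b) -> (a,b+2)]))
-- collapses to a single result, and POR would test only one schedule.
```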
This module provides a combined pre-emption, fair, and length bounding runner:
- Pre-emption + fair bounding is useful for programs which use loop/yield control flows but are otherwise terminating.
- Pre-emption, fair + length bounding is useful for non-terminating programs, and is used by the testing functionality in Test.DejaFu.
See *Bounded Partial-order Reduction* (K. Coons, M. Musuvathi, and K. McKinley) for more details.
No bounds enabled. This forces the scheduler to just use partial-order reduction and sleep sets to prune the search space. This will ONLY work if your computation always terminates!
:: MonadRef r n | |
=> MemType | The memory model to use for non-synchronised operations. |
-> Bounds | The combined bounds. |
-> Conc n r a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
SCT via BPOR.
Schedules are generated by running the computation with a deterministic scheduler with some initial list of decisions. At each step of execution, potentially-conflicting actions are looked for; if any are found, "backtracking points" are added, to cause the events to happen in a different order in a future execution.
Note that unlike with non-bounded partial-order reduction, this may do some redundant work as the introduction of a bound can make previously non-interfering events interfere with each other.
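Putting the pieces together, a bounds-free run looks like this (only safe for terminating computations; defaultMemType is assumed to come from Test.DejaFu):

```haskell
-- Every distinct execution, pruned only by partial-order reduction
-- and sleep sets.
allExecutions :: MonadRef r n => Conc n r a -> n [(Either Failure a, Trace)]
allExecutions = sctBound defaultMemType noBounds
```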
Pre-emption Bounding
BPOR using pre-emption bounding. This adds conservative backtracking points at the prior context switch whenever a non-conservative backtracking point is added, as alternative decisions can influence the reachability of different states.
See the BPOR paper for more details.
newtype PreemptionBound Source #
:: MonadRef r n | |
=> MemType | The memory model to use for non-synchronised operations. |
-> PreemptionBound | The maximum number of pre-emptions to allow in a single execution. |
-> Conc n r a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
An SCT runner using a pre-emption bounding scheduler.
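As a sketch (defaultMemType is assumed to come from Test.DejaFu; empirically, small pre-emption bounds suffice to trigger most concurrency bugs):

```haskell
-- Allow at most two pre-emptive context switches per execution.
preBounded :: MonadRef r n => Conc n r a -> n [(Either Failure a, Trace)]
preBounded = sctPreBound defaultMemType (PreemptionBound 2)
```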
Fair Bounding
BPOR using fair bounding. This bounds the maximum difference between the number of yield operations different threads have performed.
See the BPOR paper for more details.
:: MonadRef r n | |
=> MemType | The memory model to use for non-synchronised operations. |
-> FairBound | The maximum difference between the number of yield operations performed by different threads. |
-> Conc n r a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
An SCT runner using a fair bounding scheduler.
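For example (defaultMemType is assumed to come from Test.DejaFu):

```haskell
-- Prune schedules where any two threads' yield counts differ by more
-- than five, keeping loop/yield programs finite to explore.
fairBounded :: MonadRef r n => Conc n r a -> n [(Either Failure a, Trace)]
fairBounded = sctFairBound defaultMemType (FairBound 5)
```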
Length Bounding
BPOR using length bounding. This bounds the maximum length (in terms of primitive actions) of an execution.
newtype LengthBound Source #
:: MonadRef r n | |
=> MemType | The memory model to use for non-synchronised operations. |
-> LengthBound | The maximum length of a schedule, in terms of primitive actions. |
-> Conc n r a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
An SCT runner using a length bounding scheduler.
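For example (defaultMemType is assumed to come from Test.DejaFu):

```haskell
-- Abandon any execution after 250 primitive actions, making even
-- non-terminating computations explorable.
lengthBounded :: MonadRef r n => Conc n r a -> n [(Either Failure a, Trace)]
lengthBounded = sctLengthBound defaultMemType (LengthBound 250)
```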
Random Scheduling
By sacrificing completeness, testing of a large concurrent system can be greatly sped up. Counter-intuitively, random scheduling has better bug-finding behaviour than simply executing a program "for real" many times, perhaps because a random scheduler is more chaotic than the real scheduler.
:: (MonadRef r n, RandomGen g) | |
=> MemType | The memory model to use for non-synchronised operations. |
-> g | The random number generator. |
-> Int | The number of executions to perform. |
-> Conc n r a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
SCT via random scheduling.
Schedules are generated by assigning to each new thread a random weight. Threads are then scheduled by a weighted random selection.
This is not guaranteed to find all distinct results.
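A sketch of a reproducible random run (defaultMemType is assumed to come from Test.DejaFu):

```haskell
import System.Random (mkStdGen)

-- 1000 weighted-random schedules from a fixed seed, so any failure
-- found can be replayed with the same seed.
randomRuns :: MonadRef r n => Conc n r a -> n [(Either Failure a, Trace)]
randomRuns = sctRandom defaultMemType (mkStdGen 42) 1000
```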