Copyright | (c) 2016 Michael Walker |
---|---|
License | MIT |
Maintainer | Michael Walker <mike@barrucadu.co.uk> |
Stability | experimental |
Portability | GADTs, GeneralizedNewtypeDeriving |
Safe Haskell | None |
Language | Haskell2010 |
Systematic testing for concurrent computations.
- data Way where
- runSCT :: MonadRef r n => Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)]
- runSCT' :: (MonadRef r n, NFData a) => Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)]
- resultsSet :: (MonadRef r n, Ord a) => Way -> MemType -> ConcT r n a -> n (Set (Either Failure a))
- resultsSet' :: (MonadRef r n, Ord a, NFData a) => Way -> MemType -> ConcT r n a -> n (Set (Either Failure a))
- data Bounds = Bounds {}
- noBounds :: Bounds
- sctBound :: MonadRef r n => MemType -> Bounds -> ConcT r n a -> n [(Either Failure a, Trace)]
- newtype PreemptionBound = PreemptionBound Int
- sctPreBound :: MonadRef r n => MemType -> PreemptionBound -> ConcT r n a -> n [(Either Failure a, Trace)]
- newtype FairBound = FairBound Int
- sctFairBound :: MonadRef r n => MemType -> FairBound -> ConcT r n a -> n [(Either Failure a, Trace)]
- newtype LengthBound = LengthBound Int
- sctLengthBound :: MonadRef r n => MemType -> LengthBound -> ConcT r n a -> n [(Either Failure a, Trace)]
- sctRandom :: (MonadRef r n, RandomGen g) => MemType -> g -> Int -> ConcT r n a -> n [(Either Failure a, Trace)]
Running Concurrent Programs
How to explore the possible executions of a concurrent program:
- Systematically explore all executions within the bounds; or
- Explore a fixed number of random executions, with the given PRNG.
runSCT :: MonadRef r n => Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)] Source #
Explore possible executions of a concurrent program according to the given Way.
Since: 0.6.0.0
runSCT' :: (MonadRef r n, NFData a) => Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)] Source #
A strict variant of runSCT.
Demanding the result of this will force it to normal form, which may be more efficient in some situations.
Since: 0.6.0.0
resultsSet Source #
:: (MonadRef r n, Ord a) | |
=> Way | How to run the concurrent program. |
-> MemType | The memory model to use for non-synchronised CRef operations. |
-> ConcT r n a | The computation to run many times. |
-> n (Set (Either Failure a)) |
Return the set of results of a concurrent program.
Since: 0.6.0.0
resultsSet' :: (MonadRef r n, Ord a, NFData a) => Way -> MemType -> ConcT r n a -> n (Set (Either Failure a)) Source #
A strict variant of resultsSet.
Demanding the result of this will force it to normal form, which may be more efficient in some situations.
Since: 0.6.0.0
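As a sketch of how the result-set runners are used, the following collapses all explored executions of a small racy program to its distinct outcomes. This is an illustrative assumption, not from this excerpt: it assumes `defaultWay` and `defaultMemType` are available from Test.DejaFu, and that `MonadConc` comes from Control.Monad.Conc.Class in the concurrency package.

```haskell
import qualified Data.Set as Set
import Control.Monad.Conc.Class (MonadConc, fork, newMVar, modifyMVar_, readMVar)
import Test.DejaFu (defaultWay, defaultMemType)  -- assumed defaults
import Test.DejaFu.SCT (resultsSet)

-- Two racing updates to a shared MVar: resultsSet reports every
-- distinct outcome exactly once, however many interleavings produce it.
race2 :: MonadConc m => m Int
race2 = do
  v <- newMVar (0 :: Int)
  _ <- fork (modifyMVar_ v (pure . (+ 1)))
  _ <- fork (modifyMVar_ v (pure . (* 2)))
  readMVar v

main :: IO ()
main = do
  rs <- resultsSet defaultWay defaultMemType race2
  mapM_ print (Set.toList rs)
```

Use resultsSet' instead when the result type has an NFData instance and you want each result forced to normal form as it is produced.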
Bounded Partial-order Reduction
We can characterise the state of a concurrent computation by considering the ordering of dependent events. This is a partial order: independent events can be performed in any order without affecting the result, and so are not ordered.
Partial-order reduction is a technique for computing these partial orders, and only testing one total order for each partial order. This cuts down the amount of work to be done significantly. Bounded partial-order reduction is a further optimisation, which only considers schedules within some bound.
This module provides a combined pre-emption, fair, and length bounding runner:
- Pre-emption + fair bounding is useful for programs which use loop/yield control flows but are otherwise terminating.
- Pre-emption, fair + length bounding is useful for non-terminating programs, and is used by the testing functionality in Test.DejaFu.
See Bounded partial-order reduction, K. Coons, M. Musuvathi, K. McKinley for more details.
data Bounds Source #
Since: 0.2.0.0
noBounds :: Bounds Source #
No bounds enabled. This forces the scheduler to just use partial-order reduction and sleep sets to prune the search space. This will ONLY work if your computation always terminates!
Since: 0.3.0.0
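To illustrate the terminating-computation caveat, here is a hedged sketch of running a small, always-terminating racy program with no bounds. The `TotalStoreOrder` constructor and the Control.Monad.Conc.Class module are assumptions about the wider dejafu/concurrency API, not shown in this excerpt.

```haskell
import Control.Monad.Conc.Class (MonadConc, fork, newEmptyMVar, putMVar, readMVar)
import Test.DejaFu.SCT (sctBound, noBounds)

-- A terminating racy program: two threads race to fill an MVar,
-- and the main thread reads whichever write lands first.
firstWriter :: MonadConc m => m Int
firstWriter = do
  v <- newEmptyMVar
  _ <- fork (putMVar v 1)
  _ <- fork (putMVar v 2)
  readMVar v

-- Every execution terminates, so pure partial-order reduction with
-- sleep sets (no bounds) is safe here:
--
--   traces <- sctBound TotalStoreOrder noBounds firstWriter
```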
sctBound Source #
:: MonadRef r n | |
=> MemType | The memory model to use for non-synchronised CRef operations. |
-> Bounds | The combined bounds. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
SCT via BPOR.
Schedules are generated by running the computation with a deterministic scheduler following some initial list of decisions. At each step of execution, possibly-conflicting actions are looked for; if any are found, "backtracking points" are added, to cause the events to happen in a different order in a future execution.
Note that unlike with non-bounded partial-order reduction, this may do some redundant work as the introduction of a bound can make previously non-interfering events interfere with each other.
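Individual bounds are typically combined by record update on noBounds. The field names below (boundPreemp, boundFair) are hypothetical — check the Bounds record in this module, as its fields are not shown in this excerpt:

```haskell
import Test.DejaFu.SCT (Bounds, noBounds)

-- Allow at most 2 pre-emptions and a yield-count difference of at
-- most 5 between threads, with no length bound.
-- (Field names are assumed, not confirmed by this excerpt.)
myBounds :: Bounds
myBounds = noBounds { boundPreemp = Just 2, boundFair = Just 5 }
```

Passing myBounds to sctBound then restricts exploration to schedules within both bounds at once.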
Since: 0.5.0.0
Pre-emption Bounding
BPOR using pre-emption bounding. This adds conservative backtracking points at the prior context switch whenever a non-conservative backtracking point is added, as alternative decisions can influence the reachability of different states.
See the BPOR paper for more details.
newtype PreemptionBound Source #
Since: 0.2.0.0
sctPreBound Source #
:: MonadRef r n | |
=> MemType | The memory model to use for non-synchronised CRef operations. |
-> PreemptionBound | The maximum number of pre-emptions to allow in a single execution. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
An SCT runner using a pre-emption bounding scheduler.
Since: 0.4.0.0
Fair Bounding
BPOR using fair bounding. This bounds the maximum difference between the number of yield operations different threads have performed.
See the BPOR paper for more details.
newtype FairBound Source #
Since: 0.2.0.0
sctFairBound Source #
:: MonadRef r n | |
=> MemType | The memory model to use for non-synchronised CRef operations. |
-> FairBound | The maximum difference between the number of yield operations performed by different threads. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
An SCT runner using a fair bounding scheduler.
Since: 0.4.0.0
Length Bounding
BPOR using length bounding. This bounds the maximum length (in terms of primitive actions) of an execution.
newtype LengthBound Source #
Since: 0.2.0.0
sctLengthBound Source #
:: MonadRef r n | |
=> MemType | The memory model to use for non-synchronised CRef operations. |
-> LengthBound | The maximum length of a schedule, in terms of primitive actions. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
An SCT runner using a length bounding scheduler.
Since: 0.4.0.0
Random Scheduling
By sacrificing completeness, testing of a large concurrent system can be greatly sped up. Counter-intuitively, random scheduling has better bug-finding behaviour than just executing a program "for real" many times, perhaps because a random scheduler is more chaotic than the real scheduler.
sctRandom Source #
:: (MonadRef r n, RandomGen g) | |
=> MemType | The memory model to use for non-synchronised CRef operations. |
-> g | The random number generator. |
-> Int | The number of executions to perform. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
SCT via random scheduling.
Schedules are generated by assigning to each new thread a random weight. Threads are then scheduled by a weighted random selection.
This is not guaranteed to find all distinct results.
Since: 0.5.0.0
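A hedged usage sketch: fixing the PRNG seed makes a random run reproducible, which matters when re-running a failing schedule. `TotalStoreOrder` is assumed to be a MemType constructor and `myComputation` is a placeholder name; neither is defined in this excerpt.

```haskell
import System.Random (mkStdGen)
import Test.DejaFu.SCT (sctRandom)

-- Perform 100 weighted-random executions with a fixed seed, so any
-- failure found can be reproduced by re-running with the same seed:
--
--   results <- sctRandom TotalStoreOrder (mkStdGen 42) 100 myComputation
--
-- Unlike the bounded runners, this gives no guarantee that all
-- distinct results are found.
```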