Copyright | (c) 2015--2017 Michael Walker |
---|---|
License | MIT |
Maintainer | Michael Walker <mike@barrucadu.co.uk> |
Stability | experimental |
Portability | BangPatterns, GADTs, GeneralizedNewtypeDeriving |
Safe Haskell | None |
Language | Haskell2010 |
Systematic testing for concurrent computations.
- data Way
- systematically :: Bounds -> Way
- randomly :: RandomGen g => g -> Int -> Way
- uniformly :: RandomGen g => g -> Int -> Way
- swarmy :: RandomGen g => g -> Int -> Int -> Way
- runSCT :: (MonadConc n, MonadRef r n) => Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)]
- resultsSet :: (MonadConc n, MonadRef r n, Ord a) => Way -> MemType -> ConcT r n a -> n (Set (Either Failure a))
- data Discard
- runSCTDiscard :: (MonadConc n, MonadRef r n) => (Either Failure a -> Maybe Discard) -> Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)]
- resultsSetDiscard :: (MonadConc n, MonadRef r n, Ord a) => (Either Failure a -> Maybe Discard) -> Way -> MemType -> ConcT r n a -> n (Set (Either Failure a))
- runSCT' :: (MonadConc n, MonadRef r n, NFData a) => Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)]
- resultsSet' :: (MonadConc n, MonadRef r n, Ord a, NFData a) => Way -> MemType -> ConcT r n a -> n (Set (Either Failure a))
- runSCTDiscard' :: (MonadConc n, MonadRef r n, NFData a) => (Either Failure a -> Maybe Discard) -> Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)]
- resultsSetDiscard' :: (MonadConc n, MonadRef r n, Ord a, NFData a) => (Either Failure a -> Maybe Discard) -> Way -> MemType -> ConcT r n a -> n (Set (Either Failure a))
- data Bounds = Bounds {}
- newtype PreemptionBound = PreemptionBound Int
- newtype FairBound = FairBound Int
- newtype LengthBound = LengthBound Int
- noBounds :: Bounds
- sctBound :: (MonadConc n, MonadRef r n) => MemType -> Bounds -> ConcT r n a -> n [(Either Failure a, Trace)]
- sctBoundDiscard :: (MonadConc n, MonadRef r n) => (Either Failure a -> Maybe Discard) -> MemType -> Bounds -> ConcT r n a -> n [(Either Failure a, Trace)]
- sctUniformRandom :: (MonadConc n, MonadRef r n, RandomGen g) => MemType -> g -> Int -> ConcT r n a -> n [(Either Failure a, Trace)]
- sctWeightedRandom :: (MonadConc n, MonadRef r n, RandomGen g) => MemType -> g -> Int -> Int -> ConcT r n a -> n [(Either Failure a, Trace)]
- sctUniformRandomDiscard :: (MonadConc n, MonadRef r n, RandomGen g) => (Either Failure a -> Maybe Discard) -> MemType -> g -> Int -> ConcT r n a -> n [(Either Failure a, Trace)]
- sctWeightedRandomDiscard :: (MonadConc n, MonadRef r n, RandomGen g) => (Either Failure a -> Maybe Discard) -> MemType -> g -> Int -> Int -> ConcT r n a -> n [(Either Failure a, Trace)]
Running Concurrent Programs
How to explore the possible executions of a concurrent program.
Since: 0.7.0.0
Systematically execute a program, trying all distinct executions within the bounds.
This corresponds to sctBound.
Since: 0.7.0.0
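For instance, a Way that tries every distinct execution of a terminating computation might be built like this (a sketch using only this module's exports):

```haskell
import Test.DejaFu.SCT (Way, systematically, noBounds)

-- Try every distinct execution, with no bounds at all.  This is only
-- safe for computations which always terminate.
allExecutions :: Way
allExecutions = systematically noBounds
```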
:: RandomGen g | |
=> g | The random generator to drive the scheduling. |
-> Int | The number of executions to try. |
-> Way |
Randomly execute a program, exploring a fixed number of executions.
Threads are scheduled by a weighted random selection, where weights are assigned randomly on thread creation.
This corresponds to sctWeightedRandom with weight re-use disabled, and is not guaranteed to find all distinct results (unlike systematically / sctBound).
Since: 0.7.0.0
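A sketch of building such a Way, assuming System.Random's mkStdGen for the generator:

```haskell
import System.Random (mkStdGen)
import Test.DejaFu.SCT (Way, randomly)

-- 100 weighted-random executions, with the PRNG seeded at 42 so runs
-- are reproducible.
hundredRandom :: Way
hundredRandom = randomly (mkStdGen 42) 100
```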
:: RandomGen g | |
=> g | The random generator to drive the scheduling. |
-> Int | The number of executions to try. |
-> Way |
Randomly execute a program, exploring a fixed number of executions.
Threads are scheduled by a uniform random selection.
This corresponds to sctUniformRandom, and is not guaranteed to find all distinct results (unlike systematically / sctBound).
Since: 0.7.0.0
:: RandomGen g | |
=> g | The random generator to drive the scheduling. |
-> Int | The number of executions to try. |
-> Int | The number of executions to use the thread weights for. |
-> Way |
Randomly execute a program, exploring a fixed number of executions.
Threads are scheduled by a weighted random selection, where weights are assigned randomly on thread creation.
This corresponds to sctWeightedRandom, and is not guaranteed to find all distinct results (unlike systematically / sctBound).
Since: 0.7.0.0
:: (MonadConc n, MonadRef r n) | |
=> Way | How to run the concurrent program. |
-> MemType | The memory model to use for non-synchronised CRef operations. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
Explore possible executions of a concurrent program according to the given Way.
The exact executions tried, and the order in which results are found, is unspecified and may change between releases.
Since: 1.0.0.0
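A minimal sketch of putting the pieces together in IO. The racey computation is made up for illustration; the imports assume the concurrency package's Control.Concurrent.Classy, and that the MemType constructors are in scope (they may need importing from Test.DejaFu):

```haskell
import Control.Concurrent.Classy
import Test.DejaFu.SCT

-- Two threads race to fill an MVar; which value wins depends on the
-- schedule.
racey :: MonadConc m => m Int
racey = do
  var <- newEmptyMVar
  _ <- fork (putMVar var 1)
  _ <- fork (putMVar var 2)
  readMVar var

main :: IO ()
main = do
  results <- runSCT (systematically noBounds) TotalStoreOrder racey
  mapM_ (print . fst) results
```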
:: (MonadConc n, MonadRef r n, Ord a) | |
=> Way | How to run the concurrent program. |
-> MemType | The memory model to use for non-synchronised CRef operations. |
-> ConcT r n a | The computation to run many times. |
-> n (Set (Either Failure a)) |
Return the set of results of a concurrent program.
Since: 1.0.0.0
Discarding variants
An Either Failure a -> Maybe Discard value can be used to selectively discard results.
Since: 0.7.1.0
DiscardTrace | Discard the trace but keep the result. The result will appear to have an empty trace. |
DiscardResultAndTrace | Discard the result and the trace. It will simply not be reported as a possible behaviour of the program. |
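For example, a discard function which reports only failing executions might look like this (a sketch; Failure is assumed to be importable from Test.DejaFu):

```haskell
import Test.DejaFu (Failure)
import Test.DejaFu.SCT (Discard(..))

-- Keep only failures (deadlocks, uncaught exceptions, ...): throw
-- away successful results entirely, and keep everything for failures.
keepFailures :: Either Failure a -> Maybe Discard
keepFailures (Right _) = Just DiscardResultAndTrace
keepFailures (Left _)  = Nothing
```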
:: (MonadConc n, MonadRef r n) | |
=> (Either Failure a -> Maybe Discard) | Selectively discard results. |
-> Way | How to run the concurrent program. |
-> MemType | The memory model to use for non-synchronised CRef operations. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
A variant of runSCT which can selectively discard results.
The exact executions tried, and the order in which results are found, is unspecified and may change between releases.
Since: 1.0.0.0
:: (MonadConc n, MonadRef r n, Ord a) | |
=> (Either Failure a -> Maybe Discard) | Selectively discard results. Traces are always discarded. |
-> Way | How to run the concurrent program. |
-> MemType | The memory model to use for non-synchronised CRef operations. |
-> ConcT r n a | The computation to run many times. |
-> n (Set (Either Failure a)) |
A variant of resultsSet which can selectively discard results.
Since: 1.0.0.0
Strict variants
runSCT' :: (MonadConc n, MonadRef r n, NFData a) => Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)] Source #
A strict variant of runSCT.
Demanding the result of this will force it to normal form, which may be more efficient in some situations.
The exact executions tried, and the order in which results are found, is unspecified and may change between releases.
Since: 1.0.0.0
resultsSet' :: (MonadConc n, MonadRef r n, Ord a, NFData a) => Way -> MemType -> ConcT r n a -> n (Set (Either Failure a)) Source #
A strict variant of resultsSet.
Demanding the result of this will force it to normal form, which may be more efficient in some situations.
Since: 1.0.0.0
runSCTDiscard' :: (MonadConc n, MonadRef r n, NFData a) => (Either Failure a -> Maybe Discard) -> Way -> MemType -> ConcT r n a -> n [(Either Failure a, Trace)] Source #
A strict variant of runSCTDiscard.
Demanding the result of this will force it to normal form, which may be more efficient in some situations.
The exact executions tried, and the order in which results are found, is unspecified and may change between releases.
Since: 1.0.0.0
resultsSetDiscard' :: (MonadConc n, MonadRef r n, Ord a, NFData a) => (Either Failure a -> Maybe Discard) -> Way -> MemType -> ConcT r n a -> n (Set (Either Failure a)) Source #
A strict variant of resultsSetDiscard.
Demanding the result of this will force it to normal form, which may be more efficient in some situations.
Since: 1.0.0.0
Bounded Partial-order Reduction
We can characterise the state of a concurrent computation by considering the ordering of dependent events. This is a partial order: independent events can be performed in any order without affecting the result, and so are not ordered.
Partial-order reduction is a technique for computing these partial orders, and only testing one total order for each partial order. This cuts down the amount of work to be done significantly. Bounded partial-order reduction is a further optimisation, which only considers schedules within some bound.
This module provides a runner combining pre-emption, fair, and length bounding:
- Pre-emption + fair bounding is useful for programs which use loop/yield control flows but are otherwise terminating.
- Pre-emption, fair + length bounding is useful for non-terminating programs, and is used by the testing functionality in Test.DejaFu.
See "Bounded Partial-order Reduction" (K. Coons, M. Musuvathi, K. McKinley) for more details.
Since: 0.2.0.0
newtype PreemptionBound Source #
BPOR using pre-emption bounding. This adds conservative backtracking points at the prior context switch whenever a non-conservative backtracking point is added, as alternative decisions can influence the reachability of different states.
See the BPOR paper for more details.
Since: 0.2.0.0
BPOR using fair bounding. This bounds the maximum difference between the number of yield operations different threads have performed.
See the BPOR paper for more details.
Since: 0.2.0.0
newtype LengthBound Source #
BPOR using length bounding. This bounds the maximum length (in terms of primitive actions) of an execution.
Since: 0.2.0.0
No bounds enabled. This forces the scheduler to just use partial-order reduction and sleep sets to prune the search space. This will ONLY work if your computation always terminates!
Since: 0.3.0.0
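For comparison, a custom Bounds value might be built like this. A sketch only: the record field names (boundPreemp, boundFair, boundLength) are assumptions about the Bounds record, so check the actual field list before use:

```haskell
import Test.DejaFu.SCT

-- Allow at most 2 pre-emptions and a yield-count difference of 5,
-- with no length bound.  (Field names assumed from the Bounds record.)
myBounds :: Bounds
myBounds = Bounds
  { boundPreemp = Just (PreemptionBound 2)
  , boundFair   = Just (FairBound 5)
  , boundLength = Nothing
  }
```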
:: (MonadConc n, MonadRef r n) | |
=> MemType | The memory model to use for non-synchronised CRef operations. |
-> Bounds | The combined bounds. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
SCT via BPOR.
Schedules are generated by running the computation with a deterministic scheduler with some initial list of decisions. At each step of execution, possible-conflicting actions are looked for, if any are found, "backtracking points" are added, to cause the events to happen in a different order in a future execution.
Note that unlike with non-bounded partial-order reduction, this may do some redundant work as the introduction of a bound can make previously non-interfering events interfere with each other.
The exact executions tried, and the order in which results are found, is unspecified and may change between releases.
Since: 1.0.0.0
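Used directly, it looks like this sketch, which assumes Failure and Trace are in scope (e.g. from Test.DejaFu) and specialises the computation to the IORef/IO instance:

```haskell
import Data.IORef (IORef)
import Test.DejaFu.SCT

-- Exhaustive exploration under total store order, with no bounds at
-- all (safe only for computations which always terminate).
exploreTSO :: ConcT IORef IO a -> IO [(Either Failure a, Trace)]
exploreTSO = sctBound TotalStoreOrder noBounds
```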
:: (MonadConc n, MonadRef r n) | |
=> (Either Failure a -> Maybe Discard) | Selectively discard results. |
-> MemType | The memory model to use for non-synchronised CRef operations. |
-> Bounds | The combined bounds. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
A variant of sctBound which can selectively discard results.
The exact executions tried, and the order in which results are found, is unspecified and may change between releases.
Since: 1.0.0.0
Random Scheduling
By sacrificing completeness, testing of a large concurrent system can be greatly sped up. Counter-intuitively, random scheduling has better bug-finding behaviour than simply executing a program "for real" many times, perhaps because a random scheduler is more chaotic than the real scheduler.
:: (MonadConc n, MonadRef r n, RandomGen g) | |
=> MemType | The memory model to use for non-synchronised CRef operations. |
-> g | The random number generator. |
-> Int | The number of executions to perform. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
SCT via uniform random scheduling.
Schedules are generated by selecting uniformly at random from the runnable threads at each step.
This is not guaranteed to find all distinct results.
Since: 1.0.0.0
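A sketch of a reusable fuzzing helper built on this function, again assuming Failure and Trace are in scope and specialising to IORef/IO:

```haskell
import Data.IORef (IORef)
import System.Random (mkStdGen)
import Test.DejaFu.SCT

-- 250 uniform-random executions under sequential consistency, with a
-- fixed seed so failures are reproducible.
fuzz :: ConcT IORef IO a -> IO [(Either Failure a, Trace)]
fuzz = sctUniformRandom SequentialConsistency (mkStdGen 0) 250
```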
:: (MonadConc n, MonadRef r n, RandomGen g) | |
=> MemType | The memory model to use for non-synchronised CRef operations. |
-> g | The random number generator. |
-> Int | The number of executions to perform. |
-> Int | The number of executions to use the same set of weights for. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
SCT via weighted random scheduling.
Schedules are generated by assigning to each new thread a random weight. Threads are then scheduled by a weighted random selection.
This is not guaranteed to find all distinct results.
Since: 1.0.0.0
sctUniformRandomDiscard Source #
:: (MonadConc n, MonadRef r n, RandomGen g) | |
=> (Either Failure a -> Maybe Discard) | Selectively discard results. |
-> MemType | The memory model to use for non-synchronised CRef operations. |
-> g | The random number generator. |
-> Int | The number of executions to perform. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
A variant of sctUniformRandom which can selectively discard results.
This is not guaranteed to find all distinct results.
Since: 1.0.0.0
sctWeightedRandomDiscard Source #
:: (MonadConc n, MonadRef r n, RandomGen g) | |
=> (Either Failure a -> Maybe Discard) | Selectively discard results. |
-> MemType | The memory model to use for non-synchronised CRef operations. |
-> g | The random number generator. |
-> Int | The number of executions to perform. |
-> Int | The number of executions to use the same set of weights for. |
-> ConcT r n a | The computation to run many times. |
-> n [(Either Failure a, Trace)] |
A variant of sctWeightedRandom which can selectively discard results.
This is not guaranteed to find all distinct results.
Since: 1.0.0.0