Goal: A reasonably efficient, easy-to-understand modern SAT solver. I want it
as architecturally simple as the description in /Abstract DPLL and Abstract
DPLL Modulo Theories/ is conceptually, while retaining some efficient
techniques from modern solvers.
Current state: decision heuristic/code cleanup/tests.
After some investigating, mad coding, and cursing, First UIP clause learning
has been implemented. For conceptual clarity, though, it is implemented in
terms of an explicit conflict graph, explicit dominator calculation, and
explicit cuts. Profiling shows that for conflict-heavy problems,
conflict-clause generation is no more of a bottleneck than boolean constraint
propagation. This can and will be improved later.
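The dominator-based formulation can be pictured roughly as follows. This is a minimal sketch, not the actual code: `Lit` as `Int`, the successor-map graph representation, and the names `allPaths`, `dominators`, and `firstUIP` are all assumptions made for illustration.

```haskell
import qualified Data.Map as M
import Data.List (foldl1', intersect)

type Lit = Int

-- Conflict graph as a successor map: an edge from l to l' means l was
-- part of the reason (antecedent) for l'.
type ConflictGraph = M.Map Lit [Lit]

-- Every path from the current decision literal to the conflict node.
-- Exponential in the worst case, but fine for a conceptual sketch.
allPaths :: ConflictGraph -> Lit -> Lit -> [[Lit]]
allPaths g from to
  | from == to = [[to]]
  | otherwise  = [ from : p | next <- M.findWithDefault [] from g
                            , p    <- allPaths g next to ]

-- Dominators of the conflict: nodes lying on *every* such path.
-- intersect preserves the order of its first argument, so the result
-- is ordered from the decision literal toward the conflict.
dominators :: ConflictGraph -> Lit -> Lit -> [Lit]
dominators g dec confl = case allPaths g dec confl of
  []    -> []
  paths -> foldl1' intersect paths

-- The first UIP is the dominator closest to the conflict, excluding
-- the conflict node itself.
firstUIP :: ConflictGraph -> Lit -> Lit -> Maybe Lit
firstUIP g dec confl = case filter (/= confl) (dominators g dec confl) of
  [] -> Nothing
  ds -> Just (last ds)
```

On a diamond 1 -> {2,3} -> 4 -> conflict, every path passes through 4, so 4 is the first UIP even though the decision literal 1 also dominates the conflict.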
backJump appears to work now. I used to have both Just and Nothing cases
there, but there was no reason to, since you always reverse some past
decision (possibly the most recent one). The actual problem had to do with
DecisionMap: instead of keeping around the implications of a decision
literal (those arising from unit propagation *and* from reversed decisions
at higher decision levels), I was throwing them away. This was bad.
Anyway, now it appears to work properly.
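The fix can be pictured with a toy DecisionMap. The real type is surely different; `Level`, the pair layout, and `backJumpTo` are made up here for illustration. The point is that each level keeps its decision literal together with everything implied at that level, and backjumping discards only the levels above the target.

```haskell
import qualified Data.Map as M

type Lit   = Int
type Level = Int

-- Hypothetical DecisionMap: the decision literal at each level, paired
-- with the literals implied at that level (unit propagations and
-- reversed decisions pulled down from higher levels). Throwing the
-- second component away was the bug.
type DecisionMap = M.Map Level (Lit, [Lit])

-- Backjump: drop every level above the target, keeping the surviving
-- levels' implications intact.
backJumpTo :: Level -> DecisionMap -> DecisionMap
backJumpTo target = M.filterWithKey (\lvl _ -> lvl <= target)
```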
IT IS ALIVE
I do need the bad variables to be kept around, but I should only update the
list after I'm forced to backtrack *all the way to decision level 0*. Only
then is a variable bad. The Chaff paper makes it sound like you should mark
a variable as /tried both ways/ the *first* time you see that, no matter the
decision level.
On the other hand, why do I need a bad variable list at all? The DPLL paper
doesn't imply that I should. Hmm.
For some reason, the unsat (or fail condition, in the DPLL paper) was not
sufficient: I was trying out all possible assignments but in the end I didn't
get a conflict, just no more options. So I added an or to test for that case
in unsat. I'm still getting assignments under which some clauses are
undefined, though it appears they can always be extended to proper,
satisfying assignments. But why does the solver stop before then?
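The strengthened test amounts to something like this sketch. Assumed representations: literals as `Int`s, an `Assignment` as the list of literals taken to be true, and `candidates` as the decision options still open; `unsat` and `falsified` are illustrative names, not the real code.

```haskell
type Lit        = Int
type Clause     = [Lit]
type Assignment = [Lit]  -- literals currently assumed true

-- A clause is falsified when every literal in it is assigned false.
falsified :: Assignment -> Clause -> Bool
falsified a = all (\l -> negate l `elem` a)

-- Original fail condition: some clause is falsified. The added
-- disjunct covers the "no more options" case: nothing left to decide,
-- yet some clause is still not satisfied.
unsat :: Assignment -> [Clause] -> [Lit] -> Bool
unsat a clauses candidates =
  any (falsified a) clauses
    || (null candidates && not (all satisfied clauses))
  where
    satisfied c = any (`elem` a) c
```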
All the coding time I've spent on this lately has gone into figuring out why
some inputs cause divergence. I finally figured out how to easily print out the
assignment after each step, and indeed the same decisions were being made
over, and over, and over again. So I decided to keep a bad list of literals
which have been tried both ways, without success, so that decLit never decides
based on one of those literals. Now it terminates, but the models are (at
least) non-total, and (possibly) simply incorrect. This leads me to believe
that either (1) the DPLL paper is wrong about not having to keep track of
whether you've tried a particular variable both ways, (2) I misread the
paper, or (3) I implemented incorrectly what is in the paper. Hopefully
before I die I will know which of the three is the case.
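The bad-list filter in decLit amounts to something like the following. Again a sketch: the real decLit presumably consults a decision heuristic; the Assignment-as-list-of-true-literals and variable-as-`Int` encodings are assumptions.

```haskell
type Lit        = Int
type Assignment = [Lit]

-- Pick the first variable that is unassigned and not on the bad list
-- (i.e. not already tried both ways without success); decide it
-- positively.
decLit :: Assignment -> [Int] -> [Int] -> Maybe Lit
decLit a bad vars =
  case [ v | v <- vars, not (assigned v), v `notElem` bad ] of
    []      -> Nothing
    (v : _) -> Just v
  where
    assigned v = v `elem` map abs a
```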
Profiling reveals instance Model Lit Assignment accounts for 74% of time, and
instance Model Lit Clause Assignment accounts for 12% of time. These occur in
the call graph under unitPropLit. So clearly I need a *better way of
searching for the next unit literal*.
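For reference, the naive search the profile is pointing at looks roughly like this (a sketch; `unitLit` and `findUnit` are illustrative names). Every propagation step rescans every clause, which is exactly what the watched-literal schemes in the Chaff and MiniSat papers below avoid.

```haskell
import Data.Maybe (listToMaybe, mapMaybe)

type Lit        = Int
type Clause     = [Lit]
type Assignment = [Lit]  -- literals currently assumed true

-- A clause is unit when exactly one of its literals is unassigned and
-- all the others are false under the assignment.
unitLit :: Assignment -> Clause -> Maybe Lit
unitLit a c = case filter (not . assigned) c of
  [l] | all isFalse (filter (/= l) c) -> Just l
  _                                   -> Nothing
  where
    assigned l = l `elem` a || negate l `elem` a
    isFalse l  = negate l `elem` a

-- Linear scan over all clauses on every call: the hot spot.
findUnit :: Assignment -> [Clause] -> Maybe Lit
findUnit a = listToMaybe . mapMaybe (unitLit a)
```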
''Abstract DPLL and Abstract DPLL Modulo Theories'' by Nieuwenhuis, Oliveras,
and Tinelli
''Chaff: Engineering an Efficient SAT Solver'' by Moskewicz, Madigan, Zhao,
Zhang, and Malik
''An Extensible SAT-solver'' by Niklas Eén and Niklas Sörensson
''Efficient Conflict Driven Learning in a Boolean Satisfiability Solver'' by
Zhang, Madigan, Moskewicz, and Malik
''SAT-MICRO: petit mais costaud!'' by Conchon, Kanig, and Lescuyer