---
layout: post
title: LiquidHaskell Caught Telling Lies!
date: 2013-11-23 16:12
comments: true
tags:
  - termination
author: Ranjit Jhala and Niki Vazou
published: true
demo: TellingLies.hs
---

One crucial goal of a type system is to provide the guarantee, memorably phrased by Milner as *well-typed programs don't go wrong*. The whole point of LiquidHaskell (and related systems) is to provide the above guarantee for expanded notions of "going wrong". All this time, we've claimed (and believed) that LiquidHaskell provided such a guarantee. We were wrong. LiquidHaskell tells lies.
```haskell
{-@ LIQUID "--no-termination" @-}

module TellingLies where

import Language.Haskell.Liquid.Prelude (liquidError)

divide  :: Int -> Int -> Int
foo     :: Int -> Int
explode :: Int
```
To catch LiquidHaskell red-handed, we require

1. a notion of **going wrong**,
2. a **program** that clearly goes wrong, and, the smoking gun,
3. a **lie** from LiquidHaskell that the program is safe.

The Going Wrong
---------------

Let's keep things simple with an old-fashioned `div`-ision operator. A division by zero would clearly be *going wrong*. To alert LiquidHaskell to this possibility, we encode "not going wrong" with the precondition that the denominator be non-zero.
```haskell
{-@ divide :: n:Int -> d:{v:Int | v /= 0} -> Int @-}
divide n 0 = liquidError "no you didn't!"
divide n d = n `div` d
```
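To see the contract in action, here is a minimal sketch with two call sites of our own (`ok` and `bad` are hypothetical, not part of the original module); LiquidHaskell should accept the first and reject the second:

```haskell
-- Hypothetical call sites, for illustration only.
ok :: Int
ok = divide 2013 2      -- accepted: the divisor 2 is provably non-zero

-- bad :: Int
-- bad = divide 2013 0  -- rejected: LiquidHaskell cannot prove 0 /= 0
```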
The Program
-----------

Now, consider the function `foo`.
```haskell
{-@ foo :: n:Int -> {v:Nat | v < n} @-}
foo n | n > 0     = n - 1
      | otherwise = foo n
```
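Operationally, a quick GHCi session (a hypothetical transcript, not from the original post) illustrates both behaviors:

```
λ> foo 5
4                -- a Nat strictly less than 5
λ> foo 0
^CInterrupted.   -- the `otherwise` branch recurses forever
```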
Now, `foo` should only be called with strictly positive values, in which case it returns a `Nat` that is strictly smaller than the input. When called with `0` or a negative input, the function diverges. Note that the signature of `foo` may look suspicious, but it is nevertheless legitimate: *when* the function returns an output, that output is indeed a `Nat` that is *strictly less than* the input parameter `n`. Hence, LiquidHaskell happily checks that `foo` does indeed satisfy its given type.

So far, nothing has gone wrong, either in the program or with LiquidHaskell. But consider this innocent little function:
```haskell
explode = let z = 0
          in  (\x -> (2013 `divide` z)) (foo z)
```
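Before we point fingers, it helps to trace the evaluation by hand. Here is a sketch (our own rendering) of the call-by-need reduction:

```haskell
-- Sketch of the lazy reduction of `explode`; the argument `foo z` is
-- bound to `x`, but `x` is never used, so `foo z` is never forced.
--
--   explode
--   ==> (\x -> 2013 `divide` z) (foo z)   -- with z = 0
--   ==> 2013 `divide` 0                   -- (foo z) discarded, unevaluated
--   ==> liquidError "no you didn't!"      -- crash!
```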
Thanks to *lazy evaluation*, the call to `foo` is ignored, so evaluating `explode` leads to a crash! Ugh!

The Lie
-------

However, LiquidHaskell produces a Pollyannaish prognosis and cheerfully declares the program *safe*. Huh?

Well, LiquidHaskell deduces that

* `z == 0` from the let-binding,
* `x : Nat` from the output type for `foo`,
* `x < z` from the output type for `foo`.

Of course, no such `x` exists! Or, rather, the SMT solver reasons
```
    z == 0 && x >= 0 && x < z  =>  z /= 0
```
as the hypotheses are inconsistent. In other words, LiquidHaskell deduces that the call to `divide` happens in an *impossible* environment, i.e. is dead code, and hence, that the program is safe.

In our defence, the above sunny prognosis is not *totally misguided*. Indeed, if Haskell were like ML and had *strict evaluation*, then the program would indeed be safe: we would *not* go wrong, i.e. would not crash with a divide-by-zero, because the call `foo z` would diverge before the division was ever attempted. But of course, that's a pretty lame excuse, since Haskell doesn't have strict semantics.

So it looks like LiquidHaskell (and hence, we) have been caught red-handed. Well then, is there a way to prevent LiquidHaskell from telling lies? That is, can we get Milner's *well-typed programs don't go wrong* guarantee under lazy evaluation? Thankfully, there is.