-- Hoogle documentation, generated by Haddock
-- See Hoogle, http://www.haskell.org/hoogle/
-- | Log-domain floating point numbers
--
-- This module presents a type for storing numbers in the log-domain. The
-- main reason for doing this is to prevent underflow when multiplying
-- many probabilities as is done in Hidden Markov Models. It is also
-- helpful for preventing overflow.
@package logfloat
@version 0.14.0
-- | Hugs (September 2006) has buggy definitions for isNaN and
-- isInfinite on Float and Double. If this module is run through
-- CPP with the macro HUGS set to a value no larger than
-- 200609, then correct definitions are used. Otherwise the Prelude
-- definitions are used (which should be correct for other compilers).
-- For example, run Hugs with
--
--
-- hugs -F'cpp -P -DHUGS=200609' Hugs/RealFloat.hs
--
--
-- N.B. The corrected definitions have only been tested to work for
-- Float and Double. These definitions should probably not
-- be used for other RealFloat types.
--
-- This installation was compiled with the normal Prelude version.
-- This should be correct.
module Hugs.RealFloat
isInfinite :: RealFloat a => a -> Bool
isNaN :: RealFloat a => a -> Bool
-- | The Prelude's Ord class for dealing with ordered types is often
-- onerous to use because it requires Eq as well as a total
-- ordering. While such total orderings are common, partial orderings are
-- more so. This module presents a class for partially ordered types.
module Data.Number.PartialOrd
-- | This class defines a partially ordered type. The method names were
-- chosen so as not to conflict with Ord and Eq. We use
-- Maybe instead of defining new types PartialOrdering
-- and FuzzyBool because this way should make the class easier
-- to use.
--
-- Minimum complete definition: cmp
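--
-- As an illustration only (the type and instance below are hypothetical,
-- not part of this package), here is what an instance for finite sets
-- ordered by inclusion could look like. Since cmp is the minimal
-- complete definition, the remaining methods get their default
-- implementations; depending on how the package's generic Ord-based
-- instance is declared, overlapping-instance pragmas may be needed.
--
--
-- import qualified Data.Set as Set
--
-- newtype Inclusion a = Inclusion (Set.Set a)
--
-- instance Ord a => PartialOrd (Inclusion a) where
--     cmp (Inclusion x) (Inclusion y)
--         | x == y               = Just EQ
--         | x `Set.isSubsetOf` y = Just LT
--         | y `Set.isSubsetOf` x = Just GT
--         | otherwise            = Nothing
--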
class PartialOrd a
-- | like compare
cmp :: PartialOrd a => a -> a -> Maybe Ordering
-- | like (>)
gt :: PartialOrd a => a -> a -> Maybe Bool
-- | like (>=)
ge :: PartialOrd a => a -> a -> Maybe Bool
-- | like (==)
eq :: PartialOrd a => a -> a -> Maybe Bool
-- | like (/=)
ne :: PartialOrd a => a -> a -> Maybe Bool
-- | like (<=)
le :: PartialOrd a => a -> a -> Maybe Bool
-- | like (<)
lt :: PartialOrd a => a -> a -> Maybe Bool
-- | like max. The default definition returns the left argument when
-- they're equal.
maxPO :: PartialOrd a => a -> a -> Maybe a
-- | like min. The default definition returns the left argument when
-- they're equal.
minPO :: PartialOrd a => a -> a -> Maybe a
infix 4 `gt`
infix 4 `ge`
infix 4 `eq`
infix 4 `ne`
infix 4 `le`
infix 4 `lt`
infix 4 `maxPO`
infix 4 `minPO`
-- | Like Data.Ord.comparing. Helpful in conjunction with the
-- xxxBy family of functions from Data.List
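--
-- A small illustration (assuming the Double instance treats NaN as
-- incomparable, as described in Data.Number.Transfinite):
--
--
-- comparingPO fst (2.0, "a") (1.0, "b")  ==  Just GT
-- comparingPO fst (0/0, "a") (1.0, "b")  ==  Nothing
--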
comparingPO :: PartialOrd b => (a -> b) -> a -> a -> Maybe Ordering
instance GHC.Classes.Ord a => Data.Number.PartialOrd.PartialOrd a
instance Data.Number.PartialOrd.PartialOrd GHC.Types.Float
instance Data.Number.PartialOrd.PartialOrd GHC.Types.Double
-- | This module presents a type class for numbers which have
-- representations for transfinite values. The idea originated from the
-- IEEE-754 floating-point special values, used by
-- Data.Number.LogFloat. However not all Fractional types
-- necessarily support transfinite values. In particular, Ratio
-- types including Rational do not have portable representations.
--
-- For the Glasgow compiler (GHC 6.8.2), GHC.Real defines
-- 1%0 and 0%0 as representations for infinity
-- and notANumber, but most operations on them will raise
-- exceptions. If toRational is used on an infinite floating
-- value, the result is a rational with a numerator sufficiently large
-- that it will overflow when converted back to a Double. If
-- used on NaN, the result would buggily convert back as
-- negativeInfinity. For more discussion on why this approach is
-- problematic, see the links in this module's Haddock documentation.
--
-- Hugs (September 2006) stays closer to the haskell98 spec and offers no
-- way of constructing those values, raising arithmetic overflow errors
-- if attempted.
module Data.Number.Transfinite
-- | Many numbers are not Bounded; yet, even though they can
-- represent arbitrarily large values, they are not necessarily able to
-- represent transfinite values such as infinity itself. This class is
-- for types which are capable of representing such values. Notably, this
-- class does not require the type to be Fractional nor
-- Floating since integral types could also have representations
-- for transfinite values. By popular demand the Num restriction
-- has been lifted as well, due to complications of defining Show
-- or Eq for some types.
--
-- In particular, this class extends the ordered projection to have a
-- maximum value infinity and a minimum value
-- negativeInfinity, as well as an exceptional value
-- notANumber. All the natural laws regarding infinity
-- and negativeInfinity should pertain. (Some of these are
-- discussed below.)
--
-- Hugs (September 2006) has buggy Prelude definitions for isNaN
-- and isInfinite on Float and Double. This module provides
-- correct definitions, so long as Hugs.RealFloat is compiled
-- correctly.
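--
-- For instance, at type Double these laws specialize to the familiar
-- IEEE-754 behaviors (an informal sketch):
--
--
-- infinity + 1     ==  (infinity :: Double)
-- negate infinity  ==  (negativeInfinity :: Double)
-- isNaN (infinity - infinity :: Double)
--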
class (PartialOrd a) => Transfinite a
-- | A transfinite value which is greater than all finite values. Adding or
-- subtracting any finite value is a no-op. As is multiplying by any
-- non-zero positive value (including infinity), and dividing by
-- any positive finite value. Also obeys the law negate infinity =
-- negativeInfinity with all appropriate ramifications.
infinity :: Transfinite a => a
-- | A transfinite value which is less than all finite values. Obeys all
-- the same laws as infinity with the appropriate changes for
-- the sign difference.
negativeInfinity :: Transfinite a => a
-- | An exceptional transfinite value for dealing with undefined results
-- when manipulating infinite values. The following operations must
-- return notANumber, where inf is any value which
-- isInfinite:
--
--
-- infinity + negativeInfinity
-- negativeInfinity + infinity
-- infinity - infinity
-- negativeInfinity - negativeInfinity
-- inf * 0
-- 0 * inf
-- inf / inf
-- inf `div` inf
-- 0 / 0
-- 0 `div` 0
--
--
-- Additionally, any mathematical operations on notANumber must
-- also return notANumber, and any equality or ordering
-- comparison on notANumber must return False
-- (violating the law of the excluded middle, often assumed but not
-- required for Eq; thus, eq and ne are preferred
-- over (==) and (/=)). Since it returns false for
-- equality, there may be more than one machine representation of this
-- value.
notANumber :: Transfinite a => a
-- | Return true for both infinity and negativeInfinity,
-- false for all other values.
isInfinite :: Transfinite a => a -> Bool
-- | Return true only for notANumber.
isNaN :: Transfinite a => a -> Bool
-- | Since the normal log throws an error on zero, we have to
-- redefine it in order for things to work right. Arguing from limits we
-- can see that log 0 == negativeInfinity. Newer versions of GHC
-- have this behavior already, but older versions and Hugs do not.
--
-- This function will raise an error when taking the log of negative
-- numbers, rather than returning notANumber as the newer GHC
-- implementation does. The reason being that typically this is a logical
-- error, and notANumber allows the error to propagate silently.
--
-- In order to improve portability, the Transfinite class is
-- required to indicate that the Floating type does in fact have a
-- representation for negative infinity. Both native floating types
-- (Double and Float) are supported. If you define your own
-- instance of Transfinite, verify the above equation holds for
-- your 0 and negativeInfinity. If it doesn't, then you
-- should avoid importing our log and will probably want
-- converters to handle the discrepancy.
--
-- For GHC, this version of log has rules for fusion with
-- exp. These can give different behavior by preventing overflow
-- to infinity and preventing errors for taking the logarithm of
-- negative values. For Double and Float they can also give
-- different answers due to eliminating floating point fuzz. The rules
-- strictly improve mathematical accuracy, however they should be noted
-- in case your code depends on the implementation details.
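--
-- A small usage sketch (the helper name is only illustrative); note
-- that the Prelude's log must be hidden so that this version is the
-- one in scope:
--
--
-- import Prelude hiding (log, isNaN, isInfinite)
-- import Data.Number.Transfinite
--
-- logOfZero :: Double
-- logOfZero = log 0     -- negativeInfinity, rather than an error
--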
log :: (Floating a, Transfinite a) => a -> a
instance Data.Number.Transfinite.Transfinite GHC.Types.Double
instance Data.Number.Transfinite.Transfinite GHC.Types.Float
-- | This module presents a type class for generic conversion between
-- numeric types, generalizing realToFrac in order to overcome
-- problems with pivoting through Rational.
module Data.Number.RealToFrac
-- | The realToFrac function is defined to pivot through a
-- Rational according to the haskell98 spec. This is non-portable
-- and problematic as discussed in Data.Number.Transfinite. Since
-- there is resistance to breaking from the spec, this class defines a
-- reasonable variant which deals with transfinite values appropriately.
--
-- There is a generic instance from any Transfinite Real to any
-- Transfinite Fractional, using checks to ensure correctness. GHC has
-- specialized versions for some types which use primitive converters
-- instead, for large performance gains. (These definitions are hidden
-- from other compilers via CPP.) Due to a bug in Haddock the specialized
-- instances are shown twice and the generic instance isn't shown at all.
-- Since the instances are overlapped, you'll need to give type
-- signatures if the arguments to realToFrac are polymorphic.
-- There's also a generic instance for any Real Fractional type to
-- itself, thus if you write any generic instances beware of incoherence.
--
-- If any of these restrictions (CPP, GHC-only optimizations,
-- OverlappingInstances) are onerous to you, contact the maintainer (we
-- like patches). Note that this does work for Hugs with suitable
-- options (e.g. hugs -98 +o -F'cpp -P'). However, Hugs doesn't
-- allow IncoherentInstances nor does it allow diamonds with
-- OverlappingInstances, which restricts the ability to add
-- additional generic instances.
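--
-- A small sketch of the intended usage (the helper name is made up for
-- illustration; the Prelude's realToFrac is hidden so that this class's
-- version is in scope, and the result type is given explicitly, as
-- discussed above):
--
--
-- import Prelude hiding (realToFrac)
-- import Data.Number.RealToFrac
--
-- nanStaysNaN :: Bool
-- nanStaysNaN = isNaN (realToFrac (0/0 :: Double) :: Float)
--     -- True: NaN converts to NaN, rather than buggily becoming
--     -- negative infinity as with the Rational pivot described in
--     -- Data.Number.Transfinite
--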
class (Real a, Fractional b) => RealToFrac a b
realToFrac :: RealToFrac a b => a -> b
instance (GHC.Real.Real a, GHC.Real.Fractional a) => Data.Number.RealToFrac.RealToFrac a a
instance (GHC.Real.Real a, Data.Number.Transfinite.Transfinite a, GHC.Real.Fractional b, Data.Number.Transfinite.Transfinite b) => Data.Number.RealToFrac.RealToFrac a b
instance Data.Number.RealToFrac.RealToFrac GHC.Types.Int GHC.Types.Float
instance Data.Number.RealToFrac.RealToFrac GHC.Types.Int GHC.Types.Double
instance Data.Number.RealToFrac.RealToFrac GHC.Num.Integer.Integer GHC.Types.Float
instance Data.Number.RealToFrac.RealToFrac GHC.Num.Integer.Integer GHC.Types.Double
instance Data.Number.RealToFrac.RealToFrac GHC.Types.Float GHC.Types.Double
instance Data.Number.RealToFrac.RealToFrac GHC.Types.Double GHC.Types.Float
-- | This module provides implementations for computing various logarithmic
-- and exponential functions without losing precision (as the naive
-- implementations do). These are the "raw" implementations; i.e., sans
-- newtypes and other conveniences. Since the lack of newtypes means we
-- can't rely on types to clarify things, we use the traditional baroque
-- names for things. The design considerations behind (most of) these
-- implementations are documented at:
-- https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf
--
-- In base-4.9.0.0 GHC added some of these to the Floating class
-- exported from Numeric. Alas, they provide default definitions
-- using the naive implementations, so one can't really rely on the
-- Floating methods being precision preserving. Overall, the
-- specific instance for Double looks fine (though they use
-- different cutoffs for log1pexp for some reason); but it's easy
-- enough to reimplement here, to make absolutely sure we're getting the
-- right thing.
--
-- Since: 0.14.0
module Data.Number.LogFloat.Raw
-- | Compute exp x - 1 without losing precision.
--
-- Standard C libraries provide a special definition for expm1
-- which is more accurate than doing the naive thing, especially for very
-- small arguments.
--
-- This installation was compiled to use the FFI version.
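--
-- For example (informally): for tiny x, exp x rounds to 1, so the
-- naive formulation loses the value entirely:
--
--
-- exp 1e-20 - 1   ==  0.0   -- naive version
-- expm1 1e-20               -- ≈ 1e-20
--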
expm1 :: Double -> Double
-- | Compute log (1 + x) without losing precision.
--
-- Standard C libraries provide a special definition for this function,
-- which is more accurate than doing the naive thing, especially for very
-- small arguments. For example, the naive version underflows around
-- 2 ** -53, whereas the specialized version underflows around
-- 2 ** -1074.
--
-- N.B. The statistics:Statistics.Math module provides a pure
-- Haskell implementation of log1p for those who are interested.
-- We do not copy it here because it relies on the vector
-- package which is non-portable. If there is sufficient interest, a
-- portable variant of that implementation could be made. Contact the
-- maintainer if the FFI and naive implementations are insufficient for
-- your needs.
--
-- This installation was compiled to use the FFI version.
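--
-- For example (informally):
--
--
-- log (1 + 1e-20)   ==  0.0   -- the addition rounds 1 + 1e-20 to 1
-- log1p 1e-20                 -- ≈ 1e-20
--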
log1p :: Double -> Double
-- | Compute log (1 - exp x) without losing precision.
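--
-- Under the stated formula the argument should be non-positive. For
-- example (informally): for x very close to 0, the naive formulation
-- loses everything to rounding,
--
--
-- log (1 - exp (-1e-20))   -- -Infinity: 1 - exp (-1e-20) rounds to 0
-- log1mexp (-1e-20)        -- ≈ -46.05, i.e. log 1e-20
--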
log1mexp :: Double -> Double
-- | Compute log (1 + exp x) without losing precision.
-- Algebraically this is 0 ⊔ x, which is the log-domain's
-- analogue of 1 + x.
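--
-- For example (informally): the naive formulation overflows for large
-- arguments,
--
--
-- log (1 + exp 1000)   -- Infinity: exp 1000 overflows
-- log1pexp 1000        -- ≈ 1000
--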
log1pexp :: Double -> Double
-- | O(n). Log-domain summation, aka: (log . sum . fmap
-- exp). Algebraically this is ⨆ xs, which is the
-- log-domain equivalent of ∑ xs.
--
-- N.B., this function requires two passes over the input. Thus,
-- it is not amenable to list fusion, and hence will use a lot of memory
-- when summing long lists.
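--
-- A minimal sketch of the usual max-shifting technique (the name is
-- hypothetical, corner cases such as empty or infinite inputs are
-- ignored, and this need not match the library's actual
-- implementation); it also shows where the two passes come from:
--
--
-- logSumExpSketch :: [Double] -> Double
-- logSumExpSketch xs = m + log (sum [exp (x - m) | x <- xs])
--     where m = maximum xs   -- first pass; the sum is the second
--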
logSumExp :: [Double] -> Double
-- | O(n). Floating-point summation, via Kahan's algorithm. This is
-- nominally equivalent to sum, but greatly mitigates the problem
-- of losing precision.
--
-- N.B., this only requires a single pass over the data; but we
-- use a strict left fold for performance, so it's still not amenable to
-- list fusion.
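--
-- A minimal sketch of Kahan's compensated summation (the name is
-- hypothetical and a production version would force the accumulator
-- components, but the algebra is the standard algorithm):
--
--
-- import Data.List (foldl')
--
-- kahanSumSketch :: [Double] -> Double
-- kahanSumSketch = fst . foldl' step (0, 0)
--     where
--     step (total, comp) x =
--         let y      = x - comp              -- apply the running correction
--             total' = total + y
--             comp'  = (total' - total) - y  -- error introduced by this addition
--         in (total', comp')
--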
kahanSum :: [Double] -> Double
-- | O(n). Log-domain softmax, aka: (fmap log . softmax).
--
-- N.B., this requires three passes over the data: two for the
-- logSumExp, and a third for the normalization itself. Thus, it
-- is not amenable to list fusion, and hence will use a lot of memory
-- when processing long lists.
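--
-- A minimal sketch (the name is hypothetical; it need not match the
-- actual implementation), which also shows where the third pass comes
-- from:
--
--
-- logSoftmaxSketch :: [Double] -> [Double]
-- logSoftmaxSketch xs = map (subtract z) xs  -- third pass: normalize
--     where z = logSumExp xs                 -- two passes, as noted above
--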
logSoftmax :: [Double] -> [Double]
-- | O(n). Normal-domain softmax:
--
--
-- softmax xs = [ exp x / sum [ exp y | y <- xs ] | x <- xs ]
--
--
-- N.B., this requires three passes over the data: same as
-- logSoftmax.
softmax :: [Double] -> [Double]
-- | The logistic function; aka, the inverse of logit.
--
--
-- sigmoid x = 1 / (1 + exp (-x))
-- sigmoid x = exp x / (exp x + 1)
-- sigmoid x = (1 + tanh (x/2)) / 2
--
sigmoid :: Double -> Double
-- | The quantile function; aka, the inverse of sigmoid.
--
--
-- logit x = log (x / (1 - x))
-- logit x = 2 * atanh (2*x - 1)
--
logit :: Double -> Double
-- | A variant of logit for when the argument is already in the
-- log-domain; hence, logitExp = logit . exp
logitExp :: Double -> Double
-- | This module presents a type for storing numbers in the log-domain. The
-- main reason for doing this is to prevent underflow when multiplying
-- many small probabilities as is done in Hidden Markov Models and other
-- statistical models often used for natural language processing. The
-- log-domain also helps prevent overflow when multiplying many large
-- numbers. In rare cases it can speed up numerical computation (since
-- addition is faster than multiplication, though logarithms are
-- exceptionally slow), but the primary goal is to improve accuracy of
-- results. A secondary goal has been to maximize efficiency since these
-- computations are frequently done within a O(n^3) loop.
--
-- The LogFloat of this module is restricted to non-negative
-- numbers for efficiency's sake; see Data.Number.LogFloat.Signed
-- for signed log-domain calculations.
module Data.Number.LogFloat
-- | A LogFloat is just a Double with a special
-- interpretation. The logFloat function is presented instead of
-- the constructor, in order to ensure semantic conversion. At present
-- the Show instance will convert back to the normal-domain, and
-- hence will underflow at that point. This behavior may change in the
-- future. At present, the Read instance parses things in the
-- normal-domain and then converts them to the log-domain. Again, this
-- behavior may change in the future.
--
-- Because logFloat performs the semantic conversion, we can use
-- operators which say what we mean rather than saying what we're
-- actually doing to the underlying representation. That is, equivalences
-- like the following are true[1] thanks to type-class overloading:
--
--
-- logFloat (p + q) == logFloat p + logFloat q
-- logFloat (p * q) == logFloat p * logFloat q
--
--
-- (Do note, however, that subtraction can and negation will throw
-- errors, since LogFloat can only represent the non-negative half
-- of Double. Num is the wrong abstraction to put at the
-- bottom of the numeric type-class hierarchy; but alas, we're stuck with
-- it.)
--
-- Performing operations in the log-domain is cheap, prevents underflow,
-- and is otherwise very nice for dealing with minuscule probabilities.
-- However, crossing into and out of the log-domain is expensive and
-- should be avoided as much as possible. In particular, if you're doing
-- a series of multiplications as in lp * logFloat q * logFloat
-- r it's faster to do lp * logFloat (q * r) if you're
-- reasonably sure the normal-domain multiplication won't underflow;
-- because that way you enter the log-domain only once, instead of twice.
-- Also note that, for precision, if you're doing more than a few
-- multiplications in the log-domain, you should use product
-- rather than using (*) repeatedly.
--
-- Even more particularly, you should avoid addition whenever
-- possible. Addition is provided because sometimes we need it, and the
-- proper implementation is not immediately apparent. However, between
-- two LogFloats addition requires crossing the exp/log boundary
-- twice; with a LogFloat and a Double it's three times,
-- since the regular number needs to enter the log-domain first. This
-- makes addition incredibly slow. Again, if you can parenthesize to do
-- normal-domain operations first, do it!
--
--
-- [1] That is, true up to underflow and floating-point
-- fuzziness. Which is, of course, the whole point of this module.
--
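-- As a small usage sketch (the numbers are only illustrative), a
-- product of many tiny probabilities underflows to 0 in the normal
-- domain, but remains representable as a LogFloat (using this
-- module's product):
--
--
-- logFromLogFloat (product (replicate 10 (logFloat 1e-100)))
--     -- ≈ -2302.6, i.e. 10 * log 1e-100; no underflow
--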
data LogFloat
-- | Constructor which does semantic conversion from normal-domain to
-- log-domain. Throws errors on negative and NaN inputs. If p is
-- non-negative, then the following equivalence holds:
--
--
-- logFloat p == logToLogFloat (log p)
--
--
-- If p is NaN or negative, then the two sides differ only in
-- which error is thrown.
logFloat :: Double -> LogFloat
-- | Semantically convert our log-domain value back into the normal-domain.
-- Beware of overflow/underflow. The following equivalence holds (without
-- qualification):
--
--
-- fromLogFloat == exp . logFromLogFloat
--
fromLogFloat :: LogFloat -> Double
-- | Constructor which assumes the argument is already in the log-domain.
-- Throws errors on notANumber inputs.
logToLogFloat :: Double -> LogFloat
-- | Return the log-domain value itself without conversion.
logFromLogFloat :: LogFloat -> Double
-- | O(n). Compute the sum of a finite list of LogFloats,
-- being careful to avoid underflow issues. That is, the following
-- equivalence holds (modulo underflow and all that):
--
--
-- logFloat . sum == sum . map logFloat
--
--
-- N.B., this function requires two passes over the input. Thus,
-- it is not amenable to list fusion, and hence will use a lot of memory
-- when summing long lists.
--
-- Since: 0.13
sum :: [LogFloat] -> LogFloat
-- | O(n). Compute the product of a finite list of LogFloats,
-- being careful to avoid numerical error due to loss of precision. That
-- is, the following equivalence holds (modulo underflow and all that):
--
--
-- logFloat . product == product . map logFloat
--
--
-- Since: 0.13
product :: [LogFloat] -> LogFloat
-- | O(1). Compute powers in the log-domain; that is, the following
-- equivalence holds (modulo underflow and all that):
--
--
-- logFloat (p ** m) == logFloat p `pow` m
--
--
-- Since: 0.13
pow :: LogFloat -> Double -> LogFloat
infixr 8 `pow`
-- | Compute log (1 + x) without losing precision.
--
-- Standard C libraries provide a special definition for this function,
-- which is more accurate than doing the naive thing, especially for very
-- small arguments. For example, the naive version underflows around
-- 2 ** -53, whereas the specialized version underflows around
-- 2 ** -1074.
--
-- N.B. The statistics:Statistics.Math module provides a pure
-- Haskell implementation of log1p for those who are interested.
-- We do not copy it here because it relies on the vector
-- package which is non-portable. If there is sufficient interest, a
-- portable variant of that implementation could be made. Contact the
-- maintainer if the FFI and naive implementations are insufficient for
-- your needs.
--
-- This installation was compiled to use the FFI version.
log1p :: Double -> Double
-- | Compute exp x - 1 without losing precision.
--
-- Standard C libraries provide a special definition for expm1
-- which is more accurate than doing the naive thing, especially for very
-- small arguments.
--
-- This installation was compiled to use the FFI version.
expm1 :: Double -> Double
instance Foreign.Storable.Storable Data.Number.LogFloat.LogFloat
instance GHC.Classes.Ord Data.Number.LogFloat.LogFloat
instance GHC.Classes.Eq Data.Number.LogFloat.LogFloat
instance Data.Array.Base.IArray Data.Array.Base.UArray Data.Number.LogFloat.LogFloat
instance Data.Number.PartialOrd.PartialOrd Data.Number.LogFloat.LogFloat
instance GHC.Read.Read Data.Number.LogFloat.LogFloat
instance GHC.Show.Show Data.Number.LogFloat.LogFloat
instance GHC.Num.Num Data.Number.LogFloat.LogFloat
instance GHC.Real.Fractional Data.Number.LogFloat.LogFloat
instance GHC.Real.Real Data.Number.LogFloat.LogFloat