Changelog for ad-4.5.6
4.5.6 [2024.05.01]
- Add specialized implementations of `log1p`, `expm1`, `log1pexp`, and `log1mexp` in `Floating` instances.
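  A minimal sketch of why this matters (the function and point are illustrative, not from the changelog; `log1p` here comes from base's `Numeric` module):

  ```haskell
  import Numeric (log1p)
  import Numeric.AD (diff)

  -- d/dx log1p x = 1 / (1 + x); near x = 0 this should print a value very
  -- close to 1, without the cancellation that naively evaluating
  -- log (1 + x) would suffer.
  main :: IO ()
  main = print (diff log1p (1e-12 :: Double))
  ```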
4.5.5 [2024.01.28]
- `Numeric.AD.Mode.Reverse.Double` now handles IEEE floating-point special values (e.g., `NaN` and `Inf`) correctly when `ad` is compiled with `+ffi`. Note that this increase in floating-point accuracy may come at a slight performance penalty in certain applications. If this negatively impacts your application, please mention this at https://github.com/ekmett/ad/issues/106.
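  A hedged sketch of the kind of case this affects (the functions and expected outputs are my own illustration, and depend on how `ad` was built):

  ```haskell
  import qualified Numeric.AD.Mode.Reverse.Double as RD

  -- Derivatives that pass through IEEE special values should propagate
  -- them faithfully rather than being mangled by the FFI-based tape.
  main :: IO ()
  main = do
    print (RD.diff recip 0)      -- expected: -Infinity (d/dx (1/x) at 0)
    print (RD.diff sqrt (-1))    -- expected: NaN
  ```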
4.5.4 [2023.02.19]
- Add a `Num (Scalar (Scalar t))` constraint to `On`'s `Mode` instance, which is required to make it typecheck with GHC 9.6. (Note that this constraint was already present implicitly due to superclass expansion, so this is not a breaking change. The only reason that it must be added explicitly with GHC 9.6 or later is that 9.6 is more conservative with superclass expansion.)
4.5.3 [2023.01.21]
- Support building with GHC 9.6.
4.5.2 [2022.06.17]
- Fix a bug that would cause `Numeric.AD.Mode.Reverse.diff` and `Numeric.AD.Mode.Reverse.Double.diff` to compute different answers under certain circumstances when `ad` was compiled with the `+ffi` flag.
4.5.1 [2022.05.18]
- Allow building with `transformers-0.6.*`.
4.5 [2021.11.07]
- The build-type has been changed from `Custom` to `Simple`. To achieve this, the `doctests` test suite has been removed in favor of using `cabal-docspec` to run the doctests.
- Expose `Dense` mode AD again.
- Add a `Dense.Representable` mode, which is a variant of `Dense` that exploits `Representable` functors rather than `Traversable` functors. `Representable` can now also be useful as it allows us to use `unjet` to convert a value of type `Jet f a` safely back into `Cofree f a`.
- Improve `Reverse.Double` mode performance by increasing strictness and using an FFI-based tape.
- Reverse mode AD uses `reifyTypeable` internally. This means the region parameters/infinitesimals that mark each tape are `Typeable`, allowing you to do things like define instances of `Exception` that name the region parameter and perform similar shenanigans.
- Drastically reduce code duplication in `Double`-based modes, enabling more of them.
- Fixed a number of modes that were handling `(**)` improperly due to the aforementioned code duplication problem.
- Add a `Tower.Double` mode (internally) that uses lazy lists of strict doubles.
- Add a `Kahn.Double` mode (internally) that holds strict doubles in the graph.
- Switch to using pattern synonyms internally for detecting "known" zeros.
- Drop support for versions of GHC before 8.0.
- The `.Double` modes have been modified to exploit the fact that we can definitely check a `Double` for equality with 0. In future releases we may require a typeclass that offers the ability to check for known zeroes for all types you process. This will allow us to improve the quality of the results, but may require you either to write a small instance declaration if you are processing some esoteric data type of your own, or to wrap/unwrap a newtype that indicates whether to skip known-zero optimizations or to use `Eq`. If there are particularly common types with tricky cases, a future `ad-instances` package might be the right way forward for them to find a home.
- Add `Numeric.AD.Double`, which tries to mix and match between all the different AD modes to produce optimal results, but uses the various `.Double` specializations to reduce the amount of boxing and indirection on the heap (a small usage sketch follows this list).
- Add `Numeric.AD.Halley.Double`.
- Removed the `fooNoEq` variants from `Newton.Double`; `Double`s always have an `Eq` instance.
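A minimal sketch of the new `Numeric.AD.Double` entry point, assuming it exports `grad` analogously to `Numeric.AD` (the function and point are made up for illustration):

```haskell
import qualified Numeric.AD.Double as AD

-- Gradient of f(x, y) = x * y + sin x at (1, 2), computed with the
-- Double-specialized machinery to reduce boxing on the heap.
main :: IO ()
main = print (AD.grad (\[x, y] -> x * y + sin x) [1, 2 :: Double])
```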
4.4.1 [2020.10.13]
- Change the fixity of `:-` in `Numeric.AD.Jet` to be right-associative. Previously, it was `infixl`, which made things like `x :- y :- z` nearly unusable (see the sketch after this list).
- Fix backpropagation error in Kahn mode.
- Fix bugs in the `Erf` instance for `ForwardDouble`.
- Add `Numeric.AD.Mode.Reverse.Double`, a variant of `Numeric.AD.Mode.Reverse` that is specialized to `Double`.
- Re-export `Jet(..)`, `headJet`, `tailJet` and `jet` from `Numeric.AD`.
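A small sketch of what the fixity change buys you (the `firstTwo` helper is hypothetical, purely to show how the pattern parses):

```haskell
import Numeric.AD.Jet (Jet(..))

-- With infixr, `x :- xs :- _` parses as `x :- (xs :- _)`, which is the
-- only well-typed reading: the head is an `a`, the next layer an `f a`.
firstTwo :: Jet [] Double -> (Double, [Double])
firstTwo (x :- xs :- _) = (x, xs)
```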
4.4 [2020.02.03]
- Generalize the type of `stochasticGradientDescent`:

      -stochasticGradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Scalar a) -> f (Reverse s a) -> Reverse s a) -> [f (Scalar a)] -> f a -> [f a]
      +stochasticGradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => e -> f (Reverse s a) -> Reverse s a) -> [e] -> f a -> [f a]
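  The environment passed to each step can now be any type `e`, not just a container of scalars. A hedged usage sketch, assuming `stochasticGradientDescent` is imported from `Numeric.AD.Newton` and `auto` from `Numeric.AD.Mode` (the data and loss are made up for illustration):

  ```haskell
  import Numeric.AD.Mode (auto)
  import Numeric.AD.Newton (stochasticGradientDescent)

  -- Fit y ~ a*x + b by SGD, feeding one (x, y) observation per step.
  -- Each observation is a plain pair, which the old type did not allow.
  fits :: [[Double]]
  fits = stochasticGradientDescent
           (\(x, y) [a, b] -> (auto y - (a * auto x + b)) ^ (2 :: Int))
           [(1, 2), (2, 4), (3, 6)]
           [0, 0]

  main :: IO ()
  main = mapM_ print (take 3 fits)
  ```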
4.3.6 [2019.02.28]
- Make the test suite pass when built against `musl` libc.
4.3.5 [2018.01.18]
- Add a `Semigroup` instance for `Id`.
4.3.4
- Support `doctest-0.12`.
4.3.3
- Revamp `Setup.hs` to use `cabal-doctest`. This makes it build with `Cabal-2.0`, and makes the doctests work with `cabal new-build` and sandboxes.
4.3.2.1
- GHC 8 support
- Fix Kahn mode's `**` implementation.
- Fix multiple problems in `Erf` and `InvErf` methods.
4.3.2
- Added `NoEq` versions of several combinators that can be used when `Eq` isn't available on the numeric type involved.
4.3.1
- Further improvements have been made in the performance of `Sparse` mode, at least asymptotically, when used on functions with many variables. Since this is the target use-case for `Sparse` in the first place, this seems like a good trade-off. Note: this results in an API change, but only in the API of an `Internal` module, so this is treated as a minor version bump.
4.3
- Made drastic improvements in the performance of `Tower` and `Sparse` modes thanks to the help of Björn von Sydow.
- Added constrained convex optimization.
- Incorporated some suggestions from herbie for improving floating point accuracy.
4.2.4
- Added `Newton.Double` modules for performance.
4.2.3
- `reflection` 2 support.
4.2.2
- Major bug fix for `grads`, `jacobians`, and anything that uses `Sparse` mode in `Numeric.AD`. Derivatives after the first two were previously incorrect.
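For reference, `grads` is the sort of combinator affected: it returns the whole tower of derivatives of a function of several variables. A small sketch (the function and point are made up for illustration):

```haskell
import Control.Comonad.Cofree (Cofree)
import Numeric.AD (grads)

-- The derivative tower of f(x, y) = x*x*y at (1, 2): the value, the
-- gradient, the Hessian, and so on, as successive layers of a Cofree.
-- Layers beyond the second were the ones affected by this bug.
tower :: Cofree [] Double
tower = grads (\[x, y] -> x * x * y) [1, 2]
```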
4.2.1.1
- Support `nats` version 1.
4.2.1
- Added `stochasticGradientDescent`.
4.2
- Removed broken `Directed` mode.
- Added `Numeric.AD.Rank1` combinators and moved most infinitesimal handling back out of the modes and into an `AD` wrapper.
4.1
- Fixed a bug in the type of `conjugateGradientAscent` and `conjugateGradientDescent` that prevented users from ever being able to call them.
4.0.0.1
- Added the missing `instances.h` header file to `extra-source-files`.
4.0
- An overhaul permitting monomorphic modes was completed by @alang9.
- Add a `ForwardDouble` monomorphic mode.
3.4
- Added support for `erf` and `inverf`, etc. from `Data.Number.Erf`.
- Split the infinitesimal and mode into two separate parameters to facilitate inlining and easier extension of the API.
3.3.1
- Build system improvements
- Removed unused LANGUAGE pragmas
- Added HLint configuration
- We now use exactly the same versions of the packages used to build `ad` when running the doctests.
3.3
- Renamed `Reverse` to `Kahn` and `Wengert` to `Reverse`. We use Arthur Kahn's topological sorting algorithm to sort the tape after the fact in `Kahn` mode, while the stock `Reverse` mode builds a Wengert list as it goes, which is more efficient in practice.
3.2.2
- Export `conjugateGradientDescent` and `gradientDescent` from `Numeric.AD`.
3.2.1
- `conjugateGradientDescent` now stops before it starts returning NaN results.
3.2
- Renamed `Chain` to `Wengert` to reflect its use of Wengert lists for reverse mode.
- Renamed `lift` to `auto` to avoid a conflict with the more prevalent `transformers` library (see the sketch after this list).
- Fixed a bug in `Numeric.AD.Forward.gradWith'`, which caused it to return the wrong value for the primal.
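A minimal sketch of `auto` lifting a constant into the AD number type, assuming `auto` is re-exported from `Numeric.AD` (otherwise import it from `Numeric.AD.Mode`); the constant and function are illustrative only:

```haskell
import Numeric.AD (auto, diff)

c :: Double
c = 2.5

-- A concrete Double constant must be lifted with `auto` (formerly `lift`)
-- before it can be mixed with the AD-wrapped argument.
slope :: Double
slope = diff (\x -> auto c * x * x) 3   -- d/dx (2.5 * x^2) at 3 = 15
```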
3.1.4
- Added a better "convergence" test for `findZero`.
- Compute `tan` and `tanh` derivatives directly.
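For context, `findZero` produces the stream of Newton iterates, so a convergence test determines where that stream stops improving. A hedged sketch (the function, starting point, and iteration cap are made up):

```haskell
import Numeric.AD.Newton (findZero)

-- Newton iterates for the root of cos x - x (approximately 0.739),
-- starting from 1. We cap how many iterates we inspect.
root :: Double
root = last (take 20 (findZero (\x -> cos x - x) 1))
```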
3.1.3
- Added `conjugateGradientDescent` and `conjugateGradientAscent` to `Numeric.AD.Newton`.
3.1.2
- Dependency bump
3.1
- Added `Chain` mode, which is `Reverse` using a linear tape that doesn't need to be sorted.
- Added a suite of doctests.
- Bug fix in `Forward` mode. It was previously yielding incorrect results for anything that used `bind` or `bind'` internally.
3.0
- Moved the contents of `Numeric.AD.Mode.Mixed` into `Numeric.AD`.
- Split off `Numeric.AD.Variadic` for the variadic combinators.
- Removed the `UU`, `FU`, `UF`, and `FF` type aliases.
- Stopped exporting the types for `Mode` and `AD` from almost every module. Import `Numeric.AD.Types` if necessary.
- Renamed `Tensors` to `Jet`.
- Dependency bump to be compatible with GHC 7.4.1 and mtl 2.1.
- More aggressive zero tracking: `diff (**n) 0` for constant `n` and `diff (0**)` both now yield the correct answer for all modes.
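A minimal sketch of the zero-tracking cases mentioned above (the expected values are my reading of the entry, not quoted from it):

```haskell
import Numeric.AD (diff)

-- With known-zero tracking, differentiating x ** n at x = 0 no longer
-- risks a spurious NaN from the log 0 term in the general power rule.
ex1 :: Double
ex1 = diff (** 3) 0    -- expected: 0 (i.e. 3 * 0^2)

ex2 :: Double
ex2 = diff (0 **) 1    -- expected: 0 (0 ** x is constantly 0 for x > 0)

main :: IO ()
main = mapM_ print [ex1, ex2]
```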