| Copyright | (c) Huw Campbell 2016-2017 |
|---|---|
| License | BSD2 |
| Stability | experimental |
| Safe Haskell | None |
| Language | Haskell98 |
- train :: SingI (Last shapes) => LearningParameters -> Network layers shapes -> S (Head shapes) -> S (Last shapes) -> Network layers shapes
- backPropagate :: SingI (Last shapes) => Network layers shapes -> S (Head shapes) -> S (Last shapes) -> Gradients layers
- runNet :: Network layers shapes -> S (Head shapes) -> S (Last shapes)
Documentation
train :: SingI (Last shapes) => LearningParameters -> Network layers shapes -> S (Head shapes) -> S (Last shapes) -> Network layers shapes
Update a network with new weights after training on a single input/target instance.
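As a minimal usage sketch (not part of this module's documentation): assuming the FullyConnected, Tanh and Logit layers, randomNetwork, and the LearningParameters record as exported by Grenade at this version, one training step on a single input/target pair might look like this.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs     #-}

import Grenade
import qualified Numeric.LinearAlgebra.Static as H

-- A small 2-3-1 feed-forward network and its intermediate shapes.
type Net = Network '[ FullyConnected 2 3, Tanh, FullyConnected 3 1, Logit ]
                   '[ 'D1 2, 'D1 3, 'D1 3, 'D1 1, 'D1 1 ]

main :: IO ()
main = do
  net0 <- randomNetwork :: IO Net
  let params = LearningParameters { learningRate        = 0.01
                                  , learningMomentum    = 0.9
                                  , learningRegulariser = 1e-4
                                  }
      input  = S1D (H.vector [0.5, -0.3]) :: S ('D1 2)
      target = S1D (H.vector [1.0])       :: S ('D1 1)
      -- One training step: a new network with updated weights.
      net1   = train params net0 input target
  -- Inspect the updated network's prediction on the same input.
  case runNet net1 input of
    S1D out -> print (H.extract out)
```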
backPropagate :: SingI (Last shapes) => Network layers shapes -> S (Head shapes) -> S (Last shapes) -> Gradients layers
Perform reverse automatic differentiation on the network for the current input and expected output.
Note: the loss gradient pushed backwards is suitable for both regression and classification, corresponding to a squared loss or a log-loss respectively.
For other loss functions, run the forward pass with runNetwork, then hand the gradient of your loss at the output to runGradient, as sketched below.
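A hedged sketch of that recipe, assuming the runNetwork, runGradient and applyUpdate signatures from Grenade.Core.Network; trainWithLoss is a hypothetical name, and the (output - target) line is a stand-in for the gradient of whatever loss you choose.

```haskell
{-# LANGUAGE DataKinds        #-}
{-# LANGUAGE FlexibleContexts #-}

import Grenade
import Data.Singletons (SingI)
import Data.Singletons.Prelude (Head, Last)

-- A training step driven by a user-supplied loss gradient.
trainWithLoss :: SingI (Last shapes)
              => LearningParameters
              -> Network layers shapes
              -> S (Head shapes)
              -> S (Last shapes)
              -> Network layers shapes
trainWithLoss params net input target =
  let -- Forward pass, keeping the tapes needed to run backwards.
      (tapes, output) = runNetwork net input
      -- Gradient of the loss with respect to the network's output.
      -- (output - target) is the squared-loss/log-loss gradient that
      -- backPropagate itself pushes back; replace it with the
      -- gradient of your own loss function.
      lossGradient    = output - target
      -- Backward pass: fold the output gradient into layer gradients.
      (gradients, _)  = runGradient net tapes lossGradient
  in  applyUpdate params net gradients
```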