hasktorch-zoo-0.0.1.0: Neural architectures in hasktorch

Copyright    (c) Sam Stites 2017
License      BSD3
Maintainer   sam@stites.io
Stability    experimental
Portability  non-portable
Safe Haskell None
Language     Haskell2010

Torch.Initialization

Description

Helper functions which might end up migrating to the -indef codebase

Documentation

newLinear :: forall o i. All KnownDim '[i, o] => Generator -> IO (Linear i o) Source #

Initialize a new linear layer from the given Generator.
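
A minimal usage sketch. The Torch.Double import, the newRNG generator constructor, and the concrete layer size are assumptions for illustration; only newLinear itself is documented here.

{-# LANGUAGE DataKinds, ScopedTypeVariables #-}

import Torch.Double                 -- assumed prelude-style module exposing Linear, Tensor, Generator
import Torch.Core.Random (newRNG)   -- assumed constructor for a fresh Generator

main :: IO ()
main = do
  gen <- newRNG                                -- PRNG state used to draw the initial weights
  (_layer :: Linear 784 10) <- newLinear gen   -- fully connected layer: 784 inputs, 10 outputs
  putStrLn "layer initialized"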

newConv2d :: forall o i kH kW. All KnownDim '[i, o, kH, kW, kH * kW] => Generator -> IO (Conv2d i o '(kH, kW)) Source #

Initialize a new conv2d layer from the given Generator.
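
Continuing the sketch above for a convolution (same assumptions; the type-level pair '(5, 5) is the kernel height and width from the Conv2d type):

  (_conv :: Conv2d 3 16 '(5, 5)) <- newConv2d gen   -- 3 input planes, 16 output planes, 5x5 kernels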

xavierUniformWith_ Source #

Arguments

:: (Dimensions outs, All KnownDim '[i, o, Product outs]) 
=> HsReal

gain: a scaling factor (xavierUniform_ uses the default of 1)

-> Generator 
-> Tensor (i :+ (o :+ outs))

tensor: an n-dimensional Tensor (at least 2 dimensions)

-> IO () 

Fills the input Tensor with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from U(-a, a), where

  a = gain * sqrt(6 / (fan_in + fan_out))

Also known as Glorot initialization. A usage sketch follows below (the original example required -XScopedTypeVariables to annotate the tensor's shape).
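
Restated as a hedged Haskell sketch: new (an uninitialised static tensor) and newRNG are assumptions from the hasktorch core API, and the gain literal sqrt 2 is only an illustration (it is what PyTorch's calculate_gain returns for ReLU):

  gen <- newRNG
  (w :: Tensor '[3, 5]) <- new        -- 3x5 weight tensor; its contents are about to be overwritten
  xavierUniformWith_ (sqrt 2) gen w   -- fills w in place from U(-a, a), a = gain * sqrt(6 / (fan_in + fan_out))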

xavierUniform_ Source #

Arguments

:: (Dimensions outs, All KnownDim '[i, o, Product outs]) 
=> Generator 
-> Tensor (i :+ (o :+ outs))

tensor: an n-dimensional Tensor (at least 2 dimensions)

-> IO () 

xavierUniformWith_ with the default gain of 1.
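
With w and gen as in the sketch above, the following two calls behave identically; this just restates the documented default:

  xavierUniform_       gen w
  xavierUniformWith_ 1 gen w   -- gain defaults to 1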

xavierNormalWith_ Source #

Arguments

:: (Dimensions outs, All KnownDim '[i, o, Product outs]) 
=> HsReal

gain: a scaling factor (xavierNormal_ uses the default of 1)

-> Generator 
-> Tensor (i :+ (o :+ outs))

tensor: an n-dimensional Tensor (at least 2 dimensions)

-> IO () 

The normal-distribution counterpart of xavierUniformWith_: Glorot (Xavier) initialization drawing from a normal distribution instead of a uniform one.

xavierNormal_ Source #

Arguments

:: (Dimensions outs, All KnownDim '[i, o, Product outs]) 
=> Generator 
-> Tensor (i :+ (o :+ outs))

tensor: an n-dimensional Tensor (at least 2 dimensions)

-> IO () 

xavierNormalWith_ with the default gain of 1.
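
The normal-distribution pair mirrors the uniform one. With w and gen as above (the explicit gain is an arbitrary illustration):

  xavierNormalWith_ 1.5 gen w   -- explicit gain
  xavierNormal_         gen w   -- equivalent to xavierNormalWith_ 1 gen w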

data Activation Source #

Constructors

LinearFn

Linear activation

Conv1dFn

Conv1d activation

Conv2dFn

Conv2d activation

Conv3dFn

Conv3d activation

Conv1dTFn

Conv1d transpose activation

Conv2dTFn

Conv2d transpose activation

Conv3dTFn

Conv3d transpose activation

SigmoidFn

Sigmoid activation

TanhFn

Tanh activation

ReluFn

ReLU activation

LeakyReluFn (Maybe Double)

Leaky ReLU activation; the Maybe Double is the rectifier's negative slope

Instances
Eq Activation Source # 
Instance details

Defined in Torch.Initialization

Show Activation Source # 
Instance details

Defined in Torch.Initialization

data FanMode Source #

Constructors

FanIn

Use the fan-in (preserves the magnitude of the variance of the weights in the forward pass)

FanOut

Use the fan-out (preserves the magnitudes in the backwards pass)

Instances
Eq FanMode Source # 
Instance details

Defined in Torch.Initialization

Methods

(==) :: FanMode -> FanMode -> Bool #

(/=) :: FanMode -> FanMode -> Bool #

Ord FanMode Source # 
Instance details

Defined in Torch.Initialization

Show FanMode Source # 
Instance details

Defined in Torch.Initialization

kaimingUniformWith_ Source #

Arguments

:: (Dimensions outs, All KnownDim '[i, o, Product outs]) 
=> Activation 
-> FanMode 
-> Generator 
-> Tensor (i :+ (o :+ outs))

tensor: an n-dimensional Tensor (at least 2 dimensions)

-> IO () 

Fills the input Tensor with values according to the method described in "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from U(-bound, bound), where

  bound = sqrt(6 / ((1 + a^2) * fan_in))

and a is the negative slope of the rectifier used after this layer (0 for ReLU). Also known as He initialization.

The Activation argument names the non-linearity that follows the layer; the method is recommended only for ReluFn and LeakyReluFn (whose negative-slope parameter supplies a). The FanMode argument picks which fan enters the formula: FanIn preserves the magnitude of the variance of the weights in the forward pass, FanOut preserves the magnitudes in the backwards pass. The PyTorch equivalent is nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu'); a Haskell sketch follows below.
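
A hedged Haskell rendering of the example above, following the documented argument order (new and newRNG are assumptions, as in the earlier sketches):

  gen <- newRNG
  (w :: Tensor '[3, 5]) <- new
  kaimingUniformWith_ ReluFn FanIn gen w   -- analogue of nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')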

kaimingUniform_ Source #

Arguments

:: (Dimensions outs, All KnownDim '[i, o, Product outs]) 
=> Generator 
-> Tensor (i :+ (o :+ outs))

tensor: an n-dimensional Tensor (at least 2 dimensions)

-> IO () 

kaimingUniformWith_ with its default Activation and FanMode arguments.

kaimingNormalWith_ Source #

Arguments

:: (Dimensions outs, All KnownDim '[i, o, Product outs]) 
=> Activation 
-> FanMode 
-> Generator 
-> Tensor (i :+ (o :+ outs))

tensor: an n-dimensional Tensor (at least 2 dimensions)

-> IO () 

Fills the input Tensor with values according to the method described in "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from N(0, std), where

  std = sqrt(2 / ((1 + a^2) * fan_in))

and a is the negative slope of the rectifier used after this layer (0 for ReLU). Also known as He initialization.

As with kaimingUniformWith_, the Activation argument names the following non-linearity (recommended only for ReluFn and LeakyReluFn) and the FanMode argument selects between preserving the variance of the forward pass (FanIn) and the backwards pass (FanOut). The PyTorch equivalent is nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu'); a Haskell sketch follows below.
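
The matching sketch for the normal-distribution variant (same assumptions as above):

  gen <- newRNG
  (w :: Tensor '[3, 5]) <- new
  kaimingNormalWith_ ReluFn FanOut gen w   -- analogue of nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')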

kaimingNormal_ Source #

Arguments

:: (Dimensions outs, All KnownDim '[i, o, Product outs]) 
=> Generator 
-> Tensor (i :+ (o :+ outs))

tensor: an n-dimensional Tensor (at least 2 dimensions)

-> IO ()

kaimingNormalWith_ with its default Activation and FanMode arguments.