mxnet-0.2.0.0 -- Haskell bindings for MXNet
(c) 2016-2017 Tao He
License: MIT
Maintainer: sighingnow@gmail.com

Context and DType
  Context: device context definition; DeviceType is one of cpu, gpu,
    cpu_pinned.

Neural network combinators
  fullyConnected: apply a linear transformation: Y = X W^T + b.
  correlation: apply correlation to the inputs.
  activation: element-wise activation function.
  leakyReLU: leaky ReLU activation. The following types are supported:
    - elu:   y = x > 0 ? x : slope * (exp(x) - 1)
    - leaky: y = x > 0 ? x : slope * x
    - prelu: same as leaky, but the slope is learnable.
    - rrelu: same as leaky, but the slope is chosen uniformly at random from
      [lower_bound, upper_bound) during training, and fixed to
      (lower_bound + upper_bound) / 2 for inference.
  softmaxActivation: apply softmax activation to the input.
  dropout: apply dropout to the input.
  batchNorm: batch normalization. Takes an n-dimensional input tensor
    (n > 2) and normalizes it using the mean and variance calculated over
    the spatial dimensions.
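The leakyReLU variants listed above can be sketched directly from the documented formulas. This is a minimal plain-Python illustration, not the library's API; the parameter names `slope`, `lower` and `upper` mirror the documented slope and bound parameters.

```python
import math
import random

def elu(x, slope):
    # elu: y = x if x > 0 else slope * (exp(x) - 1)
    return x if x > 0 else slope * (math.exp(x) - 1)

def leaky(x, slope):
    # leaky: y = x if x > 0 else slope * x
    return x if x > 0 else slope * x

def rrelu(x, lower, upper, training):
    # rrelu: slope drawn uniformly from [lower, upper) during training,
    # fixed to (lower + upper) / 2 for inference.
    slope = random.uniform(lower, upper) if training else (lower + upper) / 2
    return leaky(x, slope)

print(leaky(-2.0, 0.25))            # -0.5
print(elu(3.0, 0.25))               # 3.0
print(rrelu(-1.0, 0.0, 0.4, False)) # -0.2 (inference slope is 0.2)
```

prelu differs from leaky only in that the slope is a learned parameter, so it shares the same forward formula.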
  l2Normalization: set the L2 norm of each instance to a constant.
  convolution: compute N-D convolution on an (N+2)-D input, then add a bias.
  deconvolution: apply deconvolution to the input, then add a bias.
  pooling: perform pooling on the input.
  roiPooling: perform region-of-interest pooling on the inputs.
  rnn: apply a recurrent layer to the input.
  embedding: map integer indices to vector representations (embeddings).
  bilinearSampler: apply bilinear sampling to the input feature map; this is
    the key operator of [NIPS 2015] "Spatial Transformer Networks":
      output[batch, channel, y_dst, x_dst]
        = G(data[batch, channel, y_src, x_src])
    where (x_dst, y_dst) enumerate all spatial locations in the output,
      x_src = grid[batch, 0, y_dst, x_dst]
      y_src = grid[batch, 1, y_dst, x_dst]
    and G() denotes the bilinear interpolation kernel. Out-of-boundary
    points are padded with zeros.
  gridGenerator: generate the sampling grid for bilinear sampling.
  upSampling: perform nearest-neighbour/bilinear upsampling on the inputs.
  spatialTransformer: apply a spatial transformer to the input feature map.
  linearRegressionOutput: linear regression for the final output of a net.
  logisticRegressionOutput: logistic regression for the final output of a net.
  softmaxOutput: softmax with logit loss.
  maeRegressionOutput: mean-absolute-error regression for the final output
    of a net.
  svmOutput: support-vector-machine-based transformation of the input;
    backpropagates the L2-SVM objective.
  softmaxCrossEntropy: calculate cross_entropy(data, one_hot(label)).
  smoothL1: calculate the smooth L1 loss of (lhs, scalar).
  identityAttachKLSparsereg: apply a sparse regularization to the output of
    a sigmoid activation function.
  makeLoss: get the output from a symbol and pass a gradient of 1 back.
  blockGrad: get the output from a symbol and pass a gradient of 0 back.
  custom: custom operator implemented in the frontend.

Tensor operations
  dot: dot product.
  reshape: reshape a tensor value.
  transpose: transpose a tensor value.
  (+.), (-.), (*.), (/.), (^.): add, subtract, multiply, divide, and raise
    to a power, element-wise, with IO action.
  Scalar variants of the same arithmetic operators are provided in three
    forms: ordinary (tensor op scalar), flipped (scalar op tensor), and
    mutable (updating the tensor in place).
  equal, notEqual, greater, greaterEqual, lesser, lesserEqual, _Maximum,
    _Minimum: compare two tensor values; after comparison, each cell is set
    to a common value, or to 0, or to 1.  The primed variants (equal',
    greater', ...) compare a tensor value with a scalar value in the same
    way.

DType
  DType: type class used to quantify the types that can be passed to MXNet
    (instances for Float, Double, Int8, Int32, ...).
  contextCPU: context for CPU 0.
  contextGPU: context for GPU 0.

Operator arguments (selected)
  fullyConnected: input data; weight matrix; bias parameter; number of
    hidden nodes of the output.
  correlation: the two input arrays to be correlated.
  activation: input data; the activation function to apply, one of
    {relu, sigmoid, softrelu, tanh}.
  leakyReLU: input data; the activation type, one of {elu, leaky, prelu,
    rrelu}; the default is leaky.
  dropout: input data; the fraction of the input that gets dropped out at
    training time (default 0.5).
  batchNorm: input data; gamma; beta; moving mean; moving variance.
  instanceNorm: an n-dimensional tensor (n > 2) of the form
    [batch, channel, spatial_dim1, spatial_dim2, ...]; gamma, a vector of
    length channel that multiplies the normalized input; beta, a vector of
    length channel that is added to the product of the normalized input and
    the weight; epsilon, to prevent division by zero.
  l2Normalization: input data; epsilon to prevent division by zero (default
    1e-10); normalization mode, one of {channel, instance, spatial}
    (default instance).
  convolution: input data; weight matrix; bias parameter; kernel size,
    (h, w) or (d, h, w); number of filters (output channels).
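The bilinear sampling formula quoted above can be made concrete with a small sketch. This is plain Python illustrating the documented equations, not the library's bilinearSampler operator; `data` is indexed as [batch][channel][y][x] and `grid` as [batch][2][y_dst][x_dst], with channel 0 holding x_src and channel 1 holding y_src, and out-of-boundary reads padded with zeros as documented.

```python
import math

def sample_at(plane, y, x):
    # Zero padding outside the feature map, as the docs specify.
    if 0 <= y < len(plane) and 0 <= x < len(plane[0]):
        return plane[y][x]
    return 0.0

def bilinear_sample(data, grid):
    out = []
    for b in range(len(data)):
        channels = []
        for c in range(len(data[b])):
            h_dst, w_dst = len(grid[b][0]), len(grid[b][0][0])
            plane = []
            for y_dst in range(h_dst):
                row = []
                for x_dst in range(w_dst):
                    # x_src = grid[batch, 0, y_dst, x_dst]
                    # y_src = grid[batch, 1, y_dst, x_dst]
                    x_src = grid[b][0][y_dst][x_dst]
                    y_src = grid[b][1][y_dst][x_dst]
                    x0, y0 = math.floor(x_src), math.floor(y_src)
                    dx, dy = x_src - x0, y_src - y0
                    # G(): bilinear interpolation over the 4 neighbours.
                    v = ((1 - dy) * (1 - dx) * sample_at(data[b][c], y0, x0)
                         + (1 - dy) * dx * sample_at(data[b][c], y0, x0 + 1)
                         + dy * (1 - dx) * sample_at(data[b][c], y0 + 1, x0)
                         + dy * dx * sample_at(data[b][c], y0 + 1, x0 + 1))
                    row.append(v)
                plane.append(row)
            channels.append(plane)
        out.append(channels)
    return out

# An identity grid reproduces the input exactly.
data = [[[[1.0, 2.0], [3.0, 4.0]]]]     # 1 batch, 1 channel, 2x2
grid = [[[[0.0, 1.0], [0.0, 1.0]],      # channel 0: x_src
         [[0.0, 0.0], [1.0, 1.0]]]]     # channel 1: y_src
print(bilinear_sample(data, grid))  # [[[[1.0, 2.0], [3.0, 4.0]]]]
```

A fractional x_src, e.g. 0.5 on the first row, averages the two horizontal neighbours, which is exactly the G() kernel's behaviour.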
  lrn: input data; alpha, the variance scaling parameter in the
    normalization formula (default 0.0001); beta, the power parameter in
    the normalization formula (default 0.75); the k parameter in the
    normalization formula (default 2); the normalization window width in
    elements.
  deconvolution: input data; weight matrix; bias parameter; kernel size,
    (h, w) or (d, h, w); number of filters (output channels).
  pooling: input data; pooling kernel size, (y, x) or (d, y, x); pooling
    type, one of {avg, max, sum}.
  roiPooling: input data, a 4-D feature map; bounding-box coordinates;
    fixed pooled size, (h, w); ratio of the input feature-map height (or
    width) to the raw image height (or width).
  rnn: input data; a vector of all trainable RNN parameters, concatenated;
    the initial hidden state; the initial cell state (LSTM networks only);
    the size of the state for each layer; the number of stacked layers; the
    type of RNN to compute, one of {gru, lstm, rnn_relu, rnn_tanh}.
  embedding: input data; embedding weight matrix; vocabulary size of the
    input indices; dimension of the embedding vectors.
  bilinearSampler / gridGenerator: input data; the input grid, which has
    two channels: x_src and y_src.
  upSampling: array of tensors to upsample; the upsampling scale; the
    upsampling method, one of {bilinear, nearest}.
  spatialTransformer: input data; the localisation net, whose output
    dimension should be 6 when transform_type is affine.
  Regression and softmax outputs: input data and the ground-truth label.
  svmOutput: input data; label data; margin, which scales the activation
    size (default 1); regularization coefficient, which balances
    coefficient size against error (default 1); use_linear, which selects
    the L1-SVM objective when true (the default, False, uses the L2-SVM
    objective).
  makeLoss: input data; gradient scale, a supplement to unary and binary
    operators (default 1); valid_thresh (default 0), an element being
    regarded as valid when x > valid_thresh (used only in "valid"
    normalization mode); the normalization mode, one of {batch, null,
    valid}.
  custom: the inputs of the custom operator; the operator type, which must
    be registered first.

MXNet.Core.Base.HMap
  mergeTo': merge the first KVList into the second one.
  MatchKVList: update all values in the first HMap into the second KVList;
    the constraint ensures the target must contain the key-value pair.
  HMap: the heterogeneous map definition.
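The pooling operator described above walks a kernel window over the input and reduces each window with the chosen pool type. A plain-Python sketch (not the library's pooling operator) of the 2-D case, with the documented {avg, max, sum} choices; using a stride equal to the kernel is an assumption made here for brevity:

```python
def pool2d(data, kernel, pool_type="max"):
    # data: 2-D list of numbers; kernel: (ky, kx) as in the docs.
    ky, kx = kernel
    h, w = len(data), len(data[0])
    out = []
    for y in range(0, h - ky + 1, ky):
        row = []
        for x in range(0, w - kx + 1, kx):
            window = [data[y + i][x + j]
                      for i in range(ky) for j in range(kx)]
            if pool_type == "max":
                row.append(max(window))
            elif pool_type == "sum":
                row.append(sum(window))
            else:  # avg
                row.append(sum(window) / len(window))
        out.append(row)
    return out

data = [[1, 2, 5, 6],
        [3, 4, 7, 8],
        [0, 1, 1, 0],
        [2, 3, 0, 4]]
print(pool2d(data, (2, 2), "max"))  # [[4, 8], [3, 4]]
print(pool2d(data, (2, 2), "avg"))  # [[2.5, 6.5], [1.5, 1.25]]
```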
  FindKV: find the specified key-value type pair in a KVList.
  InDict: whether a KVList contains the specified type of KV pair.
  nil: create an empty HMap.
  add, add': add a key-value pair into the HMap (via TypeApplications);
    FIXME: this should carry a `No ~ FindKV k v kvs` constraint.
  (.+.): infix version of add.
  get: get the value of an existing key; (.->.) is the infix version.
  update: update the value of an existing key.
  set: set the value of an existing key.
  mergeTo: update all values in the first HMap into the second KVList.
  dump: dump the key-value pairs in an HMap as [(k, v)].

MXNet.Core.Types.Internal
  Handle types for the C API: NDListHandle, PredictorHandle, RtcHandle,
  RecordIOHandle, KVStoreHandle, DataIterHandle, DataIterCreator,
  ExecutorHandle, AtomicSymbolHandle, AtomicSymbolCreator (a function that
  takes parameters and creates a symbol), FunctionHandle (an MXNet narray
  function that changes an NDArray), NDArrayHandle, GraphHandle,
  SymbolHandle, and OpHandle (a symbol that can be bound as an operator).
  Callback types: MXKVStoreServerController (the prototype of a server
  controller, taking the head and body of a command plus a helper handle
  for implementing the controller); MXKVStoreUpdater (a user-defined
  updater for the kvstore, taking the key, the value pushed on that key,
  the value stored locally on that key, and an additional handle);
  CustomOpPropCreator; ExecutorMonitorCallback.
  Type aliases: MXFloat, MXUInt, NNUInt, and a size_t alias.

MXNet.Core.Predict.Internal
  mxPredCreate: create a predictor.
  mxPredCreatePartialOut: create a predictor with customized outputs.
  mxPredGetOutputShape: get the shape of an output node.
  mxPredSetInput: set the input data of a predictor.
  mxPredForward: run a forward pass to get the output.
  mxPredPartialForward: run an interactive (partial) forward pass, given
    the current step and the number of steps left.
  mxPredGetOutput: get the output value of a prediction.
  mxPredFree: free a predictor handle.
  mxNDListCreate: create an NDArray list by loading from an ndarray file.
  mxNDListGet: get an element from the list, returning the name of the
    output, the data region of the item, its shape, and the shape's
    dimension.
  mxNDListFree: free the list handle.
  Common arguments: the JSON string of the symbol; the in-memory raw bytes
  of the parameter ndarray file and their size; the device type (1: cpu,
  2: gpu) and the device id; the number and names of input nodes; the index
  of the output node (0 if there is only one output); and user-allocated
  data pointers together with their array sizes, used for safety checks.

MXNet.Core.NNVM.Internal
  nnAPISetLastError / nnGetLastError: set and return the last error message
    of the C API.
  nnListAllOpNames, nnGetOpHandle, nnListUniqueOps, nnGetOpInfo: list all
    the available operator names (including entries), get an operator
    handle by name, list the unique operators, and get detailed information
    about an atomic symbol.
  nnSymbolCreateAtomicSymbol, nnSymbolCreateVariable, nnSymbolCreateGroup:
    create an atomic symbol functor, a variable symbol, or a symbol
    grouping a list of symbols together.
  nnAddControlDeps: add src_dep to the handle as a control dependency.
  nnSymbolFree, nnSymbolCopy, nnSymbolPrint: free, copy, and print the
    content of a symbol (for debugging).
  nnSymbolGetAttr / nnSymbolSetAttrs / nnSymbolListAttrs: get and set
    string attributes of a symbol, and list all attributes, including all
    descendants (0 for recursive, 1 for shallow).
  nnSymbolListInputVariables, nnSymbolListInputNames,
  nnSymbolListOutputNames: list the input variables, input names, and
    output names of a symbol.
  nnSymbolGetInternals / nnSymbolGetOutput: get a symbol that contains all
    the internals, or the index-th output of the symbol.
  nnSymbolCompose: compose the symbol on other symbols.
  nnGraphCreate, nnGraphFree, nnGraphGetSymbol: create a graph handle from
    a symbol, free the graph handle, and get a new symbol from the graph.
  nnGraphSetJSONAttr / nnGraphGetJSONAttr: set and get a serialized graph
    attribute in JSON format.
  nnGraphSetNodeEntryListAttr_: set an attribute whose type is
    std::vector<NodeEntry> in C++.
  nnGraphApplyPasses: apply passes on the source graph.

MXNet.Core.Base.Internal
  mxGetLastError: return the message of the last error.
  mxNDArraySyncCopyFromCPU / mxNDArraySyncCopyToCPU: perform a synchronous
    copy from or to a contiguous CPU memory region.
  mxNDArrayWaitToRead / mxNDArrayWaitToWrite: wait until all pending writes
    (or all pending reads and writes) with respect to the NDArray are
    finished.
  mxNDArrayWaitAll: wait until all delayed operations in the system are
    completed.
  mxNDArrayFree: free the narray handle.
  mxNDArraySlice / mxNDArrayAt / mxNDArrayReshape: slice or index the
    NDArray along axis 0, or reshape it.
  mxNDArrayGetData, mxNDArrayGetDType, mxNDArrayGetContext: get the
    content, data type, and context of an NDArray.
  mxListFunctions: list all the available function handles.
  mxGetFunction, mxFuncGetInfo, mxFuncDescribe: get a function handle by
    name, its detailed information, and its argument requirements.
  mxFuncInvoke / mxFuncInvokeEx: invoke a function; the array sizes of the
    passed-in arguments must match the values reported for the function.
  mxImperativeInvoke: invoke an nnvm op or imperative function.
  mxListAllOpNames / mxSymbolListAtomicSymbolCreators /
  mxSymbolGetAtomicSymbolName / mxSymbolGetAtomicSymbolInfo: list all the
    available operator names and AtomicSymbolCreators, and query the name
    and detailed information of an atomic symbol.
  mxSymbolCreateAtomicSymbol, mxSymbolCreateVariable, mxSymbolCreateGroup:
    create an atomic symbol, a variable symbol, or a symbol grouping a list
    of symbols together.
  mxSymbolCreateFromFile / mxSymbolCreateFromJSON and mxSymbolSaveToFile /
  mxSymbolSaveToJSON: load and save a symbol as a JSON file or string.
  mxSymbolFree, mxSymbolCopy, mxSymbolPrint: free, copy, and print the
    content of a symbol (for debugging).
  mxSymbolGetName / mxSymbolGetAttr / mxSymbolSetAttr: get the name and get
    or set string attributes of a symbol; setting an attribute can affect
    the semantics (mutable/immutable) of the symbolic graph.
  mxSymbolListAttr / mxSymbolListAttrShallow: get all attributes, including
    or excluding descendants.
  mxSymbolListArguments, mxSymbolListOutputs, mxSymbolListAuxiliaryStates:
    list the arguments, outputs, and auxiliary states of the symbol.
  mxSymbolGetInternals / mxSymbolGetOutput: get a symbol that contains all
    the internals, or the index-th output of the symbol.
  mxSymbolCompose: compose the symbol on other symbols.
  Further functions get the gradient graph of a symbol and infer the shapes
    (fully or partially) and types of unknown inputs given the known ones.
  Executor API: delete an executor; print the content of the execution plan
    (for debugging); run the forward method (with an int flag indicating
    whether the forward pass is for evaluation) and the backward method;
    get the executor's head NDArrays; and generate an executor from a
    symbol.  The advanced bind variants allow specifying a group-to-context
    map; the user can annotate a "ctx_group" attribute to name each group.
    A callback can be set to be notified of the completion of an operation.
  Data-iterator API: list all the available iterator entries; init an
    iterator with an array of parameters; query its detailed information;
    free the handle to the IO module; move the iterator to the next
    position or reset it; and get the handles to the NDArrays of the
    underlying data and label, the image index array, and the padding
    number of the current data batch.
  KVStore API: initialize ps-lite environment variables; create and delete
    a kvstore; init, push, and pull lists of (key, value) pairs (FIXME for
    pull); register a push updater (FIXME); and get the type of the
    kvstore.  Queries return the rank of this node in its group (in
    [0, GroupSize)) and the number of nodes in the group, which is the
    number of workers if IsWorkerNode() == true, the number of servers if
    IsServerNode() == true, and 1 if IsSchedulerNode() == true; predicates
    tell whether this process is a worker, server, or scheduler node.
    Further functions provide a global barrier among all worker machines
    (with a flag for whether to barrier when finalizing), run as a server
    or scheduler (FIXME), send a command to all server nodes, and get the
    number of dead ps node(s) specified by node_id.
  RecordIO: create and delete a writer object, write a record, and get the
    current writer pointer position; create a reader object.
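The MXKVStoreUpdater callback described in the handle types above merges a pushed value into the value stored locally for a key. A toy plain-Python model of that contract (not the C API); the SGD-style merge rule is an illustrative assumption:

```python
class ToyKVStore:
    """Toy kvstore: push routes through a user-defined updater."""
    def __init__(self, updater):
        self.store = {}
        self.updater = updater  # called as updater(key, pushed, stored)

    def init(self, key, value):
        self.store[key] = value

    def push(self, key, value):
        # The updater receives the pushed value and the locally stored
        # value, and decides how to merge them (as MXKVStoreUpdater does).
        self.store[key] = self.updater(key, value, self.store[key])

    def pull(self, key):
        return self.store[key]

# Example updater: treat the pushed value as a gradient, apply one SGD step.
def sgd_updater(key, pushed, stored, lr=0.1):
    return stored - lr * pushed

kv = ToyKVStore(sgd_updater)
kv.init("w", 1.0)
kv.push("w", 4.0)   # stored becomes 1.0 - 0.1 * 4.0
print(kv.pull("w")) # 0.6
```

The real kvstore also takes a priority per push/pull and distributes keys across server nodes; this sketch only shows the updater's merge semantics.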
  RecordIO reader: delete a reader object, read a record (via a pointer to
    a return buffer), and set the current reader pointer position.
  MXRtc: create an MXRtc object, run a CUDA kernel, and delete the object.
  Profiler arguments: mode (record only symbolic operators when mode == 0,
    record all operators when mode == 1); the filename of the trace file;
    and state (the profiler is not running when state == 0, and running
    when state == 1).
  NDArray creation arguments: the shape and its dimension; the device type
    and device id of the device to allocate on; whether to delay
    allocation; the data type of the created array; and the returned
    NDArrayHandle.  Loading and saving take the head and size of the raw
    bytes, the file name, the number of arguments to save, the array of
    NDArrayHandles, and the names of the NDArrays.
  Copy arguments: the NDArrayHandle; the raw data source to copy from or
    into; and the memory size, used for safety checks.
  Slice/index/reshape arguments: the handle to the NDArray; the beginning
    and ending indices of the slice; the index; the number of dimensions
    and the new size of every dimension; and the NDArrayHandle of the
    resulting NDArray.
  Function-query results: the name of a function, its description, the
    number of its arguments with their names, types and descriptions, and
    its return type; plus the numbers of input NDArrays, scalar variables
    and mutable NDArrays to be passed in, together with the type mask of
    the function.
  mxImperativeInvoke arguments: the creator/handler of the op; the input
    NDArrays; keyword parameters; the originally given output handle
    array; and the returned result NDArrays.
  mxSymbolGetAtomicSymbolInfo returns the name and description of the
    symbol; the names, types and descriptions of its arguments; the
    keyword argument for specifying a variable number of arguments; and
    the return type of the symbol.
  Symbol I/O and attribute arguments: file names and JSON strings for
    loading and saving; the source symbol, the attribute key and value, and
    whether the call is successful; and the argument, output, and
    auxiliary-state name lists.
  Shape-inference arguments: the symbol handle; the keys of keyword
    arguments (optional); the head pointers of the rows in CSR format and
    the CSR content.  The result gives the input, output and auxiliary
    shape sizes, ndims and data (arrays of pointers to the heads of the
    shapes), and whether shape inference completed or more information is
    needed.  Type inference similarly returns arg_types, out_types and
    aux_types.
  Executor bind arguments: the symbol handle; the device type and device id
    of the default context; the size, keys, device types and device ids of
    the group2ctx map; the lengths and arrays of input arguments, gradient
    handles, gradient request types, and auxiliary states; and optionally
    an input executor handle for memory sharing.
  Data-iterator arguments: the handle pointer to the data iterator; the
    sizes, keys and values of the creation parameters; and, for queries,
    the name and description of the data-iter creator together with the
    names, types and descriptions of its arguments.
  KVStore arguments: the environment keys and values to initialize; the
    kvstore type and handle; the number of key-value pairs with the lists
    of keys and values; the priority of a push/pull action; whether to
    barrier when the kvstore finalizes; the head and body of a command; and
    a node id, which can be a node group or a single node (kScheduler = 1,
    kServerGroup = 2, kWorkerGroup = 4).  A node that fails to send a
    heartbeat in timeout_sec seconds is presumed dead.
  RecordIO arguments: the path to the file; the handle to the RecordIO
    object; the buffer to write (or the pointer to the return buffer) and
    its size; and the handle to the output position, or the target
    position.
  MXRtc arguments: the name; the numbers and names of inputs and outputs;
    the input and output NDArrays; the kernel; and the grid and block
    dimensions (x, y, z).

MXNet.Core.Base.Internal.TH
  This module registers the NDArray ops and symbol functions by generating
  the Template Haskell AST of a function for each NDArray op and each
  Symbol op.
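The code-generation module above has to split each operator argument's descriptor string into its parts. A hedged plain-Python sketch of that splitting: MXNet describes an argument with a string such as "float, optional, default=0.75", and the generator splits it on "," into the type, whether the argument is required, and its default value. The exact descriptor format is an assumption based on MXNet's operator registry, and defaults containing commas are not handled here.

```python
def split_arg(type_str):
    # Split an argument descriptor into (type, required?, default).
    parts = [p.strip() for p in type_str.split(",")]
    typ = parts[0]
    required = "required" in parts[1:]
    default = None
    for p in parts[1:]:
        if p.startswith("default="):
            default = p[len("default="):]
    return typ, required, default

print(split_arg("float, optional, default=0.75"))
# ('float', False, '0.75')
print(split_arg("Shape(tuple), required"))
# ('Shape(tuple)', True, None)
```

From the (type, required, default) triple the generator can decide which Haskell arguments are explicit and which become optional keyword arguments.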
  startWith s t: whether s starts with t.
  Further helpers prepend the elements of the second map into the first
    one; split an argument string on "," into the name, type, default value
    and required-ness; extract the explicit and the implicit arguments from
    all arguments; and decide whether the "out" key is supported in an
    argument dictionary.
  The generated functions take a function's name, description, argument
    names and argument types, and produce the generated signature and
    function definition; helpers return the necessary arguments' names,
    types and default values.

MXNet.Core.Base.Internal.TH.Symbol
  Registers the symbol operators.

MXNet.Core.Base.Internal.TH.NDArray
  Result representation for a generic NDArray op; registers the immutable
  and the mutable versions of the ndarray operators.

MXNet.Core.Base.NDArray
  - A wrapper provides pretty printing for multi-dimensional matrices;
    NDArray is the main array type alias.
  - Wait for all asynchronous operations in MXNet to finish.
  - Make a new (empty) NDArray with a specified shape, context and data
    type.
  - Get the shape, size and context of a given ndarray; make a copy of it;
    and get the data stored in it.
  - Return a sliced ndarray (given beginning and end indices) or a
    sub-ndarray (given a single index) that shares memory with the current
    one.
  - Block until all pending write operations on the current ndarray are
    finished.
  - One-hot encode indices into a matrix `out`, given an ndarray containing
    the indices of the categorical features and the result holder of the
    encoding.
  - Create a new NDArray with a specified shape and context, filled with 0,
    with 1, or with a given value; or create one that copies its content
    from a source array.

MXNet.Core.Base.Executor
  - Type alias for executor variables.
  - Make an executor using the given handle.
  - Executor forward method, taking a flag for whether this forward pass is
    for evaluation purposes, and the backward method.
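The one-hot encoding described in the NDArray module takes an array of categorical indices and a pre-allocated result holder `out`. A plain-Python sketch of that contract (not the library's function): each row of `out` gets a 1 in the column named by the corresponding index, and 0 elsewhere.

```python
def onehot_encode(indices, out):
    # Clear the result holder first, then set one 1.0 per row.
    for row in out:
        for j in range(len(row)):
            row[j] = 0.0
    for i, idx in enumerate(indices):
        out[i][idx] = 1.0
    return out

out = [[0.0] * 4 for _ in range(3)]
print(onehot_encode([2, 0, 3], out))
# [[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
```

Passing `out` explicitly mirrors the documented API shape, where the caller supplies the result holder rather than receiving a fresh allocation.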
(c) 2016-2017 Tao HeMITsighingnow@gmail.comNone"#-hType alias for variable.-Make a new symbolic variable with given name.!Get the name of a given variable."Get specified attribute of symbol."Set specified attribute of symbol.QInfer the shape of the given symbol, return the in, out and auxiliary shape size.iGet the autodiff of current symbol. This function can only be used if current symbol is a loss function.<Bind with explicit argument mapping (name -- value mapping).?Bind without explicit argument mapping (name -- value mapping).List all input arguments.List all output results.List all auxiliary states.4Provide a globally unique serial ID for each symbol.?Generate a globally unique name for each symbol, thread safely.Name.Result variable.(c) 2016 Tao HeMITsighingnow@gmail.comNone  !"#$'<%&()*+,-./0123456789:;=>?@ABCDEFGHIJKLMNOPQRSZ[\]^_`abcdefghij# !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~                             !"#$%&'()*+,-./01234564!7%89:;< =>?@A"B$CDEFGHIJKLMNOPQRSTUVWXYZ[K\L]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLM6NOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~7801234564!7%89:;< =>?@A"B$CDEFGHIJKLMNOPQRSTUVWXYZ[K\L]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLM6NOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~78ST      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=> ? 
Package: mxnet-0.2.0.0 (4yBVU5MYmQH4r79dPVoUYo)

Modules: MXNet.Core.Base.DType, MXNet.Core.Base.HMap, MXNet.Core.Types.Internal, MXNet.Core.Predict.Internal, MXNet.Core.NNVM.Internal, MXNet.Core.Base.Internal, MXNet.Core.Base.Internal.TH.Symbol, MXNet.Core.Base.Internal.TH.NDArray, MXNet.Core.Base.NDArray, MXNet.Core.Base.Executor, MXNet.Core.Base.Symbol, MXNet.Core.Types.Internal.Raw, MXNet.Core.Predict.Internal.Raw, MXNet.Core.NNVM.Internal.Raw, MXNet.Core.Base.Internal.Raw, MXNet.Core.Base.Internal.TH, MXNet.Core.Base

Context: deviceType, deviceId

Neural-network combinators: fullyConnected, correlation, activation, leakyReLU, softmaxActivation, dropout, batchNorm, instanceNorm, l2Normalization, convolution, lrn, deconvolution, pooling, roiPooling, rnn, embedding, bilinearSampler, gridGenerator, upSampling, spatialTransformer, linearRegressionOutput, logisticRegressionOutput, softmaxOutput, maeRegressionOutput, svmOutput, softmaxCrossEntropy, smoothL1, identityAttachKLSparsereg, makeLoss, blockGrad, custom

Tensor operations: dot, reshape, transpose; element-wise arithmetic operators (+.), (-.), (*.), (/.), (^.) with scalar, reversed-scalar and in-place (+=., -=., *=., /=., ^=.) variants; _Maximum, _Minimum, equal, notEqual, greater, greaterEqual, lesser, lesserEqual (and their primed symbol variants _Maximum', _Minimum', equal', notEqual', greater', greaterEqual', lesser', lesserEqual')

DType: typeid, typename; INT32, UINT8, FLOAT16, FLOAT64, FLOAT32; contextCPU, contextGPU; instances $fDTypeInt32, $fDTypeInt8, $fDTypeDouble, $fDTypeFloat, $fEqContext, $fShowContext

HMap: ShowKV (show'), MatchKVList (mergeTo'), HMap, KV, (:=), nil, add, add', (.+.), get, (.->.), update, set, mergeTo, dump; instances $fInDictkv:, $fInDictkv:0, $fMatchKVList:kvs2, $fMatchKVList[]kvs2, $fShowHMap, $fShowKV:, $fShowKV[]

Handle and scalar types: NDListHandle, PredictorHandle, MXKVStoreServerController, MXKVStoreUpdater, CustomOpPropCreator, ExecutorMonitorCallback, RtcHandle, RecordIOHandle, KVStoreHandle, DataIterHandle, DataIterCreator, ExecutorHandle, AtomicSymbolHandle, AtomicSymbolCreator, FunctionHandle, NDArrayHandle, GraphHandle, SymbolHandle, OpHandle, MXFloat, MXUInt, NNUInt
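The `leakyReLU` combinator listed above supports the variants documented in the module ("elu": y = x if x > 0 else slope*(exp x - 1); "leaky": y = x if x > 0 else slope * x). The scalar functions below are pure reference sketches of those formulas, not the generated operator, which runs on symbols and NDArrays:

```haskell
-- Reference formulas for two LeakyReLU activation variants.
-- elu: negative inputs follow a saturating exponential curve.
elu :: Double -> Double -> Double
elu slope x = if x > 0 then x else slope * (exp x - 1)

-- leaky: negative inputs are scaled by a small fixed slope.
leaky :: Double -> Double -> Double
leaky slope x = if x > 0 then x else slope * x

main :: IO ()
main = do
  print (leaky 0.25 (-2))  -- -0.5
  print (elu 1.0 2)        -- positive branch: 2.0
```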
mxPredCreatemxPredCreatePartialOutmxPredGetOutputShapemxPredSetInput mxPredForwardmxPredPartialForwardmxPredGetOutput mxPredFreemxNDListCreate mxNDListGet mxNDListFreennAPISetLastErrornnGetLastErrornnListAllOpNames nnGetOpHandlennListUniqueOps nnGetOpInfonnSymbolCreateAtomicSymbolnnSymbolCreateVariablennSymbolCreateGroupnnAddControlDeps nnSymbolFree nnSymbolCopy nnSymbolPrintnnSymbolGetAttrnnSymbolSetAttrsnnSymbolListAttrsnnSymbolListInputVariablesnnSymbolListInputNamesnnSymbolListOutputNamesnnSymbolGetInternalsnnSymbolGetOutputnnSymbolCompose nnGraphCreate nnGraphFreennGraphGetSymbolnnGraphSetJSONAttrnnGraphGetJSONAttrnnGraphSetNodeEntryListAttr_nnGraphApplyPassesmxGetLastError mxRandomSeedmxNotifyShutdownmxSetProfilerConfigmxSetProfilerState mxDumpProfilemxNDArrayCreateNonemxNDArrayCreatemxNDArrayCreateExmxNDArrayLoadFromRawBytesmxNDArraySaveRawBytes mxNDArraySave mxNDArrayLoadmxNDArraySyncCopyFromCPUmxNDArraySyncCopyToCPUmxNDArrayWaitToReadmxNDArrayWaitToWritemxNDArrayWaitAll mxNDArrayFreemxNDArraySlice mxNDArrayAtmxNDArrayReshapemxNDArrayGetShapemxNDArrayGetDatamxNDArrayGetDTypemxNDArrayGetContextmxListFunctions mxGetFunction mxFuncGetInfomxFuncDescribe mxFuncInvokemxFuncInvokeExmxImperativeInvokemxListAllOpNames mxSymbolListAtomicSymbolCreatorsmxSymbolGetAtomicSymbolNamemxSymbolGetAtomicSymbolInfomxSymbolCreateAtomicSymbolmxSymbolCreateVariablemxSymbolCreateGroupmxSymbolCreateFromFilemxSymbolCreateFromJSONmxSymbolSaveToFilemxSymbolSaveToJSON mxSymbolFree mxSymbolCopy mxSymbolPrintmxSymbolGetNamemxSymbolGetAttrmxSymbolSetAttrmxSymbolListAttrmxSymbolListAttrShallowmxSymbolListArgumentsmxSymbolListOutputsmxSymbolGetInternalsmxSymbolGetOutputmxSymbolListAuxiliaryStatesmxSymbolCompose 
mxSymbolGradmxSymbolInferShapemxSymbolInferShapePartialmxSymbolInferTypemxExecutorFreemxExecutorPrintmxExecutorForwardmxExecutorBackwardmxExecutorOutputsmxExecutorBindmxExecutorBindXmxExecutorBindEXmxExecutorSetMonitorCallbackmxListDataItersmxDataIterCreateItermxDataIterGetIterInfomxDataIterFreemxDataIterNextmxDataIterBeforeFirstmxDataIterGetDatamxDataIterGetIndexmxDataIterGetPadNummxDataIterGetLabel mxInitPSEnvmxKVStoreCreate mxKVStoreFree mxKVStoreInit mxKVStorePush mxKVStorePullmxKVStoreSetUpdatermxKVStoreGetTypemxKVStoreGetRankmxKVStoreGetGroupSizemxKVStoreIsWorkerNodemxKVStoreIsServerNodemxKVStoreIsSchedulerNodemxKVStoreBarriermxKVStoreSetBarrierBeforeExitmxKVStoreRunServermxKVStoreSendCommmandToServersmxKVStoreGetNumDeadNodemxRecordIOWriterCreatemxRecordIOWriterFreemxRecordIOWriterWriteRecordmxRecordIOWriterTellmxRecordIOReaderCreatemxRecordIOReaderFreemxRecordIOReaderReadRecordmxRecordIOReaderSeek mxRtcCreate mxRtcPush mxRtcFreemxCustomOpRegister batchnorm batchnorm_v1bilinearsampler blockgradcastconvolution_v1cropelementwisesumfullyconnected gridgeneratoridentityattachklsparsereg instancenorml2normalization leakyrelulinearregressionoutputlogisticregressionoutputmaeregressionoutputmakeloss pooling_v1 roipooling svmoutput sequencelast sequencemasksequencereverse slicechannelsoftmaxactivation softmaxoutputspatialtransformerswapaxis upsampling_Div _DivScalar_Equal _EqualScalar_Greater_GreaterEqualScalar_GreaterScalar_Greater_Equal_Hypot _HypotScalar_Lesser_LesserEqualScalar _LesserScalar _Lesser_Equal_MaximumScalar_MinimumScalar_Minus _MinusScalar_Mod _ModScalar_Mul _MulScalar_NotEqualScalar _Not_Equal_Plus _PlusScalar_Power _PowerScalar _RDivScalar _RMinusScalar _RModScalar _RPowerScalar_add _backward_abs_backward_arccos_backward_arccosh_backward_arcsin_backward_arcsinh_backward_arctan_backward_arctanh_backward_cbrt _backward_cos_backward_cosh_backward_degrees_backward_expm1_backward_gamma_backward_gammaln_backward_hypot_scalar 
_backward_log_backward_log10_backward_log1p_backward_log2_backward_log_softmax_backward_maximum_scalar_backward_minimum_scalar_backward_mod_scalar_backward_mul_scalar_backward_power_scalar_backward_radians_backward_rcbrt_backward_rdiv_scalar_backward_reciprocal_backward_relu_backward_rmod_scalar_backward_rpower_scalar_backward_rsqrt_backward_sigmoid_backward_sign _backward_sin_backward_sinh_backward_smooth_l1_backward_softmax_backward_sqrt_backward_square _backward_tan_backward_tanh_contrib_CTCLoss_contrib_DeformableConvolution_contrib_DeformablePSROIPooling_contrib_MultiBoxDetection_contrib_MultiBoxPrior_contrib_MultiBoxTarget_contrib_MultiProposal_contrib_PSROIPooling_contrib_Proposal_contrib_count_sketch_contrib_ctc_loss_contrib_dequantize _contrib_fft _contrib_ifft_contrib_quantize_copy_copyto _crop_assign_crop_assign_scalar_cvcopyMakeBorder _cvimdecode _cvimread _cvimresize_div _div_scalar_equal _equal_scalar _grad_add_greater_greater_equal_greater_equal_scalar_greater_scalar_hypot _hypot_scalar_identity_with_attr_like_rhs _imdecode_lesser _lesser_equal_lesser_equal_scalar_lesser_scalar _linalg_gelqf _linalg_gemm _linalg_gemm2 _linalg_potrf _linalg_potri_linalg_sumlogdiag _linalg_syevd _linalg_syrk _linalg_trmm _linalg_trsm_maximum_maximum_scalar_minimum_minimum_scalar_minus _minus_scalar_mod _mod_scalar_mul _mul_scalar _not_equal_not_equal_scalar_onehot_encode_plus _plus_scalar_power _power_scalar _rdiv_scalar_rminus_scalar _rmod_scalar_rpower_scalar_sample_exponential _sample_gamma%_sample_generalized_negative_binomial_sample_multinomial_sample_negative_binomial_sample_normal_sample_poisson_sample_uniform_scatter_elemwise_div_scatter_minus_scalar_scatter_plus_scalar _set_value _slice_assign_sparse_ElementWiseSum _sparse_abs _sparse_add_n_sparse_arccos_sparse_arccosh_sparse_arcsin_sparse_arcsinh_sparse_arctan_sparse_arctanh_sparse_cast_storage _sparse_ceil _sparse_cos _sparse_cosh_sparse_degrees 
_sparse_dot_sparse_elemwise_add_sparse_elemwise_div_sparse_elemwise_mul_sparse_elemwise_sub _sparse_exp _sparse_expm1 _sparse_fix _sparse_floor _sparse_gamma_sparse_gammaln _sparse_log _sparse_log10 _sparse_log1p _sparse_log2_sparse_make_loss_sparse_negative_sparse_radians _sparse_relu_sparse_retain _sparse_rint _sparse_round _sparse_rsqrt_sparse_sigmoid _sparse_sign _sparse_sin _sparse_sinh _sparse_slice _sparse_sqrt_sparse_square_sparse_stop_gradient _sparse_sum _sparse_tan _sparse_tanh _sparse_trunc_sparse_zeros_like _square_sum_subabs adam_updateadd_narccosarccosharcsinarcsinharctanarctanhargmaxargmax_channelargminargsort batch_dot batch_take broadcast_addbroadcast_axesbroadcast_axis broadcast_divbroadcast_equalbroadcast_greaterbroadcast_greater_equalbroadcast_hypotbroadcast_lesserbroadcast_lesser_equalbroadcast_maximumbroadcast_minimumbroadcast_minus broadcast_mod broadcast_mulbroadcast_not_equalbroadcast_plusbroadcast_power broadcast_sub broadcast_to cast_storagecbrtceilchoose_element_0indexclipconcatcoscoshdegrees elemwise_add elemwise_div elemwise_mul elemwise_subexp expand_dimsexpm1fill_element_0indexfixflattenflipfloor ftrl_updategammagammaln gather_ndidentity linalg_gelqf linalg_gemm linalg_gemm2 linalg_potrf linalg_potrilinalg_sumlogdiag linalg_syrk linalg_trmm linalg_trsmloglog10log1plog2 log_softmax make_lossmaxmax_axismeanminmin_axismp_sgd_mom_update mp_sgd_updatenanprodnansumnegativenormone_hot ones_likepadpickprodradiansrcbrt reciprocalrelurepeat reshape_likereverserintrmsprop_updatermspropalex_updateroundrsqrtsample_exponential sample_gamma$sample_generalized_negative_binomialsample_multinomialsample_negative_binomial sample_normalsample_poissonsample_uniform scatter_ndsgd_mom_update sgd_updatesigmoidsignsinsinhslice slice_axis smooth_l1softmaxsoftmax_cross_entropysortsplitsqrtsquarestack stop_gradientsumsum_axisswapaxestantanhtiletopktrunc zeros_likeNDArrayOpResulttoResult 
fromResult$fNDArrayOpResult[]$fNDArrayOpResultNDArrayHandle$fNDArrayOpResult()where_ activation' batchnorm' batchnorm_v1'bilinearsampler' blockgrad'cast' convolution'convolution_v1' correlation'crop'custom'deconvolution'dropout'elementwisesum' embedding'fullyconnected'gridgenerator'identityattachklsparsereg' instancenorm'l2normalization'lrn' leakyrelu'linearregressionoutput'logisticregressionoutput'maeregressionoutput' makeloss'pooling' pooling_v1'rnn' roipooling' svmoutput' sequencelast' sequencemask'sequencereverse' slicechannel'softmaxactivation'softmaxoutput'spatialtransformer' swapaxis' upsampling'_Div' _DivScalar'_Equal' _EqualScalar' _Greater'_GreaterEqualScalar'_GreaterScalar'_Greater_Equal'_Hypot' _HypotScalar'_Lesser'_LesserEqualScalar'_LesserScalar'_Lesser_Equal'_MaximumScalar'_MinimumScalar'_Minus' _MinusScalar'_Mod' _ModScalar'_Mul' _MulScalar'_NotEqualScalar' _Not_Equal'_Plus' _PlusScalar'_Power' _PowerScalar' _RDivScalar'_RMinusScalar' _RModScalar'_RPowerScalar'_add'_backward_abs'_backward_arccos'_backward_arccosh'_backward_arcsin'_backward_arcsinh'_backward_arctan'_backward_arctanh'_backward_cbrt'_backward_cos'_backward_cosh'_backward_degrees'_backward_expm1'_backward_gamma'_backward_gammaln'_backward_hypot_scalar'_backward_log'_backward_log10'_backward_log1p'_backward_log2'_backward_log_softmax'_backward_maximum_scalar'_backward_minimum_scalar'_backward_mod_scalar'_backward_mul_scalar'_backward_power_scalar'_backward_radians'_backward_rcbrt'_backward_rdiv_scalar'_backward_reciprocal'_backward_relu'_backward_rmod_scalar'_backward_rpower_scalar'_backward_rsqrt'_backward_sigmoid'_backward_sign'_backward_sin'_backward_sinh'_backward_smooth_l1'_backward_softmax'_backward_sqrt'_backward_square'_backward_tan'_backward_tanh'_contrib_CTCLoss'_contrib_DeformableConvolution' 
_contrib_DeformablePSROIPooling'_contrib_MultiBoxDetection'_contrib_MultiBoxPrior'_contrib_MultiBoxTarget'_contrib_MultiProposal'_contrib_PSROIPooling'_contrib_Proposal'_contrib_count_sketch'_contrib_ctc_loss'_contrib_dequantize' _contrib_fft'_contrib_ifft'_contrib_quantize'_copy'_copyto' _crop_assign'_crop_assign_scalar'_cvcopyMakeBorder' _cvimdecode' _cvimread' _cvimresize'_div' _div_scalar'_equal'_equal_scalar' _grad_add' _greater'_greater_equal'_greater_equal_scalar'_greater_scalar'_hypot'_hypot_scalar'_identity_with_attr_like_rhs' _imdecode'_lesser'_lesser_equal'_lesser_equal_scalar'_lesser_scalar'_linalg_gelqf' _linalg_gemm'_linalg_gemm2'_linalg_potrf'_linalg_potri'_linalg_sumlogdiag'_linalg_syevd' _linalg_syrk' _linalg_trmm' _linalg_trsm' _maximum'_maximum_scalar' _minimum'_minimum_scalar'_minus'_minus_scalar'_mod' _mod_scalar'_mul' _mul_scalar' _not_equal'_not_equal_scalar'_onehot_encode'_plus' _plus_scalar'_power'_power_scalar' _rdiv_scalar'_rminus_scalar' _rmod_scalar'_rpower_scalar'_sample_exponential'_sample_gamma'&_sample_generalized_negative_binomial'_sample_multinomial'_sample_negative_binomial'_sample_normal'_sample_poisson'_sample_uniform'_scatter_elemwise_div'_scatter_minus_scalar'_scatter_plus_scalar' _set_value'_slice_assign'_sparse_ElementWiseSum' _sparse_abs'_sparse_add_n'_sparse_arccos'_sparse_arccosh'_sparse_arcsin'_sparse_arcsinh'_sparse_arctan'_sparse_arctanh'_sparse_cast_storage' _sparse_ceil' _sparse_cos' _sparse_cosh'_sparse_degrees' _sparse_dot'_sparse_elemwise_add'_sparse_elemwise_div'_sparse_elemwise_mul'_sparse_elemwise_sub' _sparse_exp'_sparse_expm1' _sparse_fix'_sparse_floor'_sparse_gamma'_sparse_gammaln' _sparse_log'_sparse_log10'_sparse_log1p' _sparse_log2'_sparse_make_loss'_sparse_negative'_sparse_radians' _sparse_relu'_sparse_retain' _sparse_rint'_sparse_round'_sparse_rsqrt'_sparse_sigmoid' _sparse_sign' _sparse_sin' _sparse_sinh'_sparse_slice' _sparse_sqrt'_sparse_square'_sparse_stop_gradient' _sparse_sum' _sparse_tan' 
_sparse_tanh'_sparse_trunc'_sparse_zeros_like' _square_sum'_sub'abs' adam_update'add_n'arccos'arccosh'arcsin'arcsinh'arctan'arctanh'argmax'argmax_channel'argmin'argsort' batch_dot' batch_take'broadcast_add'broadcast_axes'broadcast_axis'broadcast_div'broadcast_equal'broadcast_greater'broadcast_greater_equal'broadcast_hypot'broadcast_lesser'broadcast_lesser_equal'broadcast_maximum'broadcast_minimum'broadcast_minus'broadcast_mod'broadcast_mul'broadcast_not_equal'broadcast_plus'broadcast_power'broadcast_sub' broadcast_to' cast_storage'cbrt'ceil'choose_element_0index'clip'concat'cos'cosh'degrees'dot' elemwise_add' elemwise_div' elemwise_mul' elemwise_sub'exp' expand_dims'expm1'fill_element_0index'fix'flatten'flip'floor' ftrl_update'gamma'gammaln' gather_nd' identity' linalg_gelqf' linalg_gemm' linalg_gemm2' linalg_potrf' linalg_potri'linalg_sumlogdiag' linalg_syrk' linalg_trmm' linalg_trsm'log'log10'log1p'log2' log_softmax' make_loss'max' max_axis'mean'min' min_axis'mp_sgd_mom_update'mp_sgd_update'nanprod'nansum' negative'norm'one_hot' ones_like'pad'pick'prod'radians'rcbrt' reciprocal'relu'repeat'reshape' reshape_like'reverse'rint'rmsprop_update'rmspropalex_update'round'rsqrt'sample_exponential' sample_gamma'%sample_generalized_negative_binomial'sample_multinomial'sample_negative_binomial'sample_normal'sample_poisson'sample_uniform' scatter_nd'sgd_mom_update' sgd_update'sigmoid'sign'sin'sinh'slice' slice_axis' smooth_l1'softmax'softmax_cross_entropy'sort'split'sqrt'square'stack'stop_gradient'sum' sum_axis' swapaxes'tan'tanh'tile'topk' transpose'trunc'where_' zeros_like' PrettyWrapperMkPretty runPrettyNDArray getHandlewaitAllmakeEmptyNDArray makeNDArrayndshapendsizecontextcopyitemsat waitToRead onehotEncodezerosonesfullarray$fNeuralNDArray$fTensorNDArray$fFloatingNDArray$fFractionalNDArray $fNumNDArray $fEqNDArray $fEqNDArray0 $fEqNDArray1 $fShowNDArray$fPrettyPrettyWrapperExecutor makeExecutorforwardbackward getOutputsSymbolvariablegetNamegetAttrsetAttr 
infershapegradbindbind' listInputs listOutputslistAuxiliariessymidnaming$fNeuralSymbol$fTensorSymbol$fFloatingSymbol$fFractionalSymbol $fNumSymbol $fShowSymbolbase GHC.Floatghc-prim GHC.Classes Data.FoldablenullInDictFindKVIfHasKeyget'update' getKVListYesNoKVListNilCons$fStorableExecutorHandlemakeMXKVStoreServerControllermakeMXKVStoreUpdatermxNDListFree'_mxNDListGetImpl'_mxNDListCreate'_ mxPredFree'_mxPredGetOutput'_mxPredPartialForward'_mxPredForward'_mxPredSetInput'_mxPredGetOutputShapeImpl'_mxPredCreatePartialOut'_mxPredCreate'_mxPredGetOutputShapeImplmxNDListGetImplnnSymbolComposeImplnnSymbolListAttrsImplnnGraphApplyPasses'_nnGraphSetNodeEntryListAttr_'_nnGraphGetJSONAttr'_nnGraphSetJSONAttr'_nnGraphGetSymbol'_ nnGraphFree'_nnGraphCreate'_nnSymbolComposeImpl'_nnSymbolGetOutput'_nnSymbolGetInternals'_nnSymbolListOutputNamesImpl'_nnSymbolListInputNamesImpl'_ nnSymbolListInputVariablesImpl'_nnSymbolListAttrsImpl'_nnSymbolSetAttrs'_nnSymbolGetAttr'_nnSymbolPrint'_nnSymbolCopy'_nnSymbolFree'_nnAddControlDeps'_nnSymbolCreateGroup'_nnSymbolCreateVariable'_nnSymbolCreateAtomicSymbol'_nnGetOpInfoImpl'_nnListUniqueOpsImpl'_nnGetOpHandle'_nnListAllOpNamesImpl'_nnGetLastError'_nnAPISetLastError'_nnListAllOpNamesImplnnListUniqueOpsImplnnGetOpInfoImplnnSymbolListInputVariablesImplnnSymbolListInputNamesImplnnSymbolListOutputNamesImplmxNDArrayLoadImplmxImperativeInvokeImplmxSymbolInferTypeImplmxExecutorOutputsImplmxDataIterGetIterInfoImplmxDataIterGetIndexImplmxCustomOpRegister'_ mxRtcFree'_ mxRtcPush'_ mxRtcCreate'_mxRecordIOReaderSeek'_mxRecordIOReaderReadRecord'_mxRecordIOReaderFree'_mxRecordIOReaderCreate'_mxRecordIOWriterTell'_mxRecordIOWriterWriteRecord'_mxRecordIOWriterFree'_mxRecordIOWriterCreate'_mxKVStoreGetNumDeadNode'_ 
mxKVStoreSendCommmandToServers'_mxKVStoreSetBarrierBeforeExit'_mxKVStoreBarrier'_mxKVStoreIsSchedulerNode'_mxKVStoreIsServerNode'_mxKVStoreIsWorkerNode'_mxKVStoreGetGroupSize'_mxKVStoreGetRank'_mxKVStoreGetType'_mxKVStorePull'_mxKVStorePush'_mxKVStoreInit'_mxKVStoreFree'_mxKVStoreCreate'_ mxInitPSEnv'_mxDataIterGetLabel'_mxDataIterGetPadNum'_mxDataIterGetIndexImpl'_mxDataIterGetData'_mxDataIterBeforeFirst'_mxDataIterNext'_mxDataIterFree'_mxDataIterGetIterInfoImpl'_mxDataIterCreateIter'_mxListDataItersImpl'_mxExecutorSetMonitorCallback'_mxExecutorBindEX'_mxExecutorBindX'_mxExecutorBind'_mxExecutorOutputsImpl'_mxExecutorBackward'_mxExecutorForward'_mxExecutorPrint'_mxExecutorFree'_mxSymbolInferTypeImpl'_mxSymbolInferShapePartialImpl'_mxSymbolInferShapeImpl'_mxSymbolGrad'_mxSymbolCompose'_!mxSymbolListAuxiliaryStatesImpl'_mxSymbolGetOutput'_mxSymbolGetInternals'_mxSymbolListOutputsImpl'_mxSymbolListArgumentsImpl'_mxSymbolListAttrShallowImpl'_mxSymbolListAttrImpl'_mxSymbolSetAttr'_mxSymbolGetAttr'_mxSymbolGetName'_mxSymbolPrint'_mxSymbolCopy'_mxSymbolFree'_mxSymbolSaveToJSON'_mxSymbolSaveToFile'_mxSymbolCreateFromJSON'_mxSymbolCreateFromFile'_mxSymbolCreateGroup'_mxSymbolCreateVariable'_mxSymbolCreateAtomicSymbol'_!mxSymbolGetAtomicSymbolInfoImpl'_mxSymbolGetAtomicSymbolName'_&mxSymbolListAtomicSymbolCreatorsImpl'_mxListAllOpNamesImpl'_mxImperativeInvokeImpl'_mxFuncInvokeEx'_mxFuncInvoke'_mxFuncDescribe'_mxFuncGetInfoImpl'_mxGetFunction'_mxListFunctionsImpl'_mxNDArrayGetContext'_mxNDArrayGetDType'_mxNDArrayGetData'_mxNDArrayGetShapeImpl'_mxNDArrayReshape'_ 
mxNDArrayAt'_mxNDArraySlice'_mxNDArrayFree'_mxNDArrayWaitAll'_mxNDArrayWaitToWrite'_mxNDArrayWaitToRead'_mxNDArraySyncCopyToCPU'_mxNDArraySyncCopyFromCPU'_mxNDArrayLoadImpl'_mxNDArraySave'_mxNDArraySaveRawBytes'_mxNDArrayLoadFromRawBytes'_mxNDArrayCreateEx'_mxNDArrayCreate'_mxNDArrayCreateNone'_mxDumpProfile'_mxSetProfilerState'_mxSetProfilerConfig'_mxNotifyShutdown'_mxRandomSeed'_mxGetLastError'_mxNDArrayGetShapeImplmxListFunctionsImplmxFuncGetInfoImplmxListAllOpNamesImpl$mxSymbolListAtomicSymbolCreatorsImplmxSymbolGetAtomicSymbolInfoImplmxSymbolListAttrImplmxSymbolListAttrShallowImplmxSymbolListArgumentsImplmxSymbolListOutputsImplmxSymbolListAuxiliaryStatesImplmxSymbolInferShapeImplmxSymbolInferShapePartialImplmxListDataItersImplregisterNDArrayOpsregisterSymbolOpsmakeNDArrayFuncmakeSymbolFunc startWith updateMap splitArgTypegetExplicitArggetImplicitArg