(c) 2013-2018 Brendan Hay
Mozilla Public License, v. 2.0
Brendan Hay <brendan.g.hay+amazonka@gmail.com>
auto-generated
non-portable (GHC extensions)

amazonka-ml

The sort order specified in a listing condition. Possible values include the following:

* asc - Present the information in ascending order (from A-Z).
* dsc - Present the information in descending order (from Z-A).

A list of the variables to use in searching or filtering Evaluation:

* CreatedAt - Sets the search criteria to the Evaluation creation date.
* Status - Sets the search criteria to the Evaluation status.
* Name - Sets the search criteria to the contents of the Evaluation Name.
* IAMUser - Sets the search criteria to the user account that invoked an evaluation.
* MLModelId - Sets the search criteria to the Predictor that was evaluated.
* DataSourceId - Sets the search criteria to the DataSource used in evaluation.
* DataUri - Sets the search criteria to the data file(s) used in evaluation. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

Object status with the following possible values: PENDING, INPROGRESS, FAILED, COMPLETED, DELETED.

Contains the key values of DetailsMap:

* PredictiveModelType - Indicates the type of the MLModel.
* Algorithm - Indicates the algorithm that was used for the MLModel.
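The asc/dsc sort order above amounts to a plain lexicographic ordering of the listing. A minimal sketch (Python for illustration — the amazonka bindings themselves are Haskell, and the helper name is hypothetical):

```python
def sort_listing(names, order="asc"):
    """Order a listing the way SortOrder describes:
    'asc' presents information A-Z, 'dsc' presents it Z-A."""
    if order not in ("asc", "dsc"):
        raise ValueError("SortOrder must be 'asc' or 'dsc'")
    return sorted(names, reverse=(order == "dsc"))
```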
A list of the variables to use in searching or filtering DataSource:

* CreatedAt - Sets the search criteria to the DataSource creation date.
* Status - Sets the search criteria to the DataSource status.
* Name - Sets the search criteria to the contents of the DataSource Name.
* DataUri - Sets the search criteria to the URI of data files used to create the DataSource. The URI can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
* IAMUser - Sets the search criteria to the user account that invoked the DataSource creation.

A list of the variables to use in searching or filtering BatchPrediction:

* CreatedAt - Sets the search criteria to the BatchPrediction creation date.
* Status - Sets the search criteria to the BatchPrediction status.
* Name - Sets the search criteria to the contents of the BatchPrediction Name.
* IAMUser - Sets the search criteria to the user account that invoked the BatchPrediction creation.
* MLModelId - Sets the search criteria to the MLModel used in the BatchPrediction.
* DataSourceId - Sets the search criteria to the DataSource used in the BatchPrediction.
* DataURI - Sets the search criteria to the data file(s) used in the BatchPrediction. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

The function used to train an MLModel. Training choices supported by Amazon ML include the following:

* SGD - Stochastic Gradient Descent.
* RandomForest - Random forest of decision trees.
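Of these training choices, SGD is the one the MLModel documentation later describes in detail: its goal is to minimize the gradient of the loss function by updating the coefficients one observation at a time. A minimal one-variable sketch (Python for illustration; this is the general technique, not Amazon ML's implementation):

```python
def sgd_step(w, x, y, lr=0.01):
    """One stochastic gradient descent update for squared loss on a
    single observation: w moves against the gradient of (w*x - y)**2."""
    grad = 2 * (w * x - y) * x
    return w - lr * grad

# Fit y = 2x from three observations by repeated single-observation updates.
w = 0.0
for _ in range(200):
    for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
        w = sgd_step(w, x, y)
```

After the loop, w has converged close to the true coefficient 2.0.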
A custom key-value pair associated with an ML object, such as an ML model. See: smart constructor.

Describes the data specification of a DataSource. See: smart constructor.

Describes the DataSource details specific to Amazon Redshift. See: smart constructor.

Describes the database credentials for connecting to a database on an Amazon Redshift cluster. See: smart constructor.

Describes the database details required to connect to an Amazon Redshift database. See: smart constructor.

Describes the data specification of an Amazon Redshift DataSource. See: smart constructor.

Describes the real-time endpoint information for an MLModel. See: smart constructor.

The datasource details that are specific to Amazon RDS. See: smart constructor.

The database credentials to connect to a database on an RDS DB instance. See: smart constructor.

The database details of an Amazon RDS database. See: smart constructor.

The data specification of an Amazon Relational Database Service (Amazon RDS) DataSource. See: smart constructor.

The output from a Predict operation:

* Details - Contains the following attributes: DetailsAttributes.PREDICTIVE_MODEL_TYPE - REGRESSION | BINARY | MULTICLASS; DetailsAttributes.ALGORITHM - SGD
* PredictedLabel - Present for either a BINARY or MULTICLASS MLModel request.
* PredictedScores - Contains the raw classification score corresponding to each label.
* PredictedValue - Present for a REGRESSION MLModel request.

See: smart constructor.

Measurements of how well the MLModel performed on known observations. One of the following metrics is returned, based on the type of the MLModel:

* BinaryAUC: The binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
* RegressionRMSE: The regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
* MulticlassAvgFScore: The multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide (http://docs.aws.amazon.com/machine-learning/latest/dg). See: smart constructor.

Represents the output of a GetMLModel operation. The content consists of the detailed metadata and the current status of the MLModel. See: smart constructor.

Represents the output of a GetEvaluation operation. The content consists of the detailed metadata and data file information and the current status of the Evaluation. See: smart constructor.

Represents the output of the GetDataSource operation. The content consists of the detailed metadata and data file information and the current status of the DataSource. See: smart constructor.

Represents the output of a GetBatchPrediction operation. The content consists of the detailed metadata, the status, and the data file information of a BatchPrediction. See: smart constructor.

Creates a value of BatchPrediction with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The status of the BatchPrediction. This element can have one of the following values: PENDING - Amazon Machine Learning (Amazon ML) submitted a request to generate predictions for a batch of observations. INPROGRESS - The process is underway. FAILED - The request to perform a batch prediction did not run to completion. It is not usable.
COMPLETED - The batch prediction process completed successfully. DELETED - The BatchPrediction is marked as deleted. It is not usable.
* The time of the most recent edit to the BatchPrediction. The time is expressed in epoch time.
* The time that the BatchPrediction was created. The time is expressed in epoch time.
* Undocumented member.
* The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).
* The ID of the MLModel that generated predictions for the BatchPrediction request.
* The ID of the DataSource that points to the group of observations to predict.
* Undocumented member.
* Undocumented member.
* The ID assigned to the BatchPrediction at creation. This value should be identical to the value of the BatchPredictionID in the request.
* Undocumented member.
* Undocumented member.
* The AWS user account that invoked the BatchPrediction. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
* A user-supplied name or description of the BatchPrediction.
* A description of the most recent details about processing the batch prediction request.
* The location of an Amazon S3 bucket or directory to receive the operation results. The following substrings are not allowed in the s3 key portion of the outputURI field: ':', '//', '/./', '/../'.

Creates a value of DataSource with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The current status of the DataSource. This element can have one of the following values: PENDING - Amazon Machine Learning (Amazon ML) submitted a request to create a DataSource. INPROGRESS - The creation process is underway. FAILED - The request to create a DataSource did not run to completion. It is not usable. COMPLETED - The creation process completed successfully. DELETED - The DataSource is marked as deleted. It is not usable.
* The number of data files referenced by the DataSource.
* The time of the most recent edit to the DataSource. The time is expressed in epoch time.
* The time that the DataSource was created. The time is expressed in epoch time.
* Undocumented member.
* The ID that is assigned to the DataSource during creation.
* Undocumented member.
* The total number of observations contained in the data files that the DataSource references.
* Undocumented member.
* Undocumented member.
* The AWS user account from which the DataSource was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
* A user-supplied name or description of the DataSource.
* The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used by a DataSource.
* The parameter is true if statistics need to be generated from the observation data.
* A description of the most recent details about creating the DataSource.
* Undocumented member.
* A JSON string that represents the splitting and rearrangement requirement used when this DataSource was created.
* Undocumented member.

Creates a value of Evaluation with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The status of the evaluation. This element can have one of the following values: PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel. INPROGRESS - The evaluation is underway. FAILED - The request to evaluate an MLModel did not run to completion. It is not usable. COMPLETED - The evaluation process completed successfully. DELETED - The Evaluation is marked as deleted. It is not usable.
* Measurements of how well the MLModel performed, using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel: BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable. MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance. For more information about performance metrics, please see the Amazon Machine Learning Developer Guide (http://docs.aws.amazon.com/machine-learning/latest/dg).
* The time of the most recent edit to the Evaluation. The time is expressed in epoch time.
* The time that the Evaluation was created. The time is expressed in epoch time.
* Undocumented member.
* The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used in the evaluation.
* The ID of the MLModel that is the focus of the evaluation.
* Undocumented member.
* Undocumented member.
* The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
* A user-supplied name or description of the Evaluation.
* The ID that is assigned to the Evaluation at creation.
* A description of the most recent details about evaluating the MLModel.
* The ID of the DataSource that is used to evaluate the MLModel.

Creates a value of MLModel with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The current status of an MLModel. This element can have one of the following values: PENDING - Amazon Machine Learning (Amazon ML) submitted a request to create an MLModel. INPROGRESS - The creation process is underway.
FAILED - The request to create an MLModel didn't run to completion. The model isn't usable. COMPLETED - The creation process completed successfully. DELETED - The MLModel is marked as deleted. It isn't usable.
* The time of the most recent edit to the MLModel. The time is expressed in epoch time.
* A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs. The following is the current set of training parameters: sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance. The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432. sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10. sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling the data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. sgd.l1RegularizationAmount - The coefficient regularization L1 norm, which controls overfitting the data by penalizing large coefficients. This parameter tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can't be used when L2 is specified. Use this parameter sparingly. sgd.l2RegularizationAmount - The coefficient regularization L2 norm, which controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can't be used when L1 is specified. Use this parameter sparingly.
* The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.
* The time that the MLModel was created. The time is expressed in epoch time.
* Undocumented member.
* The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).
* The ID assigned to the MLModel at creation.
* Undocumented member.
* Undocumented member.
* Undocumented member.
* Undocumented member.
* The algorithm used to train the MLModel. The following algorithm is supported: SGD -- Stochastic gradient descent. The goal of SGD is to minimize the gradient of the loss function.
* The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
* A user-supplied name or description of the MLModel.
* The current endpoint of the MLModel.
* The ID of the training DataSource. The CreateMLModel operation uses the TrainingDataSourceId.
* A description of the most recent details about accessing the MLModel.
* Identifies the MLModel category. The following are the available types: REGRESSION - Produces a numeric result. For example, "What price should a house be listed at?" BINARY - Produces one of two possible results. For example, "Is this a child-friendly web site?" MULTICLASS - Produces one of several possible results. For example, "Is this a HIGH-, LOW-, or MEDIUM-risk trade?"
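The documented ranges and defaults of the sgd.* training parameters lend themselves to a client-side sanity check before a request is built. A sketch under those documented constraints (a hypothetical helper, not part of amazonka-ml; Python for illustration):

```python
# Documented defaults for the sgd.* training parameters (hypothetical helper).
DEFAULTS = {
    "sgd.maxMLModelSizeInBytes": 33554432,
    "sgd.maxPasses": 10,
    "sgd.shuffleType": "none",
}

def validate_training_params(params):
    """Merge user parameters over the defaults and enforce the
    documented ranges before building a request."""
    p = dict(DEFAULTS, **params)
    size = int(p["sgd.maxMLModelSizeInBytes"])
    if not (100000 <= size <= 2147483648):
        raise ValueError("sgd.maxMLModelSizeInBytes out of range")
    passes = int(p["sgd.maxPasses"])
    if not (1 <= passes <= 10000):
        raise ValueError("sgd.maxPasses out of range")
    if p["sgd.shuffleType"] not in ("auto", "none"):
        raise ValueError("sgd.shuffleType must be 'auto' or 'none'")
    l1 = float(p.get("sgd.l1RegularizationAmount", 0.0))
    l2 = float(p.get("sgd.l2RegularizationAmount", 0.0))
    if l1 and l2:
        # The docs state L1 can't be used when L2 is specified, and vice versa.
        raise ValueError("L1 and L2 regularization can't be combined")
    return p
```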
Creates a value of PerformanceMetrics with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Undocumented member.
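Of the three PerformanceMetrics described earlier, RMSE is straightforward to state exactly: the square root of the mean squared difference between predicted and actual values. A minimal sketch (Python for illustration; the function name is hypothetical):

```python
import math

def rmse(predicted, actual):
    """Root Mean Square Error over paired predicted/actual values:
    the square root of the mean squared difference."""
    if len(predicted) != len(actual):
        raise ValueError("predicted and actual must have the same length")
    mse = sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    return math.sqrt(mse)
```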
Creates a value of Prediction with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The prediction value for a REGRESSION MLModel.
* The prediction label for either a BINARY or MULTICLASS MLModel.
* Undocumented member.
* Undocumented member.

Creates a value of RDSDataSpec with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The Amazon S3 location of the DataSchema.
* A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource. A DataSchema is not required if you specify a DataSchemaUri. Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema: { "version": "1.0", "recordAnnotationFieldName": "F1", "recordWeightFieldName": "F2", "targetFieldName": "F3", "dataFormat": "CSV", "dataFileContainsHeader": true, "attributes": [ { "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ], "excludedVariableNames": [ "F6" ] }
* A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.
There are multiple parameters that control what data is used to create a datasource:

* percentBegin - Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
* percentEnd - Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
* complement - The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter. For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data. Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}} Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}
* strategy - To change how Amazon ML splits the data for a datasource, use the strategy parameter. The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data. The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources: Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}} Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}} To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records. The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources: Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}} Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

* Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
* The query that is used to retrieve the observation data for the DataSource.
* The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
* The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
* The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy operation from Amazon RDS to an Amazon S3 task. For more information, see Role templates for data pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html).
* The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html).
* The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
* The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy operation from Amazon RDS to an Amazon S3 task.
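The random splitting strategy described above (a deterministic pseudo-random number per row, derived from the seed string and the row's offset, with complement selecting the rows outside the range) can be sketched as follows. This illustrates the idea only: Amazon ML's actual hash function is not documented here, so the sha256-based scoring below is an assumption:

```python
import hashlib

def random_split(rows, percent_begin, percent_end, seed, complement=False):
    """Sketch of the 'random' strategy: each row gets a deterministic
    pseudo-random score in [0, 100) derived from the seed string and the
    row's offset, and is kept if the score falls in
    [percent_begin, percent_end) -- or outside it when complement=True.
    Existing row ordering is preserved."""
    selected = []
    for offset, row in enumerate(rows):
        digest = hashlib.sha256(f"{seed}:{offset}".encode()).digest()
        score = int.from_bytes(digest[:8], "big") % 100
        inside = percent_begin <= score < percent_end
        if inside != complement:
            selected.append(row)
    return selected
```

With the same seed and range, the plain and complement calls produce disjoint datasources that together cover all of the input, which is exactly the training/evaluation pairing the documentation describes.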
A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource. A DataSchema is not required if you specify a DataSchemaUri.

Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema:

{ "version": "1.0", "recordAnnotationFieldName": "F1", "recordWeightFieldName": "F2", "targetFieldName": "F3", "dataFormat": "CSV", "dataFileContainsHeader": true, "attributes": [ { "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ], "excludedVariableNames": [ "F6" ] }

A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource. There are multiple parameters that control what data is used to create a datasource:

* percentBegin - Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
* percentEnd - Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.
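Since the DataSchema parameter is passed as a JSON string, it is often easiest to build it as a structured value and serialize it. The sketch below reproduces the documented example schema (field names F1 through F8 are taken from the documentation); it is illustrative only and does not call any AWS API.

```python
import json

# The example DataSchema from the documentation, built as a Python dict
# and then serialized, since the DataSchema parameter is a JSON *string*.
schema = {
    "version": "1.0",
    "recordAnnotationFieldName": "F1",
    "recordWeightFieldName": "F2",
    "targetFieldName": "F3",
    "dataFormat": "CSV",
    "dataFileContainsHeader": True,
    "attributes": [
        {"fieldName": "F1", "fieldType": "TEXT"},
        {"fieldName": "F2", "fieldType": "NUMERIC"},
        {"fieldName": "F3", "fieldType": "CATEGORICAL"},
        {"fieldName": "F4", "fieldType": "NUMERIC"},
        {"fieldName": "F5", "fieldType": "CATEGORICAL"},
        {"fieldName": "F6", "fieldType": "TEXT"},
        {"fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE"},
        {"fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE"},
    ],
    # Excluded variables are listed by field name.
    "excludedVariableNames": ["F6"],
}

data_schema = json.dumps(schema)
```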
* complement - The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter. For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

* strategy - To change how Amazon ML splits the data for a datasource, use the strategy parameter. The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data. The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string).
If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records. The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.

The query that is used to retrieve the observation data for the DataSource.

The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.

The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.

The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy operation from Amazon RDS to an Amazon S3 task. For more information, see Role templates for data pipelines: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html
The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html

The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.

The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy operation from Amazon RDS to an Amazon S3 task.

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The ID of an RDS DB instance.
* Undocumented member.

The ID of an RDS DB instance.

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Undocumented member.
* Undocumented member.

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The SQL query that is supplied during CreateDataSourceFromRDS. Returned only if Verbose is true in GetDataSourceInput.
* The ID of the Data Pipeline instance that is used to carry out the copy of data from Amazon RDS to Amazon S3. You can use the ID to find details about the instance in the Data Pipeline console.
* The database details required to connect to an Amazon RDS database.
* Undocumented member.
* The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3.
For more information, see Role templates for data pipelines: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html

* The role (DataPipelineDefaultRole) assumed by the Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html

The SQL query that is supplied during CreateDataSourceFromRDS. Returned only if Verbose is true in GetDataSourceInput.

The ID of the Data Pipeline instance that is used to carry out the copy of data from Amazon RDS to Amazon S3. You can use the ID to find details about the instance in the Data Pipeline console.

The database details required to connect to an Amazon RDS database.

The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html

The role (DataPipelineDefaultRole) assumed by the Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The time that the request to create the real-time endpoint for the MLModel was received. The time is expressed in epoch time.
* The URI that specifies where to send real-time prediction requests for the MLModel.
* The current status of the real-time endpoint for the MLModel. This element can have one of the following values: NONE - Endpoint does not exist or was previously deleted.
READY - Endpoint is ready to be used for real-time predictions. UPDATING - Updating/creating the endpoint.
* The maximum processing rate for the real-time endpoint for the MLModel, measured in incoming requests per second.

The time that the request to create the real-time endpoint for the MLModel was received. The time is expressed in epoch time.

The URI that specifies where to send real-time prediction requests for the MLModel.

The current status of the real-time endpoint for the MLModel. This element can have one of the following values: NONE - Endpoint does not exist or was previously deleted. READY - Endpoint is ready to be used for real-time predictions. UPDATING - Updating/creating the endpoint.

The maximum processing rate for the real-time endpoint for the MLModel, measured in incoming requests per second.

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Describes the schema location for an Amazon Redshift DataSource.
* A JSON string that represents the schema for an Amazon Redshift DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource. A DataSchema is not required if you specify a DataSchemaUri. Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema:
{ "version": "1.0", "recordAnnotationFieldName": "F1", "recordWeightFieldName": "F2", "targetFieldName": "F3", "dataFormat": "CSV", "dataFileContainsHeader": true, "attributes": [ { "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ], "excludedVariableNames": [ "F6" ] }

* A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource. The DataRearrangement parameter for an Amazon Redshift DataSource takes the same form described earlier for the Amazon RDS DataSource: percentBegin and percentEnd select the range of data, complement selects the data outside that range, and strategy chooses between the default sequential split and a seeded random split.
* Describes the DatabaseName and ClusterIdentifier for an Amazon Redshift DataSource.
* Describes the SQL query to execute on an Amazon Redshift database for an Amazon Redshift DataSource.
* Describes the AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.
* Describes an Amazon S3 location to store the result set of the SelectSqlQuery query.

Describes the schema location for an Amazon Redshift DataSource.

A JSON string that represents the schema for an Amazon Redshift DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource. A DataSchema is not required if you specify a DataSchemaUri. It uses the same key-value format shown above for the Amazon RDS DataSchema.

A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource, with the same parameters (percentBegin, percentEnd, complement, strategy) described above for the Amazon RDS DataSource.

Describes the DatabaseName and ClusterIdentifier for an Amazon Redshift DataSource.
Describes the SQL query to execute on an Amazon Redshift database for an Amazon Redshift DataSource.

Describes the AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

Describes an Amazon S3 location to store the result set of the SelectSqlQuery query.

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Undocumented member.
* Undocumented member.

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Undocumented member.
* Undocumented member.

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The SQL query that is specified during CreateDataSourceFromRedshift. Returned only if Verbose is true in GetDataSourceInput.
* Undocumented member.
* Undocumented member.

The SQL query that is specified during CreateDataSourceFromRedshift. Returned only if Verbose is true in GetDataSourceInput.

Creates a value with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* A JSON string that represents the schema for an Amazon S3 DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource. You must provide either the DataSchema or the DataSchemaLocationS3. Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value.
Use the following format to define your DataSchema:

{ "version": "1.0", "recordAnnotationFieldName": "F1", "recordWeightFieldName": "F2", "targetFieldName": "F3", "dataFormat": "CSV", "dataFileContainsHeader": true, "attributes": [ { "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ], "excludedVariableNames": [ "F6" ] }

* Describes the schema location in Amazon S3. You must provide either the DataSchema or the DataSchemaLocationS3.
* A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource. The DataRearrangement parameter for an Amazon S3 DataSource takes the same form described earlier for the Amazon RDS DataSource: percentBegin and percentEnd select the range of data, complement selects the data outside that range, and strategy chooses between the default sequential split and a seeded random split.
* The location of the data file(s) used by a DataSource. The URI specifies a data file or an Amazon Simple Storage Service (Amazon S3) directory or bucket containing data files.

A JSON string that represents the schema for an Amazon S3 DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource. You must provide either the DataSchema or the DataSchemaLocationS3. It uses the same key-value format shown above.

Describes the schema location in Amazon S3. You must provide either the DataSchema or the DataSchemaLocationS3.

A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource, with the same parameters (percentBegin, percentEnd, complement, strategy) described above for the Amazon RDS DataSource.

The location of the data file(s) used by a DataSource. The URI specifies a data file or an Amazon Simple Storage Service (Amazon S3) directory or bucket containing data files.
amazonka-ml: Creates a value of Tag with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* An optional string, typically used to describe or define the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

* A unique identifier for the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

amazonka-ml: An optional string, typically used to describe or define the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

amazonka-ml: A unique identifier for the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

(c) 2013-2018 Brendan Hay. Mozilla Public License, v. 2.0. Maintainer: Brendan Hay <brendan.g.hay+amazonka@gmail.com>. Stability: auto-generated. Portability: non-portable (GHC extensions).

amazonka-ml: API version 2014-12-12 of the Amazon Machine Learning SDK configuration.

amazonka-ml: Prism for InvalidTagException errors.

amazonka-ml: An error on the server occurred when trying to process a request.

amazonka-ml: An error on the client occurred. Typically, the cause is an invalid input value.

amazonka-ml: A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

amazonka-ml: Prism for TagLimitExceededException errors.

amazonka-ml: The exception is thrown when a predict request is made to an unmounted MLModel.

amazonka-ml: A specified resource cannot be located.
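The tag key/value character rule documented above can be sketched as a simple client-side validator. This is a hypothetical illustration (the `is_valid_tag_part` helper is invented), not amazonka-ml or AWS service code; the actual service performs its own validation.

```python
import re

# Hypothetical validator for the documented tag character set:
# Unicode letters, digits, white space, and _ . / = + - % @
# (not amazonka-ml or AWS service code).
_TAG_PART = re.compile(r"^[\w\s./=+\-%@]+$")  # \w covers letters, digits, _

def is_valid_tag_part(s: str) -> bool:
    """True if s is non-empty and every character is in the allowed set."""
    return bool(s) and _TAG_PART.match(s) is not None

print(is_valid_tag_part("team=ml"))          # True
print(is_valid_tag_part("team:ml"))          # False - ':' is not allowed
print(is_valid_tag_part("cost center 10%"))  # True
```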
amazonka-ml: The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource.

amazonka-ml: See: smart constructor.

amazonka-ml: See: smart constructor.

amazonka-ml: Creates a value of the request with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* A unique identifier of the MLModel.

* Undocumented member.

* Undocumented member.

amazonka-ml: A unique identifier of the MLModel.

amazonka-ml: Undocumented member.

amazonka-ml: Undocumented member.

amazonka-ml: Creates a value of the response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Undocumented member.

* The response status code.

amazonka-ml: Undocumented member.

amazonka-ml: The response status code.

amazonka-ml: Represents the output of a GetMLModel operation, and provides detailed information about an MLModel. See: smart constructor.

amazonka-ml: See: smart constructor.

amazonka-ml: Creates a value of GetMLModel with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Specifies whether the GetMLModel operation should return Recipe. If true, Recipe is returned. If false, Recipe is not returned.

* The ID assigned to the MLModel at creation.

amazonka-ml: Specifies whether the GetMLModel operation should return Recipe. If true, Recipe is returned.
If false, Recipe is not returned.

amazonka-ml: The ID assigned to the MLModel at creation.

amazonka-ml: Creates a value of GetMLModelResponse with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The current status of the MLModel. This element can have one of the following values:

    * PENDING - Amazon Machine Learning (Amazon ML) submitted a request to describe an MLModel.
    * INPROGRESS - The request is processing.
    * FAILED - The request did not run to completion. The ML model isn't usable.
    * COMPLETED - The request completed successfully.
    * DELETED - The MLModel is marked as deleted. It isn't usable.

* The time of the most recent edit to the MLModel. The time is expressed in epoch time.

* A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs. The following is the current set of training parameters:

    * sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance. The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.
    * sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.
    * sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.
    * sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE.
      The default is to not use L1 normalization. This parameter can't be used when L2 is specified. Use this parameter sparingly.
    * sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can't be used when L1 is specified. Use this parameter sparingly.

* The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.

* The time that the MLModel was created. The time is expressed in epoch time.

* The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the MLModel, normalized and scaled on computation resources. ComputeTime is only available if the MLModel is in the COMPLETED state.

* The recipe to use when training the MLModel. The Recipe provides detailed information about the observation data to use during training, and manipulations to perform on the observation data during training.

* The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

* The MLModel ID, which is the same as the MLModelId in the request.

* Undocumented member.

* The schema used by all of the data files referenced by the DataSource.

* The epoch time when Amazon Machine Learning marked the MLModel as INPROGRESS. StartedAt isn't available if the MLModel is in the PENDING state.

* The scoring threshold is used in binary classification MLModel models. It marks the boundary between a positive prediction and a negative prediction. Output values greater than or equal to the threshold receive a positive result from the MLModel, such as true.
  Output values less than the threshold receive a negative response from the MLModel, such as false.

* The epoch time when Amazon Machine Learning marked the MLModel as COMPLETED or FAILED. FinishedAt is only available when the MLModel is in the COMPLETED or FAILED state.

* The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

* A user-supplied name or description of the MLModel.

* A link to the file that contains logs of the CreateMLModel operation.

* The current endpoint of the MLModel.

* The ID of the training DataSource.

* A description of the most recent details about accessing the MLModel.

* Identifies the MLModel category. The following are the available types:

    * REGRESSION - Produces a numeric result. For example, "What price should a house be listed at?"
    * BINARY - Produces one of two possible results. For example, "Is this an e-commerce website?"
    * MULTICLASS - Produces one of several possible results. For example, "Is this a HIGH, LOW or MEDIUM risk trade?"

* The response status code.

amazonka-ml: The current status of the MLModel. This element can have one of the following values:

    * PENDING - Amazon Machine Learning (Amazon ML) submitted a request to describe an MLModel.
    * INPROGRESS - The request is processing.
    * FAILED - The request did not run to completion. The ML model isn't usable.
    * COMPLETED - The request completed successfully.
    * DELETED - The MLModel is marked as deleted. It isn't usable.

amazonka-ml: The time of the most recent edit to the MLModel. The time is expressed in epoch time.

amazonka-ml: A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs. The following is the current set of training parameters:

    * sgd.maxMLModelSizeInBytes - The maximum allowed size of the model.
      Depending on the input data, the size of the model might affect its performance. The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.
    * sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.
    * sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.
    * sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can't be used when L2 is specified. Use this parameter sparingly.
    * sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can't be used when L1 is specified. Use this parameter sparingly.

amazonka-ml: The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.

amazonka-ml: The time that the MLModel was created. The time is expressed in epoch time.

amazonka-ml: The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the MLModel, normalized and scaled on computation resources.
ComputeTime is only available if the MLModel is in the COMPLETED state.

amazonka-ml: The recipe to use when training the MLModel. The Recipe provides detailed information about the observation data to use during training, and manipulations to perform on the observation data during training.

amazonka-ml: The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

amazonka-ml: The MLModel ID, which is the same as the MLModelId in the request.

amazonka-ml: Undocumented member.

amazonka-ml: The schema used by all of the data files referenced by the DataSource.

amazonka-ml: The epoch time when Amazon Machine Learning marked the MLModel as INPROGRESS. StartedAt isn't available if the MLModel is in the PENDING state.

amazonka-ml: The scoring threshold is used in binary classification MLModel models. It marks the boundary between a positive prediction and a negative prediction. Output values greater than or equal to the threshold receive a positive result from the MLModel, such as true. Output values less than the threshold receive a negative response from the MLModel, such as false.

amazonka-ml: The epoch time when Amazon Machine Learning marked the MLModel as COMPLETED or FAILED. FinishedAt is only available when the MLModel is in the COMPLETED or FAILED state.

amazonka-ml: The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

amazonka-ml: A user-supplied name or description of the MLModel.

amazonka-ml: A link to the file that contains logs of the CreateMLModel operation.

amazonka-ml: The current endpoint of the MLModel.

amazonka-ml: The ID of the training DataSource.

amazonka-ml: A description of the most recent details about accessing the MLModel.

amazonka-ml: Identifies the MLModel category. The following are the available types:

    * REGRESSION - Produces a numeric result.
For example, "What price should a house be listed at?" * BINARY -- Produces one of two possible results. For example, "Is this an e-commerce website?" * MULTICLASS -- Produces one of several possible results. For example, "Is this a HIGH, LOW or MEDIUM risk trade?" amazonka-ml- | The response status code. amazonka-ml amazonka-ml          (c) 2013-2018 Brendan HayMozilla Public License, v. 2.0..Brendan Hay <brendan.g.hay+amazonka@gmail.com>auto-generatednon-portable (GHC extensions)None "#27HV) amazonka-mlRepresents the output of a  GetEvaluation operation and describes an  Evaluation .See: - smart constructor.* amazonka-mlSee: + smart constructor.+ amazonka-mlCreates a value of *4 with the minimum fields required to make a request.BUse one of the following lenses to modify other fields as desired:, - The ID of the  Evaluation% to retrieve. The evaluation of each MLModelP is recorded and cataloged. The ID provides the means to access the information., amazonka-mlThe ID of the  Evaluation% to retrieve. The evaluation of each MLModelP is recorded and cataloged. The ID provides the means to access the information.- amazonka-mlCreates a value of )4 with the minimum fields required to make a request.BUse one of the following lenses to modify other fields as desired:.Z - The status of the evaluation. This element can have one of the following values: * PENDINGJ - Amazon Machine Language (Amazon ML) submitted a request to evaluate an MLModel . *  INPROGRESS$ - The evaluation is underway. * FAILED - The request to evaluate an MLModel3 did not run to completion. It is not usable. *  COMPLETED7 - The evaluation process completed successfully. * DELETED - The  Evaluation( is marked as deleted. It is not usable./ - Measurements of how well the MLModel0 performed using observations referenced by the  DataSourceD . 
  One of the following metrics is returned, based on the type of the MLModel:

    * BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
    * RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
    * MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

  For more information about performance metrics, please see the Amazon Machine Learning Developer Guide (http://docs.aws.amazon.com/machine-learning/latest/dg).

* The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

* The time that the Evaluation was created. The time is expressed in epoch time.

* The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources. ComputeTime is only available if the Evaluation is in the COMPLETED state.

* The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

* The ID of the MLModel that was the focus of the evaluation.

* The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn't available if the Evaluation is in the PENDING state.

* The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.

* The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
* A user-supplied name or description of the Evaluation.

* A link to the file that contains logs of the CreateEvaluation operation.

* The evaluation ID, which is the same as the EvaluationId in the request.

* A description of the most recent details about evaluating the MLModel.

* The DataSource used for this evaluation.

* The response status code.

amazonka-ml: The status of the evaluation. This element can have one of the following values:

    * PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
    * INPROGRESS - The evaluation is underway.
    * FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
    * COMPLETED - The evaluation process completed successfully.
    * DELETED - The Evaluation is marked as deleted. It is not usable.

amazonka-ml: Measurements of how well the MLModel performed using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

    * BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
    * RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
    * MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide (http://docs.aws.amazon.com/machine-learning/latest/dg).

amazonka-ml: The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

amazonka-ml: The time that the Evaluation was created. The time is expressed in epoch time.

amazonka-ml: The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources.
ComputeTime is only available if the Evaluation is in the COMPLETED state.

amazonka-ml: The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

amazonka-ml: The ID of the MLModel that was the focus of the evaluation.

amazonka-ml: The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn't available if the Evaluation is in the PENDING state.

amazonka-ml: The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.

amazonka-ml: The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

amazonka-ml: A user-supplied name or description of the Evaluation.

amazonka-ml: A link to the file that contains logs of the CreateEvaluation operation.

amazonka-ml: The evaluation ID, which is the same as the EvaluationId in the request.

amazonka-ml: A description of the most recent details about evaluating the MLModel.

amazonka-ml: The DataSource used for this evaluation.

amazonka-ml: The response status code.

amazonka-ml: Represents the output of a GetDataSource operation and describes a DataSource. See: smart constructor.

amazonka-ml: See: smart constructor.

amazonka-ml: Creates a value of GetDataSource with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Specifies whether the GetDataSource operation should return DataSourceSchema. If true, DataSourceSchema is returned.
  If false, DataSourceSchema is not returned.

* The ID assigned to the DataSource at creation.

amazonka-ml: Specifies whether the GetDataSource operation should return DataSourceSchema. If true, DataSourceSchema is returned. If false, DataSourceSchema is not returned.

amazonka-ml: The ID assigned to the DataSource at creation.

amazonka-ml: Creates a value of GetDataSourceResponse with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The current status of the DataSource. This element can have one of the following values:

    * PENDING - Amazon ML submitted a request to create a DataSource.
    * INPROGRESS - The creation process is underway.
    * FAILED - The request to create a DataSource did not run to completion. It is not usable.
    * COMPLETED - The creation process completed successfully.
    * DELETED - The DataSource is marked as deleted. It is not usable.

* The number of data files referenced by the DataSource.

* The time of the most recent edit to the DataSource. The time is expressed in epoch time.

* The time that the DataSource was created. The time is expressed in epoch time.

* The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the DataSource, normalized and scaled on computation resources. ComputeTime is only available if the DataSource is in the COMPLETED state and ComputeStatistics is set to true.

* The ID assigned to the DataSource at creation. This value should be identical to the value of the DataSourceId in the request.

* Undocumented member.

* The total size of observations in the data files.

* The schema used by all of the data files of this DataSource.

* The epoch time when Amazon Machine Learning marked the DataSource as INPROGRESS. StartedAt isn't available if the DataSource is in the PENDING state.

* The epoch time when Amazon Machine Learning marked the DataSource as COMPLETED or FAILED.
  FinishedAt is only available when the DataSource is in the COMPLETED or FAILED state.

* The AWS user account from which the DataSource was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

* A user-supplied name or description of the DataSource.

* A link to the file containing logs of CreateDataSourceFrom* operations.

* The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

* The parameter is true if statistics need to be generated from the observation data.

* The user-supplied description of the most recent details about creating the DataSource.

* Undocumented member.

* A JSON string that represents the splitting and rearrangement requirement used when this DataSource was created.

* Undocumented member.

* The response status code.

amazonka-ml: The current status of the DataSource. This element can have one of the following values:

    * PENDING - Amazon ML submitted a request to create a DataSource.
    * INPROGRESS - The creation process is underway.
    * FAILED - The request to create a DataSource did not run to completion. It is not usable.
    * COMPLETED - The creation process completed successfully.
    * DELETED - The DataSource is marked as deleted. It is not usable.

amazonka-ml: The number of data files referenced by the DataSource.

amazonka-ml: The time of the most recent edit to the DataSource. The time is expressed in epoch time.

amazonka-ml: The time that the DataSource was created. The time is expressed in epoch time.

amazonka-ml: The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the DataSource, normalized and scaled on computation resources. ComputeTime is only available if the DataSource is in the COMPLETED state and ComputeStatistics is set to true.

amazonka-ml: The ID assigned to the DataSource at creation.
This value should be identical to the value of the DataSourceId in the request.

amazonka-ml: Undocumented member.

amazonka-ml: The total size of observations in the data files.

amazonka-ml: The schema used by all of the data files of this DataSource.

amazonka-ml: The epoch time when Amazon Machine Learning marked the DataSource as INPROGRESS. StartedAt isn't available if the DataSource is in the PENDING state.

amazonka-ml: The epoch time when Amazon Machine Learning marked the DataSource as COMPLETED or FAILED. FinishedAt is only available when the DataSource is in the COMPLETED or FAILED state.

amazonka-ml: The AWS user account from which the DataSource was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

amazonka-ml: A user-supplied name or description of the DataSource.

amazonka-ml: A link to the file containing logs of CreateDataSourceFrom* operations.

amazonka-ml: The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

amazonka-ml: The parameter is true if statistics need to be generated from the observation data.

amazonka-ml: The user-supplied description of the most recent details about creating the DataSource.

amazonka-ml: Undocumented member.

amazonka-ml: A JSON string that represents the splitting and rearrangement requirement used when this DataSource was created.

amazonka-ml: Undocumented member.

amazonka-ml: The response status code.

amazonka-ml: Represents the output of a GetBatchPrediction operation and describes a BatchPrediction. See: smart constructor.

amazonka-ml: See: smart constructor.
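The status lifecycle and ComputeTime availability rules repeated above for DataSource (and, with small variations, for MLModel, Evaluation, and BatchPrediction) can be sketched as follows. This is a hypothetical client-side illustration of the documented rules, with invented helper names; it is not amazonka-ml or AWS service code.

```python
# Hypothetical sketch of the documented DataSource status lifecycle and
# the ComputeTime availability rule (not amazonka-ml or AWS code).
DATA_SOURCE_STATUSES = ("PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED")

def is_usable(status: str) -> bool:
    """Per the docs, FAILED and DELETED objects are not usable; only a
    COMPLETED DataSource is ready for use."""
    return status == "COMPLETED"

def compute_time_available(status: str, compute_statistics: bool) -> bool:
    """ComputeTime is only reported for a COMPLETED DataSource that was
    created with ComputeStatistics set to true."""
    return status == "COMPLETED" and compute_statistics

print(compute_time_available("COMPLETED", True))   # True
print(compute_time_available("INPROGRESS", True))  # False
print(is_usable("FAILED"))                         # False
```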
amazonka-ml: Creates a value of GetBatchPrediction with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* An ID assigned to the BatchPrediction at creation.

amazonka-ml: An ID assigned to the BatchPrediction at creation.

amazonka-ml: Creates a value of GetBatchPredictionResponse with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The status of the BatchPrediction, which can be one of the following values:

    * PENDING - Amazon Machine Learning (Amazon ML) submitted a request to generate batch predictions.
    * INPROGRESS - The batch predictions are in progress.
    * FAILED - The request to perform a batch prediction did not run to completion. It is not usable.
    * COMPLETED - The batch prediction process completed successfully.
    * DELETED - The BatchPrediction is marked as deleted. It is not usable.

* The time of the most recent edit to the BatchPrediction. The time is expressed in epoch time.

* The time when the BatchPrediction was created. The time is expressed in epoch time.

* The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the BatchPrediction, normalized and scaled on computation resources. ComputeTime is only available if the BatchPrediction is in the COMPLETED state.

* The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

* The ID of the MLModel that generated predictions for the BatchPrediction request.

* The ID of the DataSource that was used to create the BatchPrediction.

* The number of total records that Amazon Machine Learning saw while processing the BatchPrediction.

* The epoch time when Amazon Machine Learning marked the BatchPrediction as INPROGRESS. StartedAt isn't available if the BatchPrediction is in the PENDING state.

* An ID assigned to the BatchPrediction at creation.
  This value should be identical to the value of the BatchPredictionID in the request.

* The epoch time when Amazon Machine Learning marked the BatchPrediction as COMPLETED or FAILED. FinishedAt is only available when the BatchPrediction is in the COMPLETED or FAILED state.

* The number of invalid records that Amazon Machine Learning saw while processing the BatchPrediction.

* The AWS user account that invoked the BatchPrediction. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

* A user-supplied name or description of the BatchPrediction.

* A link to the file that contains logs of the CreateBatchPrediction operation.

* A description of the most recent details about processing the batch prediction request.

* The location of an Amazon S3 bucket or directory to receive the operation results.

* The response status code.

amazonka-ml: The status of the BatchPrediction, which can be one of the following values:

    * PENDING - Amazon Machine Learning (Amazon ML) submitted a request to generate batch predictions.
    * INPROGRESS - The batch predictions are in progress.
    * FAILED - The request to perform a batch prediction did not run to completion. It is not usable.
    * COMPLETED - The batch prediction process completed successfully.
    * DELETED - The BatchPrediction is marked as deleted. It is not usable.

amazonka-ml: The time of the most recent edit to the BatchPrediction. The time is expressed in epoch time.

amazonka-ml: The time when the BatchPrediction was created. The time is expressed in epoch time.

amazonka-ml: The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the BatchPrediction, normalized and scaled on computation resources. ComputeTime is only available if the BatchPrediction is in the COMPLETED state.

amazonka-ml: The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).
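The batch-prediction record counters above (TotalRecordCount, the number of records seen, and InvalidRecordCount, the number that could not be processed) can be combined as below. The `valid_record_count` helper and the subtraction are my illustrative assumptions, not a documented field of the response.

```python
# Hypothetical summary over the documented batch-prediction counters
# (not amazonka-ml or AWS code): the successfully processed records are
# assumed here to be the total minus the invalid ones.

def valid_record_count(total_record_count: int, invalid_record_count: int) -> int:
    """Records Amazon ML saw that were not flagged invalid."""
    if not 0 <= invalid_record_count <= total_record_count:
        raise ValueError("invalid count must be between 0 and the total count")
    return total_record_count - invalid_record_count

print(valid_record_count(1000, 25))  # 975
```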
amazonka-ml: The ID of the MLModel that generated predictions for the BatchPrediction request.

amazonka-ml: The ID of the DataSource that was used to create the BatchPrediction.

amazonka-ml: The number of total records that Amazon Machine Learning saw while processing the BatchPrediction.

amazonka-ml: The epoch time when Amazon Machine Learning marked the BatchPrediction as INPROGRESS. StartedAt isn't available if the BatchPrediction is in the PENDING state.

amazonka-ml: An ID assigned to the BatchPrediction at creation. This value should be identical to the value of the BatchPredictionID in the request.

amazonka-ml: The epoch time when Amazon Machine Learning marked the BatchPrediction as COMPLETED or FAILED. FinishedAt is only available when the BatchPrediction is in the COMPLETED or FAILED state.

amazonka-ml: The number of invalid records that Amazon Machine Learning saw while processing the BatchPrediction.

amazonka-ml: The AWS user account that invoked the BatchPrediction. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

amazonka-ml: A user-supplied name or description of the BatchPrediction.

amazonka-ml: A link to the file that contains logs of the CreateBatchPrediction operation.

amazonka-ml: A description of the most recent details about processing the batch prediction request.

amazonka-ml: The location of an Amazon S3 bucket or directory to receive the operation results.

amazonka-ml: The response status code.

amazonka-ml: Amazon ML returns the following elements. See: smart constructor.

amazonka-ml: See: smart constructor.

amazonka-ml: Creates a value of the request with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The ID of the ML object.
For example, exampleModelId.
* ResourceType - The type of the ML object.

Creates a value of the DescribeTags response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* ResourceId - The ID of the tagged ML object.
* ResourceType - The type of the tagged ML object.
* Tags - A list of tags associated with the ML object.
* The response status code.

DescribeMLModels

Represents the output of a DescribeMLModels operation. The content is essentially a list of MLModel.

Creates a value of DescribeMLModels with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* EQ - The equal to operator. The MLModel results will have FilterVariable values that exactly match the value specified with EQ.
* GE - The greater than or equal to operator. The MLModel results will have FilterVariable values that are greater than or equal to the value specified with GE.
* Prefix - A string that is found at the beginning of a variable, such as Name or Id. For example, an MLModel could have the Name 2014-09-09-HolidayGiftMailer. To search for this MLModel, select Name for the FilterVariable and any of the following strings for the Prefix: 2014-09, 2014-09-09, 2014-09-09-Holiday.
* GT - The greater than operator. The MLModel results will have FilterVariable values that are greater than the value specified with GT.
* NE - The not equal to operator. The MLModel results will have FilterVariable values not equal to the value specified with NE.
* NextToken - The ID of the page in the paginated results.
* SortOrder - A two-value parameter that determines the sequence of the resulting list of MLModel. asc arranges the list in ascending order (A-Z, 0-9); dsc arranges the list in descending order (Z-A, 9-0). Results are sorted by FilterVariable.
* Limit - The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.
* LT - The less than operator. The MLModel results will have FilterVariable values that are less than the value specified with LT.
* FilterVariable - Use one of the following variables to filter a list of MLModel: CreatedAt sets the search criteria to MLModel creation date; Status sets it to MLModel status; Name sets it to the contents of MLModel Name; IAMUser sets it to the user account that invoked the MLModel creation; TrainingDataSourceId sets it to the DataSource used to train one or more MLModel; RealtimeEndpointStatus sets it to the MLModel real-time endpoint status; MLModelType sets it to MLModel type (binary, regression, or multi-class); Algorithm sets it to the algorithm that the MLModel uses; TrainingDataURI sets it to the data file(s) used in training an MLModel, where the URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
* LE - The less than or equal to operator. The MLModel results will have FilterVariable values that are less than or equal to the value specified with LE.

Creates a value of the DescribeMLModels response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Results - A list of MLModel that meet the search criteria.
* NextToken - The ID of the next page in the paginated results that indicates at least one more page follows.
* The response status code.

DescribeEvaluations

Represents the query results from a DescribeEvaluations operation. The content is essentially a list of Evaluation.

Creates a value of DescribeEvaluations with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* EQ, GE, Prefix, GT, NE, LT, LE - The comparison operators, with the same semantics as in DescribeMLModels, applied to Evaluation results.
* NextToken - The ID of the page in the paginated results.
* SortOrder - A two-value parameter (asc or dsc) that determines the sequence of the resulting list of Evaluation. Results are sorted by FilterVariable.
* Limit - The maximum number of Evaluation to include in the result.
* FilterVariable - Use one of the following variables to filter a list of Evaluation objects: CreatedAt sets the search criteria to the Evaluation creation date; Status sets it to the Evaluation status; Name sets it to the contents of Evaluation Name; IAMUser sets it to the user account that invoked an Evaluation; MLModelId sets it to the MLModel that was evaluated; DataSourceId sets it to the DataSource used in Evaluation; DataUri sets it to the data file(s) used in Evaluation, where the URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

Creates a value of the DescribeEvaluations response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Results - A list of Evaluation that meet the search criteria.
* NextToken - The ID of the next page in the paginated results that indicates at least one more page follows.
* The response status code.

DescribeDataSources

Represents the query results from a DescribeDataSources operation. The content is essentially a list of DataSource.

Creates a value of DescribeDataSources with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* EQ, GE, Prefix, GT, NE, LT, LE - The comparison operators, with the same semantics as in DescribeMLModels, applied to DataSource results.
* NextToken - The ID of the page in the paginated results.
* SortOrder - A two-value parameter (asc or dsc) that determines the sequence of the resulting list of DataSource. Results are sorted by FilterVariable.
* Limit - The maximum number of DataSource to include in the result.
* FilterVariable - Use one of the following variables to filter a list of DataSource: CreatedAt sets the search criteria to DataSource creation dates; Status sets it to DataSource statuses; Name sets it to the contents of DataSource Name; DataUri sets it to the URI of data files used to create the DataSource, where the URI can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory; IAMUser sets it to the user account that invoked the DataSource creation.
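The Prefix and SortOrder semantics above are shared by all of the Describe* operations. A client-side sketch of what the service does with them — purely illustrative, not how the service is implemented:

```haskell
import Data.List (isPrefixOf, sortOn)
import Data.Ord (Down (..))

data SortOrder = Asc | Dsc

-- | Keep only names beginning with the given prefix, then order them.
-- Mirrors, client-side and for illustration only, the Prefix and
-- SortOrder parameters described above.
filterByPrefix :: SortOrder -> String -> [String] -> [String]
filterByPrefix order p names = sort' (filter (p `isPrefixOf`) names)
  where
    sort' = case order of
      Asc -> sortOn id          -- ascending (A-Z, 0-9)
      Dsc -> sortOn Down        -- descending (Z-A, 9-0)
```

For example, a prefix of "2014-09" matches the Name 2014-09-09-HolidayGiftMailer, just as any of 2014-09, 2014-09-09, or 2014-09-09-Holiday would.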
Creates a value of the DescribeDataSources response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Results - A list of DataSource that meet the search criteria.
* NextToken - An ID of the next page in the paginated results that indicates at least one more page follows.
* The response status code.

DescribeBatchPredictions

Represents the output of a DescribeBatchPredictions operation. The content is essentially a list of BatchPredictions.

Creates a value of DescribeBatchPredictions with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* EQ, GE, GT, NE, LT, LE - The comparison operators, with the same semantics as in DescribeMLModels, applied to BatchPrediction results.
* Prefix - A string that is found at the beginning of a variable, such as Name or Id. For example, a batch prediction operation could have the Name 2014-09-09-HolidayGiftMailer. To search for this BatchPrediction, select Name for the FilterVariable and any of the following strings for the Prefix: 2014-09, 2014-09-09, 2014-09-09-Holiday.
* NextToken - An ID of the page in the paginated results.
* SortOrder - A two-value parameter (asc or dsc) that determines the sequence of the resulting list. Results are sorted by FilterVariable.
* Limit - The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.
* FilterVariable - Use one of the following variables to filter a list of BatchPrediction: CreatedAt sets the search criteria to the BatchPrediction creation date; Status sets it to the BatchPrediction status; Name sets it to the contents of the BatchPrediction Name; IAMUser sets it to the user account that invoked the BatchPrediction creation; MLModelId sets it to the MLModel used in the BatchPrediction; DataSourceId sets it to the DataSource used in the BatchPrediction; DataURI sets it to the data file(s) used in the BatchPrediction, where the URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

Creates a value of the DescribeBatchPredictions response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* Results - A list of BatchPrediction objects that meet the search criteria.
* NextToken - The ID of the next page in the paginated results that indicates at least one more page follows.
* The response status code.

DeleteTags

Amazon ML returns the following elements.

Creates a value of DeleteTags with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* TagKeys - One or more tags to delete.
* ResourceId - The ID of the tagged ML object. For example, exampleModelId.
* ResourceType - The type of the tagged ML object.

Creates a value of the DeleteTags response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* ResourceId - The ID of the ML object from which tags were deleted.
* ResourceType - The type of the ML object from which tags were deleted.
* The response status code.
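Each Describe* response above carries a NextToken that indicates at least one more page follows. A generic sketch of draining such paginated results — the page-fetching function is a stand-in for an API call, not an amazonka function:

```haskell
-- | Hypothetical pagination sketch. The fetch action takes an optional
-- page token and returns one page of results plus the NextToken, if
-- any; collectAll keeps requesting until NextToken is absent.
collectAll :: Monad m => (Maybe String -> m ([a], Maybe String)) -> m [a]
collectAll fetchPage = go Nothing
  where
    go token = do
      (results, next) <- fetchPage token
      case next of
        Nothing -> pure results              -- last page: no more follows
        Just t  -> (results ++) <$> go (Just t)
```

The same shape works for DescribeMLModels, DescribeEvaluations, DescribeDataSources, and DescribeBatchPredictions, since all four use the Results/NextToken pair.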
DeleteRealtimeEndpoint

Represents the output of a DeleteRealtimeEndpoint operation. The result contains the MLModelId and the endpoint information for the MLModel.

Creates a value of DeleteRealtimeEndpoint with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* MLModelId - The ID assigned to the MLModel during creation.

Creates a value of the DeleteRealtimeEndpoint response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* RealtimeEndpointInfo - The endpoint information of the MLModel.
* MLModelId - A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.
* The response status code.

DeleteMLModel

Represents the output of a DeleteMLModel operation. You can use the GetMLModel operation and check the value of the Status parameter to see whether an MLModel is marked as DELETED.

Creates a value of DeleteMLModel with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* MLModelId - A user-supplied ID that uniquely identifies the MLModel.

Creates a value of the DeleteMLModel response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* MLModelId - A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.
* The response status code.

DeleteEvaluation

Represents the output of a DeleteEvaluation operation. The output indicates that Amazon Machine Learning (Amazon ML) received the request. You can use the GetEvaluation operation and check the value of the Status parameter to see whether an Evaluation is marked as DELETED.

Creates a value of DeleteEvaluation with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* EvaluationId - A user-supplied ID that uniquely identifies the Evaluation to delete.

Creates a value of the DeleteEvaluation response with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* EvaluationId - A user-supplied ID that uniquely identifies the Evaluation. This value should be identical to the value of the EvaluationId in the request.
* The response status code.
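The delete operations above only acknowledge the request; confirming deletion means polling the corresponding Get* operation until Status reads DELETED. A sketch of that loop, where the fetch action is a hypothetical stand-in for a GetMLModel, GetEvaluation, or GetBatchPrediction call:

```haskell
-- Hypothetical names mirroring the status vocabulary described above.
data EntityStatus = Pending | InProgress | Failed | Completed | Deleted
  deriving (Show, Eq)

-- | Poll the supplied status-fetching action up to a bounded number of
-- attempts, returning True once the entity is marked DELETED. A real
-- client would also sleep between attempts.
pollUntilDeleted :: Monad m => Int -> m EntityStatus -> m Bool
pollUntilDeleted attempts fetch
  | attempts <= 0 = pure False
  | otherwise = do
      s <- fetch
      if s == Deleted
        then pure True
        else pollUntilDeleted (attempts - 1) fetch
```

Bounding the attempts keeps a misbehaving poll from looping forever; the caller decides how to handle a False result.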
amazonka-ml0A user-supplied ID that uniquely identifies the  Evaluation6 . This value should be identical to the value of the  EvaluationId in the request. amazonka-ml- | The response status code. amazonka-ml amazonka-ml(c) 2013-2018 Brendan HayMozilla Public License, v. 2.0..Brendan Hay <brendan.g.hay+amazonka@gmail.com>auto-generatednon-portable (GHC extensions)None "#27HV[y amazonka-mlRepresents the output of a DeleteDataSource operation.See:  smart constructor. amazonka-mlSee:  smart constructor. amazonka-mlCreates a value of 4 with the minimum fields required to make a request.BUse one of the following lenses to modify other fields as desired:3 - A user-supplied ID that uniquely identifies the  DataSource . amazonka-ml0A user-supplied ID that uniquely identifies the  DataSource . amazonka-mlCreates a value of 4 with the minimum fields required to make a request.BUse one of the following lenses to modify other fields as desired:3 - A user-supplied ID that uniquely identifies the  DataSource6 . This value should be identical to the value of the  DataSourceID in the request.! - -- | The response status code. amazonka-ml0A user-supplied ID that uniquely identifies the  DataSource6 . This value should be identical to the value of the  DataSourceID in the request. amazonka-ml- | The response status code. amazonka-ml amazonka-ml(c) 2013-2018 Brendan HayMozilla Public License, v. 2.0..Brendan Hay <brendan.g.hay+amazonka@gmail.com>auto-generatednon-portable (GHC extensions)None "#27HVq amazonka-mlRepresents the output of a DeleteBatchPrediction operation.You can use the GetBatchPrediction& operation and check the value of the Status parameter to see whether a BatchPrediction is marked as DELETED .See:  smart constructor. amazonka-mlSee:  smart constructor. 
Creates a value of DeleteBatchPrediction with the minimum fields required to make a request. Use the following lens to modify other fields as desired:

* A user-supplied ID that uniquely identifies the BatchPrediction.

The response fields:

* A user-supplied ID that uniquely identifies the BatchPrediction. This value should be identical to the value of the BatchPredictionID in the request.
* The response status code.

CreateRealtimeEndpoint

Represents the output of a CreateRealtimeEndpoint operation. The result contains the MLModelId and the endpoint information for the MLModel. See: the smart constructor.

Creates a value of CreateRealtimeEndpoint with the minimum fields required to make a request. Use the following lens to modify other fields as desired:

* The ID assigned to the MLModel during creation.

The response fields:

* The endpoint information of the MLModel.
* A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.
* The response status code.

CreateMLModel

Represents the output of a CreateMLModel operation, and is an acknowledgement that Amazon ML received the request. The CreateMLModel operation is asynchronous. You can poll for status updates by using the GetMLModel operation and checking the Status parameter. See: the smart constructor.

Creates a value of CreateMLModel with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The data recipe for creating the MLModel. You must specify either the recipe or its URI. If you don't specify a recipe or its URI, Amazon ML creates a default.
* The Amazon Simple Storage Service (Amazon S3) location and file name that contains the MLModel recipe. You must specify either the recipe or its URI. If you don't specify a recipe or its URI, Amazon ML creates a default.
* A user-supplied name or description of the MLModel.
* A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs. The following is the current set of training parameters:
  * sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance. The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.
  * sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.
  * sgd.shuffleType - Whether Amazon ML shuffles the training data.
Shuffling the data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.
  * sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can't be used when L2 is specified. Use this parameter sparingly.
  * sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can't be used when L1 is specified. Use this parameter sparingly.
* A user-supplied ID that uniquely identifies the MLModel.
* The category of supervised learning that this MLModel will address. Choose from the following types:
  * Choose REGRESSION if the MLModel will be used to predict a numeric value.
  * Choose BINARY if the MLModel result has two possible values.
  * Choose MULTICLASS if the MLModel result has a limited number of values.
  For more information, see the Amazon Machine Learning Developer Guide (http://docs.aws.amazon.com/machine-learning/latest/dg).
* The DataSource that points to the training data.
The response fields:

* A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.
* The response status code.

CreateEvaluation

Represents the output of a CreateEvaluation operation, and is an acknowledgement that Amazon ML received the request. The CreateEvaluation operation is asynchronous.
You can poll for status updates by using the GetEvaluation operation and checking the Status parameter. See: the smart constructor.

Creates a value of CreateEvaluation with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* A user-supplied name or description of the Evaluation.
* A user-supplied ID that uniquely identifies the Evaluation.
* The ID of the MLModel to evaluate. The schema used in creating the MLModel must match the schema of the DataSource used in the Evaluation.
* The ID of the DataSource for the evaluation. The schema of the DataSource must match the schema used to create the MLModel.

The response fields:

* The user-supplied ID that uniquely identifies the Evaluation. This value should be identical to the value of the EvaluationId in the request.
* The response status code.
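Building a CreateEvaluation request from these minimum fields might look like the following. This is a sketch, not a definitive API reference: the argument order of the `createEvaluation` smart constructor and the lens names `ceEvaluationName` and `cersEvaluationId` are assumptions to be checked against the generated haddocks:

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Lens ((&), (?~), (^.))
import Control.Monad.IO.Class (liftIO)
import Network.AWS
import Network.AWS.MachineLearning

main :: IO ()
main = do
  env <- newEnv Discover
  runResourceT . runAWS env . within NorthVirginia $ do
    -- Required fields: EvaluationId, MLModelId, EvaluationDataSourceId
    -- (order assumed); the optional name is set through its lens.
    let req = createEvaluation "ev-example-id" "ml-example-id" "ds-example-id"
                & ceEvaluationName ?~ "nightly evaluation"
    resp <- send req
    -- The response echoes the EvaluationId from the request.
    liftIO (print (resp ^. cersEvaluationId))
```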
CreateDataSourceFromS3

Represents the output of a CreateDataSourceFromS3 operation, and is an acknowledgement that Amazon ML received the request. The CreateDataSourceFromS3 operation is asynchronous. You can poll for updates by using the GetDataSource operation and checking the Status parameter. See: the smart constructor.

Creates a value of CreateDataSourceFromS3 with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* A user-supplied name or description of the DataSource.
* The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.
* A user-supplied identifier that uniquely identifies the DataSource.
* The data specification of a DataSource:
  * DataLocationS3 - The Amazon S3 location of the observation data.
  * DataSchemaLocationS3 - The Amazon S3 location of the DataSchema.
  * DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.
  * DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the Datasource. Sample: "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"
The response fields:

* A user-supplied ID that uniquely identifies the DataSource. This value should be identical to the value of the DataSourceID in the request.
* The response status code.

CreateDataSourceFromRedshift

Represents the output of a CreateDataSourceFromRedshift operation, and is an acknowledgement that Amazon ML received the request. The CreateDataSourceFromRedshift operation is asynchronous. You can poll for updates by using the GetDataSource operation and checking the Status parameter. See: the smart constructor.

Creates a value of CreateDataSourceFromRedshift with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:
* A user-supplied name or description of the DataSource.
* The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.
* A user-supplied ID that uniquely identifies the DataSource.
* The data specification of an Amazon Redshift DataSource:
  * DatabaseInformation:
    * DatabaseName - The name of the Amazon Redshift database.
    * ClusterIdentifier - The unique ID for the Amazon Redshift cluster.
  * DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.
  * SelectSqlQuery - The query that is used to retrieve the observation data for the Datasource.
  * S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the SelectSqlQuery query is stored in this location.
  * DataSchemaUri - The Amazon S3 location of the DataSchema.
  * DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.
  * DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the DataSource. Sample: "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"
* A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:
  * A security group to allow Amazon ML to execute the SelectSqlQuery query on an Amazon Redshift cluster.
  * An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the S3StagingLocation.
The response fields:

* A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceID in the request.
* The response status code.

CreateDataSourceFromRDS

Represents the output of a CreateDataSourceFromRDS operation, and is an acknowledgement that Amazon ML received the request. The CreateDataSourceFromRDS operation is asynchronous. You can poll for updates by using the GetDataSource operation and checking the Status parameter. You can inspect the Message when Status shows up as FAILED. You can also check the progress of the copy operation by going to the DataPipeline console and looking up the pipeline using the pipelineId from the describe call. See: the smart constructor.

Creates a value of CreateDataSourceFromRDS with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* A user-supplied name or description of the DataSource.
* The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.
* A user-supplied ID that uniquely identifies the DataSource. Typically, an Amazon Resource Number (ARN) becomes the ID for a DataSource.
* The data specification of an Amazon RDS DataSource:
  * DatabaseInformation:
    * DatabaseName - The name of the Amazon RDS database.
    * InstanceIdentifier - A unique identifier for the Amazon RDS database instance.
  * DatabaseCredentials - AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
  * ResourceRole - A role (DataPipelineDefaultResourceRole) assumed by an EC2 instance to carry out the copy task from Amazon RDS to Amazon Simple Storage Service (Amazon S3). For more information, see Role templates for data pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html).
  * ServiceRole - A role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html).
  * SecurityInfo - The security information to use to access an RDS DB instance. You need to set up appropriate ingress rules for the security entity IDs provided to allow access to the Amazon RDS instance. Specify a [SubnetId, SecurityGroupIds] pair for a VPC-based RDS DB instance.
  * SelectSqlQuery - A query that is used to retrieve the observation data for the Datasource.
  * S3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
  * DataSchemaUri - The Amazon S3 location of the DataSchema.
  * DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.
  * DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the Datasource. Sample: "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"
* The role that Amazon ML assumes on behalf of the user to create and activate a data pipeline in the user's account and copy data using the SelectSqlQuery query from Amazon RDS to Amazon S3.
The response fields:

* A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceID in the request.
* The response status code.

CreateBatchPrediction

Represents the output of a CreateBatchPrediction operation, and is an acknowledgement that Amazon ML received the request. The CreateBatchPrediction operation is asynchronous. You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. See: the smart constructor.

Creates a value of CreateBatchPrediction with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* A user-supplied name or description of the BatchPrediction. BatchPredictionName can only use the UTF-8 character set.
* A user-supplied ID that uniquely identifies the BatchPrediction.
* The ID of the MLModel that will generate predictions for the group of observations.
* The ID of the DataSource that points to the group of observations to predict.
* The location of an Amazon Simple Storage Service (Amazon S3) bucket or directory to store the batch prediction results. The following substrings are not allowed in the s3 key portion of the outputURI field: ':', '//', '/./', '/../'. Amazon ML needs permissions to store and retrieve the logs on your behalf. For information about how to set permissions, see the Amazon Machine Learning Developer Guide (http://docs.aws.amazon.com/machine-learning/latest/dg).

The response fields:

* A user-supplied ID that uniquely identifies the BatchPrediction. This value is identical to the value of the BatchPredictionId in the request.
* The response status code.

AddTags

Amazon ML returns the following elements. See: the smart constructor.

Creates a value of AddTags with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The key-value pairs to use to create tags. If you specify a key without specifying a value, Amazon ML creates a tag with the specified key and a value of null.
* The ID of the ML object to tag. For example, exampleModelId.
* The type of the ML object to tag.

The response fields:

* The ID of the ML object that was tagged.
* The type of the ML object that was tagged.
* The response status code.

UpdateBatchPrediction

Represents the output of an UpdateBatchPrediction operation. You can see the updated content by using the GetBatchPrediction operation. See: the smart constructor.
Creates a value of UpdateBatchPrediction with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The ID assigned to the BatchPrediction during creation.
* A new user-supplied name or description of the BatchPrediction.

The response fields:

* The ID assigned to the BatchPrediction during creation. This value should be identical to the value of the BatchPredictionId in the request.
* The response status code.

UpdateDataSource

Represents the output of an UpdateDataSource operation. You can see the updated content by using the GetDataSource operation. See: the smart constructor.

Creates a value of UpdateDataSource with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The ID assigned to the DataSource during creation.
* A new user-supplied name or description of the DataSource that will replace the current description.
The response fields:

* The ID assigned to the DataSource during creation. This value should be identical to the value of the DataSourceID in the request.
* The response status code.

UpdateEvaluation

Represents the output of an UpdateEvaluation operation. You can see the updated content by using the GetEvaluation operation. See: the smart constructor.

Creates a value of UpdateEvaluation with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* The ID assigned to the Evaluation during creation.
* A new user-supplied name or description of the Evaluation that will replace the current content.

The response fields:

* The ID assigned to the Evaluation during creation. This value should be identical to the value of the EvaluationId in the request.
* The response status code.
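The update pattern is uniform across these operations; renaming an Evaluation, for instance, might look like the following. This is a sketch only: the argument order of the `updateEvaluation` smart constructor (EvaluationId first, then the replacement name) is an assumption:

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Monad (void)
import Network.AWS
import Network.AWS.MachineLearning

main :: IO ()
main = do
  env <- newEnv Discover
  runResourceT . runAWS env . within NorthVirginia $
    -- updateEvaluation is assumed to take the EvaluationId and the
    -- new name; the response echoes the EvaluationId back.
    void (send (updateEvaluation "ev-example-id" "renamed evaluation"))
```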
UpdateMLModel

Represents the output of an UpdateMLModel operation. You can see the updated content by using the 'GetMLModel' operation.

See: 'updateMLModel' smart constructor.

Creates a value of 'UpdateMLModel' with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* 'umlmMLModelName' - A user-supplied name or description of the MLModel.

* 'umlmScoreThreshold' - The ScoreThreshold used in binary classification MLModel that marks the boundary between a positive prediction and a negative prediction. Output values greater than or equal to the ScoreThreshold receive a positive result from the MLModel, such as true. Output values less than the ScoreThreshold receive a negative response from the MLModel, such as false.

* 'umlmMLModelId' - The ID assigned to the MLModel during creation.

Creates a value of 'UpdateMLModelResponse' with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:

* 'umlmrsMLModelId' - The ID assigned to the MLModel during creation. This value should be identical to the value of the MLModelId in the request.

* 'umlmrsResponseStatus' - The response status code.

Waiters

* 'mLModelAvailable' - Polls 'DescribeMLModels' every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.

* 'batchPredictionAvailable' - Polls 'DescribeBatchPredictions' every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.

* 'dataSourceAvailable' - Polls 'DescribeDataSources' every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.

* 'evaluationAvailable' - Polls 'DescribeEvaluations' every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.
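A sketch combining an MLModel update with a waiter. Assumptions to note: the model ID and threshold value are placeholders, and `mLModelAvailable` is assumed to be built on the 'DescribeMLModels' request (matching the upstream waiter definition), so a plain `describeMLModels` request is passed to `await`.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Network.AWS (Credentials (Discover), await, newEnv, runAWS, runResourceT, send)
import Network.AWS.MachineLearning.DescribeMLModels (describeMLModels)
import Network.AWS.MachineLearning.UpdateMLModel
import Network.AWS.MachineLearning.Waiters (mLModelAvailable)

main :: IO ()
main = do
  env <- newEnv Discover
  _ <- runResourceT . runAWS env $ do
         -- Raise the binary-classification cutoff; ScoreThreshold is an
         -- optional field, so it is set through its lens with (?~).
         _ <- send (updateMLModel "ml-example-id" & umlmScoreThreshold ?~ 0.75)
         -- Poll every 30 seconds (up to 60 checks) until a successful state.
         await mLModelAvailable describeMLModels
  pure ()
```

`await` returns an `Accept` value describing whether the waiter succeeded, failed, or should retry, which callers can inspect before proceeding.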
Package: amazonka-ml-1.6.1

Modules:

* Network.AWS.MachineLearning
* Network.AWS.MachineLearning.Types
* Network.AWS.MachineLearning.Predict
* Network.AWS.MachineLearning.GetMLModel
* Network.AWS.MachineLearning.GetEvaluation
* Network.AWS.MachineLearning.GetDataSource
* Network.AWS.MachineLearning.GetBatchPrediction
* Network.AWS.MachineLearning.DescribeTags
* Network.AWS.MachineLearning.DescribeMLModels
* Network.AWS.MachineLearning.DescribeEvaluations
* Network.AWS.MachineLearning.DescribeDataSources
* Network.AWS.MachineLearning.DescribeBatchPredictions
* Network.AWS.MachineLearning.DeleteTags
* Network.AWS.MachineLearning.DeleteRealtimeEndpoint
* Network.AWS.MachineLearning.DeleteMLModel
* Network.AWS.MachineLearning.DeleteEvaluation
* Network.AWS.MachineLearning.DeleteDataSource
* Network.AWS.MachineLearning.DeleteBatchPrediction
* Network.AWS.MachineLearning.CreateRealtimeEndpoint
* Network.AWS.MachineLearning.CreateMLModel
* Network.AWS.MachineLearning.CreateEvaluation
* Network.AWS.MachineLearning.CreateDataSourceFromS3
* Network.AWS.MachineLearning.CreateDataSourceFromRedshift
* Network.AWS.MachineLearning.CreateDataSourceFromRDS
* Network.AWS.MachineLearning.CreateBatchPrediction
* Network.AWS.MachineLearning.AddTags
* Network.AWS.MachineLearning.UpdateBatchPrediction
* Network.AWS.MachineLearning.UpdateDataSource
* Network.AWS.MachineLearning.UpdateEvaluation
* Network.AWS.MachineLearning.UpdateMLModel
* Network.AWS.MachineLearning.Waiters
* Network.AWS.MachineLearning.Types.Sum
* Network.AWS.MachineLearning.Types.Product