| Copyright | (c) 2013-2023 Brendan Hay | 
|---|---|
| License | Mozilla Public License, v. 2.0. | 
| Maintainer | Brendan Hay | 
| Stability | auto-generated | 
| Portability | non-portable (GHC extensions) | 
| Safe Haskell | Safe-Inferred | 
| Language | Haskell2010 | 
Amazonka.SageMaker.Types.ClarifyInferenceConfig
Description
Synopsis
- data ClarifyInferenceConfig = ClarifyInferenceConfig' {
  - contentTemplate :: Maybe Text
  - featureHeaders :: Maybe (NonEmpty Text)
  - featureTypes :: Maybe (NonEmpty ClarifyFeatureType)
  - featuresAttribute :: Maybe Text
  - labelAttribute :: Maybe Text
  - labelHeaders :: Maybe (NonEmpty Text)
  - labelIndex :: Maybe Natural
  - maxPayloadInMB :: Maybe Natural
  - maxRecordCount :: Maybe Natural
  - probabilityAttribute :: Maybe Text
  - probabilityIndex :: Maybe Natural
  }

 - newClarifyInferenceConfig :: ClarifyInferenceConfig
 - clarifyInferenceConfig_contentTemplate :: Lens' ClarifyInferenceConfig (Maybe Text)
 - clarifyInferenceConfig_featureHeaders :: Lens' ClarifyInferenceConfig (Maybe (NonEmpty Text))
 - clarifyInferenceConfig_featureTypes :: Lens' ClarifyInferenceConfig (Maybe (NonEmpty ClarifyFeatureType))
 - clarifyInferenceConfig_featuresAttribute :: Lens' ClarifyInferenceConfig (Maybe Text)
 - clarifyInferenceConfig_labelAttribute :: Lens' ClarifyInferenceConfig (Maybe Text)
 - clarifyInferenceConfig_labelHeaders :: Lens' ClarifyInferenceConfig (Maybe (NonEmpty Text))
 - clarifyInferenceConfig_labelIndex :: Lens' ClarifyInferenceConfig (Maybe Natural)
 - clarifyInferenceConfig_maxPayloadInMB :: Lens' ClarifyInferenceConfig (Maybe Natural)
 - clarifyInferenceConfig_maxRecordCount :: Lens' ClarifyInferenceConfig (Maybe Natural)
 - clarifyInferenceConfig_probabilityAttribute :: Lens' ClarifyInferenceConfig (Maybe Text)
 - clarifyInferenceConfig_probabilityIndex :: Lens' ClarifyInferenceConfig (Maybe Natural)
 
Documentation
data ClarifyInferenceConfig Source #
The inference configuration parameter for the model container.
See: newClarifyInferenceConfig smart constructor.
Constructors
ClarifyInferenceConfig'
Instances
newClarifyInferenceConfig :: ClarifyInferenceConfig Source #
Create a value of ClarifyInferenceConfig with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:contentTemplate:ClarifyInferenceConfig', clarifyInferenceConfig_contentTemplate - A template string used to format a JSON record into an acceptable model
 container input. For example, a ContentTemplate string
 '{"myfeatures":$features}' will format a list of features
 [1,2,3] into the record string '{"myfeatures":[1,2,3]}'.
 Required only when the model container input is in JSON Lines format.
$sel:featureHeaders:ClarifyInferenceConfig', clarifyInferenceConfig_featureHeaders - The names of the features. If provided, these are included in the
 endpoint response payload to help readability of the InvokeEndpoint
 output. See the
 Response
 section under Invoke the endpoint in the Developer Guide for more
 information.
$sel:featureTypes:ClarifyInferenceConfig', clarifyInferenceConfig_featureTypes - A list of data types of the features (optional). Applicable only to NLP
 explainability. If provided, FeatureTypes must have at least one
 'text' string (for example, ['text']). If FeatureTypes is not
 provided, the explainer infers the feature types based on the baseline
 data. The feature types are included in the endpoint response payload.
 For additional information, see the response section under Invoke the
 endpoint in the Developer Guide.
$sel:featuresAttribute:ClarifyInferenceConfig', clarifyInferenceConfig_featuresAttribute - Provides the JMESPath expression to extract the features from a model
 container input in JSON Lines format. For example, if
 FeaturesAttribute is the JMESPath expression 'myfeatures', it
 extracts a list of features [1,2,3] from request data
 '{"myfeatures":[1,2,3]}'.
$sel:labelAttribute:ClarifyInferenceConfig', clarifyInferenceConfig_labelAttribute - A JMESPath expression used to locate the list of label headers in the
 model container output.
Example: If the model container output of a batch request is
 '{"labels":["cat","dog","fish"],"probability":[0.6,0.3,0.1]}',
 then set LabelAttribute to 'labels' to extract the list of label
 headers ["cat","dog","fish"]
$sel:labelHeaders:ClarifyInferenceConfig', clarifyInferenceConfig_labelHeaders - For multiclass classification problems, the label headers are the names
 of the classes. Otherwise, the label header is the name of the predicted
 label. These are used to help readability for the output of the
 InvokeEndpoint API. See the
 response
 section under Invoke the endpoint in the Developer Guide for more
 information. If there are no label headers in the model container
 output, provide them manually using this parameter.
$sel:labelIndex:ClarifyInferenceConfig', clarifyInferenceConfig_labelIndex - A zero-based index used to extract a label header or list of label
 headers from model container output in CSV format.
Example for a multiclass model: If the model container output
 consists of label headers followed by probabilities:
 '"[\'cat\',\'dog\',\'fish\']","[0.1,0.6,0.3]"', set
 LabelIndex to 0 to select the label headers
 ['cat','dog','fish'].
$sel:maxPayloadInMB:ClarifyInferenceConfig', clarifyInferenceConfig_maxPayloadInMB - The maximum payload size (in MB) allowed for a request from the explainer
 to the model container. Defaults to 6 MB.
$sel:maxRecordCount:ClarifyInferenceConfig', clarifyInferenceConfig_maxRecordCount - The maximum number of records in a request that the model container can
 process when querying the model container for the predictions of a
 synthetic dataset.
 A record is a unit of input data that inference can be made on, for
 example, a single line in CSV data. If MaxRecordCount is 1, the
 model container expects one record per request. A value of 2 or greater
 means that the model expects batch requests, which can reduce overhead
 and speed up the inferencing process. If this parameter is not provided,
 the explainer will tune the record count per request according to the
 model container's capacity at runtime.
$sel:probabilityAttribute:ClarifyInferenceConfig', clarifyInferenceConfig_probabilityAttribute - A JMESPath expression used to extract the probability (or score) from
 the model container output when the output is in JSON Lines format.
Example: If the model container output of a single request is
 '{"predicted_label":1,"probability":0.6}', then set
 ProbabilityAttribute to 'probability'.
$sel:probabilityIndex:ClarifyInferenceConfig', clarifyInferenceConfig_probabilityIndex - A zero-based index used to extract a probability value (score) or list
 from model container output in CSV format. If this value is not
 provided, the entire model container output will be treated as a
 probability value (score) or list.
Example for a single class model: If the model container output
 consists of a string-formatted prediction label followed by its
 probability: '1,0.6', set ProbabilityIndex to 1 to select the
 probability value 0.6.
Example for a multiclass model: If the model container output
 consists of a string-formatted prediction label followed by its
 probability:
 '"[\'cat\',\'dog\',\'fish\']","[0.1,0.6,0.3]"', set
 ProbabilityIndex to 1 to select the probability values
 [0.1,0.6,0.3].
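As a quick orientation (not part of the generated reference), here is a minimal sketch of building a configuration for a model container that exchanges JSON Lines records, combining the newClarifyInferenceConfig smart constructor with the lenses documented below. It assumes the lens package is available for the (&) and (?~) operators and OverloadedStrings for the Text literals; all field values are illustrative.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka.SageMaker.Types.ClarifyInferenceConfig
import Control.Lens ((&), (?~))
import Data.List.NonEmpty (NonEmpty (..))

-- Illustrative settings for a container whose request bodies look like
-- {"myfeatures":[1,2,3]} and whose responses look like
-- {"labels":["cat","dog","fish"],"probability":[0.6,0.3,0.1]}.
jsonLinesConfig :: ClarifyInferenceConfig
jsonLinesConfig =
  newClarifyInferenceConfig
    & clarifyInferenceConfig_contentTemplate ?~ "{\"myfeatures\":$features}"
    & clarifyInferenceConfig_featuresAttribute ?~ "myfeatures"
    & clarifyInferenceConfig_probabilityAttribute ?~ "probability"
    & clarifyInferenceConfig_labelAttribute ?~ "labels"
    & clarifyInferenceConfig_featureHeaders ?~ ("f0" :| ["f1", "f2"])
```

Equivalent updates can be made with generic-lens or optics, as noted above; the record syntax with ClarifyInferenceConfig' works as well.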
clarifyInferenceConfig_contentTemplate :: Lens' ClarifyInferenceConfig (Maybe Text) Source #
A template string used to format a JSON record into an acceptable model
 container input. For example, a ContentTemplate string
 '{"myfeatures":$features}' will format a list of features
 [1,2,3] into the record string '{"myfeatures":[1,2,3]}'.
 Required only when the model container input is in JSON Lines format.
clarifyInferenceConfig_featureHeaders :: Lens' ClarifyInferenceConfig (Maybe (NonEmpty Text)) Source #
The names of the features. If provided, these are included in the
 endpoint response payload to help readability of the InvokeEndpoint
 output. See the
 Response
 section under Invoke the endpoint in the Developer Guide for more
 information.
clarifyInferenceConfig_featureTypes :: Lens' ClarifyInferenceConfig (Maybe (NonEmpty ClarifyFeatureType)) Source #
A list of data types of the features (optional). Applicable only to NLP
 explainability. If provided, FeatureTypes must have at least one
 'text' string (for example, ['text']). If FeatureTypes is not
 provided, the explainer infers the feature types based on the baseline
 data. The feature types are included in the endpoint response payload.
 For additional information, see the response section under Invoke the
 endpoint in the Developer Guide.
clarifyInferenceConfig_featuresAttribute :: Lens' ClarifyInferenceConfig (Maybe Text) Source #
Provides the JMESPath expression to extract the features from a model
 container input in JSON Lines format. For example, if
 FeaturesAttribute is the JMESPath expression 'myfeatures', it
 extracts a list of features [1,2,3] from request data
 '{"myfeatures":[1,2,3]}'.
clarifyInferenceConfig_labelAttribute :: Lens' ClarifyInferenceConfig (Maybe Text) Source #
A JMESPath expression used to locate the list of label headers in the model container output.
Example: If the model container output of a batch request is
 '{"labels":["cat","dog","fish"],"probability":[0.6,0.3,0.1]}',
 then set LabelAttribute to 'labels' to extract the list of label
 headers ["cat","dog","fish"]
clarifyInferenceConfig_labelHeaders :: Lens' ClarifyInferenceConfig (Maybe (NonEmpty Text)) Source #
For multiclass classification problems, the label headers are the names
 of the classes. Otherwise, the label header is the name of the predicted
 label. These are used to help readability for the output of the
 InvokeEndpoint API. See the
 response
 section under Invoke the endpoint in the Developer Guide for more
 information. If there are no label headers in the model container
 output, provide them manually using this parameter.
clarifyInferenceConfig_labelIndex :: Lens' ClarifyInferenceConfig (Maybe Natural) Source #
A zero-based index used to extract a label header or list of label headers from model container output in CSV format.
Example for a multiclass model: If the model container output
 consists of label headers followed by probabilities:
 '"[\'cat\',\'dog\',\'fish\']","[0.1,0.6,0.3]"', set
 LabelIndex to 0 to select the label headers
 ['cat','dog','fish'].
clarifyInferenceConfig_maxPayloadInMB :: Lens' ClarifyInferenceConfig (Maybe Natural) Source #
The maximum payload size (in MB) allowed for a request from the explainer
 to the model container. Defaults to 6 MB.
clarifyInferenceConfig_maxRecordCount :: Lens' ClarifyInferenceConfig (Maybe Natural) Source #
The maximum number of records in a request that the model container can
 process when querying the model container for the predictions of a
 synthetic dataset.
 A record is a unit of input data that inference can be made on, for
 example, a single line in CSV data. If MaxRecordCount is 1, the
 model container expects one record per request. A value of 2 or greater
 means that the model expects batch requests, which can reduce overhead
 and speed up the inferencing process. If this parameter is not provided,
 the explainer will tune the record count per request according to the
 model container's capacity at runtime.
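For example, a brief sketch (values chosen purely for illustration) that caps explainer requests at 100 records and 6 MB per payload; it assumes the same lens operators as the earlier example:

```haskell
import Amazonka.SageMaker.Types.ClarifyInferenceConfig
import Control.Lens ((&), (?~))

-- Illustrative limits: batches of at most 100 records and request
-- payloads of at most 6 MB (the documented default) per explainer call.
batchingLimits :: ClarifyInferenceConfig
batchingLimits =
  newClarifyInferenceConfig
    & clarifyInferenceConfig_maxRecordCount ?~ 100
    & clarifyInferenceConfig_maxPayloadInMB ?~ 6
```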
clarifyInferenceConfig_probabilityAttribute :: Lens' ClarifyInferenceConfig (Maybe Text) Source #
A JMESPath expression used to extract the probability (or score) from the model container output when the output is in JSON Lines format.
Example: If the model container output of a single request is
 '{"predicted_label":1,"probability":0.6}', then set
 ProbabilityAttribute to 'probability'.
clarifyInferenceConfig_probabilityIndex :: Lens' ClarifyInferenceConfig (Maybe Natural) Source #
A zero-based index used to extract a probability value (score) or list from model container output in CSV format. If this value is not provided, the entire model container output will be treated as a probability value (score) or list.
Example for a single class model: If the model container output
 consists of a string-formatted prediction label followed by its
 probability: '1,0.6', set ProbabilityIndex to 1 to select the
 probability value 0.6.
Example for a multiclass model: If the model container output
 consists of a string-formatted prediction label followed by its
 probability:
 '"[\'cat\',\'dog\',\'fish\']","[0.1,0.6,0.3]"', set
 ProbabilityIndex to 1 to select the probability values
 [0.1,0.6,0.3].
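Putting the two index fields together, a minimal sketch (again with illustrative values) for the multiclass CSV output shown above, where the label headers occupy column 0 and the probability list occupies column 1:

```haskell
import Amazonka.SageMaker.Types.ClarifyInferenceConfig
import Control.Lens ((&), (?~))

-- For CSV output such as "['cat','dog','fish']","[0.1,0.6,0.3]":
-- column 0 holds the label headers, column 1 the probabilities.
csvMulticlassConfig :: ClarifyInferenceConfig
csvMulticlassConfig =
  newClarifyInferenceConfig
    & clarifyInferenceConfig_labelIndex ?~ 0
    & clarifyInferenceConfig_probabilityIndex ?~ 1
```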