Copyright | (c) 2013-2023 Brendan Hay |
---|---|
License | Mozilla Public License, v. 2.0. |
Maintainer | Brendan Hay |
Stability | auto-generated |
Portability | non-portable (GHC extensions) |
Safe Haskell | Safe-Inferred |
Language | Haskell2010 |
Detects named entities in input text when you use the pre-trained model. Detects custom entities if you have a custom entity recognition model.
When detecting named entities using the pre-trained model, use plain text as the input. For more information about named entities, see Entities in the Comprehend Developer Guide.
When you use a custom entity recognition model, you can input plain text or you can upload a single-page input document (text, PDF, Word, or image).
If the system detects errors while processing a page in the input document, the API response includes an entry in Errors for each error.
If the system detects a document-level error in your input document, the API returns an InvalidRequestException error response. For details about this exception, see Errors in semi-structured documents in the Comprehend Developer Guide.
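As a sketch of how this operation is typically invoked from amazonka (the sample text and the exact spelling of the `LanguageCode` pattern synonym below are illustrative assumptions, not taken from this page):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import Amazonka.Comprehend
import Control.Lens ((&), (?~), (^.))

main :: IO ()
main = do
  -- Discover credentials from the environment (env vars, shared config, ...).
  env <- Amazonka.newEnv Amazonka.discover
  -- Detect entities in plain text using the pre-trained English model.
  let req =
        newDetectEntities
          & detectEntities_text ?~ "Jeff lives in Seattle."
          & detectEntities_languageCode ?~ LanguageCode_En -- constructor name assumed
  resp <- Amazonka.runResourceT (Amazonka.send env req)
  print (resp ^. detectEntitiesResponse_entities)
```

Because `text`, `languageCode`, and the other request fields are all `Maybe` values, `newDetectEntities` builds an empty request and each field is set through its lens with `?~`.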
Synopsis
- data DetectEntities = DetectEntities' {}
- newDetectEntities :: DetectEntities
- detectEntities_bytes :: Lens' DetectEntities (Maybe ByteString)
- detectEntities_documentReaderConfig :: Lens' DetectEntities (Maybe DocumentReaderConfig)
- detectEntities_endpointArn :: Lens' DetectEntities (Maybe Text)
- detectEntities_languageCode :: Lens' DetectEntities (Maybe LanguageCode)
- detectEntities_text :: Lens' DetectEntities (Maybe Text)
- data DetectEntitiesResponse = DetectEntitiesResponse' {
- blocks :: Maybe [Block]
- documentMetadata :: Maybe DocumentMetadata
- documentType :: Maybe [DocumentTypeListItem]
- entities :: Maybe [Entity]
- errors :: Maybe [ErrorsListItem]
- httpStatus :: Int
- }
- newDetectEntitiesResponse :: Int -> DetectEntitiesResponse
- detectEntitiesResponse_blocks :: Lens' DetectEntitiesResponse (Maybe [Block])
- detectEntitiesResponse_documentMetadata :: Lens' DetectEntitiesResponse (Maybe DocumentMetadata)
- detectEntitiesResponse_documentType :: Lens' DetectEntitiesResponse (Maybe [DocumentTypeListItem])
- detectEntitiesResponse_entities :: Lens' DetectEntitiesResponse (Maybe [Entity])
- detectEntitiesResponse_errors :: Lens' DetectEntitiesResponse (Maybe [ErrorsListItem])
- detectEntitiesResponse_httpStatus :: Lens' DetectEntitiesResponse Int
Creating a Request
data DetectEntities Source #
See: newDetectEntities smart constructor.

Constructors

DetectEntities'
Instances
newDetectEntities :: DetectEntities Source #
Create a value of DetectEntities with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:bytes:DetectEntities'
, detectEntities_bytes
- This field applies only when you use a custom entity recognition model that was trained with PDF annotations. For other cases, enter your text input in the Text field.

Use the Bytes parameter to input a text, PDF, Word or image file. Using a plain-text file in the Bytes parameter is equivalent to using the Text parameter (the Entities field in the response is identical).

You can also use the Bytes parameter to input an Amazon Textract DetectDocumentText or AnalyzeDocument output file.

Provide the input document as a sequence of base64-encoded bytes. If your code uses an Amazon Web Services SDK to detect entities, the SDK may encode the document file bytes for you.

The maximum length of this field depends on the input document type. For details, see Inputs for real-time custom analysis in the Comprehend Developer Guide.

If you use the Bytes parameter, do not use the Text parameter.

Note: This Lens automatically encodes and decodes Base64 data. The underlying isomorphism will encode to Base64 representation during serialisation, and decode from Base64 representation during deserialisation. This Lens accepts and returns only raw unencoded data.
$sel:documentReaderConfig:DetectEntities'
, detectEntities_documentReaderConfig
- Provides configuration parameters to override the default actions for extracting text from PDF documents and image files.

$sel:endpointArn:DetectEntities'
, detectEntities_endpointArn
- The Amazon Resource Name of an endpoint that is associated with a custom entity recognition model. Provide an endpoint if you want to detect entities by using your own custom model instead of the default model that is used by Amazon Comprehend.

If you specify an endpoint, Amazon Comprehend uses the language of your custom model, and it ignores any language code that you provide in your request.

For information about endpoints, see Managing endpoints.

$sel:languageCode:DetectEntities'
, detectEntities_languageCode
- The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. If your request includes the endpoint for a custom entity recognition model, Amazon Comprehend uses the language of your custom model, and it ignores any language code that you specify here.

All input documents must be in the same language.

$sel:text:DetectEntities'
, detectEntities_text
- A UTF-8 text string. The maximum string size is 100 KB. If you enter text using this parameter, do not use the Bytes parameter.
Request Lenses
detectEntities_bytes :: Lens' DetectEntities (Maybe ByteString) Source #
This field applies only when you use a custom entity recognition model that was trained with PDF annotations. For other cases, enter your text input in the Text field.

Use the Bytes parameter to input a text, PDF, Word or image file. Using a plain-text file in the Bytes parameter is equivalent to using the Text parameter (the Entities field in the response is identical).

You can also use the Bytes parameter to input an Amazon Textract DetectDocumentText or AnalyzeDocument output file.

Provide the input document as a sequence of base64-encoded bytes. If your code uses an Amazon Web Services SDK to detect entities, the SDK may encode the document file bytes for you.

The maximum length of this field depends on the input document type. For details, see Inputs for real-time custom analysis in the Comprehend Developer Guide.

If you use the Bytes parameter, do not use the Text parameter.

Note: This Lens automatically encodes and decodes Base64 data. The underlying isomorphism will encode to Base64 representation during serialisation, and decode from Base64 representation during deserialisation. This Lens accepts and returns only raw unencoded data.
detectEntities_documentReaderConfig :: Lens' DetectEntities (Maybe DocumentReaderConfig) Source #
Provides configuration parameters to override the default actions for extracting text from PDF documents and image files.
detectEntities_endpointArn :: Lens' DetectEntities (Maybe Text) Source #
The Amazon Resource Name of an endpoint that is associated with a custom entity recognition model. Provide an endpoint if you want to detect entities by using your own custom model instead of the default model that is used by Amazon Comprehend.
If you specify an endpoint, Amazon Comprehend uses the language of your custom model, and it ignores any language code that you provide in your request.
For information about endpoints, see Managing endpoints.
detectEntities_languageCode :: Lens' DetectEntities (Maybe LanguageCode) Source #
The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. If your request includes the endpoint for a custom entity recognition model, Amazon Comprehend uses the language of your custom model, and it ignores any language code that you specify here.
All input documents must be in the same language.
detectEntities_text :: Lens' DetectEntities (Maybe Text) Source #
A UTF-8 text string. The maximum string size is 100 KB. If you enter text using this parameter, do not use the Bytes parameter.
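For the custom-model path, a hedged sketch of building a request from a PDF file. Note that because of the Base64 isomorphism on detectEntities_bytes, you supply raw, unencoded file content; the file name and endpoint ARN below are placeholders:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Comprehend
import Control.Lens ((&), (?~))
import qualified Data.ByteString as BS

-- Build a DetectEntities request against a custom entity recognition
-- endpoint. The lens performs Base64 encoding during serialisation, so the
-- ByteString passed here is the raw file content, not a Base64 string.
customRequest :: IO DetectEntities
customRequest = do
  pdf <- BS.readFile "invoice.pdf" -- placeholder path
  pure $
    newDetectEntities
      & detectEntities_bytes ?~ pdf
      & detectEntities_endpointArn
        ?~ "arn:aws:comprehend:us-east-1:123456789012:entity-recognizer-endpoint/example" -- placeholder ARN
```

Since the endpoint determines the model's language, no languageCode is set here; per the documentation above, any language code supplied alongside an endpoint is ignored.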
Destructuring the Response
data DetectEntitiesResponse Source #
See: newDetectEntitiesResponse smart constructor.

Constructors

DetectEntitiesResponse'
Instances
newDetectEntitiesResponse :: Int -> DetectEntitiesResponse Source #
Create a value of DetectEntitiesResponse with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:blocks:DetectEntitiesResponse'
, detectEntitiesResponse_blocks
- Information about each block of text in the input document. Blocks are nested. A page block contains a block for each line of text, which contains a block for each word.

The Block content for a Word input document does not include a Geometry field.

The Block field is not present in the response for plain-text inputs.
$sel:documentMetadata:DetectEntitiesResponse'
, detectEntitiesResponse_documentMetadata
- Information about the document, discovered during text extraction. This field is present in the response only if your request used the Bytes parameter.

$sel:documentType:DetectEntitiesResponse'
, detectEntitiesResponse_documentType
- The document type for each page in the input document. This field is present in the response only if your request used the Bytes parameter.

$sel:entities:DetectEntitiesResponse'
, detectEntitiesResponse_entities
- A collection of entities identified in the input text. For each entity, the response provides the entity text, entity type, where the entity text begins and ends, and the level of confidence that Amazon Comprehend has in the detection.
If your request uses a custom entity recognition model, Amazon Comprehend detects the entities that the model is trained to recognize. Otherwise, it detects the default entity types. For a list of default entity types, see Entities in the Comprehend Developer Guide.
$sel:errors:DetectEntitiesResponse'
, detectEntitiesResponse_errors
- Page-level errors that the system detected while processing the input document. The field is empty if the system encountered no errors.
$sel:httpStatus:DetectEntitiesResponse'
, detectEntitiesResponse_httpStatus
- The response's HTTP status code.
Response Lenses
detectEntitiesResponse_blocks :: Lens' DetectEntitiesResponse (Maybe [Block]) Source #
Information about each block of text in the input document. Blocks are nested. A page block contains a block for each line of text, which contains a block for each word.
The Block content for a Word input document does not include a Geometry field.

The Block field is not present in the response for plain-text inputs.
detectEntitiesResponse_documentMetadata :: Lens' DetectEntitiesResponse (Maybe DocumentMetadata) Source #
Information about the document, discovered during text extraction. This field is present in the response only if your request used the Bytes parameter.
detectEntitiesResponse_documentType :: Lens' DetectEntitiesResponse (Maybe [DocumentTypeListItem]) Source #
The document type for each page in the input document. This field is present in the response only if your request used the Bytes parameter.
detectEntitiesResponse_entities :: Lens' DetectEntitiesResponse (Maybe [Entity]) Source #
A collection of entities identified in the input text. For each entity, the response provides the entity text, entity type, where the entity text begins and ends, and the level of confidence that Amazon Comprehend has in the detection.
If your request uses a custom entity recognition model, Amazon Comprehend detects the entities that the model is trained to recognize. Otherwise, it detects the default entity types. For a list of default entity types, see Entities in the Comprehend Developer Guide.
detectEntitiesResponse_errors :: Lens' DetectEntitiesResponse (Maybe [ErrorsListItem]) Source #
Page-level errors that the system detected while processing the input document. The field is empty if the system encountered no errors.
detectEntitiesResponse_httpStatus :: Lens' DetectEntitiesResponse Int Source #
The response's HTTP status code.
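Putting the response lenses together, a sketch of inspecting a DetectEntitiesResponse. The field accessors on Entity used here (entity_text, entity_score) are assumptions based on amazonka's generated naming scheme, not defined on this page:

```haskell
import Amazonka.Comprehend
import Control.Lens ((^.))
import Data.Foldable (for_)
import Data.Maybe (fromMaybe)

-- Walk a DetectEntitiesResponse: report any page-level errors first,
-- then print each detected entity. All list fields are Maybe-wrapped,
-- so absent fields are treated as empty lists.
summarise :: DetectEntitiesResponse -> IO ()
summarise resp = do
  for_ (fromMaybe [] (resp ^. detectEntitiesResponse_errors)) $ \err ->
    putStrLn ("page error: " <> show err)
  for_ (fromMaybe [] (resp ^. detectEntitiesResponse_entities)) $ \ent ->
    print (ent ^. entity_text, ent ^. entity_score) -- assumed Entity lenses
```

Checking detectEntitiesResponse_errors before consuming entities matters for semi-structured inputs, since a page that failed to process contributes no entities but does contribute an ErrorsListItem.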