amazonka-textract data types
(c) 2013-2023 Brendan Hay; Mozilla Public License, v. 2.0; maintainer Brendan Hay; auto-generated; non-portable (GHC extensions); Safe-Inferred.

Every type below comes with a generated smart constructor that creates a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields; each record field also has a corresponding lens, provided for backwards compatibility.

BoundingBox

The bounding box around the detected page, text, key-value pair, table, table cell, or selection element on a document page. The left (x-coordinate) and top (y-coordinate) are coordinates that represent the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).

The top and left values returned are ratios of the overall document page size. For example, if the input image is 700 x 200 pixels, and the top-left coordinate of the bounding box is 350 x 50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).

The width and height values represent the dimensions of the bounding box as a ratio of the overall document page dimension. For example, if the document page size is 700 x 200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.

Fields:
  Height - The height of the bounding box as a ratio of the overall document page height.
  Left - The left coordinate of the bounding box as a ratio of overall document page width.
  Top - The top coordinate of the bounding box as a ratio of overall document page height.
  Width - The width of the bounding box as a ratio of the overall document page width.
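As a concrete illustration of the ratio arithmetic above, the sketch below builds a BoundingBox from pixel coordinates. It assumes the amazonka-textract 2.x generated names (the newBoundingBox smart constructor and the boundingBox_left / boundingBox_top / boundingBox_width / boundingBox_height lenses); check the exact identifiers against the installed version.

```haskell
-- Sketch only: 'newBoundingBox' and the 'boundingBox_*' lenses are assumed
-- to follow the amazonka 2.x generated naming convention.
import Amazonka.Textract
import Control.Lens ((&), (?~))

-- Convert a pixel-space box into the ratio representation described above.
-- On a 700 x 200 page, a 70 x 50 box at (350, 50) becomes
-- left = 0.5, top = 0.25, width = 0.1, height = 0.25.
pixelBox
  :: Double -> Double  -- page width and height, in pixels
  -> Double -> Double  -- box left and top, in pixels
  -> Double -> Double  -- box width and height, in pixels
  -> BoundingBox
pixelBox pageW pageH l t w h =
  newBoundingBox
    & boundingBox_left   ?~ l / pageW
    & boundingBox_top    ?~ t / pageH
    & boundingBox_width  ?~ w / pageW
    & boundingBox_height ?~ h / pageH
```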
DetectedSignature

A structure that holds information regarding a detected signature on a page.

Fields:
  Page - The page a detected signature was found on.

DocumentMetadata

Information about the input document.

Fields:
  Pages - The number of pages that are detected in the document.

ExpenseCurrency

Returns the kind of currency detected.

Fields:
  Code - Currency code for the detected currency. The currently supported codes are: USD, EUR, GBP, CAD, INR, JPY, CHF, AUD, CNY, BZR, SEK, HKD.
  Confidence - Percentage confidence in the detected currency.
ExpenseGroupProperty

Shows the group that a certain key belongs to. This helps differentiate between names and addresses for different organizations, which can be hard to determine via the JSON response.

Fields:
  Id - Provides a group Id number, which will be the same for each in the group.
  Types - Informs you on whether the expense group is a name or an address.

ExpenseType

An object used to store information about the Type detected by Amazon Textract.

Fields:
  Confidence - The confidence of accuracy, as a percentage.
  Text - The word or line of text detected by Amazon Textract.

HumanLoopActivationOutput

Shows the results of the human-in-the-loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

Fields:
  HumanLoopActivationConditionsEvaluationResults - Shows the result of condition evaluations, including those conditions which activated a human review.
  HumanLoopActivationReasons - Shows if and why human review was needed.
  HumanLoopArn - The Amazon Resource Name (ARN) of the HumanLoop created.
HumanLoopDataAttributes

Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information and adult content.

Fields:
  ContentClassifiers - Sets whether the input image is free of personally identifiable information or adult content.

HumanLoopConfig

Sets up the human review workflow the document will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.

Fields:
  DataAttributes - Sets attributes of the input data.
  HumanLoopName - The name of the human workflow used for this image. This should be kept unique within a region.
  FlowDefinitionArn - The Amazon Resource Name (ARN) of the flow definition.
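A minimal construction sketch follows. It assumes the generated newHumanLoopConfig smart constructor takes its two required fields (the human loop name and the flow definition ARN, in that order) as positional arguments, per the usual amazonka 2.x convention; the loop name and ARN are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Sketch only: constructor name and argument order are assumed from the
-- amazonka 2.x conventions; verify against the installed package.
import Amazonka.Textract

reviewConfig :: HumanLoopConfig
reviewConfig =
  newHumanLoopConfig
    "my-review-loop"                                                   -- human loop name (placeholder)
    "arn:aws:sagemaker:us-east-1:111122223333:flow-definition/my-flow" -- flow definition ARN (placeholder)
-- Optional attributes (for example, content classifiers) could then be set
-- through the generated 'humanLoopConfig_dataAttributes' lens.
```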
NotificationChannel

The Amazon Simple Notification Service (Amazon SNS) topic to which Amazon Textract publishes the completion status of an asynchronous document operation.

Fields:
  SNSTopicArn - The Amazon SNS topic that Amazon Textract posts the completion status to.
  RoleArn - The Amazon Resource Name (ARN) of an IAM role that gives Amazon Textract publishing permissions to the Amazon SNS topic.

OutputConfig

Sets whether or not your output will go to a user-created bucket. Used to set the name of the bucket, and the prefix on the output file.

OutputConfig is an optional parameter which lets you adjust where your output will be placed. By default, Amazon Textract will store the results internally and they can only be accessed by the Get API operations. With OutputConfig enabled, you can set the name of the bucket the output will be sent to and the file prefix of the results where you can download your results. Additionally, you can set the KMSKeyID parameter to a customer master key (CMK) to encrypt your output. Without this parameter set, Amazon Textract will encrypt server-side using the AWS managed CMK for Amazon S3.

Decryption of Customer Content is necessary for processing of the documents by Amazon Textract. If your account is opted out under an AI services opt-out policy, then all unencrypted Customer Content is immediately and permanently deleted after the Customer Content has been processed by the service. No copy of the output is retained by Amazon Textract. For information about how to opt out, see Managing AI services opt-out policy (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_ai-opt-out.html).

For more information on data privacy, see the Data Privacy FAQ (https://aws.amazon.com/compliance/data-privacy-faq/).

Fields:
  S3Prefix - The prefix of the object key that the output will be saved to. When not enabled, the prefix will be "textract_output".
  S3Bucket - The name of the bucket your output will go to.
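The sketch below wires both of these into an asynchronous job. It assumes newOutputConfig takes the required S3 bucket name as its single positional argument, newNotificationChannel takes the SNS topic ARN followed by the role ARN, and an outputConfig_s3Prefix lens exists; all of these names are amazonka 2.x convention-based assumptions, and the ARNs and bucket name are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Sketch only: constructor names, argument order and the lens name are
-- assumed from the amazonka 2.x conventions.
import Amazonka.Textract
import Control.Lens ((&), (?~))

-- Send asynchronous results to our own bucket under a custom prefix
-- instead of the default "textract_output".
myOutput :: OutputConfig
myOutput =
  newOutputConfig "my-textract-results"
    & outputConfig_s3Prefix ?~ "invoices/2023"

-- Where Amazon Textract should publish the job completion status.
myChannel :: NotificationChannel
myChannel =
  newNotificationChannel
    "arn:aws:sns:us-east-1:111122223333:textract-done"       -- SNS topic ARN (placeholder)
    "arn:aws:iam::111122223333:role/textract-sns-publisher"  -- IAM role ARN (placeholder)
```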
Point

The X and Y coordinates of a point on a document page. The X and Y values that are returned are ratios of the overall document page size. For example, if the input document is 700 x 200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the document page.

An array of Point objects, Polygon, is returned by DetectDocumentText. Polygon represents a fine-grained polygon around detected text. For more information, see Geometry in the Amazon Textract Developer Guide.

Fields:
  X - The value of the X coordinate for a point on a Polygon.
  Y - The value of the Y coordinate for a point on a Polygon.

Geometry

Information about where the following items are located on a document page: detected page, text, key-value pairs, tables, table cells, and selection elements.

Fields:
  BoundingBox - An axis-aligned coarse representation of the location of the recognized item on the document page.
  Polygon - Within the bounding box, a fine-grained polygon around the recognized item.
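Going the other way, the ratio-to-pixel conversion described for Point is plain arithmetic. ratioToPixels below is a hypothetical helper, not part of amazonka-textract.

```haskell
-- Self-contained sketch of the conversion described above; not an
-- amazonka-textract function.
ratioToPixels
  :: Double -> Double   -- page width and height, in pixels
  -> Double -> Double   -- X and Y ratios returned by the API
  -> (Double, Double)   -- pixel coordinate
ratioToPixels pageW pageH x y = (x * pageW, y * pageH)

main :: IO ()
main = print (ratioToPixels 700 200 0.5 0.25)  -- (350.0,50.0)
```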
ExpenseDetection

An object used to store information about the Value or Label detected by Amazon Textract.

Fields:
  Confidence - The confidence in detection, as a percentage.
  Geometry - Undocumented member.
  Text - The word or line of text recognized by Amazon Textract.

ExpenseField

Breakdown of detected information, separated into the categories Type, LabelDetection, and ValueDetection.

Fields:
  Currency - Shows the kind of currency, both the code and confidence associated with any monetary value detected.
  GroupProperties - Shows which group a response object belongs to, such as whether an address line belongs to the vendor's address or the recipient's address.
  LabelDetection - The explicitly stated label of a detected element.
  PageNumber - The page number the value was detected on.
  Type - The implied label of a detected element. Present alongside LabelDetection for explicit elements.
  ValueDetection - The value of a detected element. Present in explicit and implicit elements.
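For example, pulling the label/value text pair out of an ExpenseField might look like the sketch below. The expenseField_labelDetection, expenseField_valueDetection and expenseDetection_text lens names are assumed from the amazonka 2.x convention.

```haskell
-- Sketch only: lens names are assumed from the amazonka 2.x conventions.
import Amazonka.Textract
import Control.Lens ((^?), _Just)
import Data.Text (Text)

-- The explicitly stated label (if any) and the detected value text.
labelAndValue :: ExpenseField -> (Maybe Text, Maybe Text)
labelAndValue f =
  ( f ^? expenseField_labelDetection . _Just . expenseDetection_text . _Just
  , f ^? expenseField_valueDetection . _Just . expenseDetection_text . _Just
  )
```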
LineItemFields

A structure that holds information about the different lines found in a document's tables.

Fields:
  LineItemExpenseFields - ExpenseFields used to show information from detected lines on a table.

LineItemGroup

A grouping of tables which contain LineItems, with each table identified by the table's LineItemGroupIndex.

Fields:
  LineItemGroupIndex - The number used to identify a specific table in a document. The first table encountered will have a LineItemGroupIndex of 1, the second 2, etc.
  LineItems - The breakdown of information on a particular line of a table.

Prediction

Contains information regarding predicted values returned by Amazon Textract operations, including the predicted value and the confidence in the predicted value.

Fields:
  Confidence - Amazon Textract's confidence in its predicted value.
  Value - The predicted value of a detected object.
PageClassification

The class assigned to a Page object detected in an input document. Contains information regarding the predicted type/class of a document's page and the page number that the Page object was detected on.

Fields:
  PageType - The class, or document type, assigned to a detected Page object.
  PageNumber - The page number the value was detected on, relative to Amazon Textract's starting position.

Query

Each query contains the question you want to ask in the Text and the alias you want to associate.

Fields:
  Alias - Alias attached to the query, for ease of location.
  Pages - Pages is a parameter that the user inputs to specify which pages to apply a query to. The following is a list of rules for using this parameter (a construction sketch follows this section):
    - If a page is not specified, it is set to ["1"] by default.
    - The following characters are allowed in the parameter's string: 0 1 2 3 4 5 6 7 8 9 - *. No whitespace is allowed.
    - When using * to indicate all pages, it must be the only element in the list.
    - You can use page intervals, such as ["1-3", "1-1", "4-*"], where * indicates the last page of the document.
    - Specified pages must be greater than 0 and less than or equal to the number of pages in the document.
  Text - Question that Amazon Textract will apply to the document. An example would be "What is the customer's SSN?"
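A construction sketch, assuming the generated newQuery smart constructor (the question text is the only required field) and the query_alias / query_pages lenses; amazonka typically models a minimum-one list such as Pages as a NonEmpty, so adjust if your version uses a plain list.

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Sketch only: constructor and lens names assumed from the amazonka 2.x
-- conventions; Pages is assumed to be a NonEmpty list of Text.
import Amazonka.Textract
import Control.Lens ((&), (?~))
import qualified Data.List.NonEmpty as NE

ssnQuery :: Query
ssnQuery =
  newQuery "What is the customer's SSN?"
    & query_alias ?~ "customer_ssn"
    -- Pages 1-3 plus everything from page 4 to the last page.
    & query_pages ?~ NE.fromList ["1-3", "4-*"]
```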
QueriesConfig

(No description provided.)

Fields:
  Queries - Undocumented member.

Relationship

Information about how blocks are related to each other. A Block object contains 0 or more Relation objects in a list, Relationships. For more information, see Block.

The Type element provides the type of the relationship for all blocks in the IDs array.

Fields:
  Ids - An array of IDs for related blocks. You can get the type of the relationship from the Type element.
  Type - The type of relationship that the blocks in the IDs array have with the current block. The relationship can be VALUE or CHILD. A relationship of type VALUE is a list that contains the ID of the VALUE block that's associated with the KEY of a key-value pair. A relationship of type CHILD is a list of IDs that identify WORD blocks in the case of lines, Cell blocks in the case of Tables, and WORD blocks in the case of Selection Elements.
S3Object

The S3 bucket name and file name that identifies the document.

The AWS Region for the S3 bucket that contains the document must match the Region that you use for Amazon Textract operations.

For Amazon Textract to process a file in an S3 bucket, the user must have permission to access the S3 bucket and file.

Fields:
  Bucket - The name of the S3 bucket. Note that the # character is not valid in the file name.
  Name - The file name of the input document. Synchronous operations can use image files that are in JPEG or PNG format. Asynchronous operations also support PDF and TIFF format files.
  Version - If the bucket has versioning enabled, you can specify the object version.

DocumentLocation

The Amazon S3 bucket that contains the document to be processed. It's used by asynchronous operations.

The input document can be an image file in JPEG or PNG format. It can also be a file in PDF format.

Fields:
  S3Object - The Amazon S3 bucket that contains the input document.
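A sketch of pointing an asynchronous operation at an object in S3, assuming newS3Object and newDocumentLocation take no required arguments and the lenses follow the s3Object_bucket / documentLocation_s3Object naming convention; the bucket and key are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Sketch only: constructor and lens names assumed from the amazonka 2.x
-- conventions.
import Amazonka.Textract
import Control.Lens ((&), (?~))

invoiceLocation :: DocumentLocation
invoiceLocation =
  newDocumentLocation
    & documentLocation_s3Object ?~
        ( newS3Object
            & s3Object_bucket ?~ "my-input-documents"
            & s3Object_name   ?~ "invoices/2023-04.pdf"
        )
```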
Document

The input document, either as bytes or as an S3 object.

You pass image bytes to an Amazon Textract API operation by using the Bytes property. For example, you would use the Bytes property to pass a document loaded from a local file system. Image bytes passed by using the Bytes property must be base64 encoded. Your code might not need to encode document file bytes if you're using an AWS SDK to call Amazon Textract API operations.

You pass images stored in an S3 bucket to an Amazon Textract API operation by using the S3Object property. Documents stored in an S3 bucket don't need to be base64 encoded.

The AWS Region for the S3 bucket that contains the S3 object must match the AWS Region that you use for Amazon Textract operations.

If you use the AWS CLI to call Amazon Textract operations, passing image bytes using the Bytes property isn't supported. You must first upload the document to an Amazon S3 bucket, and then call the operation using the S3Object property.

For Amazon Textract to process an S3 object, the user must have permission to access the S3 object.

Fields:
  Bytes - A blob of base64-encoded document bytes. The maximum size of a document that's provided in a blob of bytes is 5 MB. The document bytes must be in PNG or JPEG format. If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes passed using the Bytes field. Note: the corresponding lens automatically encodes and decodes Base64 data. The underlying isomorphism encodes to the Base64 representation during serialisation, and decodes from it during deserialisation, so the lens accepts and returns only raw unencoded data.
  S3Object - Identifies an S3 object as the document source. The maximum size of a document that's stored in an S3 bucket is 5 MB.
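Two hedged construction sketches, one per source, assuming newDocument takes no required arguments and the document_bytes / document_s3Object lenses exist; per the note above, the Bytes lens is documented as handling Base64 itself, so raw image bytes are passed.

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Sketch only: constructor and lens names assumed from the amazonka 2.x
-- conventions.
import Amazonka.Textract
import Control.Lens ((&), (?~))
import qualified Data.ByteString as BS

-- Synchronous calls can pass raw JPEG/PNG bytes (up to 5 MB); the Bytes
-- lens is documented above as performing the Base64 encoding itself.
documentFromFile :: FilePath -> IO Document
documentFromFile path = do
  raw <- BS.readFile path
  pure (newDocument & document_bytes ?~ raw)

-- Or reference an object already in S3 (no Base64 involved).
documentFromS3 :: Document
documentFromS3 =
  newDocument
    & document_s3Object ?~
        ( newS3Object
            & s3Object_bucket ?~ "my-input-documents"
            & s3Object_name   ?~ "w2.png"
        )
```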
LendingDetection

The results extracted for a lending document.

Fields:
  Confidence - The confidence level for the text of a detected value in a lending document.
  Geometry - Undocumented member.
  SelectionStatus - The selection status of a selection element, such as an option button or check box.
  Text - The text extracted for a detected value in a lending document.

LendingField

Holds the normalized key-value pairs returned by AnalyzeDocument, including the document type, detected text, and geometry.

Fields:
  KeyDetection - Undocumented member.
  Type - The type of the lending document.
  ValueDetections - An array of LendingDetection objects.

SignatureDetection

Information regarding a detected signature on a page.

Fields:
  Confidence - The confidence, from 0 to 100, in the predicted values for a detected signature.
  Geometry - Undocumented member.
LendingDocument

Holds the structured data returned by AnalyzeDocument for lending documents.

Fields:
  LendingFields - An array of LendingField objects.
  SignatureDetections - A list of signatures detected in a lending document.

SplitDocument

Contains information about the pages of a document, defined by logical boundary.

Fields:
  Index - The index for a given document in a DocumentGroup of a specific Type.
  Pages - An array of page numbers for a given document, ordered by logical boundary.
A word is one or more ISO basic Latin script characters that aren't separated by spaces.LINE - A string of tab-delimited, contiguous words that are detected on a document page.>In text analysis operations, the following types are returned: PAGE - Contains a list of child Block3 objects that are detected on a document page. KEY_VALUE_SET - Stores the KEY and VALUE Block objects for linked text that's detected on a document page. Use the  EntityType< field to determine if a KEY_VALUE_SET object is a KEY Block object or a VALUE Block object.WORD - A word that's detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.LINE - A string of tab-delimited, contiguous words that are detected on a document page.TABLE - A table that's detected on a document page. A table is grid-based information with two or more rows or columns, with a cell span of one row and one column each.CELL - A cell within a detected table. The cell is the parent of the block that contains the text in the cell.SELECTION_ELEMENT - A selection element such as an option button (radio button) or a check box that's detected on a document page. Use the value of SelectionStatus7 to determine the status of the selection element. SIGNATURE - The location and confidene score of a signature detected on a document page. Can be returned as part of a Key-Value pair or a detected cell.QUERY - A question asked during the call of AnalyzeDocument. Contains an alias and an ID that attaches it to its answer. QUERY_RESULT - A response to a question asked during the call of analyze document. Comes with an alias and ID for ease of locating in a response. Also contains location and confidence score.amazonka-textractThe column in which a table cell appears. The first column position is 1.  ColumnIndex isn't returned by DetectDocumentText and GetDocumentTextDetection.amazonka-textractThe number of columns that a table cell spans. Currently this value is always 1, even if the number of columns spanned is greater than 1.  ColumnSpan isn't returned by DetectDocumentText and GetDocumentTextDetection.amazonka-textractThe confidence score that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.amazonka-textract2The type of entity. The following can be returned:KEY- - An identifier for a field on the document.VALUE - The field text. EntityTypes isn't returned by DetectDocumentText and GetDocumentTextDetection.amazonka-textractThe location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.amazonka-textractThe identifier for the recognized text. The identifier is only unique for a single operation.amazonka-textract(The page on which a block was detected. Page is returned by synchronous and asynchronous operations. Page values greater than 1 are only returned for multipage documents that are in PDF or TIFF format. A scanned image (JPEG/PNG) provided to an asynchronous operation, even if it contains multiple document pages, is considered a single-page document. This means that for scanned images the value of Page is always 1. Synchronous operations operations will also return a Page value of 1 because every input document is considered to be a single-page document.amazonka-textractA list of child blocks of the current block. 
For example, a LINE object has child blocks for each WORD block that's part of the line of text. There aren't Relationship objects in the list for relationships that don't exist, such as when the current block has no child blocks. The list size can be:

0 - The block has no child blocks.
1 - The block has child blocks.

The remaining fields (rowIndex, rowSpan, selectionStatus, text, textType) are described in the list below.

newBlock creates a value of Block with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The following record fields are available, with the corresponding lenses (block_blockType, block_columnIndex, and so on) provided for backwards compatibility; each lens shares the documentation of its field:

blockType - The type of text item that's recognized. In operations for text detection, the following types are returned:
  PAGE - Contains a list of the LINE Block objects that are detected on a document page.
  WORD - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.
  LINE - A string of tab-delimited, contiguous words that are detected on a document page.
In text analysis operations, the following types are returned:
  PAGE - Contains a list of child Block objects that are detected on a document page.
  KEY_VALUE_SET - Stores the KEY and VALUE Block objects for linked text that's detected on a document page. Use the EntityType field to determine if a KEY_VALUE_SET object is a KEY Block object or a VALUE Block object.
  WORD - A word that's detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.
  LINE - A string of tab-delimited, contiguous words that are detected on a document page.
  TABLE - A table that's detected on a document page. A table is grid-based information with two or more rows or columns, with a cell span of one row and one column each.
  CELL - A cell within a detected table. The cell is the parent of the block that contains the text in the cell.
  SELECTION_ELEMENT - A selection element such as an option button (radio button) or a check box that's detected on a document page. Use the value of SelectionStatus to determine the status of the selection element.
  SIGNATURE - The location and confidence score of a signature detected on a document page. Can be returned as part of a Key-Value pair or a detected cell.
  QUERY - A question asked during the call of AnalyzeDocument. Contains an alias and an ID that attaches it to its answer.
  QUERY_RESULT - A response to a question asked during the call of AnalyzeDocument. Comes with an alias and ID for ease of locating in a response. Also contains location and confidence score.

columnIndex - The column in which a table cell appears. The first column position is 1. ColumnIndex isn't returned by DetectDocumentText and GetDocumentTextDetection.

columnSpan - The number of columns that a table cell spans. Currently this value is always 1, even if the number of columns spanned is greater than 1. ColumnSpan isn't returned by DetectDocumentText and GetDocumentTextDetection.

confidence - The confidence score that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.

entityTypes - The type of entity. The following can be returned: KEY (an identifier for a field on the document) or VALUE (the field text). EntityTypes isn't returned by DetectDocumentText and GetDocumentTextDetection.

geometry - The location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.

id - The identifier for the recognized text. The identifier is only unique for a single operation.

page - The page on which a block was detected. Page is returned by synchronous and asynchronous operations. Page values greater than 1 are only returned for multipage documents that are in PDF or TIFF format. A scanned image (JPEG/PNG) provided to an asynchronous operation, even if it contains multiple document pages, is considered a single-page document, so for scanned images the value of Page is always 1. Synchronous operations will also return a Page value of 1 because every input document is considered to be a single-page document.

query - Undocumented member.

relationships - A list of child blocks of the current block. For example, a LINE object has child blocks for each WORD block that's part of the line of text. There aren't Relationship objects in the list for relationships that don't exist, such as when the current block has no child blocks. The list size can be 0 (the block has no child blocks) or 1 (the block has child blocks).

rowIndex - The row in which a table cell is located. The first row position is 1. RowIndex isn't returned by DetectDocumentText and GetDocumentTextDetection.

rowSpan - The number of rows that a table cell spans. Currently this value is always 1, even if the number of rows spanned is greater than 1. RowSpan isn't returned by DetectDocumentText and GetDocumentTextDetection.

selectionStatus - The selection status of a selection element, such as an option button or check box.

text - The word or line of text that's recognized by Amazon Textract.

textType - The kind of text that Amazon Textract has detected. Can check for handwritten text and printed text.
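To make the relationships mechanics concrete, here is a minimal sketch that reassembles each LINE block's words from its CHILD relationships. It uses only types, pattern synonyms and lenses named in this package (Block, block_id, block_text, block_blockType, block_relationships, relationship_type, relationship_ids, RelationshipType_CHILD, BlockType_LINE, BlockType_WORD), assumes amazonka's usual Maybe-wrapping of optional fields, and otherwise needs only Control.Lens and containers; the helper names themselves are illustrative.

    import           Control.Lens ((^.))
    import qualified Data.Map.Strict as Map
    import           Data.Maybe (fromMaybe)
    import           Data.Text (Text)
    import           Amazonka.Textract.Types

    -- Index blocks by Id so that CHILD relationships can be resolved.
    indexById :: [Block] -> Map.Map Text Block
    indexById blocks =
      Map.fromList [ (i, b) | b <- blocks, Just i <- [b ^. block_id] ]

    -- The text of every WORD block that a given block lists as a CHILD.
    childWordText :: Map.Map Text Block -> Block -> [Text]
    childWordText byId parent =
      [ t
      | rel     <- fromMaybe [] (parent ^. block_relationships)
      , rel ^. relationship_type == Just RelationshipType_CHILD
      , childId <- fromMaybe [] (rel ^. relationship_ids)
      , Just child <- [Map.lookup childId byId]
      , child ^. block_blockType == Just BlockType_WORD
      , Just t <- [child ^. block_text]
      ]

    -- Every LINE block paired with the words that make it up.
    lineWords :: [Block] -> [(Maybe Text, [Text])]
    lineWords blocks =
      [ (b ^. block_text, childWordText byId b)
      | b <- blocks
      , b ^. block_blockType == Just BlockType_LINE
      ]
      where
        byId = indexById blocks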
ExpenseDocument

The structure holding all the information returned by AnalyzeExpense. Documented fields (lenses expenseDocument_blocks, expenseDocument_expenseIndex, expenseDocument_lineItemGroups, expenseDocument_summaryFields):

blocks - This is a block object, the same as reported when DetectDocumentText is run on a document. It provides word level recognition of text.
expenseIndex - Denotes which invoice or receipt in the document the information is coming from. First document will be 1, the second 2, and so on.
lineItemGroups - Information detected on each table of a document, separated into LineItems.

NormalizedValue

Fields:

value - The value of the date, written as Year-Month-DayTHour:Minute:Second.
valueType - The normalized type of the value detected. In this case, DATE.

AnalyzeIDDetections

Used to contain the information detected by an AnalyzeID operation. Fields:

confidence - The confidence score of the detected text.
normalizedValue - Only returned for dates; returns the type of value detected and the date written in a more machine readable way.
text - Text of either the normalized field or value associated with it.

IdentityDocumentField

Structure containing both the normalized type of the extracted information and the text associated with it. These are extracted as Type and Value respectively. Both of its fields are otherwise undocumented members.

IdentityDocument

The structure that lists each document processed in an AnalyzeID operation. Fields:

blocks - Individual word recognition, as returned by document detection.
documentIndex - Denotes the placement of a document in the IdentityDocument list. The first document is marked 1, the second 2 and so on.
identityDocumentFields - The structure used to record information extracted from identity documents. Contains both normalized field and value of the extracted text.

Extraction

Contains information extracted by an analysis operation after using StartLendingAnalysis. It has two undocumented members and a field that holds the structured data returned by AnalyzeDocument for lending documents.

LendingResult

Contains the detections for each page analyzed through the Analyze Lending API. Its extractions field is an array of Extraction to hold structured data, e.g. normalized key-value pairs instead of raw OCR detections.
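Before moving on to the asynchronous operations, here is a sketch for the expense line-item structures described earlier in this section: they nest several levels deep (ExpenseDocument, LineItemGroup, LineItemFields, ExpenseField, ExpenseType, ExpenseDetection), and the snippet flattens them into pairs of normalized-type text and detected-value text. It uses only lenses that appear in this package, assumes amazonka's usual Maybe-wrapped optional fields, and lineItemPairs itself is just an illustrative helper.

    import Control.Lens ((^.))
    import Data.Maybe (fromMaybe)
    import Data.Text (Text)
    import Amazonka.Textract.Types

    -- Walk ExpenseDocument -> LineItemGroup -> LineItemFields -> ExpenseField
    -- and keep the normalized type text next to the detected value text.
    lineItemPairs :: ExpenseDocument -> [(Maybe Text, Maybe Text)]
    lineItemPairs doc =
      [ (typeText field, valueText field)
      | group <- fromMaybe [] (doc ^. expenseDocument_lineItemGroups)
      , item  <- fromMaybe [] (group ^. lineItemGroup_lineItems)
      , field <- fromMaybe [] (item ^. lineItemFields_lineItemExpenseFields)
      ]
      where
        typeText  f = f ^. expenseField_type           >>= (^. expenseType_text)
        valueText f = f ^. expenseField_valueDetection >>= (^. expenseDetection_text)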
GetLendingAnalysisSummary

Response fields:

- The current model version of the Analyze Lending API.
- An undocumented member.
- The current status of the lending analysis job.
- Returns if the lending analysis could not be completed. Contains explanation for what error occurred.
- Contains summary information for documents grouped by type.
- A list of warnings that occurred during the lending analysis operation.
- The response's http status code.

GetLendingAnalysis

Request fields:

- The maximum number of results to return per paginated call. The largest value that you can specify is 30. If you specify a value greater than 30, a maximum of 30 results is returned. The default value is 30.
- If the previous response was incomplete, Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of lending results.
- A unique identifier for the lending or text-detection job. The JobId is returned from StartLendingAnalysis. A JobId value is only valid for 7 days.

Response fields:

- The current model version of the Analyze Lending API.
- An undocumented member.
- The current status of the lending analysis job.
- If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of lending results.
- Holds the information returned by one of AmazonTextract's document analysis operations for the pinstripe.
- Returns if the lending analysis job could not be completed. Contains explanation for what error occurred.
- A list of warnings that occurred during the lending analysis operation.
- The response's http status code.
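Because a lending analysis job identified by a JobId runs asynchronously, a caller typically polls GetLendingAnalysis until the job status leaves IN_PROGRESS. A rough sketch, assuming an already-configured Amazonka.Env, and assuming the generated newGetLendingAnalysis constructor and getLendingAnalysisResponse_jobStatus lens; neither name appears in the listing above, both simply follow the package's naming scheme.

    import           Control.Concurrent (threadDelay)
    import           Control.Lens ((^.))
    import           Data.Text (Text)
    import qualified Amazonka
    import           Amazonka.Textract

    -- Poll every five seconds until the lending analysis job is no longer
    -- IN_PROGRESS.  newGetLendingAnalysis and the response lens are assumed
    -- names that follow the generated naming scheme.
    waitForLendingAnalysis :: Amazonka.Env -> Text -> IO GetLendingAnalysisResponse
    waitForLendingAnalysis env jobId = loop
      where
        loop = do
          resp <- Amazonka.runResourceT $
                    Amazonka.send env (newGetLendingAnalysis jobId)
          case resp ^. getLendingAnalysisResponse_jobStatus of
            Just JobStatus_IN_PROGRESS -> threadDelay 5000000 >> loop
            _                          -> pure resp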
GetExpenseAnalysis

Request fields:

- The maximum number of results to return per paginated call. The largest value you can specify is 20. If you specify a value greater than 20, a maximum of 20 results is returned. The default value is 20.
- If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
- A unique identifier for the text detection job. The JobId is returned from StartExpenseAnalysis. A JobId value is only valid for 7 days.

Response fields:

- The current model version of AnalyzeExpense.
- Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.
- The expenses detected by Amazon Textract.
- The current status of the text detection job.
- If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text-detection results.
- Returns if the detection job could not be completed. Contains explanation for what error occurred.
- A list of warnings that occurred during the text-detection operation for the document.
- The response's http status code.
GetDocumentTextDetection

Request fields:

- The maximum number of results to return per paginated call. The largest value you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.
- If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
- A unique identifier for the text detection job. The JobId is returned from StartDocumentTextDetection. A JobId value is only valid for 7 days.

Response fields:

- The results of the text-detection operation.
- An undocumented member.
- Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract video operation.
- The current status of the text detection job.
- If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text-detection results.
- Returns if the detection job could not be completed. Contains explanation for what error occurred.
- A list of warnings that occurred during the text-detection operation for the document.
- The response's http status code.
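Since GetDocumentTextDetection (like the other Get* operations above) pages its results with NextToken and MaxResults, callers usually loop until the token disappears. A minimal sketch; the newGetDocumentTextDetection constructor and the request/response lens names below do not appear in this listing and are assumed to follow the package's usual naming scheme.

    import           Control.Lens ((&), (.~), (^.))
    import           Data.Maybe (fromMaybe)
    import           Data.Text (Text)
    import qualified Amazonka
    import           Amazonka.Textract

    -- Collect every Block of a finished text-detection job by following
    -- NextToken until Amazon Textract stops returning one.  All request and
    -- response lens names here are assumed, not taken from the listing above.
    allDetectedBlocks :: Amazonka.Env -> Text -> IO [Block]
    allDetectedBlocks env jobId = go Nothing
      where
        go token = do
          let req = newGetDocumentTextDetection jobId
                      & getDocumentTextDetection_nextToken .~ token
          resp <- Amazonka.runResourceT (Amazonka.send env req)
          let blocks = fromMaybe [] (resp ^. getDocumentTextDetectionResponse_blocks)
          case resp ^. getDocumentTextDetectionResponse_nextToken of
            Nothing   -> pure blocks
            nextToken -> (blocks ++) <$> go nextToken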
GetDocumentAnalysis

Request fields:

- The maximum number of results to return per paginated call. The largest value that you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.
- If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
- A unique identifier for the text-detection job. The JobId is returned from StartDocumentAnalysis. A JobId value is only valid for 7 days.

Response fields:

- An undocumented member.
- The results of the text-analysis operation.
- Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract video operation.
- The current status of the text detection job.
- If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text detection results.
- Returns if the detection job could not be completed. Contains explanation for what error occurred.
- A list of warnings that occurred during the document-analysis operation.
- The response's http status code.
DetectDocumentText

Request field:

- The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can't pass image bytes. The document must be an image in JPEG or PNG format. If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.

Response fields:

- An array of Block objects that contain the text that's detected in the document.
- An undocumented member.
- Metadata about the document. It contains the number of pages that are detected in the document.
- The response's http status code.
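As a sketch of the synchronous path, the snippet below builds a Document that points at an existing S3 object and runs DetectDocumentText on it. The Document and S3Object constructors and lenses are the ones listed in this package; the bucket and key are placeholders, and newDetectDocumentText together with the response lens are assumed names following the generated scheme.

    {-# LANGUAGE OverloadedStrings #-}

    import           Control.Lens ((&), (?~), (^.))
    import           Data.Maybe (fromMaybe)
    import qualified Amazonka
    import           Amazonka.Textract

    -- Point Textract at an existing S3 object (bucket and key are
    -- placeholders) and run synchronous text detection on it.  The Document
    -- and S3Object constructors and lenses are the ones listed in this
    -- package; newDetectDocumentText and the response lens are assumed names.
    detectFromS3 :: IO [Block]
    detectFromS3 = do
      env <- Amazonka.newEnv Amazonka.discover
      let s3doc = newS3Object
                    & s3Object_bucket ?~ "my-input-bucket"
                    & s3Object_name   ?~ "scans/receipt-001.png"
          input = newDocument & document_s3Object ?~ s3doc
      resp <- Amazonka.runResourceT $
                Amazonka.send env (newDetectDocumentText input)
      pure (fromMaybe [] (resp ^. detectDocumentTextResponse_blocks))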
AnalyzeID

Request field:

- The document being passed to AnalyzeID.

Response fields:

- The version of the AnalyzeIdentity API being used to process documents.
- An undocumented member.
- The list of documents processed by AnalyzeID. Includes a number denoting their place in the list and the response structure for the document.
- The response's http status code.

AnalyzeExpense

Request field:

- An undocumented member (the input Document).

Response fields:

- An undocumented member.
- The expenses detected by Amazon Textract.
- The response's http status code.
AnalyzeDocument

Request fields:

- Sets the configuration for the human in the loop workflow for analyzing documents.
- Contains Queries and the alias for those Queries, as determined by the input.
- The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can't pass image bytes. The document must be an image in JPEG, PNG, PDF, or TIFF format. If you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.
- A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected form data. Add SIGNATURES to return the locations of detected signatures. To perform both forms and table analysis, add TABLES and FORMS to FeatureTypes. To detect signatures within form data and table data, add SIGNATURES to either TABLES or FORMS. All lines and words detected in the document are included in the response (including text that isn't related to the value of FeatureTypes).
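Given the request fields above, here is a sketch of asking for table and form analysis on an already-built Document. The Document type and the FeatureType pattern synonyms are listed in this package; newAnalyzeDocument, its argument order (document, then feature types), and the use of a plain list for the feature types are assumptions based on the generated naming scheme.

    import qualified Amazonka
    import           Amazonka.Textract

    -- Run table and form analysis on a prepared Document.  The constructor
    -- name, its argument order, and the list shape of the feature types are
    -- assumptions; the pattern synonyms come from this package.
    analyzeTablesAndForms :: Amazonka.Env -> Document -> IO AnalyzeDocumentResponse
    analyzeTablesAndForms env doc =
      Amazonka.runResourceT $
        Amazonka.send env
          (newAnalyzeDocument doc [FeatureType_TABLES, FeatureType_FORMS])

A QueriesConfig built from newQuery values could be attached through the corresponding request lens before sending, and SIGNATURES can be added to the feature list to pick up signature locations as described above.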
The AnalyzeDocument response fields:

- The version of the model used to analyze the document.
- The items that are detected and analyzed by AnalyzeDocument.
- Metadata about the analyzed document. An example is the number of pages.
- Shows the results of the human in the loop evaluation.
- The response's http status code.
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A ,amazonka-textract-2.0-6i5mJlqzS4xCepXxXFK6UK!Amazonka.Textract.Types.BlockType#Amazonka.Textract.Types.BoundingBox)Amazonka.Textract.Types.ContentClassifier)Amazonka.Textract.Types.DetectedSignature(Amazonka.Textract.Types.DocumentMetadata"Amazonka.Textract.Types.EntityType'Amazonka.Textract.Types.ExpenseCurrency,Amazonka.Textract.Types.ExpenseGroupProperty#Amazonka.Textract.Types.ExpenseType#Amazonka.Textract.Types.FeatureType1Amazonka.Textract.Types.HumanLoopActivationOutput/Amazonka.Textract.Types.HumanLoopDataAttributes'Amazonka.Textract.Types.HumanLoopConfig!Amazonka.Textract.Types.JobStatus+Amazonka.Textract.Types.NotificationChannel$Amazonka.Textract.Types.OutputConfigAmazonka.Textract.Types.Point Amazonka.Textract.Types.Geometry(Amazonka.Textract.Types.ExpenseDetection$Amazonka.Textract.Types.ExpenseField&Amazonka.Textract.Types.LineItemFields%Amazonka.Textract.Types.LineItemGroup"Amazonka.Textract.Types.Prediction*Amazonka.Textract.Types.PageClassificationAmazonka.Textract.Types.Query%Amazonka.Textract.Types.QueriesConfig(Amazonka.Textract.Types.RelationshipType$Amazonka.Textract.Types.Relationship Amazonka.Textract.Types.S3Object(Amazonka.Textract.Types.DocumentLocation Amazonka.Textract.Types.Document'Amazonka.Textract.Types.SelectionStatus(Amazonka.Textract.Types.LendingDetection$Amazonka.Textract.Types.LendingField*Amazonka.Textract.Types.SignatureDetection'Amazonka.Textract.Types.LendingDocument%Amazonka.Textract.Types.SplitDocument Amazonka.Textract.Types.TextTypeAmazonka.Textract.Types.Block'Amazonka.Textract.Types.ExpenseDocument+Amazonka.Textract.Types.UndetectedSignature%Amazonka.Textract.Types.DocumentGroup&Amazonka.Textract.Types.LendingSummary!Amazonka.Textract.Types.ValueType'Amazonka.Textract.Types.NormalizedValue+Amazonka.Textract.Types.AnalyzeIDDetections-Amazonka.Textract.Types.IdentityDocumentField(Amazonka.Textract.Types.IdentityDocument"Amazonka.Textract.Types.Extraction%Amazonka.Textract.Types.LendingResultAmazonka.Textract.Types.WarningAmazonka.Textract.Types&Amazonka.Textract.StartLendingAnalysis&Amazonka.Textract.StartExpenseAnalysis,Amazonka.Textract.StartDocumentTextDetection'Amazonka.Textract.StartDocumentAnalysis+Amazonka.Textract.GetLendingAnalysisSummary$Amazonka.Textract.GetLendingAnalysis$Amazonka.Textract.GetExpenseAnalysis*Amazonka.Textract.GetDocumentTextDetection%Amazonka.Textract.GetDocumentAnalysis$Amazonka.Textract.DetectDocumentTextAmazonka.Textract.AnalyzeID Amazonka.Textract.AnalyzeExpense!Amazonka.Textract.AnalyzeDocumentAmazonka.Textract.LensAmazonka.Textract.WaitersAmazonka.Textract BlockType BlockType' fromBlockTypeBlockType_WORDBlockType_TITLEBlockType_TABLEBlockType_SIGNATUREBlockType_SELECTION_ELEMENTBlockType_QUERY_RESULTBlockType_QUERYBlockType_PAGEBlockType_MERGED_CELLBlockType_LINEBlockType_KEY_VALUE_SETBlockType_CELL$fShowBlockType$fReadBlockType $fEqBlockType$fOrdBlockType$fGenericBlockType$fHashableBlockType$fNFDataBlockType$fFromTextBlockType$fToTextBlockType$fToByteStringBlockType$fToLogBlockType$fToHeaderBlockType$fToQueryBlockType$fFromJSONBlockType$fFromJSONKeyBlockType$fToJSONBlockType$fToJSONKeyBlockType$fFromXMLBlockType$fToXMLBlockType BoundingBox 
BoundingBox'$sel:height:BoundingBox'$sel:left:BoundingBox'$sel:top:BoundingBox'$sel:width:BoundingBox'newBoundingBoxboundingBox_heightboundingBox_leftboundingBox_topboundingBox_width$fNFDataBoundingBox$fHashableBoundingBox$fFromJSONBoundingBox$fEqBoundingBox$fReadBoundingBox$fShowBoundingBox$fGenericBoundingBoxContentClassifierContentClassifier'fromContentClassifier9ContentClassifier_FreeOfPersonallyIdentifiableInformation$ContentClassifier_FreeOfAdultContent$fShowContentClassifier$fReadContentClassifier$fEqContentClassifier$fOrdContentClassifier$fGenericContentClassifier$fHashableContentClassifier$fNFDataContentClassifier$fFromTextContentClassifier$fToTextContentClassifier$fToByteStringContentClassifier$fToLogContentClassifier$fToHeaderContentClassifier$fToQueryContentClassifier$fFromJSONContentClassifier$fFromJSONKeyContentClassifier$fToJSONContentClassifier$fToJSONKeyContentClassifier$fFromXMLContentClassifier$fToXMLContentClassifierDetectedSignatureDetectedSignature'$sel:page:DetectedSignature'newDetectedSignaturedetectedSignature_page$fNFDataDetectedSignature$fHashableDetectedSignature$fFromJSONDetectedSignature$fEqDetectedSignature$fReadDetectedSignature$fShowDetectedSignature$fGenericDetectedSignatureDocumentMetadataDocumentMetadata'$sel:pages:DocumentMetadata'newDocumentMetadatadocumentMetadata_pages$fNFDataDocumentMetadata$fHashableDocumentMetadata$fFromJSONDocumentMetadata$fEqDocumentMetadata$fReadDocumentMetadata$fShowDocumentMetadata$fGenericDocumentMetadata EntityType EntityType'fromEntityTypeEntityType_VALUEEntityType_KEYEntityType_COLUMN_HEADER$fShowEntityType$fReadEntityType$fEqEntityType$fOrdEntityType$fGenericEntityType$fHashableEntityType$fNFDataEntityType$fFromTextEntityType$fToTextEntityType$fToByteStringEntityType$fToLogEntityType$fToHeaderEntityType$fToQueryEntityType$fFromJSONEntityType$fFromJSONKeyEntityType$fToJSONEntityType$fToJSONKeyEntityType$fFromXMLEntityType$fToXMLEntityTypeExpenseCurrencyExpenseCurrency'$sel:code:ExpenseCurrency' $sel:confidence:ExpenseCurrency'newExpenseCurrencyexpenseCurrency_codeexpenseCurrency_confidence$fNFDataExpenseCurrency$fHashableExpenseCurrency$fFromJSONExpenseCurrency$fEqExpenseCurrency$fReadExpenseCurrency$fShowExpenseCurrency$fGenericExpenseCurrencyExpenseGroupPropertyExpenseGroupProperty'$sel:id:ExpenseGroupProperty' $sel:types:ExpenseGroupProperty'newExpenseGroupPropertyexpenseGroupProperty_idexpenseGroupProperty_types$fNFDataExpenseGroupProperty$fHashableExpenseGroupProperty$fFromJSONExpenseGroupProperty$fEqExpenseGroupProperty$fReadExpenseGroupProperty$fShowExpenseGroupProperty$fGenericExpenseGroupProperty ExpenseType ExpenseType'$sel:confidence:ExpenseType'$sel:text:ExpenseType'newExpenseTypeexpenseType_confidenceexpenseType_text$fNFDataExpenseType$fHashableExpenseType$fFromJSONExpenseType$fEqExpenseType$fReadExpenseType$fShowExpenseType$fGenericExpenseType FeatureType 
FeatureType'fromFeatureTypeFeatureType_TABLESFeatureType_SIGNATURESFeatureType_QUERIESFeatureType_FORMS$fShowFeatureType$fReadFeatureType$fEqFeatureType$fOrdFeatureType$fGenericFeatureType$fHashableFeatureType$fNFDataFeatureType$fFromTextFeatureType$fToTextFeatureType$fToByteStringFeatureType$fToLogFeatureType$fToHeaderFeatureType$fToQueryFeatureType$fFromJSONFeatureType$fFromJSONKeyFeatureType$fToJSONFeatureType$fToJSONKeyFeatureType$fFromXMLFeatureType$fToXMLFeatureTypeHumanLoopActivationOutputHumanLoopActivationOutput'$sel:humanLoopActivationConditionsEvaluationResults:HumanLoopActivationOutput':$sel:humanLoopActivationReasons:HumanLoopActivationOutput',$sel:humanLoopArn:HumanLoopActivationOutput'newHumanLoopActivationOutputhumanLoopActivationOutput_humanLoopActivationConditionsEvaluationResults4humanLoopActivationOutput_humanLoopActivationReasons&humanLoopActivationOutput_humanLoopArn!$fNFDataHumanLoopActivationOutput#$fHashableHumanLoopActivationOutput#$fFromJSONHumanLoopActivationOutput$fEqHumanLoopActivationOutput$fReadHumanLoopActivationOutput$fShowHumanLoopActivationOutput"$fGenericHumanLoopActivationOutputHumanLoopDataAttributesHumanLoopDataAttributes'0$sel:contentClassifiers:HumanLoopDataAttributes'newHumanLoopDataAttributes*humanLoopDataAttributes_contentClassifiers$fToJSONHumanLoopDataAttributes$fNFDataHumanLoopDataAttributes!$fHashableHumanLoopDataAttributes$fEqHumanLoopDataAttributes$fReadHumanLoopDataAttributes$fShowHumanLoopDataAttributes $fGenericHumanLoopDataAttributesHumanLoopConfigHumanLoopConfig'$$sel:dataAttributes:HumanLoopConfig'#$sel:humanLoopName:HumanLoopConfig''$sel:flowDefinitionArn:HumanLoopConfig'newHumanLoopConfighumanLoopConfig_dataAttributeshumanLoopConfig_humanLoopName!humanLoopConfig_flowDefinitionArn$fToJSONHumanLoopConfig$fNFDataHumanLoopConfig$fHashableHumanLoopConfig$fEqHumanLoopConfig$fReadHumanLoopConfig$fShowHumanLoopConfig$fGenericHumanLoopConfig JobStatus JobStatus' fromJobStatusJobStatus_SUCCEEDEDJobStatus_PARTIAL_SUCCESSJobStatus_IN_PROGRESSJobStatus_FAILED$fShowJobStatus$fReadJobStatus $fEqJobStatus$fOrdJobStatus$fGenericJobStatus$fHashableJobStatus$fNFDataJobStatus$fFromTextJobStatus$fToTextJobStatus$fToByteStringJobStatus$fToLogJobStatus$fToHeaderJobStatus$fToQueryJobStatus$fFromJSONJobStatus$fFromJSONKeyJobStatus$fToJSONJobStatus$fToJSONKeyJobStatus$fFromXMLJobStatus$fToXMLJobStatusNotificationChannelNotificationChannel'%$sel:sNSTopicArn:NotificationChannel'!$sel:roleArn:NotificationChannel'newNotificationChannelnotificationChannel_sNSTopicArnnotificationChannel_roleArn$fToJSONNotificationChannel$fNFDataNotificationChannel$fHashableNotificationChannel$fEqNotificationChannel$fReadNotificationChannel$fShowNotificationChannel$fGenericNotificationChannel OutputConfig OutputConfig'$sel:s3Prefix:OutputConfig'$sel:s3Bucket:OutputConfig'newOutputConfigoutputConfig_s3PrefixoutputConfig_s3Bucket$fToJSONOutputConfig$fNFDataOutputConfig$fHashableOutputConfig$fEqOutputConfig$fReadOutputConfig$fShowOutputConfig$fGenericOutputConfigPointPoint' $sel:x:Point' $sel:y:Point'newPointpoint_xpoint_y $fNFDataPoint$fHashablePoint$fFromJSONPoint $fEqPoint $fReadPoint $fShowPoint$fGenericPointGeometry Geometry'$sel:boundingBox:Geometry'$sel:polygon:Geometry' newGeometrygeometry_boundingBoxgeometry_polygon$fNFDataGeometry$fHashableGeometry$fFromJSONGeometry 
$fEqGeometry$fReadGeometry$fShowGeometry$fGenericGeometryExpenseDetectionExpenseDetection'!$sel:confidence:ExpenseDetection'$sel:geometry:ExpenseDetection'$sel:text:ExpenseDetection'newExpenseDetectionexpenseDetection_confidenceexpenseDetection_geometryexpenseDetection_text$fNFDataExpenseDetection$fHashableExpenseDetection$fFromJSONExpenseDetection$fEqExpenseDetection$fReadExpenseDetection$fShowExpenseDetection$fGenericExpenseDetection ExpenseField ExpenseField'$sel:currency:ExpenseField'"$sel:groupProperties:ExpenseField'!$sel:labelDetection:ExpenseField'$sel:pageNumber:ExpenseField'$sel:type':ExpenseField'!$sel:valueDetection:ExpenseField'newExpenseFieldexpenseField_currencyexpenseField_groupPropertiesexpenseField_labelDetectionexpenseField_pageNumberexpenseField_typeexpenseField_valueDetection$fNFDataExpenseField$fHashableExpenseField$fFromJSONExpenseField$fEqExpenseField$fReadExpenseField$fShowExpenseField$fGenericExpenseFieldLineItemFieldsLineItemFields'*$sel:lineItemExpenseFields:LineItemFields'newLineItemFields$lineItemFields_lineItemExpenseFields$fNFDataLineItemFields$fHashableLineItemFields$fFromJSONLineItemFields$fEqLineItemFields$fReadLineItemFields$fShowLineItemFields$fGenericLineItemFields LineItemGroupLineItemGroup'&$sel:lineItemGroupIndex:LineItemGroup'$sel:lineItems:LineItemGroup'newLineItemGroup lineItemGroup_lineItemGroupIndexlineItemGroup_lineItems$fNFDataLineItemGroup$fHashableLineItemGroup$fFromJSONLineItemGroup$fEqLineItemGroup$fReadLineItemGroup$fShowLineItemGroup$fGenericLineItemGroup Prediction Prediction'$sel:confidence:Prediction'$sel:value:Prediction' newPredictionprediction_confidenceprediction_value$fNFDataPrediction$fHashablePrediction$fFromJSONPrediction$fEqPrediction$fReadPrediction$fShowPrediction$fGenericPredictionPageClassificationPageClassification'!$sel:pageType:PageClassification'#$sel:pageNumber:PageClassification'newPageClassificationpageClassification_pageTypepageClassification_pageNumber$fNFDataPageClassification$fHashablePageClassification$fFromJSONPageClassification$fEqPageClassification$fReadPageClassification$fShowPageClassification$fGenericPageClassificationQueryQuery'$sel:alias:Query'$sel:pages:Query'$sel:text:Query'newQuery query_alias query_pages query_text $fToJSONQuery $fNFDataQuery$fHashableQuery$fFromJSONQuery $fEqQuery $fReadQuery $fShowQuery$fGenericQuery QueriesConfigQueriesConfig'$sel:queries:QueriesConfig'newQueriesConfigqueriesConfig_queries$fToJSONQueriesConfig$fNFDataQueriesConfig$fHashableQueriesConfig$fEqQueriesConfig$fReadQueriesConfig$fShowQueriesConfig$fGenericQueriesConfigRelationshipTypeRelationshipType'fromRelationshipTypeRelationshipType_VALUERelationshipType_TITLERelationshipType_MERGED_CELL!RelationshipType_COMPLEX_FEATURESRelationshipType_CHILDRelationshipType_ANSWER$fShowRelationshipType$fReadRelationshipType$fEqRelationshipType$fOrdRelationshipType$fGenericRelationshipType$fHashableRelationshipType$fNFDataRelationshipType$fFromTextRelationshipType$fToTextRelationshipType$fToByteStringRelationshipType$fToLogRelationshipType$fToHeaderRelationshipType$fToQueryRelationshipType$fFromJSONRelationshipType$fFromJSONKeyRelationshipType$fToJSONRelationshipType$fToJSONKeyRelationshipType$fFromXMLRelationshipType$fToXMLRelationshipType Relationship Relationship'$sel:ids:Relationship'$sel:type':Relationship'newRelationshiprelationship_idsrelationship_type$fNFDataRelationship$fHashableRelationship$fFromJSONRelationship$fEqRelationship$fReadRelationship$fShowRelationship$fGenericRelationshipS3Object 
S3Object (S3Object'): fields bucket, name, version; smart constructor newS3Object; lenses s3Object_bucket, s3Object_name, s3Object_version. Instances: ToJSON, NFData, Hashable, Eq, Read, Show, Generic.

DocumentLocation (DocumentLocation'): field s3Object; smart constructor newDocumentLocation; lens documentLocation_s3Object. Instances: ToJSON, NFData, Hashable, Eq, Read, Show, Generic.

Document (Document'): fields bytes, s3Object; smart constructor newDocument; lenses document_bytes, document_s3Object. Instances: ToJSON, NFData, Hashable, Eq, Read, Show, Generic.

SelectionStatus (SelectionStatus'): enumeration with patterns SelectionStatus_SELECTED, SelectionStatus_NOT_SELECTED and conversion fromSelectionStatus. Instances: the standard enum set.

LendingDetection (LendingDetection'): fields confidence, geometry, selectionStatus, text; smart constructor newLendingDetection; lenses lendingDetection_confidence, lendingDetection_geometry, lendingDetection_selectionStatus, lendingDetection_text. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

LendingField (LendingField'): fields keyDetection, type', valueDetections; smart constructor newLendingField; lenses lendingField_keyDetection, lendingField_type, lendingField_valueDetections. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

SignatureDetection (SignatureDetection'): fields confidence, geometry; smart constructor newSignatureDetection; lenses signatureDetection_confidence, signatureDetection_geometry. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

LendingDocument (LendingDocument'): fields lendingFields, signatureDetections; smart constructor newLendingDocument; lenses lendingDocument_lendingFields, lendingDocument_signatureDetections. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

SplitDocument (SplitDocument'): fields index, pages; smart constructor newSplitDocument; lenses splitDocument_index, splitDocument_pages. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

TextType (TextType'): enumeration with patterns TextType_PRINTED, TextType_HANDWRITING and conversion fromTextType. Instances: the standard enum set.
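A sketch of pointing an asynchronous request at a document already stored in S3. Every field of S3Object and DocumentLocation is optional, so all of them are set through lenses; the bucket and key names are hypothetical.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Textract
import Control.Lens ((&), (?~))

-- Hypothetical bucket/key; set through lenses because all fields are optional.
inputLocation :: DocumentLocation
inputLocation =
  newDocumentLocation
    & documentLocation_s3Object
        ?~ ( newS3Object
               & s3Object_bucket ?~ "my-input-bucket"
               & s3Object_name   ?~ "invoices/2023-05.pdf"
           )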
Block (Block'): fields blockType, columnIndex, columnSpan, confidence, entityTypes, geometry, id, page, query, relationships, rowIndex, rowSpan, selectionStatus, text, textType; smart constructor newBlock; lenses block_blockType, block_columnIndex, block_columnSpan, block_confidence, block_entityTypes, block_geometry, block_id, block_page, block_query, block_relationships, block_rowIndex, block_rowSpan, block_selectionStatus, block_text, block_textType. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

ExpenseDocument (ExpenseDocument'): fields blocks, expenseIndex, lineItemGroups, summaryFields; smart constructor newExpenseDocument; lenses expenseDocument_blocks, expenseDocument_expenseIndex, expenseDocument_lineItemGroups, expenseDocument_summaryFields. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

UndetectedSignature (UndetectedSignature'): field page; smart constructor newUndetectedSignature; lens undetectedSignature_page. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

DocumentGroup (DocumentGroup'): fields detectedSignatures, splitDocuments, type', undetectedSignatures; smart constructor newDocumentGroup; lenses documentGroup_detectedSignatures, documentGroup_splitDocuments, documentGroup_type, documentGroup_undetectedSignatures. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

LendingSummary (LendingSummary'): fields documentGroups, undetectedDocumentTypes; smart constructor newLendingSummary; lenses lendingSummary_documentGroups, lendingSummary_undetectedDocumentTypes. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

ValueType (ValueType'): enumeration with pattern ValueType_DATE and conversion fromValueType. Instances: the standard enum set.
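A small sketch of reading response Blocks with the lenses above: collecting the text of blocks whose text type is handwriting. Both block_textType and block_text are optional in the response, hence the Maybe handling; handwrittenText is a hypothetical helper name.

import Amazonka.Textract
import Control.Lens ((^.))
import Data.Maybe (mapMaybe)
import Data.Text (Text)

-- Keep only the text of handwritten blocks; drop blocks with no text.
handwrittenText :: [Block] -> [Text]
handwrittenText =
  mapMaybe $ \b ->
    case b ^. block_textType of
      Just TextType_HANDWRITING -> b ^. block_text
      _                         -> Nothing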
NormalizedValue (NormalizedValue'): fields value, valueType; smart constructor newNormalizedValue; lenses normalizedValue_value, normalizedValue_valueType. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

AnalyzeIDDetections (AnalyzeIDDetections'): fields confidence, normalizedValue, text; smart constructor newAnalyzeIDDetections; lenses analyzeIDDetections_confidence, analyzeIDDetections_normalizedValue, analyzeIDDetections_text. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

IdentityDocumentField (IdentityDocumentField'): fields type', valueDetection; smart constructor newIdentityDocumentField; lenses identityDocumentField_type, identityDocumentField_valueDetection. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

IdentityDocument (IdentityDocument'): fields blocks, documentIndex, identityDocumentFields; smart constructor newIdentityDocument; lenses identityDocument_blocks, identityDocument_documentIndex, identityDocument_identityDocumentFields. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

Extraction (Extraction'): fields expenseDocument, identityDocument, lendingDocument; smart constructor newExtraction; lenses extraction_expenseDocument, extraction_identityDocument, extraction_lendingDocument. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

LendingResult (LendingResult'): fields extractions, page, pageClassification; smart constructor newLendingResult; lenses lendingResult_extractions, lendingResult_page, lendingResult_pageClassification. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

Warning (Warning'): fields errorCode, pages; smart constructor newWarning; lenses warning_errorCode, warning_pages. Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic.

defaultService: the service configuration for Amazon Textract.

Error matchers: _AccessDeniedException, _BadDocumentException, _DocumentTooLargeException, _HumanLoopQuotaExceededException, _IdempotentParameterMismatchException, _InternalServerError, _InvalidJobIdException, _InvalidKMSKeyException, _InvalidParameterException, _InvalidS3ObjectException, _LimitExceededException, _ProvisionedThroughputExceededException, _ThrottlingException, _UnsupportedDocumentException.
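A sketch of pairing each identity-document field's type with its detected value text. The type and valueDetection fields are optional; analyzeIDDetections_text is treated as the required text of a detection, which the constructor-argument ordering above suggests. fieldValues is a hypothetical helper name.

import Amazonka.Textract
import Control.Lens ((^.))
import Data.Maybe (mapMaybe)
import Data.Text (Text)

-- Pair the field type's text with the detected value's text, skipping
-- entries where either detection is absent.
fieldValues :: IdentityDocument -> [(Text, Text)]
fieldValues doc =
  mapMaybe
    ( \f -> do
        ty  <- f ^. identityDocumentField_type
        val <- f ^. identityDocumentField_valueDetection
        pure (ty ^. analyzeIDDetections_text, val ^. analyzeIDDetections_text)
    )
    (concat (doc ^. identityDocument_identityDocumentFields))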
StartLendingAnalysis (StartLendingAnalysis') / StartLendingAnalysisResponse (StartLendingAnalysisResponse'): request fields clientRequestToken, jobTag, kmsKeyId, notificationChannel, outputConfig, documentLocation; response fields jobId, httpStatus; smart constructors newStartLendingAnalysis, newStartLendingAnalysisResponse; lenses startLendingAnalysis_clientRequestToken, startLendingAnalysis_jobTag, startLendingAnalysis_kmsKeyId, startLendingAnalysis_notificationChannel, startLendingAnalysis_outputConfig, startLendingAnalysis_documentLocation, startLendingAnalysisResponse_jobId, startLendingAnalysisResponse_httpStatus. Request instances: ToQuery, ToPath, ToJSON, ToHeaders, NFData, Hashable, AWSRequest, Eq, Read, Show, Generic; response instances: NFData, Eq, Read, Show, Generic.

StartExpenseAnalysis (StartExpenseAnalysis') / StartExpenseAnalysisResponse (StartExpenseAnalysisResponse'): request fields clientRequestToken, jobTag, kmsKeyId, notificationChannel, outputConfig, documentLocation; response fields jobId, httpStatus; smart constructors newStartExpenseAnalysis, newStartExpenseAnalysisResponse; lenses startExpenseAnalysis_clientRequestToken, startExpenseAnalysis_jobTag, startExpenseAnalysis_kmsKeyId, startExpenseAnalysis_notificationChannel, startExpenseAnalysis_outputConfig, startExpenseAnalysis_documentLocation, startExpenseAnalysisResponse_jobId, startExpenseAnalysisResponse_httpStatus. Request instances: ToQuery, ToPath, ToJSON, ToHeaders, NFData, Hashable, AWSRequest, Eq, Read, Show, Generic; response instances: NFData, Eq, Read, Show, Generic.
StartDocumentTextDetection (StartDocumentTextDetection') / StartDocumentTextDetectionResponse (StartDocumentTextDetectionResponse'): request fields clientRequestToken, jobTag, kmsKeyId, notificationChannel, outputConfig, documentLocation; response fields jobId, httpStatus; smart constructors newStartDocumentTextDetection, newStartDocumentTextDetectionResponse; lenses startDocumentTextDetection_clientRequestToken, startDocumentTextDetection_jobTag, startDocumentTextDetection_kmsKeyId, startDocumentTextDetection_notificationChannel, startDocumentTextDetection_outputConfig, startDocumentTextDetection_documentLocation, startDocumentTextDetectionResponse_jobId, startDocumentTextDetectionResponse_httpStatus. Request instances: ToQuery, ToPath, ToJSON, ToHeaders, NFData, Hashable, AWSRequest, Eq, Read, Show, Generic; response instances: NFData, Eq, Read, Show, Generic.

StartDocumentAnalysis (StartDocumentAnalysis') / StartDocumentAnalysisResponse (StartDocumentAnalysisResponse'): request fields clientRequestToken, jobTag, kmsKeyId, notificationChannel, outputConfig, queriesConfig, documentLocation, featureTypes; response fields jobId, httpStatus; smart constructors newStartDocumentAnalysis, newStartDocumentAnalysisResponse; lenses startDocumentAnalysis_clientRequestToken, startDocumentAnalysis_jobTag, startDocumentAnalysis_kmsKeyId, startDocumentAnalysis_notificationChannel, startDocumentAnalysis_outputConfig, startDocumentAnalysis_queriesConfig, startDocumentAnalysis_documentLocation, startDocumentAnalysis_featureTypes, startDocumentAnalysisResponse_jobId, startDocumentAnalysisResponse_httpStatus. Request instances: ToQuery, ToPath, ToJSON, ToHeaders, NFData, Hashable, AWSRequest, Eq, Read, Show, Generic; response instances: NFData, Eq, Read, Show, Generic.
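A sketch of submitting one of the asynchronous Start* operations above (StartDocumentAnalysis here; StartLendingAnalysis, StartExpenseAnalysis, and StartDocumentTextDetection follow the same shape). Bucket and key names are hypothetical, and it is assumed, per the usual amazonka conventions, that the required DocumentLocation is the only positional argument of newStartDocumentAnalysis while the required featureTypes list defaults to mempty and is filled in through its lens.

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import Amazonka.Textract
import Control.Lens ((&), (.~), (?~), (^.))

-- Start an analysis job for a document already in S3 and print the JobId.
main :: IO ()
main = do
  env <- Amazonka.newEnv Amazonka.discover        -- credentials from the environment
  let location =
        newDocumentLocation
          & documentLocation_s3Object
              ?~ ( newS3Object
                     & s3Object_bucket ?~ "my-input-bucket"   -- hypothetical bucket
                     & s3Object_name   ?~ "forms/w2.pdf"      -- hypothetical key
                 )
      request =
        newStartDocumentAnalysis location         -- assumed signature: DocumentLocation only
          & startDocumentAnalysis_featureTypes
              .~ [FeatureType_FORMS, FeatureType_TABLES]
  resp <- Amazonka.runResourceT (Amazonka.send env request)
  print (resp ^. startDocumentAnalysisResponse_jobId)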
GetLendingAnalysisSummary (GetLendingAnalysisSummary') / GetLendingAnalysisSummaryResponse (GetLendingAnalysisSummaryResponse'): request field jobId; response fields analyzeLendingModelVersion, documentMetadata, jobStatus, statusMessage, summary, warnings, httpStatus; smart constructors newGetLendingAnalysisSummary, newGetLendingAnalysisSummaryResponse; lens getLendingAnalysisSummary_jobId.
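A sketch of polling the summary of a lending-analysis job with the JobId returned by StartLendingAnalysis. checkLendingJob is a hypothetical helper, and the response lens used below is assumed to follow the package's uniform naming, since the index above is truncated before the response lenses are listed.

import qualified Amazonka
import Amazonka.Textract
import Control.Lens ((^.))
import Data.Text (Text)

-- Fetch and print the current JobStatus of a lending-analysis job.
checkLendingJob :: Amazonka.Env -> Text -> IO ()
checkLendingJob env jobId = do
  resp <-
    Amazonka.runResourceT
      (Amazonka.send env (newGetLendingAnalysisSummary jobId))
  -- Lens name assumed from the uniform <response>_<field> naming scheme.
  print (resp ^. getLendingAnalysisSummaryResponse_jobStatus)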