amazonka-dms
Copyright: (c) 2013-2023 Brendan Hay
License: Mozilla Public License, v. 2.0.
Maintainer: Brendan Hay
Stability: auto-generated
Portability: non-portable (GHC extensions)
Safety: Safe-Inferred

AccountQuota

Describes a quota for an Amazon Web Services account, for example the number of replication instances allowed.

See: smart constructor.

Create a value of AccountQuota with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The following record fields are available, with corresponding lenses provided for backwards compatibility:

- The name of the DMS quota for this Amazon Web Services account.
- The maximum allowed value for the quota.
- The amount currently used toward the quota maximum.
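As a sketch of how a record like this is typically built and queried through amazonka's generated lenses: the module re-export and the `accountQuota_*` lens names below follow the package's generated-code convention and are assumptions to check against the installed version.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka.DMS as DMS  -- assumed re-exporting module
import Control.Lens ((&), (?~), (^.))

-- All fields are optional, so the smart constructor takes no arguments
-- and every field starts out as Nothing.
quota :: DMS.AccountQuota
quota =
  DMS.newAccountQuota
    & DMS.accountQuota_accountQuotaName ?~ "ReplicationInstances"  -- assumed lens name
    & DMS.accountQuota_max ?~ 20
    & DMS.accountQuota_used ?~ 3

-- Headroom left under the quota, when both fields are populated.
headroom :: Maybe Integer
headroom =
  (-) <$> (quota ^. DMS.accountQuota_max)
      <*> (quota ^. DMS.accountQuota_used)
```

The same pattern applies to the other record types in this section; generic-lens or optics field accessors can stand in for the generated lenses.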
AvailabilityZone

The name of an Availability Zone for use during database migration. AvailabilityZone is an optional parameter to the CreateReplicationInstance operation (https://docs.aws.amazon.com/dms/latest/APIReference/API_CreateReplicationInstance.html), and its value relates to the Amazon Web Services Region of an endpoint. For example, the availability zone of an endpoint in the us-east-1 region might be us-east-1a, us-east-1b, us-east-1c, or us-east-1d.

See: smart constructor.

Create a value of AvailabilityZone with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- The name of the Availability Zone.
Certificate

The SSL certificate that can be used to encrypt connections between the endpoints and the replication instance.

See: smart constructor.

Create a value of Certificate with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- The Amazon Resource Name (ARN) for the certificate.
- The date that the certificate was created.
- A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
- The owner of the certificate.
- The contents of a .pem file, which contains an X.509 certificate.
- The location of an imported Oracle Wallet certificate for use with SSL. Example: filebase64("${path.root}/rds-ca-2019-root.sso"). Note: this lens automatically encodes and decodes Base64 data. The underlying isomorphism encodes to the Base64 representation during serialisation and decodes from it during deserialisation; the lens accepts and returns only raw, unencoded data.
- The key length of the cryptographic algorithm being used.
- The signing algorithm for the certificate.
- The beginning date that the certificate is valid.
- The final date that the certificate is valid.
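The Base64 behaviour noted above can be illustrated with a small sketch; the `certificate_*` lens names are assumptions following the generated naming convention, and the certificate body is a placeholder.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka.DMS as DMS  -- assumed re-exporting module
import Control.Lens ((&), (?~))

-- The certificate body is supplied as raw PEM text; the generated lens
-- handles the Base64 encoding required on the wire, so no manual
-- encoding or decoding is needed on the caller's side.
cert :: DMS.Certificate
cert =
  DMS.newCertificate
    & DMS.certificate_certificateIdentifier ?~ "my-replication-cert"  -- assumed lens name
    & DMS.certificate_certificatePem
        ?~ "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
```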
Fleet Advisor collector (short info)

Briefly describes a Fleet Advisor collector.

See: smart constructor.

Record fields (with corresponding lenses for backwards compatibility):

- The name of the Fleet Advisor collector.
- The reference ID of the Fleet Advisor collector.

Fleet Advisor collector health check

Describes the last Fleet Advisor collector health check.

See: smart constructor.

Record fields (with corresponding lenses for backwards compatibility):

- The status of the Fleet Advisor collector.

Fleet Advisor inventory database (short info)

Record fields (with corresponding lenses for backwards compatibility):

- The database engine of a database in a Fleet Advisor collector inventory, for example PostgreSQL.
- The ID of a database in a Fleet Advisor collector inventory.
- The IP address of a database in a Fleet Advisor collector inventory.
- The name of a database in a Fleet Advisor collector inventory.
DmsTransferSettings

The settings in JSON format for the DMS Transfer type source endpoint.

See: smart constructor.

Create a value of DmsTransferSettings with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- The name of the S3 bucket to use.
- The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the iam:PassRole action.

DynamoDbSettings

Provides the Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role used to define an Amazon DynamoDB target endpoint.

See: smart constructor.

Record fields (with corresponding lenses for backwards compatibility):

- The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action.

ElasticsearchSettings

Provides information that defines an OpenSearch endpoint.

See: smart constructor.

Create a value of ElasticsearchSettings with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- The maximum number of seconds for which DMS retries failed API requests to the OpenSearch cluster.
- The maximum percentage of records that can fail to be written before a full load operation stops. To avoid early failure, this counter is only effective after 1000 records are transferred. OpenSearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If transfer of all records fails in the last 10 minutes, the full load operation stops.
- Set this option to true for DMS to migrate documentation using the documentation type _doc. OpenSearch and an Elasticsearch cluster only support the _doc documentation type in versions 7.x and later. The default value is false.
- The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action.
- The endpoint for the OpenSearch cluster. DMS uses HTTPS if a transport protocol (http/https) is not specified.
EndpointSetting

Endpoint settings.

See: smart constructor.

Create a value of EndpointSetting with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- The relevance or validity of an endpoint setting for an engine name and its endpoint type.
- The default value of the endpoint setting if no value is specified using CreateEndpoint or ModifyEndpoint.
- Enumerated values to use for this endpoint.
- The maximum value of an endpoint setting that is of type int.
- The minimum value of an endpoint setting that is of type int.
- The name that you want to give the endpoint settings.
- A value that marks this endpoint setting as sensitive.
- The type of endpoint. Valid values are source and target.
- The unit of measure for this endpoint setting.
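For settings records with required fields, the smart constructor takes those fields as positional arguments. Here is a sketch for the OpenSearch settings described above; the constructor's argument order, the lens names, and the ARN and endpoint values are all assumptions based on the generated-code pattern.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka.DMS as DMS  -- assumed re-exporting module
import Control.Lens ((&), (?~))

-- Required fields (assumed: service-access role ARN, then endpoint URI)
-- are positional; the remaining fields are optional and set via lenses.
esSettings :: DMS.ElasticsearchSettings
esSettings =
  DMS.newElasticsearchSettings
    "arn:aws:iam::123456789012:role/dms-opensearch-access"  -- hypothetical role ARN
    "https://search-example.us-east-1.es.amazonaws.com"     -- hypothetical endpoint
    & DMS.elasticsearchSettings_errorRetryDuration ?~ 300       -- assumed lens name
    & DMS.elasticsearchSettings_fullLoadErrorPercentage ?~ 10   -- assumed lens name
```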
EventCategoryGroup

Lists categories of events subscribed to, and generated by, the applicable DMS resource type. This data type appears in response to the DescribeEventCategories action (https://docs.aws.amazon.com/dms/latest/APIReference/API_EventCategoryGroup.html).

See: smart constructor.

Create a value of EventCategoryGroup with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- A list of event categories from a source type that you've chosen.
- The type of DMS resource that generates events. Valid values: replication-instance | replication-server | security-group | replication-task
EventSubscription

Describes an event notification subscription created by the CreateEventSubscription operation.

See: smart constructor.

Create a value of EventSubscription with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- The DMS event notification subscription Id.
- The Amazon Web Services customer account associated with the DMS event notification subscription.
- Boolean value that indicates whether the event subscription is enabled.
- A list of event categories.
- The topic ARN of the DMS event notification subscription.
- A list of source Ids for the event subscription.
- The type of DMS resource that generates events. Valid values: replication-instance | replication-server | security-group | replication-task
- The status of the DMS event notification subscription. Constraints: can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist. The status "no-permission" indicates that DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
- The time the DMS event notification subscription was created.

Filter

Identifies the name and value of a filter object. This filter is used to limit the number and type of DMS objects that are returned for a particular Describe* call or similar operation. Filters are used as an optional parameter for certain API operations.

See: smart constructor.

Create a value of Filter with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- The name of the filter as specified for a Describe* or similar operation.
- The filter value, which can specify one or more values used to narrow the returned results.
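A sketch of how such a filter is typically supplied to a Describe* operation; the `newFilter` argument order, the `describeEndpoints_filters` lens name, and the filter keys are assumptions based on the generated-code pattern.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka.DMS as DMS  -- assumed re-exporting module
import Control.Lens ((&), (?~))

-- Limit a DescribeEndpoints call to PostgreSQL source endpoints.
request :: DMS.DescribeEndpoints
request =
  DMS.newDescribeEndpoints
    & DMS.describeEndpoints_filters ?~        -- assumed lens name
        [ DMS.newFilter "engine-name" ["postgres"]   -- assumed: name, then values
        , DMS.newFilter "endpoint-type" ["source"]
        ]
```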
Fleet Advisor LSA analysis

Describes a large-scale assessment (LSA) analysis run by a Fleet Advisor collector.

See: smart constructor.

Record fields (with corresponding lenses for backwards compatibility):

- The ID of an LSA analysis run by a Fleet Advisor collector.
- The status of an LSA analysis run by a Fleet Advisor collector.

Fleet Advisor schema object

Describes a schema object in a Fleet Advisor collector inventory.

See: smart constructor.

Create a value of this type with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- The number of lines of code in a schema object in a Fleet Advisor collector inventory.
- The size level of the code in a schema object in a Fleet Advisor collector inventory.
- The number of objects in a schema object in a Fleet Advisor collector inventory.
- The type of the schema object, as reported by the database engine. Examples include the following: function, trigger, SYSTEM_TABLE, QUEUE.
- The ID of a schema object in a Fleet Advisor collector inventory.

IBMDb2Settings

Provides information that defines an IBM Db2 LUW endpoint.

See: smart constructor.

Create a value of IBMDb2Settings with all optional fields omitted; use generic-lens or optics to modify other optional fields. Record fields (with corresponding lenses for backwards compatibility):

- For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
- Database name for the endpoint.
- Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
- Endpoint connection password.
- Endpoint TCP port. The default value is 50000.
- The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Db2 LUW endpoint. You can specify one of two sets of values for these permissions: either the values for this setting and SecretsManagerSecretId, or clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager

Provides information that defines an Amazon Redshift endpoint. See the smart constructor.

Record fields:
- A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
- An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster. For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished.
  For more information, see COPY (https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html) in the Amazon Redshift Database Developer Guide. For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
- The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
- If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true. The default is false.
- If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
- A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- The name of the Amazon Redshift data warehouse (service) that you are working with.
- The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string. If your date and time values use formats different from each other, set this parameter to auto.
- A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
- The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3.
  You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3, but you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
- This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
- The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview (https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html). FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
- The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
- The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576 KB (1 GB).
- The password for the user named in the username property.
- The port number for Amazon Redshift. The default value is 5439.
- A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
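The EncryptionMode rule above is asymmetric: ModifyEndpoint may change SSE_KMS to SSE_S3, but never SSE_S3 to SSE_KMS. That constraint can be captured in a small validation; this is an illustrative stdlib-only sketch, not part of amazonka-dms:

```haskell
-- Illustrative encryption modes; amazonka-dms models these differently.
data EncryptionMode = SseS3 | SseKms deriving (Show, Eq)

-- True when ModifyEndpoint may change the existing mode to the requested
-- one: keeping the mode or SSE_KMS -> SSE_S3 is allowed; SSE_S3 -> SSE_KMS
-- is rejected, per the documented rule.
allowedModeChange :: EncryptionMode -> EncryptionMode -> Bool
allowedModeChange old new
  | old == new                    = True   -- no change is always fine
  | old == SseKms && new == SseS3 = True   -- downgrade direction allowed
  | otherwise                     = False  -- SSE_S3 -> SSE_KMS rejected
```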
- A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
- A list of characters that you want to replace. Use with ReplaceChars.
- The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint. You can specify one of two sets of values for these permissions: either the values for this setting and SecretsManagerSecretId, or clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager

Record fields (schema refresh status):
- The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- The last failure message for the schema.
- The date the schema was last refreshed.
- The Amazon Resource Name (ARN) of the replication instance.
- The status of the schema.

In response to the DescribeOrderableReplicationInstances operation, this object describes an available replication instance.
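Both the Db2 LUW and Redshift settings above state the same rule for SecretsManagerAccessRoleArn: supply either the Secrets Manager pair (this setting plus SecretsManagerSecretId) or clear-text credentials, never both. A hedged stdlib-only sketch of that mutual-exclusion check; the record and field names are hypothetical, not the real endpoint-settings types:

```haskell
-- Illustrative subset of endpoint credential settings; not the real record.
data Credentials = Credentials
  { secretsRoleArn :: Maybe String  -- stands in for SecretsManagerAccessRoleArn
  , secretId       :: Maybe String  -- stands in for SecretsManagerSecretId
  , userName       :: Maybe String  -- clear-text alternative
  , password       :: Maybe String
  }

-- Valid when exactly one of the two credential styles is supplied:
-- the Secrets Manager pair, or the clear-text pair, but not both.
validCredentials :: Credentials -> Bool
validCredentials c =
  let usesSecrets   = secretsRoleArn c /= Nothing && secretId c /= Nothing
      usesClearText = userName c /= Nothing && password c /= Nothing
  in usesSecrets /= usesClearText  -- exclusive or over the two styles
```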
This description includes the replication instance's type, engine version, and allocated storage. See the smart constructor.

Record fields:
- List of Availability Zones for this replication instance.
- The default amount of storage (in gigabytes) that is allocated for the replication instance.
- The version of the replication engine.
- The amount of storage (in gigabytes) that is allocated for the replication instance.
- The maximum amount of storage (in gigabytes) that can be allocated for the replication instance.
- The minimum amount of storage (in gigabytes) that can be allocated for the replication instance.
- The value returned when the specified EngineVersion of the replication instance is in Beta or test mode. This indicates some features might not work as expected. DMS supports the ReleaseStatus parameter in versions 3.1.4 and later.
- The compute and memory capacity of the replication instance as defined for the specified replication instance class. For example, to specify the instance class dms.c4.large, set this parameter to "dms.c4.large". For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.InDepth).
- The type of storage used by the replication instance.

Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. Lenses are provided for the fields listed above for backwards compatibility.
Contains metadata for a replication instance task log. See the smart constructor.

Record fields:
- The size, in bytes, of the replication task log.
- The Amazon Resource Name (ARN) of the replication task.
- The name of the replication task.

Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. Lenses are provided for the fields listed above for backwards compatibility.
Provides information about the values of pending modifications to a replication instance. This data type is an object of the ReplicationInstance user-defined data type (https://docs.aws.amazon.com/dms/latest/APIReference/API_ReplicationInstance.html). See the smart constructor.

Record fields:
- The amount of storage (in gigabytes) that is allocated for the replication instance.
- The engine version number of the replication instance.
- Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true.
- The type of IP address protocol used by a replication instance, such as IPv4 only or Dual-stack that supports both IPv4 and IPv6 addressing. IPv6 only is not yet supported.
- The compute and memory capacity of the replication instance as defined for the specified replication instance class. For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.InDepth).

Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. Lenses are provided for the fields listed above for backwards compatibility.
The task assessment report in JSON format. See the smart constructor.

Record fields:
- The task assessment results in JSON format. The response object only contains this field if you provide DescribeReplicationTaskAssessmentResultsMessage$ReplicationTaskArn in the request.
- The file containing the results of the task assessment.
- The status of the task assessment.
- The Amazon Resource Name (ARN) of the replication task.
- The replication task identifier of the task on which the task assessment was run.
- The date the task assessment was completed.
- The URL of the S3 object containing the task assessment results. The response object only contains this field if you provide DescribeReplicationTaskAssessmentResultsMessage$ReplicationTaskArn in the request.

Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. Lenses are provided for the fields listed above for backwards compatibility.
The progress values reported by the AssessmentProgress response element. See the smart constructor.

Record fields:
- The number of individual assessments that have completed, successfully or not.
- The number of individual assessments that are specified to run.

Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. Lenses are provided for the fields listed above for backwards compatibility.

Provides information that describes a premigration assessment run that you have started using the StartReplicationTaskAssessmentRun operation. Some of the information appears based on other operations that can return the ReplicationTaskAssessmentRun object. See the smart constructor.

Record fields:
- Indication of the completion progress for the individual assessments specified to run.
- Unique name of the assessment run.
- Last message generated by an individual assessment failure.
- ARN of the migration task associated with this premigration assessment run.
- Amazon Resource Name (ARN) of this assessment run.
- Date on which the assessment run was created using the StartReplicationTaskAssessmentRun operation.
- Encryption mode used to encrypt the assessment run results.
- ARN of the KMS encryption key used to encrypt the assessment run results.
- Amazon S3 bucket where DMS stores the results of this assessment run.
- Folder in an Amazon S3 bucket where DMS stores the results of this assessment run.
- ARN of the service role used to start the assessment run using the StartReplicationTaskAssessmentRun operation. The role must allow the iam:PassRole action.
- Assessment run status. This status can have one of the following values:
  - "cancelling": the assessment run was canceled by the CancelReplicationTaskAssessmentRun operation.
  - "deleting": the assessment run was deleted by the DeleteReplicationTaskAssessmentRun operation.
  - "failed": at least one individual assessment completed with a failed status.
  - "error-provisioning": an internal error occurred while resources were provisioned (during the provisioning status).
  - "error-executing": an internal error occurred while individual assessments ran (during the running status).
  - "invalid state": the assessment run is in an unknown state.
  - "passed": all individual assessments have completed, and none has a failed status.
  - "provisioning": resources required to run individual assessments are being provisioned.
  - "running": individual assessments are being run.
  - "starting": the assessment run is starting, but resources are not yet being provisioned for individual assessments.
Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. Lenses are provided for the fields listed above for backwards compatibility.
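The two counters in the AssessmentProgress record described earlier (individual assessments completed versus specified to run) reduce naturally to a completion percentage. A stdlib-only sketch; progressPercent is an illustrative helper, not an amazonka-dms export:

```haskell
-- Percent of individual assessments finished, guarding division by zero
-- for the case where no assessments were specified to run.
progressPercent :: Int -> Int -> Int  -- completed -> total -> percent
progressPercent _ 0 = 0
progressPercent completed total = (completed * 100) `div` total
```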
Provides information that describes an individual assessment from a premigration assessment run. See the smart constructor.

Record fields:
- Name of this individual assessment.
- ARN of the premigration assessment run that is created to run this individual assessment.
- Amazon Resource Name (ARN) of this individual assessment.
- Date when this individual assessment was started as part of running the StartReplicationTaskAssessmentRun operation.
- Individual assessment status. This status can have one of the following values: "cancelled", "error", "failed", "passed", "pending", "running".

Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. Lenses are provided for the fields listed above for backwards compatibility.
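Of the individual-assessment status values listed above, four describe finished assessments and two describe assessments still in flight. A small illustrative classifier over those strings (not an amazonka-dms function) shows how a caller might poll until a terminal state:

```haskell
-- Whether an individual assessment status string (as listed above) is
-- terminal, i.e. the assessment will not change state again.
-- "pending" and "running" are still in flight.
isTerminalStatus :: String -> Bool
isTerminalStatus s = s `elem` ["cancelled", "error", "failed", "passed"]
```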
In response to a request by the DescribeReplicationTasks operation, this object provides a collection of statistics about a replication task. See the smart constructor.

Record fields:
- The elapsed time of the task, in milliseconds.
- The date the replication task was started either with a fresh start or a target reload.
- The date the replication task full load was completed.
- The percent complete for the full load migration task.
- The date the replication task full load was started.
- The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType (https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTask.html#DMS-StartReplicationTask-request-StartReplicationTaskType).
- The date the replication task was stopped.
- The number of errors that have occurred during this task.
- The number of tables loaded for this task.
- The number of tables currently loading for this task.
- The number of tables queued for this task.

Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. Lenses are provided for the fields listed above for backwards compatibility.
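The per-state table counters in the replication task statistics above can be combined for progress reporting. A stdlib-only sketch; the record and field names are hypothetical stand-ins for the real statistics type:

```haskell
-- Illustrative totals over the per-state table counters described above.
data TaskStats = TaskStats
  { tablesLoaded  :: Int  -- tables loaded for this task
  , tablesLoading :: Int  -- tables currently loading
  , tablesQueued  :: Int  -- tables queued
  , tablesErrored :: Int  -- errors that have occurred
  }

-- Total number of tables this task has touched or will touch.
totalTables :: TaskStats -> Int
totalTables s =
  tablesLoaded s + tablesLoading s + tablesQueued s + tablesErrored s
```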
For more information, see  https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTask.html#DMS-StartReplicationTask-request-StartReplicationTaskTypeStartReplicationTaskType.  amazonka-dms*The date the replication task was stopped.  amazonka-dms9The number of errors that have occurred during this task.  amazonka-dms*The number of tables loaded for this task.  amazonka-dms5The number of tables currently loading for this task.  amazonka-dms*The number of tables queued for this task.  >(c) 2013-2023 Brendan HayMozilla Public License, v. 2.0. Brendan Hayauto-generatednon-portable (GHC extensions) Safe-Inferred"%&';(  amazonka-dmsProvides information that describes a replication task created by the CreateReplicationTask operation.See:   smart constructor.  amazonka-dmsIndicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or  CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.8The value can be in date, checkpoint, or LSN/SCN format.8Date Example: --cdc-start-position @2018-03-08T12:12:12@8Checkpoint Example: --cdc-start-position "checkpoint:V127mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:187600*0#93"LSN Example: --cdc-start-position @mysql-bin-changelog.000024:373@  amazonka-dmsIndicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.Server time example: --cdc-stop-position @server_time:2018-02-09T12:12:12@Commit time example: --cdc-stop-position @commit_time: 2018-02-09T12:12:12 @  amazonka-dmsThe last error (failure) message generated for the replication task.  amazonka-dmsThe type of migration.  amazonka-dmsIndicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.  
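The rule that CdcStartPosition and CdcStartTime are mutually exclusive can be made unrepresentable with a sum type. A sketch with hypothetical names (not amazonka's types; `--cdc-start-time` is the corresponding AWS CLI flag):

```haskell
-- Encoding the documented either/or choice as a sum type: a caller can
-- supply a start position or a start time, never both. Illustrative only.
data CdcStart
  = StartAtPosition String   -- date, "checkpoint:...", or LSN/SCN form
  | StartAtTime String       -- a timestamp such as "2018-03-08T12:12:12"

-- Render the choice as the matching AWS CLI argument.
renderCdcStartArg :: CdcStart -> String
renderCdcStartArg (StartAtPosition p) = "--cdc-start-position " ++ p
renderCdcStartArg (StartAtTime t)     = "--cdc-start-time " ++ t
```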
- ReplicationInstanceArn: The ARN of the replication instance.
- ReplicationTaskArn: The Amazon Resource Name (ARN) of the replication task.
- ReplicationTaskCreationDate: The date the replication task was created.
- ReplicationTaskIdentifier: The user-assigned replication task identifier or name. Constraints: must contain 1-255 alphanumeric characters or hyphens; first character must be a letter.

DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.

- CdcMaxBatchInterval: Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds.
- CdcMinFileSize: Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB.
- CdcPath: Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-PreserveTransactions) to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load.
DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketFolder) and BucketName (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketName). For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData. If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData. For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.EndpointSettings.CdcPath). This setting is supported in DMS versions 3.4.2 and later.

- CompressionType: An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
- CsvDelimiter: The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
- CsvNoSupValue: This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-UseCsvNoSupValue) is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log.
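The CdcPath folder-path rules above (BucketName, plus BucketFolder when present, plus CdcPath) can be sketched as a pure function. This is a hypothetical illustration of the documented behavior, not DMS code:

```haskell
-- Build the CDC folder path from BucketName, an optional BucketFolder,
-- and CdcPath, following the examples in the S3Settings documentation.
cdcFolderPath :: String -> Maybe String -> String -> String
cdcFolderPath bucket mFolder cdcPath =
  case mFolder of
    Nothing     -> bucket ++ "/" ++ cdcPath                   -- no BucketFolder
    Just folder -> bucket ++ "/" ++ folder ++ "/" ++ cdcPath  -- with BucketFolder
```

For example, `cdcFolderPath "MyTargetBucket" (Just "MyTargetData") "MyChangedData"` reproduces the documented path MyTargetBucket/MyTargetData/MyChangedData.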
If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting. This setting is supported in DMS versions 3.4.1 and later.

- CsvNullValue: An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL. The default value is NULL. Valid values include any valid string.
- CsvRowDelimiter: The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (\n).
- DataFormat: The format of the data that you want to use for output. You can choose one of the following: csv, a row-based file format with comma-separated values (.csv); or parquet, Apache Parquet (.parquet), a columnar storage file format that features efficient compression and provides faster query response.
- DataPageSize: The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
- DatePartitionDelimiter: Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionedEnabled is set to true.
- DatePartitionEnabled: When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.DatePartitioning).
- DatePartitionSequence: Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD.
Use this parameter when DatePartitionedEnabled is set to true.

- DatePartitionTimezone: When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionedEnabled is set to true, as shown in the following example:
  s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone": "Asia/Seoul", "BucketName": "dms-nattarat-test"}'
- DictPageSizeLimit: The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
- EnableStatistics: A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
- EncodingType: The type of encoding you are using: RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently (this is the default); PLAIN doesn't use encoding at all, values are stored as they are; PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column, stored in a dictionary page for each column chunk.
- EncryptionMode: The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3.
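The date-partition settings above combine a date sequence with a delimiter to form folder names. A sketch under stated assumptions (zero-padded components, SLASH rendered as "/"; hypothetical helper, not DMS code):

```haskell
import Text.Printf (printf)

-- Render a (year, month, day) as a partition path for the YYYYMMDD
-- sequence with a configurable delimiter string ("/" for SLASH, the
-- documented default). Month and day are assumed zero-padded.
datePartitionPath :: String -> (Int, Int, Int) -> String
datePartitionPath delim (y, m, d) =
  printf "%04d%s%02d%s%02d" y delim m delim d
```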
You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, you need an Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions: s3:CreateBucket, s3:ListBucket, s3:DeleteBucket, s3:GetBucketLocation, s3:GetObject, s3:PutObject, s3:DeleteObject, s3:GetObjectVersion, s3:GetBucketPolicy, s3:PutBucketPolicy, s3:DeleteBucketPolicy.

- ExpectedBucketOwner: To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting. Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'. When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter.
- ExternalTableDefinition: Specifies how tables are defined in the S3 source files only.
- IgnoreHeaderRows: When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature. The default is 0.
- IncludeOpForFullLoad: A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database. DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later. For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file.
This allows the format of your target records from a full load to be consistent with the target records from a CDC load. This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps) in the Database Migration Service User Guide.

- MaxFileSize: A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load. The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576.
- ParquetTimestampInMillisecond: A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later. When ParquetTimestampInMillisecond is set to true or y, DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision. Currently, Amazon Athena and Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue. DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision. Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
- ParquetVersion: The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.
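The IncludeOpForFullLoad behavior described above amounts to optionally prepending an operation code (I, U, or D) as the first .csv field. A minimal sketch with hypothetical names, illustrating the annotation rule rather than DMS's actual writer:

```haskell
-- Source-database operation kinds that DMS can annotate in output files.
data Op = Insert | Update | Delete

-- The single-letter code written in the first .csv field.
opCode :: Op -> String
opCode Insert = "I"
opCode Update = "U"
opCode Delete = "D"

-- Prepend the op annotation to a row's fields when the setting is on;
-- otherwise leave the row unchanged.
annotateRow :: Bool -> Op -> [String] -> [String]
annotateRow includeOp op fields
  | includeOp = opCode op : fields
  | otherwise = fields
```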
- PreserveTransactions: If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CdcPath). For more information, see Capturing data changes (CDC) including transaction order on the S3 target (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.EndpointSettings.CdcPath). This setting is supported in DMS versions 3.4.2 and later.
- Rfc4180: For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value. For an S3 target, an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column with an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice. The default value is true. Valid values include true, false, y, and n.
- RowGroupLength: The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, the slower writes become. This parameter defaults to 10,000 rows. This number is used for .parquet file format only. If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
- ServerSideEncryptionKmsKeyId: If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID.
The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key. Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value

- ServiceAccessRoleArn: The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
- TimestampColumnName: A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. DMS supports the TimestampColumnName parameter in versions 3.1.4 and later. DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value. For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS. For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database. The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database. When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
- UseCsvNoSupValue: This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format.
If set to true, for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CsvNoSupValue). If not set or set to false, DMS uses the null value for these columns. This setting is supported in DMS versions 3.4.1 and later.

- UseTaskStartTimeForFullLoadTimestamp: When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to target. For full load, when UseTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time. When UseTaskStartTimeForFullLoadTimestamp is set to false, the full load timestamp in the timestamp column increments with the time data arrives at the target.

Create a value of S3Settings with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The following record fields are available, with the corresponding lenses provided for backwards compatibility:

- AddColumnName: An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. The default value is false. Valid values are true, false, y, and n.
- AddTrailingPaddingCharacter: Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.
- BucketFolder: An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
- BucketName: The name of the S3 bucket.
- CannedAclForObjects: A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files.
For more information about Amazon S3 canned ACLs, see  http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Canned ACL in the Amazon S3 Developer Guide.The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL. ,   - A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true8, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps3Indicating Source DB Operations in Migrated S3 Data in the &Database Migration Service User Guide..DMS supports the use of the CdcInsertsAndUpdates( parameter in versions 3.3.1 and later.CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true$ for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true% for the same endpoint, but not both. ,   - A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). 
These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps3Indicating Source DB Operations in Migrated S3 Data in the &Database Migration Service User Guide..>DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad) parameters in versions 3.1.4 and later.CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true$ for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true% for the same endpoint, but not both. ,   - Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within an DMS CloudFormation template. The default value is 60 seconds. ,   - Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3.When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within an DMS CloudFormation template.The default value is 32 MB. ,   - Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. 
If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target if you set  https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-PreserveTransactionsPreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by  https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketFolder BucketFolder and  https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketName BucketName .For example, if you specify CdcPath as  MyChangedData, and you specify  BucketName as MyTargetBucket but do not specify  BucketFolder., DMS creates the CDC folder path following: MyTargetBucket/MyChangedData.If you specify the same CdcPath, and you specify  BucketName as MyTargetBucket and  BucketFolder as  MyTargetData/, DMS creates the CDC folder path following: )MyTargetBucket/MyTargetData/MyChangedData.For more information on CDC including transaction order on an S3 target, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.EndpointSettings.CdcPathCapturing data changes (CDC) including transaction order on the S3 target.:This setting is supported in DMS versions 3.4.2 and later. ,   - An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats. ,   - The delimiter used to separate columns in the .csv file for both source and target. The default is a comma. 
,   - This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If  https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-UseCsvNoSupValueUseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting.:This setting is supported in DMS versions 3.4.1 and later. ,   - An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL.The default value is NULL(. Valid values include any valid string. ,   - The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (\n). ,   - The format of the data that you want to use for output. You can choose one of the following:csv : This is a row-based file format with comma-separated values (.csv).parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response. ,   - The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only. ,   - Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionedEnabled is set to true. ,   - When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. 
For more information about date-based folder partitioning, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.DatePartitioning$Using date-based folder partitioning. ,   - Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionedEnabled is set to true. ,  , - When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionedEnabled is set to true%, as shown in the following example.s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone":" Asia/Seoul&", "BucketName": "dms-nattarat-test"}' ,   - The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN< encoding. This size is used for .parquet file format only. ,   - A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false! to disable. Statistics include NULL, DISTINCT, MAX, and MIN% values. This parameter defaults to true3. This value is used for .parquet file format only. ,  & - The type of encoding you are using:RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.PLAIN< doesn't use encoding at all. Values are stored as they are.PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk. 
,   - The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.For the ModifyEndpoint6 operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S30. But you can@t change the existing value from SSE_S3 to SSE_KMS.To use SSE_S3, you need an Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions: s3:CreateBucket  s3:ListBucket s3:DeleteBucket s3:GetBucketLocation  s3:GetObject  s3:PutObject s3:DeleteObject s3:GetObjectVersion s3:GetBucketPolicy s3:PutBucketPolicy s3:DeleteBucketPolicy ,   - To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting. Example: (--s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}'When you make a request to test a connection or perform a migration, S3 checks the account ID of the bucket owner against the specified parameter. ,   - Specifies how tables are defined in the S3 source files only. ,   - When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.The default is 0. ,   - A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.DMS supports the IncludeOpForFullLoad( parameter in versions 3.1.4 and later.=For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. 
This allows the format of your target records from a full load to be consistent with the target records from a CDC load.%This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps3Indicating Source DB Operations in Migrated S3 Data in the &Database Migration Service User Guide.. ,   - A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576. ,  / - A value that specifies the precision of any  TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.DMS supports the ParquetTimestampInMillisecond( parameter in versions 3.1.4 and later.When ParquetTimestampInMillisecond is set to true or y, DMS writes all  TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.Currently, Amazon Athena and Glue can handle only millisecond precision for  TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or Glue.DMS writes any  TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter. ,   - The version of the Apache Parquet format that you want to use:  parquet_1_0 (the default) or  parquet_2_0. 
- PreserveTransactions - If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CdcPath). For more information, see Capturing data changes (CDC) including transaction order on the S3 target (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.EndpointSettings.CdcPath). This setting is supported in DMS versions 3.4.2 and later.

- Rfc4180 - For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field, so you can't use a delimiter as part of the string, because it signals the end of the value. For an S3 target, this is an optional parameter used to set behavior to comply with RFC 4180 for data migrated to Amazon S3 using the .csv file format only. When this value is set to true or y using Amazon S3 as a target, if the data has quotation marks or newline characters in it, DMS encloses the entire column in an additional pair of double quotation marks ("). Every quotation mark within the data is repeated twice. The default value is true. Valid values include true, false, y, and n.

- RowGroupLength - The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows and is used for the .parquet file format only. If you choose a value larger than the maximum, RowGroupLength is set to the maximum row group length in bytes (64 * 1024 * 1024).

- ServerSideEncryptionKmsKeyId - If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key. Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value

- ServiceAccessRoleArn - The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.

- TimestampColumnName - A value that, when nonblank, causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target. DMS supports the TimestampColumnName parameter in versions 3.1.4 and later. DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value. For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS. For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database. The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database. When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.

- UseCsvNoSupValue - This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true, then for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CsvNoSupValue). If not set or set to false, DMS uses the null value for these columns. This setting is supported in DMS versions 3.4.1 and later.

- UseTaskStartTimeForFullLoadTimestamp - When set to true, this parameter uses the task start time as the timestamp column value instead of the time data is written to the target. For a full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time. When useTaskStartTimeForFullLoadTimestamp is set to false, the full-load timestamp in the timestamp column increments with the time data arrives at the target.

- AddColumnName - An optional parameter that, when set to true or y, you can use to add column name information to the .csv output file. The default value is false. Valid values are true, false, y, and n.

- AddTrailingPaddingCharacter - Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.

- BucketFolder - An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, the path used is schema_name/table_name/.

- BucketName - The name of the S3 bucket.

- CannedAclForObjects - A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL (http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) in the Amazon S3 Developer Guide. The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and BUCKET_OWNER_FULL_CONTROL.

- CdcInsertsAndUpdates - A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files.
The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file. For the .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps) in the Database Migration Service User Guide. DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.

- CdcInsertsOnly - A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target. If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For the .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps) in the Database Migration Service User Guide. DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later. CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.

- CdcMaxBatchInterval - Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3. When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 60 seconds.

- CdcMinFileSize - Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3. When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write is triggered by whichever parameter condition is met first within a DMS CloudFormation template. The default value is 32 MB.

- CdcPath - Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-PreserveTransactions) to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load.
DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketFolder) and BucketName (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketName). For example, if you specify CdcPath as MyChangedData, and you specify BucketName as MyTargetBucket but do not specify BucketFolder, DMS creates the following CDC folder path: MyTargetBucket/MyChangedData. If you specify the same CdcPath, and you specify BucketName as MyTargetBucket and BucketFolder as MyTargetData, DMS creates the following CDC folder path: MyTargetBucket/MyTargetData/MyChangedData. For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.EndpointSettings.CdcPath). This setting is supported in DMS versions 3.4.2 and later.

- CompressionType - An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

- CsvDelimiter - The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.

- CsvNoSupValue - This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue (https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-UseCsvNoSupValue) is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting. This setting is supported in DMS versions 3.4.1 and later.

- CsvNullValue - An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are not nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL. The default value is NULL. Valid values include any valid string.

- CsvRowDelimiter - The delimiter used to separate rows in the .csv file for both source and target. The default is a carriage return (\n).

- DataFormat - The format of the data that you want to use for output. You can choose one of the following: csv, a row-based file format with comma-separated values (.csv); or parquet, as Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

- DataPageSize - The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for the .parquet file format only.

- DatePartitionDelimiter - Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionedEnabled is set to true.

- DatePartitionEnabled - When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.DatePartitioning).

- DatePartitionSequence - Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD.
Use this parameter when DatePartitionedEnabled is set to true.

- DatePartitionTimezone - When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionedEnabled is set to true, as shown in the following example: s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", "DatePartitionTimezone": "Asia/Seoul", "BucketName": "dms-nattarat-test"}'

- DictPageSizeLimit - The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, the column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for the .parquet file format only.

- EnableStatistics - A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for the .parquet file format only.

- EncodingType - The type of encoding you are using. RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently; this is the default. PLAIN doesn't use encoding at all; values are stored as they are. PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column; the dictionary is stored in a dictionary page for each column chunk.
Provides information that defines a Microsoft SQL Server endpoint. See: smart constructor.

- The maximum size of the packets (in bytes) used to transfer data using BCP.

- Specifies a file group for the DMS internal tables. When the replication task starts, all the internal DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.

- Database name for the endpoint.

- Endpoint connection password.

- Endpoint TCP port.

- Cleans and recreates table metadata information on the replication instance when a mismatch occurs. An example is a situation where running an alter DDL statement on a table might result in different information about the table cached in the replication instance.
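As an illustrative sketch only, a settings document for a SQL Server endpoint covering the fields above might look like the following. The server, database, and values are placeholders, and the field names assume the shape of the DMS MicrosoftSQLServerSettings API type:

```json
{
  "ServerName": "sqlserver.example.internal",
  "Port": 1433,
  "DatabaseName": "ExampleDb",
  "BcpPacketSize": 32768,
  "ControlTablesFileGroup": "dms_control_fg"
}
```

Credentials are omitted here; as described below, they can be supplied either as clear-text UserName and Password values or via Secrets Manager settings.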
- When this attribute is set to Y, DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.

- Use this attribute to minimize the need to access the backup log and enable DMS to prevent truncation using one of the following two methods. Start transactions in the database: This is the default method. When this method is used, DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method. Exclusively use sp_repldone within a single task: When this method is used, DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one DMS task can access the database at any given time. Therefore, if you need to run parallel DMS tasks against the same database, use the default method.

- The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the SQL Server endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port.
You can't specify both. For more information on creating this SecretsManagerSecret, and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager.

The following fields describe a database in a Fleet Advisor collector inventory:

- The ID of a schema in a Fleet Advisor collector inventory.
- The name of a database in a Fleet Advisor collector inventory.
- The IP address of a database in a Fleet Advisor collector inventory.
- The number of schemas in a Fleet Advisor collector inventory database.
- The server name of a database in a Fleet Advisor collector inventory.
- The software details of a database in a Fleet Advisor collector inventory, such as database engine and version.
- A list of collectors associated with the database.
- The ID of a database in a Fleet Advisor collector inventory.

Create a value of the record with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The record fields listed above are available, with the corresponding lenses provided for backwards compatibility.
Describes an identifiable significant activity that affects a replication instance or task. This object can provide the message, the available event categories, the date and source of the event, and the DMS resource type. See: smart constructor.

- The date of the event.
- The event categories available for the specified source type.
- The event message.
- The identifier of an event source.
- The type of DMS resource that generates events. Valid values: replication-instance | endpoint | replication-task

Create a value of the record with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The record fields listed above are available, with the corresponding lenses provided for backwards compatibility.
Provides information that defines a Redis target endpoint. See: smart constructor.

- The password provided with the auth-role and auth-token options of the AuthType setting for a Redis target endpoint.

- The type of authentication to perform when connecting to a Redis target. Options include none, auth-token, and auth-role. The auth-token option requires an AuthPassword value to be provided. The auth-role option requires AuthUserName and AuthPassword values to be provided.

- The user name provided with the auth-role option of the AuthType setting for a Redis target endpoint.

- The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses to connect to your Redis target endpoint.

- The connection to a Redis target endpoint using Transport Layer Security (TLS). Valid values include plaintext and ssl-encryption. The default is ssl-encryption, which makes an encrypted connection. Optionally, you can identify an Amazon Resource Name (ARN) for an SSL certificate authority (CA) using the SslCaCertificateArn setting. If an ARN isn't given for a CA, DMS uses the Amazon root CA. The plaintext option doesn't provide Transport Layer Security (TLS) encryption for traffic between endpoint and database.

- Fully qualified domain name of the endpoint.

- Transmission Control Protocol (TCP) port for the endpoint.
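A hedged sketch of a Redis target settings document consistent with the options above. The host name and password are placeholders, and the field names assume the shape of the DMS RedisSettings API type; with AuthType set to auth-token, an AuthPassword value is required, matching the rule stated above:

```json
{
  "ServerName": "redis.example.internal",
  "Port": 6379,
  "SslSecurityProtocol": "ssl-encryption",
  "AuthType": "auth-token",
  "AuthPassword": "example-password"
}
```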
Create a value of this type with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields; the record fields listed above have corresponding lenses provided for backwards compatibility.

In response to a request by the DescribeReplicationSubnetGroups operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status. See: the smart constructor.

Create a value of this type with all optional fields omitted. Use generic-lens or optics to modify other optional fields. The following record fields are available, with corresponding lenses provided for backwards compatibility:

- The Availability Zone of the subnet.
- The subnet identifier.
- The status of the subnet.

Describes a subnet group in response to a request by the DescribeReplicationSubnetGroups operation. See: the smart constructor.

Create a value of this type with all optional fields omitted. Use generic-lens or optics to modify other optional fields. The following record fields are available, with corresponding lenses provided for backwards compatibility:

- A description for the replication subnet group.
- The identifier of the replication instance subnet group.
- The status of the subnet group.
- The subnets that are in the subnet group.
- The IP addressing protocol supported by the subnet group. This is used by a replication instance with values such as IPv4 only or Dual-stack that supports both IPv4 and IPv6 addressing. IPv6 only is not yet supported.
- The ID of the VPC.
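Since every field on these types is optional, reading them is a matter of folding through the Maybe wrappers. A small sketch, assuming the generated lens names (replicationSubnetGroup_subnets, subnet_subnetIdentifier) follow amazonka's usual conventions:

```haskell
import Control.Lens (folded, (^..), _Just)
import Data.Text (Text)
import qualified Amazonka.DMS as DMS

-- Sketch: collect the subnet identifiers from a replication subnet group.
-- Lens names are assumed from amazonka's generated-naming convention.
subnetIds :: DMS.ReplicationSubnetGroup -> [Text]
subnetIds grp =
  grp ^.. DMS.replicationSubnetGroup_subnets  -- Maybe [Subnet]
        . _Just
        . folded
        . DMS.subnet_subnetIdentifier         -- each identifier is Maybe Text
        . _Just
```

The same fold pattern applies to any of the optional list-valued fields in this module.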
Provides information about types of supported endpoints in response to a request by the DescribeEndpointTypes operation. This information includes the type of endpoint, the database engine name, and whether change data capture (CDC) is supported. See: the smart constructor.

Create a value of this type with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The following record fields are available, with corresponding lenses provided for backwards compatibility:

- The type of endpoint. Valid values are source and target.
- The expanded name for the engine name. For example, if the EngineName parameter is "aurora", this value would be "Amazon Aurora MySQL".
- The database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "db2-zos", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", "neptune", and "babelfish".
- The earliest DMS engine version that supports this endpoint engine. Note that endpoint engines released with DMS versions earlier than 3.1.1 do not return a value for this parameter.
- Indicates if change data capture (CDC) is supported.

Provides information that defines a SAP ASE endpoint. See: the smart constructor.
- Database name for the endpoint.
- Endpoint connection password.
- Endpoint TCP port. The default is 5000.
- The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the SAP ASE endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager

The following record fields are available on the endpoint type, with corresponding lenses provided for backwards compatibility:

- The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
- The name of the database at the endpoint.
- The settings for the DMS Transfer type source. For more information, see the DmsTransferSettings structure.
- Undocumented member.
- The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.
- The settings for the OpenSearch source endpoint. For more information, see the ElasticsearchSettings structure.
- The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
- The type of endpoint. Valid values are source and target.
- The expanded name for the engine name. For example, if the EngineName parameter is "aurora", this value would be "Amazon Aurora MySQL".
- The database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "db2-zos", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", "neptune", and "babelfish".
- Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
- The external table definition.
- Additional connection attributes used to connect to the endpoint.
- Settings in JSON format for the source GCP MySQL endpoint.
- The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.
- The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
- The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
- A KMS key identifier that is used to encrypt the connection parameters for the endpoint. If you don't specify a value for the KmsKeyId parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
- The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.
- The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
- The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.
- The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
- The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.
- The port value used to access the endpoint.
- The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.
- The settings for the Redis target endpoint. For more information, see the RedisSettings structure.
- Settings for the Amazon Redshift endpoint.
- The settings for the S3 target endpoint. For more information, see the S3Settings structure.
- The name of the server at the endpoint.
- The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action.
- The SSL mode used to connect to the endpoint. The default value is none.
- The status of the endpoint.
- The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.
- The user name used to connect to the endpoint.
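Endpoints are typically read back from the DescribeEndpoints operation rather than constructed by hand. The following sketch assumes amazonka 2.0's Env/send API and the generated lens names shown; it is illustrative and has not been run against a live account.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens (view)
import Data.Foldable (for_)
import Data.Maybe (fromMaybe)
import qualified Amazonka as AWS
import qualified Amazonka.DMS as DMS

-- Sketch: print each endpoint's identifier, engine name, and status.
-- Operation and lens names are assumed from amazonka's conventions.
listEndpoints :: IO ()
listEndpoints = do
  env  <- AWS.newEnv AWS.discover   -- credentials discovered from the environment
  resp <- AWS.runResourceT (AWS.send env DMS.newDescribeEndpoints)
  let endpoints = fromMaybe [] (view DMS.describeEndpointsResponse_endpoints resp)
  for_ endpoints $ \ep ->
    print ( view DMS.endpoint_endpointIdentifier ep
          , view DMS.endpoint_engineName ep
          , view DMS.endpoint_status ep )
```

Every field printed here is optional, matching the record documentation above, so the tuple components are all Maybe values.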
Describes a Fleet Advisor collector. See: the smart constructor.

Create a value of this type with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The following record fields are available, with corresponding lenses provided for backwards compatibility:

- Undocumented member.
- The name of the Fleet Advisor collector.
- The reference ID of the Fleet Advisor collector.
- The version of your Fleet Advisor collector, in semantic versioning format, for example 1.0.2.
- The timestamp when you created the collector, in the following format: 2022-01-24T19:04:02.596113Z.
- A summary description of the Fleet Advisor collector.
- Undocumented member.
- The timestamp of the last time the collector received data, in the following format: 2022-01-24T19:04:02.596113Z.
- The timestamp when DMS last modified the collector, in the following format: 2022-01-24T19:04:02.596113Z.
- The timestamp when DMS registered the collector, in the following format: 2022-01-24T19:04:02.596113Z.
- The Amazon S3 bucket that the Fleet Advisor collector uses to store inventory metadata.
- The IAM role that grants permissions to access the specified Amazon S3 bucket.
- Whether the collector version is up to date.
Describes the status of a security group associated with the virtual private cloud (VPC) hosting your replication and DB instances. See: the smart constructor.

Create a value of this type with all optional fields omitted. Use generic-lens or optics to modify other optional fields. The following record fields are available, with corresponding lenses provided for backwards compatibility:

- The status of the VPC security group.
- The VPC security group ID.

Provides information that defines a replication instance. See: the smart constructor. The following record fields are available, with corresponding lenses provided for backwards compatibility:

- The amount of storage (in gigabytes) that is allocated for the replication instance.
- Boolean value indicating if minor version upgrades will be automatically applied to the instance.
- The Availability Zone for the instance.
- The DNS name servers supported for the replication instance to access your on-premises source or target database.
- The engine version number of the replication instance. If an engine version number is not specified when a replication instance is created, the default is the latest engine version available. When modifying a major engine version of an instance, also set AllowMajorVersionUpgrade to true.
- The expiration date of the free replication instance that is part of the Free DMS program.
- The time the replication instance was created.
- A KMS key identifier that is used to encrypt the data on the replication instance. If you don't specify a value for the KmsKeyId parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
- Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true.
- The type of IP address protocol used by a replication instance, such as IPv4 only or Dual-stack that supports both IPv4 and IPv6 addressing. IPv6 only is not yet supported.
- The pending modification values.
- The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
- Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true.
- The Amazon Resource Name (ARN) of the replication instance.
- The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console. For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.InDepth).
- The replication instance identifier, a required parameter. This parameter is stored as a lowercase string. Constraints: must contain 1-63 alphanumeric characters or hyphens; first character must be a letter.
- One or more private IP addresses for the replication instance.
- The public IP address of the replication instance.
- One or more public IP addresses for the replication instance.
- The status of the replication instance. The possible return values include: "available", "creating", "deleted", "deleting", "failed", "modifying", "upgrading", "rebooting", "resetting-master-credentials", "storage-full", "incompatible-credentials", "incompatible-network", "maintenance".
- The subnet group for the replication instance.
- The Availability Zone of the standby replication instance in a Multi-AZ deployment.
- The VPC security group for the instance.

Create a value of this type with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields; the record fields above have corresponding lenses provided for backwards compatibility.
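Because the documented status values are plain strings in an optional text field, a readiness check is a direct comparison. A sketch, assuming the lens name follows amazonka's generated-naming convention:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens (view)
import qualified Amazonka.DMS as DMS

-- Sketch: True only once the instance has left "creating"/"modifying"
-- and reports the documented "available" status. The lens name is an
-- assumption based on amazonka's conventions.
isAvailable :: DMS.ReplicationInstance -> Bool
isAvailable ri =
  view DMS.replicationInstance_replicationInstanceStatus ri == Just "available"
```

A poller would re-run DescribeReplicationInstances until this predicate holds before starting tasks against the instance.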
API version 2016-01-01 of the Amazon Database Migration Service SDK configuration.

Error types:

- DMS was denied access to the endpoint. Check that the role is correctly configured.
- The specified collector doesn't exist.
- There are not enough resources allocated to the database migration.
- The certificate was not valid.
- The action or operation requested isn't valid.
- The resource is in a state that prevents it from being used for database migration.
- The subnet provided is invalid.
- The ciphertext references a key that doesn't exist or that the DMS account doesn't have access to.
- The specified KMS key isn't enabled.
- A Key Management Service (KMS) error is preventing access to KMS.
- The state of the specified KMS resource isn't valid for this request.
- DMS cannot access the KMS key.
- The specified KMS entity or resource can't be found.
- This request triggered KMS request throttling.
- The replication subnet group does not cover enough Availability Zones (AZs). Edit the replication subnet group and add more AZs.
- The resource you are attempting to create already exists.
- ResourceNotFoundFault - The resource could not be found.
- ResourceQuotaExceededFault - The quota for this resource has been exceeded.
- S3AccessDeniedFault - Insufficient privileges are preventing access to an Amazon S3 object.
- S3ResourceNotFoundFault - A specified Amazon S3 bucket, bucket folder, or other object can't be found.
- SNSInvalidTopicFault - The SNS topic is invalid.
- SNSNoAuthorizationFault - You are not authorized for the SNS subscription.
- StorageQuotaExceededFault - The storage quota has been exceeded.
- SubnetAlreadyInUseFault - The specified subnet is already in use.
- UpgradeDependencyFailureFault - An upgrade dependency is preventing the database migration.

Amazonka.DMS.TestConnection
TestConnection request fields:
- ReplicationInstanceArn - The Amazon Resource Name (ARN) of the replication instance.
- EndpointArn - The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
Create a value of TestConnection with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The corresponding lenses are provided for backwards compatibility.
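As a sketch of how the TestConnection operation above is typically invoked with amazonka 2.x, and how one of the service faults listed above surfaces as an exception. The constructor argument order (replication instance ARN, then endpoint ARN) follows the field order in the docs and amazonka's generated-code conventions, but should be checked against this package's haddocks; the ARNs are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}

import qualified Amazonka
import qualified Amazonka.DMS as DMS
import Control.Exception (try)

-- Test connectivity between a replication instance and an endpoint.
-- Amazonka throws service faults (e.g. InvalidResourceStateFault) as an
-- Amazonka.Error exception, which we catch with Control.Exception.try.
main :: IO ()
main = do
  env <- Amazonka.newEnv Amazonka.discover  -- credentials from the environment
  let req =
        DMS.newTestConnection
          "arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE"      -- placeholder instance ARN
          "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE" -- placeholder endpoint ARN
  result <- try (Amazonka.runResourceT (Amazonka.send env req))
  case result of
    Left (err :: Amazonka.Error) -> print err
    Right _resp -> putStrLn "connection test started"
```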
TestConnectionResponse fields:
- Connection - The connection tested.
- HttpStatus - The response's http status code.

(c) 2013-2023 Brendan Hay. Amazonka.DMS.StopReplicationTask
StopReplicationTask request fields:
- ReplicationTaskArn - The Amazon Resource Name (ARN) of the replication task to be stopped.
Create a value of StopReplicationTask with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.
StopReplicationTaskResponse fields:
- ReplicationTask - The replication task stopped.
- HttpStatus - The response's http status code.

Amazonka.DMS.StartReplicationTaskAssessmentRun
StartReplicationTaskAssessmentRunResponse fields:
- ReplicationTaskAssessmentRun - The premigration assessment run that was started.
- HttpStatus - The response's http status code.
StartReplicationTaskAssessmentRun request fields:
- Exclude - Space-separated list of names for specific individual assessments that you want to exclude. These names come from the default list of individual assessments that DMS supports for the associated migration task, which is specified by ReplicationTaskArn. You can't set a value for Exclude if you also set a value for IncludeOnly in the API operation. To identify the names of the default individual assessments that DMS supports for the associated migration task, run the DescribeApplicableIndividualAssessments operation using its own ReplicationTaskArn request parameter.
- IncludeOnly - Space-separated list of names for specific individual assessments that you want to include. These names come from the default list of individual assessments that DMS supports for the associated migration task, which is specified by ReplicationTaskArn. You can't set a value for IncludeOnly if you also set a value for Exclude in the API operation. To identify the names of the default individual assessments that DMS supports for the associated migration task, run the DescribeApplicableIndividualAssessments operation using its own ReplicationTaskArn request parameter.
- ResultEncryptionMode - Encryption mode that you can specify to encrypt the results of this assessment run. If you don't specify this request parameter, DMS stores the assessment run results without encryption. You can specify one of the following options: "SSE_S3" (the server-side encryption provided as a default by Amazon S3), or "SSE_KMS" (Key Management Service (KMS) encryption, which can use either a custom KMS encryption key that you specify or the default KMS encryption key that DMS provides).
- ResultKmsKeyArn - ARN of a custom KMS encryption key that you specify when you set ResultEncryptionMode to "SSE_KMS".
- ResultLocationFolder - Folder within an Amazon S3 bucket where you want DMS to store the results of this assessment run.
- ReplicationTaskArn - Amazon Resource Name (ARN) of the migration task associated with the premigration assessment run that you want to start.
- ServiceAccessRoleArn - ARN of the service role needed to start the assessment run. The role must allow the iam:PassRole action.
- ResultLocationBucket - Amazon S3 bucket where you want DMS to store the results of this assessment run.
- AssessmentRunName - Unique name to identify the assessment run.
Create a value of StartReplicationTaskAssessmentRun with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.
StartReplicationTaskAssessmentRunResponse fields:
- ReplicationTaskAssessmentRun - The premigration assessment run that was started.
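A minimal sketch of starting a premigration assessment run with amazonka 2.x, under the assumption that the constructor takes the four required fields in the order listed above and that the generated lens for Exclude is named as shown; the ARNs, bucket, and names are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import qualified Amazonka.DMS as DMS
import Control.Lens ((&), (?~))

-- Start a premigration assessment run, excluding one individual assessment
-- by name. Required fields (assumed constructor order): task ARN, service
-- role ARN, result bucket, run name.
startAssessmentRun :: Amazonka.Env -> IO DMS.StartReplicationTaskAssessmentRunResponse
startAssessmentRun env =
  Amazonka.runResourceT $
    Amazonka.send env $
      DMS.newStartReplicationTaskAssessmentRun
        "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"    -- placeholder ReplicationTaskArn
        "arn:aws:iam::123456789012:role/dms-assessment-role" -- placeholder ServiceAccessRoleArn
        "my-dms-assessment-results"                          -- placeholder ResultLocationBucket
        "pre-migration-run-1"                                -- placeholder AssessmentRunName
        & DMS.startReplicationTaskAssessmentRun_exclude
            ?~ ["unique-constraints-check"]                  -- hypothetical assessment name
```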
Amazonka.DMS.StartReplicationTaskAssessment
StartReplicationTaskAssessment request fields:
- ReplicationTaskArn - The Amazon Resource Name (ARN) of the replication task.
Create a value of StartReplicationTaskAssessment with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.
StartReplicationTaskAssessmentResponse fields:
- ReplicationTask - The assessed replication task.
- HttpStatus - The response's http status code.

Amazonka.DMS.StartReplicationTask
StartReplicationTaskResponse fields:
- ReplicationTask - The replication task started.
- HttpStatus - The response's http status code.
StartReplicationTask request fields:
- CdcStartPosition - Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start.
Specifying both values results in an error. The value can be in date, checkpoint, or LSN/SCN format.
Date example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN example: --cdc-start-position "mysql-bin-changelog.000024:373"
When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting the slotName extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for DMS: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttrib
- CdcStartTime - Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error. Timestamp example: --cdc-start-time "2018-03-08T12:12:12"
- CdcStopPosition - Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time. Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12". Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12".
- ReplicationTaskArn - The Amazon Resource Name (ARN) of the replication task to be started.
- StartReplicationTaskType - The type of replication task to start. When the migration type is full-load or full-load-and-cdc, the only valid value for the first run of the task is start-replication. You use reload-target to restart the task and resume-processing to resume the task. When the migration type is cdc, you use start-replication to start or restart the task, and resume-processing to resume the task. reload-target is not a valid value for a task with migration type of cdc.
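A sketch of a first run of a full-load-and-cdc task with amazonka 2.x, for which start-replication is the only valid task type per the field documentation above. The constructor argument order and the enum pattern name follow amazonka's generated-code conventions and should be checked against this package's haddocks; the ARN is a placeholder.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import qualified Amazonka.DMS as DMS

-- Start a replication task for its first run. Assumed constructor order:
-- replication task ARN, then the task type.
startTask :: Amazonka.Env -> IO DMS.StartReplicationTaskResponse
startTask env =
  Amazonka.runResourceT $
    Amazonka.send env $
      DMS.newStartReplicationTask
        "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE" -- placeholder task ARN
        DMS.StartReplicationTaskTypeValue_Start_replication
```

Optional CDC settings such as CdcStartTime would be layered on with the corresponding generated lenses before sending.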
Create a value of StartReplicationTask with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.
StartReplicationTaskResponse fields:
- ReplicationTask - The replication task started.
- HttpStatus - The response's http status code.

Amazonka.DMS.RunFleetAdvisorLsaAnalysis
RunFleetAdvisorLsaAnalysisResponse fields:
- LsaAnalysisId - The ID of the LSA analysis run.
- Status - The status of the LSA analysis, for example COMPLETED.
- HttpStatus - The response's http status code.
Create a value of RunFleetAdvisorLsaAnalysis with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.

Amazonka.DMS.RemoveTagsFromResource - Removes one or more tags from a DMS resource.
RemoveTagsFromResourceResponse fields:
- HttpStatus - The response's http status code.
RemoveTagsFromResource request fields:
- ResourceArn - A DMS resource from which you want to remove tag(s). The value for this parameter is an Amazon Resource Name (ARN).
- TagKeys - The tag key (name) of the tag to be removed.
Create a value of RemoveTagsFromResource with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.

Amazonka.DMS.ReloadTables
ReloadTablesResponse fields:
- ReplicationTaskArn - The Amazon Resource Name (ARN) of the replication task.
- HttpStatus - The response's http status code.
ReloadTables request fields:
- ReloadOption - Options for reload. Specify data-reload to reload the data and re-validate it if validation is enabled. Specify validate-only to re-validate the table; this option applies only when validation is enabled for the task. Valid values: data-reload, validate-only. The default value is data-reload.
- ReplicationTaskArn - The Amazon Resource Name (ARN) of the replication task.
- TablesToReload - The name and schema of the table to be reloaded.
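A sketch of reloading a single table on a running task with amazonka 2.x. It assumes newReloadTables takes the task ARN and the table list as its required arguments, and that newTableToReload takes the schema name and table name, per amazonka's generated-code conventions; the ARN, schema, and table names are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import qualified Amazonka.DMS as DMS

-- Reload one table on a replication task. With the default data-reload
-- option, the data is reloaded and re-validated if validation is enabled.
reloadOneTable :: Amazonka.Env -> IO DMS.ReloadTablesResponse
reloadOneTable env =
  Amazonka.runResourceT $
    Amazonka.send env $
      DMS.newReloadTables
        "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE" -- placeholder task ARN
        [DMS.newTableToReload "public" "orders"]          -- placeholder schema/table
```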
Create a value of ReloadTables with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.
ReloadTablesResponse fields:
- ReplicationTaskArn - The Amazon Resource Name (ARN) of the replication task.
- HttpStatus - The response's http status code.

Amazonka.DMS.RefreshSchemas
RefreshSchemasResponse fields:
- RefreshSchemasStatus - The status of the refreshed schema.
- HttpStatus - The response's http status code.
RefreshSchemas request fields:
- EndpointArn - The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- ReplicationInstanceArn - The Amazon Resource Name (ARN) of the replication instance.
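A sketch of the RefreshSchemas operation with amazonka 2.x. The constructor argument order (endpoint ARN first, then replication instance ARN) is assumed from the field order above and should be checked against this package's haddocks; the ARNs are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import qualified Amazonka.DMS as DMS

-- Ask DMS to refresh the schema list for an endpoint, using the given
-- replication instance to do the work.
refreshEndpointSchemas :: Amazonka.Env -> IO DMS.RefreshSchemasResponse
refreshEndpointSchemas env =
  Amazonka.runResourceT $
    Amazonka.send env $
      DMS.newRefreshSchemas
        "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE" -- placeholder endpoint ARN
        "arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE"      -- placeholder instance ARN
```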
Create a value of RefreshSchemas with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.
RefreshSchemasResponse fields:
- RefreshSchemasStatus - The status of the refreshed schema.
- HttpStatus - The response's http status code.

Amazonka.DMS.RebootReplicationInstance
RebootReplicationInstance request fields:
- ForceFailover - If this parameter is true, the reboot is conducted through a Multi-AZ failover. If the instance isn't configured for Multi-AZ, then you can't specify true. (--force-planned-failover and --force-failover can't both be set to true.)
- ForcePlannedFailover - If this parameter is true, the reboot is conducted through a planned Multi-AZ failover where resources are released and cleaned up prior to conducting the failover. If the instance isn't configured for Multi-AZ, then you can't specify true. (--force-planned-failover and --force-failover can't both be set to true.)
- ReplicationInstanceArn - The Amazon Resource Name (ARN) of the replication instance.
Create a value of RebootReplicationInstance with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.
RebootReplicationInstanceResponse fields:
- ReplicationInstance - The replication instance that is being rebooted.
- HttpStatus - The response's http status code.
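A sketch of rebooting a Multi-AZ replication instance through a planned failover with amazonka 2.x. Per the field documentation above, only one of the two force flags may be true. The generated lens name is assumed from amazonka's conventions; the ARN is a placeholder.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import qualified Amazonka.DMS as DMS
import Control.Lens ((&), (?~))

-- Reboot via planned failover: resources are released and cleaned up
-- before the failover, unlike a plain ForceFailover reboot.
rebootPlanned :: Amazonka.Env -> IO DMS.RebootReplicationInstanceResponse
rebootPlanned env =
  Amazonka.runResourceT $
    Amazonka.send env $
      DMS.newRebootReplicationInstance
        "arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE" -- placeholder instance ARN
        & DMS.rebootReplicationInstance_forcePlannedFailover ?~ True
```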
Amazonka.DMS.MoveReplicationTask
MoveReplicationTask request fields:
- ReplicationTaskArn - The Amazon Resource Name (ARN) of the task that you want to move.
- TargetReplicationInstanceArn - The ARN of the replication instance where you want to move the task to.
Create a value of MoveReplicationTask with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields.
MoveReplicationTaskResponse fields:
- ReplicationTask - The replication task that was moved.
- HttpStatus - The response's http status code.

Amazonka.DMS.ModifyReplicationTask
ModifyReplicationTaskResponse fields:
- ReplicationTask - The replication task that was modified.
- HttpStatus - The response's http status code.
ModifyReplicationTask request fields:
- Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start; specifying both values results in an error. The value can be in date, checkpoint, or LSN/SCN format.
  Date example: --cdc-start-position "2018-03-08T12:12:12"
  Checkpoint example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
  LSN example: --cdc-start-position "mysql-bin-changelog.000024:373"
  When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting the slotName extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttrib).
- Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start; specifying both values results in an error. Timestamp example: --cdc-start-time "2018-03-08T12:12:12"
- Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time. Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12". Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
- The migration type. Valid values: full-load | cdc | full-load-and-cdc
- The replication task identifier. Constraints: must contain 1-255 alphanumeric characters or hyphens; first character must be a letter.
- The Amazon Resource Name (ARN) of the replication instance.

ModifyReplicationInstance request fields:

- The amount of storage (in gigabytes) to be allocated for the replication instance.
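The CDC start settings above are mutually exclusive, so a ModifyReplicationTask sketch sets only one of them (amazonka 2.x lens naming assumed; the ARN and start position are placeholders):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Amazonka (discover, newEnv, runResourceT, send)
import Amazonka.DMS
import Control.Lens ((&), (?~))

main :: IO ()
main = do
  env <- newEnv discover
  -- Start CDC from an explicit LSN position. CdcStartTime is left
  -- unset because specifying both values results in an error.
  let req =
        newModifyReplicationTask "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"
          & modifyReplicationTask_cdcStartPosition ?~ "mysql-bin-changelog.000024:373"
  _resp <- runResourceT (send env req)
  pure ()
```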
- Indicates that major version upgrades are allowed. Changing this parameter does not result in an outage, and the change is asynchronously applied as soon as possible. This parameter must be set to true when specifying a value for the EngineVersion parameter that is a different major version than the replication instance's current version.
- Indicates whether the changes should be applied immediately or during the next maintenance window.
- A value that indicates that minor version upgrades are applied automatically to the replication instance during the maintenance window. Changing this parameter doesn't result in an outage, except in the case described following; the change is asynchronously applied as soon as possible. An outage does result if all of these factors apply: this parameter is set to true during the maintenance window; a newer minor version is available; and DMS has enabled automatic patching for the given engine version.
- The engine version number of the replication instance. When modifying a major engine version of an instance, also set AllowMajorVersionUpgrade to true.
- Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true.
- The type of IP address protocol used by a replication instance, such as IPv4 only or Dual-stack that supports both IPv4 and IPv6 addressing. IPv6 only is not yet supported.
- The weekly time range (in UTC) during which system maintenance can occur, which might result in an outage. Changing this parameter does not result in an outage, except in the following situation, and the change is asynchronously applied as soon as possible. If moving this window to the current time, there must be at least 30 minutes between the current time and end of the window to ensure pending changes are applied.
  Default: uses existing setting. Format: ddd:hh24:mi-ddd:hh24:mi. Valid days: Mon | Tue | Wed | Thu | Fri | Sat | Sun. Constraints: must be at least 30 minutes.
- The compute and memory capacity of the replication instance as defined for the specified replication instance class. For example, to specify the instance class dms.c4.large, set this parameter to "dms.c4.large". For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.InDepth).
- The replication instance identifier. This parameter is stored as a lowercase string.
- Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
- The Amazon Resource Name (ARN) of the replication instance.

Response fields, with the corresponding lenses provided for backwards compatibility:

- The modified replication instance.
- The response's http status code.

ModifyEventSubscription (see: smart constructor). Response fields: the modified event subscription; the response's http status code. Request fields:
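The ModifyReplicationInstance pairing described above (a major EngineVersion change requires AllowMajorVersionUpgrade) can be sketched as follows; the version string and ARN are placeholders, and amazonka 2.x lens naming is assumed:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Amazonka (discover, newEnv, runResourceT, send)
import Amazonka.DMS
import Control.Lens ((&), (?~))

main :: IO ()
main = do
  env <- newEnv discover
  -- Changing to a different major engine version requires
  -- AllowMajorVersionUpgrade to be true, as documented above.
  let req =
        newModifyReplicationInstance "arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE"
          & modifyReplicationInstance_engineVersion ?~ "3.5.1"
          & modifyReplicationInstance_allowMajorVersionUpgrade ?~ True
          & modifyReplicationInstance_applyImmediately ?~ True
  _resp <- runResourceT (send env req)
  pure ()
```

Without applyImmediately, the change waits for the next maintenance window.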
- A Boolean value; set to true to activate the subscription.
- A list of event categories for a source type that you want to subscribe to. Use the DescribeEventCategories action to see a list of event categories.
- The Amazon Resource Name (ARN) of the Amazon SNS topic created for event notification. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
- The type of DMS resource that generates the events you want to subscribe to. Valid values: replication-instance | replication-task
- The name of the DMS event notification subscription to be modified.

Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields; the record fields above have corresponding lenses provided for backwards compatibility.

ModifyEndpoint (see: smart constructor). Response fields: the modified endpoint; the response's http status code. Request fields:

- The Amazon Resource Name (ARN) of the certificate used for SSL connection.
- The name of the endpoint database. For a MySQL source or target endpoint, do not specify DatabaseName.
- The settings in JSON format for the DMS transfer type of source endpoint. Attributes include the following: ServiceAccessRoleArn, the Amazon Resource Name (ARN) used by the service access IAM role (the role must allow the iam:PassRole action); and BucketName, the name of the S3 bucket to use. Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string. JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string" }
- Settings in JSON format for the source DocumentDB endpoint.
For more information about the available settings, see the configuration properties section in  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DocumentDB.html;Using DocumentDB as a Target for Database Migration Service in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html#CHAP_Target.DynamoDB.ObjectMapping0Using Object Mapping to Migrate Data to DynamoDB in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the target OpenSearch endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.ConfigurationExtra Connection Attributes When Using OpenSearch as a Target for DMS in the &Database Migration Service User Guide. amazonka-dmsThe database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens. amazonka-dms'The type of endpoint. Valid values are source and target. amazonka-dmsThe database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle",  "postgres",  "mariadb", "aurora", "aurora-postgresql",  "redshift", "s3", "db2",  "db2-zos",  "azuredb", "sybase",  "dynamodb",  "mongodb",  "kinesis", "kafka", "elasticsearch",  "documentdb",  "sqlserver",  "neptune", and  "babelfish". amazonka-dms,If this attribute is Y, the current call to ModifyEndpoint replaces all existing endpoint settings with the exact settings that you specify in this call. 
If this attribute is N, the current call to ModifyEndpoint does two things:It replaces any endpoint settings that already exist with new values, for settings with the same names.It creates new endpoint settings that you specify in the call, for settings with different names.For example, if you call 5create-endpoint ... --endpoint-settings '{"a":1}' ...5, the endpoint has the following endpoint settings:  '{"a":1}'. If you then call 5modify-endpoint ... --endpoint-settings '{"b":2}' ... for the same endpoint, the endpoint has the following settings: '{"a":1,"b":2}'.6However, suppose that you follow this with a call to modify-endpoint ... --endpoint-settings '{"b":2}' --exact-settings ... for that same endpoint again. Then the endpoint has the following settings:  '{"b":2}'. All existing settings are replaced with the exact settings that you specify. amazonka-dmsThe external table definition. amazonka-dmsAdditional attributes associated with the connection. To reset this parameter, pass the empty string ("") as an argument. amazonka-dms:Settings in JSON format for the source GCP MySQL endpoint. amazonka-dmsSettings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DB2.html#CHAP_Source.DB2.ConnectionAttribExtra connection attributes when using Db2 LUW as a source for DMS in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html#CHAP_Target.Kafka.ObjectMapping5Using object mapping to migrate data to a Kafka topic in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the target endpoint for Amazon Kinesis Data Streams. 
For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping=Using object mapping to migrate data to a Kinesis data stream in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.ConnectionAttribExtra connection attributes when using SQL Server as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SQLServer.html#CHAP_Target.SQLServer.ConnectionAttribExtra connection attributes when using SQL Server as a target for DMS in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html#CHAP_Source.MongoDB.ConfigurationEndpoint configuration settings when using MongoDB as a source for Database Migration Service in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the source and target MySQL endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.ConnectionAttribExtra connection attributes when using MySQL as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html#CHAP_Target.MySQL.ConnectionAttribExtra connection attributes when using a MySQL-compatible database as a target for DMS in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the target Amazon Neptune endpoint. 
For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.EndpointSettingsSpecifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the source and target Oracle endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.ConnectionAttribExtra connection attributes when using Oracle as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html#CHAP_Target.Oracle.ConnectionAttribExtra connection attributes when using Oracle as a target for DMS in the &Database Migration Service User Guide. amazonka-dms:The password to be used to login to the endpoint database. amazonka-dms'The port used by the endpoint database. amazonka-dmsSettings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttribExtra connection attributes when using PostgreSQL as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html#CHAP_Target.PostgreSQL.ConnectionAttribExtra connection attributes when using PostgreSQL as a target for DMS in the &Database Migration Service User Guide. amazonka-dms6Settings in JSON format for the Redis target endpoint. amazonka-dmsSettings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.ConfiguringExtra Connection Attributes When Using Amazon S3 as a Target for DMS in the &Database Migration Service User Guide. amazonka-dms;The name of the server where the endpoint database resides. 
amazonka-dmsThe Amazon Resource Name (ARN) for the IAM role you want to use to modify the endpoint. The role must allow the  iam:PassRole action. amazonka-dmsThe SSL mode used to connect to the endpoint. The default value is none. amazonka-dmsSettings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html#CHAP_Source.SAP.ConnectionAttribExtra connection attributes when using SAP ASE as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SAP.html#CHAP_Target.SAP.ConnectionAttribExtra connection attributes when using SAP ASE as a target for DMS in the &Database Migration Service User Guide. amazonka-dms;The user name to be used to login to the endpoint database. amazonka-dmsThe Amazon Resource Name (ARN) string that uniquely identifies the endpoint. amazonka-dmsCreate a value of " with all optional fields omitted.Use  0https://hackage.haskell.org/package/generic-lens generic-lens or  *https://hackage.haskell.org/package/opticsoptics! to modify other optional fields.The following record fields are available, with the corresponding lenses provided for backwards compatibility:,  - The Amazon Resource Name (ARN) of the certificate used for SSL connection.,  - The name of the endpoint database. For a MySQL source or target endpoint, do not specify DatabaseName.,  - The settings in JSON format for the DMS transfer type of source endpoint.!Attributes include the following:serviceAccessRoleArn - The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the  iam:PassRole action..BucketName - The name of the S3 bucket to use.4Shorthand syntax for these settings is as follows: .ServiceAccessRoleArn=string ,BucketName=string/JSON syntax for these settings is as follows: <{ "ServiceAccessRoleArn": "string", "BucketName": "string"} ,  - Settings in JSON format for the source DocumentDB endpoint. 
For more information about the available settings, see the configuration properties section in  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DocumentDB.html;Using DocumentDB as a Target for Database Migration Service in the &Database Migration Service User Guide.,  - Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html#CHAP_Target.DynamoDB.ObjectMapping0Using Object Mapping to Migrate Data to DynamoDB in the &Database Migration Service User Guide.,  - Settings in JSON format for the target OpenSearch endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.ConfigurationExtra Connection Attributes When Using OpenSearch as a Target for DMS in the &Database Migration Service User Guide.,  - The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens., * - The type of endpoint. Valid values are source and target.,  - The database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle",  "postgres",  "mariadb", "aurora", "aurora-postgresql",  "redshift", "s3", "db2",  "db2-zos",  "azuredb", "sybase",  "dynamodb",  "mongodb",  "kinesis", "kafka", "elasticsearch",  "documentdb",  "sqlserver",  "neptune", and  "babelfish"., / - If this attribute is Y, the current call to ModifyEndpoint replaces all existing endpoint settings with the exact settings that you specify in this call. 
If this attribute is N, the current call to ModifyEndpoint does two things:It replaces any endpoint settings that already exist with new values, for settings with the same names.It creates new endpoint settings that you specify in the call, for settings with different names.For example, if you call 5create-endpoint ... --endpoint-settings '{"a":1}' ...5, the endpoint has the following endpoint settings:  '{"a":1}'. If you then call 5modify-endpoint ... --endpoint-settings '{"b":2}' ... for the same endpoint, the endpoint has the following settings: '{"a":1,"b":2}'.6However, suppose that you follow this with a call to modify-endpoint ... --endpoint-settings '{"b":2}' --exact-settings ... for that same endpoint again. Then the endpoint has the following settings:  '{"b":2}'. All existing settings are replaced with the exact settings that you specify., ! - The external table definition.,  - Additional attributes associated with the connection. To reset this parameter, pass the empty string ("") as an argument., = - Settings in JSON format for the source GCP MySQL endpoint.,  - Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DB2.html#CHAP_Source.DB2.ConnectionAttribExtra connection attributes when using Db2 LUW as a source for DMS in the &Database Migration Service User Guide.,  - Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html#CHAP_Target.Kafka.ObjectMapping5Using object mapping to migrate data to a Kafka topic in the &Database Migration Service User Guide.,  - Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. 
For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping=Using object mapping to migrate data to a Kinesis data stream in the &Database Migration Service User Guide.,  - Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.ConnectionAttribExtra connection attributes when using SQL Server as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SQLServer.html#CHAP_Target.SQLServer.ConnectionAttribExtra connection attributes when using SQL Server as a target for DMS in the &Database Migration Service User Guide.,  - Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html#CHAP_Source.MongoDB.ConfigurationEndpoint configuration settings when using MongoDB as a source for Database Migration Service in the &Database Migration Service User Guide.,  - Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.ConnectionAttribExtra connection attributes when using MySQL as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html#CHAP_Target.MySQL.ConnectionAttribExtra connection attributes when using a MySQL-compatible database as a target for DMS in the &Database Migration Service User Guide.,  - Settings in JSON format for the target Amazon Neptune endpoint. 
For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.EndpointSettingsSpecifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target in the &Database Migration Service User Guide.,  - Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.ConnectionAttribExtra connection attributes when using Oracle as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html#CHAP_Target.Oracle.ConnectionAttribExtra connection attributes when using Oracle as a target for DMS in the &Database Migration Service User Guide., = - The password to be used to login to the endpoint database., * - The port used by the endpoint database.,  - Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttribExtra connection attributes when using PostgreSQL as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html#CHAP_Target.PostgreSQL.ConnectionAttribExtra connection attributes when using PostgreSQL as a target for DMS in the &Database Migration Service User Guide., 9 - Settings in JSON format for the Redis target endpoint.,  - Undocumented member.,  - Settings in JSON format for the target Amazon S3 endpoint. 
For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.ConfiguringExtra Connection Attributes When Using Amazon S3 as a Target for DMS in the &Database Migration Service User Guide., > - The name of the server where the endpoint database resides.,  - The Amazon Resource Name (ARN) for the IAM role you want to use to modify the endpoint. The role must allow the  iam:PassRole action.,  - The SSL mode used to connect to the endpoint. The default value is none.,  - Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html#CHAP_Source.SAP.ConnectionAttribExtra connection attributes when using SAP ASE as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SAP.html#CHAP_Target.SAP.ConnectionAttribExtra connection attributes when using SAP ASE as a target for DMS in the &Database Migration Service User Guide., > - The user name to be used to login to the endpoint database.,  - The Amazon Resource Name (ARN) string that uniquely identifies the endpoint. amazonka-dmsThe Amazon Resource Name (ARN) of the certificate used for SSL connection. amazonka-dmsThe name of the endpoint database. For a MySQL source or target endpoint, do not specify DatabaseName. amazonka-dmsThe settings in JSON format for the DMS transfer type of source endpoint.!Attributes include the following:serviceAccessRoleArn - The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the  iam:PassRole action..BucketName - The name of the S3 bucket to use.4Shorthand syntax for these settings is as follows: .ServiceAccessRoleArn=string ,BucketName=string/JSON syntax for these settings is as follows: <{ "ServiceAccessRoleArn": "string", "BucketName": "string"}  amazonka-dmsSettings in JSON format for the source DocumentDB endpoint. 
For more information about the available settings, see the configuration properties section in  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DocumentDB.html;Using DocumentDB as a Target for Database Migration Service in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html#CHAP_Target.DynamoDB.ObjectMapping0Using Object Mapping to Migrate Data to DynamoDB in the &Database Migration Service User Guide. amazonka-dmsSettings in JSON format for the target OpenSearch endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.ConfigurationExtra Connection Attributes When Using OpenSearch as a Target for DMS in the &Database Migration Service User Guide. amazonka-dmsThe database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens. amazonka-dms'The type of endpoint. Valid values are source and target. amazonka-dmsThe database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle",  "postgres",  "mariadb", "aurora", "aurora-postgresql",  "redshift", "s3", "db2",  "db2-zos",  "azuredb", "sybase",  "dynamodb",  "mongodb",  "kinesis", "kafka", "elasticsearch",  "documentdb",  "sqlserver",  "neptune", and  "babelfish". amazonka-dms,If this attribute is Y, the current call to ModifyEndpoint replaces all existing endpoint settings with the exact settings that you specify in this call. 
If this attribute is N, the current call to ModifyEndpoint does two things:

- It replaces any endpoint settings that already exist with new values, for settings with the same names.
- It creates new endpoint settings that you specify in the call, for settings with different names.

For example, if you call create-endpoint ... --endpoint-settings '{"a":1}' ..., the endpoint has the endpoint settings '{"a":1}'. If you then call modify-endpoint ... --endpoint-settings '{"b":2}' ... for the same endpoint, the endpoint has the settings '{"a":1,"b":2}'.

However, suppose that you follow this with a call to modify-endpoint ... --endpoint-settings '{"b":2}' --exact-settings ... for that same endpoint again. Then the endpoint has the settings '{"b":2}': all existing settings are replaced with the exact settings that you specify.

The external table definition.

Additional attributes associated with the connection. To reset this parameter, pass the empty string ("") as an argument.

Settings in JSON format for the source GCP MySQL endpoint.

Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see "Extra connection attributes when using Db2 LUW as a source for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DB2.html#CHAP_Source.DB2.ConnectionAttrib) in the Database Migration Service User Guide.

Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see "Using object mapping to migrate data to a Kafka topic" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html#CHAP_Target.Kafka.ObjectMapping) in the Database Migration Service User Guide.

Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see "Using object mapping to migrate data to a Kinesis data stream" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping) in the Database Migration Service User Guide.

Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see "Extra connection attributes when using SQL Server as a source for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.ConnectionAttrib) and "Extra connection attributes when using SQL Server as a target for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SQLServer.html#CHAP_Target.SQLServer.ConnectionAttrib) in the Database Migration Service User Guide.

Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in "Endpoint configuration settings when using MongoDB as a source for Database Migration Service" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html#CHAP_Source.MongoDB.Configuration) in the Database Migration Service User Guide.

Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see "Extra connection attributes when using MySQL as a source for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.ConnectionAttrib) and "Extra connection attributes when using a MySQL-compatible database as a target for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html#CHAP_Target.MySQL.ConnectionAttrib) in the Database Migration Service User Guide.

Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see "Specifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.EndpointSettings) in the Database Migration Service User Guide.

Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see "Extra connection attributes when using Oracle as a source for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.ConnectionAttrib) and "Extra connection attributes when using Oracle as a target for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html#CHAP_Target.Oracle.ConnectionAttrib) in the Database Migration Service User Guide.

The password to be used to log in to the endpoint database.

The port used by the endpoint database.

Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see "Extra connection attributes when using PostgreSQL as a source for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttrib) and "Extra connection attributes when using PostgreSQL as a target for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html#CHAP_Target.PostgreSQL.ConnectionAttrib) in the Database Migration Service User Guide.

Settings in JSON format for the Redis target endpoint.

Undocumented member.

Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see "Extra Connection Attributes When Using Amazon S3 as a Target for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring) in the Database Migration Service User Guide.
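The ExactSettings behavior described above (merge by default, full replacement with --exact-settings) can be sketched as a pure function over a map of setting names. This is a minimal illustration of the documented semantics, not the service implementation; the Int values stand in for arbitrary JSON values.

```haskell
import qualified Data.Map.Strict as Map

-- Minimal sketch of the ModifyEndpoint settings semantics described above.
-- Settings are modeled as a name-to-value map, a simplification of the real
-- JSON endpoint settings.
applySettings :: Bool               -- True when --exact-settings is passed
              -> Map.Map String Int -- settings the endpoint already has
              -> Map.Map String Int -- settings supplied in this call
              -> Map.Map String Int
applySettings exact existing supplied
  | exact     = supplied                     -- replace all existing settings
  | otherwise = Map.union supplied existing  -- supplied values win on shared names

-- After create-endpoint --endpoint-settings '{"a":1}':
created :: Map.Map String Int
created = Map.fromList [("a", 1)]

-- modify-endpoint --endpoint-settings '{"b":2}'                  => {"a":1,"b":2}
merged :: Map.Map String Int
merged = applySettings False created (Map.fromList [("b", 2)])

-- modify-endpoint --endpoint-settings '{"b":2}' --exact-settings => {"b":2}
replaced :: Map.Map String Int
replaced = applySettings True merged (Map.fromList [("b", 2)])
```

Note that `Map.union` is left-biased, which is exactly the documented rule: a setting supplied in the call replaces an existing setting with the same name, while settings with new names are added.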
The name of the server where the endpoint database resides.

The Amazon Resource Name (ARN) for the IAM role you want to use to modify the endpoint. The role must allow the iam:PassRole action.

The SSL mode used to connect to the endpoint. The default value is none.

Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see "Extra connection attributes when using SAP ASE as a source for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html#CHAP_Source.SAP.ConnectionAttrib) and "Extra connection attributes when using SAP ASE as a target for DMS" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SAP.html#CHAP_Target.SAP.ConnectionAttrib) in the Database Migration Service User Guide.

The user name to be used to log in to the endpoint database.

The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.

ModifyEndpointResponse: Create a value with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The following record fields are available, with the corresponding lenses provided for backwards compatibility:

- The modified endpoint.
- The response's http status code.

(c) 2013-2023 Brendan Hay. Mozilla Public License, v. 2.0. Maintainer: Brendan Hay. Auto-generated; non-portable (GHC extensions); Safe-Inferred.

ListTagsForResource (see: smart constructor).

Response fields:

- A list of tags for the resource.
- The response's http status code.

Request fields:

- The Amazon Resource Name (ARN) string that uniquely identifies the DMS resource to list tags for. This returns a list of keys (names of tags) created for the resource and their associated tag values.
- List of ARNs that identify multiple DMS resources that you want to list tags for. This returns a list of keys (tag names) and their associated tag values. It also returns each tag's associated ResourceArn value, which is the ARN of the resource for which each listed tag is created.

Create a value with all optional fields omitted; use generic-lens or optics to modify other optional fields. The record fields above are available, with corresponding lenses provided for backwards compatibility.

ImportCertificate (see: smart constructor).

Response fields:

- The certificate to be uploaded.
- The response's http status code.

Request fields:

- The contents of a .pem file, which contains an X.509 certificate.
- The location of an imported Oracle Wallet certificate for use with SSL. Provide the name of a .sso file using the fileb:// prefix. You can't provide the certificate inline. Example: filebase64("${path.root}/rds-ca-2019-root.sso"). Note: the corresponding lens automatically encodes and decodes Base64 data. The underlying isomorphism encodes to the Base64 representation during serialisation and decodes from it during deserialisation, so the lens accepts and returns only raw, unencoded data.
- The tags associated with the certificate.
- A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.

Create a value with all optional fields omitted; use generic-lens or optics to modify other optional fields. The record fields above are available, with corresponding lenses provided for backwards compatibility.
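The "smart constructor with all optional fields omitted" pattern that these docs repeat can be sketched without amazonka itself: the constructor takes only the required fields and defaults every optional one, and callers then override what they need. The type and field names below are hypothetical stand-ins for the generated amazonka-dms ones, and plain record-update syntax stands in for generic-lens/optics.

```haskell
-- Hypothetical stand-in for the generated ImportCertificate request type.
-- Required fields are plain; optional fields are Maybe and default to Nothing.
data ImportCertificateReq = ImportCertificateReq
  { certificateIdentifier :: String        -- required: customer-assigned name
  , certificatePem        :: Maybe String  -- optional: contents of a .pem file
  , certificateWallet     :: Maybe String  -- optional: Oracle Wallet location
  } deriving (Show, Eq)

-- The "smart constructor": takes required fields only, omits the rest.
newImportCertificateReq :: String -> ImportCertificateReq
newImportCertificateReq ident = ImportCertificateReq
  { certificateIdentifier = ident
  , certificatePem        = Nothing
  , certificateWallet     = Nothing
  }

-- Callers override just the optional fields they need (amazonka users would
-- do the same with a generic-lens or optics setter instead).
request :: ImportCertificateReq
request = (newImportCertificateReq "my-cert")
            { certificatePem = Just "-----BEGIN CERTIFICATE-----..." }
```

This is why the docs can say "create a value with all optional fields omitted": only the required arguments appear in the constructor's signature, so adding optional fields to the API later does not break existing callers.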
DescribeTableStatistics (see: smart constructor).

Response fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The Amazon Resource Name (ARN) of the replication task.
- The table statistics.
- The response's http status code.

Request fields:

- Filters applied to table statistics. Valid filter names: schema-name | table-name | table-state. A combination of filters creates an AND condition where each record matches all specified filters.
- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 500.
- The Amazon Resource Name (ARN) of the replication task.

Create a value with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The record fields above are available, with corresponding lenses provided for backwards compatibility.

DescribeSchemas (see: smart constructor).

Response fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The described schema.
- The response's http status code.

Request fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.
- The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.

Create a value with all optional fields omitted; use generic-lens or optics to modify other optional fields. The record fields above are available, with corresponding lenses provided for backwards compatibility.
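The Marker/MaxRecords contract that every Describe* operation above repeats can be sketched with a pure pager. The in-memory list stands in for the service's record store, and the Int marker for the opaque token the service returns; this is an illustration of the documented contract, not amazonka's pagination machinery.

```haskell
-- One "request": return up to maxRecords records past the marker, plus the
-- next marker when more records remain (Nothing on the last page).
describePage :: Int -> Maybe Int -> [a] -> ([a], Maybe Int)
describePage maxRecords marker records =
  let start = maybe 0 id marker
      page  = take maxRecords (drop start records)
      next | start + maxRecords < length records = Just (start + maxRecords)
           | otherwise                           = Nothing
  in (page, next)

-- Drain all pages by feeding each returned marker into the next request,
-- exactly as the docs describe: pass the Marker back until none is returned.
collectAll :: Int -> [a] -> [a]
collectAll maxRecords records = go Nothing
  where
    go marker = case describePage maxRecords marker records of
      (page, Just next) -> page ++ go (Just next)
      (page, Nothing)   -> page
```

With 50 records and MaxRecords of 20, `collectAll` issues three requests (20, 20, and 10 records) and the third response carries no marker.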
DescribeReplicationTasks (see: smart constructor).

Response fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- A description of the replication tasks.
- The response's http status code.

Request fields:

- Filters applied to replication tasks. Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn.
- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.
- An option to set to avoid returning information about settings. Use this to reduce overhead when setting information is too large. To use this option, choose true; otherwise, choose false (the default).

Create a value with all optional fields omitted; use generic-lens or optics to modify other optional fields. The record fields above are available, with corresponding lenses provided for backwards compatibility.

DescribeReplicationTaskIndividualAssessments (see: smart constructor).

Response fields:

- A pagination token returned for you to pass to a subsequent request. If you pass this token as the Marker value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request by MaxRecords.
- One or more individual assessments as specified by Filters.
- The response's http status code.

Request fields:

- Filters applied to the individual assessments described in the form of key-value pairs. Valid filter names: replication-task-assessment-run-arn, replication-task-arn, status.
- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.

Create a value with all optional fields omitted; use generic-lens or optics to modify other optional fields. The record fields above are available, with corresponding lenses provided for backwards compatibility.

DescribeReplicationTaskAssessmentRuns (see: smart constructor).

Response fields:

- A pagination token returned for you to pass to a subsequent request. If you pass this token as the Marker value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request by MaxRecords.
- One or more premigration assessment runs as specified by Filters.
- The response's http status code.

Request fields:

- Filters applied to the premigration assessment runs described in the form of key-value pairs. Valid filter names: replication-task-assessment-run-arn, replication-task-arn, replication-instance-arn, status.
- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
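The filter rule stated for these operations ("a combination of filters creates an AND condition where each record matches all specified filters") can be sketched as a predicate combinator. The Task type and its fields are hypothetical stand-ins for DMS records; real callers supply Filter values to the request instead.

```haskell
-- Hypothetical stand-in for a DMS record that filters select over.
data Task = Task { taskArn :: String, taskStatus :: String }
  deriving (Show, Eq)

type TaskFilter = Task -> Bool

-- AND semantics: a record is kept only when it satisfies every filter.
-- With no filters, every record matches ('all' over an empty list is True).
matchesAll :: [TaskFilter] -> Task -> Bool
matchesAll filters record = all ($ record) filters

describeWithFilters :: [TaskFilter] -> [Task] -> [Task]
describeWithFilters filters = filter (matchesAll filters)
```

For example, filtering on both a status of "passed" and a specific ARN returns only records satisfying both conditions, never the union of the two.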
Create a value with all optional fields omitted; use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The record fields above are available, with corresponding lenses provided for backwards compatibility.

DescribeReplicationTaskAssessmentResults (see: smart constructor).

Response fields:

- The Amazon S3 bucket where the task assessment report is located.
- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The task assessment report.
- The response's http status code.

Request fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.
- The Amazon Resource Name (ARN) string that uniquely identifies the task. When this input parameter is specified, the API returns only one result and ignores the values of the MaxRecords and Marker parameters.
Response fields:

- The Amazon S3 bucket where the task assessment report is located.
- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The task assessment report.
- The response's HTTP status code.

DescribeReplicationSubnetGroups

Request parameters:

- Filters - Filters applied to replication subnet groups. Valid filter names: replication-subnet-group-id.
- Marker - An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords - The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.
Response fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- A description of the replication subnet groups.
- The response's HTTP status code.

DescribeReplicationInstances

Request parameters:

- Filters - Filters applied to replication instances. Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version.
- Marker - An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords - The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.
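The Marker/MaxRecords pattern described above can be driven automatically by amazonka's paginator. A minimal sketch, assuming amazonka >= 2.0 and amazonka-dms are installed; the underscore-style lens names follow the code generator's convention and should be checked against the installed version:

```haskell
-- Sketch only: lists replication instances page by page via the
-- Marker-based pagination described above.
import qualified Amazonka
import qualified Amazonka.DMS as DMS
import Conduit (mapM_C, runConduit, (.|))
import Control.Lens ((&), (?~), (^.))
import Control.Monad.IO.Class (liftIO)

main :: IO ()
main = do
  env <- Amazonka.newEnv Amazonka.discover  -- credentials from the environment
  let req =
        DMS.newDescribeReplicationInstances
          & DMS.describeReplicationInstances_maxRecords ?~ 20  -- constraint: 20..100
  Amazonka.runResourceT . runConduit $
    Amazonka.paginate env req
      .| mapM_C
           (\resp ->
              liftIO . print $
                resp ^. DMS.describeReplicationInstancesResponse_marker)
```

`paginate` keeps re-issuing the request with the returned Marker until no further pages remain, so the loop over Marker never has to be written by hand.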
Response fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The replication instances described.
- The response's HTTP status code.

DescribeReplicationInstanceTaskLogs

Request parameters:

- Marker - An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords - The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.
- ReplicationInstanceArn - The Amazon Resource Name (ARN) of the replication instance.

Response fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The Amazon Resource Name (ARN) of the replication instance.
- An array of replication task log metadata. Each member of the array contains the replication task name, ARN, and task log size (in bytes).
- The response's HTTP status code.

DescribeRefreshSchemasStatus

Request parameters:

- EndpointArn - The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
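The smart-constructor pattern these docs describe (required fields positional, optional fields set via lenses) can be sketched concretely. This is a hypothetical example: the ARN is a placeholder and the lens name follows the generator's naming convention:

```haskell
import qualified Amazonka.DMS as DMS
import Control.Lens ((&), (?~))

-- Required fields are positional arguments to the smart constructor;
-- optional fields (here MaxRecords) are set afterwards with lenses.
taskLogsReq :: DMS.DescribeReplicationInstanceTaskLogs
taskLogsReq =
  DMS.newDescribeReplicationInstanceTaskLogs
    "arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE"  -- placeholder ARN
    & DMS.describeReplicationInstanceTaskLogs_maxRecords ?~ 50
```

The same shape applies to every request type in this section; generic-lens or optics field selectors can be substituted for the generated lenses.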
Response fields:

- The status of the schema.
- The response's HTTP status code.

DescribePendingMaintenanceActions

Request parameters:

- ReplicationInstanceArn - The Amazon Resource Name (ARN) of the replication instance.
- Marker - An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords - The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.

Response fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The pending maintenance action.
- The response's HTTP status code.
DescribeOrderableReplicationInstances

Request parameters:

- Marker - An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords - The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.

Response fields:

- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- The orderable replication instances available.
- The response's HTTP status code.

DescribeFleetAdvisorSchemas

Request parameters:

- Filters - If you specify any of the following filters, the output includes information for only those schemas that meet the filter criteria:
  - complexity - The schema's complexity, for example Simple.
  - database-id - The ID of the schema's database.
  - database-ip-address - The IP address of the schema's database.
  - database-name - The name of the schema's database.
  - database-engine - The name of the schema database's engine.
  - original-schema-name - The name of the schema's database's main schema.
  - schema-id - The ID of the schema, for example 15.
  - schema-name - The name of the schema.
  - server-ip-address - The IP address of the schema database's server.
  An example is: describe-fleet-advisor-schemas --filter Name="schema-id",Values="50"

Response fields:

- A collection of SchemaResponse objects.
- If NextToken is returned, there are more results available. The value of NextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged.
- The response's HTTP status code.

DescribeFleetAdvisorSchemaObjectSummary

Request parameters:

- Filters - If you specify any of the following filters, the output includes information for only those schema objects that meet the filter criteria:
  - schema-id - The ID of the schema, for example d4610ac5-e323-4ad9-bc50-eaf7249dfe9d.
  Example: describe-fleet-advisor-schema-object-summary --filter Name="schema-id",Values="50"

Response fields:

- A collection of FleetAdvisorSchemaObjectResponse objects.
- If NextToken is returned, there are more results available. The value of NextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged.
- The response's HTTP status code.

DescribeFleetAdvisorLsaAnalysis

Response fields:

- A list of FleetAdvisorLsaAnalysisResponse objects.
- If NextToken is returned, there are more results available. The value of NextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged.
- The response's HTTP status code.
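Unlike the Marker-based calls earlier in this section, the Fleet Advisor operations paginate with NextToken. A manual loop can be sketched as follows; the lens and field names here are assumptions based on the generator's convention and should be verified against the installed amazonka-dms:

```haskell
import qualified Amazonka
import qualified Amazonka.DMS as DMS
import Control.Lens ((&), (.~), (^.))
import Data.Maybe (fromMaybe)
import Data.Text (Text)

-- Sketch: keep calling until NextToken comes back as Nothing, keeping
-- all other arguments unchanged, as the documentation requires.
fetchAllSchemas :: Amazonka.Env -> IO [DMS.SchemaResponse]
fetchAllSchemas env = go Nothing
  where
    go :: Maybe Text -> IO [DMS.SchemaResponse]
    go token = do
      resp <-
        Amazonka.runResourceT . Amazonka.send env $
          DMS.newDescribeFleetAdvisorSchemas
            & DMS.describeFleetAdvisorSchemas_nextToken .~ token
      let page =
            fromMaybe []
              (resp ^. DMS.describeFleetAdvisorSchemasResponse_fleetAdvisorSchemas)
      case resp ^. DMS.describeFleetAdvisorSchemasResponse_nextToken of
        Nothing -> pure page
        next -> (page ++) <$> go next
```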
DescribeFleetAdvisorDatabases

Request parameters:

- Filters - If you specify any of the following filters, the output includes information for only those databases that meet the filter criteria:
  - database-id - The ID of the database.
  - database-name - The name of the database.
  - database-engine - The name of the database engine.
  - server-ip-address - The IP address of the database server.
  - database-ip-address - The IP address of the database.
  - collector-name - The name of the associated Fleet Advisor collector.
  An example is: describe-fleet-advisor-databases --filter Name="database-id",Values="45"

Response fields:

- Provides descriptions of the Fleet Advisor collector databases, including the database's collector, ID, and name.
- If NextToken is returned, there are more results available. The value of NextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged.
- The response's HTTP status code.

DescribeFleetAdvisorCollectors

Request parameters:

- Filters - If you specify any of the following filters, the output includes information for only those collectors that meet the filter criteria:
  - collector-referenced-id - The ID of the collector agent, for example d4610ac5-e323-4ad9-bc50-eaf7249dfe9d.
  - collector-name - The name of the collector agent.
  An example is: describe-fleet-advisor-collectors --filter Name="collector-referenced-id",Values="d4610ac5-e323-4ad9-bc50-eaf7249dfe9d"

Response fields:

- Provides descriptions of the Fleet Advisor collectors, including the collectors' name and ID, and the latest inventory data.
- If NextToken is returned, there are more results available. The value of NextToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged.
- The response's HTTP status code.
DescribeEvents (see: the corresponding smart constructor). Request fields:
- Duration: The duration of the events to be listed.
- EndTime: The end time for the events to be listed.
- EventCategories: A list of event categories for the source type that you've chosen.
- Filters: Filters applied to events. The only valid filter is replication-instance-id.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords: The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.
- SourceIdentifier: The identifier of an event source.
- SourceType: The type of DMS resource that generates events. Valid values: replication-instance | replication-task.
- StartTime: The start time for the events to be listed.

Create a value of DescribeEvents with all optional fields omitted. Use generic-lens (https://hackage.haskell.org/package/generic-lens) or optics (https://hackage.haskell.org/package/optics) to modify other optional fields. The record fields listed above are available, with corresponding lenses provided for backwards compatibility.

DescribeEventsResponse fields:
- Events: The events described.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- HttpStatus: The response's HTTP status code.

(c) 2013-2023 Brendan Hay. Mozilla Public License, v. 2.0. Maintainer: Brendan Hay. Auto-generated; non-portable (GHC extensions); Safe-Inferred.

DescribeEventSubscriptions (see: the corresponding smart constructor). Request fields:
- Filters: Filters applied to event subscriptions. Valid filter names: event-subscription-arn | event-subscription-id.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords: The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.
- SubscriptionName: The name of the DMS event subscription to be described.
Create a value of DescribeEventSubscriptions with all optional fields omitted; use generic-lens or optics to modify other optional fields. The request fields listed above have corresponding lenses provided for backwards compatibility.

DescribeEventSubscriptionsResponse fields:
- EventSubscriptionsList: A list of event subscriptions.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- HttpStatus: The response's HTTP status code.

DescribeEventCategories (see: the corresponding smart constructor). Request fields:
- Filters: Filters applied to the event categories.
- SourceType: The type of DMS resource that generates events. Valid values: replication-instance | replication-task.

Create a value of DescribeEventCategories with all optional fields omitted; use generic-lens or optics to modify other optional fields.
DescribeEventCategoriesResponse fields:
- EventCategoryGroupList: A list of event categories.
- HttpStatus: The response's HTTP status code.

DescribeEndpoints (see: the corresponding smart constructor). Request fields:
- Filters: Filters applied to the endpoints. Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords: The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.

Create a value of DescribeEndpoints with all optional fields omitted; use generic-lens or optics to modify other optional fields.

DescribeEndpointsResponse fields:
- Endpoints: Endpoint description.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- HttpStatus: The response's HTTP status code.
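The Marker/MaxRecords contract that recurs in every Describe operation above is the standard AWS pagination scheme: each response carries an optional marker which, when passed back in the next request, resumes after the last returned record. A minimal pure sketch of that loop follows; the `Page` type and `fetchPage` function are hypothetical stand-ins for a DescribeEvents-style call, not part of the amazonka-dms API:

```haskell
-- One "response" page: a batch of records plus an optional marker
-- pointing at where the next request should resume.
data Page a = Page { pageRecords :: [a], pageMarker :: Maybe Int }

-- Hypothetical server side: serve up to maxRecords items from the marker on.
fetchPage :: Int -> [a] -> Maybe Int -> Page a
fetchPage maxRecords allRecords marker =
  let start = maybe 0 id marker
      batch = take maxRecords (drop start allRecords)
      next  = start + length batch
  in Page batch (if next < length allRecords then Just next else Nothing)

-- Client side: re-issue the request with the returned marker until no
-- marker comes back, concatenating every page of records.
collectAll :: Int -> [a] -> [a]
collectAll maxRecords allRecords = go Nothing
  where
    go marker =
      let Page batch next = fetchPage maxRecords allRecords marker
      in batch ++ maybe [] (go . Just) next

main :: IO ()
main = print (length (collectAll 100 [1 .. 250 :: Int]))
```

With MaxRecords at its default of 100, 250 records come back over three pages; the caller never sees the paging once the loop is in place.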
DescribeEndpointTypes (see: the corresponding smart constructor). Request fields:
- Filters: Filters applied to the endpoint types. Valid filter names: engine-name | endpoint-type.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords: The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.

Create a value of DescribeEndpointTypes with all optional fields omitted; use generic-lens or optics to modify other optional fields.

DescribeEndpointTypesResponse fields:
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- SupportedEndpointTypes: The types of endpoints that are supported.
- HttpStatus: The response's HTTP status code.

DescribeEndpointSettingsResponse fields:
- EndpointSettings: Descriptions of the endpoint settings available for your source or target database engine.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- HttpStatus: The response's HTTP status code.

DescribeEndpointSettings (see: the corresponding smart constructor). Request fields:
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords: The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
- EngineName: The database engine used for your source or target endpoint.

Create a value of DescribeEndpointSettings with all optional fields omitted; use generic-lens or optics to modify other optional fields.

DescribeConnectionsResponse fields:
- Connections: A description of the connections.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- HttpStatus: The response's HTTP status code.

DescribeConnections (see: the corresponding smart constructor). Request fields:
- Filters: The filters applied to the connection. Valid filter names: endpoint-arn | replication-instance-arn.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords: The maximum number of records to include in the response.
If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: minimum 20, maximum 100.

Create a value of DescribeConnections with all optional fields omitted; use generic-lens or optics to modify other optional fields. The request fields have corresponding lenses provided for backwards compatibility.

DescribeCertificates (see: the corresponding smart constructor). Request fields:
- Filters: Filters applied to the certificates described in the form of key-value pairs. Valid values are certificate-arn and certificate-id.
- Marker: An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords: The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 10.

Create a value of DescribeCertificates with all optional fields omitted; use generic-lens or optics to modify other optional fields.

DescribeCertificatesResponse fields:
- Certificates: The Secure Sockets Layer (SSL) certificates associated with the replication instance.
- Marker: The pagination token.
- HttpStatus: The response's HTTP status code.
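The key-value filters accepted by DescribeCertificates (and the other Describe operations) follow the usual AWS convention: each filter names an attribute and lists acceptable values, a record matches a filter when any listed value equals its attribute, and a record is returned only when it matches every filter. A pure illustration of that semantics, using a hypothetical `Cert` record rather than the generated amazonka-dms types:

```haskell
import Data.Maybe (fromMaybe)

-- Hypothetical certificate record; the real amazonka-dms types differ.
data Cert = Cert { certArn :: String, certId :: String } deriving (Eq, Show)

-- A filter: an attribute name plus acceptable values, mirroring the
-- certificate-arn / certificate-id key-value pairs described above.
data Filter = Filter { filterName :: String, filterValues :: [String] }

-- Look up a certificate attribute by filter name.
attr :: String -> Cert -> Maybe String
attr "certificate-arn" = Just . certArn
attr "certificate-id"  = Just . certId
attr _                 = const Nothing

-- A record matches when every filter admits it (OR within a filter's
-- values, AND across filters; unknown filter names match nothing).
matches :: [Filter] -> Cert -> Bool
matches fs c = all ok fs
  where ok (Filter n vs) = fromMaybe False ((`elem` vs) <$> attr n c)

main :: IO ()
main =
  let certs = [ Cert "arn:aws:dms:us-east-1:111122223333:cert/a" "cert-a"
              , Cert "arn:aws:dms:us-east-1:111122223333:cert/b" "cert-b" ]
  in print (filter (matches [Filter "certificate-id" ["cert-b"]]) certs)
```

An empty filter list matches everything, which is why all of these request fields are optional.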
DescribeApplicableIndividualAssessments (see: the corresponding smart constructor). Request fields:
- Marker: Optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
- MaxRecords: Maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
- MigrationType: Name of the migration type that each provided individual assessment must support.
- ReplicationInstanceArn: ARN of a replication instance on which you want to base the default list of individual assessments.
- ReplicationTaskArn: Amazon Resource Name (ARN) of a migration task on which you want to base the default list of individual assessments.
- SourceEngineName: Name of a database engine that the specified replication instance supports as a source.
- TargetEngineName: Name of a database engine that the specified replication instance supports as a target.

Create a value of DescribeApplicableIndividualAssessments with all optional fields omitted; use generic-lens or optics to modify other optional fields.

DescribeApplicableIndividualAssessmentsResponse fields:
- IndividualAssessmentNames: List of names for the individual assessments supported by the premigration assessment run that you start based on the specified request parameters. For more information on the available individual assessments, including compatibility with different migration task configurations, see "Working with premigration assessment runs" (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.AssessmentReport.html) in the Database Migration Service User Guide.
- Marker: Pagination token returned for you to pass to a subsequent request.
If you pass this token as the Marker value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request by MaxRecords.
- HttpStatus: The response's HTTP status code.

DescribeAccountAttributes (see: the corresponding smart constructor) takes no request parameters; create a value of it with all optional fields omitted, using generic-lens or optics to modify other optional fields.

DescribeAccountAttributesResponse fields:
- AccountQuotas: Account quota information.
- UniqueAccountIdentifier: A unique DMS identifier for an account in a particular Amazon Web Services Region. The value of this identifier has the following format: c99999999999. DMS uses this identifier to name artifacts. For example, DMS uses this identifier to name the default Amazon S3 bucket for storing task assessment reports in a given Amazon Web Services Region. The format of this S3 bucket name is the following: dms-AccountNumber-UniqueAccountIdentifier. Here is an example name for this default S3 bucket: dms-111122223333-c44445555666. DMS supports the UniqueAccountIdentifier parameter in versions 3.1.4 and later.
- HttpStatus: The response's HTTP status code.

DeleteReplicationTaskAssessmentRun (see: the corresponding smart constructor). Request field:
- ReplicationTaskAssessmentRunArn: Amazon Resource Name (ARN) of the premigration assessment run to be deleted.

Create a value of DeleteReplicationTaskAssessmentRun with all optional fields omitted; use generic-lens or optics to modify other optional fields.
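The UniqueAccountIdentifier documentation above pins down the default assessment-report bucket name as dms-AccountNumber-UniqueAccountIdentifier. A one-line helper makes the convention concrete; the function name is ours for illustration, not part of the amazonka-dms API:

```haskell
-- Default S3 bucket name DMS derives for task assessment reports,
-- per the UniqueAccountIdentifier field docs:
-- dms-<AccountNumber>-<UniqueAccountIdentifier>.
defaultAssessmentBucket :: String -> String -> String
defaultAssessmentBucket accountNumber uniqueId =
  "dms-" ++ accountNumber ++ "-" ++ uniqueId

main :: IO ()
main = putStrLn (defaultAssessmentBucket "111122223333" "c44445555666")
```

Applied to the documented example inputs, this reproduces the bucket name given in the field description, dms-111122223333-c44445555666.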
Response fields, with corresponding lenses provided for backwards compatibility:

- ReplicationTaskAssessmentRun - The ReplicationTaskAssessmentRun object for the deleted assessment run.
- HttpStatus - The response's HTTP status code.

DeleteReplicationTask

Request fields:

- ReplicationTaskArn - The Amazon Resource Name (ARN) of the replication task to be deleted.

Response fields:

- ReplicationTask - The deleted replication task.
- HttpStatus - The response's HTTP status code.

DeleteReplicationSubnetGroup

Request fields:

- ReplicationSubnetGroupIdentifier - The subnet group name of the replication instance.

Response fields:

- HttpStatus - The response's HTTP status code.

DeleteReplicationInstance

Request fields:

- ReplicationInstanceArn - The Amazon Resource Name (ARN) of the replication instance to be deleted.
Response fields:

- ReplicationInstance - The replication instance that was deleted.
- HttpStatus - The response's HTTP status code.

DeleteFleetAdvisorDatabases

Request fields:

- DatabaseIds - The IDs of the Fleet Advisor collector databases to delete.

Response fields:

- DatabaseIds - The IDs of the databases that the operation deleted.
- HttpStatus - The response's HTTP status code.

DeleteFleetAdvisorCollector

Request fields:

- CollectorReferencedId - The reference ID of the Fleet Advisor collector to delete.

This operation's response carries no documented fields.

DeleteEventSubscription

Request fields:

- SubscriptionName - The name of the DMS event notification subscription to be deleted.
Response fields:

- EventSubscription - The event subscription that was deleted.
- HttpStatus - The response's HTTP status code.

DeleteEndpoint

Request fields:

- EndpointArn - The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.

Response fields:

- Endpoint - The endpoint that was deleted.
- HttpStatus - The response's HTTP status code.

DeleteConnection

Request fields:

- EndpointArn - The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- ReplicationInstanceArn - The Amazon Resource Name (ARN) of the replication instance.

Response fields:

- Connection - The connection that is being deleted.
- HttpStatus - The response's HTTP status code.

DeleteCertificate

Request fields:

- CertificateArn - The Amazon Resource Name (ARN) of the certificate.

Response fields:

- Certificate - The Secure Sockets Layer (SSL) certificate.
- HttpStatus - The response's HTTP status code.

CreateReplicationTask
Response fields:

- ReplicationTask - The replication task that was created.
- HttpStatus - The response's HTTP status code.

Request fields (see the corresponding smart constructor; create a value with all optional fields omitted):

- CdcStartPosition - Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error. The value can be in date, checkpoint, or LSN/SCN format. Date example: --cdc-start-position "2018-03-08T12:12:12". Checkpoint example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93". LSN example: --cdc-start-position "mysql-bin-changelog.000024:373". When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting the slotName extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttrib).
- CdcStartTime - Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error. Timestamp example: --cdc-start-time "2018-03-08T12:12:12".
- CdcStopPosition - Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time. Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12". Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12".
- ReplicationTaskSettings - Overall settings for the task, in JSON format. For more information, see Specifying Task Settings for Database Migration Service Tasks (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html) in the Database Migration Service User Guide.
- ResourceIdentifier - A friendly name for the resource identifier at the end of the EndpointArn response parameter that is returned in the created Endpoint object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1. For example, this value might result in the EndpointArn value arn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1. If you don't specify a ResourceIdentifier value, DMS generates a default identifier value for the end of EndpointArn.
- Tags - One or more tags to be assigned to the replication task.
- TaskData - Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.TaskData.html) in the Database Migration Service User Guide.
- ReplicationTaskIdentifier - An identifier for the replication task. Constraints: must contain 1-255 alphanumeric characters or hyphens; first character must be a letter.

CreateReplicationInstance

Request fields (see the corresponding smart constructor; create a value with all optional fields omitted):

- PubliclyAccessible - Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true.
- ReplicationSubnetGroupIdentifier - A subnet group to associate with the replication instance.
- ResourceIdentifier - A friendly name for the resource identifier at the end of the EndpointArn response parameter that is returned in the created Endpoint object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1. If you don't specify a ResourceIdentifier value, DMS generates a default identifier value for the end of EndpointArn.
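The three CdcStartPosition formats above (date, checkpoint, and LSN) can be told apart mechanically. This is an illustrative helper, not part of amazonka-dms; the type and function names are stand-ins, and the classification rules are a simplifying assumption based only on the example values quoted above.

```haskell
module Main where

import Data.List (isInfixOf, isPrefixOf)

data CdcPositionKind = DateKind | CheckpointKind | LsnKind
  deriving (Eq, Show)

-- Classify a CdcStartPosition value by its documented surface shape:
--   checkpoint values start with "checkpoint:",
--   date values look like "2018-03-08T12:12:12" (19 chars with a 'T'),
--   anything else is treated as an LSN/SCN such as
--   "mysql-bin-changelog.000024:373".
classifyCdcStartPosition :: String -> CdcPositionKind
classifyCdcStartPosition s
  | "checkpoint:" `isPrefixOf` s        = CheckpointKind
  | "T" `isInfixOf` s && length s == 19 = DateKind
  | otherwise                           = LsnKind

main :: IO ()
main = mapM_ (print . classifyCdcStartPosition)
  [ "2018-03-08T12:12:12"
  , "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002"
  , "mysql-bin-changelog.000024:373"
  ]
```

Remember that CdcStartPosition and CdcStartTime are mutually exclusive: supplying both in one CreateReplicationTask request results in an error.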
- Tags - One or more tags to be assigned to the replication instance.
- VpcSecurityGroupIds - Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
- ReplicationInstanceIdentifier - The replication instance identifier. This parameter is stored as a lowercase string. Constraints: must contain 1-63 alphanumeric characters or hyphens; first character must be a letter; can't end with a hyphen or contain two consecutive hyphens. Example: myrepinstance.
- ReplicationInstanceClass - The compute and memory capacity of the replication instance as defined for the specified replication instance class. For example, to specify the instance class dms.c4.large, set this parameter to "dms.c4.large". For more information on the settings and capacities for the available replication instance classes, see Selecting the right DMS replication instance for your migration (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.InDepth).
- AllocatedStorage - The amount of storage (in gigabytes) to be initially allocated for the replication instance.
- AutoMinorVersionUpgrade - A value that indicates whether minor engine upgrades are applied automatically to the replication instance during the maintenance window. This parameter defaults to true. Default: true.
- AvailabilityZone - The Availability Zone where the replication instance will be created. The default value is a random, system-chosen Availability Zone in the endpoint's Amazon Web Services Region, for example: us-east-1d.
- DnsNameServers - A list of custom DNS name servers supported for the replication instance to access your on-premise source or target database. This list overrides the default name servers supported by the replication instance. You can specify a comma-separated list of internet addresses for up to four on-premise DNS name servers. For example: "1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4".
- EngineVersion - The engine version number of the replication instance. If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
- KmsKeyId - A KMS key identifier that is used to encrypt the data on the replication instance. If you don't specify a value for the KmsKeyId parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
- MultiAZ - Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true.
- NetworkType - The type of IP address protocol used by a replication instance, such as IPv4 only or Dual-stack that supports both IPv4 and IPv6 addressing. IPv6 only is not yet supported.
- PreferredMaintenanceWindow - The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC). Format: ddd:hh24:mi-ddd:hh24:mi. Default: a 30-minute window selected at random from an 8-hour block of time per Amazon Web Services Region, occurring on a random day of the week. Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. Constraints: minimum 30-minute window.
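The PreferredMaintenanceWindow format above (ddd:hh24:mi-ddd:hh24:mi, minimum 30 minutes) can be checked locally before sending a request. This is an illustrative sketch, not amazonka-dms API: the function names are stand-ins, and it makes the simplifying assumption that the window does not wrap past the end of the week.

```haskell
module Main where

import Data.Char (isDigit)
import Data.List (elemIndex)

weekDays :: [String]
weekDays = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

-- Split a string on a separator character.
splitOn :: Char -> String -> [String]
splitOn c s = case break (== c) s of
  (a, [])       -> [a]
  (a, _ : rest) -> a : splitOn c rest

-- Minutes since the start of the week for one "ddd:hh24:mi" endpoint.
endpointMinutes :: String -> Maybe Int
endpointMinutes s = case splitOn ':' s of
  [d, h, m] -> do
    di <- elemIndex d weekDays
    hh <- readDigits h
    mm <- readDigits m
    if hh < 24 && mm < 60 then Just (di * 1440 + hh * 60 + mm) else Nothing
  _ -> Nothing
  where
    readDigits t
      | not (null t), all isDigit t = Just (read t)
      | otherwise                   = Nothing

-- True when the window parses as ddd:hh24:mi-ddd:hh24:mi and spans at
-- least the documented 30-minute minimum (non-wrapping windows only).
validWindow :: String -> Bool
validWindow w = case break (== '-') w of
  (a, '-' : b) -> case (endpointMinutes a, endpointMinutes b) of
    (Just s, Just e) -> e - s >= 30
    _                -> False
  _ -> False

main :: IO ()
main = print (validWindow "Mon:02:00-Mon:02:30")
```

The service itself also accepts windows that wrap across the week boundary; a production check would account for that case.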
CreateEndpoint

Request fields (see the corresponding smart constructor; create a value with all optional fields omitted):

- DmsTransferSettings - Settings in JSON format, for example: { "ServiceAccessRoleArn": "string", "BucketName": "string" }.
- DynamoDbSettings - Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html#CHAP_Target.DynamoDB.ObjectMapping) in the Database Migration Service User Guide.
- ElasticsearchSettings - Settings in JSON format for the target OpenSearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using OpenSearch as a Target for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.Configuration) in the Database Migration Service User Guide.
- ExternalTableDefinition - The external table definition.
- ExtraConnectionAttributes - Additional attributes associated with the connection. Each attribute is specified as a name-value pair associated by an equal sign (=). Multiple attributes are separated by a semicolon (;) with no additional white space. For information on the attributes available for connecting your source or target endpoint, see Working with DMS Endpoints (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.html) in the Database Migration Service User Guide.
- GcpMySQLSettings - Settings in JSON format for the source GCP MySQL endpoint.
- IBMDb2Settings - Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see Extra connection attributes when using Db2 LUW as a source for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DB2.html#CHAP_Source.DB2.ConnectionAttrib) in the Database Migration Service User Guide.
- KafkaSettings - Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using object mapping to migrate data to a Kafka topic (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html#CHAP_Target.Kafka.ObjectMapping) in the Database Migration Service User Guide.
- KinesisSettings - Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using object mapping to migrate data to a Kinesis data stream (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping) in the Database Migration Service User Guide.
- KmsKeyId - A KMS key identifier that is used to encrypt the connection parameters for the endpoint. If you don't specify a value for the KmsKeyId parameter, then DMS uses your default encryption key. KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.
- MicrosoftSQLServerSettings - Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see Extra connection attributes when using SQL Server as a source for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.ConnectionAttrib) and Extra connection attributes when using SQL Server as a target for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SQLServer.html#CHAP_Target.SQLServer.ConnectionAttrib) in the Database Migration Service User Guide.
- MongoDbSettings - Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see Endpoint configuration settings when using MongoDB as a source for Database Migration Service (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html#CHAP_Source.MongoDB.Configuration) in the Database Migration Service User Guide.
- MySQLSettings - Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see Extra connection attributes when using MySQL as a source for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.ConnectionAttrib) and Extra connection attributes when using a MySQL-compatible database as a target for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html#CHAP_Target.MySQL.ConnectionAttrib) in the Database Migration Service User Guide.
- NeptuneSettings - Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.EndpointSettings) in the Database Migration Service User Guide.
- OracleSettings - Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see Extra connection attributes when using Oracle as a source for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.ConnectionAttrib) and Extra connection attributes when using Oracle as a target for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html#CHAP_Target.Oracle.ConnectionAttrib) in the Database Migration Service User Guide.
- Password - The password to be used to log in to the endpoint database.
- Port - The port used by the endpoint database.
- PostgreSQLSettings - Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttrib) and Extra connection attributes when using PostgreSQL as a target for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html#CHAP_Target.PostgreSQL.ConnectionAttrib) in the Database Migration Service User Guide.
- RedisSettings - Settings in JSON format for the target Redis endpoint.
- ResourceIdentifier - A friendly name for the resource identifier at the end of the EndpointArn response parameter that is returned in the created Endpoint object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1. For example, this value might result in the EndpointArn value arn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1. If you don't specify a ResourceIdentifier value, DMS generates a default identifier value for the end of EndpointArn.
- S3Settings - Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring) in the Database Migration Service User Guide.
- ServerName - The name of the server where the endpoint database resides.
- ServiceAccessRoleArn - The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint. The role must allow the iam:PassRole action.
- SslMode - The Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is none.
- SybaseSettings - Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html#CHAP_Source.SAP.ConnectionAttrib) and Extra connection attributes when using SAP ASE as a target for DMS (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SAP.html#CHAP_Target.SAP.ConnectionAttrib) in the Database Migration Service User Guide.
- Tags - One or more tags to be assigned to the endpoint.
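The ExtraConnectionAttributes wire format described above (name=value pairs joined by semicolons with no additional white space) is easy to build programmatically. This is an illustrative helper, not part of amazonka-dms; the function name is a stand-in, and slotName is the only attribute name taken from this documentation (used with a PostgreSQL source).

```haskell
module Main where

import Data.List (intercalate)

-- Render name/value pairs in the documented ExtraConnectionAttributes
-- format: "name=value" pairs separated by ';' with no extra whitespace.
renderExtraConnectionAttributes :: [(String, String)] -> String
renderExtraConnectionAttributes attrs =
  intercalate ";" [ n ++ "=" ++ v | (n, v) <- attrs ]

main :: IO ()
main = putStrLn (renderExtraConnectionAttributes [("slotName", "my_slot")])
```

The valid attribute names and values depend on the endpoint engine; see the Working with DMS Endpoints page linked above.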
For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DB2.html#CHAP_Source.DB2.ConnectionAttribExtra connection attributes when using Db2 LUW as a source for DMS in the &Database Migration Service User Guide.$, $ - Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html#CHAP_Target.Kafka.ObjectMapping5Using object mapping to migrate data to a Kafka topic in the &Database Migration Service User Guide.$, $ - Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping=Using object mapping to migrate data to a Kinesis data stream in the &Database Migration Service User Guide.$, $ - An KMS key identifier that is used to encrypt the connection parameters for the endpoint.%If you don't specify a value for the KmsKeyId7 parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.$, $ - Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.ConnectionAttribExtra connection attributes when using SQL Server as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SQLServer.html#CHAP_Target.SQLServer.ConnectionAttribExtra connection attributes when using SQL Server as a target for DMS in the &Database Migration Service User Guide.$, $ - Settings in JSON format for the source MongoDB endpoint. 
For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html#CHAP_Source.MongoDB.ConfigurationEndpoint configuration settings when using MongoDB as a source for Database Migration Service in the &Database Migration Service User Guide.$, $ - Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.ConnectionAttribExtra connection attributes when using MySQL as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html#CHAP_Target.MySQL.ConnectionAttribExtra connection attributes when using a MySQL-compatible database as a target for DMS in the &Database Migration Service User Guide.$, $ - Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.EndpointSettingsSpecifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target in the &Database Migration Service User Guide.$, $ - Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.ConnectionAttribExtra connection attributes when using Oracle as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html#CHAP_Target.Oracle.ConnectionAttribExtra connection attributes when using Oracle as a target for DMS in the &Database Migration Service User Guide.$, $> - The password to be used to log in to the endpoint database.$, $* - The port used by the endpoint database.$, $ - Settings in JSON format for the source and target PostgreSQL endpoint. 
For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttribExtra connection attributes when using PostgreSQL as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html#CHAP_Target.PostgreSQL.ConnectionAttribExtra connection attributes when using PostgreSQL as a target for DMS in the &Database Migration Service User Guide.$, $9 - Settings in JSON format for the target Redis endpoint.$, $ - Undocumented member.$, $ - A friendly name for the resource identifier at the end of the  EndpointArn5 response parameter that is returned in the created Endpoint object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1/. For example, this value might result in the  EndpointArn value 7arn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1. If you don't specify a ResourceIdentifier value, DMS generates a default identifier value for the end of  EndpointArn.$, $ - Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.ConfiguringExtra Connection Attributes When Using Amazon S3 as a Target for DMS in the &Database Migration Service User Guide.$, $> - The name of the server where the endpoint database resides.$, $ - The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint. The role must allow the  iam:PassRole action.$, $ - The Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is none$, $ - Settings in JSON format for the source and target SAP ASE endpoint. 
For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html#CHAP_Source.SAP.ConnectionAttribExtra connection attributes when using SAP ASE as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SAP.html#CHAP_Target.SAP.ConnectionAttribExtra connection attributes when using SAP ASE as a target for DMS in the &Database Migration Service User Guide.$, $3 - One or more tags to be assigned to the endpoint.$, $? - The user name to be used to log in to the endpoint database.$, $ - The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen, or contain two consecutive hyphens.$, $* - The type of endpoint. Valid values are source and target.$, $ - The type of engine for the endpoint. Valid values, depending on the  EndpointType value, include "mysql", "oracle",  "postgres",  "mariadb", "aurora", "aurora-postgresql",  "opensearch",  "redshift", "s3", "db2",  "db2-zos",  "azuredb", "sybase",  "dynamodb",  "mongodb",  "kinesis", "kafka", "elasticsearch", "docdb",  "sqlserver",  "neptune", and  "babelfish".$ amazonka-dms3The Amazon Resource Name (ARN) for the certificate.$ amazonka-dmsThe name of the endpoint database. For a MySQL source or target endpoint, do not specify DatabaseName. To migrate to a specific database, use this setting and  targetDbType.$ amazonka-dmsThe settings in JSON format for the DMS transfer type of source endpoint.(Possible settings include the following:ServiceAccessRoleArn - The Amazon Resource Name (ARN) used by the service access IAM role. The role must allow the  iam:PassRole action. 
BucketName$ - The name of the S3 bucket to use.4Shorthand syntax for these settings is as follows: -ServiceAccessRoleArn=string,BucketName=string/JSON syntax for these settings is as follows: >{ "ServiceAccessRoleArn": "string", "BucketName": "string", } $ amazonka-dmsUndocumented member.$ amazonka-dmsSettings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html#CHAP_Target.DynamoDB.ObjectMapping0Using Object Mapping to Migrate Data to DynamoDB in the &Database Migration Service User Guide.$ amazonka-dmsSettings in JSON format for the target OpenSearch endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.ConfigurationExtra Connection Attributes When Using OpenSearch as a Target for DMS in the %Database Migration Service User Guide.$ amazonka-dmsThe external table definition.$ amazonka-dmsAdditional attributes associated with the connection. Each attribute is specified as a name-value pair associated by an equal sign (=). Multiple attributes are separated by a semicolon (;) with no additional white space. For information on the attributes available for connecting your source or target endpoint, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.htmlWorking with DMS Endpoints in the &Database Migration Service User Guide.$ amazonka-dms:Settings in JSON format for the source GCP MySQL endpoint.$ amazonka-dmsSettings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.DB2.html#CHAP_Source.DB2.ConnectionAttribExtra connection attributes when using Db2 LUW as a source for DMS in the &Database Migration Service User Guide.$ amazonka-dmsSettings in JSON format for the target Apache Kafka endpoint. 
For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html#CHAP_Target.Kafka.ObjectMapping5Using object mapping to migrate data to a Kafka topic in the &Database Migration Service User Guide.$ amazonka-dmsSettings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping=Using object mapping to migrate data to a Kinesis data stream in the &Database Migration Service User Guide.$ amazonka-dmsAn KMS key identifier that is used to encrypt the connection parameters for the endpoint.%If you don't specify a value for the KmsKeyId7 parameter, then DMS uses your default encryption key.KMS creates the default encryption key for your Amazon Web Services account. Your Amazon Web Services account has a different default encryption key for each Amazon Web Services Region.$ amazonka-dmsSettings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.ConnectionAttribExtra connection attributes when using SQL Server as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SQLServer.html#CHAP_Target.SQLServer.ConnectionAttribExtra connection attributes when using SQL Server as a target for DMS in the &Database Migration Service User Guide.$ amazonka-dmsSettings in JSON format for the source MongoDB endpoint. 
For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html#CHAP_Source.MongoDB.ConfigurationEndpoint configuration settings when using MongoDB as a source for Database Migration Service in the &Database Migration Service User Guide.$ amazonka-dmsSettings in JSON format for the source and target MySQL endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.ConnectionAttribExtra connection attributes when using MySQL as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html#CHAP_Target.MySQL.ConnectionAttribExtra connection attributes when using a MySQL-compatible database as a target for DMS in the &Database Migration Service User Guide.$ amazonka-dmsSettings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.EndpointSettingsSpecifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target in the &Database Migration Service User Guide.$ amazonka-dmsSettings in JSON format for the source and target Oracle endpoint. For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.ConnectionAttribExtra connection attributes when using Oracle as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html#CHAP_Target.Oracle.ConnectionAttribExtra connection attributes when using Oracle as a target for DMS in the &Database Migration Service User Guide.$ amazonka-dms;The password to be used to log in to the endpoint database.$ amazonka-dms'The port used by the endpoint database.$ amazonka-dmsSettings in JSON format for the source and target PostgreSQL endpoint. 
For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.ConnectionAttribExtra connection attributes when using PostgreSQL as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html#CHAP_Target.PostgreSQL.ConnectionAttribExtra connection attributes when using PostgreSQL as a target for DMS in the &Database Migration Service User Guide.$ amazonka-dms6Settings in JSON format for the target Redis endpoint.$ amazonka-dmsUndocumented member.$ amazonka-dms?A friendly name for the resource identifier at the end of the  EndpointArn5 response parameter that is returned in the created Endpoint object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1/. For example, this value might result in the  EndpointArn value 7arn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1. If you don't specify a ResourceIdentifier value, DMS generates a default identifier value for the end of  EndpointArn.$ amazonka-dmsSettings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.ConfiguringExtra Connection Attributes When Using Amazon S3 as a Target for DMS in the &Database Migration Service User Guide.$ amazonka-dms;The name of the server where the endpoint database resides.$ amazonka-dmsThe Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint. The role must allow the  iam:PassRole action.$ amazonka-dmsThe Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is none$ amazonka-dmsSettings in JSON format for the source and target SAP ASE endpoint. 
For information about other available settings, see  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html#CHAP_Source.SAP.ConnectionAttribExtra connection attributes when using SAP ASE as a source for DMS and  https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SAP.html#CHAP_Target.SAP.ConnectionAttribExtra connection attributes when using SAP ASE as a target for DMS in the &Database Migration Service User Guide.$ amazonka-dms0One or more tags to be assigned to the endpoint.$ amazonka-dms > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ? ? ? ? ??????????@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@AAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCDDDDDDDDDDDDDDDDEEEEEEEEEEEEEEEEEEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFGGGGGGGGGGGGGGGGGGGGGGGHHHHHHHHHHHHHHHHHHHHIIIIIIIIIIIIIIIIIIIIIIIIJJJJJJJJJJJJJJJJJJJJJJJJKKKKKKKKKKKKKKKKKKKKKKKKKLLLLLLLLLLLLLLLLMMMMMMMMMMMMMMMMMMMMMMNNNNNNNNNNNNNNNNNNNNOOOOOOOOOOOOOOOOOOOOOOOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPQQQQQQQQQQQQQQRRRRRRRRRRRRRRRRRSSSSSSSSSSSSSSSSSSSSSSSSTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVWWWWWWWWWWWWWWWWWWWWWWWWWXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXYYYYYYYYYYYYYYZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ[[[[[[[[[[[[[[[[[[[[[[[[[[\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\]]]]]]]]]]]]]]]]]]]]]]]]]]]]^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^____________________________````````````````````````````````````aaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbccccccccccccccccccccccccccccccccddddddddddddddddddddddddddddddeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeffffffffffffffffffffffffffffffgggggggggggggggggggggggggggggggggggggggggggghhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiijjjjjjjjjjjjjjjjjjjjjjjjjjj
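As a concrete illustration of the DmsTransferSettings shapes documented above, here is the JSON form filled in with placeholder values (the bucket name and role ARN are hypothetical, not taken from this package):

```json
{
  "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/my-dms-access-role",
  "BucketName": "my-migration-bucket"
}
```

The equivalent shorthand form would be ServiceAccessRoleArn=arn:aws:iam::123456789012:role/my-dms-access-role,BucketName=my-migration-bucket, with no white space around the comma or equal signs, matching the rule stated for extra connection attributes.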
Package: amazonka-dms-2.0

Type modules: Amazonka.DMS.Types, plus one module per type: Amazonka.DMS.Types.AccountQuota, AuthMechanismValue, AuthTypeValue, AvailabilityZone, CannedAclForObjectsValue, Certificate, CharLengthSemantics, CollectorHealthCheck, CollectorResponse, CollectorShortInfoResponse, CollectorStatus, CompressionTypeValue, Connection, DataFormatValue, DatabaseInstanceSoftwareDetailsResponse, DatabaseResponse, DatabaseShortInfoResponse, DatePartitionDelimiterValue, DatePartitionSequenceValue, DmsSslModeValue, DmsTransferSettings, DocDbSettings, DynamoDbSettings, ElasticsearchSettings, EncodingTypeValue, EncryptionModeValue, Endpoint, EndpointSetting, EndpointSettingTypeValue, Event, EventCategoryGroup, EventSubscription, Filter, FleetAdvisorLsaAnalysisResponse, FleetAdvisorSchemaObjectResponse, GcpMySQLSettings, IBMDb2Settings, InventoryData, KafkaSecurityProtocol, KafkaSettings, KinesisSettings, MessageFormatValue, MicrosoftSQLServerSettings, MigrationTypeValue, MongoDbSettings, MySQLSettings, NeptuneSettings, NestingLevelValue, OracleSettings, OrderableReplicationInstance, ParquetVersionValue, PendingMaintenanceAction, PluginNameValue, PostgreSQLSettings, RedisAuthTypeValue, RedisSettings, RedshiftSettings, RefreshSchemasStatus, RefreshSchemasStatusTypeValue, ReleaseStatusValues, ReloadOptionValue, ReplicationEndpointTypeValue, ReplicationInstance, ReplicationInstanceTaskLog, ReplicationPendingModifiedValues, ReplicationSubnetGroup, ReplicationTask, ReplicationTaskAssessmentResult, ReplicationTaskAssessmentRun, ReplicationTaskAssessmentRunProgress, ReplicationTaskIndividualAssessment, ReplicationTaskStats, ResourcePendingMaintenanceActions, S3Settings, SafeguardPolicy, SchemaResponse, SchemaShortInfoResponse, ServerShortInfoResponse, SourceType, SslSecurityProtocolValue, StartReplicationTaskTypeValue, Subnet, SupportedEndpointType, SybaseSettings, TableStatistics, TableToReload, Tag, TargetDbType, VersionStatus, VpcSecurityGroupMembership.

Operation modules: Amazonka.DMS, plus Amazonka.DMS.AddTagsToResource, ApplyPendingMaintenanceAction, CancelReplicationTaskAssessmentRun, CreateEndpoint, CreateEventSubscription, CreateFleetAdvisorCollector, CreateReplicationInstance, CreateReplicationSubnetGroup, CreateReplicationTask, DeleteCertificate, DeleteConnection, DeleteEndpoint, DeleteEventSubscription, DeleteFleetAdvisorCollector, DeleteFleetAdvisorDatabases, DeleteReplicationInstance, DeleteReplicationSubnetGroup, DeleteReplicationTask, DeleteReplicationTaskAssessmentRun, DescribeAccountAttributes, DescribeApplicableIndividualAssessments, DescribeCertificates, DescribeConnections, DescribeEndpointSettings, DescribeEndpointTypes, DescribeEndpoints, DescribeEventCategories, DescribeEventSubscriptions, DescribeEvents, DescribeFleetAdvisorCollectors, DescribeFleetAdvisorDatabases, DescribeFleetAdvisorLsaAnalysis, DescribeFleetAdvisorSchemaObjectSummary, DescribeFleetAdvisorSchemas, DescribeOrderableReplicationInstances, DescribePendingMaintenanceActions, DescribeRefreshSchemasStatus, DescribeReplicationInstanceTaskLogs, DescribeReplicationInstances, DescribeReplicationSubnetGroups, DescribeReplicationTaskAssessmentResults, DescribeReplicationTaskAssessmentRuns, DescribeReplicationTaskIndividualAssessments, DescribeReplicationTasks, DescribeSchemas, DescribeTableStatistics, ImportCertificate, ListTagsForResource, ModifyEndpoint, ModifyEventSubscription, ModifyReplicationInstance, ModifyReplicationSubnetGroup, ModifyReplicationTask, MoveReplicationTask, RebootReplicationInstance, RefreshSchemas, ReloadTables, RemoveTagsFromResource, RunFleetAdvisorLsaAnalysis, StartReplicationTask, StartReplicationTaskAssessment, StartReplicationTaskAssessmentRun, StopReplicationTask, TestConnection, UpdateSubscriptionsToEventBridge.

Support modules: Amazonka.DMS.Lens, Amazonka.DMS.Waiters.
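As a usage sketch tying the modules above together: creating an S3 target endpoint via Amazonka.DMS.CreateEndpoint might look like the following. This is untested and assumes the amazonka-2.0 generated naming conventions (newCreateEndpoint taking the three required members EndpointIdentifier, EndpointType, EngineName; lenses from Amazonka.DMS.Lens); the endpoint identifier, bucket, and role ARN are hypothetical placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Untested sketch, assuming amazonka-2.0 conventions for the generated
-- smart constructors and lenses in amazonka-dms.
import qualified Amazonka
import qualified Amazonka.DMS as DMS
import Amazonka.DMS.Lens
import Control.Lens ((&), (?~))

main :: IO ()
main = do
  -- Discover credentials/region from the environment.
  env <- Amazonka.newEnv Amazonka.discover
  let s3 =
        DMS.newS3Settings
          & s3Settings_bucketName ?~ "my-migration-bucket"  -- hypothetical
          & s3Settings_serviceAccessRoleArn
              ?~ "arn:aws:iam::123456789012:role/my-dms-access-role"
      req =
        DMS.newCreateEndpoint
          "my-s3-target"                            -- EndpointIdentifier
          DMS.ReplicationEndpointTypeValue_Target   -- EndpointType
          "s3"                                      -- EngineName
          & createEndpoint_s3Settings ?~ s3
  resp <- Amazonka.runResourceT (Amazonka.send env req)
  print resp
```

The optional members documented above (SslMode, ExtraConnectionAttributes, Tags, and so on) would be set the same way, through their corresponding lenses, before sending the request.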
alue$fFromJSONKeyDmsSslModeValue$fToJSONDmsSslModeValue$fToJSONKeyDmsSslModeValue$fFromXMLDmsSslModeValue$fToXMLDmsSslModeValueDmsTransferSettingsDmsTransferSettings'$$sel:bucketName:DmsTransferSettings'.$sel:serviceAccessRoleArn:DmsTransferSettings'newDmsTransferSettingsdmsTransferSettings_bucketName(dmsTransferSettings_serviceAccessRoleArn$fToJSONDmsTransferSettings$fNFDataDmsTransferSettings$fHashableDmsTransferSettings$fFromJSONDmsTransferSettings$fEqDmsTransferSettings$fReadDmsTransferSettings$fShowDmsTransferSettings$fGenericDmsTransferSettingsDynamoDbSettingsDynamoDbSettings'+$sel:serviceAccessRoleArn:DynamoDbSettings'newDynamoDbSettings%dynamoDbSettings_serviceAccessRoleArn$fToJSONDynamoDbSettings$fNFDataDynamoDbSettings$fHashableDynamoDbSettings$fFromJSONDynamoDbSettings$fEqDynamoDbSettings$fReadDynamoDbSettings$fShowDynamoDbSettings$fGenericDynamoDbSettingsElasticsearchSettingsElasticsearchSettings'.$sel:errorRetryDuration:ElasticsearchSettings'3$sel:fullLoadErrorPercentage:ElasticsearchSettings'-$sel:useNewMappingType:ElasticsearchSettings'0$sel:serviceAccessRoleArn:ElasticsearchSettings''$sel:endpointUri:ElasticsearchSettings'newElasticsearchSettings(elasticsearchSettings_errorRetryDuration-elasticsearchSettings_fullLoadErrorPercentage'elasticsearchSettings_useNewMappingType*elasticsearchSettings_serviceAccessRoleArn!elasticsearchSettings_endpointUri$fToJSONElasticsearchSettings$fNFDataElasticsearchSettings$fHashableElasticsearchSettings$fFromJSONElasticsearchSettings$fEqElasticsearchSettings$fReadElasticsearchSettings$fShowElasticsearchSettings$fGenericElasticsearchSettingsEncodingTypeValueEncodingTypeValue'fromEncodingTypeValue 
EncodingTypeValue_Rle_dictionary"EncodingTypeValue_Plain_dictionaryEncodingTypeValue_Plain$fShowEncodingTypeValue$fReadEncodingTypeValue$fEqEncodingTypeValue$fOrdEncodingTypeValue$fGenericEncodingTypeValue$fHashableEncodingTypeValue$fNFDataEncodingTypeValue$fFromTextEncodingTypeValue$fToTextEncodingTypeValue$fToByteStringEncodingTypeValue$fToLogEncodingTypeValue$fToHeaderEncodingTypeValue$fToQueryEncodingTypeValue$fFromJSONEncodingTypeValue$fFromJSONKeyEncodingTypeValue$fToJSONEncodingTypeValue$fToJSONKeyEncodingTypeValue$fFromXMLEncodingTypeValue$fToXMLEncodingTypeValueEncryptionModeValueEncryptionModeValue'fromEncryptionModeValueEncryptionModeValue_Sse_s3EncryptionModeValue_Sse_kms$fShowEncryptionModeValue$fReadEncryptionModeValue$fEqEncryptionModeValue$fOrdEncryptionModeValue$fGenericEncryptionModeValue$fHashableEncryptionModeValue$fNFDataEncryptionModeValue$fFromTextEncryptionModeValue$fToTextEncryptionModeValue!$fToByteStringEncryptionModeValue$fToLogEncryptionModeValue$fToHeaderEncryptionModeValue$fToQueryEncryptionModeValue$fFromJSONEncryptionModeValue $fFromJSONKeyEncryptionModeValue$fToJSONEncryptionModeValue$fToJSONKeyEncryptionModeValue$fFromXMLEncryptionModeValue$fToXMLEncryptionModeValueEndpointSettingTypeValueEndpointSettingTypeValue'fromEndpointSettingTypeValueEndpointSettingTypeValue_String EndpointSettingTypeValue_IntegerEndpointSettingTypeValue_Enum EndpointSettingTypeValue_Boolean$fShowEndpointSettingTypeValue$fReadEndpointSettingTypeValue$fEqEndpointSettingTypeValue$fOrdEndpointSettingTypeValue!$fGenericEndpointSettingTypeValue"$fHashableEndpointSettingTypeValue $fNFDataEndpointSettingTypeValue"$fFromTextEndpointSettingTypeValue $fToTextEndpointSettingTypeValue&$fToByteStringEndpointSettingTypeValue$fToLogEndpointSettingTypeValue"$fToHeaderEndpointSettingTypeValue!$fToQueryEndpointSettingTypeValue"$fFromJSONEndpointSettingTypeValue%$fFromJSONKeyEndpointSettingTypeValue 
$fToJSONEndpointSettingTypeValue#$fToJSONKeyEndpointSettingTypeValue!$fFromXMLEndpointSettingTypeValue$fToXMLEndpointSettingTypeValueEndpointSettingEndpointSetting'#$sel:applicability:EndpointSetting'"$sel:defaultValue:EndpointSetting' $sel:enumValues:EndpointSetting'!$sel:intValueMax:EndpointSetting'!$sel:intValueMin:EndpointSetting'$sel:name:EndpointSetting'$sel:sensitive:EndpointSetting'$sel:type':EndpointSetting'$sel:units:EndpointSetting'newEndpointSettingendpointSetting_applicabilityendpointSetting_defaultValueendpointSetting_enumValuesendpointSetting_intValueMaxendpointSetting_intValueMinendpointSetting_nameendpointSetting_sensitiveendpointSetting_typeendpointSetting_units$fNFDataEndpointSetting$fHashableEndpointSetting$fFromJSONEndpointSetting$fEqEndpointSetting$fReadEndpointSetting$fShowEndpointSetting$fGenericEndpointSettingEventCategoryGroupEventCategoryGroup'($sel:eventCategories:EventCategoryGroup'#$sel:sourceType:EventCategoryGroup'newEventCategoryGroup"eventCategoryGroup_eventCategorieseventCategoryGroup_sourceType$fNFDataEventCategoryGroup$fHashableEventCategoryGroup$fFromJSONEventCategoryGroup$fEqEventCategoryGroup$fReadEventCategoryGroup$fShowEventCategoryGroup$fGenericEventCategoryGroupEventSubscriptionEventSubscription'*$sel:custSubscriptionId:EventSubscription'%$sel:customerAwsId:EventSubscription'$sel:enabled:EventSubscription'+$sel:eventCategoriesList:EventSubscription'#$sel:snsTopicArn:EventSubscription'%$sel:sourceIdsList:EventSubscription'"$sel:sourceType:EventSubscription'$sel:status:EventSubscription'0$sel:subscriptionCreationTime:EventSubscription'newEventSubscription$eventSubscription_custSubscriptionIdeventSubscription_customerAwsIdeventSubscription_enabled%eventSubscription_eventCategoriesListeventSubscription_snsTopicArneventSubscription_sourceIdsListeventSubscription_sourceTypeeventSubscription_status*eventSubscription_subscriptionCreationTime$fNFDataEventSubscription$fHashableEventSubscription$fFromJSONEventSubscription$fEqEventSub
scription$fReadEventSubscription$fShowEventSubscription$fGenericEventSubscriptionFilterFilter'$sel:name:Filter'$sel:values:Filter' newFilter filter_name filter_values$fToJSONFilter$fNFDataFilter$fHashableFilter $fEqFilter $fReadFilter $fShowFilter$fGenericFilterFleetAdvisorLsaAnalysisResponse FleetAdvisorLsaAnalysisResponse'3$sel:lsaAnalysisId:FleetAdvisorLsaAnalysisResponse',$sel:status:FleetAdvisorLsaAnalysisResponse'"newFleetAdvisorLsaAnalysisResponse-fleetAdvisorLsaAnalysisResponse_lsaAnalysisId&fleetAdvisorLsaAnalysisResponse_status'$fNFDataFleetAdvisorLsaAnalysisResponse)$fHashableFleetAdvisorLsaAnalysisResponse)$fFromJSONFleetAdvisorLsaAnalysisResponse#$fEqFleetAdvisorLsaAnalysisResponse%$fReadFleetAdvisorLsaAnalysisResponse%$fShowFleetAdvisorLsaAnalysisResponse($fGenericFleetAdvisorLsaAnalysisResponse FleetAdvisorSchemaObjectResponse!FleetAdvisorSchemaObjectResponse'4$sel:codeLineCount:FleetAdvisorSchemaObjectResponse'/$sel:codeSize:FleetAdvisorSchemaObjectResponse'6$sel:numberOfObjects:FleetAdvisorSchemaObjectResponse'1$sel:objectType:FleetAdvisorSchemaObjectResponse'/$sel:schemaId:FleetAdvisorSchemaObjectResponse'#newFleetAdvisorSchemaObjectResponse.fleetAdvisorSchemaObjectResponse_codeLineCount)fleetAdvisorSchemaObjectResponse_codeSize0fleetAdvisorSchemaObjectResponse_numberOfObjects+fleetAdvisorSchemaObjectResponse_objectType)fleetAdvisorSchemaObjectResponse_schemaId($fNFDataFleetAdvisorSchemaObjectResponse*$fHashableFleetAdvisorSchemaObjectResponse*$fFromJSONFleetAdvisorSchemaObjectResponse$$fEqFleetAdvisorSchemaObjectResponse&$fReadFleetAdvisorSchemaObjectResponse&$fShowFleetAdvisorSchemaObjectResponse)$fGenericFleetAdvisorSchemaObjectResponseIBMDb2SettingsIBMDb2Settings'$sel:currentLsn:IBMDb2Settings'!$sel:databaseName:IBMDb2Settings'%$sel:maxKBytesPerRead:IBMDb2Settings'$sel:password:IBMDb2Settings'$sel:port:IBMDb2Settings'0$sel:secretsManagerAccessRoleArn:IBMDb2Settings'+$sel:secretsManagerSecretId:IBMDb2Settings'$sel:serverName:IBMDb2Settings'*$sel
:setDataCaptureChanges:IBMDb2Settings'$sel:username:IBMDb2Settings'newIBMDb2SettingsiBMDb2Settings_currentLsniBMDb2Settings_databaseNameiBMDb2Settings_maxKBytesPerReadiBMDb2Settings_passwordiBMDb2Settings_port*iBMDb2Settings_secretsManagerAccessRoleArn%iBMDb2Settings_secretsManagerSecretIdiBMDb2Settings_serverName$iBMDb2Settings_setDataCaptureChangesiBMDb2Settings_username$fToJSONIBMDb2Settings$fNFDataIBMDb2Settings$fHashableIBMDb2Settings$fFromJSONIBMDb2Settings$fEqIBMDb2Settings$fShowIBMDb2Settings$fGenericIBMDb2Settings InventoryDataInventoryData'%$sel:numberOfDatabases:InventoryData'#$sel:numberOfSchemas:InventoryData'newInventoryDatainventoryData_numberOfDatabasesinventoryData_numberOfSchemas$fNFDataInventoryData$fHashableInventoryData$fFromJSONInventoryData$fEqInventoryData$fReadInventoryData$fShowInventoryData$fGenericInventoryDataKafkaSecurityProtocolKafkaSecurityProtocol'fromKafkaSecurityProtocol$KafkaSecurityProtocol_Ssl_encryption(KafkaSecurityProtocol_Ssl_authenticationKafkaSecurityProtocol_Sasl_sslKafkaSecurityProtocol_Plaintext$fShowKafkaSecurityProtocol$fReadKafkaSecurityProtocol$fEqKafkaSecurityProtocol$fOrdKafkaSecurityProtocol$fGenericKafkaSecurityProtocol$fHashableKafkaSecurityProtocol$fNFDataKafkaSecurityProtocol$fFromTextKafkaSecurityProtocol$fToTextKafkaSecurityProtocol#$fToByteStringKafkaSecurityProtocol$fToLogKafkaSecurityProtocol$fToHeaderKafkaSecurityProtocol$fToQueryKafkaSecurityProtocol$fFromJSONKafkaSecurityProtocol"$fFromJSONKeyKafkaSecurityProtocol$fToJSONKafkaSecurityProtocol $fToJSONKeyKafkaSecurityProtocol$fFromXMLKafkaSecurityProtocol$fToXMLKafkaSecurityProtocolMessageFormatValueMessageFormatValue'fromMessageFormatValue#MessageFormatValue_Json_unformattedMessageFormatValue_Json$fShowMessageFormatValue$fReadMessageFormatValue$fEqMessageFormatValue$fOrdMessageFormatValue$fGenericMessageFormatValue$fHashableMessageFormatValue$fNFDataMessageFormatValue$fFromTextMessageFormatValue$fToTextMessageFormatValue 
$fToByteStringMessageFormatValue$fToLogMessageFormatValue$fToHeaderMessageFormatValue$fToQueryMessageFormatValue$fFromJSONMessageFormatValue$fFromJSONKeyMessageFormatValue$fToJSONMessageFormatValue$fToJSONKeyMessageFormatValue$fFromXMLMessageFormatValue$fToXMLMessageFormatValueKinesisSettingsKinesisSettings'+$sel:includeControlDetails:KinesisSettings')$sel:includeNullAndEmpty:KinesisSettings'+$sel:includePartitionValue:KinesisSettings'1$sel:includeTableAlterOperations:KinesisSettings'/$sel:includeTransactionDetails:KinesisSettings'#$sel:messageFormat:KinesisSettings'!$sel:noHexPrefix:KinesisSettings'1$sel:partitionIncludeSchemaTable:KinesisSettings'*$sel:serviceAccessRoleArn:KinesisSettings'$sel:streamArn:KinesisSettings'newKinesisSettings%kinesisSettings_includeControlDetails#kinesisSettings_includeNullAndEmpty%kinesisSettings_includePartitionValue+kinesisSettings_includeTableAlterOperations)kinesisSettings_includeTransactionDetailskinesisSettings_messageFormatkinesisSettings_noHexPrefix+kinesisSettings_partitionIncludeSchemaTable$kinesisSettings_serviceAccessRoleArnkinesisSettings_streamArn$fToJSONKinesisSettings$fNFDataKinesisSettings$fHashableKinesisSettings$fFromJSONKinesisSettings$fEqKinesisSettings$fReadKinesisSettings$fShowKinesisSettings$fGenericKinesisSettings KafkaSettingsKafkaSettings'$sel:broker:KafkaSettings')$sel:includeControlDetails:KafkaSettings''$sel:includeNullAndEmpty:KafkaSettings')$sel:includePartitionValue:KafkaSettings'/$sel:includeTableAlterOperations:KafkaSettings'-$sel:includeTransactionDetails:KafkaSettings'!$sel:messageFormat:KafkaSettings'#$sel:messageMaxBytes:KafkaSettings'$sel:noHexPrefix:KafkaSettings'/$sel:partitionIncludeSchemaTable:KafkaSettings' $sel:saslPassword:KafkaSettings' 
$sel:saslUsername:KafkaSettings'$$sel:securityProtocol:KafkaSettings''$sel:sslCaCertificateArn:KafkaSettings'+$sel:sslClientCertificateArn:KafkaSettings'#$sel:sslClientKeyArn:KafkaSettings'($sel:sslClientKeyPassword:KafkaSettings'$sel:topic:KafkaSettings'newKafkaSettingskafkaSettings_broker#kafkaSettings_includeControlDetails!kafkaSettings_includeNullAndEmpty#kafkaSettings_includePartitionValue)kafkaSettings_includeTableAlterOperations'kafkaSettings_includeTransactionDetailskafkaSettings_messageFormatkafkaSettings_messageMaxByteskafkaSettings_noHexPrefix)kafkaSettings_partitionIncludeSchemaTablekafkaSettings_saslPasswordkafkaSettings_saslUsernamekafkaSettings_securityProtocol!kafkaSettings_sslCaCertificateArn%kafkaSettings_sslClientCertificateArnkafkaSettings_sslClientKeyArn"kafkaSettings_sslClientKeyPasswordkafkaSettings_topic$fToJSONKafkaSettings$fNFDataKafkaSettings$fHashableKafkaSettings$fFromJSONKafkaSettings$fEqKafkaSettings$fShowKafkaSettings$fGenericKafkaSettingsMigrationTypeValueMigrationTypeValue'fromMigrationTypeValue$MigrationTypeValue_Full_load_and_cdcMigrationTypeValue_Full_loadMigrationTypeValue_Cdc$fShowMigrationTypeValue$fReadMigrationTypeValue$fEqMigrationTypeValue$fOrdMigrationTypeValue$fGenericMigrationTypeValue$fHashableMigrationTypeValue$fNFDataMigrationTypeValue$fFromTextMigrationTypeValue$fToTextMigrationTypeValue 
$fToByteStringMigrationTypeValue$fToLogMigrationTypeValue$fToHeaderMigrationTypeValue$fToQueryMigrationTypeValue$fFromJSONMigrationTypeValue$fFromJSONKeyMigrationTypeValue$fToJSONMigrationTypeValue$fToJSONKeyMigrationTypeValue$fFromXMLMigrationTypeValue$fToXMLMigrationTypeValueNeptuneSettingsNeptuneSettings'($sel:errorRetryDuration:NeptuneSettings'$$sel:iamAuthEnabled:NeptuneSettings'!$sel:maxFileSize:NeptuneSettings'#$sel:maxRetryCount:NeptuneSettings'*$sel:serviceAccessRoleArn:NeptuneSettings'"$sel:s3BucketName:NeptuneSettings'$$sel:s3BucketFolder:NeptuneSettings'newNeptuneSettings"neptuneSettings_errorRetryDurationneptuneSettings_iamAuthEnabledneptuneSettings_maxFileSizeneptuneSettings_maxRetryCount$neptuneSettings_serviceAccessRoleArnneptuneSettings_s3BucketNameneptuneSettings_s3BucketFolder$fToJSONNeptuneSettings$fNFDataNeptuneSettings$fHashableNeptuneSettings$fFromJSONNeptuneSettings$fEqNeptuneSettings$fReadNeptuneSettings$fShowNeptuneSettings$fGenericNeptuneSettingsNestingLevelValueNestingLevelValue'fromNestingLevelValueNestingLevelValue_OneNestingLevelValue_None$fShowNestingLevelValue$fReadNestingLevelValue$fEqNestingLevelValue$fOrdNestingLevelValue$fGenericNestingLevelValue$fHashableNestingLevelValue$fNFDataNestingLevelValue$fFromTextNestingLevelValue$fToTextNestingLevelValue$fToByteStringNestingLevelValue$fToLogNestingLevelValue$fToHeaderNestingLevelValue$fToQueryNestingLevelValue$fFromJSONNestingLevelValue$fFromJSONKeyNestingLevelValue$fToJSONNestingLevelValue$fToJSONKeyNestingLevelValue$fFromXMLNestingLevelValue$fToXMLNestingLevelValueMongoDbSettingsMongoDbSettings'#$sel:authMechanism:MongoDbSettings' 
$sel:authSource:MongoDbSettings'$sel:authType:MongoDbSettings'"$sel:databaseName:MongoDbSettings''$sel:docsToInvestigate:MongoDbSettings'"$sel:extractDocId:MongoDbSettings'$sel:kmsKeyId:MongoDbSettings'"$sel:nestingLevel:MongoDbSettings'$sel:password:MongoDbSettings'$sel:port:MongoDbSettings'1$sel:secretsManagerAccessRoleArn:MongoDbSettings',$sel:secretsManagerSecretId:MongoDbSettings' $sel:serverName:MongoDbSettings'$sel:username:MongoDbSettings'newMongoDbSettingsmongoDbSettings_authMechanismmongoDbSettings_authSourcemongoDbSettings_authTypemongoDbSettings_databaseName!mongoDbSettings_docsToInvestigatemongoDbSettings_extractDocIdmongoDbSettings_kmsKeyIdmongoDbSettings_nestingLevelmongoDbSettings_passwordmongoDbSettings_port+mongoDbSettings_secretsManagerAccessRoleArn&mongoDbSettings_secretsManagerSecretIdmongoDbSettings_serverNamemongoDbSettings_username$fToJSONMongoDbSettings$fNFDataMongoDbSettings$fHashableMongoDbSettings$fFromJSONMongoDbSettings$fEqMongoDbSettings$fShowMongoDbSettings$fGenericMongoDbSettings DocDbSettingsDocDbSettings' $sel:databaseName:DocDbSettings'%$sel:docsToInvestigate:DocDbSettings' $sel:extractDocId:DocDbSettings'$sel:kmsKeyId:DocDbSettings' 
$sel:nestingLevel:DocDbSettings'$sel:password:DocDbSettings'$sel:port:DocDbSettings'/$sel:secretsManagerAccessRoleArn:DocDbSettings'*$sel:secretsManagerSecretId:DocDbSettings'$sel:serverName:DocDbSettings'$sel:username:DocDbSettings'newDocDbSettingsdocDbSettings_databaseNamedocDbSettings_docsToInvestigatedocDbSettings_extractDocIddocDbSettings_kmsKeyIddocDbSettings_nestingLeveldocDbSettings_passworddocDbSettings_port)docDbSettings_secretsManagerAccessRoleArn$docDbSettings_secretsManagerSecretIddocDbSettings_serverNamedocDbSettings_username$fToJSONDocDbSettings$fNFDataDocDbSettings$fHashableDocDbSettings$fFromJSONDocDbSettings$fEqDocDbSettings$fShowDocDbSettings$fGenericDocDbSettingsOracleSettingsOracleSettings',$sel:accessAlternateDirectly:OracleSettings'+$sel:addSupplementalLogging:OracleSettings'0$sel:additionalArchivedLogDestId:OracleSettings',$sel:allowSelectNestedTables:OracleSettings'&$sel:archivedLogDestId:OracleSettings'%$sel:archivedLogsOnly:OracleSettings' $sel:asmPassword:OracleSettings'$sel:asmServer:OracleSettings'$sel:asmUser:OracleSettings'($sel:charLengthSemantics:OracleSettings'!$sel:databaseName:OracleSettings'$$sel:directPathNoLog:OracleSettings'+$sel:directPathParallelLoad:OracleSettings'/$sel:enableHomogenousTablespace:OracleSettings',$sel:extraArchivedLogDestIds:OracleSettings'-$sel:failTasksOnLobTruncation:OracleSettings'($sel:numberDatatypeScale:OracleSettings'%$sel:oraclePathPrefix:OracleSettings'+$sel:parallelAsmReadThreads:OracleSettings'$sel:password:OracleSettings'$sel:port:OracleSettings'$$sel:readAheadBlocks:OracleSettings''$sel:readTableSpaceName:OracleSettings'&$sel:replacePathPrefix:OracleSettings'"$sel:retryInterval:OracleSettings'0$sel:secretsManagerAccessRoleArn:OracleSettings'9$sel:secretsManagerOracleAsmAccessRoleArn:OracleSettings'4$sel:secretsManagerOracleAsmSecretId:OracleSettings'+$sel:secretsManagerSecretId:OracleSettings')$sel:securityDbEncryption:OracleSettings'-$sel:securityDbEncryptionName:OracleSettings'$sel:serverNam
e:OracleSettings';$sel:spatialDataOptionToGeoJsonFunctionName:OracleSettings'%$sel:standbyDelayTime:OracleSettings'$$sel:trimSpaceInChar:OracleSettings'0$sel:useAlternateFolderForOnline:OracleSettings'$sel:useBFile:OracleSettings'*$sel:useDirectPathFullLoad:OracleSettings'&$sel:useLogminerReader:OracleSettings'"$sel:usePathPrefix:OracleSettings'$sel:username:OracleSettings'newOracleSettings&oracleSettings_accessAlternateDirectly%oracleSettings_addSupplementalLogging*oracleSettings_additionalArchivedLogDestId&oracleSettings_allowSelectNestedTables oracleSettings_archivedLogDestIdoracleSettings_archivedLogsOnlyoracleSettings_asmPasswordoracleSettings_asmServeroracleSettings_asmUser"oracleSettings_charLengthSemanticsoracleSettings_databaseNameoracleSettings_directPathNoLog%oracleSettings_directPathParallelLoad)oracleSettings_enableHomogenousTablespace&oracleSettings_extraArchivedLogDestIds'oracleSettings_failTasksOnLobTruncation"oracleSettings_numberDatatypeScaleoracleSettings_oraclePathPrefix%oracleSettings_parallelAsmReadThreadsoracleSettings_passwordoracleSettings_portoracleSettings_readAheadBlocks!oracleSettings_readTableSpaceName oracleSettings_replacePathPrefixoracleSettings_retryInterval*oracleSettings_secretsManagerAccessRoleArn3oracleSettings_secretsManagerOracleAsmAccessRoleArn.oracleSettings_secretsManagerOracleAsmSecretId%oracleSettings_secretsManagerSecretId#oracleSettings_securityDbEncryption'oracleSettings_securityDbEncryptionNameoracleSettings_serverName5oracleSettings_spatialDataOptionToGeoJsonFunctionNameoracleSettings_standbyDelayTimeoracleSettings_trimSpaceInChar*oracleSettings_useAlternateFolderForOnlineoracleSettings_useBFile$oracleSettings_useDirectPathFullLoad 
oracleSettings_useLogminerReaderoracleSettings_usePathPrefixoracleSettings_username$fToJSONOracleSettings$fNFDataOracleSettings$fHashableOracleSettings$fFromJSONOracleSettings$fEqOracleSettings$fShowOracleSettings$fGenericOracleSettingsParquetVersionValueParquetVersionValue'fromParquetVersionValueParquetVersionValue_Parquet_2_0ParquetVersionValue_Parquet_1_0$fShowParquetVersionValue$fReadParquetVersionValue$fEqParquetVersionValue$fOrdParquetVersionValue$fGenericParquetVersionValue$fHashableParquetVersionValue$fNFDataParquetVersionValue$fFromTextParquetVersionValue$fToTextParquetVersionValue!$fToByteStringParquetVersionValue$fToLogParquetVersionValue$fToHeaderParquetVersionValue$fToQueryParquetVersionValue$fFromJSONParquetVersionValue $fFromJSONKeyParquetVersionValue$fToJSONParquetVersionValue$fToJSONKeyParquetVersionValue$fFromXMLParquetVersionValue$fToXMLParquetVersionValuePendingMaintenanceActionPendingMaintenanceAction'%$sel:action:PendingMaintenanceAction'3$sel:autoAppliedAfterDate:PendingMaintenanceAction'/$sel:currentApplyDate:PendingMaintenanceAction'*$sel:description:PendingMaintenanceAction'.$sel:forcedApplyDate:PendingMaintenanceAction'*$sel:optInStatus:PendingMaintenanceAction'newPendingMaintenanceActionpendingMaintenanceAction_action-pendingMaintenanceAction_autoAppliedAfterDate)pendingMaintenanceAction_currentApplyDate$pendingMaintenanceAction_description(pendingMaintenanceAction_forcedApplyDate$pendingMaintenanceAction_optInStatus 
$fNFDataPendingMaintenanceAction"$fHashablePendingMaintenanceAction"$fFromJSONPendingMaintenanceAction$fEqPendingMaintenanceAction$fReadPendingMaintenanceAction$fShowPendingMaintenanceAction!$fGenericPendingMaintenanceActionPluginNameValuePluginNameValue'fromPluginNameValuePluginNameValue_Test_decodingPluginNameValue_PglogicalPluginNameValue_No_preference$fShowPluginNameValue$fReadPluginNameValue$fEqPluginNameValue$fOrdPluginNameValue$fGenericPluginNameValue$fHashablePluginNameValue$fNFDataPluginNameValue$fFromTextPluginNameValue$fToTextPluginNameValue$fToByteStringPluginNameValue$fToLogPluginNameValue$fToHeaderPluginNameValue$fToQueryPluginNameValue$fFromJSONPluginNameValue$fFromJSONKeyPluginNameValue$fToJSONPluginNameValue$fToJSONKeyPluginNameValue$fFromXMLPluginNameValue$fToXMLPluginNameValuePostgreSQLSettingsPostgreSQLSettings'+$sel:afterConnectScript:PostgreSQLSettings'$$sel:captureDdls:PostgreSQLSettings'%$sel:databaseName:PostgreSQLSettings'+$sel:ddlArtifactsSchema:PostgreSQLSettings''$sel:executeTimeout:PostgreSQLSettings'1$sel:failTasksOnLobTruncation:PostgreSQLSettings'($sel:heartbeatEnable:PostgreSQLSettings'+$sel:heartbeatFrequency:PostgreSQLSettings'($sel:heartbeatSchema:PostgreSQLSettings'$$sel:maxFileSize:PostgreSQLSettings'!$sel:password:PostgreSQLSettings'#$sel:pluginName:PostgreSQLSettings'$sel:port:PostgreSQLSettings'4$sel:secretsManagerAccessRoleArn:PostgreSQLSettings'/$sel:secretsManagerSecretId:PostgreSQLSettings'#$sel:serverName:PostgreSQLSettings'!$sel:slotName:PostgreSQLSettings'($sel:trimSpaceInChar:PostgreSQLSettings'!$sel:username:PostgreSQLSettings'newPostgreSQLSettings%postgreSQLSettings_afterConnectScriptpostgreSQLSettings_captureDdlspostgreSQLSettings_databaseName%postgreSQLSettings_ddlArtifactsSchema!postgreSQLSettings_executeTimeout+postgreSQLSettings_failTasksOnLobTruncation"postgreSQLSettings_heartbeatEnable%postgreSQLSettings_heartbeatFrequency"postgreSQLSettings_heartbeatSchemapostgreSQLSettings_maxFileSizepostgreSQLSettings_pas
swordpostgreSQLSettings_pluginNamepostgreSQLSettings_port.postgreSQLSettings_secretsManagerAccessRoleArn)postgreSQLSettings_secretsManagerSecretIdpostgreSQLSettings_serverNamepostgreSQLSettings_slotName"postgreSQLSettings_trimSpaceInCharpostgreSQLSettings_username$fToJSONPostgreSQLSettings$fNFDataPostgreSQLSettings$fHashablePostgreSQLSettings$fFromJSONPostgreSQLSettings$fEqPostgreSQLSettings$fShowPostgreSQLSettings$fGenericPostgreSQLSettingsRedisAuthTypeValueRedisAuthTypeValue'fromRedisAuthTypeValueRedisAuthTypeValue_NoneRedisAuthTypeValue_Auth_tokenRedisAuthTypeValue_Auth_role$fShowRedisAuthTypeValue$fReadRedisAuthTypeValue$fEqRedisAuthTypeValue$fOrdRedisAuthTypeValue$fGenericRedisAuthTypeValue$fHashableRedisAuthTypeValue$fNFDataRedisAuthTypeValue$fFromTextRedisAuthTypeValue$fToTextRedisAuthTypeValue $fToByteStringRedisAuthTypeValue$fToLogRedisAuthTypeValue$fToHeaderRedisAuthTypeValue$fToQueryRedisAuthTypeValue$fFromJSONRedisAuthTypeValue$fFromJSONKeyRedisAuthTypeValue$fToJSONRedisAuthTypeValue$fToJSONKeyRedisAuthTypeValue$fFromXMLRedisAuthTypeValue$fToXMLRedisAuthTypeValueRedshiftSettingsRedshiftSettings'$$sel:acceptAnyDate:RedshiftSettings')$sel:afterConnectScript:RedshiftSettings'#$sel:bucketFolder:RedshiftSettings'!$sel:bucketName:RedshiftSettings')$sel:caseSensitiveNames:RedshiftSettings'!$sel:compUpdate:RedshiftSettings'($sel:connectionTimeout:RedshiftSettings'#$sel:databaseName:RedshiftSettings'!$sel:dateFormat:RedshiftSettings'"$sel:emptyAsNull:RedshiftSettings'%$sel:encryptionMode:RedshiftSettings'"$sel:explicitIds:RedshiftSettings'0$sel:fileTransferUploadStreams:RedshiftSettings'"$sel:loadTimeout:RedshiftSettings'"$sel:maxFileSize:RedshiftSettings'$sel:password:RedshiftSettings'$sel:port:RedshiftSettings'#$sel:removeQuotes:RedshiftSettings'#$sel:replaceChars:RedshiftSettings'*$sel:replaceInvalidChars:RedshiftSettings'2$sel:secretsManagerAccessRoleArn:RedshiftSettings'-$sel:secretsManagerSecretId:RedshiftSettings'!$sel:serverName:RedshiftSettings'3$sel:ser
verSideEncryptionKmsKeyId:RedshiftSettings'+$sel:serviceAccessRoleArn:RedshiftSettings'!$sel:timeFormat:RedshiftSettings'!$sel:trimBlanks:RedshiftSettings'&$sel:truncateColumns:RedshiftSettings'$sel:username:RedshiftSettings'&$sel:writeBufferSize:RedshiftSettings'newRedshiftSettingsredshiftSettings_acceptAnyDate#redshiftSettings_afterConnectScriptredshiftSettings_bucketFolderredshiftSettings_bucketName#redshiftSettings_caseSensitiveNamesredshiftSettings_compUpdate"redshiftSettings_connectionTimeoutredshiftSettings_databaseNameredshiftSettings_dateFormatredshiftSettings_emptyAsNullredshiftSettings_encryptionModeredshiftSettings_explicitIds*redshiftSettings_fileTransferUploadStreamsredshiftSettings_loadTimeoutredshiftSettings_maxFileSizeredshiftSettings_passwordredshiftSettings_portredshiftSettings_removeQuotesredshiftSettings_replaceChars$redshiftSettings_replaceInvalidChars,redshiftSettings_secretsManagerAccessRoleArn'redshiftSettings_secretsManagerSecretIdredshiftSettings_serverName-redshiftSettings_serverSideEncryptionKmsKeyId%redshiftSettings_serviceAccessRoleArnredshiftSettings_timeFormatredshiftSettings_trimBlanks redshiftSettings_truncateColumnsredshiftSettings_username 
redshiftSettings_writeBufferSize$fToJSONRedshiftSettings$fNFDataRedshiftSettings$fHashableRedshiftSettings$fFromJSONRedshiftSettings$fEqRedshiftSettings$fShowRedshiftSettings$fGenericRedshiftSettingsRefreshSchemasStatusTypeValueRefreshSchemasStatusTypeValue'!fromRefreshSchemasStatusTypeValue(RefreshSchemasStatusTypeValue_Successful(RefreshSchemasStatusTypeValue_Refreshing$RefreshSchemasStatusTypeValue_Failed#$fShowRefreshSchemasStatusTypeValue#$fReadRefreshSchemasStatusTypeValue!$fEqRefreshSchemasStatusTypeValue"$fOrdRefreshSchemasStatusTypeValue&$fGenericRefreshSchemasStatusTypeValue'$fHashableRefreshSchemasStatusTypeValue%$fNFDataRefreshSchemasStatusTypeValue'$fFromTextRefreshSchemasStatusTypeValue%$fToTextRefreshSchemasStatusTypeValue+$fToByteStringRefreshSchemasStatusTypeValue$$fToLogRefreshSchemasStatusTypeValue'$fToHeaderRefreshSchemasStatusTypeValue&$fToQueryRefreshSchemasStatusTypeValue'$fFromJSONRefreshSchemasStatusTypeValue*$fFromJSONKeyRefreshSchemasStatusTypeValue%$fToJSONRefreshSchemasStatusTypeValue($fToJSONKeyRefreshSchemasStatusTypeValue&$fFromXMLRefreshSchemasStatusTypeValue$$fToXMLRefreshSchemasStatusTypeValueRefreshSchemasStatusRefreshSchemasStatus'&$sel:endpointArn:RefreshSchemasStatus'-$sel:lastFailureMessage:RefreshSchemasStatus'*$sel:lastRefreshDate:RefreshSchemasStatus'1$sel:replicationInstanceArn:RefreshSchemasStatus'!$sel:status:RefreshSchemasStatus'newRefreshSchemasStatus 
Data types and enums
--------------------

Standard enum instances (shared by every enum below): Show, Read, Eq, Ord, Generic, Hashable, NFData, FromText, ToText, ToByteString, ToLog, ToHeader, ToQuery, FromJSON, FromJSONKey, ToJSON, ToJSONKey, FromXML, ToXML.

RefreshSchemasStatus (continued)
  Lenses (refreshSchemasStatus_*): endpointArn, lastFailureMessage, lastRefreshDate, replicationInstanceArn, status
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

ReleaseStatusValues -- enum (fromReleaseStatusValues; pattern ReleaseStatusValues_Beta)
  Instances: standard enum instances

OrderableReplicationInstance (newOrderableReplicationInstance)
  Lenses (orderableReplicationInstance_*): availabilityZones, defaultAllocatedStorage, engineVersion, includedAllocatedStorage, maxAllocatedStorage, minAllocatedStorage, releaseStatus, replicationInstanceClass, storageType
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

ReloadOptionValue -- enum (fromReloadOptionValue; patterns ReloadOptionValue_Validate_only, ReloadOptionValue_Data_reload)
  Instances: standard enum instances

ReplicationEndpointTypeValue -- enum (fromReplicationEndpointTypeValue; patterns ReplicationEndpointTypeValue_Target, ReplicationEndpointTypeValue_Source)
  Instances: standard enum instances

ReplicationInstanceTaskLog (newReplicationInstanceTaskLog)
  Lenses (replicationInstanceTaskLog_*): replicationInstanceTaskLogSize, replicationTaskArn, replicationTaskName
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

ReplicationPendingModifiedValues (newReplicationPendingModifiedValues)
  Lenses (replicationPendingModifiedValues_*): allocatedStorage, engineVersion, multiAZ, networkType, replicationInstanceClass
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

ReplicationTaskAssessmentResult (newReplicationTaskAssessmentResult)
  Lenses (replicationTaskAssessmentResult_*): assessmentResults, assessmentResultsFile, assessmentStatus, replicationTaskArn, replicationTaskIdentifier, replicationTaskLastAssessmentDate, s3ObjectUrl
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

ReplicationTaskAssessmentRunProgress (newReplicationTaskAssessmentRunProgress)
  Lenses (replicationTaskAssessmentRunProgress_*): individualAssessmentCompletedCount, individualAssessmentCount
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

ReplicationTaskAssessmentRun (newReplicationTaskAssessmentRun)
  Selectors: assessmentProgress, assessmentRunName, lastFailureMessage, replicationTaskArn, replicationTaskAssessmentRunArn, replicationTaskAssessmentRunCreationDate, resultEncryptionMode, resultKmsKeyArn, resultLocationBucket, resultLocationFolder, serviceAccessRoleArn, status
  Lenses (replicationTaskAssessmentRun_*): assessmentProgress, assessmentRunName, lastFailureMessage, replicationTaskArn, …

ReplicationSubnetGroup (newReplicationSubnetGroup)
  Lenses (replicationSubnetGroup_*): replicationSubnetGroupDescription, replicationSubnetGroupIdentifier, subnetGroupStatus, subnets, supportedNetworkTypes, vpcId
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

SupportedEndpointType (newSupportedEndpointType)
  Lenses (supportedEndpointType_*): endpointType, engineDisplayName, engineName, replicationInstanceEngineMinimumVersion, supportsCDC
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

SybaseSettings (newSybaseSettings)
  Lenses (sybaseSettings_*): databaseName, password, port, secretsManagerAccessRoleArn, secretsManagerSecretId, serverName, username
  Instances: ToJSON, NFData, Hashable, FromJSON, Eq, Show, Generic

TableStatistics (newTableStatistics)
  Lenses (tableStatistics_*): appliedDdls, appliedDeletes, appliedInserts, appliedUpdates, ddls, deletes, fullLoadCondtnlChkFailedRows, fullLoadEndTime, fullLoadErrorRows, fullLoadReloaded, fullLoadRows, fullLoadStartTime, inserts, lastUpdateTime, schemaName, tableName, tableState, updates, validationFailedRecords, validationPendingRecords, validationState, validationStateDetails, validationSuspendedRecords
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

TableToReload (newTableToReload)
  Lenses (tableToReload_*): schemaName, tableName
  Instances: ToJSON, NFData, Hashable, Eq, Read, Show, Generic

Tag (newTag)
  Lenses (tag_*): key, resourceArn, value
  Instances: ToJSON, NFData, Hashable, FromJSON, Eq, Read, Show, Generic

TargetDbType -- enum (fromTargetDbType; patterns TargetDbType_Specific_database, TargetDbType_Multiple_databases)
  Instances: standard enum instances

MySQLSettings (newMySQLSettings)
  Lenses (mySQLSettings_*): afterConnectScript, cleanSourceMetadataOnMismatch, databaseName, eventsPollInterval, maxFileSize, parallelLoadThreads, password, port, secretsManagerAccessRoleArn, secretsManagerSecretId, serverName, serverTimezone, targetDbType, username
  Instances: ToJSON, NFData, Hashable, FromJSON, Eq, Show, Generic

GcpMySQLSettings (newGcpMySQLSettings)
  Lenses (gcpMySQLSettings_*): afterConnectScript, cleanSourceMetadataOnMismatch, databaseName, eventsPollInterval, maxFileSize, parallelLoadThreads, password, port, secretsManagerAccessRoleArn, secretsManagerSecretId, serverName, serverTimezone, targetDbType, username
  Instances: ToJSON, NFData, Hashable, FromJSON, Eq, Show, Generic

Endpoint (newEndpoint)
  Lenses (endpoint_*): certificateArn, databaseName, dmsTransferSettings, docDbSettings, dynamoDbSettings, elasticsearchSettings, endpointArn, endpointIdentifier, endpointType, engineDisplayName, engineName, externalId, externalTableDefinition, extraConnectionAttributes, gcpMySQLSettings, iBMDb2Settings, kafkaSettings, kinesisSettings, kmsKeyId, microsoftSQLServerSettings, mongoDbSettings, mySQLSettings, neptuneSettings, oracleSettings, port, postgreSQLSettings, redisSettings, redshiftSettings, s3Settings, serverName, serviceAccessRoleArn, sslMode, status, sybaseSettings, username
  Instances: NFData, Hashable, FromJSON, Eq, Show, Generic

VersionStatus -- enum (fromVersionStatus; patterns VersionStatus_UP_TO_DATE, VersionStatus_UNSUPPORTED, VersionStatus_OUTDATED)
  Instances: standard enum instances

CollectorResponse (newCollectorResponse)
  Lenses (collectorResponse_*): collectorHealthCheck, collectorName, collectorReferencedId, collectorVersion, createdDate, description, inventoryData, lastDataReceived, modifiedDate, registeredDate, s3BucketName, serviceAccessRoleArn, versionStatus
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

VpcSecurityGroupMembership (newVpcSecurityGroupMembership)
  Lenses (vpcSecurityGroupMembership_*): status, vpcSecurityGroupId
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

ReplicationInstance (newReplicationInstance)
  Lenses (replicationInstance_*): allocatedStorage, autoMinorVersionUpgrade, availabilityZone, dnsNameServers, engineVersion, freeUntil, instanceCreateTime, kmsKeyId, multiAZ, networkType, pendingModifiedValues, preferredMaintenanceWindow, publiclyAccessible, replicationInstanceArn, replicationInstanceClass, replicationInstanceIdentifier, replicationInstanceIpv6Addresses, replicationInstancePrivateIpAddress, replicationInstancePrivateIpAddresses, replicationInstancePublicIpAddress, replicationInstancePublicIpAddresses, replicationInstanceStatus, replicationSubnetGroup, secondaryAvailabilityZone, vpcSecurityGroups
  Instances: NFData, Hashable, FromJSON, Eq, Read, Show, Generic

Service and errors
------------------

defaultService

Error matchers: _AccessDeniedFault, _CollectorNotFoundFault, _InsufficientResourceCapacityFault, _InvalidCertificateFault, _InvalidOperationFault, _InvalidResourceStateFault, _InvalidSubnet, _KMSAccessDeniedFault, _KMSDisabledFault, _KMSFault, _KMSInvalidStateFault, _KMSKeyNotAccessibleFault, _KMSNotFoundFault, _KMSThrottlingFault, _ReplicationSubnetGroupDoesNotCoverEnoughAZs, _ResourceAlreadyExistsFault, _ResourceNotFoundFault, _ResourceQuotaExceededFault, _S3AccessDeniedFault, _S3ResourceNotFoundFault, _SNSInvalidTopicFault, _SNSNoAuthorizationFault, _StorageQuotaExceededFault, _SubnetAlreadyInUse, _UpgradeDependencyFailureFault

Operations
----------

Each operation below exports a request type with smart constructor new<Op>, a response type with smart constructor new<Op>Response, and matching lenses (<op>_* / <op>Response_*). Request instances: ToQuery, ToPath, ToJSON, ToHeaders, NFData, Hashable, AWSRequest, Eq, Read, Show, Generic. Response instances: NFData, Eq, Read, Show, Generic.

TestConnection
  Request fields: replicationInstanceArn, endpointArn
  Response fields: connection, httpStatus

StopReplicationTask
  Request fields: replicationTaskArn
  Response fields: replicationTask, httpStatus

StartReplicationTaskAssessmentRun
  Request fields: exclude, includeOnly, resultEncryptionMode, resultKmsKeyArn, resultLocationFolder, replicationTaskArn, serviceAccessRoleArn, resultLocationBucket, assessmentRunName
  Response fields: replicationTaskAssessmentRun, httpStatus

StartReplicationTaskAssessment
  Request fields: replicationTaskArn
  Response fields: replicationTask, httpStatus

StartReplicationTask
  Request fields: cdcStartPosition, cdcStartTime, cdcStopPosition, replicationTaskArn, startReplicationTaskType
  Response fields: replicationTask, httpStatus

RunFleetAdvisorLsaAnalysis
  Request fields: (none)
  Response fields: lsaAnalysisId, status, httpStatus

RemoveTagsFromResource
  Request fields: resourceArn, tagKeys
  Response fields: httpStatus

ReloadTables
  Request fields: reloadOption, replicationTaskArn, tablesToReload
  Response fields: replicationTaskArn, httpStatus

RefreshSchemas
  Request fields: endpointArn, replicationInstanceArn
  Response fields: refreshSchemasStatus, httpStatus

RebootReplicationInstance
  Request fields: forceFailover, forcePlannedFailover, replicationInstanceArn
  Response fields: replicationInstance, httpStatus

MoveReplicationTask
  Request fields: replicationTaskArn, targetReplicationInstanceArn
  Response fields: replicationTask, httpStatus

ModifyReplicationTask
  Request fields: cdcStartPosition, cdcStartTime, cdcStopPosition, migrationType, replicationTaskIdentifier, replicationTaskSettings, tableMappings, taskData, replicationTaskArn
  Response fields: replicationTask, httpStatus

ModifyReplicationSubnetGroup
  Request fields: replicationSubnetGroupDescription, replicationSubnetGroupIdentifier, subnetIds
  Response fields: replicationSubnetGroup, httpStatus

ModifyReplicationInstance
  Request fields: allocatedStorage, allowMajorVersionUpgrade, applyImmediately, autoMinorVersionUpgrade, engineVersion, multiAZ, networkType, preferredMaintenanceWindow, replicationInstanceClass, replicationInstanceIdentifier, vpcSecurityGroupIds, replicationInstanceArn
  Response fields: replicationInstance, httpStatus
  Instances: ToQuery, ToPath, ToJSON, ToHeaders, NFData, Hashable, AWSRequest, …
tance%$fEqModifyReplicationInstanceResponse'$fReadModifyReplicationInstanceResponse'$fShowModifyReplicationInstanceResponse*$fGenericModifyReplicationInstanceResponse$fEqModifyReplicationInstance$fReadModifyReplicationInstance$fShowModifyReplicationInstance"$fGenericModifyReplicationInstanceModifyEventSubscriptionResponse ModifyEventSubscriptionResponse'7$sel:eventSubscription:ModifyEventSubscriptionResponse'0$sel:httpStatus:ModifyEventSubscriptionResponse'ModifyEventSubscriptionModifyEventSubscription'%$sel:enabled:ModifyEventSubscription'-$sel:eventCategories:ModifyEventSubscription')$sel:snsTopicArn:ModifyEventSubscription'($sel:sourceType:ModifyEventSubscription'.$sel:subscriptionName:ModifyEventSubscription'newModifyEventSubscriptionmodifyEventSubscription_enabled'modifyEventSubscription_eventCategories#modifyEventSubscription_snsTopicArn"modifyEventSubscription_sourceType(modifyEventSubscription_subscriptionName"newModifyEventSubscriptionResponse1modifyEventSubscriptionResponse_eventSubscription*modifyEventSubscriptionResponse_httpStatus $fToQueryModifyEventSubscription$fToPathModifyEventSubscription$fToJSONModifyEventSubscription"$fToHeadersModifyEventSubscription$fNFDataModifyEventSubscription!$fHashableModifyEventSubscription'$fNFDataModifyEventSubscriptionResponse#$fAWSRequestModifyEventSubscription#$fEqModifyEventSubscriptionResponse%$fReadModifyEventSubscriptionResponse%$fShowModifyEventSubscriptionResponse($fGenericModifyEventSubscriptionResponse$fEqModifyEventSubscription$fReadModifyEventSubscription$fShowModifyEventSubscription 
$fGenericModifyEventSubscriptionModifyEndpointResponseModifyEndpointResponse'%$sel:endpoint:ModifyEndpointResponse''$sel:httpStatus:ModifyEndpointResponse'ModifyEndpointModifyEndpoint'#$sel:certificateArn:ModifyEndpoint'!$sel:databaseName:ModifyEndpoint'($sel:dmsTransferSettings:ModifyEndpoint'"$sel:docDbSettings:ModifyEndpoint'%$sel:dynamoDbSettings:ModifyEndpoint'*$sel:elasticsearchSettings:ModifyEndpoint''$sel:endpointIdentifier:ModifyEndpoint'!$sel:endpointType:ModifyEndpoint'$sel:engineName:ModifyEndpoint'"$sel:exactSettings:ModifyEndpoint',$sel:externalTableDefinition:ModifyEndpoint'.$sel:extraConnectionAttributes:ModifyEndpoint'%$sel:gcpMySQLSettings:ModifyEndpoint'#$sel:iBMDb2Settings:ModifyEndpoint'"$sel:kafkaSettings:ModifyEndpoint'$$sel:kinesisSettings:ModifyEndpoint'/$sel:microsoftSQLServerSettings:ModifyEndpoint'$$sel:mongoDbSettings:ModifyEndpoint'"$sel:mySQLSettings:ModifyEndpoint'$$sel:neptuneSettings:ModifyEndpoint'#$sel:oracleSettings:ModifyEndpoint'$sel:password:ModifyEndpoint'$sel:port:ModifyEndpoint''$sel:postgreSQLSettings:ModifyEndpoint'"$sel:redisSettings:ModifyEndpoint'%$sel:redshiftSettings:ModifyEndpoint'$sel:s3Settings:ModifyEndpoint'$sel:serverName:ModifyEndpoint')$sel:serviceAccessRoleArn:ModifyEndpoint'$sel:sslMode:ModifyEndpoint'#$sel:sybaseSettings:ModifyEndpoint'$sel:username:ModifyEndpoint' 
$sel:endpointArn:ModifyEndpoint'newModifyEndpointmodifyEndpoint_certificateArnmodifyEndpoint_databaseName"modifyEndpoint_dmsTransferSettingsmodifyEndpoint_docDbSettingsmodifyEndpoint_dynamoDbSettings$modifyEndpoint_elasticsearchSettings!modifyEndpoint_endpointIdentifiermodifyEndpoint_endpointTypemodifyEndpoint_engineNamemodifyEndpoint_exactSettings&modifyEndpoint_externalTableDefinition(modifyEndpoint_extraConnectionAttributesmodifyEndpoint_gcpMySQLSettingsmodifyEndpoint_iBMDb2SettingsmodifyEndpoint_kafkaSettingsmodifyEndpoint_kinesisSettings)modifyEndpoint_microsoftSQLServerSettingsmodifyEndpoint_mongoDbSettingsmodifyEndpoint_mySQLSettingsmodifyEndpoint_neptuneSettingsmodifyEndpoint_oracleSettingsmodifyEndpoint_passwordmodifyEndpoint_port!modifyEndpoint_postgreSQLSettingsmodifyEndpoint_redisSettingsmodifyEndpoint_redshiftSettingsmodifyEndpoint_s3SettingsmodifyEndpoint_serverName#modifyEndpoint_serviceAccessRoleArnmodifyEndpoint_sslModemodifyEndpoint_sybaseSettingsmodifyEndpoint_usernamemodifyEndpoint_endpointArnnewModifyEndpointResponsemodifyEndpointResponse_endpoint!modifyEndpointResponse_httpStatus$fToQueryModifyEndpoint$fToPathModifyEndpoint$fToJSONModifyEndpoint$fToHeadersModifyEndpoint$fNFDataModifyEndpoint$fHashableModifyEndpoint$fNFDataModifyEndpointResponse$fAWSRequestModifyEndpoint$fEqModifyEndpointResponse$fShowModifyEndpointResponse$fGenericModifyEndpointResponse$fEqModifyEndpoint$fShowModifyEndpoint$fGenericModifyEndpointListTagsForResourceResponseListTagsForResourceResponse')$sel:tagList:ListTagsForResourceResponse',$sel:httpStatus:ListTagsForResourceResponse'ListTagsForResourceListTagsForResource'%$sel:resourceArn:ListTagsForResource')$sel:resourceArnList:ListTagsForResource'newListTagsForResourcelistTagsForResource_resourceArn#listTagsForResource_resourceArnListnewListTagsForResourceResponse#listTagsForResourceResponse_tagList&listTagsForResourceResponse_httpStatus$fToQueryListTagsForResource$fToPathListTagsForResource$fToJSONListTagsForResource$fToH
eadersListTagsForResource$fNFDataListTagsForResource$fHashableListTagsForResource#$fNFDataListTagsForResourceResponse$fAWSRequestListTagsForResource$fEqListTagsForResourceResponse!$fReadListTagsForResourceResponse!$fShowListTagsForResourceResponse$$fGenericListTagsForResourceResponse$fEqListTagsForResource$fReadListTagsForResource$fShowListTagsForResource$fGenericListTagsForResourceImportCertificateResponseImportCertificateResponse'+$sel:certificate:ImportCertificateResponse'*$sel:httpStatus:ImportCertificateResponse'ImportCertificateImportCertificate'&$sel:certificatePem:ImportCertificate')$sel:certificateWallet:ImportCertificate'$sel:tags:ImportCertificate'-$sel:certificateIdentifier:ImportCertificate'newImportCertificate importCertificate_certificatePem#importCertificate_certificateWalletimportCertificate_tags'importCertificate_certificateIdentifiernewImportCertificateResponse%importCertificateResponse_certificate$importCertificateResponse_httpStatus$fToQueryImportCertificate$fToPathImportCertificate$fToJSONImportCertificate$fToHeadersImportCertificate$fNFDataImportCertificate$fHashableImportCertificate!$fNFDataImportCertificateResponse$fAWSRequestImportCertificate$fEqImportCertificateResponse$fReadImportCertificateResponse$fShowImportCertificateResponse"$fGenericImportCertificateResponse$fEqImportCertificate$fShowImportCertificate$fGenericImportCertificateDescribeTableStatisticsResponse 
DescribeTableStatisticsResponse',$sel:marker:DescribeTableStatisticsResponse'8$sel:replicationTaskArn:DescribeTableStatisticsResponse'5$sel:tableStatistics:DescribeTableStatisticsResponse'0$sel:httpStatus:DescribeTableStatisticsResponse'DescribeTableStatisticsDescribeTableStatistics'%$sel:filters:DescribeTableStatistics'$$sel:marker:DescribeTableStatistics'($sel:maxRecords:DescribeTableStatistics'0$sel:replicationTaskArn:DescribeTableStatistics'newDescribeTableStatisticsdescribeTableStatistics_filtersdescribeTableStatistics_marker"describeTableStatistics_maxRecords*describeTableStatistics_replicationTaskArn"newDescribeTableStatisticsResponse&describeTableStatisticsResponse_marker2describeTableStatisticsResponse_replicationTaskArn/describeTableStatisticsResponse_tableStatistics*describeTableStatisticsResponse_httpStatus $fToQueryDescribeTableStatistics$fToPathDescribeTableStatistics$fToJSONDescribeTableStatistics"$fToHeadersDescribeTableStatistics$fNFDataDescribeTableStatistics!$fHashableDescribeTableStatistics!$fAWSPagerDescribeTableStatistics'$fNFDataDescribeTableStatisticsResponse#$fAWSRequestDescribeTableStatistics#$fEqDescribeTableStatisticsResponse%$fReadDescribeTableStatisticsResponse%$fShowDescribeTableStatisticsResponse($fGenericDescribeTableStatisticsResponse$fEqDescribeTableStatistics$fReadDescribeTableStatistics$fShowDescribeTableStatistics $fGenericDescribeTableStatisticsDescribeSchemasResponseDescribeSchemasResponse'$$sel:marker:DescribeSchemasResponse'%$sel:schemas:DescribeSchemasResponse'($sel:httpStatus:DescribeSchemasResponse'DescribeSchemasDescribeSchemas'$sel:marker:DescribeSchemas' 
$sel:maxRecords:DescribeSchemas'!$sel:endpointArn:DescribeSchemas'newDescribeSchemasdescribeSchemas_markerdescribeSchemas_maxRecordsdescribeSchemas_endpointArnnewDescribeSchemasResponsedescribeSchemasResponse_markerdescribeSchemasResponse_schemas"describeSchemasResponse_httpStatus$fToQueryDescribeSchemas$fToPathDescribeSchemas$fToJSONDescribeSchemas$fToHeadersDescribeSchemas$fNFDataDescribeSchemas$fHashableDescribeSchemas$fAWSPagerDescribeSchemas$fNFDataDescribeSchemasResponse$fAWSRequestDescribeSchemas$fEqDescribeSchemasResponse$fReadDescribeSchemasResponse$fShowDescribeSchemasResponse $fGenericDescribeSchemasResponse$fEqDescribeSchemas$fReadDescribeSchemas$fShowDescribeSchemas$fGenericDescribeSchemas DescribeReplicationTasksResponse!DescribeReplicationTasksResponse'-$sel:marker:DescribeReplicationTasksResponse'7$sel:replicationTasks:DescribeReplicationTasksResponse'1$sel:httpStatus:DescribeReplicationTasksResponse'DescribeReplicationTasks'&$sel:filters:DescribeReplicationTasks'%$sel:marker:DescribeReplicationTasks')$sel:maxRecords:DescribeReplicationTasks'.$sel:withoutSettings:DescribeReplicationTasks'newDescribeReplicationTasks describeReplicationTasks_filtersdescribeReplicationTasks_marker#describeReplicationTasks_maxRecords(describeReplicationTasks_withoutSettings#newDescribeReplicationTasksResponse'describeReplicationTasksResponse_marker1describeReplicationTasksResponse_replicationTasks+describeReplicationTasksResponse_httpStatus!$fToQueryDescribeReplicationTasks $fToPathDescribeReplicationTasks $fToJSONDescribeReplicationTasks#$fToHeadersDescribeReplicationTasks 
$fNFDataDescribeReplicationTasks"$fHashableDescribeReplicationTasks"$fAWSPagerDescribeReplicationTasks($fNFDataDescribeReplicationTasksResponse$$fAWSRequestDescribeReplicationTasks$$fEqDescribeReplicationTasksResponse&$fReadDescribeReplicationTasksResponse&$fShowDescribeReplicationTasksResponse)$fGenericDescribeReplicationTasksResponse$fEqDescribeReplicationTasks$fReadDescribeReplicationTasks$fShowDescribeReplicationTasks!$fGenericDescribeReplicationTasks4DescribeReplicationTaskIndividualAssessmentsResponse5DescribeReplicationTaskIndividualAssessmentsResponse'$sel:marker:DescribeReplicationTaskIndividualAssessmentsResponse'$sel:replicationTaskIndividualAssessments:DescribeReplicationTaskIndividualAssessmentsResponse'$sel:httpStatus:DescribeReplicationTaskIndividualAssessmentsResponse',DescribeReplicationTaskIndividualAssessments-DescribeReplicationTaskIndividualAssessments':$sel:filters:DescribeReplicationTaskIndividualAssessments'9$sel:marker:DescribeReplicationTaskIndividualAssessments'=$sel:maxRecords:DescribeReplicationTaskIndividualAssessments'/newDescribeReplicationTaskIndividualAssessments4describeReplicationTaskIndividualAssessments_filters3describeReplicationTaskIndividualAssessments_marker7describeReplicationTaskIndividualAssessments_maxRecords7newDescribeReplicationTaskIndividualAssessmentsResponse;describeReplicationTaskIndividualAssessmentsResponse_markerdescribeReplicationTaskIndividualAssessmentsResponse_replicationTaskIndividualAssessments?describeReplicationTaskIndividualAssessmentsResponse_httpStatus5$fToQueryDescribeReplicationTaskIndividualAssessments4$fToPathDescribeReplicationTaskIndividualAssessments4$fToJSONDescribeReplicationTaskIndividualAssessments7$fToHeadersDescribeReplicationTaskIndividualAssessments4$fNFDataDescribeReplicationTaskIndividualAssessments6$fHashableDescribeReplicationTaskIndividualAssessments<$fNFDataDescribeReplicationTaskIndividualAssessmentsResponse8$fAWSRequestDescribeReplicationTaskIndividualAssessments8$fEqDescribeRe
plicationTaskIndividualAssessmentsResponse:$fReadDescribeReplicationTaskIndividualAssessmentsResponse:$fShowDescribeReplicationTaskIndividualAssessmentsResponse=$fGenericDescribeReplicationTaskIndividualAssessmentsResponse0$fEqDescribeReplicationTaskIndividualAssessments2$fReadDescribeReplicationTaskIndividualAssessments2$fShowDescribeReplicationTaskIndividualAssessments5$fGenericDescribeReplicationTaskIndividualAssessments-DescribeReplicationTaskAssessmentRunsResponse.DescribeReplicationTaskAssessmentRunsResponse':$sel:marker:DescribeReplicationTaskAssessmentRunsResponse'$sel:replicationTaskAssessmentRuns:DescribeReplicationTaskAssessmentRunsResponse'>$sel:httpStatus:DescribeReplicationTaskAssessmentRunsResponse'%DescribeReplicationTaskAssessmentRuns&DescribeReplicationTaskAssessmentRuns'3$sel:filters:DescribeReplicationTaskAssessmentRuns'2$sel:marker:DescribeReplicationTaskAssessmentRuns'6$sel:maxRecords:DescribeReplicationTaskAssessmentRuns'(newDescribeReplicationTaskAssessmentRuns-describeReplicationTaskAssessmentRuns_filters,describeReplicationTaskAssessmentRuns_marker0describeReplicationTaskAssessmentRuns_maxRecords0newDescribeReplicationTaskAssessmentRunsResponse4describeReplicationTaskAssessmentRunsResponse_markerdescribeReplicationTaskAssessmentRunsResponse_replicationTaskAssessmentRuns8describeReplicationTaskAssessmentRunsResponse_httpStatus.$fToQueryDescribeReplicationTaskAssessmentRuns-$fToPathDescribeReplicationTaskAssessmentRuns-$fToJSONDescribeReplicationTaskAssessmentRuns0$fToHeadersDescribeReplicationTaskAssessmentRuns-$fNFDataDescribeReplicationTaskAssessmentRuns/$fHashableDescribeReplicationTaskAssessmentRuns5$fNFDataDescribeReplicationTaskAssessmentRunsResponse1$fAWSRequestDescribeReplicationTaskAssessmentRuns1$fEqDescribeReplicationTaskAssessmentRunsResponse3$fReadDescribeReplicationTaskAssessmentRunsResponse3$fShowDescribeReplicationTaskAssessmentRunsResponse6$fGenericDescribeReplicationTaskAssessmentRunsResponse)$fEqDescribeReplicationTaskAsse
ssmentRuns+$fReadDescribeReplicationTaskAssessmentRuns+$fShowDescribeReplicationTaskAssessmentRuns.$fGenericDescribeReplicationTaskAssessmentRuns0DescribeReplicationTaskAssessmentResultsResponse1DescribeReplicationTaskAssessmentResultsResponse'$sel:bucketName:DescribeReplicationTaskAssessmentResultsResponse'=$sel:marker:DescribeReplicationTaskAssessmentResultsResponse'$sel:replicationTaskAssessmentResults:DescribeReplicationTaskAssessmentResultsResponse'$sel:httpStatus:DescribeReplicationTaskAssessmentResultsResponse'(DescribeReplicationTaskAssessmentResults)DescribeReplicationTaskAssessmentResults'5$sel:marker:DescribeReplicationTaskAssessmentResults'9$sel:maxRecords:DescribeReplicationTaskAssessmentResults'$sel:replicationTaskArn:DescribeReplicationTaskAssessmentResults'+newDescribeReplicationTaskAssessmentResults/describeReplicationTaskAssessmentResults_marker3describeReplicationTaskAssessmentResults_maxRecords;describeReplicationTaskAssessmentResults_replicationTaskArn3newDescribeReplicationTaskAssessmentResultsResponse;describeReplicationTaskAssessmentResultsResponse_bucketName7describeReplicationTaskAssessmentResultsResponse_markerdescribeReplicationTaskAssessmentResultsResponse_replicationTaskAssessmentResults;describeReplicationTaskAssessmentResultsResponse_httpStatus1$fToQueryDescribeReplicationTaskAssessmentResults0$fToPathDescribeReplicationTaskAssessmentResults0$fToJSONDescribeReplicationTaskAssessmentResults3$fToHeadersDescribeReplicationTaskAssessmentResults0$fNFDataDescribeReplicationTaskAssessmentResults2$fHashableDescribeReplicationTaskAssessmentResults2$fAWSPagerDescribeReplicationTaskAssessmentResults8$fNFDataDescribeReplicationTaskAssessmentResultsResponse4$fAWSRequestDescribeReplicationTaskAssessmentResults4$fEqDescribeReplicationTaskAssessmentResultsResponse6$fReadDescribeReplicationTaskAssessmentResultsResponse6$fShowDescribeReplicationTaskAssessmentResultsResponse9$fGenericDescribeReplicationTaskAssessmentResultsResponse,$fEqDescribeReplicati
onTaskAssessmentResults.$fReadDescribeReplicationTaskAssessmentResults.$fShowDescribeReplicationTaskAssessmentResults1$fGenericDescribeReplicationTaskAssessmentResults'DescribeReplicationSubnetGroupsResponse(DescribeReplicationSubnetGroupsResponse'4$sel:marker:DescribeReplicationSubnetGroupsResponse'$sel:replicationSubnetGroups:DescribeReplicationSubnetGroupsResponse'8$sel:httpStatus:DescribeReplicationSubnetGroupsResponse'DescribeReplicationSubnetGroups DescribeReplicationSubnetGroups'-$sel:filters:DescribeReplicationSubnetGroups',$sel:marker:DescribeReplicationSubnetGroups'0$sel:maxRecords:DescribeReplicationSubnetGroups'"newDescribeReplicationSubnetGroups'describeReplicationSubnetGroups_filters&describeReplicationSubnetGroups_marker*describeReplicationSubnetGroups_maxRecords*newDescribeReplicationSubnetGroupsResponse.describeReplicationSubnetGroupsResponse_marker?describeReplicationSubnetGroupsResponse_replicationSubnetGroups2describeReplicationSubnetGroupsResponse_httpStatus($fToQueryDescribeReplicationSubnetGroups'$fToPathDescribeReplicationSubnetGroups'$fToJSONDescribeReplicationSubnetGroups*$fToHeadersDescribeReplicationSubnetGroups'$fNFDataDescribeReplicationSubnetGroups)$fHashableDescribeReplicationSubnetGroups)$fAWSPagerDescribeReplicationSubnetGroups/$fNFDataDescribeReplicationSubnetGroupsResponse+$fAWSRequestDescribeReplicationSubnetGroups+$fEqDescribeReplicationSubnetGroupsResponse-$fReadDescribeReplicationSubnetGroupsResponse-$fShowDescribeReplicationSubnetGroupsResponse0$fGenericDescribeReplicationSubnetGroupsResponse#$fEqDescribeReplicationSubnetGroups%$fReadDescribeReplicationSubnetGroups%$fShowDescribeReplicationSubnetGroups($fGenericDescribeReplicationSubnetGroups$DescribeReplicationInstancesResponse%DescribeReplicationInstancesResponse'1$sel:marker:DescribeReplicationInstancesResponse'?$sel:replicationInstances:DescribeReplicationInstancesResponse'5$sel:httpStatus:DescribeReplicationInstancesResponse'DescribeReplicationInstances'*$sel:filters:Des
cribeReplicationInstances')$sel:marker:DescribeReplicationInstances'-$sel:maxRecords:DescribeReplicationInstances'newDescribeReplicationInstances$describeReplicationInstances_filters#describeReplicationInstances_marker'describeReplicationInstances_maxRecords'newDescribeReplicationInstancesResponse+describeReplicationInstancesResponse_marker9describeReplicationInstancesResponse_replicationInstances/describeReplicationInstancesResponse_httpStatus%$fToQueryDescribeReplicationInstances$$fToPathDescribeReplicationInstances$$fToJSONDescribeReplicationInstances'$fToHeadersDescribeReplicationInstances$$fNFDataDescribeReplicationInstances&$fHashableDescribeReplicationInstances&$fAWSPagerDescribeReplicationInstances,$fNFDataDescribeReplicationInstancesResponse($fAWSRequestDescribeReplicationInstances($fEqDescribeReplicationInstancesResponse*$fReadDescribeReplicationInstancesResponse*$fShowDescribeReplicationInstancesResponse-$fGenericDescribeReplicationInstancesResponse $fEqDescribeReplicationInstances"$fReadDescribeReplicationInstances"$fShowDescribeReplicationInstances%$fGenericDescribeReplicationInstances+DescribeReplicationInstanceTaskLogsResponse,DescribeReplicationInstanceTaskLogsResponse'8$sel:marker:DescribeReplicationInstanceTaskLogsResponse'$sel:replicationInstanceArn:DescribeReplicationInstanceTaskLogsResponse'$sel:replicationInstanceTaskLogs:DescribeReplicationInstanceTaskLogsResponse'<$sel:httpStatus:DescribeReplicationInstanceTaskLogsResponse'#DescribeReplicationInstanceTaskLogs$DescribeReplicationInstanceTaskLogs'0$sel:marker:DescribeReplicationInstanceTaskLogs'4$sel:maxRecords:DescribeReplicationInstanceTaskLogs'$sel:replicationInstanceArn:DescribeReplicationInstanceTaskLogs'&newDescribeReplicationInstanceTaskLogs*describeReplicationInstanceTaskLogs_marker.describeReplicationInstanceTaskLogs_maxRecords:describeReplicationInstanceTaskLogs_replicationInstanceArn.newDescribeReplicationInstanceTaskLogsResponse2describeReplicationInstanceTaskLogsResponse_markerdesc
ribeReplicationInstanceTaskLogsResponse_replicationInstanceArndescribeReplicationInstanceTaskLogsResponse_replicationInstanceTaskLogs6describeReplicationInstanceTaskLogsResponse_httpStatus,$fToQueryDescribeReplicationInstanceTaskLogs+$fToPathDescribeReplicationInstanceTaskLogs+$fToJSONDescribeReplicationInstanceTaskLogs.$fToHeadersDescribeReplicationInstanceTaskLogs+$fNFDataDescribeReplicationInstanceTaskLogs-$fHashableDescribeReplicationInstanceTaskLogs3$fNFDataDescribeReplicationInstanceTaskLogsResponse/$fAWSRequestDescribeReplicationInstanceTaskLogs/$fEqDescribeReplicationInstanceTaskLogsResponse1$fReadDescribeReplicationInstanceTaskLogsResponse1$fShowDescribeReplicationInstanceTaskLogsResponse4$fGenericDescribeReplicationInstanceTaskLogsResponse'$fEqDescribeReplicationInstanceTaskLogs)$fReadDescribeReplicationInstanceTaskLogs)$fShowDescribeReplicationInstanceTaskLogs,$fGenericDescribeReplicationInstanceTaskLogs$DescribeRefreshSchemasStatusResponse%DescribeRefreshSchemasStatusResponse'?$sel:refreshSchemasStatus:DescribeRefreshSchemasStatusResponse'5$sel:httpStatus:DescribeRefreshSchemasStatusResponse'DescribeRefreshSchemasStatusDescribeRefreshSchemasStatus'.$sel:endpointArn:DescribeRefreshSchemasStatus'newDescribeRefreshSchemasStatus(describeRefreshSchemasStatus_endpointArn'newDescribeRefreshSchemasStatusResponse9describeRefreshSchemasStatusResponse_refreshSchemasStatus/describeRefreshSchemasStatusResponse_httpStatus%$fToQueryDescribeRefreshSchemasStatus$$fToPathDescribeRefreshSchemasStatus$$fToJSONDescribeRefreshSchemasStatus'$fToHeadersDescribeRefreshSchemasStatus$$fNFDataDescribeRefreshSchemasStatus&$fHashableDescribeRefreshSchemasStatus,$fNFDataDescribeRefreshSchemasStatusResponse($fAWSRequestDescribeRefreshSchemasStatus($fEqDescribeRefreshSchemasStatusResponse*$fReadDescribeRefreshSchemasStatusResponse*$fShowDescribeRefreshSchemasStatusResponse-$fGenericDescribeRefreshSchemasStatusResponse 
$fEqDescribeRefreshSchemasStatus"$fReadDescribeRefreshSchemasStatus"$fShowDescribeRefreshSchemasStatus%$fGenericDescribeRefreshSchemasStatus)DescribePendingMaintenanceActionsResponse*DescribePendingMaintenanceActionsResponse'6$sel:marker:DescribePendingMaintenanceActionsResponse'$sel:pendingMaintenanceActions:DescribePendingMaintenanceActionsResponse':$sel:httpStatus:DescribePendingMaintenanceActionsResponse'!DescribePendingMaintenanceActions"DescribePendingMaintenanceActions'/$sel:filters:DescribePendingMaintenanceActions'.$sel:marker:DescribePendingMaintenanceActions'2$sel:maxRecords:DescribePendingMaintenanceActions'>$sel:replicationInstanceArn:DescribePendingMaintenanceActions'$newDescribePendingMaintenanceActions)describePendingMaintenanceActions_filters(describePendingMaintenanceActions_marker,describePendingMaintenanceActions_maxRecords8describePendingMaintenanceActions_replicationInstanceArn,newDescribePendingMaintenanceActionsResponse0describePendingMaintenanceActionsResponse_markerdescribePendingMaintenanceActionsResponse_pendingMaintenanceActions4describePendingMaintenanceActionsResponse_httpStatus*$fToQueryDescribePendingMaintenanceActions)$fToPathDescribePendingMaintenanceActions)$fToJSONDescribePendingMaintenanceActions,$fToHeadersDescribePendingMaintenanceActions)$fNFDataDescribePendingMaintenanceActions+$fHashableDescribePendingMaintenanceActions1$fNFDataDescribePendingMaintenanceActionsResponse-$fAWSRequestDescribePendingMaintenanceActions-$fEqDescribePendingMaintenanceActionsResponse/$fReadDescribePendingMaintenanceActionsResponse/$fShowDescribePendingMaintenanceActionsResponse2$fGenericDescribePendingMaintenanceActionsResponse%$fEqDescribePendingMaintenanceActions'$fReadDescribePendingMaintenanceActions'$fShowDescribePendingMaintenanceActions*$fGenericDescribePendingMaintenanceActions-DescribeOrderableReplicationInstancesResponse.DescribeOrderableReplicationInstancesResponse':$sel:marker:DescribeOrderableReplicationInstancesResponse'$sel:orderable
ReplicationInstances:DescribeOrderableReplicationInstancesResponse'>$sel:httpStatus:DescribeOrderableReplicationInstancesResponse'%DescribeOrderableReplicationInstances&DescribeOrderableReplicationInstances'2$sel:marker:DescribeOrderableReplicationInstances'6$sel:maxRecords:DescribeOrderableReplicationInstances'(newDescribeOrderableReplicationInstances,describeOrderableReplicationInstances_marker0describeOrderableReplicationInstances_maxRecords0newDescribeOrderableReplicationInstancesResponse4describeOrderableReplicationInstancesResponse_markerdescribeOrderableReplicationInstancesResponse_orderableReplicationInstances8describeOrderableReplicationInstancesResponse_httpStatus.$fToQueryDescribeOrderableReplicationInstances-$fToPathDescribeOrderableReplicationInstances-$fToJSONDescribeOrderableReplicationInstances0$fToHeadersDescribeOrderableReplicationInstances-$fNFDataDescribeOrderableReplicationInstances/$fHashableDescribeOrderableReplicationInstances/$fAWSPagerDescribeOrderableReplicationInstances5$fNFDataDescribeOrderableReplicationInstancesResponse1$fAWSRequestDescribeOrderableReplicationInstances1$fEqDescribeOrderableReplicationInstancesResponse3$fReadDescribeOrderableReplicationInstancesResponse3$fShowDescribeOrderableReplicationInstancesResponse6$fGenericDescribeOrderableReplicationInstancesResponse)$fEqDescribeOrderableReplicationInstances+$fReadDescribeOrderableReplicationInstances+$fShowDescribeOrderableReplicationInstances.$fGenericDescribeOrderableReplicationInstances#DescribeFleetAdvisorSchemasResponse$DescribeFleetAdvisorSchemasResponse'=$sel:fleetAdvisorSchemas:DescribeFleetAdvisorSchemasResponse'3$sel:nextToken:DescribeFleetAdvisorSchemasResponse'4$sel:httpStatus:DescribeFleetAdvisorSchemasResponse'DescribeFleetAdvisorSchemasDescribeFleetAdvisorSchemas')$sel:filters:DescribeFleetAdvisorSchemas',$sel:maxRecords:DescribeFleetAdvisorSchemas'+$sel:nextToken:DescribeFleetAdvisorSchemas'newDescribeFleetAdvisorSchemas#describeFleetAdvisorSchemas_filters&descr
ibeFleetAdvisorSchemas_maxRecords%describeFleetAdvisorSchemas_nextToken&newDescribeFleetAdvisorSchemasResponse7describeFleetAdvisorSchemasResponse_fleetAdvisorSchemas-describeFleetAdvisorSchemasResponse_nextToken.describeFleetAdvisorSchemasResponse_httpStatus$$fToQueryDescribeFleetAdvisorSchemas#$fToPathDescribeFleetAdvisorSchemas#$fToJSONDescribeFleetAdvisorSchemas&$fToHeadersDescribeFleetAdvisorSchemas#$fNFDataDescribeFleetAdvisorSchemas%$fHashableDescribeFleetAdvisorSchemas+$fNFDataDescribeFleetAdvisorSchemasResponse'$fAWSRequestDescribeFleetAdvisorSchemas'$fEqDescribeFleetAdvisorSchemasResponse)$fReadDescribeFleetAdvisorSchemasResponse)$fShowDescribeFleetAdvisorSchemasResponse,$fGenericDescribeFleetAdvisorSchemasResponse$fEqDescribeFleetAdvisorSchemas!$fReadDescribeFleetAdvisorSchemas!$fShowDescribeFleetAdvisorSchemas$$fGenericDescribeFleetAdvisorSchemas/DescribeFleetAdvisorSchemaObjectSummaryResponse0DescribeFleetAdvisorSchemaObjectSummaryResponse'$sel:fleetAdvisorSchemaObjects:DescribeFleetAdvisorSchemaObjectSummaryResponse'?$sel:nextToken:DescribeFleetAdvisorSchemaObjectSummaryResponse'$sel:httpStatus:DescribeFleetAdvisorSchemaObjectSummaryResponse''DescribeFleetAdvisorSchemaObjectSummary(DescribeFleetAdvisorSchemaObjectSummary'5$sel:filters:DescribeFleetAdvisorSchemaObjectSummary'8$sel:maxRecords:DescribeFleetAdvisorSchemaObjectSummary'7$sel:nextToken:DescribeFleetAdvisorSchemaObjectSummary'*newDescribeFleetAdvisorSchemaObjectSummary/describeFleetAdvisorSchemaObjectSummary_filters2describeFleetAdvisorSchemaObjectSummary_maxRecords1describeFleetAdvisorSchemaObjectSummary_nextToken2newDescribeFleetAdvisorSchemaObjectSummaryResponsedescribeFleetAdvisorSchemaObjectSummaryResponse_fleetAdvisorSchemaObjects9describeFleetAdvisorSchemaObjectSummaryResponse_nextToken:describeFleetAdvisorSchemaObjectSummaryResponse_httpStatus0$fToQueryDescribeFleetAdvisorSchemaObjectSummary/$fToPathDescribeFleetAdvisorSchemaObjectSummary/$fToJSONDescribeFleetAdvisorSchemaObjectSummary
[Recovered symbol index: amazonka-dms (AWS Database Migration Service). The original span is a flattened Haddock index; for each operation it lists the request and response types, the smart constructors (newX / newXResponse), the $sel: record selectors and lens-style field accessors, and the derived/generated instances (ToQuery, ToPath, ToJSON, ToHeaders, NFData, Hashable, AWSRequest, AWSPager where paginated, Eq, Read, Show, Generic). Operations covered in this span:

  DescribeFleetAdvisorSchemaObjectSummary
  DescribeFleetAdvisorLsaAnalysis
  DescribeFleetAdvisorDatabases
  DescribeFleetAdvisorCollectors
  DescribeEvents (paginated)
  DescribeEventSubscriptions (paginated)
  DescribeEventCategories
  DescribeEndpoints (paginated)
  DescribeEndpointTypes (paginated)
  DescribeEndpointSettings
  DescribeConnections (paginated)
  DescribeCertificates (paginated)
  DescribeApplicableIndividualAssessments
  DescribeAccountAttributes
  DeleteReplicationTaskAssessmentRun
  DeleteReplicationTask
  DeleteReplicationSubnetGroup
  DeleteReplicationInstance
  DeleteFleetAdvisorDatabases
  DeleteFleetAdvisorCollector
  DeleteEventSubscription
  DeleteEndpoint
  DeleteConnection
  DeleteCertificate
  CreateReplicationTask
  CreateReplicationSubnetGroup
  CreateReplicationInstance
  CreateFleetAdvisorCollector
  CreateEventSubscription
  CreateEndpoint (index truncated mid-entry)]
int_passwordcreateEndpoint_port!createEndpoint_postgreSQLSettingscreateEndpoint_redisSettingscreateEndpoint_redshiftSettings!createEndpoint_resourceIdentifiercreateEndpoint_s3SettingscreateEndpoint_serverName#createEndpoint_serviceAccessRoleArncreateEndpoint_sslModecreateEndpoint_sybaseSettingscreateEndpoint_tagscreateEndpoint_username!createEndpoint_endpointIdentifiercreateEndpoint_endpointTypecreateEndpoint_engineNamenewCreateEndpointResponsecreateEndpointResponse_endpoint!createEndpointResponse_httpStatus$fToQueryCreateEndpoint$fToPathCreateEndpoint$fToJSONCreateEndpoint$fToHeadersCreateEndpoint$fNFDataCreateEndpoint$fHashableCreateEndpoint$fNFDataCreateEndpointResponse$fAWSRequestCreateEndpoint$fEqCreateEndpointResponse$fShowCreateEndpointResponse$fGenericCreateEndpointResponse$fEqCreateEndpoint$fShowCreateEndpoint$fGenericCreateEndpoint*CancelReplicationTaskAssessmentRunResponse+CancelReplicationTaskAssessmentRunResponse'$sel:replicationTaskAssessmentRun:CancelReplicationTaskAssessmentRunResponse';$sel:httpStatus:CancelReplicationTaskAssessmentRunResponse'"CancelReplicationTaskAssessmentRun#CancelReplicationTaskAssessmentRun'$sel:replicationTaskAssessmentRunArn:CancelReplicationTaskAssessmentRun'%newCancelReplicationTaskAssessmentRuncancelReplicationTaskAssessmentRun_replicationTaskAssessmentRunArn-newCancelReplicationTaskAssessmentRunResponsecancelReplicationTaskAssessmentRunResponse_replicationTaskAssessmentRun5cancelReplicationTaskAssessmentRunResponse_httpStatus+$fToQueryCancelReplicationTaskAssessmentRun*$fToPathCancelReplicationTaskAssessmentRun*$fToJSONCancelReplicationTaskAssessmentRun-$fToHeadersCancelReplicationTaskAssessmentRun*$fNFDataCancelReplicationTaskAssessmentRun,$fHashableCancelReplicationTaskAssessmentRun2$fNFDataCancelReplicationTaskAssessmentRunResponse.$fAWSRequestCancelReplicationTaskAssessmentRun.$fEqCancelReplicationTaskAssessmentRunResponse0$fReadCancelReplicationTaskAssessmentRunResponse0$fShowCancelReplicationTaskAssessmentRunRespon
se3$fGenericCancelReplicationTaskAssessmentRunResponse&$fEqCancelReplicationTaskAssessmentRun($fReadCancelReplicationTaskAssessmentRun($fShowCancelReplicationTaskAssessmentRun+$fGenericCancelReplicationTaskAssessmentRun%ApplyPendingMaintenanceActionResponse&ApplyPendingMaintenanceActionResponse'$sel:resourcePendingMaintenanceActions:ApplyPendingMaintenanceActionResponse'6$sel:httpStatus:ApplyPendingMaintenanceActionResponse'ApplyPendingMaintenanceActionApplyPendingMaintenanceAction':$sel:replicationInstanceArn:ApplyPendingMaintenanceAction'/$sel:applyAction:ApplyPendingMaintenanceAction'-$sel:optInType:ApplyPendingMaintenanceAction' newApplyPendingMaintenanceAction4applyPendingMaintenanceAction_replicationInstanceArn)applyPendingMaintenanceAction_applyAction'applyPendingMaintenanceAction_optInType(newApplyPendingMaintenanceActionResponseapplyPendingMaintenanceActionResponse_resourcePendingMaintenanceActions0applyPendingMaintenanceActionResponse_httpStatus&$fToQueryApplyPendingMaintenanceAction%$fToPathApplyPendingMaintenanceAction%$fToJSONApplyPendingMaintenanceAction($fToHeadersApplyPendingMaintenanceAction%$fNFDataApplyPendingMaintenanceAction'$fHashableApplyPendingMaintenanceAction-$fNFDataApplyPendingMaintenanceActionResponse)$fAWSRequestApplyPendingMaintenanceAction)$fEqApplyPendingMaintenanceActionResponse+$fReadApplyPendingMaintenanceActionResponse+$fShowApplyPendingMaintenanceActionResponse.$fGenericApplyPendingMaintenanceActionResponse!$fEqApplyPendingMaintenanceAction#$fReadApplyPendingMaintenanceAction#$fShowApplyPendingMaintenanceAction&$fGenericApplyPendingMaintenanceActionAddTagsToResourceResponseAddTagsToResourceResponse'*$sel:httpStatus:AddTagsToResourceResponse'AddTagsToResourceAddTagsToResource'#$sel:resourceArn:AddTagsToResource'$sel:tags:AddTagsToResource'newAddTagsToResourceaddTagsToResource_resourceArnaddTagsToResource_tagsnewAddTagsToResourceResponse$addTagsToResourceResponse_httpStatus$fToQueryAddTagsToResource$fToPathAddTagsToResource$fToJSO
NAddTagsToResource$fToHeadersAddTagsToResource$fNFDataAddTagsToResource$fHashableAddTagsToResource!$fNFDataAddTagsToResourceResponse$fAWSRequestAddTagsToResource$fEqAddTagsToResourceResponse$fReadAddTagsToResourceResponse$fShowAddTagsToResourceResponse"$fGenericAddTagsToResourceResponse$fEqAddTagsToResource$fReadAddTagsToResource$fShowAddTagsToResource$fGenericAddTagsToResource(UpdateSubscriptionsToEventBridgeResponse)UpdateSubscriptionsToEventBridgeResponse'5$sel:result:UpdateSubscriptionsToEventBridgeResponse'9$sel:httpStatus:UpdateSubscriptionsToEventBridgeResponse' UpdateSubscriptionsToEventBridge!UpdateSubscriptionsToEventBridge'0$sel:forceMove:UpdateSubscriptionsToEventBridge'#newUpdateSubscriptionsToEventBridge*updateSubscriptionsToEventBridge_forceMove+newUpdateSubscriptionsToEventBridgeResponse/updateSubscriptionsToEventBridgeResponse_result3updateSubscriptionsToEventBridgeResponse_httpStatus)$fToQueryUpdateSubscriptionsToEventBridge($fToPathUpdateSubscriptionsToEventBridge($fToJSONUpdateSubscriptionsToEventBridge+$fToHeadersUpdateSubscriptionsToEventBridge($fNFDataUpdateSubscriptionsToEventBridge*$fHashableUpdateSubscriptionsToEventBridge0$fNFDataUpdateSubscriptionsToEventBridgeResponse,$fAWSRequestUpdateSubscriptionsToEventBridge,$fEqUpdateSubscriptionsToEventBridgeResponse.$fReadUpdateSubscriptionsToEventBridgeResponse.$fShowUpdateSubscriptionsToEventBridgeResponse1$fGenericUpdateSubscriptionsToEventBridgeResponse$$fEqUpdateSubscriptionsToEventBridge&$fReadUpdateSubscriptionsToEventBridge&$fShowUpdateSubscriptionsToEventBridge)$fGenericUpdateSubscriptionsToEventBridgenewEndpointDeletednewReplicationInstanceAvailablenewReplicationInstanceDeletednewReplicationTaskDeletednewReplicationTaskReadynewReplicationTaskRunningnewReplicationTaskStoppednewTestConnectionSucceeds