diff --git a/CHANGELOG.md b/CHANGELOG.md index f2ed38cc62f..17a7ff50443 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,15 @@ +Release v1.44.317 (2023-08-04) +=== + +### Service Client Updates +* `service/acm-pca`: Updates service documentation +* `service/connect`: Updates service API and documentation +* `service/datasync`: Updates service API and documentation +* `service/ecs`: Updates service documentation + * This is a documentation update to address various tickets. +* `service/sagemaker`: Updates service API and documentation + * Including DataCaptureConfig key in the Amazon Sagemaker Search's transform job object + Release v1.44.316 (2023-08-03) === diff --git a/aws/endpoints/defaults.go b/aws/endpoints/defaults.go index f3d3fe1206a..6027df1e184 100644 --- a/aws/endpoints/defaults.go +++ b/aws/endpoints/defaults.go @@ -23162,6 +23162,9 @@ var awsPartition = partition{ endpointKey{ Region: "eu-west-3", }: endpoint{}, + endpointKey{ + Region: "il-central-1", + }: endpoint{}, endpointKey{ Region: "me-central-1", }: endpoint{}, diff --git a/aws/version.go b/aws/version.go index 644836bc2a8..0e5f95c1c1f 100644 --- a/aws/version.go +++ b/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.44.316" +const SDKVersion = "1.44.317" diff --git a/models/apis/acm-pca/2017-08-22/docs-2.json b/models/apis/acm-pca/2017-08-22/docs-2.json index 39e78388ae1..e5063464446 100644 --- a/models/apis/acm-pca/2017-08-22/docs-2.json +++ b/models/apis/acm-pca/2017-08-22/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "

This is the Amazon Web Services Private Certificate Authority API Reference. It provides descriptions, syntax, and usage examples for each of the actions and data types involved in creating and managing a private certificate authority (CA) for your organization.

The documentation for each action shows the API request parameters and the JSON response. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that is tailored to the programming language or platform that you prefer. For more information, see Amazon Web Services SDKs.

Each Amazon Web Services Private CA API operation has a quota that determines the number of times the operation can be called per second. Amazon Web Services Private CA throttles API requests at different rates depending on the operation. Throttling means that Amazon Web Services Private CA rejects an otherwise valid request because the request exceeds the operation's quota for the number of requests per second. When a request is throttled, Amazon Web Services Private CA returns a ThrottlingException error. Amazon Web Services Private CA does not guarantee a minimum request rate for APIs.

To see an up-to-date list of your Amazon Web Services Private CA quotas, or to request a quota increase, log into your Amazon Web Services account and visit the Service Quotas console.

", + "service": "

This is the Amazon Web Services Private Certificate Authority API Reference. It provides descriptions, syntax, and usage examples for each of the actions and data types involved in creating and managing a private certificate authority (CA) for your organization.

The documentation for each action shows the API request parameters and the JSON response. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that is tailored to the programming language or platform that you prefer. For more information, see Amazon Web Services SDKs.

Each Amazon Web Services Private CA API operation has a quota that determines the number of times the operation can be called per second. Amazon Web Services Private CA throttles API requests at different rates depending on the operation. Throttling means that Amazon Web Services Private CA rejects an otherwise valid request because the request exceeds the operation's quota for the number of requests per second. When a request is throttled, Amazon Web Services Private CA returns a ThrottlingException error. Amazon Web Services Private CA does not guarantee a minimum request rate for APIs.

To see an up-to-date list of your Amazon Web Services Private CA quotas, or to request a quota increase, log into your Amazon Web Services account and visit the Service Quotas console.

", "operations": { "CreateCertificateAuthority": "

Creates a root or subordinate private certificate authority (CA). You must specify the CA configuration, an optional configuration for Online Certificate Status Protocol (OCSP) and/or a certificate revocation list (CRL), the CA type, and an optional idempotency token to avoid accidental creation of multiple CAs. The CA configuration specifies the name of the algorithm and key size to be used to create the CA private key, the type of signing algorithm that the CA uses, and X.500 subject information. The OCSP configuration can optionally specify a custom URL for the OCSP responder. The CRL configuration specifies the CRL expiration period in days (the validity period of the CRL), the Amazon S3 bucket that will contain the CRL, and a CNAME alias for the S3 bucket that is included in certificates issued by the CA. If successful, this action returns the Amazon Resource Name (ARN) of the CA.

Both Amazon Web Services Private CA and the IAM principal must have permission to write to the S3 bucket that you specify. If the IAM principal making the call does not have permission to write to the bucket, then an exception is thrown. For more information, see Access policies for CRLs in Amazon S3.

Amazon Web Services Private CA assets that are stored in Amazon S3 can be protected with encryption. For more information, see Encrypting Your CRLs.

", "CreateCertificateAuthorityAuditReport": "

Creates an audit report that lists every time that your CA private key is used. The report is saved in the Amazon S3 bucket that you specify on input. The IssueCertificate and RevokeCertificate actions use the private key.

Both Amazon Web Services Private CA and the IAM principal must have permission to write to the S3 bucket that you specify. If the IAM principal making the call does not have permission to write to the bucket, then an exception is thrown. For more information, see Access policies for CRLs in Amazon S3.

Amazon Web Services Private CA assets that are stored in Amazon S3 can be protected with encryption. For more information, see Encrypting Your Audit Reports.

You can generate a maximum of one report every 30 minutes.

", @@ -485,7 +485,7 @@ "base": null, "refs": { "CreateCertificateAuthorityRequest$IdempotencyToken": "

Custom string that can be used to distinguish between calls to the CreateCertificateAuthority action. Idempotency tokens for CreateCertificateAuthority time out after five minutes. Therefore, if you call CreateCertificateAuthority multiple times with the same idempotency token within five minutes, Amazon Web Services Private CA recognizes that you are requesting only one certificate authority and will issue only one. If you change the idempotency token for each call, Amazon Web Services Private CA recognizes that you are requesting multiple certificate authorities.

", - "IssueCertificateRequest$IdempotencyToken": "

Alphanumeric string that can be used to distinguish between calls to the IssueCertificate action. Idempotency tokens for IssueCertificate time out after one minute. Therefore, if you call IssueCertificate multiple times with the same idempotency token within one minute, Amazon Web Services Private CA recognizes that you are requesting only one certificate and will issue only one. If you change the idempotency token for each call, Amazon Web Services Private CA recognizes that you are requesting multiple certificates.

" + "IssueCertificateRequest$IdempotencyToken": "

Alphanumeric string that can be used to distinguish between calls to the IssueCertificate action. Idempotency tokens for IssueCertificate time out after five minutes. Therefore, if you call IssueCertificate multiple times with the same idempotency token within five minutes, Amazon Web Services Private CA recognizes that you are requesting only one certificate and will issue only one. If you change the idempotency token for each call, Amazon Web Services Private CA recognizes that you are requesting multiple certificates.
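For illustration only, a minimal aws-sdk-go sketch of issuing a certificate with an explicit idempotency token; the region, CA ARN, CSR contents, and token value are placeholder assumptions, not values from this release:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/acmpca"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := acmpca.New(sess)

	out, err := svc.IssueCertificate(&acmpca.IssueCertificateInput{
		// Placeholder ARN and CSR; substitute your own values.
		CertificateAuthorityArn: aws.String("arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/example"),
		Csr:                     []byte("-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"),
		SigningAlgorithm:        aws.String(acmpca.SigningAlgorithmSha256withrsa),
		Validity: &acmpca.Validity{
			Type:  aws.String(acmpca.ValidityPeriodTypeDays),
			Value: aws.Int64(365),
		},
		// Repeating this call with the same token within five minutes
		// returns the same certificate rather than issuing a second one.
		IdempotencyToken: aws.String("cert-request-001"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.CertificateArn))
}
```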

" } }, "ImportCertificateAuthorityCertificateRequest": { diff --git a/models/apis/connect/2017-08-08/api-2.json b/models/apis/connect/2017-08-08/api-2.json index 3cb6f446a02..50476eaf747 100644 --- a/models/apis/connect/2017-08-08/api-2.json +++ b/models/apis/connect/2017-08-08/api-2.json @@ -2927,6 +2927,21 @@ {"shape":"InternalServiceException"} ] }, + "UpdateRoutingProfileAgentAvailabilityTimer":{ + "name":"UpdateRoutingProfileAgentAvailabilityTimer", + "http":{ + "method":"POST", + "requestUri":"/routing-profiles/{InstanceId}/{RoutingProfileId}/agent-availability-timer" + }, + "input":{"shape":"UpdateRoutingProfileAgentAvailabilityTimerRequest"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServiceException"} + ] + }, "UpdateRoutingProfileConcurrency":{ "name":"UpdateRoutingProfileConcurrency", "http":{ @@ -3229,6 +3244,13 @@ "type":"integer", "min":0 }, + "AgentAvailabilityTimer":{ + "type":"string", + "enum":[ + "TIME_SINCE_LAST_ACTIVITY", + "TIME_SINCE_LAST_INBOUND" + ] + }, "AgentContactReference":{ "type":"structure", "members":{ @@ -4319,7 +4341,8 @@ "DefaultOutboundQueueId":{"shape":"QueueId"}, "QueueConfigs":{"shape":"RoutingProfileQueueConfigList"}, "MediaConcurrencies":{"shape":"MediaConcurrencies"}, - "Tags":{"shape":"TagMap"} + "Tags":{"shape":"TagMap"}, + "AgentAvailabilityTimer":{"shape":"AgentAvailabilityTimer"} } }, "CreateRoutingProfileResponse":{ @@ -9779,7 +9802,8 @@ "DefaultOutboundQueueId":{"shape":"QueueId"}, "Tags":{"shape":"TagMap"}, "NumberOfAssociatedQueues":{"shape":"Long"}, - "NumberOfAssociatedUsers":{"shape":"Long"} + "NumberOfAssociatedUsers":{"shape":"Long"}, + "AgentAvailabilityTimer":{"shape":"AgentAvailabilityTimer"} } }, "RoutingProfileDescription":{ @@ -11776,6 +11800,27 @@ "Description":{"shape":"UpdateQuickConnectDescription"} } }, + "UpdateRoutingProfileAgentAvailabilityTimerRequest":{ + "type":"structure", + "required":[ + "InstanceId", + "RoutingProfileId", + "AgentAvailabilityTimer" + ], + "members":{ + "InstanceId":{ + "shape":"InstanceId", + "location":"uri", + "locationName":"InstanceId" + }, + "RoutingProfileId":{ + "shape":"RoutingProfileId", + "location":"uri", + "locationName":"RoutingProfileId" + }, + "AgentAvailabilityTimer":{"shape":"AgentAvailabilityTimer"} + } + }, "UpdateRoutingProfileConcurrencyRequest":{ "type":"structure", "required":[ diff --git a/models/apis/connect/2017-08-08/docs-2.json b/models/apis/connect/2017-08-08/docs-2.json index 4a7044bb74c..2102c8c9a1b 100644 --- a/models/apis/connect/2017-08-08/docs-2.json +++ b/models/apis/connect/2017-08-08/docs-2.json @@ -183,6 +183,7 @@ "UpdateQueueStatus": "

This API is in preview release for Amazon Connect and is subject to change.

Updates the status of the queue.

", "UpdateQuickConnectConfig": "

Updates the configuration settings for the specified quick connect.

", "UpdateQuickConnectName": "

Updates the name and description of a quick connect. The request accepts the following data in JSON format. At least Name or Description must be provided.

", + "UpdateRoutingProfileAgentAvailabilityTimer": "

Updates whether agents with this routing profile will have their routing order calculated based on time since their last inbound contact or longest idle time.
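A minimal aws-sdk-go sketch of invoking the new operation; the instance and routing profile IDs are placeholders, and the enum constant name assumes the SDK's standard code generation for the AgentAvailabilityTimer values added in this release:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/connect"
)

func main() {
	svc := connect.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2"))))

	_, err := svc.UpdateRoutingProfileAgentAvailabilityTimer(&connect.UpdateRoutingProfileAgentAvailabilityTimerInput{
		InstanceId:       aws.String("11111111-2222-3333-4444-555555555555"), // placeholder instance ID
		RoutingProfileId: aws.String("66666666-7777-8888-9999-000000000000"), // placeholder routing profile ID
		// TIME_SINCE_LAST_INBOUND ranks agents by time since their last
		// inbound contact; TIME_SINCE_LAST_ACTIVITY uses longest idle time.
		AgentAvailabilityTimer: aws.String(connect.AgentAvailabilityTimerTimeSinceLastInbound),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```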

", "UpdateRoutingProfileConcurrency": "

Updates the channels that agents can handle in the Contact Control Panel (CCP) for a routing profile.

", "UpdateRoutingProfileDefaultOutboundQueue": "

Updates the default outbound queue of a routing profile.

", "UpdateRoutingProfileName": "

Updates the name and description of a routing profile. The request accepts the following data in JSON format. At least Name or Description must be provided.

", @@ -353,6 +354,14 @@ "UserPhoneConfig$AfterContactWorkTimeLimit": "

The After Call Work (ACW) timeout setting, in seconds.

When returned by a SearchUsers call, AfterContactWorkTimeLimit is returned in milliseconds.

" } }, + "AgentAvailabilityTimer": { + "base": null, + "refs": { + "CreateRoutingProfileRequest$AgentAvailabilityTimer": "

Whether agents with this routing profile will have their routing order calculated based on time since their last inbound contact or longest idle time.
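The new field can also be set when a routing profile is created, alongside the existing required parameters; a hedged sketch in which all IDs, names, and concurrency values are placeholders:

```go
// Sketch only: assumes svc is the *connect.Connect client from the
// previous example; all IDs and names below are placeholders.
out, err := svc.CreateRoutingProfile(&connect.CreateRoutingProfileInput{
	InstanceId:             aws.String("11111111-2222-3333-4444-555555555555"),
	Name:                   aws.String("sales-routing-profile"),
	Description:            aws.String("Routing profile for the sales team"),
	DefaultOutboundQueueId: aws.String("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"),
	MediaConcurrencies: []*connect.MediaConcurrency{
		{Channel: aws.String(connect.ChannelVoice), Concurrency: aws.Int64(1)},
	},
	AgentAvailabilityTimer: aws.String(connect.AgentAvailabilityTimerTimeSinceLastActivity),
})
if err != nil {
	log.Fatal(err)
}
log.Println(aws.StringValue(out.RoutingProfileArn))
```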

", + "RoutingProfile$AgentAvailabilityTimer": "

Whether agents with this routing profile will have their routing order calculated based on time since their last inbound contact or longest idle time.

", + "UpdateRoutingProfileAgentAvailabilityTimerRequest$AgentAvailabilityTimer": "

Whether agents with this routing profile will have their routing order calculated based on time since their last inbound contact or longest idle time.

" + } + }, "AgentContactReference": { "base": "

Information about the contact associated to the user.

", "refs": { @@ -3002,6 +3011,7 @@ "UpdateQueueStatusRequest$InstanceId": "

The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.

", "UpdateQuickConnectConfigRequest$InstanceId": "

The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.

", "UpdateQuickConnectNameRequest$InstanceId": "

The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.

", + "UpdateRoutingProfileAgentAvailabilityTimerRequest$InstanceId": "

The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.

", "UpdateRoutingProfileConcurrencyRequest$InstanceId": "

The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.

", "UpdateRoutingProfileDefaultOutboundQueueRequest$InstanceId": "

The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.

", "UpdateRoutingProfileNameRequest$InstanceId": "

The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.

", @@ -4877,6 +4887,7 @@ "RoutingProfileReference$Id": "

The identifier of the routing profile.

", "RoutingProfileSummary$Id": "

The identifier of the routing profile.

", "RoutingProfiles$member": null, + "UpdateRoutingProfileAgentAvailabilityTimerRequest$RoutingProfileId": "

The identifier of the routing profile.

", "UpdateRoutingProfileConcurrencyRequest$RoutingProfileId": "

The identifier of the routing profile.

", "UpdateRoutingProfileDefaultOutboundQueueRequest$RoutingProfileId": "

The identifier of the routing profile.

", "UpdateRoutingProfileNameRequest$RoutingProfileId": "

The identifier of the routing profile.

", @@ -6238,6 +6249,11 @@ "refs": { } }, + "UpdateRoutingProfileAgentAvailabilityTimerRequest": { + "base": null, + "refs": { + } + }, "UpdateRoutingProfileConcurrencyRequest": { "base": null, "refs": { diff --git a/models/apis/connect/2017-08-08/endpoint-rule-set-1.json b/models/apis/connect/2017-08-08/endpoint-rule-set-1.json index 57834595dab..1f6adf2f2f3 100644 --- a/models/apis/connect/2017-08-08/endpoint-rule-set-1.json +++ b/models/apis/connect/2017-08-08/endpoint-rule-set-1.json @@ -58,52 +58,56 @@ "type": "error" }, { - "conditions": [], - "type": "tree", - "rules": [ + "conditions": [ { - "conditions": [ + "fn": "booleanEquals", + "argv": [ { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", - "type": "error" - }, - { - "conditions": [], - "endpoint": { - "url": { - "ref": "Endpoint" + "ref": "UseDualStack" }, - "properties": {}, - "headers": {} - }, - "type": "endpoint" + true + ] } - ] + ], + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], + "endpoint": { + "url": { + "ref": "Endpoint" + }, + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, { - "conditions": [], + "conditions": [ + { + "fn": "isSet", + "argv": [ + { + "ref": "Region" + } + ] + } + ], "type": "tree", "rules": [ { "conditions": [ { - "fn": "isSet", + "fn": "aws.partition", "argv": [ { "ref": "Region" } - ] + ], + "assign": "PartitionResult" } ], "type": "tree", @@ -111,13 +115,22 @@ { "conditions": [ { - "fn": "aws.partition", + "fn": "booleanEquals", "argv": [ { - "ref": "Region" - } - ], - "assign": "PartitionResult" + "ref": "UseFIPS" + }, + true + ] + }, + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] } ], "type": "tree", @@ -127,92 +140,83 @@ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseFIPS" - }, - true + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } ] }, { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - }, - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://connect-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, + } + ], + "type": "tree", + "rules": [ { "conditions": [], - "error": "FIPS and DualStack are enabled, but this partition does not support one or both", - "type": "error" + "endpoint": { + "url": "https://connect-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, + { + "conditions": [], + "error": "FIPS and DualStack are enabled, but this partition does not support one or both", + "type": "error" + } + ] + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], + "type": "tree", + "rules": [ { "conditions": [ { "fn": 
"booleanEquals", "argv": [ + true, { - "ref": "UseFIPS" - }, - true + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } ] } ], @@ -221,155 +225,115 @@ { "conditions": [ { - "fn": "booleanEquals", + "fn": "stringEquals", "argv": [ - true, + "aws-us-gov", { "fn": "getAttr", "argv": [ { "ref": "PartitionResult" }, - "supportsFIPS" + "name" ] } ] } ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "stringEquals", - "argv": [ - "aws-us-gov", - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "name" - ] - } - ] - } - ], - "endpoint": { - "url": "https://connect.{Region}.amazonaws.com", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - }, - { - "conditions": [], - "endpoint": { - "url": "https://connect-fips.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - } - ] + "endpoint": { + "url": "https://connect.{Region}.amazonaws.com", + "properties": {}, + "headers": {} + }, + "type": "endpoint" }, { "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" + "endpoint": { + "url": "https://connect-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" + } + ] + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] + } + ], + "type": "tree", + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://connect.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, - { - "conditions": [], - "error": "DualStack is enabled but this partition does not support DualStack", - "type": "error" } - ] - }, - { - "conditions": [], + ], "type": "tree", "rules": [ { "conditions": [], "endpoint": { - "url": "https://connect.{Region}.{PartitionResult#dnsSuffix}", + "url": "https://connect.{Region}.{PartitionResult#dualStackDnsSuffix}", "properties": {}, "headers": {} }, "type": "endpoint" } ] + }, + { + "conditions": [], + "error": "DualStack is enabled but this partition does not support DualStack", + "type": "error" } ] + }, + { + "conditions": [], + "endpoint": { + "url": "https://connect.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] - }, - { - "conditions": [], - "error": "Invalid Configuration: Missing Region", - "type": "error" } ] + }, + { + "conditions": [], + "error": "Invalid Configuration: Missing Region", + "type": "error" } ] } \ No newline at end of file diff --git a/models/apis/datasync/2018-11-09/api-2.json b/models/apis/datasync/2018-11-09/api-2.json index a67ce2920cd..3115a63fefa 100644 --- a/models/apis/datasync/2018-11-09/api-2.json +++ b/models/apis/datasync/2018-11-09/api-2.json @@ -942,7 +942,8 @@ "members":{ 
"Used":{"shape":"NonNegativeLong"}, "Provisioned":{"shape":"NonNegativeLong"}, - "LogicalUsed":{"shape":"NonNegativeLong"} + "LogicalUsed":{"shape":"NonNegativeLong"}, + "ClusterCloudStorageUsed":{"shape":"NonNegativeLong"} } }, "CollectionDurationMinutes":{ @@ -2289,7 +2290,8 @@ "ClusterBlockStorageLogicalUsed":{"shape":"NonNegativeLong"}, "Recommendations":{"shape":"Recommendations"}, "RecommendationStatus":{"shape":"RecommendationStatus"}, - "LunCount":{"shape":"NonNegativeLong"} + "LunCount":{"shape":"NonNegativeLong"}, + "ClusterCloudStorageUsed":{"shape":"NonNegativeLong"} } }, "NetAppONTAPClusters":{ diff --git a/models/apis/datasync/2018-11-09/docs-2.json b/models/apis/datasync/2018-11-09/docs-2.json index 71682ab43e1..e54f63d7c8a 100644 --- a/models/apis/datasync/2018-11-09/docs-2.json +++ b/models/apis/datasync/2018-11-09/docs-2.json @@ -12,7 +12,7 @@ "CreateLocationFsxOpenZfs": "

Creates an endpoint for an Amazon FSx for OpenZFS file system that DataSync can access for a transfer. For more information, see Creating a location for FSx for OpenZFS.

Request parameters related to SMB aren't supported with the CreateLocationFsxOpenZfs operation.

", "CreateLocationFsxWindows": "

Creates an endpoint for an Amazon FSx for Windows File Server file system.

", "CreateLocationHdfs": "

Creates an endpoint for a Hadoop Distributed File System (HDFS).

", - "CreateLocationNfs": "

Creates an endpoint for an Network File System (NFS) file server that DataSync can use for a data transfer.

", + "CreateLocationNfs": "

Creates an endpoint for a Network File System (NFS) file server that DataSync can use for a data transfer.

For more information, see Configuring transfers to or from an NFS file server.

If you're copying data to or from a Snowcone device, you can also use CreateLocationNfs to create your transfer location. For more information, see Configuring transfers with Snowcone.
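A minimal aws-sdk-go sketch of creating an NFS transfer location; the server hostname, export path, and agent ARN are placeholder assumptions:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datasync"
)

func main() {
	svc := datasync.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	out, err := svc.CreateLocationNfs(&datasync.CreateLocationNfsInput{
		ServerHostname: aws.String("nfs.example.com"), // placeholder DNS name or IPv4 address
		Subdirectory:   aws.String("/exports/data"),   // placeholder export path
		OnPremConfig: &datasync.OnPremConfig{
			AgentArns: []*string{
				aws.String("arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"), // placeholder
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.LocationArn))
}
```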

", "CreateLocationObjectStorage": "

Creates an endpoint for an object storage system that DataSync can access for a transfer. For more information, see Creating a location for object storage.

", "CreateLocationS3": "

A location is an endpoint for an Amazon S3 bucket. DataSync can use the location as a source or destination for copying data.

Before you create your location, make sure that you read the following sections:

For more information, see Creating an Amazon S3 location.

", "CreateLocationSmb": "

Creates an endpoint for a Server Message Block (SMB) file server that DataSync can use for a data transfer.

Before you begin, make sure that you understand how DataSync accesses an SMB file server.

", @@ -29,7 +29,7 @@ "DescribeLocationFsxOpenZfs": "

Provides details about how a DataSync location for an Amazon FSx for OpenZFS file system is configured.

Response elements related to SMB aren't supported with the DescribeLocationFsxOpenZfs operation.

", "DescribeLocationFsxWindows": "

Returns metadata about an Amazon FSx for Windows File Server location, such as information about its path.

", "DescribeLocationHdfs": "

Returns metadata, such as the authentication information about the Hadoop Distributed File System (HDFS) location.

", - "DescribeLocationNfs": "

Returns metadata, such as the path information, about an NFS location.

", + "DescribeLocationNfs": "

Provides details about how a DataSync transfer location for a Network File System (NFS) file server is configured.

", "DescribeLocationObjectStorage": "

Returns metadata about your DataSync location for an object storage system.

", "DescribeLocationS3": "

Returns metadata, such as bucket name, about an Amazon S3 bucket location.

", "DescribeLocationSmb": "

Returns metadata, such as the path and user information about an SMB location.

", @@ -56,7 +56,7 @@ "UpdateDiscoveryJob": "

Edits a DataSync discovery job configuration.

", "UpdateLocationAzureBlob": "

Modifies some configurations of the Microsoft Azure Blob Storage transfer location that you're using with DataSync.

", "UpdateLocationHdfs": "

Updates some parameters of a previously created location for a Hadoop Distributed File System cluster.

", - "UpdateLocationNfs": "

Updates some of the parameters of a previously created location for Network File System (NFS) access. For information about creating an NFS location, see Creating a location for NFS.

", + "UpdateLocationNfs": "

Modifies some configurations of the Network File System (NFS) transfer location that you're using with DataSync.

For more information, see Configuring transfers to or from an NFS file server.
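For example, pointing an existing NFS location at a different export path might look like the following sketch; the location ARN and path are placeholders, and it assumes the datasync client from the earlier example:

```go
// Sketch only: svc is a *datasync.DataSync client; values are placeholders.
_, err := svc.UpdateLocationNfs(&datasync.UpdateLocationNfsInput{
	LocationArn:  aws.String("arn:aws:datasync:us-east-1:111122223333:location/loc-0123456789abcdef0"),
	Subdirectory: aws.String("/exports/data/projects"),
})
if err != nil {
	log.Fatal(err)
}
```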

", "UpdateLocationObjectStorage": "

Updates some parameters of an existing object storage location that DataSync accesses for a transfer. For information about creating a self-managed object storage location, see Creating a location for object storage.

", "UpdateLocationSmb": "

Updates some of the parameters of a previously created location for Server Message Block (SMB) file system access. For information about creating an SMB location, see Creating a location for SMB.

", "UpdateStorageSystem": "

Modifies some configurations of an on-premises storage system resource that you're using with DataSync Discovery.

", @@ -106,7 +106,7 @@ "DescribeLocationObjectStorageResponse$AgentArns": "

The ARNs of the DataSync agents that can securely connect with your location.

", "DescribeLocationS3Response$AgentArns": "

If you are using DataSync on an Amazon Web Services Outpost, the Amazon Resource Names (ARNs) of the EC2 agents deployed on your Outpost. For more information about launching a DataSync agent on an Amazon Web Services Outpost, see Deploy your DataSync agent on Outposts.

", "DescribeLocationSmbResponse$AgentArns": "

The Amazon Resource Name (ARN) of the source SMB file system location that is created.

", - "OnPremConfig$AgentArns": "

ARNs of the agents to use for an NFS location.

", + "OnPremConfig$AgentArns": "

The Amazon Resource Names (ARNs) of the agents connecting to a transfer location.

", "UpdateLocationAzureBlobRequest$AgentArns": "

Specifies the Amazon Resource Name (ARN) of the DataSync agent that can connect with your Azure Blob Storage container.

You can specify more than one agent. For more information, see Using multiple agents for your transfer.

", "UpdateLocationHdfsRequest$AgentArns": "

The ARNs of the agents that are used to connect to the HDFS cluster.

", "UpdateLocationObjectStorageRequest$AgentArns": "

Specifies the Amazon Resource Names (ARNs) of the DataSync agents that can securely connect with your location.

", @@ -1161,8 +1161,8 @@ "DescribeLocationFsxWindowsResponse$LocationArn": "

The Amazon Resource Name (ARN) of the FSx for Windows File Server location that was described.

", "DescribeLocationHdfsRequest$LocationArn": "

The Amazon Resource Name (ARN) of the HDFS cluster location to describe.

", "DescribeLocationHdfsResponse$LocationArn": "

The ARN of the HDFS cluster location.

", - "DescribeLocationNfsRequest$LocationArn": "

The Amazon Resource Name (ARN) of the NFS location to describe.

", - "DescribeLocationNfsResponse$LocationArn": "

The Amazon Resource Name (ARN) of the NFS location that was described.

", + "DescribeLocationNfsRequest$LocationArn": "

Specifies the Amazon Resource Name (ARN) of the NFS location that you want information about.

", + "DescribeLocationNfsResponse$LocationArn": "

The ARN of the NFS location.

", "DescribeLocationObjectStorageRequest$LocationArn": "

The Amazon Resource Name (ARN) of the object storage system location that you want information about.

", "DescribeLocationObjectStorageResponse$LocationArn": "

The ARN of the object storage system location.

", "DescribeLocationS3Request$LocationArn": "

The Amazon Resource Name (ARN) of the Amazon S3 bucket location to describe.

", @@ -1174,7 +1174,7 @@ "LocationListEntry$LocationArn": "

The Amazon Resource Name (ARN) of the location. For Network File System (NFS) or Amazon EFS, the location is the export path. For Amazon S3, the location is the prefix path that you want to mount and use as the root of the location.

", "UpdateLocationAzureBlobRequest$LocationArn": "

Specifies the ARN of the Azure Blob Storage transfer location that you're updating.

", "UpdateLocationHdfsRequest$LocationArn": "

The Amazon Resource Name (ARN) of the source HDFS cluster location.

", - "UpdateLocationNfsRequest$LocationArn": "

Specifies the Amazon Resource Name (ARN) of the NFS location that you want to update.

", + "UpdateLocationNfsRequest$LocationArn": "

Specifies the Amazon Resource Name (ARN) of the NFS transfer location that you want to update.

", "UpdateLocationObjectStorageRequest$LocationArn": "

Specifies the ARN of the object storage system location that you're updating.

", "UpdateLocationSmbRequest$LocationArn": "

The Amazon Resource Name (ARN) of the SMB location to update.

" } @@ -1219,7 +1219,7 @@ "DescribeLocationFsxOpenZfsResponse$LocationUri": "

The uniform resource identifier (URI) of the FSx for OpenZFS location that was described.

Example: fsxz://us-west-2.fs-1234567890abcdef02/fsx/folderA/folder

", "DescribeLocationFsxWindowsResponse$LocationUri": "

The URL of the FSx for Windows File Server location that was described.

", "DescribeLocationHdfsResponse$LocationUri": "

The URI of the HDFS cluster location.

", - "DescribeLocationNfsResponse$LocationUri": "

The URL of the source NFS location that was described.

", + "DescribeLocationNfsResponse$LocationUri": "

The URL of the NFS location.

", "DescribeLocationObjectStorageResponse$LocationUri": "

The URL of the object storage system location.

", "DescribeLocationS3Response$LocationUri": "

The URL of the Amazon S3 location that was described.

", "DescribeLocationSmbResponse$LocationUri": "

The URL of the source SMB location that was described.

", @@ -1343,8 +1343,8 @@ "NfsMountOptions": { "base": "

Specifies how DataSync can access a location using the NFS protocol.

", "refs": { - "CreateLocationNfsRequest$MountOptions": "

Specifies the mount options that DataSync can use to mount your NFS share.

", - "DescribeLocationNfsResponse$MountOptions": "

The mount options that DataSync uses to mount your NFS share.

", + "CreateLocationNfsRequest$MountOptions": "

Specifies the options that DataSync can use to mount your NFS file server.

", + "DescribeLocationNfsResponse$MountOptions": "

The mount options that DataSync uses to mount your NFS file server.

", "FsxProtocolNfs$MountOptions": null, "UpdateLocationNfsRequest$MountOptions": null } @@ -1352,8 +1352,8 @@ "NfsSubdirectory": { "base": null, "refs": { - "CreateLocationNfsRequest$Subdirectory": "

Specifies the subdirectory in the NFS file server that DataSync transfers to or from. The NFS path should be a path that's exported by the NFS server, or a subdirectory of that path. The path should be such that it can be mounted by other NFS clients in your network.

To see all the paths exported by your NFS server, run \"showmount -e nfs-server-name\" from an NFS client that has access to your server. You can specify any directory that appears in the results, and any subdirectory of that directory. Ensure that the NFS export is accessible without Kerberos authentication.

To transfer all the data in the folder you specified, DataSync needs to have permissions to read all the data. To ensure this, either configure the NFS export with no_root_squash, or ensure that the permissions for all of the files that you want DataSync allow read access for all users. Doing either enables the agent to read the files. For the agent to access directories, you must additionally enable all execute access.

If you are copying data to or from your Snowcone device, see NFS Server on Snowcone for more information.

", - "UpdateLocationNfsRequest$Subdirectory": "

Specifies the subdirectory in your NFS file system that DataSync uses to read from or write to during a transfer. The NFS path should be exported by the NFS server, or a subdirectory of that path. The path should be such that it can be mounted by other NFS clients in your network.

To see all the paths exported by your NFS server, run \"showmount -e nfs-server-name\" from an NFS client that has access to your server. You can specify any directory that appears in the results, and any subdirectory of that directory. Ensure that the NFS export is accessible without Kerberos authentication.

To transfer all the data in the folder that you specified, DataSync must have permissions to read all the data. To ensure this, either configure the NFS export with no_root_squash, or ensure that the files you want DataSync to access have permissions that allow read access for all users. Doing either option enables the agent to read the files. For the agent to access directories, you must additionally enable all execute access.

If you are copying data to or from your Snowcone device, see NFS Server on Snowcone for more information.

" + "CreateLocationNfsRequest$Subdirectory": "

Specifies the export path in your NFS file server that you want DataSync to mount.

This path (or a subdirectory of the path) is where DataSync transfers data to or from. For information on configuring an export for DataSync, see Accessing NFS file servers.

", + "UpdateLocationNfsRequest$Subdirectory": "

Specifies the export path in your NFS file server that you want DataSync to mount.

This path (or a subdirectory of the path) is where DataSync transfers data to or from. For information on configuring an export for DataSync, see Accessing NFS file servers.

" } }, "NfsVersion": { @@ -1395,12 +1395,14 @@ "Capacity$Used": "

The amount of space that's being used in a storage system resource.

", "Capacity$Provisioned": "

The total amount of space available in a storage system resource.

", "Capacity$LogicalUsed": "

The amount of space that's being used in a storage system resource without accounting for compression or deduplication.

", + "Capacity$ClusterCloudStorageUsed": "

The amount of space in the cluster that's in cloud storage (for example, if you're using data tiering).

", "NetAppONTAPCluster$CifsShareCount": "

The number of CIFS shares in the cluster.

", "NetAppONTAPCluster$NfsExportedVolumes": "

The number of NFS volumes in the cluster.

", "NetAppONTAPCluster$ClusterBlockStorageSize": "

The total storage space that's available in the cluster.

", "NetAppONTAPCluster$ClusterBlockStorageUsed": "

The storage space that's being used in a cluster.

", "NetAppONTAPCluster$ClusterBlockStorageLogicalUsed": "

The storage space that's being used in the cluster without accounting for compression or deduplication.

", "NetAppONTAPCluster$LunCount": "

The number of LUNs (logical unit numbers) in the cluster.

", + "NetAppONTAPCluster$ClusterCloudStorageUsed": "

The amount of space in the cluster that's in cloud storage (for example, if you're using data tiering).

", "NetAppONTAPSVM$CifsShareCount": "

The number of CIFS shares in the SVM.

", "NetAppONTAPSVM$TotalCapacityUsed": "

The storage space that's being used in the SVM.

", "NetAppONTAPSVM$TotalCapacityProvisioned": "

The total storage space that's available in the SVM.

", @@ -1468,9 +1470,9 @@ } }, "OnPremConfig": { - "base": "

A list of Amazon Resource Names (ARNs) of agents to use for a Network File System (NFS) location.

", + "base": "

The DataSync agents that are connecting to a Network File System (NFS) location.

", "refs": { - "CreateLocationNfsRequest$OnPremConfig": "

Specifies the Amazon Resource Names (ARNs) of agents that DataSync uses to connect to your NFS file server.

If you are copying data to or from your Snowcone device, see NFS Server on Snowcone for more information.

", + "CreateLocationNfsRequest$OnPremConfig": "

Specifies the Amazon Resource Name (ARN) of the DataSync agent that you want to connect to your NFS file server.

You can specify more than one agent. For more information, see Using multiple agents for transfers.

", "DescribeLocationNfsResponse$OnPremConfig": null, "UpdateLocationNfsRequest$OnPremConfig": null } @@ -1725,7 +1727,7 @@ "ServerHostname": { "base": null, "refs": { - "CreateLocationNfsRequest$ServerHostname": "

Specifies the IP address or domain name of your NFS file server. An agent that is installed on-premises uses this hostname to mount the NFS server in a network.

If you are copying data to or from your Snowcone device, see NFS Server on Snowcone for more information.

You must specify be an IP version 4 address or Domain Name System (DNS)-compliant name.

", + "CreateLocationNfsRequest$ServerHostname": "

Specifies the Domain Name System (DNS) name or IP version 4 address of the NFS file server that your DataSync agent connects to.

", "CreateLocationObjectStorageRequest$ServerHostname": "

Specifies the domain name or IP address of the object storage server. A DataSync agent uses this hostname to mount the object storage server in a network.

", "CreateLocationSmbRequest$ServerHostname": "

Specifies the Domain Name Service (DNS) name or IP address of the SMB file server that your DataSync agent will mount.

You can't specify an IP version 6 (IPv6) address.

" } @@ -2035,7 +2037,7 @@ "DescribeLocationFsxOpenZfsResponse$CreationTime": "

The time that the FSx for OpenZFS location was created.

", "DescribeLocationFsxWindowsResponse$CreationTime": "

The time that the FSx for Windows File Server location was created.

", "DescribeLocationHdfsResponse$CreationTime": "

The time that the HDFS location was created.

", - "DescribeLocationNfsResponse$CreationTime": "

The time that the NFS location was created.

", + "DescribeLocationNfsResponse$CreationTime": "

The time when the NFS location was created.

", "DescribeLocationObjectStorageResponse$CreationTime": "

The time that the location was created.

", "DescribeLocationS3Response$CreationTime": "

The time that the Amazon S3 bucket location was created.

", "DescribeLocationSmbResponse$CreationTime": "

The time that the SMB location was created.

", diff --git a/models/apis/datasync/2018-11-09/endpoint-rule-set-1.json b/models/apis/datasync/2018-11-09/endpoint-rule-set-1.json index d1fda177e76..0d32931aa5a 100644 --- a/models/apis/datasync/2018-11-09/endpoint-rule-set-1.json +++ b/models/apis/datasync/2018-11-09/endpoint-rule-set-1.json @@ -58,52 +58,56 @@ "type": "error" }, { - "conditions": [], - "type": "tree", - "rules": [ + "conditions": [ { - "conditions": [ + "fn": "booleanEquals", + "argv": [ { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", - "type": "error" - }, - { - "conditions": [], - "endpoint": { - "url": { - "ref": "Endpoint" + "ref": "UseDualStack" }, - "properties": {}, - "headers": {} - }, - "type": "endpoint" + true + ] } - ] + ], + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], + "endpoint": { + "url": { + "ref": "Endpoint" + }, + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, { - "conditions": [], + "conditions": [ + { + "fn": "isSet", + "argv": [ + { + "ref": "Region" + } + ] + } + ], "type": "tree", "rules": [ { "conditions": [ { - "fn": "isSet", + "fn": "aws.partition", "argv": [ { "ref": "Region" } - ] + ], + "assign": "PartitionResult" } ], "type": "tree", @@ -111,13 +115,22 @@ { "conditions": [ { - "fn": "aws.partition", + "fn": "booleanEquals", "argv": [ { - "ref": "Region" - } - ], - "assign": "PartitionResult" + "ref": "UseFIPS" + }, + true + ] + }, + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] } ], "type": "tree", @@ -127,224 +140,175 @@ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseFIPS" - }, - true + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } ] }, { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - }, - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://datasync-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, + } + ], + "type": "tree", + "rules": [ { "conditions": [], - "error": "FIPS and DualStack are enabled, but this partition does not support one or both", - "type": "error" + "endpoint": { + "url": "https://datasync-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, + { + "conditions": [], + "error": "FIPS and DualStack are enabled, but this partition does not support one or both", + "type": "error" + } + ] + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], + "type": "tree", + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseFIPS" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ 
- true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ { - "conditions": [], - "endpoint": { - "url": "https://datasync-fips.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsFIPS" ] } ] - }, + } + ], + "type": "tree", + "rules": [ { "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" + "endpoint": { + "url": "https://datasync-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" + } + ] + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] + } + ], + "type": "tree", + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://datasync.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, - { - "conditions": [], - "error": "DualStack is enabled but this partition does not support DualStack", - "type": "error" } - ] - }, - { - "conditions": [], + ], "type": "tree", "rules": [ { "conditions": [], "endpoint": { - "url": "https://datasync.{Region}.{PartitionResult#dnsSuffix}", + "url": "https://datasync.{Region}.{PartitionResult#dualStackDnsSuffix}", "properties": {}, "headers": {} }, "type": "endpoint" } ] + }, + { + "conditions": [], + "error": "DualStack is enabled but this partition does not support DualStack", + "type": "error" } ] + }, + { + "conditions": [], + "endpoint": { + "url": "https://datasync.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] - }, - { - "conditions": [], - "error": "Invalid Configuration: Missing Region", - "type": "error" } ] + }, + { + "conditions": [], + "error": "Invalid Configuration: Missing Region", + "type": "error" } ] } \ No newline at end of file diff --git a/models/apis/ecs/2014-11-13/docs-2.json b/models/apis/ecs/2014-11-13/docs-2.json index ad454835fdf..f6006add98a 100644 --- a/models/apis/ecs/2014-11-13/docs-2.json +++ b/models/apis/ecs/2014-11-13/docs-2.json @@ -21,7 +21,7 @@ "DescribeServices": "

Describes the specified services running in your cluster.

", "DescribeTaskDefinition": "

Describes a task definition. You can specify a family and revision to find information about a specific task definition, or you can simply specify the family to find the latest ACTIVE revision in that family.

You can only describe INACTIVE task definitions while an active task or service references them.

", "DescribeTaskSets": "

Describes the task sets in the specified cluster and service. This is used when a service uses the EXTERNAL deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.

", - "DescribeTasks": "

Describes a specified task or tasks.

Currently, stopped tasks appear in the returned results for at least one hour.

", + "DescribeTasks": "

Describes a specified task or tasks.

Currently, stopped tasks appear in the returned results for at least one hour.

If you have tasks with tags, and then delete the cluster, the tagged tasks are returned in the response. If you create a new cluster with the same name as the deleted cluster, the tagged tasks are not included in the response.
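A minimal aws-sdk-go sketch of describing a task and requesting its tags; the cluster name and task ARN are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	out, err := svc.DescribeTasks(&ecs.DescribeTasksInput{
		Cluster: aws.String("my-cluster"), // placeholder cluster name
		Tasks: []*string{
			aws.String("arn:aws:ecs:us-east-1:111122223333:task/my-cluster/0123456789abcdef0"), // placeholder
		},
		// Request tags explicitly; they are omitted from the response otherwise.
		Include: []*string{aws.String(ecs.TaskFieldTags)},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range out.Tasks {
		fmt.Println(aws.StringValue(t.TaskArn), aws.StringValue(t.LastStatus))
	}
}
```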

", "DiscoverPollEndpoint": "

This action is only used by the Amazon ECS agent, and it is not intended for use outside of the agent.

Returns an endpoint for the Amazon ECS agent to poll for updates.

", "ExecuteCommand": "

Runs a command remotely on a container within a task.

If you use a condition key in your IAM policy to refine the conditions for the policy statement, for example limit the actions to a specific cluster, you receive an AccessDeniedException when there is a mismatch between the condition key value and the corresponding parameter value.

For information about required permissions and considerations, see Using Amazon ECS Exec for debugging in the Amazon ECS Developer Guide.

", "GetTaskProtection": "

Retrieves the protection status of tasks in an Amazon ECS service.

", @@ -34,7 +34,7 @@ "ListTagsForResource": "

List the tags for an Amazon ECS resource.

", "ListTaskDefinitionFamilies": "

Returns a list of task definition families that are registered to your account. This list includes task definition families that no longer have any ACTIVE task definition revisions.

You can filter out task definition families that don't contain any ACTIVE task definition revisions by setting the status parameter to ACTIVE. You can also filter the results with the familyPrefix parameter.

", "ListTaskDefinitions": "

Returns a list of task definitions that are registered to your account. You can filter the results by family name with the familyPrefix parameter or by status with the status parameter.

", - "ListTasks": "

Returns a list of tasks. You can filter the results by cluster, task definition family, container instance, launch type, what IAM principal started the task, or by the desired status of the task.

Recently stopped tasks might appear in the returned results. Currently, stopped tasks appear in the returned results for at least one hour.

", + "ListTasks": "

Returns a list of tasks. You can filter the results by cluster, task definition family, container instance, launch type, which IAM principal started the task, or by the desired status of the task.

Recently stopped tasks might appear in the returned results.

", "PutAccountSetting": "

Modifies an account setting. Account settings are set on a per-Region basis.

If you change the root user account setting, the default settings are reset for users and roles that do not have specified individual account settings. For more information, see Account Settings in the Amazon Elastic Container Service Developer Guide.

When serviceLongArnFormat, taskLongArnFormat, or containerInstanceLongArnFormat are specified, the Amazon Resource Name (ARN) and resource ID format of the resource type for a specified user, role, or the root user for an account is affected. The opt-in and opt-out account setting must be set for each Amazon ECS resource separately. The ARN and resource ID format of a resource is defined by the opt-in status of the user or role that created the resource. You must turn on this setting to use Amazon ECS features such as resource tagging.

When awsvpcTrunking is specified, the elastic network interface (ENI) limit for any new container instances that support the feature is changed. If awsvpcTrunking is turned on, any new container instances that support the feature are launched with the increased ENI limits available to them. For more information, see Elastic Network Interface Trunking in the Amazon Elastic Container Service Developer Guide.

When containerInsights is specified, the default setting indicating whether Amazon Web Services CloudWatch Container Insights is turned on for your clusters is changed. If containerInsights is turned on, any new clusters that are created will have Container Insights turned on unless you disable it during cluster creation. For more information, see CloudWatch Container Insights in the Amazon Elastic Container Service Developer Guide.

Amazon ECS is introducing tagging authorization for resource creation. Users must have permissions for actions that create the resource, such as ecs:CreateCluster. If tags are specified when you create a resource, Amazon Web Services performs additional authorization to verify if users or roles have permissions to create tags. Therefore, you must grant explicit permissions to use the ecs:TagResource action. For more information, see Grant permission to tag resources on creation in the Amazon ECS Developer Guide.
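For instance, opting the current IAM identity in to long service ARNs (a prerequisite for resource tagging) might look like the following sketch; it assumes the ecs client from the previous example:

```go
// Sketch only: svc is a *ecs.ECS client as above.
_, err := svc.PutAccountSetting(&ecs.PutAccountSettingInput{
	Name:  aws.String(ecs.SettingNameServiceLongArnFormat),
	Value: aws.String("enabled"), // "enabled" or "disabled"
})
if err != nil {
	log.Fatal(err)
}
```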

", "PutAccountSettingDefault": "

Modifies an account setting for all users on an account for whom no individual account setting has been specified. Account settings are set on a per-Region basis.

", "PutAttributes": "

Create or update an attribute on an Amazon ECS resource. If the attribute doesn't exist, it's created. If the attribute exists, its value is replaced with the specified value. To delete an attribute, use DeleteAttributes. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.

", @@ -54,7 +54,7 @@ "UpdateClusterSettings": "

Modifies the settings to use for a cluster.

", "UpdateContainerAgent": "

Updates the Amazon ECS container agent on a specified container instance. Updating the Amazon ECS container agent doesn't interrupt running tasks or services on the container instance. The process for updating the agent differs depending on whether your container instance was launched with the Amazon ECS-optimized AMI or another operating system.

The UpdateContainerAgent API isn't supported for container instances using the Amazon ECS-optimized Amazon Linux 2 (arm64) AMI. To update the container agent, you can update the ecs-init package. This updates the agent. For more information, see Updating the Amazon ECS container agent in the Amazon Elastic Container Service Developer Guide.

Agent updates with the UpdateContainerAgent API operation do not apply to Windows container instances. We recommend that you launch new container instances to update the agent version in your Windows clusters.

The UpdateContainerAgent API requires an Amazon ECS-optimized AMI or Amazon Linux AMI with the ecs-init service installed and running. For help updating the Amazon ECS container agent on other operating systems, see Manually updating the Amazon ECS container agent in the Amazon Elastic Container Service Developer Guide.

", "UpdateContainerInstancesState": "

Modifies the status of an Amazon ECS container instance.

Once a container instance has reached an ACTIVE state, you can change the status of a container instance to DRAINING to manually remove an instance from a cluster, for example to perform system updates, update the Docker daemon, or scale down the cluster size.

A container instance can't be changed to DRAINING until it has reached an ACTIVE status. If the instance is in any other status, an error will be received.

When you set a container instance to DRAINING, Amazon ECS prevents new tasks from being scheduled for placement on the container instance and replacement service tasks are started on other container instances in the cluster if the resources are available. Service tasks on the container instance that are in the PENDING state are stopped immediately.

Service tasks on the container instance that are in the RUNNING state are stopped and replaced according to the service's deployment configuration parameters, minimumHealthyPercent and maximumPercent. You can change the deployment configuration of your service using UpdateService.

Any PENDING or RUNNING tasks that do not belong to a service aren't affected. You must wait for them to finish or stop them manually.

A container instance has completed draining when it has no more RUNNING tasks. You can verify this using ListTasks.

When a container instance has been drained, you can set a container instance to ACTIVE status and once it has reached that status the Amazon ECS scheduler can begin scheduling tasks on the instance again.

", - "UpdateService": "

Modifies the parameters of a service.

For services using the rolling update (ECS) you can update the desired count, deployment configuration, network configuration, load balancers, service registries, enable ECS managed tags option, propagate tags option, task placement constraints and strategies, and task definition. When you update any of these parameters, Amazon ECS starts new tasks with the new configuration.

For services using the blue/green (CODE_DEPLOY) deployment controller, only the desired count, deployment configuration, health check grace period, task placement constraints and strategies, enable ECS managed tags option, and propagate tags can be updated using this API. If the network configuration, platform version, task definition, or load balancer need to be updated, create a new CodeDeploy deployment. For more information, see CreateDeployment in the CodeDeploy API Reference.

For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, health check grace period, enable ECS managed tags option, and propagate tags option, using this API. If the launch type, load balancer, network configuration, platform version, or task definition need to be updated, create a new task set For more information, see CreateTaskSet.

You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.

If you have updated the Docker image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy.

If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, my_image:latest), you don't need to create a new revision of your task definition. You can update the service using the forceNewDeployment option. The new tasks launched by the deployment pull the current image/tag combination from your repository when they start.

You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.

When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout. After this, SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.

When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic.

When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:

You must have a service-linked role when you update any of the following service properties. If you specified a custom role when you created the service, Amazon ECS automatically replaces the roleARN associated with the service with the ARN of your service-linked role. For more information, see Service-linked roles in the Amazon Elastic Container Service Developer Guide.

", + "UpdateService": "

Modifies the parameters of a service.

For services using the rolling update (ECS) deployment controller, you can update the desired count, deployment configuration, network configuration, load balancers, service registries, enable ECS managed tags option, propagate tags option, task placement constraints and strategies, and task definition. When you update any of these parameters, Amazon ECS starts new tasks with the new configuration.

For services using the blue/green (CODE_DEPLOY) deployment controller, only the desired count, deployment configuration, health check grace period, task placement constraints and strategies, enable ECS managed tags option, and propagate tags can be updated using this API. If the network configuration, platform version, task definition, or load balancer needs to be updated, create a new CodeDeploy deployment. For more information, see CreateDeployment in the CodeDeploy API Reference.

For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, health check grace period, enable ECS managed tags option, and propagate tags option using this API. If the launch type, load balancer, network configuration, platform version, or task definition needs to be updated, create a new task set. For more information, see CreateTaskSet.

You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.

If you have updated the Docker image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy.

If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, my_image:latest), you don't need to create a new revision of your task definition. You can update the service using the forceNewDeployment option. The new tasks launched by the deployment pull the current image/tag combination from your repository when they start.

You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.

When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout. After this, SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.

When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic.

When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:

You must have a service-linked role when you update any of the following service properties:

For more information about the role, see the CreateService request parameter role.
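A short aws-sdk-go sketch of the forceNewDeployment path described above; the cluster name, service name, and desired count are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	// Redeploy the current task definition so new tasks re-pull the
	// current image/tag combination (for example, my_image:latest).
	_, err := svc.UpdateService(&ecs.UpdateServiceInput{
		Cluster:            aws.String("my-cluster"),
		Service:            aws.String("my-service"),
		DesiredCount:       aws.Int64(3),
		ForceNewDeployment: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```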

", "UpdateServicePrimaryTaskSet": "

Modifies which task set in a service is the primary task set. Any parameters that are updated on the primary task set in a service will transition to the service. This is used when a service uses the EXTERNAL deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.

", "UpdateTaskProtection": "

Updates the protection status of a task. You can set protectionEnabled to true to protect your task from termination during scale-in events from Service Autoscaling or deployments.

By default, task protection expires after 2 hours, at which point Amazon ECS clears the protectionEnabled property, making the task eligible for termination by a subsequent scale-in event.

You can specify a custom expiration period for task protection from 1 minute to up to 2,880 minutes (48 hours). To specify the custom expiration period, set the expiresInMinutes property. The expiresInMinutes property is always reset when you invoke this operation for a task that already has protectionEnabled set to true. You can keep extending the protection expiration period of a task by invoking this operation repeatedly.

To learn more about Amazon ECS task protection, see Task scale-in protection in the Amazon Elastic Container Service Developer Guide.

This operation is only supported for tasks belonging to an Amazon ECS service. Invoking this operation for a standalone task results in a TASK_NOT_VALID failure. For more information, see API failure reasons.

If you prefer to set task protection from within the container, we recommend using the Task scale-in protection endpoint.
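For illustration, a sketch of setting task protection with a custom expiration via aws-sdk-go; all identifiers are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	// Protect a service task for 60 minutes; calling this again for the
	// same task resets the expiration window.
	_, err := svc.UpdateTaskProtection(&ecs.UpdateTaskProtectionInput{
		Cluster:           aws.String("my-cluster"),
		Tasks:             aws.StringSlice([]string{"<task-id-or-arn>"}),
		ProtectionEnabled: aws.Bool(true),
		ExpiresInMinutes:  aws.Int64(60),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```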

", "UpdateTaskSet": "

Modifies a task set. This is used when a service uses the EXTERNAL deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.

" @@ -219,8 +219,8 @@ "Container$exitCode": "

The exit code returned from the container.

", "ContainerDefinition$memory": "

The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory value, if one is specified. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run.

If using the Fargate launch type, this parameter is optional.

If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level memory and memoryReservation value, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.

The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.

The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.

", "ContainerDefinition$memoryReservation": "

The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the Create a container section of the Docker Remote API and the --memory-reservation option to docker run.

If a task-level memory value is not specified, you must specify a non-zero integer for one or both of memory or memoryReservation in a container definition. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.

For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.

The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.

The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
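The 128 MiB soft limit / 300 MiB hard limit example above looks like this in aws-sdk-go; the family name and image are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	_, err := svc.RegisterTaskDefinition(&ecs.RegisterTaskDefinitionInput{
		Family: aws.String("bursty-app"),
		ContainerDefinitions: []*ecs.ContainerDefinition{{
			Name:              aws.String("app"),
			Image:             aws.String("my_image:latest"),
			Essential:         aws.Bool(true),
			Memory:            aws.Int64(300), // hard limit (MiB); exceeding it kills the container
			MemoryReservation: aws.Int64(128), // soft limit (MiB); must be less than Memory
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```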

", - "ContainerDefinition$startTimeout": "

Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE, SUCCESS, or HEALTHY status. If a startTimeout value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a STOPPED state.

When the ECS_CONTAINER_START_TIMEOUT container agent configuration variable is used, it's enforced independently from this start timeout value.

For tasks using the Fargate launch type, the task or service requires the following platforms:

For tasks using the EC2 launch type, your container instances require at least version 1.26.0 of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.

", - "ContainerDefinition$stopTimeout": "

Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.

For tasks using the Fargate launch type, the task or service requires the following platforms:

The max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used.

For tasks that use the EC2 launch type, if the stopTimeout parameter isn't specified, the value set for the Amazon ECS container agent configuration variable ECS_CONTAINER_STOP_TIMEOUT is used. If neither the stopTimeout parameter nor the ECS_CONTAINER_STOP_TIMEOUT agent configuration variable is set, then the default value of 30 seconds is used for both Linux and Windows containers. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.

", + "ContainerDefinition$startTimeout": "

Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE, SUCCESS, or HEALTHY status. If a startTimeout value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a STOPPED state.

When the ECS_CONTAINER_START_TIMEOUT container agent configuration variable is used, it's enforced independently from this start timeout value.

For tasks using the Fargate launch type, the task or service requires the following platforms:

For tasks using the EC2 launch type, your container instances require at least version 1.26.0 of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.

The valid values are 2-120 seconds.

", + "ContainerDefinition$stopTimeout": "

Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.

For tasks using the Fargate launch type, the task or service requires the following platforms:

The max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used.

For tasks that use the EC2 launch type, if the stopTimeout parameter isn't specified, the value set for the Amazon ECS container agent configuration variable ECS_CONTAINER_STOP_TIMEOUT is used. If neither the stopTimeout parameter nor the ECS_CONTAINER_STOP_TIMEOUT agent configuration variable is set, then the default value of 30 seconds is used for both Linux and Windows containers. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.

The valid values are 2-120 seconds.
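A sketch of the containerA/containerB dependency with start and stop timeouts; names and images are placeholders, and the HEALTHY condition assumes containerB also defines a container health check, which is omitted here:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	_, err := svc.RegisterTaskDefinition(&ecs.RegisterTaskDefinitionInput{
		Family: aws.String("dependency-demo"),
		ContainerDefinitions: []*ecs.ContainerDefinition{
			{
				Name:   aws.String("containerA"),
				Image:  aws.String("my_app:latest"),
				Memory: aws.Int64(256),
				// containerA waits for containerB to report HEALTHY.
				DependsOn: []*ecs.ContainerDependency{{
					ContainerName: aws.String("containerB"),
					Condition:     aws.String("HEALTHY"),
				}},
			},
			{
				Name:         aws.String("containerB"),
				Image:        aws.String("my_sidecar:latest"),
				Memory:       aws.Int64(128),
				StartTimeout: aws.Int64(120), // seconds; valid values are 2-120
				StopTimeout:  aws.Int64(60),  // seconds before the container is forcefully killed
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```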

", "ContainerOverride$cpu": "

The number of cpu units reserved for the container, instead of the default value from the task definition. You must also specify a container name.

", "ContainerOverride$memory": "

The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.

", "ContainerOverride$memoryReservation": "

The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.

", @@ -250,7 +250,7 @@ "NetworkBinding$containerPort": "

The port number on the container that's used with the network binding.

", "NetworkBinding$hostPort": "

The port number on the host that's used with the network binding.

", "PortMapping$containerPort": "

The port number on the container that's bound to the user-specified or automatically assigned host port.

If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort.

If you use containers in a task with the bridge network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort. Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance.

", - "PortMapping$hostPort": "

The port number on the container instance to reserve for your container.

If you specify a containerPortRange, leave this field empty and the value of the hostPort is set as follows:

If you use containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort.

If you use containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.

The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.

The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.

", + "PortMapping$hostPort": "

The port number on the container instance to reserve for your container.

If you specify a containerPortRange, leave this field empty and the value of the hostPort is set as follows:

If you use containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort.

If you use containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.

The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.

The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.
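For illustration, a bridge-mode port mapping that requests an ephemeral host port by setting hostPort to 0; the family name and image are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	_, err := svc.RegisterTaskDefinition(&ecs.RegisterTaskDefinitionInput{
		Family:      aws.String("web-demo"),
		NetworkMode: aws.String("bridge"),
		ContainerDefinitions: []*ecs.ContainerDefinition{{
			Name:   aws.String("web"),
			Image:  aws.String("nginx:latest"),
			Memory: aws.Int64(128),
			PortMappings: []*ecs.PortMapping{{
				ContainerPort: aws.Int64(80),
				HostPort:      aws.Int64(0), // auto-assigned from the ephemeral port range
				Protocol:      aws.String("tcp"),
			}},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```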

", "RunTaskRequest$count": "

The number of instantiations of the specified task to place on your cluster. You can specify up to 10 tasks for each call.

", "Service$healthCheckGracePeriodSeconds": "

The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.

", "ServiceRegistry$port": "

The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.

", @@ -446,7 +446,7 @@ "refs": { "RegisterTaskDefinitionRequest$requiresCompatibilities": "

The task launch type that Amazon ECS validates the task definition against. A client exception is returned if the task definition doesn't validate against the compatibilities specified. If no value is specified, the parameter is omitted from the response.

", "TaskDefinition$compatibilities": "

The task launch types the task definition validated against during task definition registration. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.

", - "TaskDefinition$requiresCompatibilities": "

The task launch types the task definition was validated against. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.

" + "TaskDefinition$requiresCompatibilities": "

The task launch types the task definition was validated against. The valid values are EC2, FARGATE, and EXTERNAL. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.

" } }, "Connectivity": { @@ -534,7 +534,7 @@ } }, "ContainerOverride": { - "base": "

The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {\"containerOverrides\": [ ] }. If a non-empty container override is specified, the name parameter must be included.

", + "base": "

The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {\"containerOverrides\": [ ] }. If a non-empty container override is specified, the name parameter must be included.

You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide.
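A hedged sketch of a non-empty container override on RunTask; the cluster, task definition, container name, and command are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	// A non-empty override must name the container it applies to.
	_, err := svc.RunTask(&ecs.RunTaskInput{
		Cluster:        aws.String("my-cluster"),
		TaskDefinition: aws.String("my-task:1"),
		Overrides: &ecs.TaskOverride{
			ContainerOverrides: []*ecs.ContainerOverride{{
				Name:    aws.String("app"),
				Command: aws.StringSlice([]string{"/bin/run-migrations.sh"}),
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```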

", "refs": { "ContainerOverrides$member": null } @@ -686,7 +686,7 @@ } }, "DeploymentCircuitBreaker": { - "base": "

The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type.

The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If it is turned on, a service deployment will transition to a failed state and stop launching new tasks. You can also configure Amazon ECS to roll back your service to the last completed deployment after a failure. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.

", + "base": "

The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type.

The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If it is turned on, a service deployment will transition to a failed state and stop launching new tasks. You can also configure Amazon ECS to roll back your service to the last completed deployment after a failure. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.

For more information about API failure reasons, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
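As a sketch, turning on the circuit breaker with rollback for a rolling update (ECS) service via aws-sdk-go; the cluster and service names are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	_, err := svc.UpdateService(&ecs.UpdateServiceInput{
		Cluster: aws.String("my-cluster"),
		Service: aws.String("my-service"),
		DeploymentConfiguration: &ecs.DeploymentConfiguration{
			DeploymentCircuitBreaker: &ecs.DeploymentCircuitBreaker{
				Enable:   aws.Bool(true),
				Rollback: aws.Bool(true), // roll back to the last completed deployment on failure
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```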

", "refs": { "DeploymentConfiguration$deploymentCircuitBreaker": "

The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type.

The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide

" } @@ -898,7 +898,7 @@ } }, "EnvironmentFile": { - "base": "

A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored. For more information about the environment variable file syntax, see Declare default environment variables in file.

If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying environment variables in the Amazon Elastic Container Service Developer Guide.

This parameter is only supported for tasks hosted on Fargate using the following platform versions:

", + "base": "

A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored. For more information about the environment variable file syntax, see Declare default environment variables in file.

If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying environment variables in the Amazon Elastic Container Service Developer Guide.

You must use the following platforms for the Fargate launch type:
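For illustration, referencing an environment file stored in Amazon S3; the bucket, object key, family, and image are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	_, err := svc.RegisterTaskDefinition(&ecs.RegisterTaskDefinitionInput{
		Family: aws.String("env-file-demo"),
		ContainerDefinitions: []*ecs.ContainerDefinition{{
			Name:  aws.String("app"),
			Image: aws.String("my_image:latest"),
			// The referenced object must have a .env file extension.
			EnvironmentFiles: []*ecs.EnvironmentFile{{
				Type:  aws.String("s3"),
				Value: aws.String("arn:aws:s3:::my-bucket/app.env"),
			}},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```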

", "refs": { "EnvironmentFiles$member": null } @@ -1309,7 +1309,7 @@ } }, "LogConfiguration": { - "base": "

The log configuration for the container. This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run .

By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.

Understand the following when specifying a log configuration for your containers.

", + "base": "

The log configuration for the container. This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run .

By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.

Understand the following when specifying a log configuration for your containers.

", "refs": { "ContainerDefinition$logConfiguration": "

The log configuration specification for the container.

This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.

Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers may be available in future releases of the Amazon ECS container agent.

This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'

The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
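A sketch of an awslogs log configuration in aws-sdk-go; the log group, Region, stream prefix, family, and image are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	_, err := svc.RegisterTaskDefinition(&ecs.RegisterTaskDefinitionInput{
		Family: aws.String("logging-demo"),
		ContainerDefinitions: []*ecs.ContainerDefinition{{
			Name:  aws.String("app"),
			Image: aws.String("my_image:latest"),
			// Send container logs to CloudWatch Logs.
			LogConfiguration: &ecs.LogConfiguration{
				LogDriver: aws.String("awslogs"),
				Options: map[string]*string{
					"awslogs-group":         aws.String("/ecs/my-app"),
					"awslogs-region":        aws.String("us-east-1"),
					"awslogs-stream-prefix": aws.String("app"),
				},
			},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```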

", "ServiceConnectConfiguration$logConfiguration": null @@ -1369,7 +1369,7 @@ "ManagedScaling": { "base": "

The managed scaling settings for the Auto Scaling group capacity provider.

When managed scaling is turned on, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. Amazon ECS manages a target tracking scaling policy using an Amazon ECS managed CloudWatch metric with the specified targetCapacity value as the target value for the metric. For more information, see Using managed scaling in the Amazon Elastic Container Service Developer Guide.

If managed scaling is off, the user must manage the scaling of the Auto Scaling group.
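For illustration, creating a capacity provider with managed scaling turned on; the provider name and the Auto Scaling group ARN are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	_, err := svc.CreateCapacityProvider(&ecs.CreateCapacityProviderInput{
		Name: aws.String("my-capacity-provider"),
		AutoScalingGroupProvider: &ecs.AutoScalingGroupProvider{
			AutoScalingGroupArn: aws.String("<auto-scaling-group-arn>"),
			ManagedScaling: &ecs.ManagedScaling{
				Status:                 aws.String("ENABLED"),
				TargetCapacity:         aws.Int64(90),  // keep the group ~90% utilized
				MinimumScalingStepSize: aws.Int64(1),
				MaximumScalingStepSize: aws.Int64(100),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```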

", "refs": { - "AutoScalingGroupProvider$managedScaling": "

The managed scaling settings for the Auto Scaling group capacity provider.

", + "AutoScalingGroupProvider$managedScaling": "

The managed scaling settings for the Auto Scaling group capacity provider.

", "AutoScalingGroupProviderUpdate$managedScaling": "

The managed scaling settings for the Auto Scaling group capacity provider.

" } }, @@ -1389,7 +1389,7 @@ "base": null, "refs": { "ManagedScaling$minimumScalingStepSize": "

The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.

When additional capacity is required, Amazon ECS will scale out by the minimum scaling step size, even if the actual demand is less than the minimum scaling step size.

If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size and the capacity demand.

", - "ManagedScaling$maximumScalingStepSize": "

The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.

" + "ManagedScaling$maximumScalingStepSize": "

The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 10000 is used.

" } }, "ManagedScalingTargetCapacity": { @@ -1995,7 +1995,7 @@ "Attribute$name": "

The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\\), or periods (.).

", "Attribute$value": "

The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\\), colons (:), or spaces. The value can't start or end with a space.

", "Attribute$targetId": "

The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).

", - "AutoScalingGroupProvider$autoScalingGroupArn": "

The Amazon Resource Name (ARN) that identifies the Auto Scaling group.

", + "AutoScalingGroupProvider$autoScalingGroupArn": "

The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name.

", "CapacityProvider$capacityProviderArn": "

The Amazon Resource Name (ARN) that identifies the capacity provider.

", "CapacityProvider$name": "

The name of the capacity provider.

", "CapacityProvider$updateStatusReason": "

The update status reason. This provides further details about the update status for the capacity provider.

", @@ -2047,7 +2047,7 @@ "CreateTaskSetRequest$service": "

The short name or full Amazon Resource Name (ARN) of the service to create the task set in.

", "CreateTaskSetRequest$cluster": "

The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to create the task set in.

", "CreateTaskSetRequest$externalId": "

An optional non-unique tag that identifies this task set in external systems. If the task set is associated with a service discovery registry, the tasks in this task set will have the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute set to the provided value.

", - "CreateTaskSetRequest$taskDefinition": "

The task definition for the tasks in the task set to use.

", + "CreateTaskSetRequest$taskDefinition": "

The task definition for the tasks in the task set to use. If a revision isn't specified, the latest ACTIVE revision is used.

", "CreateTaskSetRequest$platformVersion": "

The platform version that the tasks in the task set uses. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used.

", "CreateTaskSetRequest$clientToken": "

The identifier that you provide to ensure the idempotency of the request. It's case sensitive and must be unique. Up to 32 ASCII characters are allowed.

", "DeleteAccountSettingRequest$principalArn": "

The Amazon Resource Name (ARN) of the principal. It can be a user, role, or the root user. If you specify the root user, it disables the account setting for all users, roles, and the root user of the account unless a user or role explicitly overrides these settings. If this field is omitted, the setting is changed only for the authenticated user.

", @@ -2157,8 +2157,8 @@ "ListTasksRequest$startedBy": "

The startedBy value to filter the task results with. Specifying a startedBy value limits the results to tasks that were started with that value.

When you specify startedBy as the filter, it must be the only filter that you use.

", "ListTasksRequest$serviceName": "

The name of the service to use when filtering the ListTasks results. Specifying a serviceName limits the results to tasks that belong to that service.

", "ListTasksResponse$nextToken": "

The nextToken value to include in a future ListTasks request. When the results of a ListTasks request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

", - "LoadBalancer$targetGroupArn": "

The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.

A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.

For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.

For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.

If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.

", - "LoadBalancer$loadBalancerName": "

The name of the load balancer to associate with the Amazon ECS service or task set.

A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.

", + "LoadBalancer$targetGroupArn": "

The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.

A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.

For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.

For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.

If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.

", + "LoadBalancer$loadBalancerName": "

The name of the load balancer to associate with the Amazon ECS service or task set.

If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.

", "LoadBalancer$containerName": "

The name of the container (as it appears in a container definition) to associate with the load balancer.

", "LogConfigurationOptionsMap$key": null, "LogConfigurationOptionsMap$value": null, @@ -2319,7 +2319,7 @@ "VersionInfo$agentVersion": "

The version number of the Amazon ECS container agent.

", "VersionInfo$agentHash": "

The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository.

", "VersionInfo$dockerVersion": "

The Docker version that's running on the container instance.

", - "Volume$name": "

The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This name is referenced in the sourceVolume parameter of container definition mountPoints.

", + "Volume$name": "

The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This name is referenced in the sourceVolume parameter of container definition mountPoints.

This is required when you use an Amazon EFS volume.

", "VolumeFrom$sourceContainer": "

The name of another container within the same task definition to mount volumes from.

" } }, @@ -2335,7 +2335,7 @@ "ContainerDefinition$dnsServers": "

A list of DNS servers that are presented to the container. This parameter maps to Dns in the Create a container section of the Docker Remote API and the --dns option to docker run.

This parameter is not supported for Windows containers.

", "ContainerDefinition$dnsSearchDomains": "

A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the Create a container section of the Docker Remote API and the --dns-search option to docker run.

This parameter is not supported for Windows containers.

", "ContainerDefinition$dockerSecurityOptions": "

A list of strings to provide custom configuration for multiple security systems. For more information about valid values, see Docker Run Security Configuration. This field isn't valid for containers in tasks using the Fargate launch type.

For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems.

For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the Amazon Elastic Container Service Developer Guide.

This parameter maps to SecurityOpt in the Create a container section of the Docker Remote API and the --security-opt option to docker run.

The Amazon ECS container agent running on a container instance must register with the ECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.

For more information about valid values, see Docker Run Security Configuration.

Valid values: \"no-new-privileges\" | \"apparmor:PROFILE\" | \"label:value\" | \"credentialspec:CredentialSpecFilePath\"

", - "ContainerDefinition$credentialSpecs": "

A list of ARNs in SSM or Amazon S3 to a credential spec (credspec) file that configures a container for Active Directory authentication. This parameter is only used with domainless authentication.

The format for each ARN is credentialspecdomainless:MyARN. Replace MyARN with the ARN in SSM or Amazon S3.

The credspec must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.

", + "ContainerDefinition$credentialSpecs": "

A list of ARNs in SSM or Amazon S3 to a credential spec (CredSpec) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of dockerSecurityOptions. The maximum number of ARNs is 1.

There are two formats for each ARN.

credentialspecdomainless:MyARN

You use credentialspecdomainless:MyARN to provide a CredSpec with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret.

Each task that runs on any container instance can join different domains.

You can use this format without joining the container instance to a domain.

credentialspec:MyARN

You use credentialspec:MyARN to provide a CredSpec for a single domain.

You must join the container instance to the domain before you start any tasks that use this task definition.

In both formats, replace MyARN with the ARN in SSM or Amazon S3.

If you provide a credentialspecdomainless:MyARN, the credspec must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.

", "ContainerOverride$command": "

The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.

", "CreateClusterRequest$capacityProviders": "

The short name of one or more capacity providers to associate with the cluster. A capacity provider must be associated with a cluster before it can be included as part of the default capacity provider strategy of the cluster or used in a capacity provider strategy when calling the CreateService or RunTask actions.

If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must be created but not associated with another cluster. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.

To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.

The PutCapacityProvider API operation is used to update the list of available capacity providers for a cluster after the cluster is created.

", "DeleteTaskDefinitionsRequest$taskDefinitions": "

The family and revision (family:revision) or full Amazon Resource Name (ARN) of the task definition to delete. You must specify a revision.

You can specify up to 10 task definitions as a comma separated list.

", @@ -2612,7 +2612,7 @@ "TaskStopCode": { "base": null, "refs": { - "Task$stopCode": "

The stop code indicating why a task was stopped. The stoppedReason might contain additional details.

The following are valid values:

" + "Task$stopCode": "

The stop code indicating why a task was stopped. The stoppedReason might contain additional details.

For more information about stop codes, see Stopped tasks error codes in the Amazon ECS User Guide.

The following are valid values:

" } }, "Tasks": { @@ -2645,7 +2645,7 @@ "Task$pullStoppedAt": "

The Unix timestamp for the time when the container image pull completed.

", "Task$startedAt": "

The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the PENDING state to the RUNNING state.

", "Task$stoppedAt": "

The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the RUNNING state to the STOPPED state.

", - "Task$stoppingAt": "

The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the RUNNING state to STOPPED.

", + "Task$stoppingAt": "

The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the RUNNING state to STOPPING.

", "TaskDefinition$registeredAt": "

The Unix timestamp for the time when the task definition was registered.

", "TaskDefinition$deregisteredAt": "

The Unix timestamp for the time when the task definition was deregistered.

", "TaskSet$createdAt": "

The Unix timestamp for the time when the task set was created.

", diff --git a/models/apis/ecs/2014-11-13/endpoint-rule-set-1.json b/models/apis/ecs/2014-11-13/endpoint-rule-set-1.json index 1614858d7cb..57a28815f47 100644 --- a/models/apis/ecs/2014-11-13/endpoint-rule-set-1.json +++ b/models/apis/ecs/2014-11-13/endpoint-rule-set-1.json @@ -58,52 +58,56 @@ "type": "error" }, { - "conditions": [], - "type": "tree", - "rules": [ + "conditions": [ { - "conditions": [ + "fn": "booleanEquals", + "argv": [ { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", - "type": "error" - }, - { - "conditions": [], - "endpoint": { - "url": { - "ref": "Endpoint" + "ref": "UseDualStack" }, - "properties": {}, - "headers": {} - }, - "type": "endpoint" + true + ] } - ] + ], + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], + "endpoint": { + "url": { + "ref": "Endpoint" + }, + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, { - "conditions": [], + "conditions": [ + { + "fn": "isSet", + "argv": [ + { + "ref": "Region" + } + ] + } + ], "type": "tree", "rules": [ { "conditions": [ { - "fn": "isSet", + "fn": "aws.partition", "argv": [ { "ref": "Region" } - ] + ], + "assign": "PartitionResult" } ], "type": "tree", @@ -111,13 +115,22 @@ { "conditions": [ { - "fn": "aws.partition", + "fn": "booleanEquals", "argv": [ { - "ref": "Region" - } - ], - "assign": "PartitionResult" + "ref": "UseFIPS" + }, + true + ] + }, + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] } ], "type": "tree", @@ -127,224 +140,175 @@ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseFIPS" - }, - true + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } ] }, { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - }, - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://ecs-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, + } + ], + "type": "tree", + "rules": [ { "conditions": [], - "error": "FIPS and DualStack are enabled, but this partition does not support one or both", - "type": "error" + "endpoint": { + "url": "https://ecs-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, + { + "conditions": [], + "error": "FIPS and DualStack are enabled, but this partition does not support one or both", + "type": "error" + } + ] + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + } + ], + "type": "tree", + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseFIPS" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, - { - "fn": 
"getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsFIPS" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ { - "conditions": [], - "endpoint": { - "url": "https://ecs-fips.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsFIPS" ] } ] - }, + } + ], + "type": "tree", + "rules": [ { "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" + "endpoint": { + "url": "https://ecs-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" + } + ] + }, + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true + ] + } + ], + "type": "tree", + "rules": [ { "conditions": [ { "fn": "booleanEquals", "argv": [ + true, { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", + "fn": "getAttr", "argv": [ - true, { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://ecs.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } + "ref": "PartitionResult" + }, + "supportsDualStack" ] } ] - }, - { - "conditions": [], - "error": "DualStack is enabled but this partition does not support DualStack", - "type": "error" } - ] - }, - { - "conditions": [], + ], "type": "tree", "rules": [ { "conditions": [], "endpoint": { - "url": "https://ecs.{Region}.{PartitionResult#dnsSuffix}", + "url": "https://ecs.{Region}.{PartitionResult#dualStackDnsSuffix}", "properties": {}, "headers": {} }, "type": "endpoint" } ] + }, + { + "conditions": [], + "error": "DualStack is enabled but this partition does not support DualStack", + "type": "error" } ] + }, + { + "conditions": [], + "endpoint": { + "url": "https://ecs.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] - }, - { - "conditions": [], - "error": "Invalid Configuration: Missing Region", - "type": "error" } ] + }, + { + "conditions": [], + "error": "Invalid Configuration: Missing Region", + "type": "error" } ] } \ No newline at end of file diff --git a/models/apis/sagemaker/2017-07-24/api-2.json b/models/apis/sagemaker/2017-07-24/api-2.json index 9f57d491f35..d951c18ec59 100644 --- a/models/apis/sagemaker/2017-07-24/api-2.json +++ b/models/apis/sagemaker/2017-07-24/api-2.json @@ -20170,7 +20170,8 @@ "AutoMLJobArn":{"shape":"AutoMLJobArn"}, "DataProcessing":{"shape":"DataProcessing"}, "ExperimentConfig":{"shape":"ExperimentConfig"}, - "Tags":{"shape":"TagList"} + "Tags":{"shape":"TagList"}, + "DataCaptureConfig":{"shape":"BatchDataCaptureConfig"} } }, "TransformJobArn":{ diff --git a/models/apis/sagemaker/2017-07-24/docs-2.json b/models/apis/sagemaker/2017-07-24/docs-2.json index e56657bf057..70b8ae57d06 100644 --- a/models/apis/sagemaker/2017-07-24/docs-2.json +++ b/models/apis/sagemaker/2017-07-24/docs-2.json @@ -1315,7 +1315,8 @@ "base": "

Configuration to control how SageMaker captures inference data for batch transform jobs.

", "refs": { "CreateTransformJobRequest$DataCaptureConfig": "

Configuration to control how SageMaker captures inference data.
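A hedged sketch of passing the new DataCaptureConfig member when creating a transform job via aws-sdk-go; every name, URI, and the instance type is a placeholder:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	svc := sagemaker.New(session.Must(session.NewSession()))

	_, err := svc.CreateTransformJob(&sagemaker.CreateTransformJobInput{
		TransformJobName: aws.String("my-transform-job"),
		ModelName:        aws.String("my-model"),
		TransformInput: &sagemaker.TransformInput{
			DataSource: &sagemaker.TransformDataSource{
				S3DataSource: &sagemaker.TransformS3DataSource{
					S3DataType: aws.String("S3Prefix"),
					S3Uri:      aws.String("s3://my-bucket/input/"),
				},
			},
		},
		TransformOutput: &sagemaker.TransformOutput{
			S3OutputPath: aws.String("s3://my-bucket/output/"),
		},
		TransformResources: &sagemaker.TransformResources{
			InstanceType:  aws.String("ml.m5.large"),
			InstanceCount: aws.Int64(1),
		},
		// Capture inference data for the batch transform job.
		DataCaptureConfig: &sagemaker.BatchDataCaptureConfig{
			DestinationS3Uri: aws.String("s3://my-bucket/capture/"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```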

", - "DescribeTransformJobResponse$DataCaptureConfig": "

Configuration to control how SageMaker captures inference data.

" + "DescribeTransformJobResponse$DataCaptureConfig": "

Configuration to control how SageMaker captures inference data.

", + "TransformJob$DataCaptureConfig": null } }, "BatchDescribeModelPackageError": { diff --git a/models/endpoints/endpoints.json b/models/endpoints/endpoints.json index 1e96eeb5812..2846c824796 100644 --- a/models/endpoints/endpoints.json +++ b/models/endpoints/endpoints.json @@ -13280,6 +13280,7 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "il-central-1" : { }, "me-central-1" : { }, "me-south-1" : { }, "sa-east-1" : { }, diff --git a/service/acmpca/api.go b/service/acmpca/api.go index e067a3baec4..ea15a6e85ef 100644 --- a/service/acmpca/api.go +++ b/service/acmpca/api.go @@ -6234,11 +6234,11 @@ type IssueCertificateInput struct { // Alphanumeric string that can be used to distinguish between calls to the // IssueCertificate action. Idempotency tokens for IssueCertificate time out - // after one minute. Therefore, if you call IssueCertificate multiple times - // with the same idempotency token within one minute, Amazon Web Services Private - // CA recognizes that you are requesting only one certificate and will issue - // only one. If you change the idempotency token for each call, Amazon Web Services - // Private CA recognizes that you are requesting multiple certificates. + // after five minutes. Therefore, if you call IssueCertificate multiple times + // with the same idempotency token within five minutes, Amazon Web Services + // Private CA recognizes that you are requesting only one certificate and will + // issue only one. If you change the idempotency token for each call, Amazon + // Web Services Private CA recognizes that you are requesting multiple certificates. IdempotencyToken *string `min:"1" type:"string"` // The name of the algorithm that will be used to sign the certificate to be diff --git a/service/acmpca/doc.go b/service/acmpca/doc.go index 5f7ace48f84..d7844cf80e1 100644 --- a/service/acmpca/doc.go +++ b/service/acmpca/doc.go @@ -19,7 +19,7 @@ // Throttling means that Amazon Web Services Private CA rejects an otherwise // valid request because the request exceeds the operation's quota for the number // of requests per second. When a request is throttled, Amazon Web Services -// Private CA returns a ThrottlingException (https://docs.aws.amazon.com/acm-pca/latest/APIReference/CommonErrors.html) +// Private CA returns a ThrottlingException (https://docs.aws.amazon.com/privateca/latest/APIReference/CommonErrors.html) // error. Amazon Web Services Private CA does not guarantee a minimum request // rate for APIs. // diff --git a/service/connect/api.go b/service/connect/api.go index bdb550289f6..a9bbe131563 100644 --- a/service/connect/api.go +++ b/service/connect/api.go @@ -20090,6 +20090,99 @@ func (c *Connect) UpdateQuickConnectNameWithContext(ctx aws.Context, input *Upda return out, req.Send() } +const opUpdateRoutingProfileAgentAvailabilityTimer = "UpdateRoutingProfileAgentAvailabilityTimer" + +// UpdateRoutingProfileAgentAvailabilityTimerRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRoutingProfileAgentAvailabilityTimer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateRoutingProfileAgentAvailabilityTimer for more information on using the UpdateRoutingProfileAgentAvailabilityTimer +// API call, and error handling. 
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle, such as custom headers or retry logic.
+//
+// // Example sending a request using the UpdateRoutingProfileAgentAvailabilityTimerRequest method.
+// req, resp := client.UpdateRoutingProfileAgentAvailabilityTimerRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/connect-2017-08-08/UpdateRoutingProfileAgentAvailabilityTimer
+func (c *Connect) UpdateRoutingProfileAgentAvailabilityTimerRequest(input *UpdateRoutingProfileAgentAvailabilityTimerInput) (req *request.Request, output *UpdateRoutingProfileAgentAvailabilityTimerOutput) {
+ op := &request.Operation{
+ Name: opUpdateRoutingProfileAgentAvailabilityTimer,
+ HTTPMethod: "POST",
+ HTTPPath: "/routing-profiles/{InstanceId}/{RoutingProfileId}/agent-availability-timer",
+ }
+
+ if input == nil {
+ input = &UpdateRoutingProfileAgentAvailabilityTimerInput{}
+ }
+
+ output = &UpdateRoutingProfileAgentAvailabilityTimerOutput{}
+ req = c.newRequest(op, input, output)
+ req.Handlers.Unmarshal.Swap(restjson.UnmarshalHandler.Name, protocol.UnmarshalDiscardBodyHandler)
+ return
+}
+
+// UpdateRoutingProfileAgentAvailabilityTimer API operation for Amazon Connect Service.
+//
+// Updates whether agents with this routing profile have their routing order
+// calculated based on time since their last inbound contact or longest idle
+// time.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon Connect Service's
+// API operation UpdateRoutingProfileAgentAvailabilityTimer for usage and error information.
+//
+// Returned Error Types:
+//
+// - InvalidRequestException
+// The request is not valid.
+//
+// - InvalidParameterException
+// One or more of the specified parameters are not valid.
+//
+// - ResourceNotFoundException
+// The specified resource was not found.
+//
+// - ThrottlingException
+// The throttling limit has been exceeded.
+//
+// - InternalServiceException
+// Request processing failed because of an error or failure with the service.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/connect-2017-08-08/UpdateRoutingProfileAgentAvailabilityTimer
+func (c *Connect) UpdateRoutingProfileAgentAvailabilityTimer(input *UpdateRoutingProfileAgentAvailabilityTimerInput) (*UpdateRoutingProfileAgentAvailabilityTimerOutput, error) {
+ req, out := c.UpdateRoutingProfileAgentAvailabilityTimerRequest(input)
+ return out, req.Send()
+}
+
+// UpdateRoutingProfileAgentAvailabilityTimerWithContext is the same as UpdateRoutingProfileAgentAvailabilityTimer with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateRoutingProfileAgentAvailabilityTimer for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
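+//
+// A minimal, hypothetical usage sketch (client and ctx are assumed to already
+// exist; the instance and routing profile IDs below are placeholders):
+//
+// _, err := client.UpdateRoutingProfileAgentAvailabilityTimerWithContext(ctx,
+// &connect.UpdateRoutingProfileAgentAvailabilityTimerInput{
+// AgentAvailabilityTimer: aws.String(connect.AgentAvailabilityTimerTimeSinceLastInbound),
+// InstanceId: aws.String("your-instance-id"), // placeholder
+// RoutingProfileId: aws.String("your-routing-profile-id"), // placeholder
+// })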
+func (c *Connect) UpdateRoutingProfileAgentAvailabilityTimerWithContext(ctx aws.Context, input *UpdateRoutingProfileAgentAvailabilityTimerInput, opts ...request.Option) (*UpdateRoutingProfileAgentAvailabilityTimerOutput, error) { + req, out := c.UpdateRoutingProfileAgentAvailabilityTimerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateRoutingProfileConcurrency = "UpdateRoutingProfileConcurrency" // UpdateRoutingProfileConcurrencyRequest generates a "aws/request.Request" representing the @@ -26270,6 +26363,10 @@ func (s *CreateQuickConnectOutput) SetQuickConnectId(v string) *CreateQuickConne type CreateRoutingProfileInput struct { _ struct{} `type:"structure"` + // Whether agents with this routing profile will have their routing order calculated + // based on time since their last inbound contact or longest idle time. + AgentAvailabilityTimer *string `type:"string" enum:"AgentAvailabilityTimer"` + // The default outbound queue for the routing profile. // // DefaultOutboundQueueId is a required field @@ -26390,6 +26487,12 @@ func (s *CreateRoutingProfileInput) Validate() error { return nil } +// SetAgentAvailabilityTimer sets the AgentAvailabilityTimer field's value. +func (s *CreateRoutingProfileInput) SetAgentAvailabilityTimer(v string) *CreateRoutingProfileInput { + s.AgentAvailabilityTimer = &v + return s +} + // SetDefaultOutboundQueueId sets the DefaultOutboundQueueId field's value. func (s *CreateRoutingProfileInput) SetDefaultOutboundQueueId(v string) *CreateRoutingProfileInput { s.DefaultOutboundQueueId = &v @@ -48061,6 +48164,10 @@ func (s ResumeContactRecordingOutput) GoString() string { type RoutingProfile struct { _ struct{} `type:"structure"` + // Whether agents with this routing profile will have their routing order calculated + // based on time since their last inbound contact or longest idle time. + AgentAvailabilityTimer *string `type:"string" enum:"AgentAvailabilityTimer"` + // The identifier of the default outbound queue for this routing profile. DefaultOutboundQueueId *string `type:"string"` @@ -48114,6 +48221,12 @@ func (s RoutingProfile) GoString() string { return s.String() } +// SetAgentAvailabilityTimer sets the AgentAvailabilityTimer field's value. +func (s *RoutingProfile) SetAgentAvailabilityTimer(v string) *RoutingProfile { + s.AgentAvailabilityTimer = &v + return s +} + // SetDefaultOutboundQueueId sets the DefaultOutboundQueueId field's value. func (s *RoutingProfile) SetDefaultOutboundQueueId(v string) *RoutingProfile { s.DefaultOutboundQueueId = &v @@ -57231,6 +57344,111 @@ func (s UpdateQuickConnectNameOutput) GoString() string { return s.String() } +type UpdateRoutingProfileAgentAvailabilityTimerInput struct { + _ struct{} `type:"structure"` + + // Whether agents with this routing profile will have their routing order calculated + // based on time since their last inbound contact or longest idle time. + // + // AgentAvailabilityTimer is a required field + AgentAvailabilityTimer *string `type:"string" required:"true" enum:"AgentAvailabilityTimer"` + + // The identifier of the Amazon Connect instance. You can find the instance + // ID (https://docs.aws.amazon.com/connect/latest/adminguide/find-instance-arn.html) + // in the Amazon Resource Name (ARN) of the instance. + // + // InstanceId is a required field + InstanceId *string `location:"uri" locationName:"InstanceId" min:"1" type:"string" required:"true"` + + // The identifier of the routing profile. 
+ // + // RoutingProfileId is a required field + RoutingProfileId *string `location:"uri" locationName:"RoutingProfileId" type:"string" required:"true"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s UpdateRoutingProfileAgentAvailabilityTimerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s UpdateRoutingProfileAgentAvailabilityTimerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateRoutingProfileAgentAvailabilityTimerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateRoutingProfileAgentAvailabilityTimerInput"} + if s.AgentAvailabilityTimer == nil { + invalidParams.Add(request.NewErrParamRequired("AgentAvailabilityTimer")) + } + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + if s.InstanceId != nil && len(*s.InstanceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceId", 1)) + } + if s.RoutingProfileId == nil { + invalidParams.Add(request.NewErrParamRequired("RoutingProfileId")) + } + if s.RoutingProfileId != nil && len(*s.RoutingProfileId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoutingProfileId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAgentAvailabilityTimer sets the AgentAvailabilityTimer field's value. +func (s *UpdateRoutingProfileAgentAvailabilityTimerInput) SetAgentAvailabilityTimer(v string) *UpdateRoutingProfileAgentAvailabilityTimerInput { + s.AgentAvailabilityTimer = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *UpdateRoutingProfileAgentAvailabilityTimerInput) SetInstanceId(v string) *UpdateRoutingProfileAgentAvailabilityTimerInput { + s.InstanceId = &v + return s +} + +// SetRoutingProfileId sets the RoutingProfileId field's value. +func (s *UpdateRoutingProfileAgentAvailabilityTimerInput) SetRoutingProfileId(v string) *UpdateRoutingProfileAgentAvailabilityTimerInput { + s.RoutingProfileId = &v + return s +} + +type UpdateRoutingProfileAgentAvailabilityTimerOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". +func (s UpdateRoutingProfileAgentAvailabilityTimerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation. +// +// API parameter values that are decorated as "sensitive" in the API will not +// be included in the string output. The member name will be present, but the +// value will be replaced with "sensitive". 
+func (s UpdateRoutingProfileAgentAvailabilityTimerOutput) GoString() string {
+ return s.String()
+}
+
type UpdateRoutingProfileConcurrencyInput struct {
 _ struct{} `type:"structure"`
@@ -60440,6 +60658,22 @@ func ActionType_Values() []string {
 }
}

+const (
+ // AgentAvailabilityTimerTimeSinceLastActivity is a AgentAvailabilityTimer enum value
+ AgentAvailabilityTimerTimeSinceLastActivity = "TIME_SINCE_LAST_ACTIVITY"
+
+ // AgentAvailabilityTimerTimeSinceLastInbound is a AgentAvailabilityTimer enum value
+ AgentAvailabilityTimerTimeSinceLastInbound = "TIME_SINCE_LAST_INBOUND"
+)
+
+// AgentAvailabilityTimer_Values returns all elements of the AgentAvailabilityTimer enum
+func AgentAvailabilityTimer_Values() []string {
+ return []string{
+ AgentAvailabilityTimerTimeSinceLastActivity,
+ AgentAvailabilityTimerTimeSinceLastInbound,
+ }
+}
+
const (
 // AgentStatusStateEnabled is a AgentStatusState enum value
 AgentStatusStateEnabled = "ENABLED"
diff --git a/service/connect/connectiface/interface.go b/service/connect/connectiface/interface.go
index c97d40cb267..0fcd406e9db 100644
--- a/service/connect/connectiface/interface.go
+++ b/service/connect/connectiface/interface.go
@@ -928,6 +928,10 @@ type ConnectAPI interface {
 UpdateQuickConnectNameWithContext(aws.Context, *connect.UpdateQuickConnectNameInput, ...request.Option) (*connect.UpdateQuickConnectNameOutput, error)
 UpdateQuickConnectNameRequest(*connect.UpdateQuickConnectNameInput) (*request.Request, *connect.UpdateQuickConnectNameOutput)

+ UpdateRoutingProfileAgentAvailabilityTimer(*connect.UpdateRoutingProfileAgentAvailabilityTimerInput) (*connect.UpdateRoutingProfileAgentAvailabilityTimerOutput, error)
+ UpdateRoutingProfileAgentAvailabilityTimerWithContext(aws.Context, *connect.UpdateRoutingProfileAgentAvailabilityTimerInput, ...request.Option) (*connect.UpdateRoutingProfileAgentAvailabilityTimerOutput, error)
+ UpdateRoutingProfileAgentAvailabilityTimerRequest(*connect.UpdateRoutingProfileAgentAvailabilityTimerInput) (*request.Request, *connect.UpdateRoutingProfileAgentAvailabilityTimerOutput)
+
 UpdateRoutingProfileConcurrency(*connect.UpdateRoutingProfileConcurrencyInput) (*connect.UpdateRoutingProfileConcurrencyOutput, error)
 UpdateRoutingProfileConcurrencyWithContext(aws.Context, *connect.UpdateRoutingProfileConcurrencyInput, ...request.Option) (*connect.UpdateRoutingProfileConcurrencyOutput, error)
 UpdateRoutingProfileConcurrencyRequest(*connect.UpdateRoutingProfileConcurrencyInput) (*request.Request, *connect.UpdateRoutingProfileConcurrencyOutput)
diff --git a/service/datasync/api.go b/service/datasync/api.go
index 134cf5658a3..d36e322b54b 100644
--- a/service/datasync/api.go
+++ b/service/datasync/api.go
@@ -916,9 +916,16 @@ func (c *DataSync) CreateLocationNfsRequest(input *CreateLocationNfsInput) (req
// CreateLocationNfs API operation for AWS DataSync.
//
-// Creates an endpoint for an Network File System (NFS) file server that DataSync
+// Creates an endpoint for a Network File System (NFS) file server that DataSync
// can use for a data transfer.
//
+// For more information, see Configuring transfers to or from an NFS file server
+// (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html).
+//
+// If you're copying data to or from a Snowcone device, you can also use CreateLocationNfs
+// to create your transfer location. For more information, see Configuring transfers
+// with Snowcone (https://docs.aws.amazon.com/datasync/latest/userguide/nfs-on-snowcone.html).
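+//
+// A minimal, hypothetical usage sketch (client is assumed to be a configured
+// DataSync client; the agent ARN, host name, and export path are placeholders):
+//
+// out, err := client.CreateLocationNfs(&datasync.CreateLocationNfsInput{
+// OnPremConfig: &datasync.OnPremConfig{
+// AgentArns: []*string{aws.String("arn:aws:datasync:us-east-1:111122223333:agent/agent-example")}, // placeholder
+// },
+// ServerHostname: aws.String("nfs.example.com"), // placeholder
+// Subdirectory: aws.String("/exports/data"), // placeholder
+// })
+// // On success, out.LocationArn identifies the new transfer location.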
+//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
// the error.
@@ -2356,7 +2363,8 @@ func (c *DataSync) DescribeLocationNfsRequest(input *DescribeLocationNfsInput) (
// DescribeLocationNfs API operation for AWS DataSync.
//
-// Returns metadata, such as the path information, about an NFS location.
+// Provides details about how a DataSync transfer location for a Network File
+// System (NFS) file server is configured.
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
@@ -5171,9 +5179,11 @@ func (c *DataSync) UpdateLocationNfsRequest(input *UpdateLocationNfsInput) (req
// UpdateLocationNfs API operation for AWS DataSync.
//
-// Updates some of the parameters of a previously created location for Network
-// File System (NFS) access. For information about creating an NFS location,
-// see Creating a location for NFS (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html).
+// Modifies some configurations of the Network File System (NFS) transfer location
+// that you're using with DataSync.
+//
+// For more information, see Configuring transfers to or from an NFS file server
+// (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html).
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
@@ -6026,6 +6036,10 @@ func (s CancelTaskExecutionOutput) GoString() string {
type Capacity struct {
 _ struct{} `type:"structure"`

+ // The amount of space in the cluster that's in cloud storage (for example,
+ // if you're using data tiering).
+ ClusterCloudStorageUsed *int64 `type:"long"`
+
 // The amount of space that's being used in a storage system resource without
 // accounting for compression or deduplication.
 LogicalUsed *int64 `type:"long"`
@@ -6055,6 +6069,12 @@ func (s Capacity) GoString() string {
 return s.String()
}

+// SetClusterCloudStorageUsed sets the ClusterCloudStorageUsed field's value.
+func (s *Capacity) SetClusterCloudStorageUsed(v int64) *Capacity {
+ s.ClusterCloudStorageUsed = &v
+ return s
+}
+
// SetLogicalUsed sets the LogicalUsed field's value.
func (s *Capacity) SetLogicalUsed(v int64) *Capacity {
 s.LogicalUsed = &v
@@ -7473,53 +7493,30 @@ func (s *CreateLocationHdfsOutput) SetLocationArn(v string) *CreateLocationHdfsO
type CreateLocationNfsInput struct {
 _ struct{} `type:"structure"`

- // Specifies the mount options that DataSync can use to mount your NFS share.
+ // Specifies the options that DataSync can use to mount your NFS file server.
 MountOptions *NfsMountOptions `type:"structure"`

- // Specifies the Amazon Resource Names (ARNs) of agents that DataSync uses to
- // connect to your NFS file server.
+ // Specifies the Amazon Resource Name (ARN) of the DataSync agent that you want
+ // to connect to your NFS file server.
 //
- // If you are copying data to or from your Snowcone device, see NFS Server on
- // Snowcone (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#nfs-on-snowcone)
- // for more information.
+ // You can specify more than one agent. For more information, see Using multiple
+ // agents for transfers (https://docs.aws.amazon.com/datasync/latest/userguide/multiple-agents.html).
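+ //
+ // For example, a hypothetical multi-agent configuration (placeholder ARNs):
+ //
+ // AgentArns: []*string{
+ // aws.String("arn:aws:datasync:us-east-1:111122223333:agent/agent-1"), // placeholder
+ // aws.String("arn:aws:datasync:us-east-1:111122223333:agent/agent-2"), // placeholder
+ // },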
// // OnPremConfig is a required field OnPremConfig *OnPremConfig `type:"structure" required:"true"` - // Specifies the IP address or domain name of your NFS file server. An agent - // that is installed on-premises uses this hostname to mount the NFS server - // in a network. - // - // If you are copying data to or from your Snowcone device, see NFS Server on - // Snowcone (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#nfs-on-snowcone) - // for more information. - // - // You must specify be an IP version 4 address or Domain Name System (DNS)-compliant - // name. + // Specifies the Domain Name System (DNS) name or IP version 4 address of the + // NFS file server that your DataSync agent connects to. // // ServerHostname is a required field ServerHostname *string `type:"string" required:"true"` - // Specifies the subdirectory in the NFS file server that DataSync transfers - // to or from. The NFS path should be a path that's exported by the NFS server, - // or a subdirectory of that path. The path should be such that it can be mounted - // by other NFS clients in your network. - // - // To see all the paths exported by your NFS server, run "showmount -e nfs-server-name" - // from an NFS client that has access to your server. You can specify any directory - // that appears in the results, and any subdirectory of that directory. Ensure - // that the NFS export is accessible without Kerberos authentication. + // Specifies the export path in your NFS file server that you want DataSync + // to mount. // - // To transfer all the data in the folder you specified, DataSync needs to have - // permissions to read all the data. To ensure this, either configure the NFS - // export with no_root_squash, or ensure that the permissions for all of the - // files that you want DataSync allow read access for all users. Doing either - // enables the agent to read the files. For the agent to access directories, - // you must additionally enable all execute access. - // - // If you are copying data to or from your Snowcone device, see NFS Server on - // Snowcone (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#nfs-on-snowcone) - // for more information. + // This path (or a subdirectory of the path) is where DataSync transfers data + // to or from. For information on configuring an export for DataSync, see Accessing + // NFS file servers (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#accessing-nfs). // // Subdirectory is a required field Subdirectory *string `type:"string" required:"true"` @@ -9927,7 +9924,8 @@ func (s *DescribeLocationHdfsOutput) SetSimpleUser(v string) *DescribeLocationHd type DescribeLocationNfsInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the NFS location to describe. + // Specifies the Amazon Resource Name (ARN) of the NFS location that you want + // information about. // // LocationArn is a required field LocationArn *string `type:"string" required:"true"` @@ -9974,20 +9972,19 @@ func (s *DescribeLocationNfsInput) SetLocationArn(v string) *DescribeLocationNfs type DescribeLocationNfsOutput struct { _ struct{} `type:"structure"` - // The time that the NFS location was created. + // The time when the NFS location was created. CreationTime *time.Time `type:"timestamp"` - // The Amazon Resource Name (ARN) of the NFS location that was described. + // The ARN of the NFS location. 
LocationArn *string `type:"string"` - // The URL of the source NFS location that was described. + // The URL of the NFS location. LocationUri *string `type:"string"` - // The mount options that DataSync uses to mount your NFS share. + // The mount options that DataSync uses to mount your NFS file server. MountOptions *NfsMountOptions `type:"structure"` - // A list of Amazon Resource Names (ARNs) of agents to use for a Network File - // System (NFS) location. + // The DataSync agents that are connecting to a Network File System (NFS) location. OnPremConfig *OnPremConfig `type:"structure"` } @@ -13259,6 +13256,10 @@ type NetAppONTAPCluster struct { // The storage space that's being used in a cluster. ClusterBlockStorageUsed *int64 `type:"long"` + // The amount of space in the cluster that's in cloud storage (for example, + // if you're using data tiering). + ClusterCloudStorageUsed *int64 `type:"long"` + // The name of the cluster. ClusterName *string `type:"string"` @@ -13328,6 +13329,12 @@ func (s *NetAppONTAPCluster) SetClusterBlockStorageUsed(v int64) *NetAppONTAPClu return s } +// SetClusterCloudStorageUsed sets the ClusterCloudStorageUsed field's value. +func (s *NetAppONTAPCluster) SetClusterCloudStorageUsed(v int64) *NetAppONTAPCluster { + s.ClusterCloudStorageUsed = &v + return s +} + // SetClusterName sets the ClusterName field's value. func (s *NetAppONTAPCluster) SetClusterName(v string) *NetAppONTAPCluster { s.ClusterName = &v @@ -13742,12 +13749,11 @@ func (s *NfsMountOptions) SetVersion(v string) *NfsMountOptions { return s } -// A list of Amazon Resource Names (ARNs) of agents to use for a Network File -// System (NFS) location. +// The DataSync agents that are connecting to a Network File System (NFS) location. type OnPremConfig struct { _ struct{} `type:"structure"` - // ARNs of the agents to use for an NFS location. + // The Amazon Resource Names (ARNs) of the agents connecting to a transfer location. // // AgentArns is a required field AgentArns []*string `min:"1" type:"list" required:"true"` @@ -16226,8 +16232,8 @@ func (s UpdateLocationHdfsOutput) GoString() string { type UpdateLocationNfsInput struct { _ struct{} `type:"structure"` - // Specifies the Amazon Resource Name (ARN) of the NFS location that you want - // to update. + // Specifies the Amazon Resource Name (ARN) of the NFS transfer location that + // you want to update. // // LocationArn is a required field LocationArn *string `type:"string" required:"true"` @@ -16235,30 +16241,15 @@ type UpdateLocationNfsInput struct { // Specifies how DataSync can access a location using the NFS protocol. MountOptions *NfsMountOptions `type:"structure"` - // A list of Amazon Resource Names (ARNs) of agents to use for a Network File - // System (NFS) location. + // The DataSync agents that are connecting to a Network File System (NFS) location. OnPremConfig *OnPremConfig `type:"structure"` - // Specifies the subdirectory in your NFS file system that DataSync uses to - // read from or write to during a transfer. The NFS path should be exported - // by the NFS server, or a subdirectory of that path. The path should be such - // that it can be mounted by other NFS clients in your network. + // Specifies the export path in your NFS file server that you want DataSync + // to mount. // - // To see all the paths exported by your NFS server, run "showmount -e nfs-server-name" - // from an NFS client that has access to your server. You can specify any directory - // that appears in the results, and any subdirectory of that directory. 
Ensure
- // that the NFS export is accessible without Kerberos authentication.
- //
- // To transfer all the data in the folder that you specified, DataSync must
- // have permissions to read all the data. To ensure this, either configure the
- // NFS export with no_root_squash, or ensure that the files you want DataSync
- // to access have permissions that allow read access for all users. Doing either
- // option enables the agent to read the files. For the agent to access directories,
- // you must additionally enable all execute access.
- //
- // If you are copying data to or from your Snowcone device, see NFS Server on
- // Snowcone (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#nfs-on-snowcone)
- // for more information.
+ // This path (or a subdirectory of the path) is where DataSync transfers data
+ // to or from. For information on configuring an export for DataSync, see Accessing
+ // NFS file servers (https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#accessing-nfs).
 Subdirectory *string `type:"string"`
}
diff --git a/service/ecs/api.go b/service/ecs/api.go
index a4396227bee..5973daf0f4a 100644
--- a/service/ecs/api.go
+++ b/service/ecs/api.go
@@ -2124,6 +2124,10 @@ func (c *ECS) DescribeTasksRequest(input *DescribeTasksInput) (req *request.Requ
// Currently, stopped tasks appear in the returned results for at least one
// hour.
//
+// If you have tasks with tags, and then delete the cluster, the tagged tasks
+// are returned in the response. If you create a new cluster with the same name
+// as the deleted cluster, the tagged tasks are not included in the response.
+//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
// the error.
@@ -3826,8 +3830,7 @@ func (c *ECS) ListTasksRequest(input *ListTasksInput) (req *request.Request, out
// family, container instance, launch type, what IAM principal started the task,
// or by the desired status of the task.
//
-// Recently stopped tasks might appear in the returned results. Currently, stopped
-// tasks appear in the returned results for at least one hour.
+// Recently stopped tasks might appear in the returned results.
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
@@ -6092,16 +6095,15 @@ func (c *ECS) UpdateServiceRequest(input *UpdateServiceInput) (req *request.Requ
// number of running tasks for this service.
//
// You must have a service-linked role when you update any of the following
-// service properties. If you specified a custom role when you created the service,
-// Amazon ECS automatically replaces the roleARN (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Service.html#ECS-Type-Service-roleArn)
-// associated with the service with the ARN of your service-linked role. For
-// more information, see Service-linked roles (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using-service-linked-roles.html)
-// in the Amazon Elastic Container Service Developer Guide.
+// service properties:
//
// - loadBalancers,
//
// - serviceRegistries
//
+// For more information about the role, see the CreateService request parameter
+// role (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateService.html#ECS-CreateService-request-role).
+//
// Returns awserr.Error for service API and SDK errors.
Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
// the error.
@@ -6869,12 +6871,13 @@ func (s *AttributeLimitExceededException) RequestID() string {
type AutoScalingGroupProvider struct {
 _ struct{} `type:"structure"`

- // The Amazon Resource Name (ARN) that identifies the Auto Scaling group.
+ // The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or
+ // the Auto Scaling group name.
 //
 // AutoScalingGroupArn is a required field
 AutoScalingGroupArn *string `locationName:"autoScalingGroupArn" type:"string" required:"true"`

- // The managed scaling settings for the Auto Scaling group capacity provider.
+ // The managed scaling settings for the Auto Scaling group capacity provider.
 ManagedScaling *ManagedScaling `locationName:"managedScaling" type:"structure"`

 // The managed termination protection setting to use for the Auto Scaling group
@@ -8446,20 +8449,39 @@ type ContainerDefinition struct {
 // is passed to Docker as 0, which Windows interprets as 1% of one CPU.
 Cpu *int64 `locationName:"cpu" type:"integer"`

- // A list of ARNs in SSM or Amazon S3 to a credential spec (credspeccode>) file
- // that configures a container for Active Directory authentication. This parameter
- // is only used with domainless authentication.
+ // A list of ARNs in SSM or Amazon S3 to a credential spec (CredSpec) file that
+ // configures the container for Active Directory authentication. We recommend
+ // that you use this parameter instead of the dockerSecurityOptions. The maximum
+ // number of ARNs is 1.
+ //
+ // There are two formats for each ARN.
+ //
+ // credentialspecdomainless:MyARN
+ //
+ // You use credentialspecdomainless:MyARN to provide a CredSpec with an additional
+ // section for a secret in Secrets Manager. You provide the login credentials
+ // to the domain in the secret.
+ //
+ // Each task that runs on any container instance can join different domains.
+ //
+ // You can use this format without joining the container instance to a domain.
+ //
+ // credentialspec:MyARN
+ //
+ // You use credentialspec:MyARN to provide a CredSpec for a single domain.
 //
- // The format for each ARN is credentialspecdomainless:MyARN. Replace MyARN
- // with the ARN in SSM or Amazon S3.
+ // You must join the container instance to the domain before you start any tasks
+ // that use this task definition.
 //
- // The credspec must provide a ARN in Secrets Manager for a secret containing
- // the username, password, and the domain to connect to. For better security,
- // the instance isn't joined to the domain for domainless authentication. Other
- // applications on the instance can't use the domainless credentials. You can
- // use this parameter to run tasks on the same instance, even it the tasks need
- // to join different domains. For more information, see Using gMSAs for Windows
- // Containers (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html)
+ // In both formats, replace MyARN with the ARN in SSM or Amazon S3.
+ //
+ // If you provide a credentialspecdomainless:MyARN, the credspec must provide
+ // an ARN in Secrets Manager for a secret containing the username, password,
+ // and the domain to connect to. For better security, the instance isn't joined
+ // to the domain for domainless authentication. Other applications on the instance
+ // can't use the domainless credentials. You can use this parameter to run tasks
+ // on the same instance, even if the tasks need to join different domains.
For + // more information, see Using gMSAs for Windows Containers (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html) // and Using gMSAs for Linux Containers (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/linux-gmsa.html). CredentialSpecs []*string `locationName:"credentialSpecs" type:"list"` @@ -8897,6 +8919,8 @@ type ContainerDefinition struct { // agent and ecs-init. For more information, see Amazon ECS-optimized Linux // AMI (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) // in the Amazon Elastic Container Service Developer Guide. + // + // The valid values are 2-120 seconds. StartTimeout *int64 `locationName:"startTimeout" type:"integer"` // Time duration (in seconds) to wait before the container is forcefully killed @@ -8929,6 +8953,8 @@ type ContainerDefinition struct { // agent and ecs-init. For more information, see Amazon ECS-optimized Linux // AMI (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) // in the Amazon Elastic Container Service Developer Guide. + // + // The valid values are 2-120 seconds. StopTimeout *int64 `locationName:"stopTimeout" type:"integer"` // A list of namespaced kernel parameters to set in the container. This parameter @@ -9762,6 +9788,11 @@ func (s *ContainerInstanceHealthStatus) SetOverallStatus(v string) *ContainerIns // be passed in. An example of an empty container override is {"containerOverrides": // [ ] }. If a non-empty container override is specified, the name parameter // must be included. +// +// You can use Secrets Manager or Amazon Web Services Systems Manager Parameter +// Store to store the sensitive data. For more information, see Retrieve secrets +// through environment variables (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/secrets-envvar.html) +// in the Amazon ECS Developer Guide. type ContainerOverride struct { _ struct{} `type:"structure"` @@ -10993,7 +11024,8 @@ type CreateTaskSetInput struct { // Tags with this prefix do not count against your tags per resource limit. Tags []*Tag `locationName:"tags" type:"list"` - // The task definition for the tasks in the task set to use. + // The task definition for the tasks in the task set to use. If a revision isn't + // specified, the latest ACTIVE revision is used. // // TaskDefinition is a required field TaskDefinition *string `locationName:"taskDefinition" type:"string" required:"true"` @@ -12170,6 +12202,9 @@ func (s *DeploymentAlarms) SetRollback(v bool) *DeploymentAlarms { // You can also configure Amazon ECS to roll back your service to the last completed // deployment after a failure. For more information, see Rolling update (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html) // in the Amazon Elastic Container Service Developer Guide. +// +// For more information about API failure reasons, see API failure reasons (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/api_failures_messages.html) +// in the Amazon Elastic Container Service Developer Guide. type DeploymentCircuitBreaker struct { _ struct{} `type:"structure"` @@ -13859,8 +13894,7 @@ func (s *EFSVolumeConfiguration) SetTransitEncryptionPort(v int64) *EFSVolumeCon // environment variables (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html) // in the Amazon Elastic Container Service Developer Guide. 
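//
// A hypothetical sketch of referencing an environment file from a container
// definition (containerDef and the S3 object ARN are placeholders):
//
// containerDef.EnvironmentFiles = []*ecs.EnvironmentFile{{
// Type: aws.String("s3"),
// Value: aws.String("arn:aws:s3:::example-bucket/app.env"), // placeholder
// }}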
//
-// This parameter is only supported for tasks hosted on Fargate using the following
-// platform versions:
+// You must use the following platforms for the Fargate launch type:
//
// - Linux platform version 1.4.0 or later.
//
@@ -16849,7 +16883,6 @@ type LoadBalancer struct {
 // The name of the load balancer to associate with the Amazon ECS service or
 // task set.
 //
- // A load balancer name is only specified when using a Classic Load Balancer.
 // If you are using an Application Load Balancer or a Network Load Balancer
 // the load balancer name parameter should be omitted.
 LoadBalancerName *string `locationName:"loadBalancerName" type:"string"`
@@ -16858,8 +16891,7 @@
 // group or groups associated with a service or task set.
 //
 // A target group ARN is only specified when using an Application Load Balancer
- // or Network Load Balancer. If you're using a Classic Load Balancer, omit the
- // target group ARN.
+ // or Network Load Balancer.
 //
 // For services using the ECS deployment controller, you can specify one or
 // multiple target groups. For more information, see Registering multiple target
@@ -16936,9 +16968,11 @@ func (s *LoadBalancer) SetTargetGroupArn(v string) *LoadBalancer {
// Understand the following when specifying a log configuration for your containers.
//
// - Amazon ECS currently supports a subset of the logging drivers available
-// to the Docker daemon (shown in the valid values below). Additional log
-// drivers may be available in future releases of the Amazon ECS container
-// agent.
+// to the Docker daemon. Additional log drivers may be available in future
+// releases of the Amazon ECS container agent. For tasks on Fargate, the
+// supported log drivers are awslogs, splunk, and awsfirelens. For tasks
+// hosted on Amazon EC2 instances, the supported log drivers are awslogs,
+// fluentd, gelf, json-file, journald, logentries, syslog, splunk, and awsfirelens.
//
// - This parameter requires version 1.18 of the Docker Remote API or greater
// on your container instance.
@@ -17220,7 +17254,7 @@ type ManagedScaling struct {
 // The maximum number of Amazon EC2 instances that Amazon ECS will scale out
 // at one time. The scale in process is not affected by this parameter. If this
- // parameter is omitted, the default value of 1 is used.
+ // parameter is omitted, the default value of 10000 is used.
 MaximumScalingStepSize *int64 `locationName:"maximumScalingStepSize" min:"1" type:"integer"`

 // The minimum number of Amazon EC2 instances that Amazon ECS will scale out
@@ -18213,9 +18247,10 @@ type PortMapping struct {
 // The default ephemeral port range for Docker version 1.6.0 and later is listed
 // on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel
 // parameter is unavailable, the default ephemeral port range from 49153 through
- // 65535 is used. Do not attempt to specify a host port in the ephemeral port
- // range as these are reserved for automatic assignment. In general, ports below
- // 32768 are outside of the ephemeral port range.
+ // 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to
+ // specify a host port in the ephemeral port range as these are reserved for
+ // automatic assignment. In general, ports below 32768 are outside of the ephemeral
+ // port range.
 //
 // The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376,
 // and the Amazon ECS container agent ports 51678-51680.
Any host port that
@@ -21081,9 +21116,11 @@ type ServiceConnectConfiguration struct {
 // Understand the following when specifying a log configuration for your containers.
 //
 // * Amazon ECS currently supports a subset of the logging drivers available
- // to the Docker daemon (shown in the valid values below). Additional log
- // drivers may be available in future releases of the Amazon ECS container
- // agent.
+ // to the Docker daemon. Additional log drivers may be available in future
+ // releases of the Amazon ECS container agent. For tasks on Fargate, the
+ // supported log drivers are awslogs, splunk, and awsfirelens. For tasks
+ // hosted on Amazon EC2 instances, the supported log drivers are awslogs,
+ // fluentd, gelf, json-file, journald, logentries, syslog, splunk, and awsfirelens.
 //
 // * This parameter requires version 1.18 of the Docker Remote API or greater
 // on your container instance.
@@ -23083,6 +23120,9 @@ type Task struct {
 // The stop code indicating why a task was stopped. The stoppedReason might
 // contain additional details.
 //
+ // For more information about stop codes, see Stopped tasks error codes (https://docs.aws.amazon.com/AmazonECS/latest/userguide/stopped-task-error-codes.html)
+ // in the Amazon ECS User Guide.
+ //
 // The following are valid values:
 //
 // * TaskFailedToStart
@@ -23107,7 +23147,7 @@ type Task struct {
 StoppedReason *string `locationName:"stoppedReason" type:"string"`

 // The Unix timestamp for the time when the task stops. More specifically, it's
- // for the time when the task transitions from the RUNNING state to STOPPED.
+ // for the time when the task transitions from the RUNNING state to STOPPING.
 StoppingAt *time.Time `locationName:"stoppingAt" type:"timestamp"`

 // The metadata that you apply to the task to help you categorize and organize
@@ -23612,8 +23652,9 @@ type TaskDefinition struct {
 // This parameter isn't supported for tasks run on Fargate.
 RequiresAttributes []*Attribute `locationName:"requiresAttributes" type:"list"`

- // The task launch types the task definition was validated against. For more
- // information, see Amazon ECS launch types (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html)
+ // The task launch types the task definition was validated against. The valid
+ // values are EC2, FARGATE, and EXTERNAL. For more information, see Amazon ECS
+ // launch types (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html)
 // in the Amazon Elastic Container Service Developer Guide.
 RequiresCompatibilities []*string `locationName:"requiresCompatibilities" type:"list" enum:"Compatibility"`
@@ -26112,6 +26153,8 @@ type Volume struct {
 // The name of the volume. Up to 255 letters (uppercase and lowercase), numbers,
 // underscores, and hyphens are allowed. This name is referenced in the sourceVolume
 // parameter of container definition mountPoints.
+ //
+ // This is required when you use an Amazon EFS volume.
 Name *string `locationName:"name" type:"string"`
}
diff --git a/service/sagemaker/api.go b/service/sagemaker/api.go
index c18bc0f01b5..1cddc38dfd4 100644
--- a/service/sagemaker/api.go
+++ b/service/sagemaker/api.go
@@ -110151,6 +110151,10 @@ type TransformJob struct {
 // A timestamp that shows when the transform Job was created.
 CreationTime *time.Time `type:"timestamp"`

+ // Configuration to control how SageMaker captures inference data for batch
+ // transform jobs.
+ DataCaptureConfig *BatchDataCaptureConfig `type:"structure"` + // The data structure used to specify the data to be used for inference in a // batch transform job and to associate the data that is relevant to the prediction // results in the output. The input filter provided allows you to exclude input @@ -110289,6 +110293,12 @@ func (s *TransformJob) SetCreationTime(v time.Time) *TransformJob { return s } +// SetDataCaptureConfig sets the DataCaptureConfig field's value. +func (s *TransformJob) SetDataCaptureConfig(v *BatchDataCaptureConfig) *TransformJob { + s.DataCaptureConfig = v + return s +} + // SetDataProcessing sets the DataProcessing field's value. func (s *TransformJob) SetDataProcessing(v *DataProcessing) *TransformJob { s.DataProcessing = v