feat(rds): S3 import and export for DatabaseInstances #10370

Merged · 8 commits · Sep 18, 2020

18 changes: 10 additions & 8 deletions packages/@aws-cdk/aws-rds/README.md
@@ -268,20 +268,22 @@ const cpuUtilization = cluster.metricCPUUtilization();
const readLatency = instance.metric('ReadLatency', { statistic: 'Average', periodSec: 60 });
```

-### Enabling S3 integration to a cluster (non-serverless Aurora only)
+### Enabling S3 integration

-Data in S3 buckets can be imported to and exported from Aurora databases using SQL queries. To enable this
+Data in S3 buckets can be imported to and exported from certain database engines using SQL queries. To enable this
functionality, set the `s3ImportBuckets` and `s3ExportBuckets` properties for import and export respectively. When
configured, the CDK automatically creates and configures IAM roles as required.
Additionally, the `s3ImportRole` and `s3ExportRole` properties can be used to set this role directly.

-For Aurora MySQL, read more about [loading data from
-S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html) and [saving
-data into S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html).
+You can read more about loading data to (or from) S3 here:

-For Aurora PostgreSQL, read more about [loading data from
-S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html) and [saving
-data into S3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html).
+* Aurora MySQL - [import](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html)
+  and [export](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html).
+* Aurora PostgreSQL - [import](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html)
+  and [export](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html).
+* Microsoft SQL Server - [import & export](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html)
+* PostgreSQL - [import](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html)
+* Oracle - [import & export](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html)

The following snippet sets up a database cluster with different S3 buckets where the data is imported and exported -

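The snippet the README refers to is collapsed in this view. Below is a minimal sketch of the kind of setup it describes; the construct IDs, engine versions, instance sizes, and stack scaffolding are illustrative, not taken from the PR:

```ts
import * as ec2 from '@aws-cdk/aws-ec2';
import * as rds from '@aws-cdk/aws-rds';
import * as s3 from '@aws-cdk/aws-s3';
import * as cdk from '@aws-cdk/core';

class DatabaseStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const vpc = new ec2.Vpc(this, 'Vpc');
    const importBucket = new s3.Bucket(this, 'ImportBucket');
    const exportBucket = new s3.Bucket(this, 'ExportBucket');

    // Passing buckets makes the CDK create the IAM roles and grant the bucket
    // permissions automatically; alternatively, pass an existing role via
    // s3ImportRole/s3ExportRole (but not both, as enforced in the helper below).
    new rds.DatabaseCluster(this, 'Cluster', {
      engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
      masterUser: { username: 'admin' },
      instanceProps: {
        vpc,
        instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.MEDIUM),
      },
      s3ImportBuckets: [importBucket],
      s3ExportBuckets: [exportBucket],
    });

    // This PR extends the same properties to DatabaseInstance; RDS PostgreSQL
    // supports import only (see the engine list above).
    new rds.DatabaseInstance(this, 'Instance', {
      engine: rds.DatabaseInstanceEngine.postgres({ version: rds.PostgresEngineVersion.VER_11 }),
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.SMALL),
      masterUsername: 'admin',
      vpc,
      s3ImportBuckets: [importBucket],
    });
  }
}
```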
35 changes: 2 additions & 33 deletions packages/@aws-cdk/aws-rds/lib/cluster.ts
@@ -10,6 +10,7 @@ import { DatabaseClusterAttributes, IDatabaseCluster } from './cluster-ref';
import { DatabaseSecret } from './database-secret';
import { Endpoint } from './endpoint';
import { IParameterGroup } from './parameter-group';
+import { setupS3ImportExport } from './private/util';
import { BackupProps, InstanceProps, Login, PerformanceInsightRetention, RotationMultiUserOptions } from './props';
import { DatabaseProxy, DatabaseProxyOptions, ProxyTarget } from './proxy';
import { CfnDBCluster, CfnDBClusterProps, CfnDBInstance, CfnDBSubnetGroup } from './rds.generated';
@@ -305,7 +306,7 @@ abstract class DatabaseClusterNew extends DatabaseClusterBase {
      }),
    ];

-    let { s3ImportRole, s3ExportRole } = this.setupS3ImportExport(props);
+    let { s3ImportRole, s3ExportRole } = setupS3ImportExport(this, props);
    // bind the engine to the Cluster
    const clusterEngineBindConfig = props.engine.bindToCluster(this, {
      s3ImportRole,

@@ -356,38 +357,6 @@ abstract class DatabaseClusterNew extends DatabaseClusterBase {
      cluster.cfnOptions.updateReplacePolicy = CfnDeletionPolicy.SNAPSHOT;
    }
  }

-  private setupS3ImportExport(props: DatabaseClusterBaseProps): { s3ImportRole?: IRole, s3ExportRole?: IRole } {
-    let s3ImportRole = props.s3ImportRole;
-    if (props.s3ImportBuckets && props.s3ImportBuckets.length > 0) {
-      if (props.s3ImportRole) {
-        throw new Error('Only one of s3ImportRole or s3ImportBuckets must be specified, not both.');
-      }
-
-      s3ImportRole = new Role(this, 'S3ImportRole', {
-        assumedBy: new ServicePrincipal('rds.amazonaws.com'),
-      });
-      for (const bucket of props.s3ImportBuckets) {
-        bucket.grantRead(s3ImportRole);
-      }
-    }
-
-    let s3ExportRole = props.s3ExportRole;
-    if (props.s3ExportBuckets && props.s3ExportBuckets.length > 0) {
-      if (props.s3ExportRole) {
-        throw new Error('Only one of s3ExportRole or s3ExportBuckets must be specified, not both.');
-      }
-
-      s3ExportRole = new Role(this, 'S3ExportRole', {
-        assumedBy: new ServicePrincipal('rds.amazonaws.com'),
-      });
-      for (const bucket of props.s3ExportBuckets) {
-        bucket.grantReadWrite(s3ExportRole);
-      }
-    }
-
-    return { s3ImportRole, s3ExportRole };
-  }
}

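The new shared helper in `lib/private/util.ts` is not shown in this excerpt. Judging from the method removed above and the new call site `setupS3ImportExport(this, props)`, it is essentially the old private method lifted to a free function over a `Construct` scope, so `DatabaseInstance` can reuse it. A sketch, with the props interface name and exact signature assumed:

```ts
// lib/private/util.ts (sketch; the real file is collapsed in this diff view)
import { IRole, Role, ServicePrincipal } from '@aws-cdk/aws-iam';
import { IBucket } from '@aws-cdk/aws-s3';
import { Construct } from '@aws-cdk/core';

// Subset of the cluster/instance props that the helper needs (assumed shape).
interface S3ImportExportProps {
  readonly s3ImportRole?: IRole;
  readonly s3ImportBuckets?: IBucket[];
  readonly s3ExportRole?: IRole;
  readonly s3ExportBuckets?: IBucket[];
}

export function setupS3ImportExport(
  scope: Construct,
  props: S3ImportExportProps,
): { s3ImportRole?: IRole, s3ExportRole?: IRole } {
  let s3ImportRole = props.s3ImportRole;
  if (props.s3ImportBuckets && props.s3ImportBuckets.length > 0) {
    if (props.s3ImportRole) {
      throw new Error('Only one of s3ImportRole or s3ImportBuckets must be specified, not both.');
    }
    // Create the role in the scope of the cluster/instance so it is owned by it.
    s3ImportRole = new Role(scope, 'S3ImportRole', {
      assumedBy: new ServicePrincipal('rds.amazonaws.com'),
    });
    for (const bucket of props.s3ImportBuckets) {
      bucket.grantRead(s3ImportRole); // importing only needs to read the bucket
    }
  }

  let s3ExportRole = props.s3ExportRole;
  if (props.s3ExportBuckets && props.s3ExportBuckets.length > 0) {
    if (props.s3ExportRole) {
      throw new Error('Only one of s3ExportRole or s3ExportBuckets must be specified, not both.');
    }
    s3ExportRole = new Role(scope, 'S3ExportRole', {
      assumedBy: new ServicePrincipal('rds.amazonaws.com'),
    });
    for (const bucket of props.s3ExportBuckets) {
      bucket.grantReadWrite(s3ExportRole); // exporting also writes to the bucket
    }
  }

  return { s3ImportRole, s3ExportRole };
}
```

Moving the logic into a free function that takes its scope explicitly is what lets both `DatabaseCluster` and `DatabaseInstance` share the role-creation and validation behavior without duplicating it.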