
Throughput parameter doesn't work #1583

Closed
aiell0 opened this issue Apr 21, 2023 · 1 comment · Fixed by #1584
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

aiell0 commented Apr 21, 2023

/kind bug

What happened?
The throughput parameter on the StorageClass is not applied during dynamic provisioning.

What you expected to happen?
I provisioned a gp3 EBS volume with a throughput of 250 MiB/s specified in the StorageClass, but the volume came up with the gp3 default of 125 MiB/s. I expected it to be created with the requested 250 MiB/s.

How to reproduce it (as minimally and precisely as possible)?
Define a StorageClass as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: somename
parameters:
  encrypted: "true"
  iops: "5000"
  kmsKeyId: "kmskeyId"
  throughput: "250"
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
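
With volumeBindingMode: WaitForFirstConsumer, a volume is only provisioned once a Pod consumes a PVC bound to this StorageClass. A minimal sketch to trigger provisioning (the PVC name, Pod name, and storage size below are hypothetical, not from the report):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: somename
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-claim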

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
Client Version: v1.24.10-eks-48e63af
Kustomize Version: v4.5.4
Server Version: v1.25.8-eks-ec5523e
  • Driver version: v1.18.0
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Apr 21, 2023
rdpsin (Contributor) commented Apr 21, 2023

Thanks for reporting this. We will fix it. In the meantime, you can work around it by explicitly specifying the volume type in the StorageClass parameters:

type: "gp3"

With the type set explicitly, the driver provisions the volume with the requested throughput.
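
For example, the StorageClass from the report with the type parameter added (a sketch of the workaround; the name and kmsKeyId are placeholders carried over from the report):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: somename
parameters:
  encrypted: "true"
  iops: "5000"
  kmsKeyId: "kmskeyId"
  throughput: "250"
  type: "gp3"
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer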
