I just wanted to personally thank everyone involved in this effort. Training is now far more accessible on DALLE-pytorch using the pretrained VAE you provided. Compute and memory costs are substantially lower and it's even possible for people to train a relatively large transformer under 16 GiB of VRAM.
It's early days, and no one has trained a "full DALL-E" yet, but this goes a long way toward that, and momentum is already picking up on the repo.
So thanks and great work everyone. You're awesome.