Cloud GPU

Guide: Rent a Cloud GPU

This is a step-by-step guide on how to run the Stable Diffusion server remotely on cloud services like runpod.io, vast.ai or sailflow.ai. This allows you to use the plugin without installing a server on your local machine, and is a great option if you don't own a powerful GPU. There is no subscription; you typically pay between $0.25 and $0.50 per hour.

runpod.io

runpod.io is fairly streamlined, but offers limited information about download speeds and is a bit pricier.

Step 1: Sign up and add funds

Go to runpod.io and create an account. Go to "Billing" and add funds (one-time payment, minimum $10, card required).

Step 2: Select a GPU

I choose Community Cloud and select 1x RTX 3080 here. It's cheap and fast! Click Deploy.

runpod-1

You are free to choose one of the other options of course.

Step 3: Select the template

Runpod supports all kinds of workloads. To run a server for the Krita plugin, select "Stable Diffusion ComfyUI for Krita". Just typing "krita" into the search bar should find it, or use this link. Click Continue.

runpod-2

Optional: Customize which Models to download

By default, a recommended set of models is downloaded at start-up. You can change this with a custom start command, as described in the Customization section below.

Step 4: Deploy!

You will get a cost summary. Click Deploy.

Now you have to wait until the server is up. This can take a while (~10 minutes) depending on the download speed of your pod. Eventually it should look like this:

runpod-3

Step 5: Connect

Once your pod is running, click Connect and choose "HTTP Service [Port 3001]".

runpod-4

This should open ComfyUI running in your browser. Now simply copy the URL into the Krita plugin and connect!

runpod-5

runpod-6
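
If the plugin fails to connect, you can first check from a terminal that the server answers. The address below only illustrates the general shape of RunPod's proxy URLs; use the exact URL that the Connect button opens for port 3001. /system_stats is a standard ComfyUI endpoint that returns version and device info.

# replace the URL with the one from your pod's "Connect" page (port 3001)
curl https://<pod-id>-3001.proxy.runpod.net/system_stats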

Afterwards

After you're done using the pod, remember to stop it. You can keep it inactive to reuse it later, but an inactive pod still incurs storage costs. To avoid all charges, discard/delete the pod.

vast.ai

vast.ai has a very similar offering. You get more details about the machine you will run on, but you also have to filter the available offers to find what you want. The minimum initial deposit is $5.

The UI is very similar to runpod.io and so are the steps to set it up.

Template

You can use this template. Try to select a machine in your region with a good internet connection. Click Rent.

Tip

Optional: Customize which Models to download

By default, a recommended set of models is downloaded at start-up. You can change this with a custom start command, as described in the Customization section below.

vast-1

Connecting

Once your instance has finished loading and is running, click the button that displays the port range to find the URL to connect to.

vast-2

The URL is the one that maps to port 3001. Copy it into Krita and connect. Make sure it doesn't contain any spaces!

vast-3

vast-4
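
As with runpod.io, you can verify from a terminal that the server responds before connecting from Krita. The IP and port below are placeholders; use the external address that vast.ai shows mapped to internal port 3001 on your instance.

# replace host and port with the mapping shown for internal port 3001
curl http://<instance-ip>:<external-port>/system_stats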

sailflow.ai

sailflow.ai is a GPU cloud platform that lets you get started with krita-ai-diffusion quickly. You don't have to install ComfyUI on the cloud GPU server in advance: follow the instructions below, create a job, connect to the web address, and you're done.

Step 1: Sign up

Go to sailflow.ai and create an account. You can optionally use the referral code R5QXRVE6.

Screenshot 2024-03-29 11:26 AM

Step 2: Select the template

You can select the Krita template directly on the Home page. Feel free to start right away and top up later, or try one of the subscription plans. Click Krita.

Screenshot 2024-03-29 10:53 AM

You are free to choose one of the other options of course.

Step 3: Deploy!

You will get a cost summary. Click Submit.

Screenshot 2024-03-29 11:39 AM

Now you have to wait until the server is up. This is usually quick (~1 minute), depending on the download speed of your pod.

Step 4: Connect

Once your pod is running, the SD APP button becomes enabled and you can connect to ComfyUI through it.

Screenshot 2024-03-29 12:02 PM

This should open ComfyUI running in your browser. Now simply copy the URL into the Krita plugin and connect!

Screenshot 2024-03-29 12:42 PM

sailflowai-6

Afterwards

After you're done using the pod, remember to stop it. You can keep it inactive to reuse it later, but an inactive pod still incurs storage costs. To avoid all charges, discard/delete the pod.

Customization

You can tweak the Docker start command to customize which models are downloaded. The default start command is

/start.sh --recommended

which will download a recommended set of models (currently SDXL workload with some checkpoint and control models).

Here are some examples for different setups:

  • /start.sh | Don't download anything. Only useful if you want to download models manually.
  • /start.sh --sd15 --checkpoints --controlnet | Models required to run SD 1.5 with all recommended checkpoints and control models.
  • /start.sh --sdxl --checkpoint juggernaut | Models required to run SDXL with only the Juggernaut (realistic) checkpoint.
  • /start.sh --all | Download everything. Will take a very long time at start-up.

All options (the arguments accepted by download_models.py) are listed below.

RunPod.io

Before you deploy your pod, click "Edit Template" to get to the "Pod Template Overrides" section. Then, set the "Container Start Command" to the command you want.

Screenshot 2024-08-07 154601

Arguments for Model download at start-up

Argument        Description
--minimal       download the minimum viable set of models
--recommended   download a recommended set of models
--all           download ALL models
--sd15          [Workload] everything needed to run SD 1.5 (no checkpoints)
--sdxl          [Workload] everything needed to run SDXL (no checkpoints)
--checkpoints   download all checkpoints for selected workloads
--controlnet    download ControlNet models for selected workloads
--checkpoint    download a specific checkpoint (can be specified multiple times)
--upscalers     download additional upscale models

Checkpoint names which can be used with --checkpoint: realistic-vision, dreamshaper, flat2d-animerge, juggernaut, zavychroma, flux-schnell
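
For example, a start command that prepares SDXL with two specific checkpoints and the extra upscale models could look like this (the flag and checkpoint names are taken from the lists above; adjust them to what you actually want):

/start.sh --sdxl --checkpoint juggernaut --checkpoint zavychroma --upscalers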

Custom checkpoints and LoRA

You can download any custom models to the pod after it has started. Either use ComfyUI Manager (installed by default), or do it from a terminal: use the web terminal or connect via SSH.

  • Download Checkpoints to /models/checkpoints
  • Download LoRA to /models/lora

cd /models/checkpoints
wget --content-disposition 'http://file-you-want-to-download'
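
The same approach works for LoRA files; this just adapts the checkpoint example above to the LoRA folder, with the URL again being a placeholder:

cd /models/lora
wget --content-disposition 'http://file-you-want-to-download'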

After the download is complete, click the refresh button in Krita and the new models should show up.

cloud-gpu-custom-checkpoint