Run ComfyUI on Dev Pods

This guide provides a step-by-step walkthrough for setting up ComfyUI on Dev Pods. Our GPU-powered Pods offer cost-effective and reliable inference services, enabling you to efficiently render graphics or train models.

Prerequisites

  • An active FastGPU account with sufficient credits.
  • Basic familiarity with an SSH client (e.g., OpenSSH, PuTTY, or a terminal under WSL).

Step 1: Log In and Create a Pod

  1. Log in to your FastGPU account and navigate to the dashboard.
  2. Create a new Pod and select pytorch:2.2.0-py3.10-cuda12.1.0-ubuntu22.04 as the Container Image.

Step 2: Monitor Pod Creation

Once the Pod is created, you will be redirected to the Pod details page, where you can monitor the creation progress. The process typically takes less than one minute. When the status badge displays Ready, you can proceed to connect to the Pod.

Step 3: Connect to the Pod

Open the Connect tab on the Pod details page to find the connection details, then connect with any SSH client you are comfortable with. Alternatively, you can open a shell directly in your browser from the Terminal tab on the Pod details page.
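For reference, a typical SSH connection from a local terminal looks like the following; the user, host, and port are placeholders, so substitute the exact values shown in your Connect tab:

ssh <user>@<pod-public-ip> -p <ssh-port>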

Step 4: Install ComfyUI

Once connected, execute the following commands to download and install ComfyUI along with its dependencies:

git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt
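Before proceeding, it can be useful to confirm that the container sees the GPU and that the pre-installed PyTorch build has CUDA support. A quick sanity check (assuming the NVIDIA driver utilities are available in the image) looks like this:

nvidia-smi
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"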

Step 5: Download Model Weights

Download the model weights you intend to use. For demonstration purposes, the following command downloads an example checkpoint from Civitai into ComfyUI's models/checkpoints directory:

wget -O ./models/checkpoints/maturemalemix_v1.4.safetensors https://civitai.com/api/download/models/75441
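Downloads from Civitai can occasionally return an error page instead of the model file, so it is worth confirming the file arrived with a plausible size (this check is optional):

ls -lh ./models/checkpoints/
# A Stable Diffusion 1.5 checkpoint is typically around 2 GB; a file of only a few KB
# usually means the request returned an error page rather than the model.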

Step 6: Start ComfyUI

Once the model weights are downloaded, you can start the ComfyUI application. In this tutorial, we use nohup to run the application in the background so that it keeps running after you disconnect from the Pod. For more information on how to use nohup, refer to the nohup documentation.

nohup python main.py --port 18888 --listen 0.0.0.0 > main.log 2>&1 &
# [1] 475  # job number and process ID (PID) of the background process

After executing this command, the terminal prints the job number and the process ID (PID) of the background ComfyUI process. You can now disconnect from the Pod; the application will continue to run in the background.
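To confirm the server started correctly, you can inspect the log and probe the port from inside the Pod. The /system_stats endpoint used below is part of ComfyUI's built-in HTTP API:

tail -n 20 main.log
# On a successful start, the log contains a line like: To see the GUI go to: http://0.0.0.0:18888
curl http://127.0.0.1:18888/system_stats
# To stop the server later, kill the PID printed when you started it, e.g. kill 475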

To access ComfyUI from outside the Pod, expose port 18888 through a TCP port mapping rule on the Pod details page. The public IP address and mapped port are listed at the bottom of the Connect tab; use them to open the ComfyUI web app externally.
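Assuming the rule maps a public port to container port 18888 (the address and port below are only examples), the web app is reachable at a URL of the form:

http://<public-ip>:<mapped-port>
# for example: http://203.0.113.10:32768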

Step 7: Generate Images

Once the ComfyUI web app is up and running, the checkpoint downloaded earlier should be selected automatically in the default workflow's Load Checkpoint node. To generate an image, click Queue Prompt in the web interface.
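If you prefer to queue work without the browser, ComfyUI also exposes an HTTP API. As a rough sketch: export your workflow from the web UI in API format (the exact menu entry varies by ComfyUI version), save it as workflow_api.json (an example file name), and POST it to the /prompt endpoint from inside the Pod:

# Queue an exported workflow over ComfyUI's HTTP API.
curl -X POST http://127.0.0.1:18888/prompt \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": $(cat workflow_api.json)}"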

Conclusion

By following these steps, you have successfully set up ComfyUI on a GPU Pod. This setup equips you to run image generation and other GPU-intensive workloads efficiently. We welcome your feedback and invite you to follow us on Twitter @fast_gpu for more tips and updates.