After my experience developing features on top of Unreal Engine's nDisplay and Unity's Cluster Display, I wanted to implement a distributed renderer on top of ComfyUI to tile and scale inference across multiple GPU instances:
https://github.com/nomcycle/ComfyUI_Cluster
This project is still very much a work in progress. I'm currently implementing a custom UDP dependency-broadcasting feature that automatically distributes artifacts at intervals throughout a workflow, with the goal of reconstructing larger frames from those tiles.
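The broadcast protocol is still in flux, but the core idea looks roughly like the following minimal Python sketch. This is not the actual ComfyUI_Cluster code: the wire format, port, and function names are all hypothetical, and a real implementation would need to chunk tiles to fit under the UDP MTU.

```python
import socket
import struct

import numpy as np

# Hypothetical wire format: a fixed header (row, col, height, width, channels)
# followed by the raw uint8 tile bytes. This sketch assumes tiles small enough
# to fit in a single datagram; real tiles would need chunking and reassembly.
HEADER = struct.Struct("!5I")
PORT = 9999

def broadcast_tile(row: int, col: int, tile: np.ndarray) -> None:
    """Broadcast one rendered tile to every peer on the local subnet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    h, w, c = tile.shape
    sock.sendto(HEADER.pack(row, col, h, w, c) + tile.tobytes(),
                ("255.255.255.255", PORT))

def assemble_frame(tiles: dict, rows: int, cols: int) -> np.ndarray:
    """Stitch a full frame from tiles keyed by (row, col) grid position.
    Assumes every tile has the same (h, w, c) shape."""
    h, w, c = next(iter(tiles.values())).shape
    frame = np.zeros((rows * h, cols * w, c), dtype=np.uint8)
    for (r, col), tile in tiles.items():
        frame[r * h:(r + 1) * h, col * w:(col + 1) * w] = tile
    return frame
```

The idea is that each worker renders its assigned tile and broadcasts it, so any peer that has collected a full grid can reconstruct the frame locally.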
A secure, remote, and persistent development environment for ComfyUI that works with cloud GPU services. The container combines Tailscale's VPN with VSCode's remote development capabilities, making it well suited to services like RunPod: your environment persists between sessions, so you only pay for GPU time while you're actively developing.
You can find the source code here:
https://github.com/nomcycle/comfyui-dev
https://hub.docker.com/repository/docker/nomcycle/comfyui-dev
I'm using cubiq's ComfyUI_Essentials plugin to generate text for my tarot card/sticker designs. I encountered an issue where the text would overflow the sticker's designated text region. To fix this, I refactored the plugin to add a new text generation node that takes rectangle dimensions and text as inputs; it automatically adjusts the font size so the text fits within the specified rectangle.
You can check out the pull request here: https://github.com/cubiq/ComfyUI_essentials/pull/62
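The fitting logic is essentially a search over font sizes. Here's a minimal Pillow sketch of the idea, not the exact code from the PR (the function name, sizes, and font path are illustrative): shrink the font until the rendered text's bounding box fits inside the target rectangle.

```python
from PIL import Image, ImageDraw, ImageFont

def fit_text(draw: ImageDraw.ImageDraw, text: str, rect_w: int, rect_h: int,
             font_path: str, max_size: int = 96, min_size: int = 8):
    """Return the largest font (within bounds) whose rendered text
    fits inside a rect_w x rect_h rectangle."""
    for size in range(max_size, min_size - 1, -1):
        font = ImageFont.truetype(font_path, size)
        left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
        if right - left <= rect_w and bottom - top <= rect_h:
            return font
    return ImageFont.truetype(font_path, min_size)

img = Image.new("RGB", (512, 512), "white")
draw = ImageDraw.Draw(img)
font = fit_text(draw, "The High Priestess", rect_w=400, rect_h=80,
                font_path="DejaVuSans.ttf")  # any .ttf available on your system
draw.text((56, 216), "The High Priestess", font=font, fill="black")
```

A linear scan is plenty fast for a handful of candidate sizes; a binary search over the size range would cut the number of test renders if it ever mattered.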
I've been using liusida's plugin to automatically crop incoming images to detected faces. However, when collecting face images from many sources, setting up auto-cropping for each image individually is time-consuming.
I refactored the plugin to handle batches of images, where each image can contain multiple faces. The plugin now outputs batches of cropped faces automatically.
You can check out the pull request here: https://github.com/liusida/ComfyUI-AutoCropFaces/pull/7
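Conceptually, the batched version just flattens a nested loop: for every image in the incoming batch, crop every detected face, then stack all the crops into one outgoing batch. Here's a rough torch sketch of that idea (ComfyUI images are [B, H, W, C] tensors; `detect_faces` is a hypothetical stand-in for the plugin's face detector, and resizing to a fixed size is an assumption needed so the crops can be stacked into a single tensor):

```python
import torch
import torch.nn.functional as F

def crop_faces_batch(images: torch.Tensor, detect_faces, size: int = 512) -> torch.Tensor:
    """images: [B, H, W, C] ComfyUI IMAGE batch.
    detect_faces(image) -> list of (x, y, w, h) pixel boxes (hypothetical).
    Returns an [N, size, size, C] batch of every face found across all images."""
    crops = []
    for image in images:  # iterate over the batch dimension
        for x, y, w, h in detect_faces(image):
            crop = image[y:y + h, x:x + w, :]          # [h, w, C]
            chw = crop.permute(2, 0, 1).unsqueeze(0)   # [1, C, h, w] for interpolate
            resized = F.interpolate(chw, size=(size, size),
                                    mode="bilinear", align_corners=False)
            crops.append(resized.squeeze(0).permute(1, 2, 0))  # back to [size, size, C]
    if not crops:
        return torch.zeros(0, size, size, images.shape[-1])
    return torch.stack(crops)
```

Resizing every crop to a common size is what makes stacking possible, since faces otherwise come out at different resolutions.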