feat: Add ImagePadForOutpaint #224
Conversation
@GrimblyGorn I just did exactly what's described here https://comfyanonymous.github.io/ComfyUI_examples/inpaint/#outpainting: create an expanded image and a corresponding mask.
The reason you get seams is probably that you are using euler; if you use a different sampler like ddim or sde it will work better. The issue I have is that it doesn't seem to work if I expand both the top and the right, for example, and I wonder if that problem could be eliminated if the mask weren't a straight line but a slightly wobbly one instead, if you understand what I mean.
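The expand-then-mask workflow described in the linked outpainting example can be sketched roughly as follows. `pad_for_outpaint` is a hypothetical helper for illustration only, not the node's actual implementation:

```python
import numpy as np

def pad_for_outpaint(image, left=0, top=0, right=0, bottom=0):
    """Pad an H x W x C image with zeros and return the padded image
    plus a mask that is 1.0 over the new (to-be-generated) area."""
    h, w, c = image.shape
    new_h, new_w = h + top + bottom, w + left + right
    padded = np.zeros((new_h, new_w, c), dtype=image.dtype)
    padded[top:top + h, left:left + w] = image
    mask = np.ones((new_h, new_w), dtype=np.float32)
    mask[top:top + h, left:left + w] = 0.0  # keep the original pixels
    return padded, mask

img = np.ones((4, 4, 3), dtype=np.float32)
padded, mask = pad_for_outpaint(img, left=2, top=1)
# padded is 5 x 6 x 3; the mask is 0 over the original region, 1 elsewhere
```

The mask is then what tells the inpainting sampler which pixels it is allowed to generate.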
@comfyanonymous I will try to add a parameter to allow border feathering.
@comfyanonymous I've tried with top and right expanded; it works fine with inpainting models, just the edges are too sharp. Maybe we can merge this for now and improve it later with more advanced features, such as feathering or even subject detection.
Changing from euler does change the seams somewhat, and other settings affect them some as well. That's its own issue, and in my case it's less concerning than running out of memory on larger images. This node seems to do fine for extending the images; I think the memory failure comes from the KSampler attempting to fill in and render such large images. I may have to look into making an alternate sampler node that processes these larger images in "chunks" somehow, much like item 4 from my mega-wall discussion, which covers tiled VAEs and MultiDiffusion. That seems like the only way I'll be able to reach the larger sizes I was hoping for with my limited 4GB of VRAM. This node still seems like a useful addition to have, though :)
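The chunked sampling the comment speculates about could start from a tiling helper like the one below. This is only a sketch under assumed parameters (`iter_tiles` is made up here); real tiled diffusion also has to blend the overlapping regions, which is the hard part:

```python
import numpy as np

def iter_tiles(h, w, tile=512, overlap=64):
    """Yield (y0, y1, x0, x1) windows covering an h x w canvas in
    overlapping chunks, so each chunk fits in limited VRAM."""
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield y, min(y + tile, h), x, min(x + tile, w)

tiles = list(iter_tiles(1024, 1536, tile=512, overlap=64))
# every pixel of the 1024 x 1536 canvas falls inside at least one tile,
# and no tile exceeds 512 x 512
```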
@comfyanonymous @GrimblyGorn I added a feathering parameter.
Original: [image]
Outpaint without feathering: [image]
Outpaint with feathering 180px: [image]
@guoyk93 This is doing a much nicer job of blending the seams now. Thanks :)
I think a more capable way to add something like this is simply transposing onto a blank image / blank latent image. That way it's not bound to padding an input image; you can add your image to any image at any size or location. This seems like something that could be part of the VAE decode/encode, to just chain up jobs, almost?
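The "paste onto a blank canvas" idea generalizes padding: padding is just pasting at `(left, top)` on a larger canvas. A minimal sketch, with a hypothetical `paste_on_canvas` helper:

```python
import numpy as np

def paste_on_canvas(image, canvas_h, canvas_w, x, y):
    """Place an H x W x C image onto a blank canvas at (x, y) and
    return the canvas plus a mask marking everything outside the
    pasted region as area to be generated."""
    h, w, c = image.shape
    canvas = np.zeros((canvas_h, canvas_w, c), dtype=image.dtype)
    canvas[y:y + h, x:x + w] = image
    mask = np.ones((canvas_h, canvas_w), dtype=np.float32)
    mask[y:y + h, x:x + w] = 0.0
    return canvas, mask
```

With `x = left`, `y = top`, and a canvas of `(h + top + bottom, w + left + right)`, this reproduces plain outpaint padding as a special case.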
@WASasquatch I think this is more straightforward and easier for users to comprehend. It supports using a different checkpoint / VAE for the original image and the inpainting.
@comfyanonymous Fixed; now it won't feather edges that weren't expanded.
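One way the "only feather edges that were expanded" behavior could work is to ramp the mask from 1 at each expanded seam down to 0 a few pixels into the original region, while leaving unexpanded edges hard. This is a sketch, not the PR's actual code, and `feather_mask` is a made-up name:

```python
import numpy as np

def feather_mask(mask, left, top, right, bottom, feathering=64):
    """Ramp the mask from 1 at each expanded seam to 0 `feathering`
    pixels into the original region; unexpanded edges stay hard.
    `mask` is the padded-size mask: 0 = keep, 1 = generate."""
    out = mask.astype(np.float32)
    h, w = out.shape
    orig = out == 0.0                 # original (kept) pixels
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.full((h, w), np.inf)    # distance to nearest expanded seam
    if top > 0:
        dist = np.minimum(dist, yy - top)
    if bottom > 0:
        dist = np.minimum(dist, (h - 1 - bottom) - yy)
    if left > 0:
        dist = np.minimum(dist, xx - left)
    if right > 0:
        dist = np.minimum(dist, (w - 1 - right) - xx)
    ramp = np.clip(1.0 - dist / feathering, 0.0, 1.0)
    out[orig] = ramp[orig]            # feather only inside the kept region
    return out
```

If no edge was expanded, `dist` stays infinite and the mask is returned unchanged, which matches the fix being described.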
Another consideration is how much VRAM this uses scaling tensors as opposed to raster images. Manipulating sizes as tensors is extreme overkill when you're not actually doing a task with them like upscaling/denoising/diffusion (which does need them). It's a serious resource waste and just abuses cards. I'd imagine you'd have memory issues scaling tensors as opposed to encoding at scale. Not sure, though. It does seem like something a single CPU core could handle.
I added Image Padding to WAS Suite for my GPU's sanity, and for those with low-VRAM cards: https://github.com/WASasquatch/was-node-suite-comfyui/blob/main/WAS_Node_Suite.py#L532
Update custom-node-list.json
Co-authored-by: Claude <noreply@anthropic.com>
This node adds transparent padding to an existing image and generates the corresponding mask.
Now users can do outpainting directly in the pipeline.