ComfyUI upscale workflow (Reddit). Like the Leonardo AI upscaler.

I am losing sleep trying to resolve this.

Welcome to the unofficial ComfyUI subreddit.

It's ONE STEP.

I've never had good luck with latent upscaling in the past, which is "Upscale Latent By" and then re-sampling. My nonscientific answer is that A1111 can do it in around 60 seconds at 30 steps using a 1.5-based model.

The reason I haven't raised issues on any of the repos is because I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely.

Please keep posted images SFW.

Edit: since Reddit doesn't allow downloading PNG files, here is a JSON file for the workflow.

The final steps are as follows: apply the inpaint mask.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, and the refiner are welcome. However, I can't find any workflows that incorporate upscalers for 1.5x-2x using either SDXL Turbo or SD1.5 models. My current workflow generates decent pictures at 4x upscale, with minor glitches. Mix and match as you wish.

I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image. Usually I use two of my workflows: an SDXL setup, basic to advanced.

As a result: ① enhanced image − ② loss of detail during upscaling = ③ upscaling with optimal detail.

AP Workflow v3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder): I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

I think it was 3DS Max.

Would anyone be able to provide a link to such workflows? Thank you in advance!

If you'd like to load a LoRA, you need to connect "MODEL" and "CLIP" to the node, and after that, all the nodes that require these two wires should be connected to the ones from the Load LoRA node; then the workflow should work without any problems.
'FreeU_V2' for better contrast and detail, and 'PatchModelAddDownscale' so you can generate at a higher resolution.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. See the screenshot below.

I resize with 4x-UltraSharp set to x2, and in ComfyUI this workflow uses a nearest/exact latent upscale. Thanks tons!

If it's a distant face, then you probably don't have enough pixel area to do the fix justice.

After the 1.5 upscale, send it to a KSampler running 20-30 steps.

Create animations with AnimateDiff.

Correct me if I'm wrong.

Before we continue, it is important to understand two important concepts. The first is the difference between latent and pixel space.

An example of the images you can generate with this workflow: this is the image in the file, converted to a JPG.

Queue the flow and you should get a yellow image from the Image Blank node.

Upscale / re-generate in high-res Comfy workflow.

You can use the UpscaleImageBy node to scale up and down, by using a scale factor < 1. Basically, two nodes are doing the heavy lifting.

Thx for this workflow; I want to experiment to get an upscaler similar to Magnific, and this is going in the right direction, even if it's simple and nothing new.

I talk a bunch about some of the different upscale methods and show what I think is one of the better ones. I also explain an alternative free open-source option, but then you'd better create your own workflow in ComfyUI, because it uses the same upscale models, like ESRGAN and Real-ESRGAN UltraMix Balanced.

You could use canny + depth to get the edges and the composition, then prompt "red can", and it works better.
This next queue will then create a new batch of four images, but also upscale the selected images cached in the previous prompt.

Honestly, you can probably just swap out the model and put in the turbo scheduler. I don't think LoRAs are working properly yet, but you can feed the images into a proper SDXL model to touch up during generation (slower, and tbh it doesn't save time over just using a normal SDXL model to begin with), or generate a large amount of stuff to pick and choose from.

Image processing should include enhancing sharpness and texture, softening shadows, and eliminating unwanted artifacts.

Yes, I am following this channel on YouTube and have already watched the workflow video, but I will give it one more try; maybe it helps.

Hires fix 2x (two-pass img).

Alright folks, I've prepared a workflow where you can try out upscale methods for ComfyUI.

If you have 'high-res fix' enabled in A1111, you have to add an upscaler process in ComfyUI.

* The result should ideally be in the resolution space of SDXL (1024x1024).

The second is the difference between traditional and machine-learning-based upscalers.

I spent some time fine-tuning it and really like it. I'll make this clearer in the documentation.

Can someone guide me to the best all-in-one workflow that includes the base model, refiner model, hi-res fix, and one LoRA?

ControlNet workflow.

A transparent PNG in the original size, with only the newly inpainted part, will be generated.

A lot of people are just discovering this technology and want to show off what they created.

Impact of denoise.

It's a workflow to upscale an image several times, gradually changing scale and parameters.

If you see a few red boxes, be sure to read the Questions section on the page.

I also used an LCM LoRA to greatly speed up inference, particularly upscaling.

- Play with the upscale models for 4x or 8x upscaling.

Merging 2 images together.

Img2Img ComfyUI workflow.
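The "enhancing sharpness" step mentioned above is classically an unsharp mask: original + amount * (original - blurred). Below is a toy grayscale sketch on nested lists; the 3x3 box blur and the `amount` parameter are purely illustrative, not taken from any particular ComfyUI node.

```python
def box_blur3(img):
    """3x3 box blur with edge clamping on a 2D grayscale grid."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc / 9.0
    return out

def unsharp(img, amount=1.0):
    """Sharpen: original + amount * (original - blurred)."""
    blur = box_blur3(img)
    return [[img[y][x] + amount * (img[y][x] - blur[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

On a flat region the blur equals the original, so nothing changes; at edges the difference term overshoots, which is what reads as extra sharpness (and, overdone, as halo artifacts).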
Just curious if anyone knows of a workflow that could basically clean up/upscale screenshots from an animation from the late 90s (like Escaflowne or Rurouni Kenshin).

Then use those with the Upscale Using Model node.

Hello, A1111 user here, trying to make a transition to ComfyUI, or at least to learn ways to use both.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Upscale models go in \ComfyUI\models\upscale_models.

Belittling their efforts will get you banned.

Since you have only 6GB VRAM, I would choose…

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.

The WAS suite has a number counter node that will do that.

To find the downscale factor in the second part, calculate: factor = desired total upscale / fixed upscale. So if you want 2.2x, upscale using a 4x model (e.g. UltraSharp), then downscale: factor = 2.2 / 4.0 = 0.55.

Just drag and drop the images/config into the ComfyUI web interface to get this 16:9 SDXL workflow.

I've used Würstchen v3, aka Stable Cascade, for months since release: tuning it, experimenting with it, learning the architecture, using the built-in CLIP vision, ControlNet (canny), and inpainting, and doing hi-res upscales using the same models.

Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses an SD1.5 checkpoint.

Maybe somewhere I will point out the issue. Thanks.

You can add latent noise (or Perlin) and re-sample at a denoise over 0.5.

It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

Take the latent output and send it to a latent upscaler (doing a 1.5 upscale).

Since I created that outline, key challenges…

"Take this idea, upscale it, and create a template with straight lines, circles, etc."

- Play with post-process film grain, chromatic aberration, and glow.

Give me a high-quality output.

For example, I can load an image and select a model (4xUltraSharp).

In the end, it was 30 steps using Heun and Karras that got the best results, though.
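The downscale-factor arithmetic described above can be sketched in a couple of lines of Python (the function and argument names here are mine, not from any ComfyUI node):

```python
def downscale_factor(desired_total: float, model_scale: float) -> float:
    """Scale to apply after a fixed-scale upscale model so that the
    net result equals the desired total upscale."""
    return desired_total / model_scale

# e.g. 2.2x total from a 4x model: shrink the model output to 0.55x.
print(downscale_factor(2.2, 4.0))  # 0.55
print(downscale_factor(2.0, 4.0))  # 0.5
```

So a 512x512 image run through a 4x model becomes 2048x2048, and scaling that by 0.55 lands at roughly 1126x1126, i.e. a 2.2x total upscale.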
The output is basically the raw latent-upscaled image.

And above all, BE NICE.

Over 4K: generate base image -> simple tile decompose -> tile upscaling with IPAdapter and a low-strength inpaint ControlNet -> tile reassemble -> differential diffusion over the seams (using tiled diffusion if there is not enough VRAM).

I have noticed that it is more about the model that you use.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

It is possible through ComfyUI to build any system which A1111 uses, through default and custom nodes.

Along with the normal image preview, the other methods are: Latent Upscaled 2x.

The ComfyUI workflow I'm currently utilizing with an upscaler for SDXL is functioning smoothly. Thanks for the workflow! Very easy to use.

An SD1.5 checkpoint in combination with a Tiled ControlNet, feeding an Ultimate SD Upscale node, gives a more detailed upscale.

There are two different workflows. Like the Leonardo AI upscaler.

The yellow nodes are componentized nodes, which are simply a collection of Loader, ClipTextEncode, and Upscaler, respectively.

It generates an SD1.5 image and upscales it to 4x the original resolution (512 x 512 to 2048 x 2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

I'm also looking for an upscaler suggestion. I downloaded a great workflow, but the upscaler referenced in a node doesn't exist.

Additionally, the Upscaler (SUPIR) function can be used to perform Magnific-AI-style creative upscaling.

Third Pass: Further upscale 1.5x-2x.

So, when you download the AP Workflow (or any other workflow), you have to review each and every node to be sure that they point to your version of the model that you see in the picture.
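The tile decompose -> per-tile upscale -> reassemble pipeline described above can be illustrated with a toy sketch in plain Python, using nested lists in place of real image tensors and a nearest-neighbour resize standing in for the per-tile model/diffusion step (all names here are illustrative):

```python
def split_tiles(img, tile):
    """Split a 2D pixel grid into tile-sized sub-grids (row-major)."""
    h, w = len(img), len(img[0])
    return [[[row[x:x + tile] for row in img[y:y + tile]]
             for x in range(0, w, tile)]
            for y in range(0, h, tile)]

def nn_upscale(tile_img, s):
    """Nearest-neighbour upscale standing in for the per-tile model."""
    return [[px for px in row for _ in range(s)]
            for row in tile_img for _ in range(s)]

def reassemble(tile_rows):
    """Stitch the upscaled tiles back into one grid."""
    out = []
    for tile_row in tile_rows:
        for y in range(len(tile_row[0])):
            out.append([px for t in tile_row for px in t[y]])
    return out

img = [[1, 2], [3, 4]]
tiles = split_tiles(img, 1)                           # four 1x1 tiles
up = [[nn_upscale(t, 2) for t in row] for row in tiles]
big = reassemble(up)                                  # 4x4 result
```

A real implementation would also overlap the tiles and blend the seams; that is what the "differential diffusion over the seams" step in the pipeline above is for.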
AP Workflow v3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder).

Beginner Workflow/SDXL: IPAdapter plus face with ReActor and Ultimate SD Upscale. Just starting to learn; I created my first workflow using OpenArt's ComfyUI interface.

Now I am trying different start-up parameters for ComfyUI, like disabling smart memory, etc.

Nice, it seems like a very neat workflow, and it produces some nice images.

Upscale the image using a model to a certain size.

7 - develop poses / LoRA / LyCORIS etc.

I downloaded a great workflow, but the upscaler referenced in a node doesn't exist. (I am unable to upload the full-sized image.)

https://github.com/upscayl/upscayl

Layer copy & paste this PNG on top of the original in your go-to image editing software.

It'll be perfect if it includes upscaling too (though I can upscale it in an extra step in the Extras tab of Automatic1111).

Now in my typical workflow, after I generate my image, I do other passes.

Regular SDXL is just a bunch of noise until step 8! I tried it in a Colab notebook with different styles, resolutions, and artists, and the results were amazing.

I've been loving ComfyUI and have been playing with inpainting and masking and having a blast, but I often switch to A1111 for the X/Y plot for the needed step values; I'd like to learn how to do it in Comfy.

This new upscale workflow also runs very efficiently, being able to do a 1.5x upscale on 8GB-VRAM NVIDIA GPUs without any major VRAM issues, as well as going as high as 2.5x on 10GB GPUs.

ComfyUI doesn't have a mechanism to help you map your paths and models against my paths and models.

Since one performs better than the other depending on the type of image you want to upscale, each one has a dedicated function.

HOW TO USE: Start with the GREEN NODES: write your prompt and hit Queue.
If the term "workflow" is something that has only been used exclusively to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

-> You might have to resize your input picture first (upscale?).

* You should use CLIPTextEncodeSDXL for your prompts.

Is there a workflow with all features and options combined that I can simply load and use?

Any decent workflows for upscaling old videos that can compete with Topaz? I am looking to experiment with upscaling old digital video and captured Hi8 video to bring in some extra detail, even if said detail is inaccurate to some degree.

Civitai has a few workflows as well.

What you could try is more of a style transfer or a new image.

What settings can I tweak to encourage more variety? Any tweaking tips?

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

Mar 22, 2024 · You have two different ways you can perform a "Hires Fix" natively in ComfyUI: a latent upscale, or an upscaling model. You can download the workflows over on the Prompting Pixels website.

This is a good starting point.

The problem with simply upscaling them is that they are kind of 'dirtier', so a simple upscale doesn't really clean them up around the lines, and the colors are a bit dimmer/darker.

My current workflow sometimes changes some details a bit; it makes the image blurry or too sharp.

If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself.

Hi, I'm looking for input and suggestions on how I can improve my output image results using tips and tricks as well as various workflow setups.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

It's just not intended as an upscale from the resolution used in the base-model stage.

If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.
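As a rough sketch of how those two hires-fix routes differ, here is some dimension-only bookkeeping in Python. It assumes the SD-family VAE's 8x spatial factor (one latent cell decodes to 8x8 pixels); the function names are mine, not ComfyUI's:

```python
VAE_FACTOR = 8  # SD-family VAEs: one latent cell decodes to 8x8 pixels

def latent_hires_fix(w: int, h: int, scale: float) -> tuple:
    """Route 1: resize the latent, then decode and re-sample."""
    lw = round(w / VAE_FACTOR * scale)
    lh = round(h / VAE_FACTOR * scale)
    return lw * VAE_FACTOR, lh * VAE_FACTOR

def model_hires_fix(w: int, h: int, model_scale: int, post: float) -> tuple:
    """Route 2: decode, run a fixed-scale pixel upscaler, then downscale."""
    return round(w * model_scale * post), round(h * model_scale * post)

print(latent_hires_fix(1024, 1024, 1.5))     # (1536, 1536)
print(model_hires_fix(1024, 1024, 4, 0.5))   # (2048, 2048)
```

Note that the latent route quantizes dimensions to multiples of 8, which is why odd scale factors can shift the output size slightly.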
Ok, so this is a bunch of tutorials I made, centered on updating the same workflow step by step to look better.

2 - create consistent characters [DONE, using Roop]
3 - have multiple characters in a scene [DONE]
4 - have those multiple characters be unique and reproducible [DONE, dual Roop]
5 - have those multiple characters interact
6 - create and clothe the characters differently

These values can be changed by changing the "Downsample" value, which has its own documentation in the workflow itself, covering values for sizes.

SD XL 1.0 Alpha + SD XL Refiner 1.0.

Additionally, I need to incorporate FaceDetailer into the process.

Currently it's set up to create a latent, upscale the latent, and then run Ultimate SD Upscale from there.

I upscaled it to a resolution of 10240x6144 px for us to examine the results.

I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with the ease of use for end users.

Before / after.

For upscaling there are many options.

This workflow was created to automate the process of converting roughs generated by A1111's t2i to higher resolutions by i2i.

I know there is the ComfyAnonymous workflow, but it's lacking.

I'm something of a novice, but I think the effects you're getting are more related to your upscaler model, your noise, your prompt, and your CFG.

Take the image out to a 1.5 upscale.

Here I modified it from the official ComfyUI site, just a simple effort to make it fit perfectly on a 16:9 monitor.

It takes less than 5 minutes with my 8GB VRAM GPU. Generate with txt2img, for example: extremely detailed, european woman, floral, elegant, magical, fantasy, ornate, garden, nikon Z9, realistic, ZEISS 100mm, bokeh.

Just my two cents.

Copy that (clipspace) and paste it (clipspace) into the load image node directly above (assuming you want two subjects).
Versions: the short version uses a special node from the Impact Pack.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly. You cannot go higher than 512-768 resolution either (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

This workflow was built using the following custom nodes.

Basic latent upscale; basic upscaling via model in pixel space; with tile ControlNet; with SD Ultimate Upscale; with LDSR; with SUPIR; and whatnot.

You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow.

Run it through the KSampler.

This is the image I created using ComfyUI, utilizing DreamShaper XL 1.0.

However, this can be clarified by reloading the workflow or by asking questions.

(1024 x 1024) KSampler 3: "Upscale it again and 'repair, improve' this template." (2048 x 2048) KSampler 4: "Now go, go, go and start the rocket." :)

/// License - Public Domain. Upload it on other AI platforms? Allowed. Improve it?

It would be great if there was a simple, tidy UI workflow in ComfyUI for SDXL.

If you want to work with this workflow, all you need to do is download our anime girl image below and open it via the Load option in the ComfyUI panel.

Production value zero; rambling?
A little, anyway; some nice tips and tricks. It shows you in a basic way how to build this workflow and why things in that workflow are done the way they are.

So this is applicable to img2img.

Same problem if you try to upscale a woman. Never use a prompt like "woman": you will upscale an eye with that prompt, and everywhere will be women.

Don't listen to the haters; reading some comments, they criticize the changes to the image, when Magnific changes it too.

Hands work with it too, but I prefer the MeshGraphormer Hand Refiner ControlNet.

I've put a few labels in the flow for clarity. Here's the possible structure of that workflow. First Pass: SDXL Turbo for the initial image generation. Second Pass: Upscale 1.5x-2x with either SDXL Turbo or SD1.5.

It depends on how large the face in your original composition is. If it's a close-up, then fix the face first.

This section explores various ways to upscale and apply hires fix to an image.

ControlNet Depth ComfyUI workflow. SDXL Default ComfyUI workflow.

The separate IPAdapter that is focused on the face further allows us to keep the face of the subject somewhat intact from run to run.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

That's a cost of about $30,000 for a full base-model train. Training a LoRA will cost much less than this, and it costs still less to train a LoRA for just one stage of Stable Cascade.

The classic GAN-based upscalers are the most straightforward and, IMO, the best at the moment.

The workflow is with the video.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI.

I don't suppose you know a good way to get a latent upscale (hires fix) working in ComfyUI with SDXL? I have been trying for ages with no luck.

Also, I did edit the custom node ComfyUI-Custom-Scripts' Python file, string_function.py, in order to allow the 'preview image' node to show only the upscaled images.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

ComfyUI-Workflow-Component.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

(Image processing example.)

…and Crystools; or try to update missing nodes from the ComfyUI Manager.

Use about 0.5 noise; this will allow detail to be built in during the upscale.

The refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted.

A video about making an illustration in ComfyUI, showcasing a new Upscale/Refining node.

Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling.

The AP Workflow now features two next-gen upscalers: CCSR and the new SUPIR.

I'm looking for a workflow for ComfyUI that can take an uploaded image and generate an identical one, but upscaled, using Ultimate SD Upscaling. It changes the image too much and often adds mutations.

Upscale using a 4x model (e.g. UltraSharp), then downscale.

ComfyUI Fooocus Inpaint with Segmentation Workflow.

ComfyUI Weekly Update: DAT upscale model support and more T2I adapters.

Try bypassing both nodes and see how bad the image is by comparison. (Tested going from latent upscale directly to the decoder, and the images are almost identical.)

For faces you can use FaceDetailer.

I seem to be getting a lot of pans and other "basic" movements.

The example pictures do load a workflow, but they don't have a label or text that indicates which version it is.

Fill in your prompts.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.
However, in a test a few minutes ago with a fully updated ComfyUI and up-to-date custom nodes, everything worked fine, and other users on Discord have already posted several pictures created with this version of the workflow, without any currently reported problems.

I have a workflow that works.

Go into the mask editor for each of the two and paint in where you want your subjects.

Both are quick-and-dirty tutorials without too much rambling; no workflows are included because of how basic they are.

These images are zoomed-in views that I created to examine the details of the upscaling process, showing how much detail there is.

Users of ComfyUI: which premade workflows do you use? I read through the repo, but it has individual examples for each process we use: img2img, ControlNet, upscale, and all.

ReActor is used to paste a desired face on afterwards.

Load the upscaled image into the workflow, then use ComfyShop to draw a mask and inpaint.

This is a workflow to take a target face and ComfyUI it up into any scene.

SVD + Hires Fix Upscale (no LCM = better quality) + workflow.

Imagine it gets to the point that temporal consistency is solid enough, and generation time is fast enough, that you can play and upscale games or footage in real time at this level of fidelity.

Table of contents.

"Upscaling with model" is an operation on normal images, and we can use a corresponding model, such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth.

Upscaling ComfyUI workflow.

Thank you.

Best ComfyUI workflows, ideas, and nodes/settings.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

I am using the primitive node to increment values like CFG, noise seed, etc.

Can't believe people are bitching about the quality.
Straight-through upscale workflow critique: can some of you take my image and check the workflow I'm using, and give any advice for refining it to get better/more efficient results?

Go to Civitai and filter for upscale models to find the best one (that you like, or that's adapted to the style of wallpaper).

I would start here and compare different upscalers.

This is an amazing result for one step.

I love the use of the rerouting nodes to change the paths.