
Comfyui workflow directory example reddit sdxl


In SDXL 1.0 the refiner is almost always a downgrade for me. I'm currently running into certain prompts where the latent just looks awful. Please keep posted images SFW.

I played for a few days with ComfyUI and SDXL 1.0. Automatic calculation of the steps required for both the Base and the Refiner models.

ComfyUI Manager: a plugin for ComfyUI that helps detect and install missing plugins.

Oct 12, 2023 · These workflow templates are intended as multi-purpose templates for use on a wide variety of projects. They can be used with any SDXL checkpoint model. So, I just made this workflow in ComfyUI.

First, of course, download the SDXL 1.0 checkpoint model. Since SDXL was trained on images at a full 2x the resolution of SD 1.5, with 3x the training data, the resulting checkpoint file is also much bigger than SD 1.5's. All you need is to download the SDXL models and use the right workflow.

I think it is just the same as the 1.5 setup but with 1024x1024 latent noise. It causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

ComfyUI ControlNet aux: a plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI.

AP Workflow v3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder) Tutorial | Guide: I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use. Other than that, there were a few mistakes in version 3.1 that are now corrected.

I had to place the image into a zip, because people have told me that Reddit strips .pngs of metadata. Edit: you could try the workflow to see it for yourself. It starts at 1280x720 and generates 3840x2160 out the other end.

FAQ. Q: Can I use a refiner in the image-to-image transformation process with SDXL?

For example, this is what the workflow produces. The blurred latent mask does its best to prevent ugly seams. It's based on the wonderful example from Sytan, but I un-collapsed it and removed upscaling to make it very simple to understand.
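The zip trick exists because ComfyUI saves the entire workflow graph inside the PNG itself, as PNG text chunks (conventionally keyed "workflow" and "prompt"), and image hosts that re-encode uploads throw those chunks away. As a rough, stdlib-only sketch of where that data lives — the helper names are mine, and real files may also use iTXt/zTXt chunks that this minimal version ignores:

```python
import json
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_text_chunks(data: bytes) -> dict:
    """Walk a PNG's chunk list and collect tEXt chunks as {keyword: text}."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype == b"IEND":
            break
    return chunks

def embedded_workflow(png_bytes: bytes):
    """Return the embedded ComfyUI graph as a dict, or None if stripped."""
    raw = read_text_chunks(png_bytes).get("workflow")
    return json.loads(raw) if raw else None
```

If embedded_workflow returns None for a downloaded image, the host almost certainly stripped the metadata — hence the zip.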
I used the workflow kindly provided by the user u/LumaBrik, mainly playing with parameters like CFG guidance, augmentation level, and motion bucket.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated. You should try to click on each one of those model names in the ControlNet stacker node and choose the path where your models are stored.

Step 2: Download this sample image.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

But try both at once and they miss a bit of quality. But as a base to start from, it'll work.

ControlNet (Zoe depth). Advanced SDXL Template. Welcome to the unofficial ComfyUI subreddit.

AP Workflow 6.0 now supports SD 1.5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), and a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc.

Part 2 (link) - we added the SDXL-specific conditioning implementation and tested the impact of conditioning parameters on the generated images.

If you have a previous installation of ComfyUI with models, or would like to use models stored in an external location, you can use this method to reference them instead of re-downloading them.

One guess is that the workflow is looking for the Control-LoRAs models in the cached directory (which is my directory on my computer). So, if you are using that, I recommend you take a look at this new one. Combined with an SDXL stage, it brings multi-subject composition with the fine-tuned look of SDXL.
I conducted an experiment on a single image using SDXL 1.0 and ComfyUI to explore how doubling the sample count affects the result, especially at higher sample counts, seeing where the image changes relative to the sampling steps.

Yeah sure, I'll add that to the list; there are a few different options LoRA-wise. I'm not sure about the current state of SDXL LoRAs in the wild right now, but some time after I do upscalers I'll do some stuff on LoRAs, and probably inpainting/masking techniques too.

As someone relatively new to AI imagery, I started off with Automatic1111 but was tempted by the flexibility of ComfyUI, though I felt a bit overwhelmed. They are intended for use by people that are new to SDXL and ComfyUI.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. I think it's a fairly decent starting point for someone transitioning from Automatic1111 and looking to expand from there.

Part 3 (this post) - we will add an SDXL refiner for the full SDXL process. Part 2 (coming in 48 hours) - we will add the SDXL-specific conditioning implementation and test what impact that conditioning has on the generated images. As always, I'd like to remind you that this is a workflow designed to teach how to build a pipeline and how SDXL works.

Open the YAML file in a code or text editor.

Tidying up a ComfyUI workflow for SDXL to fit it on a 16:9 monitor, so you don't have to | Workflow file included | Plus cats, lots of them. I have to second the comments here that this workflow is great.
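Several of the workflows above advertise automatic calculation of the steps required for the Base and the Refiner models. In ComfyUI that usually comes down to giving two KSampler (Advanced) nodes the same total step count and a shared handover step. A hypothetical sketch of that arithmetic (the function, its ratio parameter, and the dict layout are my own illustration, not any specific workflow's code):

```python
def split_steps(total_steps: int, refiner_ratio: float = 0.2):
    """Derive matching settings for two KSampler (Advanced) nodes: the base
    samples steps [0, switch) and the refiner finishes [switch, total)."""
    switch = total_steps - round(total_steps * refiner_ratio)
    base = {"steps": total_steps, "start_at_step": 0, "end_at_step": switch}
    refiner = {"steps": total_steps, "start_at_step": switch,
               "end_at_step": total_steps}
    return base, refiner

base, refiner = split_steps(25, 0.2)  # base ends, and refiner starts, at step 20
```

Keeping `steps` identical on both nodes matters: it is what keeps the two samplers on the same noise schedule across the handover.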
For each of the sequences, I generated about ten of them and then chose the one I liked.

=== How to prompt this workflow ===

Main Prompt: the subject of the image in natural language. Example: a cat with a hat in a grass field.

Secondary Prompt: a list of keywords derived from the main prompt, with references to artists at the end. Example: cat, hat, grass field, style of [artist name] and [artist name].

Style and References: …

Before inpainting, the workflow will blow the masked area up to 1024x1024 to get a nice resolution, and resize it before pasting back. The image generation using SDXL in ComfyUI is much faster compared to Automatic1111, which makes it the better option of the two.

I have no idea why the OP didn't bother to mention that this would require the same amount of storage space as 17 SDXL checkpoints - mainly for a garbage-tier SD 1.5 model I don't even want.

Run any ComfyUI workflow with ZERO setup (free & open source). Hello! I'm new at ComfyUI and I've been experimenting with it the whole Saturday. SDXL most definitely doesn't work with the old ControlNet. I'm revising the workflow below to include a non-latent option. Thanks for the tips on Comfy! I'm enjoying it a lot so far.

Ignore the prompts and setup. SDXL CLIP text node used on the left, default on the right: sdxl-clip vs default clip. I just find it weird that in the official example the nodes are not the same as if you try to add them by yourself. That's the one I'm referring to. But it separates the LoRA into another workflow (and it's not based on SDXL either).

Jan 8, 2024 · Introduction of a streamlined process for image-to-image conversion with SDXL.

My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.
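The main/secondary prompt convention above maps naturally onto two text inputs. A tiny sketch of composing them (the helper and its argument names are illustrative, not part of the workflow):

```python
def build_prompts(subject, keywords, artists=()):
    """Compose the natural-language main prompt and the keyword-style
    secondary prompt described in the prompting guide above."""
    secondary = list(keywords) + [f"style of {name}" for name in artists]
    return subject, ", ".join(secondary)

main, secondary = build_prompts(
    "a cat with a hat in a grass field",
    ["cat", "hat", "grass field"],
    ["artist A", "artist B"],
)
```

The main prompt stays a readable sentence; the secondary prompt is just the comma-joined keyword list with the artist references appended last.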
Heya, part 5 of my series of step-by-step tutorials is out. It covers improving your advanced KSampler setup and the use of prediffusion with an unco-operative prompt, to get more out of your workflow. But it has the complexity of an SD 1.5 model.

Encouragement of fine-tuning through the adjustment of the denoise parameter.

Go to ComfyUI_windows_portable\ComfyUI\ and rename extra_model_paths.yaml.example to extra_model_paths.yaml.

You can encode, then decode back to a normal KSampler with a 1.0 denoise. Instead, I created a simplified 2048x2048 workflow.

Just a quick and simple workflow I whipped up this morning. I have updated the workflow submitted last week, cleaning up the layout a bit and adding many functions I wanted to learn better.

In contrast, the SDXL-clip driven image on the left has much greater complexity of composition. I want a ComfyUI workflow that's compatible with SDXL with base model, refiner model, hi-res fix, and one LoRA, all in one go. I like to create images like that one: AP Workflow v3.0.

There are strengths and weaknesses to each model, so is it possible to combine SDXL and SD 1.5 in a single workflow in ComfyUI?

Sytan SDXL ComfyUI: a very nice workflow showing how to connect the base model with the refiner and include an upscaler.

Aug 20, 2023 · In part 1 (link), we implemented the simplest SDXL Base workflow and generated our first images.

SDXL Turbo is an SDXL model that can generate consistent images in a single step.

Feature/Version: Flux.1 Pro, Flux.1 Dev, Flux.1 Schnell. Overview: cutting-edge performance in image generation, with top-notch prompt following, visual quality, image detail, and output diversity.

This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format.

My primary goal was to fully utilise the 2-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space. I think that when you put too many things inside, it gives less attention to each of them.
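The extra_model_paths.yaml file is how ComfyUI references models stored in an external location (for example, an existing Automatic1111 install) instead of re-downloading them. A sketch of what the relevant section can look like, modelled loosely on the a111 block of the shipped example file — treat every path here as a placeholder for your own install:

```yaml
# extra_model_paths.yaml (sketch; adjust paths to your setup)
a111:
    base_path: D:/stable-diffusion-webui/   # placeholder path
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/SwinIR
    embeddings: embeddings
    controlnet: models/ControlNet
```

Each key maps a ComfyUI model category to one or more subfolders under base_path; restart ComfyUI after editing so the paths are picked up.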
- A great starting point for using img2img with SDXL
- Upscaling: how to upscale your images with ComfyUI
- Merge 2 images together: merge two images together with this ComfyUI workflow
- ControlNet Depth ComfyUI workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point

For some workflow examples, and to see what ComfyUI can do, you can check out: SDXL Turbo, AuraFlow, HunyuanDiT. In the standalone Windows build you can find this file in the ComfyUI directory. No, because it's not there yet.

Emphasis on the strategic use of positive and negative prompts for customization.

The Ultimate SD Upscale is one of the nicest things in Auto1111. It first upscales your image using GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512. The pieces overlap each other and can be bigger.

Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.

I stopped the process at 50GB, then deleted the custom node and the models directory. This can be useful for systems with limited resources, as the refiner takes another 6GB of RAM. Thanks. The proper way to use it is with the new SDTurboScheduler node, but it might also work with the regular schedulers. I tried to find either of those two examples, but I have so many damn images I couldn't find them.

ComfyUI workflow to play with this, embedded here. This gives SD3-style prompt following and impressive multi-subject composition. From there, we will add LoRAs, upscalers, and other workflows. So far I find it amazing, but I'm not achieving the same level of quality I had with Automatic1111. Just load your image and prompt, and go. SDXL 1.0 Refiner. I understand how outpainting is supposed to work in ComfyUI (workflow… Based on Sytan's SDXL 1.0 workflow.
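The tiling scheme described for Ultimate SD Upscale — cut the upscaled image into overlapping, SD-digestible tiles — is easy to picture in code. A rough sketch of the tile arithmetic only (my own illustration; the real node also handles seam blending, masks, and padding):

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Return (left, top, right, bottom) boxes that cover the image with
    overlapping tiles, the way tiled SD upscalers slice up their work."""
    stride = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), stride):
        for left in range(0, max(width - overlap, 1), stride):
            boxes.append((left, top,
                          min(left + tile, width), min(top + tile, height)))
    return boxes

boxes = tile_boxes(1024, 1024)  # 3x3 grid of 512px tiles overlapping by 64px
```

The overlap is what lets each tile's edges be re-blended with its neighbours, which is why the approach survives arbitrary image resolutions.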
Sure, it's not 2.0. It can't do some things that SD3 can, but it's really good and leagues better than SDXL. You can use more steps to increase the quality.

Simple SDXL Template. Intermediate SDXL Template. List of Templates.

With SDXL 0.9 I was using some ComfyUI workflow shared here where the refiner was always an improved version versus the base.

You can construct an image generation workflow by chaining different blocks (called nodes) together. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own.

AnimateDiff in ComfyUI is an amazing way to generate AI videos.

But let me know if you need help replicating some of the concepts in my process. Only dog, also perfect.

We don't know if ComfyUI will be the tool moving forwards, but what we guarantee is that by following the series, those spaghetti workflows will become a bit more understandable, and you will gain a better understanding of SDXL. EDIT: For example, this workflow shows the use of the other prompt windows. It's simple and straight to the point.

Sep 7, 2024 · SDXL Examples. Indeed SDXL is better, but it's not yet mature: models for it are only just appearing, and the same goes for LoRAs.

Yes, 8GB card: the ComfyUI workflow loads both SDXL base & refiner models, a separate XL VAE, and 3 XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model and input from the same base SDXL model, and they all work together.

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions. I have updated the workflow submitted last week, cleaning up the layout a bit and adding many functions I wanted to learn better.

….json · cmcjas/SDXL_ComfyUI_workflows at main (huggingface.co)

Increasing the sample count leads to more stable and consistent results. Comfy1111 SDXL Workflow for ComfyUI.
Feb 7, 2024 · Running SDXL models in ComfyUI is very straightforward, as you must've seen in this guide. I have an image that I want to do a simple zoom-out on.

I made a preview of each step to see how the image changes after going from SDXL to SD 1.5 with LCM at 4 steps and 0.2 denoise to fix the blur and soft details. You can just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise.

EDIT: WALKING BACK MY CLAIM THAT I DON'T NEED NON-LATENT UPSCALES.

I'm not sure what's wrong here, because I don't use the portable version of ComfyUI. But it is extremely light as we speak, so much so…

Examples of ComfyUI workflows. Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. Please share your tips, tricks, and workflows for using this software to create your AI art.

The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. More to come.

Comfy Workflows: share, discover, & run thousands of ComfyUI workflows. It is pretty amazing, but man, the documentation could use some TLC, especially on the example front.

I did some experiments with SDXL 1.0 and came up with a reasonably simple, yet pretty flexible and powerful workflow that I use myself: MoonRide workflow v1.2. It covers SDXL 1.0 Base and SDXL 1.0 Refiner, automatic calculation of the steps required for both the Base and the Refiner models, and quick selection of image width and height based on the SDXL training set.

I have a ComfyUI workflow that produces great results.

Download the SDXL 1.0 checkpoint model: since SDXL was trained on 1024 x 1024 images, its resolution is double that of SD 1.5. It provides a workflow for SDXL (base + refiner).

Based on Sytan's SDXL 1.0 ComfyUI workflow with a few changes; here's the sample JSON file for the workflow I was using to generate these images: sdxl_4k_workflow.json.
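Since ComfyUI workflows are just JSON graphs, the shared .json files above can also be queued programmatically against a running instance over its HTTP API: by default the server listens on 127.0.0.1:8188 and accepts an API-format graph POSTed to /prompt. A minimal sketch — the endpoint shape matches stock ComfyUI, but the helper itself and the toy one-node graph are my own illustration:

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"  # ComfyUI's default local address (assumed)

def make_request(workflow: dict) -> urllib.request.Request:
    """Wrap an API-format workflow graph for ComfyUI's POST /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(SERVER + "/prompt", data=payload,
                                  headers={"Content-Type": "application/json"})

# To actually queue it against a running instance:
#   urllib.request.urlopen(make_request(workflow_graph))
```

Note that the API format (node id → class_type/inputs) is not the same JSON as the editor's saved layout; ComfyUI's "Save (API Format)" option exports the right shape.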
First, I generated a series of images in a 9:16 aspect ratio, some in ComfyUI with SDXL, and others in Midjourney. I mean, the image on the right looks "nice" and all. Then, after upscale and face fix, you'll be surprised how much it changed.

SDXL ControlNet Tiling Workflow: I've been doing some tests in A1111 using the Ultimate Upscaler script together with ControlNet Tile, and it works wonderfully; it doesn't matter what tile size or image resolution I throw at it. But in ComfyUI I get this error:

Download the SDXL 1.0 checkpoint model. The only important thing is that for optimal performance the resolution should be set to 1024x1024, or to other resolutions with the same number of pixels but a different aspect ratio.

Created by: OpenArt. What this workflow does: this basic workflow runs the base SDXL model with some optimization for SDXL.

SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it.

AP Workflow 6.0 for ComfyUI - now with support for SD 1.5.

Comfy1111 SDXL Workflow for ComfyUI: just a quick and simple workflow I whipped up this morning to mimic Automatic1111's layout. You do only face, perfect. Your efforts are much appreciated.

In this guide I will try to help you start out using this, and give you some starting workflows to work with. Nobody needs all that, LOL.

This was the base for my… I'll do you one better, and send you a PNG you can directly load into Comfy. I'm glad to hear the workflow is useful.

Part 3 - we will add an SDXL refiner for the full SDXL process.
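The pixel-budget rule above — keep the total pixel count at roughly 1024x1024 while varying the aspect ratio — can be checked mechanically, since SDXL-friendly sizes also keep both dimensions divisible by 64. A small sketch (the 10% tolerance and the search ranges are my own assumptions, not an official list):

```python
def sdxl_resolutions(target=1024 * 1024, step=64, tolerance=0.10):
    """List (width, height) pairs divisible by `step` whose pixel count
    stays within `tolerance` of the 1024x1024 budget."""
    return [(w, h)
            for w in range(512, 2049, step)
            for h in range(512, 2049, step)
            if abs(w * h - target) <= target * tolerance]

res = sdxl_resolutions()  # includes 1024x1024, 896x1152, 1536x640, ...
```

The pairs mentioned later in the page, 896x1152 and 1536x640, both fall inside this budget, which is why they sample well where a straight 512x512 would not.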
I put an example image/workflow in the most recent commit that uses a couple of the main ones, and the nodes are named pretty clearly, so if you have the extension installed you should be able to just skim through the menu and search for the ones that aren't as straightforward.

ComfyUI - SDXL basic-to-advanced workflow tutorial - part 5. I know it must be my workflows, because I've seen some stunning images created with ComfyUI.

Aug 13, 2023 · In part 1 (this post), we will implement the simplest SDXL Base workflow and generate our first images. Then, in Part 3, we will implement the SDXL refiner.

This workflow/mini-tutorial is for anyone to use. It contains both the whole sampler setup for SDXL plus an additional digital distortion filter, which is what I'm focusing on here. It would be very useful for people making certain kinds of horror images, or for people too lazy to use Photoshop, like me :P

For example: 896x1152 or 1536x640 are good resolutions. SDXL Examples.

