ComfyUI: load workflow from image (Reddit)
That image would have the complete workflow, even with the 2 extra nodes.

That node will try to send all the images in at once, usually leading to "out of memory" issues.

I hope you like it.

I tried the load methods from was-node-suite-comfyui and ComfyUI-N-Nodes in ComfyUI, but they seem to load all of my images into RAM at once. To be fair, I ran into a similar issue trying to load a generated image as an input image for a mask, but I haven't exhaustively looked for a solution.

Please share your tips, tricks, and workflows for using this software to create your AI art.

AP Workflow v5.0 includes the following experimental functions:

Then I fix the seed to that specific image and use its latent in the next step of the process.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. Have fun.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

=== How to prompt this workflow ===
Main Prompt
-----
The subject of the image in natural language.
Example: a cat with a hat in a grass field
Secondary Prompt
-----
A list of keywords derived from the main prompt, with references to artists at the end.
Example: cat, hat, grass field, style of [artist name] and [artist name]
Style and References

It is necessary to give it the last generated image, as it loads the image locally.

You can load these images in ComfyUI to get the full workflow. Add your workflows to the collection so that you can switch and manage them more easily. Ensure that you use this node and not Load Image Batch From Dir.

So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome, but is there a way to make it load just the prompt info and keep my workflow otherwise?

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.
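When drag-and-drop silently fails like this, it usually means the PNG no longer carries ComfyUI's embedded metadata (image hosts that re-encode uploads strip it). ComfyUI stores the graph as JSON in PNG tEXt chunks, under the "workflow" and "prompt" keywords. As a rough sketch of how you could check whether an image still has that data, using only the standard library; the one-pixel PNG built here is a fabricated stand-in for a real render:

```python
import json, struct, zlib

def png_text_chunks(data: bytes) -> dict:
    """Walk a PNG byte stream and return its tEXt chunks as {keyword: value}.
    ComfyUI keeps the graph under the 'workflow' keyword (and the executed
    prompt under 'prompt'), which is what drag-and-drop reads back."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = chunk.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a tiny stand-in PNG with an embedded workflow, then read it back.
workflow = {"nodes": [{"id": 1, "type": "KSampler"}]}
png = (b"\x89PNG\r\n\x1a\n"
       + make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + make_chunk(b"tEXt", b"workflow\x00" + json.dumps(workflow).encode())
       + make_chunk(b"IEND", b""))
print(json.loads(png_text_chunks(png)["workflow"]))
```

If the returned dict has no "workflow" key, the metadata was stripped somewhere along the way, and ComfyUI has nothing to load.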
Get Started with ComfyUI - Drag and Drop Workflows from an Image! Run Diffusion.

This causes my steps to take up a lot of RAM, eventually getting the process killed for running out of memory. I'm using the ComfyUI notebook from their repo, running it remotely on Paperspace. Unfortunately, the file names are often unhelpful for identifying the contents of the images. Any ideas on this?

Welcome to the unofficial ComfyUI subreddit.

…1.5 LoRA with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

I have to second the comments here that this workflow is great.

You can then load or drag the following image in ComfyUI to get the workflow.

I'm not really checking my notifications.

Img2Img works by loading an image (like this example image), converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

The graph that contains all of this information is referred to as a workflow in Comfy.

Pro-tip: insert a WD-14 or BLIP interrogation node after it to automate the prompting for each image.

A lot of people are just discovering this technology, and want to show off what they created.

…:8188, but when I try to load a flow through one of the example images, it just does nothing.

The prompt for the first couple, for example, is this:

Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses an SD1.5 checkpoint.

The example pictures do load a workflow, but they don't have a label or text indicating which version it is.
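For the RAM blow-ups described above, the usual fix is to stream frames instead of materialising the whole batch: hold one decoded image at a time and let the previous one be garbage-collected. A minimal sketch, not tied to any particular node pack; the directory and file names are fabricated for the demo:

```python
import tempfile
from pathlib import Path

def iter_frames(folder):
    """Yield image paths one at a time so only the frame currently being
    processed has to live in memory, instead of the whole batch."""
    for path in sorted(Path(folder).glob("*.png")):
        # Decode `path` with your image library here, process it,
        # then drop the reference before the next iteration.
        yield path

# Demo on a throwaway directory with three empty stand-in "frames".
frames_dir = Path(tempfile.mkdtemp())
for name in ["frame_001.png", "frame_002.png", "frame_003.png"]:
    (frames_dir / name).touch()

print([p.name for p in iter_frames(frames_dir)])
```

This is the same idea behind "list"-style loader nodes that hand the sampler one image per queue item rather than a single giant batch tensor.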
This workflow can use LoRAs and ControlNets, enabling negative prompting with the KSampler, dynamic thresholding, inpainting, and more.

…an SD1.5 checkpoint in combination with a Tiled ControlNet to feed an Ultimate SD Upscale node for a more detailed upscale.

Your efforts are much appreciated.

No need to put in the image size, and it has a 3-stack LoRA with a refiner.

Those images have to contain a workflow, so one you've generated yourself, for example.

8K views, 11 months ago. Hello there.

As someone relatively new to AI imagery, I started off with Automatic1111 but was tempted by the flexibility of ComfyUI, though I felt a bit overwhelmed.

Is there a way to load each image in a video (or a batch) one at a time to save memory?

Thanks a lot for sharing the workflow.

This workflow allows you to load images of an AI avatar's face, shirt, pants, and shoes, plus a pose, and generates a fashion image based on your prompt.

Tip: for speed, you can load an image using the (clipspace) method by right-clicking on images you generate.

An all-in-one FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img.

I thought it was cool anyway, so here.

The diagram doesn't load into ComfyUI, so I can't test it out.

Browse and manage your images/videos/workflows in the output folder.

Upcoming tutorial: SDXL LoRA + using 1.5…

Now the problem I am facing is that it starts already morphed between the 2, I guess because it happens so quickly.

This will open the live painting thing you are looking for. That's how I made and shared this.

Details on how to use the workflow are in the workflow link.

Drag and drop doesn't work for…

Belittling their efforts will get you banned.

This is what it looks like, second pic.

Are you referring to the Input folder in the ComfyUI installation folder? ComfyUI runs as a server, and the input images are "uploaded"/copied into that folder.
Maybe a useful tool for some people.

Load Image List From Dir (Inspire). My goal is that I start the ComfyUI workflow, and the workflow loads the latest image in a given directory, works with it, and spits it out in some shape or form.

After borrowing many ideas, and learning ComfyUI.

If this is what you are seeing when you go to choose an image in the image loader, then all you need to do is go to that folder and delete the ones you no longer need.

It animates 16 frames and uses the looping context options to make a video that loops.

AP Workflow v5.0.

The image you're trying to replicate should be plugged into "pixels", and the VAE for whatever model is going into the KSampler should also be plugged into the VAE Encode.

It will load images in two ways: 1) direct load from HDD; 2) load from a folder (picks the next image when one is generated).

Prediffusion - this creates a very basic image from a simple prompt and sends it as a source.

Initial Input block -

I can't load workflows from the example images using a second computer.

So, I just made this workflow in ComfyUI.

In 1111, using image-to-image, you can batch load all frames of a video, batch load ControlNet images, or even masks, and as long as they share the same name as the main video frames, they will be associated with the image when batch processing.

It's nothing spectacular but gives good consistent results.

Starting workflow.

I can load workflows from the example images through localhost:8188; this seems to work fine. I was confused by the fact that I saw, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, that they simply drop PNGs into the empty ComfyUI.

I have like 20 different ones made in my "web" folder, haha.

I am trying to understand how it works and created an animation morphing between 2 image inputs.
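The "load the latest image in a given directory" step above boils down to one comparison over modification times. A small sketch, independent of any particular custom node; the directory and timestamps are fabricated for the demo:

```python
import os, tempfile
from pathlib import Path

def latest_image(folder):
    """Return the most recently modified PNG in `folder`."""
    return max(Path(folder).glob("*.png"), key=lambda p: p.stat().st_mtime)

# Demo: the alphabetically-first file is deliberately made the newest.
out_dir = Path(tempfile.mkdtemp())
for name, mtime in [("a.png", 300), ("b.png", 100), ("c.png", 200)]:
    p = out_dir / name
    p.touch()
    os.utime(p, (mtime, mtime))  # pin mtimes so the demo is deterministic

print(latest_image(out_dir).name)  # a.png
```

Sorting by `st_mtime` rather than by name is what makes "latest" mean "most recently generated" even when file names don't sort chronologically.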
I think I have a reasonable workflow that allows you to test your prompts and settings, then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a wkfl-embedded .PNG into ComfyUI.

This is the node you are looking for.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them!

Basically, if you have a really good photo but no longer have the workflow used to create it, you can just load the image and it'll load the workflow.

And above all, BE NICE.

Get a quick introduction to how powerful ComfyUI is. Hidden Faces.

Nobody needs all that, LOL.

I had to load the image into the mask node after saving it to my hard drive.

You need to select the directory your frames are located in (i.e.…

Enjoy.

Is there a common place to download these? None of the Reddit images I find work, as they all seem to be JPG or WebP. Images created with anything else do not contain this data.

These are examples demonstrating how to do img2img. There's a node called VAE Encode with two inputs.

A search of the subreddit didn't turn up any answers to my question. Hi all! I was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc.

75K subscribers.

All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else.

The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.
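On reading prompts, steps, and sampler out of an image: alongside the graph, ComfyUI embeds an executed-prompt JSON that maps node IDs to a class_type and its inputs, so sampler settings can be pulled from any KSampler entry. A sketch over a fabricated example payload (the node IDs and values here are made up, not from any real image):

```python
import json

# Fabricated example of ComfyUI's embedded "prompt" metadata: a mapping of
# node-id -> {"class_type": ..., "inputs": {...}}.
prompt = json.loads("""{
  "3": {"class_type": "KSampler",
        "inputs": {"seed": 42, "steps": 20, "cfg": 7.0,
                   "sampler_name": "euler", "denoise": 1.0}},
  "6": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a cat with a hat in a grass field"}}
}""")

def generation_data(prompt_graph):
    """Collect sampler settings and prompt texts from the node graph."""
    info = {"texts": []}
    for node in prompt_graph.values():
        if node["class_type"] == "KSampler":
            info.update({k: node["inputs"][k]
                         for k in ("steps", "cfg", "sampler_name", "seed")})
        elif node["class_type"] == "CLIPTextEncode":
            info["texts"].append(node["inputs"]["text"])
    return info

print(generation_data(prompt))
```

This is roughly what "show metadata" style nodes and web viewers do; note it only works for images whose metadata survived re-encoding, which is why JPG and WebP downloads from Reddit come up empty.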
…where did you extract the frames zip file to, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager and then test a vanilla default workflow.

Thanks.

The ComfyUI/web folder is where you want to save/load .json files.

The one I've been mucking around with includes poses (from OpenPose) now, and I'm going to off-screen all nodes that I don't actually change parameters on.

[DOING] Clone public workflows by Git and load them more easily.

I can load the ComfyUI through 192.…

They are completely separate from the main workflow.

Also notice that you can download that image and drag-and-drop it into your ComfyUI to load that workflow, and you can also drag-and-drop images onto the Load Image node to load them quicker.

And another general difference is that A1111, when you set 20 steps and 0.8 denoise, won't actually run 20 steps but rather decreases that amount to 16.

This is just a simple node build off what's given and some of the newer nodes that have come out.

How to solve the problem of looping? I had an idea to just write an analogue of a two-in-one Save Image / Load Image node that would save the last result to a file and then output it at the next rendering queue.

In either case, you must load the target image in the I2I section of the workflow.

Pretty comfy, right? ComfyUI is a popular tool that allows you to create stunning images and animations with Stable Diffusion.

Pixels and VAE.

Hey all - I'm attempting to replicate my workflow from 1111 and SD1.5 by using XL in Comfy.
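That steps-times-denoise behaviour is simple arithmetic, and worth knowing when matching A1111 results in ComfyUI, where the KSampler runs the full step count unless you reach for advanced/start-at-step tricks. A sketch of the A1111-style reduction, using the 20-steps/0.8-denoise example from the comment above:

```python
def effective_steps(steps: int, denoise: float) -> int:
    """A1111-style img2img: only the final `denoise` fraction of the noise
    schedule is actually sampled, so the real step count shrinks."""
    return max(1, int(steps * denoise))

print(effective_steps(20, 0.8))  # 16
print(effective_steps(30, 0.5))  # 15
```

So to reproduce an A1111 img2img result in ComfyUI you either lower the step count accordingly or accept that Comfy will spend the full 20 steps on the reduced noise range.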
If it's a .json file, hit the "Load" button, locate the .json file, and open it that way.

Notice that the Face Swapper can work in conjunction with the Upscaler.

Please keep posted images SFW.

Load Image node.

This workflow chains together multiple IPAdapters, which allows you to change one piece of the AI avatar's clothing individually.

Sync your collection everywhere by Git.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

…then uses Grounding DINO to mask portions of the image to animate with AnimateLCM.

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. I'm sorry, I'm not at the computer at the moment or I'd get a screen cap.

The images above were all created with this method.

Load your image to be inpainted into the mask node, then right-click on it and go to "edit mask".

It's simple and straight to the point.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Made this while investigating the BLIP nodes: it can grab the theme off an existing image, and then, using concatenate nodes, we can add and remove features. This allows us to load old generated images as part of our prompt without using the image itself as img2img.

Aug 7, 2023 · Workflows can only be loaded from images that contain the actual workflow metadata created by ComfyUI, stored in each image ComfyUI creates.

About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it'll put in the wrong positive prompt.

Flux Schnell is a distilled 4-step model.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

However, it is not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI.
If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find the problem has resolved itself.

Ending workflow.

I have a video, and I want to run SD on each frame of that video.

Just load your image and prompt, and go.

Experimental Functions.

I've been using ComfyUI for nearly a year, during which I've accumulated a significant number of images in my input folder through the Load Image node.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that for each queued gen it loads the 001 image from the folder, and for the next gen grabs the 002 image from the same folder? Thanks in advance!

My ComfyUI workflow was created to solve that.

My 2nd attempt: I thought to myself, I will go as basic and as easy as possible. I will limit the models I am using to only large, popular models, and I will try to stick to basic ComfyUI nodes as much as possible, meaning I have none except for Manager and Workflow Spaces, that's it.

This workflow generates an image with SD1.5…

And images that are generated using ComfyBox will also embed the whole workflow, so it should be possible to just load it from an image.

This is like copy-paste, basically, and doesn't save the files to disk.

With, for instance, a graph like this one, you can tell it to: load this model, put these bits of text into the CLIP encoder, make an empty latent image, use the loaded model with the encoded text and noisy latent to sample the image, now save the resulting image.

You can save the workflow as a .json file and load it again from that file.

I want to load an image in ComfyUI and have the workflow appear, just as it does when I load a saved image from my own work.

If you are still interested - basically, I added 2 nodes to the workflow of the image (Load Image and Save Image).
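A node-free way to picture the "grab 001 this gen, 002 next gen" behaviour is a counter persisted between queue runs; increment-style image-loader nodes do essentially this. A hypothetical sketch, with the directory, file names, state file, and helper all made up for illustration:

```python
import json, tempfile
from pathlib import Path

def next_image(folder, state_file):
    """Return one image per call, advancing an on-disk index so every
    queued generation picks up where the previous one left off."""
    files = sorted(Path(folder).glob("*.png"))
    state = Path(state_file)
    idx = json.loads(state.read_text())["idx"] if state.exists() else 0
    state.write_text(json.dumps({"idx": (idx + 1) % len(files)}))
    return files[idx]

# Demo with three stand-in images in a throwaway directory.
root = Path(tempfile.mkdtemp())
for name in ["001.png", "002.png", "003.png"]:
    (root / name).touch()
state = root / "batch_state.json"

picked = [next_image(root, state).name for _ in range(4)]
print(picked)  # ['001.png', '002.png', '003.png', '001.png']
```

Persisting the index to disk is the key design choice: each queued generation is an independent run, so in-memory counters reset, while a state file survives between queue items.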
And you need to drag them into an empty spot, not onto a Load Image node or something.

I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP Text Encode node, as is indicated on the diagram I have here from the GitHub page.

A quick question for people with more experience with ComfyUI than me.

You need to load and save the edited image.

Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow.