
SDXL ComfyUI workflows: notes collected from Reddit

If you're able to reproduce your workflow in the "Generate" tab, you can use the "ComfyUI Workflow Editor" tab to import your Generate workflow to ComfyUI.

Based on Sytan's SDXL 1.0 ComfyUI workflow with a few changes; here's the sample JSON file for the workflow I was using to generate these images: sdxl_4k_workflow.json. Rinse and repeat until you lose interest :-) Retouch the "inpainted layers" in your image editing software with masks if you must. I hope someone finds it useful.

Yeah, I noticed. Wild: DDIM, 20 steps.

Production value zero, and a little rambling, but it has some nice tips and tricks, and it shows you in a basic way how to build this workflow and why things in that workflow are done the way they are.

In the documentation there's an example that works with a basic flow, but I can't figure out how to modify it for an SDXL workflow.

SDXL-Turbo image-to-image [testing, WIP]: per comfyanon's advice I managed an img2img workflow using "Split Sigmas".

I would like to further modify the ComfyUI workflow for the aforementioned "Portal" scene, in a way that lets me use single images in ControlNet the same way that repo does (by frame-labeled filename, etc.).

Looking for an outpainting workflow using reference-only for SDXL.

Eh, if you build the right workflow, it will pop out 2k and 8k images without the need for a lot of RAM.

My thought currently is to use PonyXL to generate the base image, and then use img2img or inpaint with the 1.5 model to change the figures.

For ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder), Tutorial | Guide: I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

What's worked better for me is running the SDXL image through a VAE encoder and then upscaling the latent before running it through another KSampler that harnesses an SD 1.5 model.

Experimental LCM workflow "The Ravens" for Würstchen v3, aka Stable Cascade, is up and ready for download.

-> You might have to resize your input picture first (upscale?). * You should use CLIPTextEncodeSDXL for your prompts.

All images are generated using both the SDXL Base model and the Refiner model, each automatically configured to perform a certain amount of diffusion steps according to the "Step Ratio" formula defined in the dedicated widget.

The problem is with the nodes labeled Text to Conditioning. Save the new image. I also had this problem in the beginning.

Made a LoRA of my dog and, tada, Christmas cards for all time.

This workflow/mini-tutorial is for anyone to use. It contains the whole sampler setup for SDXL plus an additional digital distortion filter, which is what I'm focusing on here; it would be very useful for people making certain kinds of horror images, or people too lazy to use Photoshop, like me :P

The main features are: works with SDXL and SDXL Turbo, as well as earlier versions like SD 1.5. A ComfyUI workflow to play with this is embedded here.

Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now anything that uses the ComfyUI API doesn't have that, though).

The .json file: you know what to do.

If you're low on VRAM and need the tiling…

Regarding FreeU, I'm becoming increasingly skeptical about its usefulness. I think it was 3DS Max.

When I adjust the resolutions in that workflow to 2048x2048, I'm facing out-of-memory errors.

Tutorial 6 - upscaling.
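The "Step Ratio" mentioned above splits one diffusion-step budget between the SDXL base and refiner models. The exact formula behind that widget isn't reproduced in these notes, so this is a minimal sketch assuming the ratio is simply the fraction of steps given to the base model:

```python
def split_steps(total_steps: int, step_ratio: float) -> tuple[int, int]:
    """Split a step budget between SDXL base and refiner.

    step_ratio is assumed to be the fraction of steps run on the base
    model (e.g. 0.8 means 80% base, 20% refiner); this is a guess at
    what the "Step Ratio" widget means, not that workflow's exact formula.
    """
    base_steps = round(total_steps * step_ratio)
    refiner_steps = total_steps - base_steps
    return base_steps, refiner_steps

# A 25-step budget at a 0.8 ratio gives 20 base + 5 refiner steps,
# matching the "20 base / 5 refiner" split mentioned later in these notes.
print(split_steps(25, 0.8))  # (20, 5)
```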
You could try to put your denoise at the start of an iterative upscale at, say, 0.4, but use a ControlNet relevant to your image so you don't lose too much of your original image, and combine that with the iterative upscaler and a concatenated secondary positive prompt telling the model to add detail or improve detail.

Click "Install Missing Custom Nodes" and install/update each of the missing nodes.

For each of the sequences, I generated about ten of them and then chose the one I…

Start by installing 'ComfyUI Manager'; you can Google that.

Downloading SDXL pics posted here on Reddit and dropping them into ComfyUI doesn't work either, so I guess we'll need a direct download link.

Simple ComfyUI img2img upscale workflow. Thanks for sharing this setup. Sai-enhance usually goes well with all the rest of the styles.

diffusers/stable-diffusion-xl-1.0-inpainting-0.1 at main (huggingface.co)

From what I understand, it is doing only the last step of a 5-step generation, producing the same result as a low denoising threshold.

Much appreciated if you can post the JSON workflow or a picture generated from this workflow, so it can be easier to set up.

As an alternative to the SDXL Base+Refiner models, or the Base/Fine-Tuned SDXL model, you can generate images with the ReVision method. To use ReVision, you must enable it in the "Functions" section.

Basically, two nodes are doing the heavy lifting. In ComfyUI you don't need to be setting up nodes like in other node-based apps; the workflow system takes care of everything, and all the connections are already set.

In Part 1, we implement the SDXL base in the simplest way possible.

Prerequisites: before you can use this workflow, you need to have ComfyUI installed.

Only dog, also perfect.

The gist of it: * The result should best be in the resolution-space of SDXL (1024x1024).

Save the image and drop it into ComfyUI.

With a higher config it seems to have decent results. But I really wanted to update them with SDXL.

I don't suppose you know a good way to get a latent upscale (hires fix) working in ComfyUI with SDXL? I have been trying for ages with no luck.

The workflow does the following: load any image of any size.

I've been doing some tests in A1111 using the Ultimate Upscaler script together with ControlNet Tile, and it works wonderfully; it doesn't matter what tile size or image resolution I throw at it. But in ComfyUI I get this error: …

Sytan's SDXL workflow will load.

…but it has the complexity of an SD 1.5…

Run run_nvidia_gpu.bat and ComfyUI will automatically open in your web browser.

I know it's early, but MERRY Christmas, all! I had this idea kicking around as I made mine this year.

Input sources: will load images in two ways, 1) direct load from HDD, 2) load from a folder (picks the next image when one is generated).

Launch the ComfyUI Manager using the sidebar in ComfyUI.

Got sick of all the crazy workflows.

Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and use stuff entirely in latent space if you want.

I would like to use that in tandem with the existing workflow I have that uses QR Code Monster, which animates traversal of the portal.

You can right-click on the new nodes and select "Convert text to input", and connect them the same way as before.

ComfyUI - SDXL basic to advanced workflow tutorial - 4 - upgrading your workflow.
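To make the "resolution-space of SDXL" advice concrete: SDXL is happiest around a one-megapixel budget, either 1024x1024 or a same-area aspect-ratio bucket (896x1152 and 1536x640 both come up later in these notes). A minimal sketch of snapping an arbitrary input size into that space; the multiple-of-64 rounding is a common convention (the VAE strictly only needs multiples of 8):

```python
import math

def snap_to_sdxl(width: int, height: int, budget: int = 1024 * 1024,
                 multiple: int = 64) -> tuple[int, int]:
    """Scale (width, height) so the area is ~`budget` pixels,
    keeping the aspect ratio and rounding to `multiple`."""
    scale = math.sqrt(budget / (width * height))
    w = max(multiple, round(width * scale / multiple) * multiple)
    h = max(multiple, round(height * scale / multiple) * multiple)
    return w, h

print(snap_to_sdxl(3840, 2160))  # 16:9 input  -> (1344, 768)
print(snap_to_sdxl(800, 1200))   # portrait in -> (832, 1280)
```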
The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or other resolutions with the same number of pixels but a different aspect ratio.

The first one is very similar to the old workflow and is just called "simple".

There is an imposter among us.

* Still not sure about all the values, but from here it should be tweakable.

Then I ported it into Photoshop for further finishing, with a slight gradient layer to enhance the warm-to-cool lighting. The ComfyUI workflow is here. If anyone sees any flaws in my workflow, please let me know.

The fact that you have to guess the right values every time you want to use it quickly becomes tedious.

Haha, thanks.

Download one of the dozens of finished workflows from Sytan/Searge/the official ComfyUI examples. It should work with SDXL models as well.

Created by Michael Hagge: my workflow for generating anime-style images using Pony Diffusion based models.

Also, embedding the full workflow into images is so nice coming from A1111, where half the extensions either don't embed their params or don't reuse those params when…

I am a fairly recent ComfyUI user.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

AP Workflow 4.0 includes the following basic functions: Text2Image (T2I) with SDXL Base+Refiner.

Hi guys, my computer doesn't have enough VRAM to run certain workflows, so I've been working on an open-source custom node that lets me run my workflows using cloud GPU resources! Why are you calling this "cloud VRAM"? It insinuates it's different than just…

This is the image in the file, converted to a JPG.

Everything was working fine, but now when I try to load a model it gets stuck in this phase.

I don't know why these example workflows are being laid out so compressed together.

Not hard.

But try both at once and they miss a bit of quality.

Good idea for a small biz.

Insert the new image into the workflow again and inpaint something else. Click run_nvidia_gpu.bat.

You can't share via image here.

Download this first and put it into the folder inside ComfyUI called custom_nodes. After that, restart ComfyUI; then you should see a new button on the left tab (the last one). Click that, then click Missing Custom Nodes and install the one listed. After you have installed it, restart ComfyUI once more and it should work.

Once you've altered the latent space with SD 1.5, you have to keep working with SD 1.5.

I'ma try to get a background-fix workflow going; this blurry shit is starting to bother me.

A hub dedicated to the development and upkeep of the Sytan SDXL workflow for ComfyUI. The workflow is provided as a .json file which is easily loadable into the ComfyUI environment.

5 steps with a split at step 4, using the lower-sigma output.

It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something.

SD Christmas Card Factory 2023.

To increase resolution you have to upscale the image; you can do a latent upscale and increase the size of the image by whatever you want, e.g. 1.5x the current size.

Tutorial 7 - Lora Usage.

[Part 2] SDXL in ComfyUI from Scratch - Image Size, Bucket Size, and Crop Conditioning.

Available on CIVITAI now!
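"5 steps with a split at step 4, using the lower-sigma output" is the SplitSigmas trick: cut the sampler's noise schedule in two and run only the low-sigma tail, which behaves like img2img at a very low denoise (the earlier note about "doing only the last step of a 5-step generation" is the same observation). A rough illustration using the Karras schedule formula; the sigma_min/sigma_max defaults below are typical SD-style values, not taken from that workflow:

```python
def karras_sigmas(n: int, sigma_min: float = 0.0292,
                  sigma_max: float = 14.6146, rho: float = 7.0) -> list[float]:
    """Karras et al. (2022) noise schedule: n steps, high to low, ending at 0."""
    lo, hi = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return [(hi + i / (n - 1) * (lo - hi)) ** rho for i in range(n)] + [0.0]

sigmas = karras_sigmas(5)           # a 5-step schedule (6 boundary values)
high, low = sigmas[:5], sigmas[4:]  # "split at step 4"

# Sampling with only `low` starts from an almost-clean latent, so the model
# makes one small refinement pass: the img2img-like, low-denoise behavior
# described above.
print([round(s, 3) for s in sigmas])
print([round(s, 3) for s in low])
```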
An alpha version of my W.I.P.…

Being the control freak that I am, I took the base+refiner image into Automatic1111 and inpainted the eyes and lips.

To improve sharpness, search for "was node suite comfyui workflow examples" on Google; it should take you to a GitHub page with various workflows, one of which I see is for running a hipass for…

SDXL LoRA + Refiner Workflow.

In other words, I can do 1 or 0 and nothing in between.

In this workflow we try to explore one concept, making T-shirt mockups with some cool input images, using the IP-Adapter to convert them into final images.

I love the use of the rerouting nodes to change the paths.

I used the workflow kindly provided by the user u/LumaBrik, mainly playing with parameters like CFG guidance, augmentation level, and motion bucket.

Something about them is trying to feed an output back into an input.

Click "Install Models" to install any missing models.

There are strengths and weaknesses for each model, so is it possible to combine SDXL and SD 1.5 in a single workflow in ComfyUI?

Start ComfyUI by running the run_nvidia_gpu.bat file.

Heya, tutorial 4 from my series is up; it covers the creation of an input selector switch and the use of some math nodes, and has a few tips and tricks.

I typically use 20 steps of the base model and 5 steps of the refiner, using DDIM.

If you're using the same ControlNets as the video, you're using ControlNets meant for 1.5.

That's because the base 1.0 version of the SDXL model already has that VAE embedded in it.

POD-MOCKUP generator using SDXL Turbo and IP-Adapter Plus #comfyUI.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

Give me one image, the model's name, the number of steps that you use, and the scale, so we can compare results.

We don't know if ComfyUI will be the tool moving forwards, but what…

With the Peak of Perfection - Photorealism Style Pack, you can imbue your AI artistry with nuanced real-world details.

You can create complete conditional subgraphs and even better UX than I did with this simple conversion.

Just looking for a workflow for outpainting using reference only for the prompt, or promptless outpainting, for SDXL.

Replace those with the built-in Comfy node CLIP Text Encode (Prompt).

Regular SDXL is just a bunch of noise until step 8!

I tried it on a Colab notebook with different styles, resolutions and artists, and the results were amazing.

MOCKUP generator using SDXL Turbo and IP-Adapter Plus workflow.

Scale the image down to 1024px (after the user has masked the parts of the image which should be affected), pick up the prompt, go through ControlNet to the sampler, and produce a new image (or the same as the original if no parts were masked); then upscale the result 4x.

I think that when you put too many things inside, it gives less attention to each.

First, I generated a series of images in a 9:16 aspect ratio, some in ComfyUI with SDXL and others in Midjourney.

Two workflows included.

Camera and depth/focus styles.

In contrast, the SDXL-CLIP-driven image on the left has much greater complexity of composition. SDXL CLIP text node used on the left, default on the right: SDXL-clip vs default clip.
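The "20 steps of base, 5 steps of refiner" recipe is usually wired with two KSampler (Advanced) nodes sharing one 25-step schedule: the base samples steps 0-20 and returns its leftover noise, and the refiner finishes steps 20-25. A sketch of the relevant settings as API-format node excerpts; the field names are from memory of the stock node, so check them against your install:

```python
# Hypothetical excerpt of an API-format workflow: two KSampler (Advanced)
# nodes handing off at step 20 of a shared 25-step schedule.
base_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {
        "add_noise": "enable",                   # base adds the initial noise
        "steps": 25,
        "start_at_step": 0,
        "end_at_step": 20,                       # stop early...
        "return_with_leftover_noise": "enable",  # ...and keep the noise
        "sampler_name": "ddim",
        # model/positive/negative/latent_image links omitted for brevity
    },
}
refiner_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {
        "add_noise": "disable",                  # continue the base latent as-is
        "steps": 25,
        "start_at_step": 20,                     # pick up where the base stopped
        "end_at_step": 25,
        "return_with_leftover_noise": "disable",
        "sampler_name": "ddim",
    },
}
```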
Try bypassing both nodes and see how bad the image is by comparison.

Extract the workflow zip file.

Hi community.

Click the Load button and select the .json workflow file you downloaded in the previous step.

Use a Load Image node connected to a sketch ControlNet preprocessor, connected to Apply ControlNet with a sketch or doodle ControlNet.

* Use Refiner.

You do only the face, perfect.

Any advice?

Workflow: trying to recreate my SD 1.5 'sketch to image' workflows in SDXL.

Hello, I've fallen out of the AI image business for a month or so; I wanted to ask what would be the best updated version to get a sketch image into…

Prediffusion: this creates a very basic image from a simple prompt and sends it as a source.

Using your workflow at 5120x1440 resolution (end result) works fine but needs ~275 seconds for completion.

From there, we will add LoRAs, upscalers, and other workflows.

Search for 'resnet50' and you will find it. And in the examples on the workflow page that I linked, you can see that the workflow was used to generate several images that do need the face restore. I even doubled it.

[Part 1] SDXL in ComfyUI from Scratch - SDXL Base.

I played for a few days with ComfyUI and SDXL 1.0, did some experiments, and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself: MoonRide workflow v1.

If you want to follow my tutorials from part one onwards, you can learn to build a complex multi-use workflow from the…

Step 4: Start ComfyUI.

It's important to get all the steps and noise settings right: SDXL setup, basic to advanced workflow.

I don't want it to get to the point where people are just making…

OK guys, here's a quick workflow from a Comfy noobie.

If the term "workflow" is something that has only been used exclusively to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

You can't mix and match models.

They still work well.

cmcjas/SDXL_ComfyUI_workflows at main (huggingface.co)

Upscale to 2x and 4x in multi-steps, both with and without sampler (all images are saved). A complete re-write of the custom node extension and the SDXL workflow. Highly optimized processing pipeline, now up to 20% faster than in older workflow versions. Support for ControlNet and ReVision; up to 5 can be applied together. Multi-LoRA support, with up to 5 LoRAs at once.

Hi there.

Nice, it seems like a very neat workflow and produces some nice images.

Yes, SDXL follows prompts much better and doesn't require too much effort.

Input images can be any AI-generated art or your own…

Has anyone ever managed to run a ComfyUI workflow for SDXL in a Python script? I'd like to automate image generation via ComfyUI in Python for an SDXL workflow, but I can't manage it.

Thank you so much! The difference in level of detail is stunning!

Yeah, totally, and you don't even need the "hyperrealism" and "photorealism" words in the prompt; they tend to make the image worse than without.

Nothing special, but easy to build off of.

Food for thought: check the "Default" workflow in edit mode which comes with ComfyBox.

They depend on complex pipelines and/or Mixture of Experts (MoE) that enrich the prompt in many different ways.
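On the question of driving a ComfyUI workflow from a Python script: ComfyUI exposes a small HTTP API, and a workflow exported with "Save (API Format)" can be queued with a plain POST to /prompt. A minimal sketch using only the standard library; the filename and the node id "6" are placeholders, since your exported JSON will have its own ids:

```python
import json
import urllib.request

# Workflow exported from ComfyUI via "Save (API Format)".
with open("workflow_api.json") as f:  # hypothetical filename
    workflow = json.load(f)

# Tweak an input before queueing, e.g. the positive prompt.
# "6" is a placeholder node id; look up the real one in your JSON.
workflow["6"]["inputs"]["text"] = "a photo of an astronaut riding a horse"

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default local ComfyUI address
    data=json.dumps({"prompt": workflow}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # response includes the prompt_id of the queued job
```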
I talk a bunch about some of the different upscale methods and show what I think is one of the better ones; I also explain how a LoRA can be used in a ComfyUI workflow.

Stay away from SDXL when first starting out if hard drive space is a concern.

(SDXL) + Workflow.

The difference between basic 1.5 and the latest checkpoints is night and day.

I do like to go in depth and ramble a bit, so maybe that's not for you, or maybe you like that kind of thing.

*SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the technical report), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality.

Mind-blowing how much you can build with it already.

The only thing you need to know is where the nodes are, and I made the video to show that, so people can jump in and locate the tools; after that it is as easy as A1111.

When using the SDXL base model, I find the refiner helps improve images, but I don't run it for anywhere close to the number of steps that the official workflow does.

This is an example of an image that I generated with the advanced workflow.

AP Workflow 5.0 includes the following advanced functions: ReVision.

SDXL Base + SD 1.5 + SDXL Refiner Workflow : StableDiffusion.

I'll check it out.

Ok, so this is a bunch of tutorials I made centered on updating the same workflow step by step to look better.

Layer copy & paste this PNG on top of the original in your go-to image editing software.

It's ONE STEP.

Explore new ways of using the Würstchen v3 architecture and gain a unique experience that sets it apart from SDXL and SD 1.5.

Combined Searge and some of the other custom nodes. I know it's simple for now.

Plus, there's so much more you can do with SD 1.5, with all the tutorials and compatible nodes available (i.e., AnimateDiff works smoother with SD 1.5).

Problem solved.

AP Workflow v3.0 is the first step in that direction.

A workflow is better than a video for others to diagnose issues or borrow concepts.

Finally I can use SDXL with a 1080 Ti, 11 GB VRAM.

Then go to the 'Install Models' submenu in ComfyUI Manager.

Better than the abomination Disney is cooking.

Then in Part 3, we will implement the SDXL refiner.

Model description: SDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis.

It wasn't long ago that I put together some workflows around sketch-to-image in SD 1.5.

[Part 3] SDXL in ComfyUI from Scratch - Adding SDXL Refiner.

I've been messing with this for the last few days and cannot for the life of me get the Detailer panel to work.

If we think about what base 1.5 does and what could be achieved by refining it, this is really very good; hopefully it will be as dynamic as 1.5.

SDXL Turbo + SDXL Refiner workflow for more detailed image generation.

Just my two cents.

Also, if this is new and exciting to you, feel free to post…

It uses ComfyUI as a back end, but has a similar UI to Automatic1111.

The fact that SDXL has NSFW is a big plus; I expect some amazing checkpoints out of this.

Pretty much the title.

Before this I was working on IPAdapter FaceID; I don't know if 2.…

Not entirely sure everything is correct.

Thanks for that workflow.

Sytan's official SDXL ComfyUI 1.0 workflow, with Mixed Diffusion and reliable high-quality High-Res Fix, now officially released!
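The multi-step upscaling mentioned above (2x and 4x in stages, roughly 1.5x per hop) is easier to reason about as arithmetic: several modest passes, each re-sampled at low denoise, instead of one big jump. A small planning sketch; the 1.5x-per-pass default mirrors the "1.5x current size" advice and is a convention, not a hard rule:

```python
def plan_upscale_passes(width: int, height: int, target_scale: float,
                        per_pass: float = 1.5) -> list[tuple[int, int]]:
    """Plan a multi-step upscale as a list of intermediate sizes.

    Sizes are rounded to multiples of 8 so they map cleanly onto the
    VAE's 8x latent grid; each pass is meant to be re-sampled at a low
    denoise to rebuild detail."""
    sizes, scale = [], 1.0
    while scale < target_scale:
        scale = min(scale * per_pass, target_scale)
        w = round(width * scale / 8) * 8
        h = round(height * scale / 8) * 8
        sizes.append((w, h))
    return sizes

# 4x total from a 1024x1024 base, in 1.5x hops:
print(plan_upscale_passes(1024, 1024, 4.0))
# [(1536, 1536), (2304, 2304), (3456, 3456), (4096, 4096)]
```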
News: Hello everybody, I know I have been a little MIA for a while now, but I am back after a whole ordeal with a faulty 3090 and various reworks to my workflow to better utilize and leverage some new findings I have…

The goal is to build up knowledge, understanding of this tool, and intuition on SDXL pipelines.

Can't believe people are bitching about the quality.

My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space.

Click "Load" in ComfyUI and select the SDXL-ULTIMATE-WORKFLOW.

My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above, without human intervention.

Same to you, be well and happy!

Allows you to choose the resolution of all outputs in the starter groups.

When rendering human creations, I still find significantly better results with 1.5 models like epicRealism or Jaugeraut, but I know once more models come out with the SDXL base, we'll see incredible results.

It repeats the same phase again and again.

Furthermore, as you pointed out, it depends not only on each model but also on the image style, whether there's one LoRA or two, and so on.

'FreeU_V2' for better contrast and detail, and 'PatchModelAddDownscale' so you can generate at a higher resolution.

Then I pressed Fetch Updates and Update ComfyUI, the line came up as it should, and those two items disappeared.

Working with z of shape (1, 4, 32, 32) = 4096 dimensions.

There is also a ControlNet preprocessor…

So instead of having a single workflow with a spaghetti of 30 nodes, it could be a workflow with three sub-workflows, each with 10 nodes, for example.

For example: 896x1152 or 1536x640 are good resolutions.

Will output this resolution to the bus.

Has 5 parameters which will allow you to easily change the prompt and experiment.

Using just the base model in AUTOMATIC1111 with no VAE produces this same result.

In my Canny edge preprocessor, I seem to not be able to go into decimals like you or other people I have seen do.

Yeah, that's not how Reddit works.

Exactly this: don't try to learn ComfyUI by building a workflow from scratch.

A denoising strength of 0.35-0.5 will keep you quite close to the original image and rebuild the noise caused by the latent upscale.

There are sketch ControlNet models for both SD 1.5 and SDXL.

Use: choose your models and LoRA, write the prompt, choose your styles, and render.

This workflow also includes nodes to include all the resource data (within the limits of the node) when using the "Post Image" function at Civitai, instead of going to a model page and posting your image.

Toggle whether the seed should be included in the file name or not.

We will release Part 2 soon, where we add conditioning parameters and discuss those in detail.

I thought it would be as easy as replacing all of the components with SDXL equivalents of all of my…

SDXL 0.9 and Automatic1111 Inpainting Trial (Workflow Included): I just installed SDXL 0.9 and ran it through ComfyUI.
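The console line "Working with z of shape (1, 4, 32, 32) = 4096 dimensions" quoted above is just the VAE reporting its latent tensor: batch 1, 4 channels, and spatial dimensions at 1/8 of the pixel resolution, so 32x32 corresponds to a 256x256 image. The same arithmetic for SDXL's native 1024x1024:

```python
def latent_shape(width: int, height: int, channels: int = 4,
                 batch: int = 1) -> tuple[int, int, int, int]:
    """Shape of the SD VAE latent for a given pixel size (8x downscale)."""
    assert width % 8 == 0 and height % 8 == 0, "dims must be multiples of 8"
    return (batch, channels, height // 8, width // 8)

def dims(shape: tuple[int, ...]) -> int:
    n = 1
    for s in shape:
        n *= s
    return n

s256 = latent_shape(256, 256)
s1024 = latent_shape(1024, 1024)
print(s256, dims(s256))    # (1, 4, 32, 32) 4096, the logged value
print(s1024, dims(s1024))  # (1, 4, 128, 128) 65536 for SDXL's 1024x1024
```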
How the workflow progresses: initial image…

Use cloud VRAM for SDXL, AnimateDiff, and upscaler workflows, from your local ComfyUI.

Sample picture: Reddit deletes the metadata, so this picture doesn't load in ComfyUI.

Forgot to mention: you will have to download this inpaint model from Hugging Face and put it in your ComfyUI "unet" folder, which can be found in the models folder.

The reason why you typically don't want a final interface for workflows is that many users will eventually want to apply LUTs and other post-processing filters.

It's a bit kludgy, so I was wondering if anyone had a better method, because I can't be the only one looking at SDXL going: man, that's nice, but my LoRA of character X is 1.5.

The second workflow is called "advanced", and it uses an experimental way to combine prompts for the sampler.

These are amazing results for one step.

I mean, the image on the right looks "nice" and all.

Simply apply the customizable styles when using Stable Diffusion to produce images with photorealistic polish.

SDXL ControlNet Tiling Workflow.

Stuck in SDXL model loading.
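On the LUT/post-processing point above: tone tweaks are trivial to apply outside the graph, which is one argument for ending a workflow at a saved PNG. A small sketch with Pillow; the filenames are placeholders:

```python
from PIL import Image, ImageEnhance

# Placeholder filenames: point these at your own ComfyUI output.
img = Image.open("ComfyUI_00001_.png").convert("RGB")

# A crude stand-in for a LUT: lift contrast and saturation slightly.
img = ImageEnhance.Contrast(img).enhance(1.08)
img = ImageEnhance.Color(img).enhance(1.05)

# Gamma adjustment via a 256-entry per-channel lookup table,
# the same mechanism a 1D LUT file would drive.
gamma = 0.95
table = [round((i / 255) ** gamma * 255) for i in range(256)]
img = img.point(table * 3)  # one table per RGB channel

img.save("ComfyUI_00001_graded.png")
```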