ComfyUI Apply IPAdapter: examples from Reddit


I still think the idea of using IPAdapter to control tiles (the same way as ControlNet tile) works pretty well. Make a bare-minimum workflow with a single IPAdapter and test it to see if it works.

I am trying to do something like this: having my own picture as input to IP-Adapter, to draw a character like myself, and having some detailed control over facial expression (I have some other picture as input for MediaPipe Face).

Welcome to the unofficial ComfyUI subreddit.

So all the motion calculations are made separately, like in a regular txt2vid workflow, with the IPAdapter only affecting the "look" of the output. Yeah, that's exactly what I would do for maximum accuracy.

The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face.

The AP Workflow now supports the new PickScore nodes, used in the Aesthetic Score Predictor function.

I'm trying to use IPAdapter with only a cutout of an outfit rather than a whole image. It's called IPAdapter Advanced. Other options like denoise, the context area, and mask operations (erode, dilate, whatever you want) are already possible with existing ComfyUI nodes. For example, download a video from Pexels.com and use that to guide the generation via OpenPose or depth. I'm sold on ComfyUI but haven't even been able to generate an image as of yet.

The IPAdapter function can leverage an attention mask defined via the Uploader function. (If you used a still image as input, then keep the weighting very, very low, because otherwise it could stop the animation from happening.) I need (or not?) to use IPAdapter, as the result is pretty damn close to the original images.

IPAdapter: if you have to regenerate the subject or the background from scratch, it invariably loses too much likeness. Still experimenting with it, though. I had a ton of fun playing with it.
IPAdapters use generic models to generate similar images.

There are many regional conditioning solutions available, but as soon as you try to add LoRA data to the conditioning channels, the LoRA data seems to overrun the whole generation.

Do we need the ComfyUI plus extension? It seems to be working fine with the regular IPAdapter but not with the FaceID Plus adapter for me, only the regular FaceID preprocessor. I get OOM errors with Plus; any reason for this? Is it related to not having the ComfyUI plus extension? (I tried it but uninstalled it after the OOM errors, trying to find the problem.)

Set the desired mix strength (e.g., 0.92) in the "Apply Flux IPAdapter" node to control the influence of the IP-Adapter on the base model. Lowering the weight just makes the outfit less accurate. Just remember that for best results you should use a detailer after you upscale. It's amazing.

It works if it's the outfit on a colored background; however, the background color also heavily influences the image generated once put through IPAdapter. The order doesn't seem to matter that much either.

Gotta plug in the new IP adapter nodes; use IPAdapter Advanced (I'd watch the tutorials from the creator of IPAdapter first). The output window really does show you most problems, but you need to read each thing it says because some are dependent on others.

Before switching to ComfyUI I used the FaceSwapLab extension in A1111. This workflow isn't img2vid, as there isn't a ControlNet involved but an IPAdapter, which works differently.

AP Workflow 6.0 for ComfyUI, now with support for SD 1.5 and SDXL models.

Guys, I need your help. I just reinstalled my ComfyUI and now I have a serious problem: ComfyUI cannot see the IPAdapter nodes. I re-download and reboot, but nothing changes.

The Grounding DINO SAM detector is used to automatically find a "man" and a "woman" and generate masks.
For stronger application, you're better off using more sampling steps (so an initial image has time to form) and a lower starting control step, like 0.5. This is particularly useful for letting the initial image form before you apply the IP adapter, for example with a start step of 0.5.

This means it has fewer choices from the model db to make an image, and when it has fewer choices it's less likely to produce an aesthetic choice of chunks to blend together. I rarely go above 0.

I can load a batch of images for Img2Img, for example, and with the click of one button generate it separately for each image in the batch.

Would love feedback on whether this was helpful, and as usual, any feedback on how I can improve the knowledge and in particular how I explain it! I've also started a weekly two-minute tutorial series, so if there is anything you want covered, let me know.

Jul 29, 2024 · Hi, regardless of how accurately the clothes are produced, is there a way to accurately and consistently apply multiple articles of clothing to a character?

You must already have followed our instructions on how to install IP-Adapter V2, and it should all be working properly. AP Workflow now supports the Kohya Deep Shrink optimization via a dedicated function.

Short: I need to slide from one image to another, 4 times in this example.

Clicking the right arrow on the box changes whatever preset IPAdapter name was present on the workspace to undefined. I don't know where else to turn.

The goal of this tutorial is to give an overview of a method I'm working on to simplify the process of creating manga or comics.

But if you saved one of the stills/frames using the Save Image node, or even if you saved a generated CN image using Save Image, it would transport it over.

Jun 5, 2024 · IP-Adapter (Image Prompt Adapter) is a Stable Diffusion add-on for using images as prompts, similar to Midjourney and DALL·E 3.
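The start/end-step idea above can be sketched numerically. This is a simplified illustration of how a start_at/end_at pair maps onto sampling steps, not ComfyUI's actual scheduler code (which works on the noise schedule rather than raw step indices):

```python
# Simplified sketch (assumption: linear step progress, not a real noise
# schedule): the adapter is only active on steps whose fraction of overall
# progress falls inside [start_at, end_at).

def active_steps(total_steps: int, start_at: float, end_at: float) -> list[int]:
    """Return the step indices where the IP adapter would be applied."""
    steps = []
    for i in range(total_steps):
        progress = i / total_steps  # 0.0 at the first step, approaching 1.0
        if start_at <= progress < end_at:
            steps.append(i)
    return steps

# With 20 steps, start_at=0.5 and end_at=1.0, the adapter only touches the
# second half of sampling, letting the initial image form first.
second_half = active_steps(20, 0.5, 1.0)
```

With start_at=0.0 the adapter influences the whole generation; raising start_at trades adapter fidelity for a better-formed base image, which is exactly the trade-off the snippets above describe.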
My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.

But how do you take a sequence of reference images for an IP Adapter, let's say 10 pictures, and apply them to a sequence of input pictures, let's say one sequence of 20 images? Ah, never mind, found it.

Load the base model using the "UNETLoader" node and connect its output to the "Apply Flux IPAdapter" node. AP Workflow now supports the Perp-Neg optimization via a dedicated function.

I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes with area composition ones.

Hey, I'm using the following workflow that includes the IPAdapterApply node, from the ComfyUI reference implementation for IPAdapter models. Ideally the references wouldn't be so literal spatially. TiledIPAdapter is no longer needed: turn on "unfold_batch" and use the regular KSampler; it should give similar results.

As of the writing of this guide there are 2 Clipvision models that IPAdapter uses: a 1.5 and an SDXL one. There are IPAdapter models for each of 1.5 and SDXL, which use either of the Clipvision models; you have to make sure you pair the correct Clipvision with the correct IPAdapter model. The latter is used by the Face Cloner, the Face Swapper, and the IPAdapter functions.

Third, you can also use IPAdapter Face or ReActor to improve your faces.

As before, these are created in ComfyUI using:
• AnimateDiff-Evolved nodes
• IPAdapter Plus for some shots
• Advanced ControlNet to apply the in-painting CN
• KJNodes from u/Kijai, helpful for mask operations (grow/shrink)
Animated masks created in After Effects. This is where things can get confusing.

The original implementation makes use of a 4-step Lightning UNet.
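One way to answer the 10-references-over-20-frames question above is a simple nearest-reference mapping. The function below is an illustrative sketch (the name `ref_index_for_frame` is hypothetical, not an actual IPAdapter batch node):

```python
# Sketch: spread N reference images across M output frames by having each
# frame pick the reference at the matching relative position in the batch.

def ref_index_for_frame(frame: int, num_frames: int, num_refs: int) -> int:
    """Map an output frame index to the reference image that should drive it."""
    return min(num_refs - 1, frame * num_refs // num_frames)

# 10 references over 20 frames: each reference covers 2 consecutive frames.
assignment = [ref_index_for_frame(f, 20, 10) for f in range(20)]
```

In a real workflow the resulting index list would select images out of the reference batch before they reach the IPAdapter, so the adapter sees a per-frame condition instead of one static image.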
We'll walk through the process step by step, demonstrating how to use both ComfyUI and IPAdapter effectively.

raise Exception('IPAdapter: InsightFace is not installed! Install the missing dependencies if you wish to use FaceID models.')

Dec 10, 2023 · [IPAdapter] The maintainer of 'IPAdapter-ComfyUI' has decided to collaborate on development with the 'ComfyUI IPAdapter plus' repository.

I don't think the generation info in ComfyUI gets saved with the video files.

Second, you will need the Detailer SEGS or Face Detailer nodes from ComfyUI-Impact Pack.

From what I see in the ControlNet and T2I-Adapter examples, this allows me to set both a character pose and the position in the composition. Very informative, but I've been stuck for almost a week.

I noticed that the log shows what prompts are added and most of the parameters used, which I can then bring over to ComfyUI.

The Uploader function now allows you to upload both a source image and a reference image.

May 12, 2024 · In the examples directory you'll find some basic workflows.

While I'd personally like to generate rough sketches that I can use as a frame of reference when later drawing, we will work on creating full images that you could use to create entire working pages.

Most everything in the model path should be chained (FreeU, LoRAs, IPAdapter, RescaleCFG, etc.). Just replace that one and it should work the same.

T2I-Adapters are used the same way as ControlNets in ComfyUI: using the ControlNetLoader node.

To the OP, I would say training a LoRA would be most effective, if you can spare the time and effort.
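The exception above is raised because FaceID models need the optional insightface package, which is not installed with ComfyUI by default. A quick, generic way to check whether it is importable before loading a FaceID model (a standalone sketch, not part of the node pack):

```python
# Generic availability check for the optional insightface dependency.
# If this prints False, FaceID models will fail with the exception shown
# above until the missing package is installed.
try:
    import insightface  # noqa: F401  (import only to test availability)
    HAS_INSIGHTFACE = True
except ImportError:
    HAS_INSIGHTFACE = False

print("insightface available:", HAS_INSIGHTFACE)
```

The same try/except pattern is what node packs typically use internally so that the rest of their nodes keep working when an optional dependency is absent.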
AP Workflow for ComfyUI now supports SD 1.5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc.

IP-adapter-plus-face_sdxl is not that good for getting a similar realistic face, but it's really great if you want to change the domain.

For some workflow examples, and to see what ComfyUI can do, you can check out the ComfyUI Examples. Please share your tips, tricks, and workflows for using this software to create your AI art.

You can plug the IPAdapter model in there, along with the CLIP Vision and image inputs. It looks freaking amazing! Anyhow, here is a screenshot and the .json of the file I just used.

I made this using the following workflow, with two images as a starting point from the ComfyUI IPAdapter node repository. This is something I have been chasing for a while. IPAdapter with attention masks is a nice example of the kind of tutorial I'm looking for.

In this episode, we focus on using ComfyUI and IPAdapter to apply articles of clothing to characters using up to three reference images. Create a ControlNet pose image with all characters in the 2:1 aspect ratio.

Now you see a red node for "IPAdapterApply". SD 1.5 and SDXL don't mix, unless a guide says otherwise. You can adjust the "control weight" slider downward for less impact, but upward tends to distort faces. Double-check that you are using the right combination of models, for example to generate an image from an image in a similar way.

I added the nodes that apply the model, and some that enable you to replicate Fooocus' fill for inpaint and outpaint modes. Is it the right way of doing this? Yes. Also, if this is new and exciting to you, feel free to post.
The IPAdapter models are very powerful for image-to-image conditioning. The subject, or even just the style, of the reference image(s) can be easily transferred to a generation. You can use it to copy the style, composition, or a face from the reference image. This method offers precision and customization, allowing you to achieve impressive results easily.

That was the reason why I preferred it over the ReActor extension in A1111.

I was able to just replace it with the new "IPAdapter Advanced" node as a drop-in replacement, and it worked.

One thing I'm definitely noticing (with a ControlNet workflow) is that if the reference image has a prominent feature on the left side (for example), it wants to recreate that image ON THE LEFT SIDE. I am trying to keep consistency when it comes to generating images based on a specific subject's face.

Welcome to the "Ultimate IPAdapter Guide," where we dive into the all-new IPAdapter ComfyUI extension Version 2 and its simplified installation process.

You could also increase the start step, or decrease the end step, to only apply the IP adapter during part of the image generation.

If you have ComfyUI_IPAdapter_plus by author cubiq installed (you can check by going to Manager -> Custom nodes manager -> search comfy_IPAdapter_plus), double-click on the back grid and search for "IP Adapter Apply" with the spaces.

Here are the ControlNet settings, as an example. Does anyone have a tutorial for doing regional sampling + regional IP-Adapter in the same ComfyUI workflow? For example, I want to create an image such as "a girl (with face-swap using this picture) in the top left, a boy (with face-swap using another picture) in the bottom right, standing in a large field."

Dec 7, 2023 · IPAdapter Models.
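Conceptually, IP-Adapter adds a decoupled cross-attention branch for the image prompt and scales it by the weight before summing it with the text branch. A simplified numpy sketch of that idea (an illustration of the mechanism, not the real implementation):

```python
# Simplified sketch of decoupled cross-attention: the image-prompt branch is
# scaled by the weight and added to the text branch, so weight=0 disables the
# image prompt entirely and higher weights strengthen its influence.
import numpy as np

def combine_attention(text_attn: np.ndarray, image_attn: np.ndarray, weight: float) -> np.ndarray:
    """Blend text and image cross-attention outputs by the IP-Adapter weight."""
    return text_attn + weight * image_attn

text_out = np.ones((4, 8))        # stand-in for the text cross-attention output
image_out = np.full((4, 8), 2.0)  # stand-in for the image cross-attention output
combined = combine_attention(text_out, image_out, weight=0.5)
```

This is why lowering the weight (as suggested in several snippets above) softens the reference image's influence instead of switching it off: the image branch still contributes, just less.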
This node builds upon the capabilities of IPAdapterAdvanced, offering a wide range of parameters that allow you to fine-tune the behavior of the model.

Promptless inpaint/outpaint in ComfyUI made easier with canvas (IPAdapter + CN inpaint + reference only). I was waiting for this.

In the specific example here, I generate a 1950s-style portrait of a random elderly couple by feeding in a photo like this as the style input and a photo like this as the source of characters and faces.

'IPAdapter-ComfyUI' has been moved to the legacy channel.

IPAdapter needs square images as the condition, so the above tile nodes make it possible to upscale non-square aspect ratios.

Mar 25, 2024 · I've found that a direct replacement for Apply IPAdapter would be the IPAdapter Advanced. I'm itching to read the documentation about the new nodes! For now, I will try to download the example workflows and experiment for myself. Clicking on the ipadapter_file doesn't show a list of the various models.

This is the input image that will be used in this example. Here is how you use the depth T2I-Adapter; here is how you use the depth ControlNet.

I've done my best to consolidate my learnings on IPAdapter. AP Workflow 6.3.

ComfyUI only has ReActor, so I was hoping the dev would add it too. Anyone have a good workflow for inpainting parts of characters for better consistency using the newer IPAdapter models?
I have an idea for a comic and would like to generate a base character with a predetermined appearance, including outfit, and then use IPAdapter to inpaint and correct some of the inconsistency I get from generating the same character in different poses and contexts.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

I'd never ignore a post I saw asking for help :D So when I refer to denoising it, I am referring to the fact that the lower-resolution faces caused by using ReActor need to be denoised if you want to add more resolution. This requires passing through a sampler with denoising; the higher the denoising is on this sampler, the more it will change and mess the face back up again.

Jun 25, 2024 · IPAdapter Mad Scientist: IPAdapterMS, also known as IPAdapter Mad Scientist, is an advanced node designed to provide extensive control and customization over image processing tasks.

In other words, I'd like to know more about new custom nodes or inventive ways of using the more popular ones. I wonder if there are any workflows for ComfyUI that combine Ultimate SD Upscale + controlnet_tile + IP-Adapter. I'm not really that familiar with ComfyUI.
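The denoise trade-off described above can be sketched as follows. This is a simplified model of how img2img denoise is commonly implemented (an assumption for illustration; real samplers noise the latent to a point on the noise schedule rather than counting raw steps):

```python
# Sketch: with denoise strength d and N total steps, the input latent is
# noised to an intermediate level and only the last round(d * N) steps run.
# Low denoise re-details the face; high denoise regenerates (and changes) it.

def img2img_schedule(total_steps: int, denoise: float) -> list[int]:
    """Return which sampler steps actually run for a given denoise strength."""
    start = total_steps - int(round(total_steps * denoise))
    return list(range(start, total_steps))

# denoise=0.3 over 20 steps: only the final 6 steps run, adding resolution
# to a ReActor face without fully repainting it.
gentle_pass = img2img_schedule(20, 0.3)
```

This is why the snippet above warns that raising the denoise "messes the face back up again": more steps run from a noisier starting point, so more of the original face is replaced.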
If it's not showing, check your custom nodes folder for any other custom nodes with "ipadapter" in the name, in case there is more than one. ControlNets use pretrained models for specific purposes. The new version has a node that is exactly the same as the old Apply IP-Adapter.

Apr 26, 2024 · Workflow. Check my ComfyUI Advanced Understanding videos on YouTube, for example part 1 and part 2. Thanks for posting this, the consistency is great.

ControlNet and IPAdapter restrict the model db to items which match the ControlNet or IPAdapter. Then I created two more sets of nodes, from Load Images to the IPAdapters, and adjusted the masks so that they would be part of a specific section in the whole image.

In making an animation, ControlNet works best if you have an animated source. Set the desired mix strength.

I made a few comparisons with the official Gradio demo using the same model in ComfyUI and I can't see any noticeable difference, meaning that this code should be faithful to the original.

Feb 11, 2024 · I tried "IPAdapter + ControlNet" with ComfyUI, so here is a summary. 1. ComfyUI_IPAdapter_plus: "ComfyUI_IPAdapter_plus" is the ComfyUI reference implementation of the IPAdapter models. It is memory-efficient and fast.
・IPAdapter + ControlNet: IPAdapter can be combined with ControlNet.
・IPAdapter Face: for faces.

I downloaded the example IPAdapter workflow from GitHub and rearranged it a little bit to make it easier to look at, so I can see what the heck is going on. Users currently using 'IPAdapter-ComfyUI' are recommended to transition to installing 'ComfyUI IPAdapter plus'.

It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery. I just dragged the inputs and outputs from the red box to the IPAdapter Advanced one, deleted the red one, and it worked! The IPAdapter is certainly not the only way, but it is one of the most effective and efficient ways to achieve this composition.
True, they have their limits, but pretty much every technique and model does. Multiple characters from separate LoRAs interacting with each other.

PS: I've tried to pass the IPAdapter into the model for the LoRA, and then plug it into the KSampler.

It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask), use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part), and reduce the "weight" in the "apply IP adapter" box.

You can also specifically save the workflow from the floating ComfyUI menu. That extension already had a tab with this feature, and it made a big difference in output. If you figure out anything that works, and does it automatically, please let me know!

Trying to use AttentionCouple with IP-Adapter: I'm using the AttentionCouple extension to have prompts only apply to different regions of the image. I also collected the individual results and references on this Notion page.

The AP Workflow now supports u/cubiq's new IPAdapter plus v2 nodes. A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make.

Note that this example uses the DiffControlNetLoader node because the controlnet used is a diff controlnet. Really need some help here.

If I understand correctly how Ultimate SD Upscale + controlnet_tile works, they make an upscale, divide the upscaled image into tiles, and then img2img through all the tiles. However, there are IPAdapter models for each of 1.5 and SDXL. Combining the two can be used to make, from a picture, a similar picture in a specific pose.

ComfyUI IPAdapter Plus; ComfyUI InstantID (Native); ComfyUI Essentials; ComfyUI FaceAnalysis. Not to mention the documentation and video tutorials.
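The tiled upscaling described above needs square crops if each tile is also fed to the IPAdapter as a condition (an earlier snippet notes that IPAdapter wants square images). A plain-numpy sketch of the tiling step only, with no model code:

```python
# Sketch: cut a non-square upscaled image into square tiles so each tile can
# serve as a square IPAdapter condition during tiled img2img. Partial tiles
# at the edges are dropped for simplicity (a real node would pad or overlap).
import numpy as np

def square_tiles(image: np.ndarray, tile: int) -> list[np.ndarray]:
    """Cut an H x W x C image into tile x tile squares, row-major order."""
    h, w = image.shape[:2]
    return [
        image[y:y + tile, x:x + tile]
        for y in range(0, h - tile + 1, tile)
        for x in range(0, w - tile + 1, tile)
    ]

img = np.zeros((512, 768, 3), dtype=np.uint8)  # non-square example input
tiles = square_tiles(img, 256)                 # 2 rows x 3 cols of squares
```

Real tiled upscalers usually overlap the tiles and blend the seams; this sketch only shows why the square-condition requirement and non-square aspect ratios can coexist.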
The only way to keep the code open and free is by sponsoring its development.

Look into Area Composition (comes with ComfyUI by default), GLIGEN (an alternative area composition), and IPAdapter (a custom node on GitHub, available for manual or ComfyUI Manager installation).

LoRA + img2img or ControlNet for composition, shape, and color, plus IPAdapter (Face if you only want the face, or Plus if you want the whole composition of the source image).

Mar 24, 2024 · The "IP Adapter apply noise input" in ComfyUI was replaced with the IPAdapter Advanced node. This new node includes the clip_vision input, which seems to be the best replacement for the functionality that was previously provided by the "apply noise input" feature. It has the same inputs and outputs.

I get errored out every time, without fail. So, anyway, some of the things I noted that might be useful: get all the LoRAs and IP adapters from the GitHub page and put them in the correct folders in ComfyUI; make sure you have CLIP Vision models (I only have the H one at this time); I added the IPAdapter Advanced node (which is the replacement for Apply IPAdapter); then I had to load an individual IP adapter model.

The IPAdapter function is now part of the main pipeline and not a branch on its own.

I'm using PhotoMaker since it seemed like the right go-to over IPAdapter because of how much closer the resemblance on subjects is; however, faces are still far from looking like the actual original subject. Unfortunately your examples didn't work.

In the SD 1.5 workflow, is the Keyframe IPAdapter currently connected? I have 4 reference images (4 real, different photos) that I want to transform through AnimateDiff AND apply each of them onto exact keyframes (e.g., 0, 33, 99, 112).

FWIW, why do people do this on here so frequently? Something new comes out and is not easy to find, but you refer to it by half a name with no link or explanation.
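The keyframe idea above (pin each reference at an exact frame and hold it until the next keyframe) can be sketched like this. `ref_for_frame` is a hypothetical helper for illustration, not an AnimateDiff or IPAdapter node:

```python
# Sketch: hold each reference image from its keyframe until the next one,
# so 4 references pinned at frames 0, 33, 99 and 112 cover the whole clip.
import bisect

def ref_for_frame(frame: int, keyframes: list[int]) -> int:
    """Return the index of the reference whose keyframe segment contains `frame`."""
    return max(0, bisect.bisect_right(keyframes, frame) - 1)

keyframes = [0, 33, 99, 112]
# frames 0-32 use ref 0, 33-98 use ref 1, 99-111 use ref 2, 112+ use ref 3
schedule = [ref_for_frame(f, keyframes) for f in range(120)]
```

A smoother variant would crossfade adapter weights between neighboring references around each keyframe instead of switching hard, which is closer to what batched IPAdapter workflows do.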
Aug 26, 2024 · Connect the output of the "Flux Load IPAdapter" node to the "Apply Flux IPAdapter" node. In case anyone else wants to know: it's a feature added to the "ComfyUI IPAdapter plus" node on Nov. 29.

Installing ComfyUI.
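The wiring described above can be written down as an API-format workflow fragment. Node class names, input names, and model filenames here are illustrative assumptions, not verified identifiers from the Flux IPAdapter node pack; only the connection pattern (links as [source_node_id, output_index] pairs) matches ComfyUI's API format:

```python
# Hypothetical API-format sketch of the wiring: UNETLoader and the IPAdapter
# loader both feed the apply node, with a strength value controlling the mix
# (0.92 is the illustrative value mentioned in the text above).
workflow = {
    "1": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "flux-model.safetensors"}},       # assumed filename
    "2": {"class_type": "FluxLoadIPAdapter",                        # assumed class name
          "inputs": {"ipadapter": "flux-ip-adapter.safetensors"}},  # assumed filename
    "3": {"class_type": "ApplyFluxIPAdapter",                       # assumed class name
          "inputs": {
              "model": ["1", 0],       # UNETLoader output -> apply node
              "ip_adapter": ["2", 0],  # IPAdapter loader output -> apply node
              "strength": 0.92,        # mix strength
          }},
}
```

A dict like this would be POSTed to ComfyUI's /prompt endpoint; the point of the sketch is only to show how the two connections described in the text are expressed as node links.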
