ComfyUI Course Ep 40: TeaCache – Speed Up Your Workflows with Smart Caching
Boost your creative output and save valuable time with TeaCache for ComfyUI. Learn how smart caching skips redundant steps, speeds up repetitive workflows, and gives you control over performance, so you can focus on results, not waiting.
Related Certification: Certification in Accelerating Workflows with Smart Caching in ComfyUI

What You Will Learn
- Understand smart caching and TeaCache (TCH) concepts
- Install and place TCH after loader nodes in ComfyUI
- Configure Relative L1 Threshold and Max Skip Steps
- Balance speed versus quality and tune settings
- Apply TCH to batch, iterative, and I2V workflows
Study Guide
Introduction: Why Smart Caching Transforms Your ComfyUI Workflow
Imagine working on an intricate piece of generative art, tweaking a prompt, and waiting, again, for your machine to grind through the same computations it did moments before. Time ticks. Frustration builds. Enter smart caching: the antidote to wasted cycles. This course is a comprehensive guide to mastering TeaCache (TCH), the custom node for ComfyUI that takes the grunt work out of your creative process by remembering what's already been done and reusing it when possible.
We’ll start with the foundational concepts of caching and its analogy to making tea, then move through installation, integration, configuration, and optimization. Along the way, you’ll see real-world examples, practical steps, and actionable tips for balancing speed with quality. By the end, you’ll have the confidence not only to install and use TCH, but to tune it for your most demanding projects, saving hours while maintaining control over your results.
Understanding Caching in ComfyUI: The Tea Analogy
Let’s break caching down to its essence. Think about making tea. The first cup requires boiling water, steeping, and waiting. If you want a second cup and the teabag is still fresh, you don’t start from scratch; you use the same teabag, maybe even the same hot water. That’s what TCH does for your ComfyUI workflows: if nothing important has changed, it skips redundant steps and reuses previous results.
Example 1: You run a text-to-image workflow in ComfyUI using a specific model and seed. You decide to tweak the prompt slightly but leave the model, seed, and settings untouched. Without caching, ComfyUI would recompute everything. With TCH, if your change is minor enough, it reuses the earlier results and only computes what’s necessary.
Example 2: You’re batch-generating a series of images, all with the same model and base prompt but different seeds. TCH detects that the model hasn’t changed, so it skips reloading and reprocessing the model, saving you time on every iteration.
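The core idea behind both examples can be sketched in a few lines of Python. This is an illustrative memoization sketch only; the function and variable names are invented for the example and do not come from TeaCache's actual source:

```python
# Illustrative sketch of result caching: outputs are keyed by the inputs
# that produced them, so an unchanged input skips the expensive work.
cache = {}
calls = {"count": 0}

def expensive_step(model, seed, prompt):
    """Stand-in for a heavy computation such as model loading or sampling."""
    calls["count"] += 1
    return f"image({model}, {seed}, {prompt})"

def cached_step(model, seed, prompt):
    key = (model, seed, prompt)
    if key not in cache:                 # only compute on a cache miss
        cache[key] = expensive_step(model, seed, prompt)
    return cache[key]

# Same model, seed, and prompt: the second call reuses the cached result.
a = cached_step("flux", 42, "a cup of tea")
b = cached_step("flux", 42, "a cup of tea")
assert a == b and calls["count"] == 1

# A changed seed is a different key, so the step recomputes.
cached_step("flux", 43, "a cup of tea")
assert calls["count"] == 2
```

The real node adds a tolerance on top of this exact-match logic, which the configuration sections below cover.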
What Is TCH (TeaCache) and Why Should You Care?
TCH (TeaCache) is a custom smart caching node for ComfyUI. It acts as the memory of your workflow, storing the outputs of nodes and checking whether the inputs have changed significantly since the last run. If not, it simply reuses the stored output, bypassing redundant calculations. The value is clear: faster workflows, less wear on your hardware, and more time for creative iteration.
Key Benefits:
- Speed: Dramatically reduces processing time, especially in large and complex workflows.
- Efficiency: Avoids unnecessary recomputation by leveraging already existing results.
- Control: Offers settings to fine-tune when to reuse outputs versus when to recompute.
Practical Example:
- You’re building a video sequence from images using a diffusion model. Each frame requires heavy computation. With TCH, unchanged parts of the workflow skip redundant processing, making the whole sequence generation faster.
Another Example:
- You're working on a style transfer workflow, where you repeatedly adjust minor parameters. TCH ensures only the changed steps are recomputed, not the entire pipeline.
How Caching Works: The Technical Core
Caching, at its core, means storing the result of a computation and retrieving it when the same computation is needed again. In ComfyUI, workflows are composed of nodes; each node does a specific job, like loading a model or sampling an image. TCH observes these nodes. If the input to a node is the same as before (within a certain tolerance), TCH fetches the cached output instead of rerunning the computation.
Technical Example 1: Suppose you load a UNet model and run it through a sampler. TCH stores the output after each significant step. If you rerun the workflow with the same model, TCH checks if the input (model parameters, seed, etc.) has changed beyond a set threshold. If not, it uses the stored result.
Technical Example 2: You use a LoRA loader to fine-tune a diffusion model. TCH caches the fine-tuned model. If you rerun with the same LoRA weights, TCH skips the reloading and fine-tuning steps, instantly providing the cached output.
Best Practice: Caching works best when your workflow has repetitive steps that don't change frequently. For highly dynamic workflows where every input changes, caching may offer limited benefits.
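Unlike the exact-match cache sketched earlier, TCH tolerates small input differences. Here is a hedged sketch of that tolerance-based reuse; the class, the `close_enough` check, and the scalar inputs are illustrative assumptions, not TCH's real code:

```python
# Illustrative sketch: reuse a cached output when the new input is
# "close enough" to the one that produced it.
def close_enough(prev, curr, tolerance):
    """Treat inputs as unchanged when they differ by less than `tolerance`."""
    return abs(curr - prev) < tolerance

class NodeCache:
    def __init__(self, tolerance=0.05):
        self.tolerance = tolerance
        self.prev_input = None
        self.prev_output = None

    def run(self, x, compute):
        if self.prev_input is not None and close_enough(self.prev_input, x, self.tolerance):
            return self.prev_output              # reuse the cached result
        self.prev_input, self.prev_output = x, compute(x)
        return self.prev_output

node = NodeCache(tolerance=0.05)
first = node.run(1.00, lambda x: x * 2)   # miss: computes 2.0
reused = node.run(1.01, lambda x: x * 2)  # within tolerance: reuses 2.0
fresh = node.run(2.00, lambda x: x * 2)   # outside tolerance: recomputes 4.0
assert first == 2.0 and reused == 2.0 and fresh == 4.0
```

The tolerance here plays the same role as the Relative L1 Threshold described later: tighter means more recomputation, looser means more reuse.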
Installing the TCH Custom Node in ComfyUI
Bringing TCH into your workflow is simple and doesn’t require technical wizardry. Here’s how you do it:
- Open ComfyUI Manager: Launch your ComfyUI interface. Look for the custom nodes manager in the main menu.
- Search for "TCH": Type “TCH” in the search bar. You should see a result called “Comfy UI TCH.”
- Install: Click the install button next to “Comfy UI TCH.”
- Restart ComfyUI: Once the installation is complete, a prompt will ask you to restart ComfyUI. Do this to activate the node.
Example A: You’re running ComfyUI on your laptop. You open the manager, install TCH in less than a minute, restart, and it’s ready to use.
Example B: On a shared workstation, you install TCH from the manager, confirm with your team that a restart is okay, and everyone benefits from caching as soon as you’re back online.
Tip: Always restart ComfyUI after installation to ensure new nodes are properly loaded. If you don’t see TCH in your node list, double-check the installation steps.
Integrating TCH Into Your Workflow: Where Does It Go?
TCH isn't just a background process; it’s a node you place directly in your workflow. The correct placement is essential for optimal benefit.
General Rule:
TCH should always be placed immediately after the loader node, regardless of whether you’re using a UNet loader, diffusion model loader, or LoRA loader. The loader node’s output feeds into the TCH node, and the TCH output then connects to the next processing node, typically the K Sampler.
Example 1: Standard Text-to-Image Workflow
- UNet Loader → TCH Node → K Sampler → Output
Example 2: LoRA Fine-Tuning Workflow
- LoRA Loader → TCH Node → Diffusion Steps → Output
Best Practice: Think of TCH as the “gatekeeper” between model loading and computation-heavy steps. Placing it right after the loader ensures that all subsequent processing can benefit from caching.
Which Models Does TCH Support?
At the time of writing, TCH has been tested with specific models. The main ones are Flux and the Wan model. Before deploying TCH in your production workflows, always check the custom node’s GitHub page for the most recent list of supported models.
Example 1: You’re using the Flux model for a generative art project. TCH has been confirmed to work; simply install and enjoy the speedup.
Example 2: You’re experimenting with the Wan model for text-to-video conversion. TCH supports this model, making your iterative process much smoother.
Tip: For experimental or less common models, test TCH on a small batch first. Monitor for correct outputs and check the GitHub page for updates on compatibility.
How Much Faster? Real-World Performance Gains
TCH can dramatically accelerate your workflows. The numbers speak for themselves:
- Example 1: In a typical text-to-image workflow, the processing time drops from 26 seconds without TCH to 14 seconds with TCH. That’s nearly a 50% reduction.
- Example 2: For a complex, resource-intensive pipeline (such as a multi-step video generation workflow), processing time is slashed from over 400 seconds to just 227 seconds. That’s a savings of nearly three minutes per run.
Practical Application:
- Batch processing hundreds of prompts for a generative art collection? Multiply those savings by 100 or more; TCH can save you hours.
- Rapid prototyping, where you tweak settings and rerun workflows repeatedly, becomes far less tedious and time-consuming.
Best Practice: The larger and more repetitive your workflow, the bigger your gains with TCH. For small, one-off runs, the difference may be negligible, but for iterative or batch tasks, the impact is profound.
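The batch math is easy to verify. A quick sanity check using the per-run figures quoted in this section:

```python
# Back-of-the-envelope check on the numbers quoted above.
per_run_without = 26   # seconds: typical text-to-image run without TCH
per_run_with = 14      # seconds: the same run with TCH

saved_per_run = per_run_without - per_run_with
batch_of_100_saving_minutes = saved_per_run * 100 / 60

assert saved_per_run == 12                     # 12 seconds saved per run
assert batch_of_100_saving_minutes == 20.0     # 20 minutes saved per 100 runs
```

For the heavier pipeline (over 400 seconds down to 227), the same arithmetic scales to hours over a large batch.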
Performance vs. Quality: The Trade-Off Explained
With great speed comes a subtle trade-off: in some cases, image quality may decrease slightly when using TCH. Here’s why.
When TCH reuses cached outputs, it assumes that the inputs haven’t changed enough to require a full recompute. In rare cases, especially when minor input changes have a larger-than-expected impact, this can lead to outputs that are not identical in quality or detail to what a full computation would produce.
Example 1: You generate an image with a very specific prompt. You then change a single word. TCH may decide the change is minor and reuse cached data, but the new word could have a bigger influence than expected, resulting in a less accurate image.
Example 2: In a video generation workflow, you adjust the seed value by a small amount. TCH’s threshold setting determines whether to rerun the computation or rely on the cache. Sometimes, the subtle change escapes detection, and the output doesn’t fully reflect your tweak.
Best Practice: For critical outputs where you need pixel-perfect accuracy, consider lowering TCH’s sensitivity threshold to force more frequent recomputation. For drafts, explorations, or batch runs where speed trumps perfection, a more relaxed threshold can save significant time.
Configuring TCH: The Two Key Settings
TCH gives you granular control over when and how it uses cached data. The two main parameters are Relative L1 Threshold and Max Skip Steps.
Relative L1 Threshold: Sensitivity to Change
The Relative L1 Threshold determines how sensitive TCH is to changes in input data. Think of it as the “pickiness” setting.
- Low Threshold: TCH is hyper-vigilant. Even tiny changes in input force a recomputation. This ensures maximum fidelity but reduces speed gains.
- High Threshold: TCH is easygoing. It allows minor changes to pass unnoticed and reuses cached results, maximizing speed at the potential cost of missing subtle differences.
Example 1: You’re generating a series of images with slightly different prompts. A low threshold forces TCH to process each as new, retaining quality but slowing down the workflow.
Example 2: You’re batch processing images with only slight parameter variations. A higher threshold lets TCH reuse prior results, drastically reducing runtime.
Best Practice: Start with the default setting and adjust based on your needs. For experimental runs where quantity matters, increase the threshold. For production-quality images where every detail counts, decrease it.
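A relative L1 distance is commonly computed as the sum of absolute differences, normalized by the magnitude of the reference. The sketch below shows how such a distance plus a threshold could drive the reuse decision; it is an illustrative formula under that assumption, and the exact computation TCH performs may differ:

```python
# Illustrative relative-L1 reuse decision, assuming the common definition:
# sum of absolute differences divided by the reference magnitude.
def relative_l1(prev, curr):
    num = sum(abs(c - p) for p, c in zip(prev, curr))
    den = sum(abs(p) for p in prev)
    return num / den

def should_reuse(prev, curr, threshold):
    """Reuse the cache only when the inputs drifted less than the threshold."""
    return relative_l1(prev, curr) < threshold

prev = [1.0, 2.0, 3.0]
curr = [1.0, 2.1, 3.0]   # a small nudge: distance = 0.1 / 6.0 ≈ 0.017

assert should_reuse(prev, curr, threshold=0.05)       # relaxed: reuse cache
assert not should_reuse(prev, curr, threshold=0.01)   # strict: recompute
```

The same input drift is a cache hit at one threshold and a miss at another, which is exactly the speed-versus-fidelity dial described above.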
Max Skip Steps: Limiting Consecutive Caching
Max Skip Steps controls how many times in a row TCH is allowed to skip recomputation by reusing cached results.
- Low Value: TCH will limit skipping, forcing more frequent recomputation. Good for workflows where changes accumulate over steps and accuracy is key.
- High Value: TCH skips more frequently, optimizing for speed. Useful in stable, repetitive workflows.
Default Recommendation: The source recommends a default of three for all models. This strikes a balance between speed and caution.
Example 1: You set Max Skip Steps to 1 in a workflow where every single detail matters. TCH will only skip one step before recomputing, ensuring accuracy.
Example 2: In a batch processing scenario, you set Max Skip Steps to 5. TCH skips five steps before checking for new changes, greatly accelerating the process.
Tip: If you notice TCH is not saving as much time as expected, try increasing Max Skip Steps. If your images start losing fidelity, decrease it.
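A cap on consecutive skips might be implemented along these lines. This is illustrative logic, not TCH's actual source: even when inputs look unchanged, after `max_skip` skips in a row the node recomputes anyway:

```python
# Illustrative Max Skip Steps logic: bound how many cache hits may occur
# in a row before a recompute is forced.
class SkipLimiter:
    def __init__(self, max_skip=3):
        self.max_skip = max_skip
        self.skipped = 0

    def decide(self, inputs_unchanged):
        if inputs_unchanged and self.skipped < self.max_skip:
            self.skipped += 1
            return "skip"        # reuse the cached output
        self.skipped = 0         # counter resets after any recompute
        return "compute"

limiter = SkipLimiter(max_skip=3)
decisions = [limiter.decide(True) for _ in range(5)]

# Three skips are allowed, then a forced recompute resets the counter.
assert decisions == ["skip", "skip", "skip", "compute", "skip"]
```

This shows why a forced recompute every few steps keeps small input drifts from accumulating into visible quality loss.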
Practical Applications: Where TCH Shines
TCH is most beneficial in scenarios where workflows are heavy, repetitive, or where only minor tweaks are made between runs. Let’s explore some real-life use cases.
Example 1: Iterative Artwork Creation
You’re designing a series of AI-generated posters. Each iteration involves tiny prompt adjustments but the same model and seed. TCH ensures you’re not waiting for redundant computations, letting you focus on creativity.
Example 2: Batch Video Generation
You’re converting a stack of images to video clips using the same model. TCH caches expensive model loading and transformation steps, shaving hours off your workflow.
Example 3: Parameter Sweeps
You run experiments, changing only one parameter at a time to find the best style. TCH handles repetitive calculations, freeing up resources for new explorations.
Tip: Always review outputs after major parameter changes. If something looks off, adjust your threshold or skip settings accordingly.
Potential Drawbacks and How to Avoid Them
No tool is perfect. While TCH offers big speed boosts, there are situations where it might not be ideal:
- Loss of Subtlety: If your workflow is highly sensitive to input changes, even a seemingly minor tweak can have a significant impact on output quality. TCH might miss this if the threshold is too high.
- Incompatible Models: TCH currently works with specific models. Using unsupported models may lead to errors or unexpected results.
- Debugging Complexity: If you’re troubleshooting a workflow, cached results can sometimes hide changes you’ve made, making it harder to pinpoint issues.
Best Practices:
- For critical, high-quality outputs, lower the sensitivity threshold or temporarily disable TCH to ensure all changes are computed from scratch.
- Check the TCH GitHub page regularly for updates on supported models and recommended settings.
- When in doubt, clear the cache or rerun the workflow with stricter settings.
Exploring TCH Settings in Detail: Your Tuning Guide
Let’s dive deeper into the two main configuration options, Relative L1 Threshold and Max Skip Steps, and how to fine-tune them for your needs.
- Relative L1 Threshold: This setting controls how much change in input data is tolerated before TCH considers the output “stale” and recomputes it.
- Best for Speed: Set higher for exploratory or batch runs.
- Best for Quality: Set lower for final renders or when precise details matter.
- Max Skip Steps: Choose a value that fits your workflow’s stability. The default of three is a safe starting point, but don’t be afraid to experiment.
- Tip: For workflows with lots of repeating steps (like animations or frame interpolations), a higher value delivers more savings.
- Tip: For rapid prototyping, maximize skip steps. For high fidelity, minimize.
Example Advanced Tuning:
- You’re batch-processing hundreds of images overnight. Set the threshold high and skip steps to five or more. In the morning, check a random sample for quality. If all is well, you’ve saved massive time.
- For a gallery-worthy final image, lower the threshold and skip steps to one. Accept the longer runtime for the highest quality.
Checking Supported Models and Staying Updated
Because TCH is under active development, supported models and recommended settings may evolve. Here’s how to stay in the loop:
- Visit the TCH GitHub Page: The latest list of supported models, optimal settings, and performance benchmarks are always posted there. Make it a habit to check before starting a major workflow.
- Join Community Forums: User experiences, troubleshooting tips, and advanced use cases are often discussed in ComfyUI forums and Discord channels.
Example: You’re about to try TCH with a new diffusion model. You check the GitHub page, see that it’s not yet supported, and avoid a wasted session.
Tip: If you encounter issues or want to request support for a new model, open an issue on the TCH GitHub or reach out in community channels.
Glossary: Mastering the Terminology
To work effectively with TCH and ComfyUI, let’s clarify some key terms:
- ComfyUI: A node-based interface for building generative AI workflows, primarily with diffusion models.
- Custom Nodes Manager: The in-app tool for finding, installing, and managing add-on nodes like TCH.
- TCH (TeaCache): The smart caching node that speeds up workflows by remembering and reusing outputs.
- Caching: Storing results for reuse, avoiding redundant computation.
- Loader Node: Loads models (e.g., diffusion, UNet, LoRA) into your workflow.
- UNet Loader: Loads UNet models for the diffusion process.
- LoRA Loader: Loads LoRA (Low-Rank Adaptation) models, typically used for fine-tuning.
- K Sampler: The node responsible for producing the final output from your loaded model.
- Relative L1 Threshold: Controls how “picky” TCH is about input changes when deciding whether to reuse cached results.
- Max Skip Steps: Limits how many times TCH can skip recomputation in a row.
- Flux / Wan Model: Specific models tested and confirmed to work with TCH.
- Diffusion Model: The generative backbone of most ComfyUI workflows.
- Image to Video (I2V): Workflows that turn still images into video sequences.
Quiz Yourself: Test Your Understanding
1. What is the primary function of the TCH custom node in ComfyUI?
Answer: To speed up workflows by caching and reusing results from previously run nodes if the inputs haven’t changed significantly.
2. How do you install the TCH custom node?
Answer: Open the Custom Nodes Manager, search for “TCH,” install, and restart ComfyUI.
3. Where should the TCH node be placed in a workflow?
Answer: Immediately after the loader node (e.g., UNet Loader, LoRA Loader).
4. What’s the main trade-off when using TCH?
Answer: Potential decrease in output quality for some images, although often not noticeable.
5. Two loader types TCH works with?
Answer: UNet Loader (or diffusion model loader) and LoRA Loader.
6. What’s the benefit of TCH in large workflows?
Answer: Huge time savings by reducing processing duration.
7. What does the Relative L1 Threshold control?
Answer: Sensitivity to small changes in inputs, determining when to rerun a node.
8. What does Max Skip Steps control?
Answer: How many consecutive steps TCH is allowed to skip by using cached results.
9. Default recommended setting for Max Skip Steps?
Answer: Three for all supported models.
10. Two models tested with TCH?
Answer: Flux and Wan model.
Expert Tips and Best Practices for Power Users
1. Clear Your Cache Occasionally:
Long-running sessions or numerous workflow changes can clutter your cache. Periodically clear it to avoid stale data.
2. Document Your Settings:
When tuning TCH for a big project, note your threshold and skip values for future reference. This makes reproducing successful runs easier.
3. Batch Versus Single Runs:
For batch jobs, use relaxed TCH settings. For single, critical outputs, increase sensitivity and decrease skip steps.
4. Stay Connected:
Regularly visit the TCH GitHub and community forums for updates, bug fixes, and new compatibility announcements.
5. Mix with Other Nodes:
TCH works best when used strategically. Sometimes, you may want to bypass TCH for certain nodes or steps. Don’t be afraid to experiment.
Scenarios: When to Use and When to Be Cautious
Best Use Cases:
- Batch jobs, parameter sweeps, and iterative sessions where the model and most settings stay the same between runs.
Use Caution When:
- You need pixel-perfect final outputs, you’re debugging a workflow, or you’re running a model not yet listed as supported.
Conclusion: Unlocking the Power of Smart Caching in ComfyUI
You’ve now mapped out every corner of the TCH (TeaCache) node for ComfyUI. You understand what caching is, how TCH implements it, and where it fits in your workflow. You know how to install, place, and configure it, and you’re equipped to tune its settings for your specific needs. You’re aware of the benefits (dramatic speedups, reduced hardware strain, and more freedom to explore) and of the trade-offs, like potential minor quality loss.
Applying these skills, you’ll enter a new era of efficiency in your generative AI projects. You’ll iterate faster, experiment boldly, and deliver results without the drag of redundant computation. Keep exploring, keep tuning, and let TCH handle the heavy lifting; your creative process is about to get a whole lot smoother.
Frequently Asked Questions
This FAQ provides clear, practical answers to common and advanced questions about using the TeaCache (TCH) node in ComfyUI, focusing on how smart caching can accelerate workflows, how to install and set up TCH, recommended practices, troubleshooting, and maximizing the benefits while being aware of any trade-offs. Whether you’re just starting with ComfyUI or looking to optimize complex pipelines, you’ll find actionable insights here.
What is TCH in ComfyUI and how does it work?
TCH is a custom node for ComfyUI that adds smart caching to your workflows.
It stores the results of node executions and checks if the inputs have changed since the last run. If there’s no significant difference, TCH reuses the cached result instead of recomputing, which saves time and computing resources. This is especially useful in iterative workflows or when tweaking settings, as it avoids redundant calculations without sacrificing accuracy.
How do I install the Comfy UI TCH node?
To install the Comfy UI TCH node:
Open ComfyUI and navigate to the Manager, then to the Custom Nodes Manager. Search for “TCH” and find “Comfy UI TCH.” Click “Install.” After installation, you’ll see a prompt to restart ComfyUI. Click “Restart,” confirm, and wait for it to finish. Once restarted, the TCH node will appear in your installed nodes list.
Where should the TCH node be placed in a ComfyUI workflow?
The TCH node should be placed immediately after the loader node that provides your model input, such as a UNet Loader, Load Diffusion Model, or LoRA Loader. Route the output from the loader node into the TCH node, then connect the TCH node to the rest of your workflow. This ensures that TCH caches the right computations and delivers optimal efficiency.
Which models are supported by the TCH node?
TCH has been tested with Flux and Wan models.
For the most recent list of supported models and recommended configurations, check the custom node’s GitHub page. New model support may be added over time, so it’s useful to consult the official documentation when working with unfamiliar models.
Does using TCH affect the quality of the generated output?
Using TCH can sometimes lead to a small drop in output quality.
However, this is often not noticeable in typical use cases. The primary benefit is faster workflow execution. In most scenarios, the quality difference is minor, but for highly sensitive or critical applications, you may want to compare outputs with and without TCH.
How much faster can workflows be with TCH?
Workflows can be significantly faster when using TCH.
For example, a text-to-image workflow that usually takes 26 seconds may complete in 14 seconds with TCH. More complex workflows might see even greater improvements. The exact speedup depends on workflow complexity and which nodes are being cached. The more computationally intensive the steps, the greater the time savings.
What do the Relative L1 threshold and Max skip steps settings control in the TCH node?
Relative L1 threshold determines how sensitive TCH is to input changes. A low threshold means even small changes trigger a new computation, while a higher threshold allows more tolerance and reuses cached results more often.
Max skip steps sets how many consecutive steps TCH can skip by reusing cached results. If set too low, you may not realize full speed benefits.
Where can I find recommended settings for TCH for different models?
Recommended settings are typically listed on the TCH node’s GitHub page.
For example, the default recommended Max skip steps is often three for supported models. Always check the documentation for model-specific advice, as some configurations may work better with certain models.
What is the primary function of the TCH custom node?
The TCH node speeds up ComfyUI workflows by caching and reusing results from previous runs when the inputs have not changed significantly. This reduces unnecessary computations, making iterative experimentation and prototyping much more efficient.
Name two types of loader nodes mentioned that TCH works with.
UNet Loader and LoRA Loader are specifically mentioned as compatible loader nodes for TCH. These nodes handle loading models that serve as the input for subsequent workflow steps.
What benefit does using TCH provide for large, time-consuming workflows?
TCH can dramatically reduce processing times in large workflows by avoiding redundant computations. This is particularly valuable for business professionals working with image-to-video or high-resolution image generation, where each run may otherwise take several minutes.
What is the default recommended setting for Max skip steps?
The default recommended Max skip steps setting is three for all supported models, according to the latest guidance from the source. This value balances performance with reliability in most workflows.
What two specific models or model types are mentioned as being tested with TCH in the video?
Flux and the Wan model are both referenced as being tested and confirmed to work with TCH in the provided examples.
What is caching and why is it useful in ComfyUI?
Caching stores already-computed data or results so they can be quickly reused, rather than recalculated each time.
In ComfyUI, this means you can tweak workflow parameters or try different prompts without waiting for the same computations to finish over and over. For example, if you’re iterating on an image style, caching saves time by only updating changed elements.
How does the TCH node decide when to reuse cached results?
TCH compares the current inputs to the node with the inputs from the last cached run using the Relative L1 threshold setting. If the difference is smaller than the threshold, it reuses the previous result; otherwise, it computes a new one. This approach ensures outputs remain accurate while avoiding unnecessary work.
Can I use TCH with custom or community models?
TCH is designed to work with models that follow standard loader nodes like UNet Loader or LoRA Loader. If your custom or community model integrates with these nodes, TCH should function as expected. For untested models, check the TCH GitHub or test a small workflow first to confirm compatibility.
Is there any risk in using TCH for critical or final outputs?
While TCH is reliable for most use cases, minor output differences can occur due to caching, especially with sensitive or highly variable inputs. For final deliverables where every detail matters, consider comparing both cached and non-cached results to ensure quality meets expectations.
How can I optimize TCH settings for my workflow?
Start with the recommended settings (e.g., Max skip steps = 3), then adjust Relative L1 threshold based on your need for speed versus sensitivity to input changes. For workflows with subtle changes between runs, use a lower threshold. For rapid prototyping, a higher threshold may be appropriate.
Does TCH work with image-to-video (I2V) workflows?
Yes, TCH can be used with image-to-video (I2V) workflows as long as the workflow includes supported loader nodes. By caching intermediate steps, you can significantly accelerate video generation, especially when testing variations.
What common mistakes should I avoid when using TCH in ComfyUI?
Avoid placing the TCH node before a model loader, as this prevents proper caching. Also, don’t set the Relative L1 threshold too high unless you’re comfortable with more frequent reuse of cached results, which can affect subtle output changes. Finally, always restart ComfyUI after installing TCH to ensure activation.
How can I tell if TCH is actually caching or skipping steps?
TCH typically logs when it reuses cached results or skips steps, so check the workflow output logs or the node’s status indicators. You’ll notice faster execution times and repeated outputs for unchanged inputs, confirming that caching is in effect.
Can TCH handle randomness or seed changes in my workflow?
TCH uses the input data, including seed values, to determine cache hits. If you change the seed, TCH recognizes this as a significant input change and recalculates the output. Cached results are only reused when all relevant inputs match within the set threshold.
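One simple way to see why a seed change always invalidates the cache: if the seed is part of the cache key, a new seed can never match a stored entry. The hashing scheme below is an illustrative assumption, not TCH's internal format:

```python
# Illustrative cache key: hash all inputs that influence the output,
# including the seed, so any change produces a different key.
import hashlib
import json

def cache_key(model, prompt, seed, steps):
    payload = json.dumps(
        {"model": model, "prompt": prompt, "seed": seed, "steps": steps},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

same = cache_key("wan", "sunset", seed=42, steps=20)
assert cache_key("wan", "sunset", seed=42, steps=20) == same   # cache hit
assert cache_key("wan", "sunset", seed=43, steps=20) != same   # seed change: miss
```

The same property underpins the reproducibility answer below: identical inputs always map to the same cached entry.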
Is there any visual indication that TCH is being used in a workflow?
Once installed, the TCH node appears as a distinct node in your ComfyUI workflow diagram. You can also monitor node logs or performance metrics to see when caching is applied.
How does TCH affect experimentation or iteration in ComfyUI?
TCH is ideal for rapid iteration, letting you tweak settings, prompts, or minor parameters and see results quickly without rerunning the entire workflow. This is especially valuable for creative professionals or R&D teams who need to test many variations efficiently.
Does TCH support batch processing or automation in ComfyUI?
TCH can be used in batch processing by caching repeated steps across similar tasks, making batch automation more efficient. For example, when generating a series of images with minor prompt changes, TCH avoids recalculating unchanged elements, reducing total runtime.
Does TCH interfere with seed-based reproducibility?
No, TCH only reuses outputs when all relevant inputs, including the seed, are the same. If the seed or any input changes, TCH recalculates the result to maintain reproducibility. This ensures consistent outputs for the same inputs, which is important for business or research documentation.
What should I do if TCH is not giving me the expected speedup?
Check your node placement and settings. Make sure TCH is after the loader node and your Relative L1 threshold is not set too low (which would limit caching). Also, verify that your workflow contains steps that actually benefit from caching, such as repeated computations or model loading.
Can I use multiple TCH nodes in one workflow?
Yes, you can use multiple TCH nodes for different sections of a complex workflow. This is useful when you have several independent model branches or want to cache results at various stages. Just ensure each TCH node is correctly placed after its respective loader node.
What are some practical business use cases for TCH in ComfyUI?
TCH is valuable for prototyping new image or video styles, A/B testing creative assets, or running batch content generation. For marketing, product design, or AI research teams, TCH streamlines workflows by saving time on repeated model executions, freeing up resources for more strategic tasks.
How does TCH handle changes to model weights or updates?
If you update or change the underlying model weights, TCH detects this as a significant input change and recalculates the output. Cached results will only be reused if the model and all relevant parameters match the cached version.
Are there any known limitations or issues with TCH?
Some models or custom nodes may not be fully compatible with TCH, especially if they don’t follow standard loader protocols. Always test with new models, and check the GitHub page for up-to-date compatibility notes. Also, in rare cases, very subtle input changes might be missed if the threshold is set too high.
Can I share TCH-cached workflows or results with team members?
Caching is local to your ComfyUI instance, so cached results aren’t automatically shared between users. For collaboration, share workflow settings and ensure team members have matching inputs and models to reproduce results.
Is TCH recommended for all types of ComfyUI workflows?
TCH is most beneficial in workflows where repeated computations are common, such as image generation with iterative parameter changes. For one-off runs or highly variable workflows, the speedup may be less noticeable.
Does TCH require extra hardware or resources?
TCH does not require special hardware, but it does use some additional memory to store cached results. For large projects, monitor your memory usage to ensure smooth operation.
Where can I get help or report issues with TCH?
The best place for support is the TCH node’s GitHub page or the ComfyUI community forums. There you’ll find documentation, troubleshooting tips, and a place to submit bug reports or feature requests.
Certification
About the Certification
Boost your creative output and save valuable time with TeaCache for ComfyUI. Learn how smart caching skips redundant steps, speeds up repetitive workflows, and gives you control over performance, so you can focus on results, not waiting.
Official Certification
Upon successful completion of the "ComfyUI Course Ep 40: TeaCache – Speed Up Your Workflows with Smart Caching", you will receive a verifiable digital certificate. This certificate demonstrates your expertise in the subject matter covered in this course.
Benefits of Certification
- Enhance your professional credibility and stand out in the job market.
- Validate your skills and knowledge in a high-demand area of AI.
- Unlock new career opportunities in AI and creative technology.
- Share your achievement on your resume, LinkedIn, and other professional platforms.
How to complete your certification successfully?
To earn your certification, you’ll need to complete all video lessons, study the guide carefully, and review the FAQ. After that, you’ll be prepared to pass the certification requirements.
Join 20,000+ Professionals Using AI to Transform Their Careers
Join professionals who didn’t just adapt; they thrived. You can too, with AI training designed for your job.