  1. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

    Dec 20, 2024 · We begin our exploration with a comprehensive summary of existing efficient attention mechanisms and identify four key factors crucial for successful linearization of pre-trained DiTs: …

  2. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

    Dec 20, 2024 · The paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up" addresses the computational challenges involved in generating high-resolution images using …

  3. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

    Dec 20, 2024 · This paper investigates how to convert a pre-trained Diffusion Transformer into a linear DiT, examining its simplicity, parallelism, and efficiency for image generation, and proposes …

  4. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

    This paper describes CLEAR, a new method that improves the efficiency of Diffusion Transformers (DiTs) for image generation by changing how they handle attention, which is the process of …

  5. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers ...

    CLEAR: A Linearization Revolution in Image Generation. Project overview: CLEAR (Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up) is a research project on efficient attention mechanisms for pre-trained Diffusion Transformers (DiTs).

  6. Diffusion Models Explained (21): CLEAR, a Convolution-Like Linear Diffusion Transformer

    Jan 6, 2025 · The Diffusion Transformer (DiT) has become the dominant architecture for image generation. However, the quadratic complexity of self-attention, which models the relationships between tokens, introduces significant latency when generating high-resolution images. To address this …

  7. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers ...

    We begin our exploration with a comprehensive summary of existing efficient attention mechanisms and identify four key factors crucial for successful linearization of pre-trained DiTs: locality, formulation …

  8. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

    In this paper, we present CLEAR, a convolution-like local attention strategy that effectively linearizes the attention mechanism in pre-trained Diffusion Transformers (DiTs), making them significantly more …

  9. Daily Papers - Hugging Face

    Mar 13, 2025 · Based on these insights, we introduce a convolution-like local attention strategy termed CLEAR, which limits feature interactions to a local window around each query token, and thus …

  10. CLEAR/README.md at main · Huage001/CLEAR · GitHub

    Based on these insights, we introduce a convolution-like local attention strategy termed CLEAR, which limits feature interactions to a local window around each query token, and thus achieves linear … (see the attention sketch after these results)

  11. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

    Dec 22, 2024 · The paper presents CLEAR, a novel convolution-like local attention mechanism that reduces the attention complexity of pre-trained Diffusion Transformers from quadratic to linear, …

  12. CLEAR: A Linearization Revolution in Image Generation - CSDN Blog

    Mar 29, 2025 · CLEAR (Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up) is a research project on efficient attention mechanisms for pre-trained Diffusion Transformers (DiTs).

  13. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

    Dec 20, 2024 · The proposed CLEAR method uses a convolution-like local attention mechanism to linearize pre-trained diffusion transformers. This reduces computational complexity by 99.5% and … (see the complexity estimate after these results)

  14. CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

    CLEAR modifies the attention computation in diffusion transformers by introducing a linearized operation that approximates standard attention. The method uses a convolution-inspired approach to process …

  15. Daily Paper Cast | CLEAR: Conv-Like Linearization Revs Pre-Trained ...

    We begin our exploration with a comprehensive summary of existing efficient attention mechanisms and identify four key factors crucial for successful linearization of pre-trained DiTs: locality, formulation …
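Several of the results above (notably 8-11 and 14) describe the same core idea: each query token attends only to the key/value tokens inside a local window around it, so the attention cost grows linearly with the number of tokens instead of quadratically. The sketch below illustrates that idea in PyTorch under stated assumptions; the square window, the `window_size` value, the per-token Python loop, and the function name `local_window_attention` are illustrative choices, not details taken from the CLEAR paper or its repository.

```python
import torch
import torch.nn.functional as F


def local_window_attention(q, k, v, height, width, window_size=8):
    """Minimal sketch: each query attends only to a local window of keys/values.

    q, k, v: (num_tokens, dim) tensors for a single attention head, where the
    tokens form a (height x width) grid of image latents.
    """
    num_tokens, dim = q.shape
    assert num_tokens == height * width
    half = window_size // 2

    out = torch.empty_like(q)
    for idx in range(num_tokens):
        row, col = divmod(idx, width)
        # Clamp the window to the grid boundary.
        r0, r1 = max(0, row - half), min(height, row + half + 1)
        c0, c1 = max(0, col - half), min(width, col + half + 1)
        # Indices of the tokens inside the local window around this query.
        window_idx = torch.tensor(
            [r * width + c for r in range(r0, r1) for c in range(c0, c1)]
        )
        k_win, v_win = k[window_idx], v[window_idx]
        # Standard scaled dot-product attention, restricted to the window.
        scores = (q[idx] @ k_win.T) / dim**0.5
        out[idx] = F.softmax(scores, dim=-1) @ v_win
    return out


# Toy usage: a 16x16 latent grid with 64-dimensional tokens.
h = w = 16
q, k, v = (torch.randn(h * w, 64) for _ in range(3))
y = local_window_attention(q, k, v, h, w, window_size=8)
print(y.shape)  # torch.Size([256, 64])
```

Because the window size is fixed, each query does a constant amount of work, which is what makes the overall cost linear in the number of tokens; an actual implementation would vectorize this and may use a different window shape than the square one assumed here.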
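Result 13 also quotes a 99.5% reduction in computation. The snippet below is only a back-of-the-envelope estimate of why such figures can arise, not a reproduction of the paper's measurement: full attention scores N² query-key pairs, while window attention scores roughly N·w², so for a large token grid and a fixed window the reduction approaches 100%. The 256x256 grid and window size 16 are hypothetical numbers chosen just to show the scaling.

```python
def attention_pairs(height, width, window_size=None):
    """Count query-key pairs scored by attention over a height x width token grid."""
    n = height * width
    if window_size is None:
        return n * n               # full self-attention: every token vs. every token
    return n * window_size ** 2    # local attention: each token vs. a w x w window


# Hypothetical high-resolution latent grid and window (illustrative only).
full = attention_pairs(256, 256)
local = attention_pairs(256, 256, window_size=16)
print(f"reduction in query-key pairs: {1 - local / full:.2%}")  # ~99.6%
```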