DEV Community

Takara Taniguchi

[memo]Training-free Regional Prompting for Diffusion Transformers

Abstract

Existing models cannot handle long, composition-heavy prompts well

Regional prompting

Introduction

Contribution

  • Training-free attention manipulation
  • Applicable to other models
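
The training-free idea is to manipulate cross-attention so that image tokens in each spatial region only attend to the text tokens of that region's sub-prompt (plus a global prompt). The sketch below is my own illustrative reconstruction of that kind of region-aware attention masking, not the paper's actual code; all function names and the region encoding (region 0 = global) are assumptions.

```python
import numpy as np

def build_region_mask(img_regions, txt_regions):
    """Boolean mask: image token i may attend to text token j only when
    both carry the same region id; region id 0 marks global text tokens
    that stay visible to every image token. (Illustrative convention.)"""
    img = np.asarray(img_regions)[:, None]   # (N_img, 1)
    txt = np.asarray(txt_regions)[None, :]   # (1, N_txt)
    return (img == txt) | (txt == 0)

def masked_cross_attention(Q, K, V, mask):
    """Plain softmax cross-attention with -inf logits on masked pairs."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -np.inf)          # block cross-region pairs
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Example: 4 image tokens split into two regions, 3 text tokens
# (one global, one per region).
mask = build_region_mask([1, 1, 2, 2], [0, 1, 2])
```

Because the mask only changes which logits survive the softmax, the same manipulation can be dropped into any attention-based backbone at inference time, which is why the approach transfers across models without retraining.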

Related works

Difficult to generate images that faithfully follow long compositional prompts

T5-XXL, Playground 3.0, SD 1.5, CLIP ViT-L, SDXL, OpenCLIP ViT-bigG, PixArt, Stable Diffusion 3, FLUX.1

Kolors, SDXL, GLM

Compositional t2i generation

GLIGEN, InstanceDiffusion

MS-diffusion, Mixture of diffusers, multidiffusion, RPG, Omost

Conclusion

Proposed a training-free regional prompting method for diffusion transformers
