Diffusion-based text-to-image models have rapidly gained popularity for their ability to generate detailed and realistic images from textual
descriptions. However, these models often reflect the biases present in their training data, disproportionately affecting marginalized groups.
While prior efforts to debias language models have addressed specific biases, such as racial or gender bias, efforts to tackle
intersectional bias have been limited. Intersectional bias refers to the distinct form of bias experienced by individuals at the intersection of
multiple social identities. Addressing it is crucial because such bias compounds the negative effects of discrimination based on
race, gender, and other identities. In this paper, we introduce a method that addresses intersectional bias in diffusion-based text-to-image models
by modifying cross-attention maps in a disentangled manner. Our approach works with a pre-trained Stable Diffusion model, requires no
additional reference images, and preserves the original generation quality for unaltered concepts. Comprehensive experiments demonstrate that our
method surpasses existing approaches in mitigating both single-attribute and intersectional biases across a range of attributes. We release our source code and debiased
models for various attributes to encourage fairness in generative models and to support further research.
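As a rough, self-contained illustration of the general idea of disentangled cross-attention editing, the following PyTorch sketch blends the attention columns of selected prompt tokens toward a target map while leaving every other token's attention untouched; the function name, the `target_probs` input, and the linear blending rule are illustrative assumptions rather than the exact procedure described in the paper.

```python
import torch

def edit_cross_attention(
    attn_probs: torch.Tensor,    # (batch, heads, n_image_tokens, n_text_tokens)
    target_probs: torch.Tensor,  # same shape; encodes the desired attribute balance
    concept_token_ids: list,     # text-token positions tied to the biased concept
    strength: float = 0.5,       # 0 keeps the original map, 1 fully adopts the target
) -> torch.Tensor:
    """Blend the attention columns of the selected concept tokens toward a target map,
    leaving every other token's attention (and hence unrelated concepts) intact."""
    edited = attn_probs.clone()
    idx = torch.tensor(concept_token_ids, device=attn_probs.device)
    edited[..., idx] = (1.0 - strength) * attn_probs[..., idx] + strength * target_probs[..., idx]
    # Renormalize over the text dimension so each image token's attention still sums to 1.
    return edited / edited.sum(dim=-1, keepdim=True).clamp_min(1e-8)

if __name__ == "__main__":
    # Toy shapes: 1 prompt, 8 heads, a 64x64 latent (4096 image tokens), 77 text tokens.
    attn = torch.softmax(torch.randn(1, 8, 4096, 77), dim=-1)
    target = torch.softmax(torch.randn(1, 8, 4096, 77), dim=-1)
    out = edit_cross_attention(attn, target, concept_token_ids=[4, 5], strength=0.5)
    print(out.shape)  # torch.Size([1, 8, 4096, 77])
```

In a full pipeline, a function of this kind would typically be applied inside the U-Net's cross-attention layers at each denoising step, for example through an attention-processor hook.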