MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation
April 18, 2024, 4:44 a.m. | Kuan-Chieh (Jackson) Wang, Daniil Ostashev, Yuwei Fang, Sergey Tulyakov, Kfir Aberman
cs.CV updates on arXiv.org
Abstract: We introduce a new architecture for personalization of text-to-image diffusion models, coined Mixture-of-Attention (MoA). Inspired by the Mixture-of-Experts mechanism utilized in large language models (LLMs), MoA distributes the generation workload between two attention pathways: a personalized branch and a non-personalized prior branch. MoA is designed to retain the original model's prior by fixing its attention layers in the prior branch, while minimally intervening in the generation process with the personalized branch that learns to embed …
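The abstract describes two attention pathways blended per layer: a frozen prior branch that preserves the original model, and a trainable personalized branch that intervenes minimally. Below is a minimal, hypothetical sketch of how such a mixture-of-attention layer could be wired up in PyTorch; the class and parameter names (MoALayer, router) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Mixture-of-Attention (MoA) layer, based only on
# the abstract above. Names (MoALayer, router) are illustrative, not from
# the authors' code.
import copy
import torch
import torch.nn as nn

class MoALayer(nn.Module):
    """Blends a frozen 'prior' attention branch with a trainable
    'personalized' branch via a learned per-token soft router."""

    def __init__(self, prior_attn: nn.MultiheadAttention, dim: int):
        super().__init__()
        self.prior_attn = prior_attn            # original model's attention
        for p in self.prior_attn.parameters():  # fixed, preserving the prior
            p.requires_grad_(False)
        # Personalized branch starts as a copy of the prior and is fine-tuned.
        self.personal_attn = copy.deepcopy(prior_attn)
        for p in self.personal_attn.parameters():
            p.requires_grad_(True)
        # Router predicts, per token, how much the personalized branch
        # should intervene. Negative bias init pushes the gate toward 0,
        # so generation starts close to the unmodified prior model.
        self.router = nn.Linear(dim, 1)
        nn.init.zeros_(self.router.weight)
        nn.init.constant_(self.router.bias, -4.0)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) image latents; context: conditioning tokens.
        # Assumes the wrapped attention was built with batch_first=True.
        prior_out, _ = self.prior_attn(x, context, context)
        personal_out, _ = self.personal_attn(x, context, context)
        gate = torch.sigmoid(self.router(x))    # (batch, tokens, 1) in [0, 1]
        # Soft mixture: gate == 0 recovers the original model exactly.
        return (1.0 - gate) * prior_out + gate * personal_out

# Usage sketch with assumed dimensions:
attn = nn.MultiheadAttention(embed_dim=320, num_heads=8, batch_first=True)
layer = MoALayer(attn, dim=320)
```

Freezing the prior branch and gating the personalized one is what lets gate values near zero fall back to the base model's behavior, which matches the abstract's goal of retaining the original prior while intervening minimally.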
Tags: arxiv, attention, context, cs.AI, cs.CV, cs.GR, image, image generation, personalized