Fine-tune stable diffusion on corpora
Sept. 28, 2022, 8:18 a.m. | /u/Academy-
Natural Language Processing www.reddit.com
I’m curious as to whether there is any way to fine-tune a Stable Diffusion (text2img) model on, say, a larger (1000+) corpus of (img, text) pairs.
I have seen posts on [textual inversion](https://towardsdatascience.com/how-to-fine-tune-stable-diffusion-using-textual-inversion-b995d7ecc095), which enables one to fine-tune the underlying embeddings. However, this method seems to work well with 3-5 new examples and doesn’t scale well.
So, is there a way to efficiently fine-tune on more data? Compute cost …
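For context on what "fine-tuning on more data" means here: unlike textual inversion (which only optimizes a new token embedding), full fine-tuning updates the denoising network itself, and every (image, text) pair in the corpus contributes the standard epsilon-prediction loss. Below is a minimal NumPy sketch of that objective under stated assumptions: the model is a dummy stand-in for Stable Diffusion's text-conditioned UNet, and the helper names and schedule values are illustrative, not from any particular library.

```python
import numpy as np

def make_alpha_bar(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_t) for a linear beta schedule (illustrative values)."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    return np.cumprod(1.0 - betas)

def diffusion_loss(model, latents, alpha_bar, rng):
    """Epsilon-prediction MSE on one batch of image latents.

    This is the quantity a full fine-tune would minimize, averaged over
    the whole (img, text) corpus, with the text conditioning fed to the
    real UNet (omitted here for brevity).
    """
    b = latents.shape[0]
    t = rng.integers(0, len(alpha_bar), size=b)      # random timestep per sample
    eps = rng.standard_normal(latents.shape)         # the noise the model must predict
    ab = alpha_bar[t].reshape(-1, 1)                 # broadcast schedule over features
    noisy = np.sqrt(ab) * latents + np.sqrt(1.0 - ab) * eps
    pred = model(noisy, t)                           # model's noise prediction
    return np.mean((pred - eps) ** 2)

rng = np.random.default_rng(0)
alpha_bar = make_alpha_bar()
latents = rng.standard_normal((4, 64))               # 4 dummy image latents
zero_model = lambda x, t: np.zeros_like(x)           # dummy stand-in: always predicts 0
loss = diffusion_loss(zero_model, latents, alpha_bar, rng)
print(loss)  # MSE against unit-variance noise, so near 1.0 for the zero model
```

In a real run, the gradient of this loss with respect to the UNet parameters is what gets backpropagated each step, which is why compute cost grows with corpus size: every pair is noised and denoised many times over the course of training.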