all AI news
Topic: peft
MoPEFT: A Mixture-of-PEFTs for the Segment Anything Model
2 days, 18 hours ago | arxiv.org

Efficiently fine-tune Llama 3 with PyTorch FSDP and Q-Lora
1 week, 5 days ago | www.philschmid.de

LoReFT: Representation Finetuning for Language Models
2 weeks, 2 days ago | www.unite.ai

Shears: Unstructured Sparsity with Neural Low-rank Adapter Search
2 weeks, 2 days ago | arxiv.org

A Single Linear Layer Yields Task-Adapted Low-Rank Matrices
1 month, 1 week ago | arxiv.org

Improving LoRA in Privacy-preserving Federated Learning
1 month, 2 weeks ago | arxiv.org

Advancing Parameter Efficiency in Fine-tuning via Representation Editing
2 months, 1 week ago | arxiv.org

SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning
2 months, 2 weeks ago | arxiv.org

DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
2 months, 2 weeks ago | arxiv.org

I Compared PEFT-Lora vs Full Fine-Tune on OpenAI’s Whisper
2 months, 3 weeks ago | pub.towardsai.net

Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning
2 months, 3 weeks ago | arxiv.org
Items published with this topic over the last 90 days.