April 6, 2024, 6:38 p.m. | /u/vov_or

Machine Learning www.reddit.com

Demo: [https://huggingface.co/spaces/unum-cloud/uform-gen2-qwen-500m-dpo-demo](https://huggingface.co/spaces/unum-cloud/uform-gen2-qwen-500m-dpo-demo)
HF card: [https://huggingface.co/unum-cloud/uform-gen2-dpo](https://huggingface.co/unum-cloud/uform-gen2-dpo)
Code: [https://github.com/unum-cloud/uform](https://github.com/unum-cloud/uform)

TLDR:
Reducing hallucinations in vision-language models is a hard problem. Experimenting with DPO alignment on open-source preference data produced notable gains.
The model was fine-tuned from our previous checkpoint (unum-cloud/uform-gen2-qwen-500m) using two preference datasets:

1. MMInstruction/VLFeedback (80k synthetic preference instructions)
2. zhiqings/LLaVA-Human-Preference-10K (human-annotated preferences)

This alignment improved the model's perception score on the MME benchmark by over 15%.
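For anyone unfamiliar with DPO: the alignment step above optimizes a simple pairwise objective over (chosen, rejected) response pairs from the preference datasets. A minimal sketch of the per-pair loss, in plain Python (the function name and the toy log-probabilities are illustrative, not from our training code):

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of the full response
    under the trainable policy or the frozen reference model; beta
    controls how far the policy may drift from the reference.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)): small when the policy prefers the chosen answer
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy already favors the chosen response -> low loss; reversed -> high loss
low = dpo_loss(-10.0, -30.0, -20.0, -20.0)
high = dpo_loss(-30.0, -10.0, -20.0, -20.0)
```

In practice this runs batched in a framework like TRL's `DPOTrainer` rather than per pair, but the objective is the same.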

