April 6, 2024, 6:38 p.m. | /u/vov_or

r/MachineLearning | www.reddit.com

Demo: [https://huggingface.co/spaces/unum-cloud/uform-gen2-qwen-500m-dpo-demo](https://huggingface.co/spaces/unum-cloud/uform-gen2-qwen-500m-dpo-demo)
HF card: [https://huggingface.co/unum-cloud/uform-gen2-dpo](https://huggingface.co/unum-cloud/uform-gen2-dpo)
Code: [https://github.com/unum-cloud/uform](https://github.com/unum-cloud/uform)
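Quick start (not from the post itself; a sketch following the usual uform-gen2 loading pattern on the model card, so treat the exact processor and generate arguments as assumptions):

```python
# Hypothetical quick start, assuming the standard uform-gen2 pattern:
# the model ships custom code on the Hub, hence trust_remote_code=True.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("unum-cloud/uform-gen2-dpo", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("unum-cloud/uform-gen2-dpo", trust_remote_code=True)

prompt = "Describe the image in detail."
image = Image.open("example.jpg")  # any local image

inputs = processor(text=[prompt], images=[image], return_tensors="pt")
with torch.inference_mode():
    output = model.generate(**inputs, do_sample=False, max_new_tokens=256)

# Decode only the generated continuation, not the echoed prompt.
prompt_len = inputs["input_ids"].shape[1]
print(processor.batch_decode(output[:, prompt_len:], skip_special_tokens=True)[0])
```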

TL;DR:
Hallucinations remain a hard problem for vision-language models. We experimented with DPO alignment on open-source preference data and saw notable gains. The model was fine-tuned from our previous checkpoint (unum-cloud/uform-gen2-qwen-500m) on two preference datasets:

1. MMInstruction/VLFeedback (~80k synthetic preference instructions)
2. zhiqings/LLaVA-Human-Preference-10K (10k human-annotated preferences)

This alignment stage improved the perception score on the MME benchmark by more than 15%; a rough sketch of the DPO stage follows below.
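For context on the training recipe, here is a minimal, hypothetical DPO sketch using TRL's `DPOTrainer` (API circa early 2024). It aligns a plain causal LM on a toy preference pair; the actual uform-gen2-dpo run also routes images through the vision encoder and maps VLFeedback and LLaVA-Human-Preference-10K into the prompt/chosen/rejected shape, none of which is shown here.

```python
# Minimal, hypothetical DPO sketch with TRL. A small causal LM stands in
# for the Qwen-based language tower; the real run also feeds images
# through the vision encoder, which this sketch omits.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Qwen/Qwen1.5-0.5B"  # illustrative stand-in, not the post's checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# DPO expects "prompt", "chosen", and "rejected" text columns; in practice
# the preference datasets above would be mapped into this shape.
train_dataset = Dataset.from_dict({
    "prompt": ["Describe the image."],
    "chosen": ["A dog is sitting on green grass."],
    "rejected": ["A cat is flying over a purple ocean."],  # hallucinated answer
})

trainer = DPOTrainer(
    model,
    ref_model,
    beta=0.1,  # strength of the KL pull toward the reference model
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=1,
        learning_rate=5e-6,
        num_train_epochs=1,
    ),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=256,
    max_prompt_length=64,
)
trainer.train()
```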
