May 3, 2024, 4:15 a.m. | Pinzhen Chen, Zhicheng Guo, Barry Haddow, Kenneth Heafield

cs.CL updates on arXiv.org

arXiv:2306.03856v2 Announce Type: replace
Abstract: We propose iteratively prompting a large language model to self-correct a translation, drawing inspiration from its strong language understanding and translation capability as well as a human-like translation approach. Interestingly, multi-turn querying reduces the output's string-based metric scores, but neural metrics suggest comparable or improved quality. Human evaluations indicate better fluency and naturalness compared to the initial translations and even human references, all while maintaining quality. Ablation studies underscore the importance of anchoring the refinement to …
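The abstract describes a multi-turn refinement loop: translate once, then repeatedly ask the model to improve its own output while keeping the source text in view. The sketch below illustrates that pattern under stated assumptions; `ask_llm` is a hypothetical caller-supplied function (any chat-style LLM backend), and the prompt wording and number of rounds are illustrative, not the paper's exact setup.

```python
# A minimal sketch of iterative self-correcting translation prompting,
# assuming a hypothetical `ask_llm(messages) -> str` callable that sends a
# chat-style message list to a large language model and returns its reply.
from typing import Callable


def iterative_refine_translation(
    source: str,
    src_lang: str,
    tgt_lang: str,
    ask_llm: Callable[[list[dict]], str],
    rounds: int = 3,
) -> str:
    """Translate `source` once, then repeatedly prompt the model to refine
    its own draft while the original source text stays in the context."""
    messages = [
        {
            "role": "user",
            "content": f"Translate the following {src_lang} text into {tgt_lang}:\n{source}",
        },
    ]
    translation = ask_llm(messages)

    for _ in range(rounds):
        # Keep the previous draft in the conversation, then request a refinement
        # anchored to the original source text rather than the draft alone.
        messages.append({"role": "assistant", "content": translation})
        messages.append(
            {
                "role": "user",
                "content": (
                    f"Please improve the fluency and naturalness of your {tgt_lang} "
                    f"translation of the original {src_lang} text above, "
                    f"keeping the meaning unchanged."
                ),
            }
        )
        translation = ask_llm(messages)

    return translation
```

In this sketch the full conversation history grows with each round, which is one simple way to keep the refinement anchored to the source sentence; a production setup might instead re-send only the source and the latest draft each turn.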

