April 29, 2022, 12:33 p.m. | /u/Far-Explorer-2300

Natural Language Processing www.reddit.com

I fine-tuned a language model on a dataset for sentence generation and decided to use perplexity to evaluate that model. However, I noticed that, on visual inspection, many of the sentences with lower perplexities made less sense than the ones with higher perplexities. Perhaps I'm misunderstanding this metric, but aren't sentences that make more sense supposed to have lower perplexities? I'm a beginner in this area, so if anyone could enlighten me, I'd appreciate it.
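For context, perplexity is usually defined as the exponential of the average negative log-likelihood per token under the model, so a lower value means the model assigns the sentence a higher probability. Below is a minimal sketch of how per-sentence perplexity is commonly computed, assuming a Hugging Face causal LM (GPT-2 is used here purely as a stand-in for the fine-tuned model in the question):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# GPT-2 as a placeholder; any causal LM checkpoint could be substituted.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity = exp(mean negative log-likelihood per predicted token)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss
        # over the predicted tokens.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

print(sentence_perplexity("The cat sat on the mat."))  # typically lower
print(sentence_perplexity("Mat the on sat cat the."))  # typically higher
```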

