Dec. 31, 2023, 7:37 p.m. | /u/aadityaura

Machine Learning www.reddit.com

How do you evaluate Large Language Models (LLMs) on MMLU and other benchmarks without hand-writing a large number of prompts?
Is there a repository that offers few-shot learning, chain-of-thought (CoT) prompting, and other techniques in a user-friendly format, allowing for easy integration and evaluation of an LLM?
I'm currently developing an 'easy eval' module and checking whether anything similar is already available. To make the request concrete, a sketch of the kind of evaluation loop I mean is below.
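
This is a minimal, untested sketch of the standard few-shot MMLU protocol: build a k-shot prompt from solved dev-split items, then pick the answer letter whose token gets the highest next-token log-probability. It assumes the Hugging Face `transformers` API; the `question`/`choices`/`answer` field names and the `gpt2` placeholder model are my assumptions, not part of any particular repo.

```python
# Minimal few-shot MMLU evaluation sketch (assumed data layout:
# each item is a dict with "question", "choices" (list of 4 strings),
# and "answer" (index 0-3); adjust to your copy of the data).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

LETTERS = ["A", "B", "C", "D"]

def format_example(ex, include_answer=True):
    """Render one item in the usual 'question + lettered choices' style."""
    lines = [ex["question"]]
    lines += [f"{l}. {c}" for l, c in zip(LETTERS, ex["choices"])]
    lines.append("Answer:" + (f" {LETTERS[ex['answer']]}" if include_answer else ""))
    return "\n".join(lines)

def build_prompt(dev_examples, test_example, k=5):
    """k-shot prompt: k solved dev examples, then the unsolved test question."""
    shots = "\n\n".join(format_example(ex) for ex in dev_examples[:k])
    return shots + "\n\n" + format_example(test_example, include_answer=False)

@torch.no_grad()
def predict(model, tokenizer, prompt):
    """Pick the letter with the highest next-token log-probability."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    logits = model(**inputs).logits[0, -1]  # next-token distribution
    # Score " A", " B", ... ; [-1] handles tokenizers that split the string.
    scores = {l: logits[tokenizer.encode(" " + l, add_special_tokens=False)[-1]].item()
              for l in LETTERS}
    return max(scores, key=scores.get)

# Usage (any causal LM works; gpt2 is only a stand-in):
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
# model = AutoModelForCausalLM.from_pretrained("gpt2")
# pred = predict(model, tokenizer, build_prompt(dev_examples, test_example))
```

A CoT variant would instead generate free-form reasoning and parse the final letter, which is exactly the per-benchmark prompt plumbing I'm hoping a repository already handles.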

