June 15, 2022, 1:47 p.m. | /u/Competitive-Rub-1958

Machine Learning www.reddit.com

This is a bit of a complex topic, so a simple discussion post may not be the best place to fully flesh it out - but in the context of few-shot learning in LLMs (Large Language Models), we observe static, un-updated weights inferring patterns and sometimes even learning complex tasks.

I was wondering why forward passing *works so well* - by all means, we should be updating the weights with new information, yet a plain forward pass seems to …
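As a rough illustration of the setup being discussed, in-context (few-shot) learning amounts to nothing more than prompt construction followed by a single forward pass with frozen weights. A minimal sketch, where `generate` is a hypothetical placeholder for any LLM completion call (not a real API):

```python
# Sketch of in-context (few-shot) learning: the model's weights stay frozen;
# any "learning" happens purely inside the context window of one forward pass.

def build_few_shot_prompt(examples, query):
    """Concatenate labelled demonstrations with a new query.

    examples: list of (input, output) pairs the frozen model should mimic.
    query:    the new input we want a completion for.
    """
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("cat", "animal"),
    ("rose", "plant"),
    ("dog", "animal"),
]
prompt = build_few_shot_prompt(examples, "tulip")
print(prompt)

# No gradient update occurs anywhere: the word -> category pattern is
# inferred entirely from the demonstrations at inference time, e.g.:
# completion = generate(prompt)  # hypothetical LLM call, weights unchanged
```

The point of the sketch is that every "training example" lives in the prompt string, not in the parameters, which is exactly what makes the effectiveness of the forward pass surprising.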

behavior llms machinelearning
