Nov. 16, 2023, 2:35 p.m. | /u/ytu876

Natural Language Processing www.reddit.com

Hi,

I read somewhere about these two approaches for text classification:

1. BERT + a new head layer: freeze BERT, train only the new head layer
2. SBERT + logistic regression: use SBERT to generate sentence embeddings, then train a logistic regression on them as a traditional classifier

How are these two different? I believe in approach #1 the head layer is just a regular feed-forward layer, which is not much different from a logistic regression. Both use an LLM to convert …
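To make the comparison concrete, here is a minimal sketch showing that a single linear layer trained with cross-entropy (i.e. what a one-layer classification head does on top of a frozen encoder) is the same model family as scikit-learn's `LogisticRegression`. The random vectors below are stand-ins for sentence embeddings (real SBERT embeddings would be, e.g., 384- or 768-dimensional); the dimensions and training hyperparameters are illustrative assumptions, not anything from the post.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for sentence embeddings: n examples, dim features,
# labeled by a linear rule so the problem is linearly separable.
dim, n = 16, 400
X = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = (X @ w_true > 0).astype(int)

# "SBERT + logistic regression" half: fit sklearn's classifier
# directly on the embedding vectors.
clf = LogisticRegression().fit(X, y)
acc_sklearn = clf.score(X, y)

# "Frozen encoder + head" half: a single linear layer + sigmoid
# trained with cross-entropy via plain gradient descent -- the
# same model family, just optimized by hand.
w, b, lr = np.zeros(dim), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid output
    w -= lr * (X.T @ (p - y)) / n           # cross-entropy gradient
    b -= lr * (p - y).mean()
acc_head = (((X @ w + b) > 0).astype(int) == y).mean()

print(acc_sklearn, acc_head)  # both should be near 1.0 here
```

Both fits recover essentially the same decision boundary, which is the point: with a one-layer head, the practical difference between the two approaches is not the classifier but the embedding underneath it. BERT's raw [CLS] vector is not trained to be a good sentence representation, while SBERT embeddings are explicitly tuned so that similar sentences land close together, which usually makes the downstream linear classifier's job easier.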

