Dec. 20, 2023, 11:56 a.m. | /u/manu_3257

Natural Language Processing www.reddit.com

Hey there, I am new to NLP and currently working on a semantic role labelling model where I am using word and sentence embeddings from a PLM like DeBERTa/T5 for the downstream task. The PLM is not involved in the training; its embeddings are computed and stored during pre-processing.
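For what it's worth, a minimal sketch of that pre-processing step might look like the following: word-level and mean-pooled sentence-level embeddings from a frozen PLM, computed once and saved to disk. The checkpoint name and pooling choice are assumptions, not from the original post.

```python
# Sketch: pre-compute embeddings from a frozen PLM during pre-processing.
# The checkpoint ("microsoft/deberta-v3-base") and mean pooling are
# illustrative assumptions; the PLM takes no part in downstream training.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "microsoft/deberta-v3-base"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()  # inference only; no gradients needed

@torch.no_grad()
def embed(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    out = model(**batch).last_hidden_state           # (B, T, H) word-level
    mask = batch["attention_mask"].unsqueeze(-1)     # ignore padding tokens
    sent = (out * mask).sum(1) / mask.sum(1)         # (B, H) sentence-level
    return out, sent

word_emb, sent_emb = embed(["The cat sat on the mat."])
# Store for later use by the SRL model, so the PLM is never in the loop.
torch.save({"word": word_emb, "sent": sent_emb}, "embeddings.pt")
```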

My understanding is that if I can fine-tune the PLM weights on my own dataset before the training, it should in theory provide better embeddings which will …

