Oct. 24, 2022, 7:29 p.m. | /u/_Arsenie_Boca_

Deep Learning www.reddit.com

I want to finetune a GPT-like model on a Seq2Seq task. Even though I have been using Huggingface for quite some time, I am very confused about how to pass the input to the model.

Some examples simply concatenate prompt and target and pass them via `input_ids`. But how are you supposed to tell the model where the prompt ends and the target begins?

The two most intuitive candidates from reading the docs seem to be `token_type_ids` and `labels`, but …
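For context (not part of the original post): the pattern most examples follow for decoder-only models is to concatenate prompt and target into a single `input_ids` sequence, and to mark where the prompt ends via `labels`, not `token_type_ids`. Prompt positions in `labels` are set to -100, the ignore index of PyTorch's `CrossEntropyLoss`, which `transformers` respects, so loss is computed only on the target tokens. A minimal sketch with made-up token ids (`build_example`, `eos_id`, and the id values are illustrative, not a real API):

```python
# Common pattern for fine-tuning a GPT-like (decoder-only) model on a
# Seq2Seq task: one concatenated sequence as input_ids, and labels that
# mask out the prompt with -100 so only target tokens contribute to the
# loss. Token ids here are invented for illustration.

IGNORE_INDEX = -100  # ignore index used by PyTorch CrossEntropyLoss


def build_example(prompt_ids, target_ids, eos_id):
    """Return (input_ids, labels) for causal-LM seq2seq fine-tuning."""
    input_ids = prompt_ids + target_ids + [eos_id]
    # Mask the prompt positions; keep the target (and EOS) as-is.
    labels = [IGNORE_INDEX] * len(prompt_ids) + target_ids + [eos_id]
    return input_ids, labels


prompt_ids = [101, 102, 103]  # e.g. tokenized "Translate: Hello"
target_ids = [201, 202]       # e.g. tokenized "Bonjour"
input_ids, labels = build_example(prompt_ids, target_ids, eos_id=0)

print(input_ids)  # [101, 102, 103, 201, 202, 0]
print(labels)     # [-100, -100, -100, 201, 202, 0]
```

Both lists have the same length, so `labels` can be passed alongside `input_ids` to the model's forward call; the model shifts them internally for next-token prediction.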

deeplearning format gpt huggingface seq2seq
