[D] Techniques for handling input documents with a large number of tokens in BERT/GPT2 style models?
April 12, 2024, 6:45 p.m. | /u/wantondevious
Machine Learning www.reddit.com
I'm wondering if anyone has a survey of the easiest ways to handle classification tasks where the input length is far greater than 512 tokens (or whatever limit the single-GPU models have).
I'm working in a complex space. I'm looking at a ranking problem (it may actually be simply binary, with a class imbalance), so not generative, but one where the text's contents are important, not just some pooled version of the embeddings, and so I'd like to make …
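One common baseline for this (not from the post; a sketch under my own assumptions) is to split the long token sequence into overlapping windows that each fit the encoder's 512-token limit, score each window with the classifier, and pool the per-window scores. The `encode_window` callable below is a hypothetical stand-in for a BERT-style classification head; the window size, stride, and mean-pooling choice are illustrative defaults, not prescriptions.

```python
# Sketch: sliding-window chunking for long-document classification.
# Assumes a 512-token encoder limit; `encode_window` is a hypothetical
# stand-in for a real model's per-window scoring function.

from typing import Callable, List

MAX_LEN = 512   # encoder's positional limit (assumed)
STRIDE = 384    # step between windows -> 128 tokens of overlap

def chunk_tokens(token_ids: List[int], max_len: int = MAX_LEN,
                 stride: int = STRIDE) -> List[List[int]]:
    """Split into overlapping windows so no span loses all its context
    at a hard boundary."""
    if len(token_ids) <= max_len:
        return [token_ids]
    windows = []
    start = 0
    while start < len(token_ids):
        windows.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
        start += stride
    return windows

def classify_long(token_ids: List[int],
                  encode_window: Callable[[List[int]], float]) -> float:
    """Mean-pool per-window scores; max-pooling is another common choice
    when a single decisive passage should dominate."""
    scores = [encode_window(w) for w in chunk_tokens(token_ids)]
    return sum(scores) / len(scores)
```

The alternative families are long-context encoders with sparse attention (e.g. Longformer, BigBird) and hierarchical models that encode chunks and then run a small aggregator over the chunk embeddings; the chunk-and-pool sketch above is usually the cheapest thing to try first.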
Tags: bert, classification, documents, gpu, machinelearning, ranking, space, style, survey, tasks, token, tokens