Web: https://www.reddit.com/r/MachineLearning/comments/xhxpsl/d_using_special_tokens_for_a_domainspecific/

Sept. 19, 2022, 1:10 a.m. | /u/McAvagr

Machine Learning reddit.com

Hi everyone

I've recently dived into ViTs, and a thought crossed my mind that I was surprised not to find many papers exploring. Special tokens are pretty common in transformer architectures, but they usually play a background role, either structural ([BEG], [END], [SEP]) or as placeholders of sorts ([CLS], [MASK]). But I feel like self-attention allows for far more intricate constructs, and in theory one could build a whole "mini-language" of such tokens to influence the model's behaviour.

Is there a particular …
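To make the idea concrete, here is a minimal PyTorch sketch of what the post is gesturing at: a toy ViT encoder that prepends a [CLS] token plus a handful of extra learnable "instruction" tokens to the patch sequence, so the extra tokens can interact with the patches through self-attention. All names and hyperparameters (ViTWithSpecialTokens, num_special, dim, etc.) are made up for illustration; this is an assumed setup, not anything from the original post.

    import torch
    import torch.nn as nn

    class ViTWithSpecialTokens(nn.Module):
        """Toy ViT encoder with extra learnable special tokens.

        Besides the usual [CLS] token, `num_special` additional learnable
        tokens are prepended to the patch sequence; self-attention lets them
        exchange information with the image patches at every layer.
        """
        def __init__(self, img_size=224, patch_size=16, dim=384,
                     depth=6, heads=6, num_special=4):
            super().__init__()
            num_patches = (img_size // patch_size) ** 2
            # Patchify with a strided convolution (standard ViT patch embedding).
            self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                         stride=patch_size)
            # One [CLS] token plus num_special extra learnable tokens.
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.special_tokens = nn.Parameter(torch.zeros(1, num_special, dim))
            self.pos_embed = nn.Parameter(
                torch.zeros(1, 1 + num_special + num_patches, dim))
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True, norm_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

        def forward(self, x):
            b = x.size(0)
            # (B, 3, H, W) -> (B, num_patches, dim)
            patches = self.patch_embed(x).flatten(2).transpose(1, 2)
            cls = self.cls_token.expand(b, -1, -1)
            extra = self.special_tokens.expand(b, -1, -1)
            tokens = torch.cat([cls, extra, patches], dim=1) + self.pos_embed
            out = self.encoder(tokens)
            return out[:, 0]  # [CLS] output as the image representation

    model = ViTWithSpecialTokens()
    feats = model(torch.randn(2, 3, 224, 224))
    print(feats.shape)  # torch.Size([2, 384])

In this sketch the special tokens are just learned embeddings shared across inputs; a "mini-language" as described above would instead select or compose different special tokens per input to steer the model's behaviour.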

language machinelearning tokens transformers
