Feb. 12, 2024, 3:08 p.m. | /u/graphbook

r/MachineLearning (www.reddit.com)




[Project](https://cerbrec.com) **Origin**:

My colleague and I were MLEs/applied researchers, constantly frustrated trying to troubleshoot and customize transformers for production NLP use cases. This goes back to when BERT came out on TensorFlow 1 and you couldn't really step through a model at all. To be clear, considerable effort of course goes into purely cleaning data, but we found we could do much better at understanding and fixing problems by digging into the model architecture as well. …

