Feb. 29, 2024, 5:48 a.m. | Gregor Donabauer, Udo Kruschwitz

cs.CL updates on arXiv.org

arXiv:2402.18179v1 Announce Type: new
Abstract: Pre-training of neural networks has recently revolutionized the field of Natural Language Processing (NLP) and had previously demonstrated its effectiveness in computer vision. At the same time, advances in fake news detection have mainly been driven by the context-based paradigm, in which different types of signals (e.g. from social media) form graph-like structures that hold contextual information beyond the news article to be classified. We propose to merge these two developments by applying pre-training of …
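
To make the context-based paradigm mentioned in the abstract concrete, here is a minimal sketch (not the authors' method): a news article and its social-media signals are modelled as nodes of a small graph, encoded with a graph neural network, optionally pre-trained with a generic self-supervised feature-reconstruction objective, and then fine-tuned for fake/real classification. The specific pre-training strategy, features, and model names below are illustrative assumptions, since the excerpt does not specify them.

```python
# Minimal sketch of context-based fake news classification with a GNN.
# NOTE: this is an illustrative assumption, not the paper's actual pipeline.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool


class ContextGNN(torch.nn.Module):
    def __init__(self, in_dim=32, hidden=64, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.decoder = torch.nn.Linear(hidden, in_dim)        # used for pre-training
        self.classifier = torch.nn.Linear(hidden, num_classes)  # used for fine-tuning

    def encode(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return F.relu(self.conv2(h, edge_index))

    def pretrain_loss(self, x, edge_index):
        # Generic self-supervised objective: reconstruct node features
        # from the GNN embeddings (an assumed stand-in for pre-training).
        h = self.encode(x, edge_index)
        return F.mse_loss(self.decoder(h), x)

    def forward(self, x, edge_index, batch):
        h = self.encode(x, edge_index)
        return self.classifier(global_mean_pool(h, batch))


# Toy context graph: node 0 is the news article, nodes 1-4 are social-media
# context signals (e.g. users or shares), connected to the article node.
x = torch.randn(5, 32)
edge_index = torch.tensor([[0, 0, 0, 0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 0, 0, 0, 0]])
data = Data(x=x, edge_index=edge_index)
batch = torch.zeros(5, dtype=torch.long)  # all nodes belong to one graph

model = ContextGNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# 1) Pre-training on (possibly unlabeled) context graphs.
loss = model.pretrain_loss(data.x, data.edge_index)
loss.backward()
opt.step()
opt.zero_grad()

# 2) Fine-tuning on labeled fake/real news graphs.
logits = model(data.x, data.edge_index, batch)
loss = F.cross_entropy(logits, torch.tensor([1]))
loss.backward()
opt.step()
```

In practice the two stages would run over many graphs from a fake news dataset; the single toy graph above only illustrates the data flow from context graph to graph-level prediction.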
