Nov. 3, 2022, 1:12 a.m. | Mattie Tesfaldet, Derek Nowrouzezahrai, Christopher Pal

cs.LG updates on arXiv.org arxiv.org

Recent extensions of Cellular Automata (CA) have incorporated key ideas from
modern deep learning, dramatically extending their capabilities and catalyzing
a new family of Neural Cellular Automata (NCA) techniques. Inspired by
Transformer-based architectures, our work presents a new class of
attention-based NCAs formed using a spatially localized, yet globally
organized, self-attention scheme. We introduce an instance of this class named
Vision Transformer Cellular Automata (ViTCA). We present quantitative and
qualitative results on denoising autoencoding across six benchmark datasets,
comparing ViTCA to a …
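To make the "spatially localized, yet globally organized" idea concrete, here is a minimal sketch of one NCA update step in which each cell attends over its 3x3 neighborhood with dot-product self-attention. This is an illustrative toy, not the ViTCA architecture from the paper: the function name, projection matrices, wrap-around padding, and residual update are all assumptions chosen for a self-contained example.

```python
import numpy as np

def local_self_attention_nca_step(grid, Wq, Wk, Wv, radius=1):
    """One hypothetical NCA update: each cell attends over its
    (2*radius+1)^2 spatial neighborhood via dot-product self-attention.
    grid: (H, W, C) cell states; Wq, Wk, Wv: (C, C) projections."""
    H, W, C = grid.shape
    # Pad with wrap-around so every cell sees a full neighborhood
    # (a common toroidal convention in cellular automata).
    padded = np.pad(grid, ((radius, radius), (radius, radius), (0, 0)),
                    mode="wrap")
    out = np.empty_like(grid)
    size = 2 * radius + 1
    for i in range(H):
        for j in range(W):
            # Flatten the local neighborhood into a sequence of "tokens".
            nbhd = padded[i:i + size, j:j + size].reshape(-1, C)
            q = grid[i, j] @ Wq          # query from the center cell
            k = nbhd @ Wk                # keys from the neighborhood
            v = nbhd @ Wv                # values from the neighborhood
            scores = (k @ q) / np.sqrt(C)
            attn = np.exp(scores - scores.max())
            attn /= attn.sum()           # softmax over the neighborhood
            # Residual update keeps the dynamics CA-like: repeated local
            # steps propagate information globally across the grid.
            out[i, j] = grid[i, j] + attn @ v
    return out

rng = np.random.default_rng(0)
C = 4
grid = rng.normal(size=(8, 8, C))
Wq, Wk, Wv = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))
new_grid = local_self_attention_nca_step(grid, Wq, Wk, Wv)
print(new_grid.shape)  # (8, 8, 4)
```

Although each step only mixes information within a 3x3 window, iterating the update lets signals travel across the whole grid, which is the sense in which locally attentive updates can still be globally organized.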
