Feb. 26, 2024, 5:44 a.m. | Anirbit Mukherjee, Amartya Roy

cs.LG updates on arXiv.org

arXiv:2308.06338v3 Announce Type: replace
Abstract: Deep Operator Networks are an increasingly popular paradigm for solving regression in infinite dimensions, and hence for solving families of PDEs in one shot. In this work, we aim to establish a first-of-its-kind data-dependent lower bound on the size of DeepONets required for them to be able to reduce empirical error on noisy data. In particular, we show that for low training errors to be obtained on $n$ data points it is necessary that the common output …
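For context, a DeepONet approximates an operator $G$ by taking an inner product of two sub-networks that share a common output dimension $p$ (the quantity the paper's lower bound concerns): a branch net that sees the input function $u$ sampled at $m$ sensor points, and a trunk net that sees the query location $y$. The following is a minimal sketch with random, untrained weights, purely to illustrate the architecture; the layer sizes and sensor grid are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 20, 8  # sensor points, common output dimension p

def mlp(sizes):
    """Random-weight MLP with tanh hidden activations (illustration only)."""
    Ws = [rng.standard_normal((a, b)) / np.sqrt(a)
          for a, b in zip(sizes, sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return f

branch = mlp([m, 32, p])  # branch net: u(sensor grid) -> R^p
trunk  = mlp([1, 32, p])  # trunk net:  query y -> R^p

def deeponet(u_sensors, y):
    # G(u)(y) is approximated by the inner product <branch(u), trunk(y)>,
    # so both nets must end in the same dimension p.
    return float(branch(u_sensors) @ trunk(np.atleast_1d(float(y))))

# Example: an input function u = sin(x) sampled at m sensor points,
# evaluated at a single query location y = 0.5.
u = np.sin(np.linspace(0.0, np.pi, m))
prediction = deeponet(u, 0.5)
```

The key structural point for the abstract's claim is that `p` is shared between the two sub-networks; the paper lower-bounds how large this common output dimension must be, as a function of the $n$ training points, for the empirical error on noisy data to be small.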

