Approximation with Random Shallow ReLU Networks with Applications to Model Reference Adaptive Control
March 27, 2024, 4:42 a.m. | Andrew Lamperski, Tyler Lekang
cs.LG updates on arXiv.org arxiv.org
Abstract: Neural networks are regularly employed in adaptive control of nonlinear systems and related methods of reinforcement learning. A common architecture uses a neural network with a single hidden layer (i.e. a shallow network), in which the weights and biases are fixed in advance and only the output layer is trained. While classical results show that there exist neural networks of this type that can approximate arbitrary continuous functions over bounded regions, they are non-constructive, and …
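The architecture the abstract describes can be illustrated with a minimal sketch: sample the hidden-layer weights and biases once, freeze them, and fit only the output layer by least squares. The target function, sampling distributions, and network width below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: a continuous function on a bounded region [-1, 1].
def f(x):
    return np.sin(3 * x)

# Random shallow ReLU network: hidden weights and biases are sampled
# once and then held fixed (distributions here are assumptions).
n_hidden = 200
W = rng.normal(size=n_hidden)
b = rng.uniform(-1.0, 1.0, size=n_hidden)

def features(x):
    # Hidden layer with frozen random parameters: ReLU(x * W + b).
    return np.maximum(0.0, np.outer(x, W) + b)

# Only the output layer is trained, here via ordinary least squares.
x_train = np.linspace(-1.0, 1.0, 500)
Phi = features(x_train)
c, *_ = np.linalg.lstsq(Phi, f(x_train), rcond=None)

# Check the sup-norm approximation error over the bounded region.
x_test = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(features(x_test) @ c - f(x_test)))
print(f"sup-norm error: {err:.4f}")
```

With enough random hidden units, the trained output layer typically drives the sup-norm error down over the bounded region, which is the regime the paper's approximation results concern.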