Feb. 20, 2024, 5:43 a.m. | Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Di He, Zhouchen Lin

cs.LG updates on arXiv.org

arXiv:2402.11984v1 Announce Type: cross
Abstract: Neuromorphic computing with spiking neural networks is promising for energy-efficient artificial intelligence (AI) applications. However, unlike humans, who continually learn different tasks over a lifetime, neural network models suffer from catastrophic forgetting. How neuronal operations could solve this problem is an important question for AI and neuroscience. Many previous studies draw inspiration from observed neuroscience phenomena and propose episodic replay or synaptic metaplasticity, but they are not guaranteed to explicitly preserve knowledge for neuron …
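
For context on the episodic-replay baseline mentioned in the abstract, here is a minimal sketch of rehearsal-based continual learning; it is not the paper's proposed method. The model, toy tasks, buffer size, and sampling scheme are illustrative assumptions, and a spiking network with real task streams would replace them in practice.

```python
# Minimal episodic-replay (rehearsal) sketch for continual learning.
# Illustrative only: toy MLP, random "tasks", and an ad-hoc bounded buffer.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
replay_buffer = []   # stores (input, label) pairs from earlier tasks
BUFFER_SIZE = 200

def train_task(xs, ys, epochs=5, replay_k=16):
    """Train on one task while rehearsing samples from previous tasks."""
    for _ in range(epochs):
        for x, y in zip(xs.split(32), ys.split(32)):
            if replay_buffer:
                # mix a few stored old-task examples into the current batch
                old = random.sample(replay_buffer, min(replay_k, len(replay_buffer)))
                x = torch.cat([x, torch.stack([o[0] for o in old])])
                y = torch.cat([y, torch.stack([o[1] for o in old])])
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # keep a bounded random sample of this task's data for later rehearsal
    for x, y in zip(xs, ys):
        if len(replay_buffer) < BUFFER_SIZE:
            replay_buffer.append((x, y))
        elif random.random() < 0.1:
            replay_buffer[random.randrange(BUFFER_SIZE)] = (x, y)

# two toy "tasks" with random data, just to exercise the training loop
for _ in range(2):
    xs, ys = torch.randn(256, 20), torch.randint(0, 2, (256,))
    train_task(xs, ys)
```

As the abstract notes, such replay schemes draw on neuroscience observations but do not come with an explicit guarantee of knowledge preservation, which is the gap the paper targets.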
