May 4, 2022, 1:11 a.m. | Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie

cs.LG updates on arXiv.org

As spiking neural networks (SNNs) are increasingly deployed in efficiency-critical
real-world applications, security concerns about SNNs are attracting more
attention. Researchers have already demonstrated that an SNN can be attacked
with adversarial examples, so building a robust SNN has become an urgent issue.
Recently, many studies have applied certified training to artificial neural
networks (ANNs), which can provably improve the robustness of an NN model.
However, existing certifications cannot be transferred to SNNs directly because of
the distinct neuron behavior and …

arxiv network neural network spiking neural network
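
To illustrate the "distinct neuron behavior" the abstract points to, the sketch below shows a minimal discrete-time leaky integrate-and-fire (LIF) neuron update in Python. The specific neuron model, threshold, and leak constant used in the paper are not given in this excerpt, so these values and the function name `lif_step` are illustrative assumptions; the point is only that the output is a binary spike train with a hard reset, unlike the smooth activations that ANN certification methods assume.

```python
import numpy as np

def lif_step(v, x, v_th=1.0, tau=2.0):
    """One discrete-time step of a leaky integrate-and-fire neuron (illustrative values).

    v    : membrane potential carried over from the previous time step
    x    : weighted input current at this time step
    v_th : firing threshold
    tau  : leak constant
    """
    v = v / tau + x                           # leak the old potential, then integrate new input
    spike = (v >= v_th).astype(np.float32)    # emit a binary spike when the threshold is crossed
    v = v * (1.0 - spike)                     # hard reset of the potential after a spike
    return v, spike

# Unlike a ReLU, the neuron's output over time is a binary spike train, and the
# reset makes the input-output map discontinuous -- a rough picture of why bounds
# certified for ANN activations do not carry over to SNNs directly.
v = np.zeros(3)
for t in range(5):
    x = np.random.randn(3) * 0.5
    v, s = lif_step(v, x)
    print(t, s)
```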
