March 19, 2024, 4:42 a.m. | Tousif Rahman, Gang Mao, Sidharth Maheshwari, Rishad Shafik, Alex Yakovlev

cs.LG updates on arXiv.org

arXiv:2403.10538v1 Announce Type: cross
Abstract: System-on-Chip Field-Programmable Gate Arrays (SoC-FPGAs) offer significant throughput gains for machine learning (ML) edge inference applications via the design of co-processor accelerator systems. However, the design effort for training and translating ML models into SoC-FPGA solutions can be substantial and requires specialist knowledge of the trade-offs between model performance, power consumption, latency and resource utilization. In contrast to other ML algorithms, the Tsetlin Machine (TM) performs classification by forming logic propositions over the boolean actions of Tsetlin Automata …
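The abstract's description of TM classification, logic propositions (clauses) over boolean literals whose votes are summed per class, can be illustrated with a minimal sketch. The helper names (literals, clause_output, classify) and the hand-written XOR-style clauses below are illustrative assumptions, not the paper's implementation; in a real TM the clause contents are learned by Tsetlin Automata during training.

```python
# Minimal sketch of Tsetlin Machine inference (illustrative, not the paper's design).
from typing import List, Tuple

# A clause is a list of literal indices over the augmented input
# [x_0, ..., x_{n-1}, NOT x_0, ..., NOT x_{n-1}].
Clause = List[int]


def literals(x: List[int]) -> List[int]:
    """Augment a boolean input vector with its negated bits."""
    return x + [1 - b for b in x]


def clause_output(clause: Clause, lits: List[int]) -> int:
    """A clause fires (outputs 1) only if every included literal is 1."""
    return int(all(lits[i] == 1 for i in clause))


def classify(x: List[int],
             classes: List[Tuple[List[Clause], List[Clause]]]) -> int:
    """Each class sums votes: positive clauses add, negative clauses subtract.
    The class with the highest vote sum is the prediction."""
    lits = literals(x)
    scores = []
    for pos_clauses, neg_clauses in classes:
        score = sum(clause_output(c, lits) for c in pos_clauses) \
              - sum(clause_output(c, lits) for c in neg_clauses)
        scores.append(score)
    return max(range(len(scores)), key=lambda k: scores[k])


if __name__ == "__main__":
    # Hypothetical 2-bit example with hand-written clauses (XOR-like decision).
    xor_classes = [
        # class 0 (inputs equal): clauses "x0 AND x1", "NOT x0 AND NOT x1"
        ([[0, 1], [2, 3]], []),
        # class 1 (inputs differ): clauses "x0 AND NOT x1", "NOT x0 AND x1"
        ([[0, 3], [2, 1]], []),
    ]
    print(classify([1, 0], xor_classes))  # -> 1
    print(classify([1, 1], xor_classes))  # -> 0
```

Because inference reduces to AND gates and popcount-style vote summation, this structure maps naturally onto FPGA logic, which is the motivation for the SoC-FPGA acceleration discussed in the paper.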

