Feb. 6, 2024, 5:44 a.m. | Matthew DeLorenzo, Animesh Basak Chowdhury, Vasudev Gohil, Shailja Thakur, Ramesh Karri, Siddharth Garg, Jey

cs.LG updates on arXiv.org

Existing large language models (LLMs) for register transfer level (RTL) code generation face challenges such as compilation failures and suboptimal power, performance, and area (PPA) efficiency, owing to the lack of PPA awareness in conventional transformer decoding algorithms. In response, we present an automated transformer decoding algorithm that integrates Monte Carlo tree search (MCTS) for lookahead, guiding the transformer to produce compilable, functionally correct, and PPA-optimized code. Empirical evaluation with a fine-tuned language model on RTL codesets shows that our proposed technique …
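
To make the decoding-with-lookahead idea concrete, below is a minimal, self-contained Python sketch of MCTS-guided token selection: at each decoding step the search expands candidate next tokens proposed by the language model, rolls each branch out to a complete sequence, scores it with a reward that stands in for compilability and PPA, and commits to the most-visited token. The toy LM interface, the reward function, and every identifier here are illustrative assumptions, not the paper's actual implementation, models, or PPA evaluation.

import math
import random

class Node:
    def __init__(self, tokens, parent=None):
        self.tokens = tokens      # partial token sequence for this node
        self.parent = parent
        self.children = {}        # token -> child Node
        self.visits = 0
        self.value = 0.0          # accumulated rollout reward

def ucb(child, parent_visits, c=1.4):
    # Upper confidence bound balances exploring rare tokens vs. exploiting good ones.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def mcts_decode(lm_next_tokens, reward_fn, max_len=32, simulations=200):
    # Outer loop: commit one token at a time, each chosen after MCTS lookahead.
    tokens = []
    for _ in range(max_len):
        root = Node(tokens)
        for _ in range(simulations):
            # 1. Selection: descend through fully expanded nodes by UCB.
            node = root
            while node.children and len(node.children) == len(lm_next_tokens(node.tokens)):
                node = max(node.children.values(), key=lambda ch: ucb(ch, node.visits))
            # 2. Expansion: attach one untried candidate token proposed by the LM.
            untried = [t for t in lm_next_tokens(node.tokens) if t not in node.children]
            if untried:
                t = random.choice(untried)
                node.children[t] = Node(node.tokens + [t], parent=node)
                node = node.children[t]
            # 3. Rollout: complete the sequence cheaply, then score it once.
            rollout = list(node.tokens)
            while len(rollout) < max_len and rollout[-1:] != ["<eos>"]:
                cands = lm_next_tokens(rollout)
                if not cands:
                    break
                rollout.append(random.choice(cands))
            reward = reward_fn(rollout)   # stand-in for compile / functional / PPA checks
            # 4. Backpropagation: credit the reward to every node on the path.
            while node is not None:
                node.visits += 1
                node.value += reward
                node = node.parent
        if not root.children:
            break
        # Commit to the most-visited child, the standard MCTS move choice.
        best = max(root.children.items(), key=lambda kv: kv[1].visits)[0]
        tokens.append(best)
        if best == "<eos>":
            break
    return tokens

# Toy usage with a fake LM over a tiny vocabulary and a placeholder reward.
def toy_lm(prefix):
    return ["assign", "wire", ";", "<eos>"]

def toy_reward(seq):
    return 1.0 if seq and seq[-1] == "<eos>" else 0.0

print(mcts_decode(toy_lm, toy_reward, max_len=8, simulations=50))

In a realistic setting the rollout reward would instead query an RTL toolchain (compilation, functional tests, synthesis-based PPA estimates), which is what makes the lookahead PPA-aware rather than purely likelihood-driven.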
