Aug. 9, 2023, 5:11 p.m. | /u/crowwork

Machine Learning www.reddit.com

There have been many LLM inference solutions since the boom of open-source LLMs. Most of the performant ones are built on CUDA and optimized for NVIDIA GPUs. Meanwhile, given the high demand for compute, it is useful to bring support to a broader class of hardware accelerators, and AMD is one potential candidate.

We built a project that makes it possible to compile LLMs, deploy them on AMD GPUs using ROCm, and get competitive performance. More …
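The post doesn't spell out the software stack here, but as a rough illustration of what LLM inference on an AMD GPU via ROCm can look like today: ROCm builds of PyTorch expose AMD GPUs through the familiar `torch.cuda` API (CUDA calls are translated to HIP under the hood), so standard Hugging Face inference code runs unchanged. A minimal sketch, assuming a ROCm PyTorch install and `transformers`; the `gpt2` model is just an illustrative choice, not from the original post:

```python
# Sketch: LLM inference on an AMD GPU through a ROCm build of PyTorch.
# On ROCm, torch.cuda.* maps to the HIP device, so no code changes are needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

assert torch.cuda.is_available(), "no ROCm/HIP device visible to PyTorch"
print("running on:", torch.cuda.get_device_name(0))  # e.g. an AMD Radeon GPU

tok = AutoTokenizer.from_pretrained("gpt2")  # illustrative small model
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16)
model = model.to("cuda").eval()  # "cuda" resolves to the ROCm device here

inputs = tok("AMD GPUs can serve LLMs when", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```

The project described in the post goes further than this eager-mode path: it compiles the model itself for the target GPU rather than relying on a hand-tuned vendor kernel library, which is how it reaches competitive performance on ROCm hardware.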

