Feb. 5, 2024, 11:25 a.m. | /u/HumanSpinach2

Machine Learning www.reddit.com

Obviously CUDA is available for low-level GPU programming, but it takes a lot of time to write. Then you have libraries like PyTorch that implement high-level operations, but composing those ops can be extremely slow when you try to do complex things.
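
To make that concrete, here's a minimal sketch (a hypothetical illustration, not from any particular library) of why chained high-level PyTorch ops get slow: each op launches its own kernel and materializes an intermediate tensor, paying memory-bandwidth costs a single hand-fused CUDA kernel would avoid.

```python
import torch

def silu_plus_one(x):
    # Three separate high-level ops: each launches its own kernel
    # and writes a full intermediate tensor to GPU memory.
    y = torch.sigmoid(x)   # intermediate 1
    z = y * x              # intermediate 2
    return z + 1.0         # final output

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1_000_000, device=device)
out = silu_plus_one(x)
```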

Then there's the interesting space of languages that try to slot in just above CUDA in abstraction level - Triton and Halide.
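
For a sense of where Triton sits, here's a minimal sketch of a Triton kernel, in the style of its standard vector-add tutorial (the wrapper function and block size are assumptions for illustration): you still write the kernel yourself, but in Python, with blocks and masks instead of raw threads.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the final partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)       # one program per block of 1024 elements
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```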

Then there are the Einstein-notation-flavored libraries that are good for tensor reductions. Tensor Comprehensions is one that uses …
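
The post is cut off there, so to illustrate the Einstein-notation style it's pointing at, here's a minimal sketch using torch.einsum (my stand-in example; Tensor Comprehensions has its own DSL, which I won't reproduce here): repeated indices are contracted, and any index dropped from the output is summed over.

```python
import torch

A = torch.randn(8, 16, 32)
B = torch.randn(8, 32, 64)

# Batched matmul with a reduction over the batch axis, written as one
# Einstein-notation expression: k is contracted, and dropping b from the
# output spec sums over the batch dimension.
C = torch.einsum("bik,bkj->ij", A, B)

assert C.shape == (16, 64)
```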

