Feb. 5, 2024, 11:25 a.m. | /u/HumanSpinach2

Machine Learning www.reddit.com

Obviously CUDA is available for low-level GPU programming, but it takes a lot of time to program in. Then you have libraries like PyTorch that implement high-level operations, but these can be extremely slow when you try to do complex things.
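To make the PyTorch point concrete, here's a minimal sketch (my own illustration, not from the original post) of why composing high-level ops gets slow: in eager mode, every op launches its own kernel and materializes an intermediate tensor.

```python
import torch

x = torch.randn(1 << 20, device="cuda")

# In eager PyTorch, each of these lines launches a separate CUDA kernel
# and allocates a full-size intermediate tensor. For long chains of
# simple ops, launch overhead and memory traffic dominate the actual math.
y = torch.sin(x)
y = y * 2.0
y = torch.exp(y)
y = y + x
```

A hand-written CUDA kernel (or one generated by Triton/Halide) would fuse all of this into a single pass over memory.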

Then there's the interesting space of languages that try to slot in just above CUDA in abstraction level - Triton and Halide.
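For a sense of what "just above CUDA" looks like, here's a minimal Triton vector-add sketch (along the lines of Triton's own tutorial; the function names are just illustrative): you still think in blocks and pointers, but Triton handles thread-level indexing, masking, and scheduling for you.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide chunk of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the tail of the array
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x, y):
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```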

Then there are the Einstein-notation-flavored libraries that are good for tensor reductions. Tensor Comprehensions is one that uses …
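As a rough illustration of the Einstein-notation style (using torch.einsum here, not Tensor Comprehensions' own syntax): you name the indices, and any index that doesn't appear in the output is summed over, which covers most reductions and contractions in one line.

```python
import torch

A = torch.randn(8, 64, 32)  # indices (b, i, k)
B = torch.randn(8, 32, 16)  # indices (b, k, j)

# Batched matmul: k appears in both inputs but not the output, so it is reduced.
C = torch.einsum("bik,bkj->bij", A, B)

# Full contraction: no output indices, so everything is summed to a scalar.
s = torch.einsum("bik,bik->", A, A)
```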

