April 2, 2024, 1:36 p.m.

Latest stories for ZDNET in Artificial-Intelligence www.zdnet.com

Two new open-source large language models, AI21's Jamba and Databricks' DBRX, sharply reduce the compute and memory needed for inference while matching or beating the performance of top models such as GPT-3.5 and Llama 2.
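Part of the compute saving comes from mixture-of-experts routing, which DBRX uses throughout and Jamba uses in some of its layers: a router sends each token to only a few expert MLPs, so active parameters per token are a fraction of total parameters. The sketch below illustrates the idea in PyTorch; the sizes (num_experts, top_k, d_model) are toy values for illustration, not either model's real configuration.

```python
import torch
import torch.nn.functional as F

# Toy mixture-of-experts layer: each token activates only top_k of
# num_experts expert MLPs, so per-token compute scales with top_k,
# not with the total number of experts. Illustrative sizes only.
num_experts, top_k, d_model = 8, 2, 64

experts = torch.nn.ModuleList(
    [torch.nn.Linear(d_model, d_model) for _ in range(num_experts)]
)
router = torch.nn.Linear(d_model, num_experts)

def moe_forward(x: torch.Tensor) -> torch.Tensor:
    # x: (tokens, d_model). The router scores every expert per token,
    # then only the top_k experts actually run for each token.
    logits = router(x)                          # (tokens, num_experts)
    weights, idx = logits.topk(top_k, dim=-1)   # pick top_k experts
    weights = F.softmax(weights, dim=-1)        # normalize their weights
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        mask = idx == e                          # tokens routed to expert e
        token_ids, slot = mask.nonzero(as_tuple=True)
        if token_ids.numel() == 0:
            continue                             # expert e gets no tokens
        out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
    return out

tokens = torch.randn(4, d_model)
print(moe_forward(tokens).shape)  # torch.Size([4, 64])
```

With these toy numbers, each token runs 2 of 8 experts, so the expert compute per token is a quarter of a dense layer of the same total width; the released models apply the same trick at much larger scale.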


Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Senior Data Engineer

@ Quantexa | Sydney, New South Wales, Australia

Staff Analytics Engineer

@ Warner Bros. Discovery | 230 Park Avenue South, New York, NY