March 13, 2024, 1 p.m.

Latest stories for ZDNET in Artificial Intelligence | www.zdnet.com

The new Cerebras chip, the size of a single semiconductor wafer, doubles the performance of its predecessor and can handle large language models with tens of trillions of parameters.

