Jan. 12, 2024, 12:19 a.m. | /u/herodotick

Machine Learning | www.reddit.com

Hi all, I've stumbled upon the NeurIPS paper "Large Language Models Are Zero-Shot Time Series Forecasters" [2310.07820.pdf (arxiv.org)](https://arxiv.org/pdf/2310.07820.pdf) and I wonder what people in time series think about it. The paper's authors summarize the method: "At its core, this method represents the time series as a string of numerical digits, and views time series forecasting as next-token prediction in text".
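For intuition, here is a minimal sketch of that encoding idea (my own simplification, not the authors' code): values are rescaled by a percentile, truncated to fixed precision, and serialized as space-separated digit strings with commas between timesteps, so forecasting becomes ordinary next-token prediction over the resulting text.

```python
import numpy as np

def encode_series(values, precision=2, alpha=0.9):
    """Serialize a 1-D series as a digit string, in the spirit of the paper:
    rescale by a percentile, fix the precision, drop the decimal point,
    space-separate digits (so GPT-3-style tokenizers see one digit per
    token), and comma-separate timesteps. Simplified sketch; the paper's
    actual scheme handles offsets, signs, and tokenizer differences
    (e.g. GPT-3 vs. LLaMA) more carefully."""
    values = np.asarray(values, dtype=float)
    scale = np.percentile(np.abs(values), alpha * 100)  # robust scale factor
    digits = [f"{v / scale:.{precision}f}".replace(".", "") for v in values]
    return " , ".join(" ".join(d) for d in digits), scale

prompt, scale = encode_series([0.64, 0.69, 0.71, 0.81])
print(prompt)  # -> "0 8 2 , 0 8 8 , 0 9 1 , 1 0 4"
```

Decoding the model's completion just reverses these steps: split on commas, remove the spaces, reinsert the decimal point, and multiply by the saved scale.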

The authors seem to show performance nearly matching, and sometimes exceeding, standard baselines such as ARIMA on the Darts benchmark datasets, with …
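For context on what's being compared against, that kind of classical baseline takes only a few lines with the Darts library; the dataset and split below are arbitrary choices of mine for illustration, not the paper's evaluation setup.

```python
from darts.datasets import AirPassengersDataset
from darts.models import ARIMA

# Load a standard benchmark series and hold out the last 20% for evaluation.
series = AirPassengersDataset().load()
train, test = series.split_before(0.8)

# Fit a default ARIMA model and forecast over the held-out horizon.
model = ARIMA()
model.fit(train)
forecast = model.predict(len(test))

print(forecast.values()[:5])
```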

