Web: http://arxiv.org/abs/2205.02269

May 6, 2022, 1:11 a.m. | Pengmiao Zhang, Ajitesh Srivastava, Anant V. Nori, Rajgopal Kannan, Viktor K. Prasanna

cs.LG updates on arXiv.org

Machine learning algorithms have shown the potential to improve prefetching
performance by accurately predicting future memory accesses. Existing
approaches borrow models from text prediction, treating prefetching as a
classification problem over sequences. However, the vast and sparse memory
address space leads to a large vocabulary, which makes this modeling
impractical. The number and order of outputs for multiple-cache-line
prefetching are also fundamentally different from text prediction. We
propose TransFetch, a novel way to model prefetching. To …
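To see why the vocabulary problem arises, consider a rough back-of-the-envelope sketch (not from the paper; the address width and cache-line size below are common assumptions, not values TransFetch specifies). Treating each distinct cache line as a class token, the way a language model treats a word, yields an output space far beyond what a softmax classifier can handle:

```python
# Illustrative sketch: why a text-style classification vocabulary
# over raw memory addresses is impractical for prefetching.

ADDR_BITS = 64    # virtual address width (assumption)
LINE_BYTES = 64   # typical cache-line size (assumption)

# If every cache line is a candidate output "token", the vocabulary is:
vocab_size = 2 ** ADDR_BITS // LINE_BYTES  # 2**58 possible classes

def to_line_ids(addrs, line_bytes=LINE_BYTES):
    """Map byte addresses to cache-line IDs by dropping the offset bits."""
    return [a // line_bytes for a in addrs]

# A short access trace touches only a handful of lines...
trace = [0x7F001040, 0x7F001080, 0x7F0010C0, 0x7F001100]
lines = to_line_ids(trace)

# ...yet a classifier over raw line IDs would need 2**58 output classes,
# and almost all of them never appear in any given program's trace.
print(vocab_size)
print(lines)
```

This is the sparsity the abstract refers to: real traces concentrate on a tiny, program-specific subset of that space, so a fixed text-prediction-style vocabulary is both enormous and mostly empty.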

Tags: arxiv, attention, segmentation
