Feb. 13, 2024, 5:42 a.m. | Francisco Durán, Silverio Martínez-Fernández, Matias Martinez, Patricia Lago

cs.LG updates on arXiv.org arxiv.org

The growing use of large machine learning models raises concerns about their increasing computational demands. While the energy consumption of the training phase has received attention, fewer works have considered the inference phase. For ML inference, ML serving, the step that binds ML models into the ML system for user access, is critical yet understudied for achieving efficiency in ML applications.
We examine the literature in ML architectural design decisions and Green AI, with a special focus …
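To make the "ML serving" step concrete, the following is a minimal sketch of binding a model behind a network endpoint for user access, using only the Python standard library. The linear scorer standing in for a trained model is hypothetical, not from the paper; a real serving stack (e.g., TorchServe or TensorFlow Serving) adds batching, model versioning, and hardware management on top of this idea.

```python
# Minimal sketch of ML serving: a (hypothetical) model bound to an
# HTTP endpoint that users query for predictions.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def model_predict(features):
    # Hypothetical stand-in for a trained ML model: a fixed linear scorer.
    weights = [0.5, -0.25, 1.0]
    return sum(w * x for w, x in zip(weights, features))

class ServingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Deserialize the request, run inference, return the prediction.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        result = model_predict(payload["features"])
        body = json.dumps({"prediction": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=0):
    # port=0 lets the OS pick a free port; the real port is
    # available as server.server_address[1].
    server = HTTPServer(("127.0.0.1", port), ServingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Design decisions at exactly this layer (request batching, model placement, hardware binding) are what determine the energy cost of each inference request.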

