Sept. 13, 2022 | Philipp Schmid (schmidphilipp1995@gmail.com)

philschmid blog www.philschmid.de

Learn how to optimize GPT-J for GPU inference with a single line of code using Hugging Face Transformers and DeepSpeed.

Tags: DeepSpeed, GPT-J, GPU, Hugging Face Transformers, inference, optimization
