July 24, 2023, 8:58 a.m. | /u/comical_cow

Machine Learning www.reddit.com

I am running text inference on Llama2-7b through LangChain. I have downloaded the model from Hugging Face via LangChain's HuggingFace integration, and I am running it on an AWS ml.g4dn.12xlarge instance, which has 4x NVIDIA T4 GPUs for a total of 64 GB of GPU memory, plus 192 GB of system memory. It is able to answer small queries in around 10 seconds, and up to 3 minutes for big queries.

The task I am doing is retrieving information from a document (the Understanding Machine Learning PDF) in …
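For reference, a minimal sketch of that kind of setup, assuming the 2023-era LangChain import path and the Hugging Face transformers pipeline; the model ID, dtype, and generation settings here are illustrative, not taken from the post:

```python
# Minimal sketch: load Llama-2-7b sharded across multiple GPUs and wrap it
# for LangChain. Assumes transformers, accelerate, and langchain installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed chat variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory; T4s support fp16 but not bf16
    device_map="auto",          # shards layers across the 4x T4 GPUs
)

gen_pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,  # capping output length bounds worst-case latency
)

llm = HuggingFacePipeline(pipeline=gen_pipe)
print(llm("What is empirical risk minimization?"))
```

With `device_map="auto"`, layers are split across the four T4s, so each forward pass hops between GPUs; long generations (many output tokens) pay that cost per token, which is consistent with small queries finishing in seconds while large ones take minutes.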
