March 28, 2024, 11:04 a.m. | Prabhu R

DEV Community dev.to

In the preceding article, we were introduced to AI/ML concepts and explored running a local Large Language Model (LLM) with Ollama. We then interacted with it from Java using JBang and Langchain4j.


Now, let's explore what "chat memory" is and how Langchain4j takes over the cumbersome task of maintaining it, as shown in the sketch further below.


To begin with, let's discuss the necessity of chat memory. Since language models (LLMs) inherently lack the ability to remember anything between calls, each request is processed in isolation; the application must therefore collect the previous messages and send them back with every new prompt so the model can answer in the context of the whole conversation.
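To make this concrete, here is a minimal sketch of how Langchain4j's `MessageWindowChatMemory` can keep the recent messages and replay them to a local Ollama model on every call via an `AiServices` interface. The model name (`llama2`), the base URL, and the dependency versions in the JBang `//DEPS` lines are assumptions for illustration; adjust them to your local setup.

```java
///usr/bin/env jbang "$0" "$@" ; exit $?
//DEPS dev.langchain4j:langchain4j:0.29.1
//DEPS dev.langchain4j:langchain4j-ollama:0.29.1

import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;

public class ChatMemoryDemo {

    // A simple AI service interface; Langchain4j generates the implementation.
    public interface Assistant {
        String chat(String message);
    }

    public static void main(String[] args) {
        // Local Ollama model; model name and URL are illustrative assumptions.
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("llama2")
                .build();

        // Keep only the 10 most recent messages; older ones are evicted automatically.
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(10);

        // AiServices wires the model and the memory together: each call sends the
        // stored messages along with the new one, then records the model's reply.
        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .chatMemory(memory)
                .build();

        System.out.println(assistant.chat("Hi, my name is Prabhu."));
        // The follow-up only works because the memory replays the first exchange.
        System.out.println(assistant.chat("What is my name?"));
    }
}
```

Langchain4j also offers a token-based variant, `TokenWindowChatMemory`, which evicts old messages by token count rather than message count, which can be the better fit when prompts vary widely in length.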
