April 27, 2024, 5:07 a.m. | Indika_Wimalasuriya

A few weeks back, I presented at Conf42 LLM2024 on "LLMs in AWS: Observability Maturity from Foundations to AIOps." This blog post covers the technical elements I discussed in that presentation.


Disclaimer: LLM observability has two parts:



  1. Direct LLM observability, which means integrating observability capabilities into the LLM itself during training, deployment, and maintenance (a minimal instrumentation sketch follows below). This allows us to gain insight into the LLM's overall performance, detect anomalies or performance issues, and understand …
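To make the "direct" part concrete, here is a minimal sketch of instrumenting an Amazon Bedrock model invocation and publishing latency and token-count metrics to CloudWatch. The model ID, metric namespace, and request/response shapes (Anthropic's messages format on Bedrock) are illustrative assumptions on my part, not details from the talk:

```python
import json
import time

import boto3

# Illustrative assumptions: model ID and namespace are placeholders.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"
NAMESPACE = "LLM/Observability"

bedrock = boto3.client("bedrock-runtime")
cloudwatch = boto3.client("cloudwatch")


def invoke_with_metrics(prompt: str) -> str:
    """Invoke a Bedrock model, then publish latency and token metrics."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    })

    # Time the model invocation itself.
    start = time.perf_counter()
    response = bedrock.invoke_model(modelId=MODEL_ID, body=body)
    latency_ms = (time.perf_counter() - start) * 1000.0

    payload = json.loads(response["body"].read())
    usage = payload.get("usage", {})

    # Publish per-invocation telemetry; CloudWatch alarms or anomaly
    # detectors can then be attached to these metrics.
    cloudwatch.put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[
            {
                "MetricName": "InvocationLatencyMs",
                "Value": latency_ms,
                "Unit": "Milliseconds",
                "Dimensions": [{"Name": "ModelId", "Value": MODEL_ID}],
            },
            {
                "MetricName": "OutputTokens",
                "Value": float(usage.get("output_tokens", 0)),
                "Unit": "Count",
                "Dimensions": [{"Name": "ModelId", "Value": MODEL_ID}],
            },
        ],
    )
    return payload["content"][0]["text"]
```

Once metrics like these are flowing, CloudWatch alarms and anomaly detection can be layered on top of them, which is one way to start moving from foundational monitoring toward AIOps.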
