Using ChatLiteLLM() - Langchain

Pre-Requisites

!pip install litellm langchain

Quick Start

import os
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

os.environ['OPENAI_API_KEY'] = ""
chat = ChatLiteLLM(model="gpt-3.5-turbo")
messages = [
    HumanMessage(
        content="what model are you"
    )
]
chat.invoke(messages)
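
ChatLiteLLM implements the standard LangChain runnable interface, so you can also stream responses. A minimal sketch, reusing the chat and messages objects from above:

# Stream the reply token-by-token instead of waiting for the full response.
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)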

Use Langchain ChatLiteLLM with MLflow

MLflow provides an open-source observability solution for ChatLiteLLM.

To enable the integration, call mlflow.litellm.autolog() at the start of your code. No other setup is necessary.

import mlflow

mlflow.litellm.autolog()

Once the auto-tracing is enabled, you can invoke ChatLiteLLM and see recorded traces in MLflow.

import os
from langchain_community.chat_models import ChatLiteLLM

os.environ['OPENAI_API_KEY']="sk-..."

chat = ChatLiteLLM(model="gpt-4o-mini")
chat.invoke("Hi!")

Use Langchain ChatLiteLLM with Lunary

import os
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage
import litellm

os.environ["LUNARY_PUBLIC_KEY"] = ""  # from https://app.lunary.ai/settings
os.environ['OPENAI_API_KEY'] = "sk-..."

litellm.success_callback = ["lunary"]
litellm.failure_callback = ["lunary"]

chat = ChatLiteLLM(model="gpt-4o")
messages = [
    HumanMessage(
        content="what model are you"
    )
]
chat.invoke(messages)
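
Because the Lunary callbacks are registered globally on litellm, calls made through a LangChain chain are traced as well. A minimal sketch (the prompt below is illustrative):

from langchain_core.prompts import ChatPromptTemplate

# Any ChatLiteLLM invocation inside the chain goes through the Lunary callbacks.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])
chain = prompt | chat
chain.invoke({"question": "what model are you"})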

Get more details here

Use LangChain ChatLiteLLM + Langfuse

Check out this section for more details on how to integrate Langfuse with ChatLiteLLM.
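
As a quick orientation, the Langfuse integration follows the same callback pattern as Lunary. A hedged sketch, assuming credentials are supplied via the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables (see the linked section for the full setup):

import os
from langchain_community.chat_models import ChatLiteLLM
import litellm

# Assumed environment variables; refer to the Langfuse section for details.
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["OPENAI_API_KEY"] = "sk-..."

# Route success and failure events to Langfuse, mirroring the Lunary example.
litellm.success_callback = ["langfuse"]
litellm.failure_callback = ["langfuse"]

chat = ChatLiteLLM(model="gpt-3.5-turbo")
chat.invoke("Hi!")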