Introduction to LangChain
Developing LLM applications with LangChain
LangChain structure and modules
Using open-source models from Hugging Face
Using models from the OpenAI API
LangChain prompt templates
Chains in LangChain
Utility chains
RetrievalQA chain
Foundational chains
Quiz: Introduction to LangChain
LangChain use cases
LangChain use cases
How can I use LangChain in my team?
Quiz: LangChain use cases
Handling documents with indexes
How to handle documents with indexes in LangChain?
The Document class
Document Loaders: PDF
Document Loaders: CSV with Pandas DataFrames
Document Loaders: JSONL
Document Transformers: TextSplitters
Chatbot project: environment setup for LangChain and data collection
Chatbot project: creating documents from Hugging Face
Quiz: Handling documents with indexes
Embeddings and vector databases
Using embeddings and vector databases with LangChain
How to use OpenAI embeddings in LangChain?
How to use Hugging Face embeddings in LangChain?
Chroma vector store in LangChain
Chatbot project: ingesting documents into Chroma
RetrievalQA: a chain for asking questions
Chatbot project: conversation chain
Chatbot project: RetrievalQA chain
Quiz: Embeddings and vector databases
Chats and memory with LangChain
What is memory for in chains and chats?
Using chat models with LangChain
Chat prompt templates
ConversationBufferMemory
ConversationBufferWindowMemory
ConversationSummaryMemory
ConversationSummaryBufferMemory
Entity memory
Chatbot project: chat history with ConversationalRetrievalChain
Quiz: Chats and memory with LangChain
Evolution of LLM usage
LangChain and LLMs in constant evolution
In the world of chatbot development, it is critical to understand how chatbots remember and respond to previous interactions. One efficient technique for managing memory is ConversationBufferWindowMemory, which stores only a fixed number of interactions, keeping the most recent ones in memory. This buffer window technique is essential in cases where resource management is critical, such as in long-running interactions with language models.
To implement this memory, you create an instance of ConversationBufferWindowMemory from LangChain's memory module. The parameter k defines how many interactions will be stored in memory; how many to remember should depend on the specific context of the chatbot being developed. A well-configured window memory keeps the prompt small and costs predictable while still preserving the most recent context.
from langchain.memory import ConversationBufferWindowMemory

# Create an instance of window memory
window_memory = ConversationBufferWindowMemory(k=3)  # For example, remember the 3 most recent interactions
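To see what the window actually keeps, you can write a few exchanges into the memory and read the buffer back. This is a minimal sketch using LangChain's save_context and load_memory_variables methods; the messages themselves are made up for illustration:

# Record some exchanges directly in the memory
window_memory.save_context({"input": "Hi, I'm Omar."}, {"output": "Hello Omar! How can I help?"})
window_memory.save_context({"input": "What is LangChain?"}, {"output": "A framework for building LLM apps."})
window_memory.save_context({"input": "And Chroma?"}, {"output": "An open-source vector store."})
window_memory.save_context({"input": "Thanks!"}, {"output": "You're welcome."})

# With k=3, only the last 3 exchanges remain in the buffer
print(window_memory.load_memory_variables({}))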
Once the window buffer is configured, the next step is to integrate it with the language model. We are going to use a pre-instantiated chat model, such as OpenAI's GPT-3.5 Turbo. The process is straightforward, and ensuring that the model is correctly configured to take advantage of the memory is crucial.
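The snippet below is a minimal sketch of how such a model instance might be created with the classic langchain Python package; it assumes an OPENAI_API_KEY environment variable is set, and the variable name gpt_3_5_turbo is simply chosen to match the code that follows:

from langchain.chat_models import ChatOpenAI

# Assumes OPENAI_API_KEY is set in the environment
gpt_3_5_turbo = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)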
The integration is done through a ConversationChain, where you specify the language model and the memory to use. Setting verbose to true can be useful in development phases to verify how the model processes messages and how the memory stores interactions. Through the predict method, you can converse with the chatbot, verifying how it responds and how it enriches the stored interactions.

from langchain.chains import ConversationChain

# Create a conversation chain with the model and the memory
conversation = ConversationChain(llm=gpt_3_5_turbo, verbose=True, memory=window_memory)

# Start the interaction
response = conversation.predict(input="What's up? How are you? I'm Omar and I write very colloquially.")
print(response)
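If the conversation continues past the window size, the earliest turns are silently dropped. A quick way to observe this, continuing the sketch above, is to run a few more turns and then inspect the buffer:

# With k=3, after three more exchanges the first message (where Omar introduced himself) is gone
conversation.predict(input="What can you help me with?")
conversation.predict(input="Explain what a prompt template is.")
conversation.predict(input="And what is a vector store?")

print(window_memory.load_memory_variables({}))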
When working with window memories, it is fundamental to consider the balance between memory depth and cost: a larger k preserves more context, but the whole window is sent with every request, so token usage and latency grow with it.
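As a rough, purely illustrative estimate (the tokens-per-exchange figure below is an assumption, not a measurement), the history overhead grows linearly with k:

# Hypothetical back-of-the-envelope estimate of the history sent per request
AVG_TOKENS_PER_EXCHANGE = 50  # assumed average size of one human + AI exchange

for k in (1, 3, 10, 50):
    print(f"k={k}: ~{k * AVG_TOKENS_PER_EXCHANGE} history tokens added to every call")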
It is essential to define the type of conversation you want the chatbot to hold, since this influences both its design and the resources required for its operation. Considering these aspects from the beginning makes development more efficient and effective.
This guide provides a use case for ConversationBufferWindowMemory, for holding conversations that keep only the most recent interactions in context.

Each application may have different requirements for how memory is queried. The memory module should make it easy both to get simple memory systems up and running and to write custom systems when necessary.

In LangChain, this can be implemented with ConversationBufferWindowMemory, which keeps a list of the conversation's interactions over time but only uses the last k of them.
import { BufferWindowMemory, ChatMessageHistory } from 'langchain/memory'
import { HumanMessage, AIMessage } from 'langchain/schema'

// Seed the memory with a previous exchange
const chatHistory = new ChatMessageHistory([
  new HumanMessage("Hi! I'm Jim."),
  new AIMessage(
    "Hi Jim! It's nice to meet you. My name is AI. What would you like to talk about?",
  ),
])

// Keep only the last k = 1 exchange in the window
const memory = new BufferWindowMemory({
  k: 1,
  chatHistory,
})
With the above, a conversation history was added to the LLM to give it context and carry out the next task or question correctly. However, after a few interactions, because only the last k messages are kept, it will forget the earlier context.
import { ConversationChain } from 'langchain/chains'

// Chain the chat model together with the window memory
const chain = new ConversationChain({
  llm,
  memory,
})

let response = await chain.call({ input: 'What is an LLM?' })
console.log(response)

response = await chain.call({ input: 'What is OpenAI?' })
console.log(response)

// With k = 1, the introduction ("I'm Jim") has already left the window
response = await chain.call({ input: "What's my name?" })
console.log(response)
ConversationBufferWindowMemory is a type of memory that is useful for keeping a sliding window of the most recent interactions, so that the buffer does not grow too large.
import { ConversationChain } from 'langchain/chains'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { BufferWindowMemory, ChatMessageHistory } from 'langchain/memory'
import { HumanMessage, AIMessage } from 'langchain/schema'
const API_TOKEN = '' // 👈 Enter the API token from OpenAI

const llm = new ChatOpenAI({
  modelName: 'gpt-4',
  temperature: 0,
  openAIApiKey: API_TOKEN,
})
const chatHistory = new ChatMessageHistory([
  new HumanMessage("Hi! I'm Jim."),
  new AIMessage(
    "Hi Jim! It's nice to meet you. My name is AI. What would you like to talk about?",
  ),
])

const memory = new BufferWindowMemory({
  k: 1,
  chatHistory,
})
const chain = new ConversationChain({
  llm,
  memory,
})
let response = await chain.call({ input: 'What is an LLM?' })
console.log(response)

response = await chain.call({ input: 'What is OpenAI?' })
console.log(response)

response = await chain.call({ input: "What's my name?" })
console.log(response)