LangChain Practice 2 - LLM Cache and Memory
1. LLM / ChatModel
LLM? A text-completion interface: it takes a single prompt string and returns a completion string.
ChatModel? A chat interface: it takes a list of role-tagged messages (system / human / AI) and returns an AI message.
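The difference in call shape can be sketched without calling any real model. This is a minimal sketch; `fake_llm`, `Message`, and `fake_chat_model` are illustrative stand-ins, not LangChain classes:

```python
from dataclasses import dataclass

# LLM-style interface: one prompt string in, one completion string out.
# (fake_llm is an illustrative stand-in, not a LangChain class.)
def fake_llm(prompt: str) -> str:
    return f"completion for: {prompt}"

# ChatModel-style interface: a list of role-tagged messages in, one AI message out.
@dataclass
class Message:
    role: str      # "system" | "human" | "ai"
    content: str

def fake_chat_model(messages: list[Message]) -> Message:
    last_human = [m for m in messages if m.role == "human"][-1]
    return Message(role="ai", content=f"reply to: {last_human.content}")

print(fake_llm("hi"))                          # a plain string comes back
reply = fake_chat_model([Message("system", "Be brief."),
                         Message("human", "hi")])
print(reply.role, reply.content)               # a message object comes back
```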
2. Caching
1) InMemoryCache
from langchain.globals import set_llm_cache
from langchain.cache import InMemoryCache
# In-memory cache
set_llm_cache(InMemoryCache())
2) SQLiteCache
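SQLiteCache stores the same prompt-to-response mapping in a SQLite file, so cached answers survive process restarts (configured via `set_llm_cache(SQLiteCache(database_path=".langchain.db"))`). The mechanism can be sketched with the stdlib `sqlite3` module; `TinySQLiteCache` is an illustrative stand-in, not the LangChain class:

```python
import sqlite3

# Sketch of a SQLite-backed LLM cache (illustrative, not LangChain's SQLiteCache).
class TinySQLiteCache:
    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        # Key on (prompt, llm_string): the same prompt asked of a
        # different model / parameter set must not be conflated.
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS llm_cache ("
            "prompt TEXT, llm_string TEXT, response TEXT,"
            "PRIMARY KEY (prompt, llm_string))")

    def lookup(self, prompt: str, llm_string: str):
        row = self.conn.execute(
            "SELECT response FROM llm_cache WHERE prompt=? AND llm_string=?",
            (prompt, llm_string)).fetchone()
        return row[0] if row else None

    def update(self, prompt: str, llm_string: str, response: str):
        self.conn.execute(
            "INSERT OR REPLACE INTO llm_cache VALUES (?, ?, ?)",
            (prompt, llm_string, response))
        self.conn.commit()

cache = TinySQLiteCache()
cache.update("hello", "gpt-x(temperature=0)", "Hi there!")
print(cache.lookup("hello", "gpt-x(temperature=0)"))   # served from the DB
```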
3) RedisCache
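RedisCache keeps entries in a Redis server, so multiple processes can share one cache, and Redis can expire entries with a TTL. Running a Redis server is out of scope here; the TTL behavior alone can be sketched in-process (illustrative, not the LangChain class; real Redis expires keys server-side):

```python
import time

# Sketch of TTL-based cache expiry (illustrative, not LangChain's RedisCache).
class TTLCache:
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock, handy for testing
        self.store = {}             # key -> (value, stored_at)

    def set(self, key, value):
        self.store[key] = (value, self.clock())

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, stored_at = item
        if self.clock() - stored_at > self.ttl:
            del self.store[key]     # entry expired, drop it
            return None
        return value

now = [0.0]
cache = TTLCache(ttl_seconds=60, clock=lambda: now[0])
cache.set(("prompt", "model"), "cached response")
print(cache.get(("prompt", "model")))   # fresh: returned
now[0] = 61.0
print(cache.get(("prompt", "model")))   # expired: None
```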
3. Memory
1) ConversationBufferMemory
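Buffer memory keeps every turn of the conversation verbatim and replays it into the prompt. A minimal sketch of that idea (`BufferMemory` is illustrative, not the LangChain class):

```python
# Sketch of buffer memory: keep the whole history verbatim (illustrative).
class BufferMemory:
    def __init__(self):
        self.turns = []   # list of (human, ai) pairs

    def save_context(self, human: str, ai: str):
        self.turns.append((human, ai))

    def load(self) -> str:
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

m = BufferMemory()
m.save_context("Hi", "Hello!")
m.save_context("What is LangChain?", "A framework for LLM apps.")
print(m.load())   # both turns, in order
```

The drawback is that the replayed history grows without bound, which is what the window, token, and summary variants address.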
2) ConversationBufferWindowMemory
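The window variant bounds the history by keeping only the last k turns. A sketch of that eviction rule (illustrative, not the LangChain class):

```python
from collections import deque

# Sketch of window memory: keep only the last k turns (illustrative).
class WindowMemory:
    def __init__(self, k: int):
        self.turns = deque(maxlen=k)   # deque silently drops the oldest turn

    def save_context(self, human: str, ai: str):
        self.turns.append((human, ai))

    def load(self) -> str:
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

m = WindowMemory(k=2)
for i in range(4):
    m.save_context(f"q{i}", f"a{i}")
print(m.load())   # only the last two turns (q2/a2, q3/a3) remain
```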
3) ConversationTokenBufferMemory
4) ConversationSummaryMemory / ConversationSummaryBufferMemory
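The summary variants compress history instead of dropping it: older turns are folded into a running summary while (in the buffer variant) recent turns stay verbatim. The real classes call an LLM to produce the summary; in this sketch `summarize` is a trivial stand-in, and the class is illustrative:

```python
# Trivial stand-in for the LLM summarization call (an assumption for illustration).
def summarize(summary: str, human: str, ai: str) -> str:
    return (summary + f" [talked about: {human}]").strip()

# Sketch of summary-buffer memory: recent turns verbatim, older turns summarized.
class SummaryBufferMemory:
    def __init__(self, keep_last: int):
        self.keep_last = keep_last
        self.summary = ""
        self.turns = []

    def save_context(self, human: str, ai: str):
        self.turns.append((human, ai))
        while len(self.turns) > self.keep_last:
            h, a = self.turns.pop(0)
            self.summary = summarize(self.summary, h, a)

    def load(self) -> str:
        recent = "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)
        return f"Summary: {self.summary}\n{recent}" if self.summary else recent

m = SummaryBufferMemory(keep_last=1)
m.save_context("Tell me about caching", "Caching avoids repeat LLM calls.")
m.save_context("And memory?", "Memory carries context between turns.")
print(m.load())   # summary of turn 1, verbatim turn 2
```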
5) ConversationEntityMemory
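Entity memory keeps a fact store keyed by entity name, so the prompt can be given only the facts about entities mentioned in the current input. The real class uses an LLM to extract entities; the sketch below uses a naive capitalized-word heuristic (an assumption for illustration) and is not the LangChain class:

```python
# Sketch of entity memory: facts grouped per entity (illustrative).
class EntityMemory:
    def __init__(self):
        self.entities = {}   # entity name -> sentences mentioning it

    def save_context(self, human: str, ai: str):
        for sentence in (human, ai):
            # Naive extraction: treat capitalized words as entity names.
            for word in sentence.replace(".", "").split():
                if word[:1].isupper():
                    self.entities.setdefault(word, []).append(sentence)

    def load(self, entity: str) -> list:
        return self.entities.get(entity, [])

m = EntityMemory()
m.save_context("Alice works at Acme.", "Got it, Alice is at Acme.")
print(m.load("Alice"))   # every sentence that mentioned Alice
```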
6) ConversationKGMemory
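Knowledge-graph memory stores conversation facts as (subject, relation, object) triples and answers "what do we know about X?" queries. The real class extracts triples with an LLM; here they are added directly, and the class is an illustrative stand-in:

```python
# Sketch of knowledge-graph memory: a set of (subject, relation, object) triples (illustrative).
class KGMemory:
    def __init__(self):
        self.triples = set()

    def add_triple(self, subject: str, relation: str, obj: str):
        self.triples.add((subject, relation, obj))

    def about(self, subject: str):
        # All triples whose subject matches, in a stable order.
        return sorted(t for t in self.triples if t[0] == subject)

m = KGMemory()
m.add_triple("Alice", "works_at", "Acme")
m.add_triple("Acme", "located_in", "Seoul")
print(m.about("Alice"))   # [('Alice', 'works_at', 'Acme')]
```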
7) VectorStoreRetrieverMemory
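Retriever memory saves every turn to a vector store and, on each new input, loads only the most similar past turns rather than the most recent ones. The real class uses embeddings; the sketch below scores similarity with Jaccard word overlap (an assumption for a dependency-free sketch) and is not the LangChain class:

```python
# Sketch of retriever memory: recall past turns by similarity, not recency (illustrative).
class RetrieverMemory:
    def __init__(self, top_k: int = 1):
        self.top_k = top_k
        self.docs = []   # each saved turn, rendered as one document

    def save_context(self, human: str, ai: str):
        self.docs.append(f"Human: {human}\nAI: {ai}")

    @staticmethod
    def _sim(a: str, b: str) -> float:
        # Jaccard word overlap as a stand-in for embedding similarity.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def load(self, query: str) -> list:
        ranked = sorted(self.docs, key=lambda d: self._sim(query, d),
                        reverse=True)
        return ranked[: self.top_k]

m = RetrieverMemory(top_k=1)
m.save_context("My favorite color is blue", "Good to know!")
m.save_context("I live in Busan", "Busan is a nice city.")
print(m.load("which color is my favorite"))   # recalls the color turn
```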