ConversationTokenBufferMemory()
Bases: BaseChatMemory
Methods and properties:

asave_context(inputs, outputs): Async save the context of this chain run to memory.
clear(): Clear memory contents.
aclear(): Async clear memory contents.
aload_memory_variables(inputs): Async return key-value pairs given the text input to the chain.
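The async methods above mirror the synchronous API. A minimal sketch of how they fit together, assuming langchain and langchain-openai are installed and an OpenAI API key is configured; ChatOpenAI and the limit value are illustrative choices, not requirements of the class:

```python
import asyncio

from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI  # illustrative; the llm is used for token counting

async def main() -> None:
    memory = ConversationTokenBufferMemory(llm=ChatOpenAI(), max_token_limit=200)

    # Record one exchange and read it back without blocking the event loop.
    await memory.asave_context({"input": "hi"}, {"output": "hello!"})
    print(await memory.aload_memory_variables({}))  # e.g. {'history': 'Human: hi\nAI: hello!'}

    # Drop everything stored so far.
    await memory.aclear()

asyncio.run(main())
```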
buffer: String buffer of memory; a list of messages when return_messages is True, otherwise a single string.
buffer_as_str: Exposes the buffer as a string, for the case where return_messages is False.
buffer_as_messages: Exposes the buffer as a list of messages, for the case where return_messages is True.
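A short sketch of the two access modes, under the same illustrative ChatOpenAI assumption:

```python
from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI  # illustrative model choice

memory = ConversationTokenBufferMemory(
    llm=ChatOpenAI(), max_token_limit=500, return_messages=True
)
memory.save_context(
    {"input": "What is LangChain?"}, {"output": "A framework for building LLM apps."}
)

# With return_messages=True, buffer (and buffer_as_messages) is a list of message objects.
print(memory.buffer_as_messages)  # [HumanMessage(...), AIMessage(...)]

# buffer_as_str renders the same history as one prefixed string.
print(memory.buffer_as_str)  # Human: What is LangChain? / AI: A framework for building LLM apps.
```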
memory_variables: Will always return the list of memory variables (the single configured memory_key, "history" by default).
load_memory_variables(inputs): Return the history buffer under the memory key.
save_context(inputs, outputs): Save context from this conversation to buffer, then prune the oldest messages if the buffer exceeds the token limit.
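For example, with the default memory_key of "history" (same illustrative setup as above):

```python
from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI  # illustrative model choice

memory = ConversationTokenBufferMemory(llm=ChatOpenAI(), max_token_limit=1000)
memory.save_context({"input": "hi"}, {"output": "hello!"})

print(memory.memory_variables)           # ['history']
print(memory.load_memory_variables({}))  # {'history': 'Human: hi\nAI: hello!'}
```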
Conversation chat memory with a token limit. Keeps only the most recent messages in the conversation, pruning the oldest ones so that the total number of tokens in the buffer does not exceed the configured max_token_limit.
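A minimal sketch of this pruning behavior, again assuming ChatOpenAI supplies the token counting; the deliberately tiny limit is only there to make pruning visible:

```python
from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI  # illustrative model choice

memory = ConversationTokenBufferMemory(llm=ChatOpenAI(), max_token_limit=40)

memory.save_context({"input": "My name is Ada."}, {"output": "Nice to meet you, Ada!"})
memory.save_context({"input": "I live in Paris."}, {"output": "Paris is a lovely city."})
memory.save_context({"input": "Where do I live?"}, {"output": "You said you live in Paris."})

# Only the most recent exchanges that still fit under the limit remain;
# the earliest messages have been dropped from the front of the buffer.
print(memory.load_memory_variables({})["history"])
```

Because pruning happens inside save_context, the buffer never grows past the limit between turns.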