AI Memory

CrestApps.Core includes reusable memory services for applications that want an AI assistant to remember durable user facts across sessions.

What the framework provides

AddCoreAIMemory() adds the shared runtime behavior for:

  • memory tool registration
  • safety validation for memory writes
  • semantic memory search orchestration
  • preemptive memory retrieval during orchestration
  • shared indexing and search helpers

```csharp
builder.Services
    .AddCoreAIServices()
    .AddCoreAIOrchestration()
    .AddCoreAIMemory()
    .AddCoreAIOpenAI();
```

What your host must provide

The framework does not assume a single persistence model. A host application is responsible for wiring the storage and search pieces that match its runtime:

  • an IAIMemoryStore implementation for durable memory entries
  • an ISearchIndexProfileStore implementation for index profile lookup
  • one or more keyed IMemoryVectorSearchService implementations
  • options such as AIMemoryOptions, GeneralAIOptions, and ChatInteractionMemoryOptions
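
As a sketch, that host wiring might look like the following. Only the interface and options type names come from the framework; the implementation classes (`SqlAIMemoryStore`, the vector search adapters), the service keys, and the configuration section paths are hypothetical:

```csharp
// Hypothetical host registrations; implementation classes, keys, and
// configuration paths are placeholders, not framework types.
builder.Services.AddSingleton<IAIMemoryStore, SqlAIMemoryStore>();
builder.Services.AddSingleton<ISearchIndexProfileStore, SqlSearchIndexProfileStore>();

// Keyed registrations let a profile select a vector search provider by name.
builder.Services.AddKeyedSingleton<IMemoryVectorSearchService, AzureMemoryVectorSearchService>("Azure");
builder.Services.AddKeyedSingleton<IMemoryVectorSearchService, ElasticMemoryVectorSearchService>("Elastic");

// Bind the option types from configuration.
builder.Services.Configure<AIMemoryOptions>(builder.Configuration.GetSection("AI:Memory"));
builder.Services.Configure<GeneralAIOptions>(builder.Configuration.GetSection("AI:General"));
builder.Services.Configure<ChatInteractionMemoryOptions>(builder.Configuration.GetSection("AI:ChatInteractionMemory"));
```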

Core concepts

Memory entries

A memory entry is a durable user-scoped fact:

| Field | Purpose |
|---|---|
| `UserId` | Identifies the owner of the memory |
| `Name` | Stable key such as `preferred-language` |
| `Description` | Semantic summary used to improve retrieval quality |
| `Content` | The value to retain for later recall |
| `CreatedUtc` / `UpdatedUtc` | Lifecycle timestamps |
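
As a rough illustration of that shape (the framework defines its own entry type; this class is illustrative, not the actual model):

```csharp
// Illustrative only: a user-scoped memory entry with the fields above.
public sealed class MemoryEntrySketch
{
    public string UserId { get; set; }       // owner of the memory
    public string Name { get; set; }         // stable key, e.g. "preferred-language"
    public string Description { get; set; }  // semantic summary used for retrieval
    public string Content { get; set; }      // the value to recall later
    public DateTime CreatedUtc { get; set; } // lifecycle timestamps
    public DateTime UpdatedUtc { get; set; }
}
```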

Safety validation

Before a memory is stored, IAIMemorySafetyService can reject obviously sensitive data such as credentials, connection strings, SSNs, or payment card numbers. The framework ships with the validation pipeline; hosts decide how they surface validation failures.
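
To make "obviously sensitive" concrete, a host-side check might resemble the sketch below. The patterns and method are illustrative only; they are not the framework's actual rules or the `IAIMemorySafetyService` signature:

```csharp
// Hypothetical sensitive-data screening of the kind a safety service performs.
using System.Linq;
using System.Text.RegularExpressions;

public static class MemorySafetySketch
{
    private static readonly Regex[] SensitivePatterns =
    {
        new(@"\b\d{3}-\d{2}-\d{4}\b"),                   // US SSN
        new(@"\b(?:\d[ -]?){13,16}\b"),                  // payment card number
        new(@"(?i)\b(password|pwd|secret|apikey)\s*="),  // credentials / connection strings
    };

    // Returns false when the content matches a known sensitive pattern.
    public static bool IsSafeToStore(string content) =>
        !SensitivePatterns.Any(p => p.IsMatch(content));
}
```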

User scoping

Memory tools operate on the current authenticated user. The framework resolves identity from orchestration scope or the current HTTP context so retrieval stays user-specific.

Key contracts

| Contract | Purpose |
|---|---|
| `IAIMemoryStore` | CRUD and query access for persisted memory entries |
| `IAIMemorySearchService` | Shared semantic retrieval over memory entries |
| `IMemoryVectorSearchService` | Provider-specific vector search adapter |
| `IAIMemorySafetyService` | Validation for writes before they are stored |
| `IPreemptiveRagHandler` | Injects relevant memory context before the model responds |
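
For orientation, a minimal in-memory store might look like the sketch below. The real `IAIMemoryStore` members are defined by CrestApps.Core and may differ, so the entry type and method signatures here are assumptions:

```csharp
// Assumed method shapes; consult the actual IAIMemoryStore definition.
using System.Collections.Concurrent;

public sealed record StoredMemory(string UserId, string Name, string Description, string Content);

public sealed class InMemoryAIMemoryStore // : IAIMemoryStore (assumed)
{
    private readonly ConcurrentDictionary<(string UserId, string Name), StoredMemory> _entries = new();

    public Task SaveAsync(StoredMemory entry)
    {
        // Upsert: save_user_memory both creates and updates entries.
        _entries[(entry.UserId, entry.Name)] = entry;
        return Task.CompletedTask;
    }

    public Task<StoredMemory?> GetAsync(string userId, string name) =>
        Task.FromResult(_entries.TryGetValue((userId, name), out var entry) ? entry : null);

    public Task DeleteAsync(string userId, string name)
    {
        _entries.TryRemove((userId, name), out _);
        return Task.CompletedTask;
    }
}
```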

Built-in tools

When memory is enabled, the orchestration layer can expose these system tools:

| Tool | Purpose |
|---|---|
| `save_user_memory` | Create or update a durable memory |
| `search_user_memories` | Find relevant memories by semantic similarity |
| `list_user_memories` | Enumerate saved memories for the current user |
| `remove_user_memory` | Delete a saved memory by name |

These tools are intended for long-lived facts such as preferences, recurring projects, or roles, not for transient one-off chat state.
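
For example, a model invoking `save_user_memory` might pass arguments along these lines. The exact argument schema is defined by the framework; this payload is purely illustrative:

```json
{
  "name": "preferred-language",
  "description": "The user's preferred programming language for code examples",
  "content": "The user prefers C# and targets .NET 8"
}
```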

Typical flow

  1. Register AddCoreAIMemory() with the rest of the AI runtime.
  2. Provide the store, vector search, and option bindings for your host.
  3. Enable memory-aware orchestration for the profiles or chat surfaces that should use it.
  4. Let the orchestrator decide when to store, search, or inject memory context.

Related guidance:

  • Pair memory with Orchestration when you want automatic recall.
  • Pair memory with Data Sources when you also need document- or index-backed RAG.