# @fractal-synapse/memory-injection-plugin
Coordinator plugin for the multi-plugin memory stack. Not a memory store itself — it sits in front of all memory plugins and compiles their context fragments for injection into the agent.
## What it does
- Registers memory plugins and calls their `getContext()` at the right injection points
- Prioritises and trims fragments to stay within a token budget
- Optionally synthesises fragments into coherent prose via a lightweight LLM call
- Exposes an `onContextInjected` callback for real-time memory viewer integration
## Installation

```bash
npm install @fractal-synapse/memory-injection-plugin
```
## Usage

```ts
import { MemoryInjectionPlugin } from '@fractal-synapse/memory-injection-plugin';

const injection = new MemoryInjectionPlugin({
  totalTokenBudget: 1250,
  modelRegistry, // enables LLM synthesis
  onContextInjected: (event) => {
    // send to memory viewer panel
  },
  logger,
});

// Register memory plugins before initializing the agent
injection.register(workspacePlugin);
injection.register(knowledgePlugin, 300); // custom budget override

// Wire into agent
await injection.initializeAgent(agent);
```
## Config

| Option | Type | Default | Description |
|---|---|---|---|
| `totalTokenBudget` | `number` | `1250` | Token cap per compilation pass. Applied independently to session-start and per-message. |
| `modelRegistry` | `ModelRegistry` | — | When provided, enables LLM synthesis by default. |
| `synthesisModel` | `string` | — | Override model name for synthesis. Defaults to `getExtractionModelName()` → `getDefaultModelName()`. |
| `enableSynthesis` | `boolean` | `true` | Explicitly enable/disable synthesis. |
| `onContextInjected` | `(event) => void` | — | Callback fired after each compilation. Used for the memory viewer. |
| `logger` | `LoggingInterface` | — | Logger from `@fractal-synapse/agent-core`. |
## IMemoryInjectionPlugin interface

Memory plugins implement this interface:

```ts
interface IMemoryInjectionPlugin {
  readonly name: string;
  readonly injectionPoints: ('session-start' | 'per-message')[];
  getContext(request: ContextRequest): Promise<ContextFragment[]>;
}
```
- `session-start` — called once when the agent starts. Compiled blocks go into the system prompt via `systemAppend`.
- `per-message` — called on every user turn. Compiled content is returned as `userMessageAppend`.
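A minimal memory plugin might look like the sketch below. The interface shapes are taken from this README; the plugin itself (`NotesPlugin`) and its fragment content are invented for illustration:

```typescript
// Interface shapes from the README; redeclared here so the sketch is self-contained.
interface ContextRequest {
  agentId: string;
  injectionPoint: 'session-start' | 'per-message';
}

interface ContextFragment {
  content: string;
  tokens: number;
  priority: number;
  label?: string;
  synthesize?: boolean;
}

interface IMemoryInjectionPlugin {
  readonly name: string;
  readonly injectionPoints: ('session-start' | 'per-message')[];
  getContext(request: ContextRequest): Promise<ContextFragment[]>;
}

// Hypothetical plugin that contributes a single session-start fragment.
class NotesPlugin implements IMemoryInjectionPlugin {
  readonly name = 'notes';
  readonly injectionPoints: ('session-start' | 'per-message')[] = ['session-start'];

  async getContext(request: ContextRequest): Promise<ContextFragment[]> {
    // A real plugin would read from its own store, scoped by request.agentId.
    return [
      {
        content: `Standing notes for ${request.agentId}`,
        tokens: 8, // self-reported estimate
        priority: 80, // high priority: cut late
        label: 'Notes',
      },
    ];
  }
}
```

Because `injectionPoints` only lists `session-start`, this plugin is never called on per-message turns.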
### ContextRequest

The request object passed to `getContext()` on every injection-point call:

```ts
interface ContextRequest {
  agentId: string;
  injectionPoint: 'session-start' | 'per-message';
}
```
- `agentId` — The stable agent identity passed to every `getContext()` call. Derived from `agent.getDefinitionName() ?? agent.getId()`. Use this to scope per-agent storage; do not use `agent.getId()` directly, as it is a session-level identifier that changes each conversation.
- `injectionPoint` — Which injection point triggered the call, so plugins can return different fragments depending on context.
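Scoping storage by `agentId` can be as simple as keying a store on it. A minimal sketch, where an in-memory Map stands in for whatever persistent storage a real plugin would use:

```typescript
// Illustrative per-agent store keyed by the stable agentId.
const notesByAgent = new Map<string, string[]>();

function remember(agentId: string, note: string): void {
  const notes = notesByAgent.get(agentId) ?? [];
  notes.push(note);
  notesByAgent.set(agentId, notes);
}

function recall(agentId: string): string[] {
  return notesByAgent.get(agentId) ?? [];
}
```

Because `agentId` is stable across conversations, `recall('planner')` returns the same notes in every session, whereas keying by the session-level `agent.getId()` would lose them each conversation.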
### ContextFragment

```ts
interface ContextFragment {
  content: string;
  tokens: number;       // self-reported estimate
  priority: number;     // 0–100; lower = cut first
  label?: string;       // section label in output
  synthesize?: boolean; // false = inject raw, never synthesised
}
```
## onContextInjected callback

Receives an `InjectedContextEvent` after each compilation:

```ts
interface InjectedContextEvent {
  agentId: string;
  injectionPoint: 'session-start' | 'per-message';
  fragments: Array<{
    pluginName: string;
    content: string;
    tokens: number;
    priority: number;
    included: boolean; // false if trimmed by budget
  }>;
  synthesized: boolean;
  finalContent: string;
  timestamp: string;
}
```
## Synthesis

When `modelRegistry` is provided and synthesis is not disabled, synthesizable fragments are sent in a single LLM call that produces a coherent prose summary targeting ~60% of the token budget. Raw fragments (`synthesize: false`) are always concatenated verbatim and never touch the LLM. If synthesis fails, the plugin silently falls back to concatenation.
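The raw-versus-synthesizable split can be sketched as below. The ~60% target is the README's stated figure; the helper names are invented, and the sketch assumes an omitted `synthesize` flag means the fragment may be synthesised:

```typescript
interface ContextFragment {
  content: string;
  tokens: number;
  priority: number;
  synthesize?: boolean;
}

// Split fragments into those sent to the LLM and those concatenated verbatim.
// Only an explicit `synthesize: false` keeps a fragment out of the LLM call.
function partition(fragments: ContextFragment[]) {
  const raw = fragments.filter((f) => f.synthesize === false);
  const synthesizable = fragments.filter((f) => f.synthesize !== false);
  return { raw, synthesizable };
}

// The prose summary targets roughly 60% of the overall token budget.
function synthesisTarget(totalTokenBudget: number): number {
  return Math.floor(totalTokenBudget * 0.6);
}
```

With the default `totalTokenBudget` of 1250, the synthesis target works out to 750 tokens, leaving headroom for raw fragments and labels within the same budget.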