Discussion about this post

Pawel Jozefiak

The strike-based hardening mechanism is the part worth building on. Rules that automatically tighten after repeated failures don't require a separate logic engine to be useful. I run a simpler version: a corrections file the agent reads at session start, written in plain language, updated every time something goes wrong.

Three months in, the agent rarely repeats the same class of mistake. The 42% failure rate without deterministic enforcement tracks with what I saw before that system existed. What I'm still figuring out: when does a rule that prevents failures also prevent the agent from handling genuinely novel situations it should handle?
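The corrections-file-plus-strikes pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Pawel's actual implementation: the file format, the `STRIKE_LIMIT` threshold, and the function names are all assumptions. The idea is that each failure appends a strike for its mistake class, and past the threshold the guideline hardens into a deterministic rule the agent reads at session start.

```python
import json
from pathlib import Path

STRIKE_LIMIT = 3  # assumed threshold before a guideline hardens into a blocking rule


def record_failure(path: Path, mistake_class: str, note: str) -> dict:
    """Append a strike for a mistake class; harden the rule past the limit."""
    data = json.loads(path.read_text()) if path.exists() else {}
    entry = data.setdefault(mistake_class, {"strikes": 0, "notes": [], "hard": False})
    entry["strikes"] += 1
    entry["notes"].append(note)
    if entry["strikes"] >= STRIKE_LIMIT:
        entry["hard"] = True  # deterministic enforcement, not just a preference
    path.write_text(json.dumps(data, indent=2))
    return entry


def session_preamble(path: Path) -> str:
    """Render corrections as plain-language instructions injected at session start."""
    if not path.exists():
        return ""
    data = json.loads(path.read_text())
    lines = []
    for cls, entry in sorted(data.items()):
        prefix = "HARD RULE" if entry["hard"] else "guideline"
        lines.append(f"[{prefix}] {cls}: {entry['notes'][-1]} ({entry['strikes']} strikes)")
    return "\n".join(lines)
```

The hardening step is the part that sidesteps the novelty question only partially: a hard rule blocks the failure class, but whether it also blocks legitimate novel behavior depends entirely on how narrowly the mistake class is defined.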

Alex

Great framework — the Oracle/Logician split and dynamic context injection resonate deeply. I've been building something very similar in production.

Your point about injecting exact documents when a topic activates (and purging on switch) is exactly what I implemented as TSVC — Topic-Scoped Virtual Context. It treats each conversation topic as a virtual process with its own isolated context, state, and lifecycle. When a topic activates, its full context is injected deterministically. When you switch, it saves state and purges — no stale context bleeding across topics.
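The activate/save/purge lifecycle can be sketched as a small state machine. This is a hedged illustration of the pattern as described, not the actual TSVC code: the class and method names are invented for the example. The invariant it demonstrates is the one named above: the model-visible context always belongs to exactly one topic, and switching away persists that topic's state before purging.

```python
class TopicScopedContext:
    """Minimal sketch of topic-scoped virtual context: each topic owns an
    isolated context; switching saves the outgoing state and purges it
    from the live window before injecting the incoming topic's state."""

    def __init__(self) -> None:
        self._saved: dict[str, list[str]] = {}  # per-topic persisted context
        self.active: str | None = None
        self.context: list[str] = []            # the only context the model sees

    def switch(self, topic: str) -> list[str]:
        if self.active is not None:
            self._saved[self.active] = self.context      # save outgoing state
        self.context = list(self._saved.get(topic, []))  # deterministic injection
        self.active = topic
        return self.context

    def append(self, message: str) -> None:
        self.context.append(message)
```

Switching from topic A to B and back restores exactly what A had, and nothing from B leaks into A's window, which is the "no stale context bleeding" property.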

I wrote it up on dev.to if you're interested: TSVC: Treating AI Conversation Topics as Virtual Processes

The strike-based self-improvement pattern you describe is the piece I'm adding next — two independent sources now pointing me in the same direction. Would love to hear how yours handles cross-topic corrections (i.e., a rule learned in one topic context that should apply globally).
